Sample records for factor analysis linear

  1. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    PubMed

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
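
    Spearman's classical attenuation result quantifies the distortion for correlations: the observed correlation equals the true correlation times the square root of the product of the two reliabilities. A minimal simulation sketch (illustrative numbers, not from the paper):

```python
import numpy as np

# Sketch of how random measurement error attenuates an observed correlation.
# Spearman's formula r_obs = r_true * sqrt(rel_x * rel_y) is illustrated by
# adding independent noise to two correlated variables.
rng = np.random.default_rng(0)
n = 100_000
x_true = rng.normal(size=n)
y_true = 0.8 * x_true + np.sqrt(1 - 0.8**2) * rng.normal(size=n)  # true r ~ 0.8

# Add measurement error so each observed score has reliability ~ 0.5
x_obs = x_true + rng.normal(size=n)  # error variance = true variance
y_obs = y_true + rng.normal(size=n)

r_true = np.corrcoef(x_true, y_true)[0, 1]
r_obs = np.corrcoef(x_obs, y_obs)[0, 1]
# Spearman's correction predicts r_obs ~ 0.8 * sqrt(0.5 * 0.5) = 0.4
print(round(r_true, 2), round(r_obs, 2))
```

    With reliabilities of 0.5 on both scales, a true correlation of 0.8 is observed as roughly 0.4, halving the apparent effect.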

  2. Factor Analysis of Linear Type Traits and Their Relation with Longevity in Brazilian Holstein Cattle

    PubMed Central

    Kern, Elisandra Lurdes; Cobuci, Jaime Araújo; Costa, Cláudio Napolis; Pimentel, Concepta Margaret McManus

    2014-01-01

    In this study we aimed to evaluate the reduction in dimensionality of 20 linear type traits plus final score in 14,943 Holstein cows in Brazil using factor analysis, and indicate their relationship with longevity and 305 d first lactation milk production. Low partial correlations (−0.19 to 0.38), the medium to high Kaiser measure of sampling adequacy (0.79) and the significance of the Bartlett sphericity test (p<0.001) indicated correlations between type traits and the suitability of these data for a factor analysis, after the elimination of seven traits. Two factors had eigenvalues greater than one. The first included width and height of posterior udder, udder texture, udder cleft, loin strength, bone quality and final score. The second included stature, top line, chest width, body depth, fore udder attachment, angularity and final score. The linear regression of the factors on several measures of longevity and 305 d milk production showed that selection considering only the first factor should lead to improvements in longevity and 305 d milk production. PMID:25050015
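
    The "eigenvalues greater than one" retention rule used above (the Kaiser criterion) can be sketched on any correlation matrix; the 3x3 matrix here is illustrative, not the cattle data:

```python
import numpy as np

# Kaiser criterion sketch: count eigenvalues of a correlation matrix that
# exceed 1. The 3x3 matrix below is illustrative, not the study's data.
R = np.array([[1.0, 0.6, 0.1],
              [0.6, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
eigvals = np.linalg.eigvalsh(R)       # ascending order
n_factors = int(np.sum(eigvals > 1.0))
print(n_factors)
```

    Here only the dominant eigenvalue exceeds one, so a single factor would be retained under this rule.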

  3. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  4. Maximizing the Information and Validity of a Linear Composite in the Factor Analysis Model for Continuous Item Responses

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2008-01-01

    This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…

  5. Analysis of Nonlinear Dynamics in Linear Compressors Driven by Linear Motors

    NASA Astrophysics Data System (ADS)

    Chen, Liangyuan

    2018-03-01

    The analysis of the dynamic characteristics of the mechatronic system is of great significance for linear motor design and control. Steady-state nonlinear response characteristics of a linear compressor are investigated theoretically based on linearized and nonlinear models. First, the influencing factors, considering the nonlinear gas force load, were analyzed. Then, a simple linearized model was set up to analyze the influence on the stroke and resonance frequency. Finally, the nonlinear model was set up to analyze the effects of piston mass, spring stiffness and driving force as examples of design parameter variation. The simulation results show that the stroke can be controlled by adjusting the excitation amplitude and frequency, the equilibrium position can be adjusted through the DC input, and, for the most efficient operation, the operating frequency must always equal the resonance frequency.
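
    The resonance condition in the conclusion follows from the mass-spring idealization of the compressor; a sketch with assumed parameter values (not the paper's):

```python
import math

# For a linear compressor idealized as a mass-spring oscillator, operating at
# the mechanical resonance maximizes efficiency. Values below are assumptions
# for illustration, not taken from the paper.
piston_mass = 0.5         # kg (assumed)
spring_stiffness = 2.0e4  # N/m (assumed)
f_res = math.sqrt(spring_stiffness / piston_mass) / (2 * math.pi)
print(round(f_res, 1))    # resonance frequency in Hz
```

    The driving frequency would then be tuned to this value; a stiffer spring or lighter piston raises it.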

  6. Using Linear Regression To Determine the Number of Factors To Retain in Factor Analysis and the Number of Issues To Retain in Delphi Studies and Other Surveys.

    ERIC Educational Resources Information Center

    Jurs, Stephen; And Others

    The scree test and its linear regression technique are reviewed, and results of its use in factor analysis and Delphi data sets are described. The scree test was originally a visual approach for making judgments about eigenvalues, which considered the relationships of the eigenvalues to one another as well as their actual values. The graph that is…
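
    The regression variant of the scree test can be sketched as follows: fit a straight line to the trailing ("scree") eigenvalues and retain components lying clearly above its prediction. The eigenvalue series, scree cut-off and tolerance here are illustrative assumptions:

```python
import numpy as np

# Regression scree sketch: fit a line to the smallest eigenvalues (assumed to
# form the scree), then retain components exceeding the line's prediction.
eigenvalues = np.array([4.2, 2.1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4])  # illustrative
tail = eigenvalues[2:]                  # assume the last six form the scree
idx = np.arange(2, len(eigenvalues))
slope, intercept = np.polyfit(idx, tail, 1)
predicted = slope * np.arange(len(eigenvalues)) + intercept
retained = int(np.sum(eigenvalues > predicted + 0.1))  # 0.1 = tolerance (assumed)
print(retained)
```

    The first two eigenvalues stand well above the fitted scree line, so two factors would be retained.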

  7. Linearization of digital derived rate algorithm for use in linear stability analysis

    NASA Technical Reports Server (NTRS)

    Graham, R. E.; Porada, T. W.

    1985-01-01

    The digital derived rate (DDR) algorithm is used to calculate the rate of rotation of the Centaur upper-stage rocket. The DDR is highly nonlinear algorithm, and classical linear stability analysis of the spacecraft cannot be performed without linearization. The performance of this rate algorithm is characterized by a gain and phase curve that drop off at the same frequency. This characteristic is desirable for many applications. A linearization technique for the DDR algorithm is investigated. The linearization method is described. Examples of the results of the linearization technique are illustrated, and the effects of linearization are described. A linear digital filter may be used as a substitute for performing classical linear stability analyses, while the DDR itself may be used in time response analysis.

  8. Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis

    DTIC Science & Technology

    2005-07-25

    analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery... spatial resolution permits different materials present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance in... in r. In the linear mixture model, r is considered as the linear mixture of m1, m2, …, mP as r = Mα + n (1), where n is included to account for
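
    The linear mixture model in the fragment above, r = Mα + n, can be inverted for the abundances α by least squares; this sketch is unconstrained for brevity, whereas practical unmixing adds nonnegativity and sum-to-one constraints:

```python
import numpy as np

# Linear mixture model sketch: pixel reflectance r = M @ alpha + n, where the
# columns of M are endmember spectra m1..mP and alpha holds the abundances.
# Dimensions and values below are illustrative assumptions.
rng = np.random.default_rng(1)
M = rng.uniform(0.0, 1.0, size=(50, 3))           # 50 bands, 3 endmembers
alpha_true = np.array([0.5, 0.3, 0.2])            # abundances, sum to one
r = M @ alpha_true + 0.001 * rng.normal(size=50)  # small noise term n

# Unconstrained least-squares estimate of the abundances
alpha_hat, *_ = np.linalg.lstsq(M, r, rcond=None)
print(np.round(alpha_hat, 2))
```

    With low noise, the estimate should recover approximately the true abundance vector.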

  9. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    PubMed

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
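
    The paper's QR updating procedure is not reproduced here, but the problem it solves, min ||Ax − b|| subject to Cx = d, can be illustrated with the standard null-space method built on numpy's QR factorization:

```python
import numpy as np

# Null-space method for min ||A x - b||_2 subject to C x = d, via QR of C^T.
# This is a standard textbook approach, not the paper's updating algorithm.
rng = np.random.default_rng(2)
m, n, p = 8, 5, 2
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
C = rng.normal(size=(p, n))
d = rng.normal(size=p)

Q, R = np.linalg.qr(C.T, mode="complete")  # C^T = Q [R1; 0]
R1 = R[:p, :]
y1 = np.linalg.solve(R1.T, d)              # C x = d  =>  R1^T y1 = d
Z = Q[:, p:]                               # orthonormal basis of null(C)
x0 = Q[:, :p] @ y1                         # particular solution with C x0 = d
y2, *_ = np.linalg.lstsq(A @ Z, b - A @ x0, rcond=None)
x = x0 + Z @ y2                            # constrained least-squares solution
print(np.allclose(C @ x, d))
```

    The constraint is satisfied exactly (up to rounding) and the residual is minimized over the null space of C.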

  10. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    PubMed

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed to assess the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and to find a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by an echocardiographic method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing the values of arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the determination coefficient (criterion 1), comparing residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). From the H-group, 47% have pulmonary hypertension completely reversible on obtaining euthyroidism. The factors causing pulmonary hypertension were identified: previously known (level of free thyroxine, pulmonary vascular resistance, cardiac output); new factors identified in this study (pretreatment period, age, systolic blood pressure). According to the four criteria and to clinical judgment, we consider that the polynomial model (graphically parabola-type) is better than the linear one. The better model showing the functional relation between the pulmonary hypertension in hyperthyroidism and the factors identified in this study is given by a polynomial equation of second
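
    The model comparison described can be sketched for criterion 3, the AIC, by fitting first- and second-degree polynomials to synthetic curved data (illustrative values, not the clinical data):

```python
import numpy as np

# Compare a linear and a quadratic regression model with AIC, mirroring
# criterion 3 of the study. Synthetic, illustrative data only.
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 60)
y = 2.0 + 0.5 * x + 0.3 * x**2 + rng.normal(scale=1.0, size=x.size)

def aic(y, y_hat, k):
    # Gaussian log-likelihood form: n*log(RSS/n) + 2k, k = number of parameters
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

lin = np.polyval(np.polyfit(x, y, 1), x)
quad = np.polyval(np.polyfit(x, y, 2), x)
aic_lin, aic_quad = aic(y, lin, 2), aic(y, quad, 3)
print(aic_quad < aic_lin)  # the quadratic model wins on curved data
```

    The lower AIC of the quadratic fit mirrors the study's conclusion that the polynomial model outperforms the linear one on curved relationships.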

  11. CFORM- LINEAR CONTROL SYSTEM DESIGN AND ANALYSIS: CLOSED FORM SOLUTION AND TRANSIENT RESPONSE OF THE LINEAR DIFFERENTIAL EQUATION

    NASA Technical Reports Server (NTRS)

    Jamison, J. W.

    1994-01-01

    CFORM was developed by the Kennedy Space Center Robotics Lab to assist in linear control system design and analysis using closed form and transient response mechanisms. The program computes the closed form solution and transient response of a linear (constant coefficient) differential equation. CFORM allows a choice of three input functions: the Unit Step (a unit change in displacement); the Ramp function (step velocity); and the Parabolic function (step acceleration). It is only accurate in cases where the differential equation has distinct roots, and does not handle the case for roots at the origin (s=0). Initial conditions must be zero. Differential equations may be input to CFORM in two forms - polynomial and product of factors. In some linear control analyses, it may be more appropriate to use a related program, Linear Control System Design and Analysis (KSC-11376), which uses root locus and frequency response methods. CFORM was written in VAX FORTRAN for a VAX 11/780 under VAX VMS 4.7. It has a central memory requirement of 30K. CFORM was developed in 1987.
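
    The closed-form solutions CFORM computes can be illustrated for a hypothetical equation with distinct roots, y'' + 3y' + 2y = u(t) with zero initial conditions, whose partial-fraction expansion gives the response directly:

```python
import numpy as np

# Closed-form unit-step response of y'' + 3y' + 2y = u(t), distinct roots
# s = -1, -2, zero initial conditions. Partial fractions of 1/(s(s+1)(s+2))
# give 0.5/s - 1/(s+1) + 0.5/(s+2). Illustrative equation, not from CFORM docs.
t = np.linspace(0.0, 10.0, 200)
y = 0.5 - np.exp(-t) + 0.5 * np.exp(-2.0 * t)
print(round(float(y[0]), 6), round(float(y[-1]), 3))  # starts at 0, settles near 0.5
```

    The response starts at zero (matching the zero initial conditions) and settles at the steady-state gain 1/2.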

  12. Advanced analysis technique for the evaluation of linear alternators and linear motors

    NASA Technical Reports Server (NTRS)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  13. Linear regression analysis: part 14 of a series on evaluation of scientific publications.

    PubMed

    Schneider, Astrid; Hommel, Gerhard; Blettner, Maria

    2010-11-01

    Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.

  14. Operator Factorization and the Solution of Second-Order Linear Ordinary Differential Equations

    ERIC Educational Resources Information Center

    Robin, W.

    2007-01-01

    The theory and application of second-order linear ordinary differential equations is reviewed from the standpoint of the operator factorization approach to the solution of ordinary differential equations (ODE). Using the operator factorization approach, the general second-order linear ODE is solved, exactly, in quadratures and the resulting…

  15. Common pitfalls in statistical analysis: Linear regression analysis

    PubMed Central

    Aggarwal, Rakesh; Ranganathan, Priya

    2017-01-01

    In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis. PMID:28447022

  16. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

    PubMed

    Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-08-01

    This work is devoted to the applications of multiple linear regression (MLR), a multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON software and selected by the inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR and nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using a test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation.

  17. Orthogonal sparse linear discriminant analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zhonghua; Liu, Gang; Pu, Jiexin; Wang, Xiaohong; Wang, Haijun

    2018-03-01

    Linear discriminant analysis (LDA) is a linear feature extraction approach that has received much attention. Building on LDA, researchers have proposed many variant versions, yet these variants do not resolve LDA's inherent problems. The major disadvantages of classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while the local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k nearest neighbour graph is first constructed to preserve the locality discriminant information of sample points. Then, an L2,1-norm constraint on the projection matrix is used as the loss function, which makes the proposed method robust to outliers in the data. Extensive experiments have been performed on several standard public image databases, and the results demonstrate the effectiveness of the proposed OSLDA algorithm.
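
    The classical two-class Fisher LDA that OSLDA extends can be sketched in a few lines (this is the baseline method, not the paper's algorithm):

```python
import numpy as np

# Classical two-class Fisher LDA (the baseline that OSLDA extends; not the
# paper's algorithm). The direction w maximizes between-class separation
# relative to within-class scatter. Synthetic, illustrative data.
rng = np.random.default_rng(4)
X1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
X2 = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = np.cov(X1.T) + np.cov(X2.T)   # pooled within-class scatter
w = np.linalg.solve(Sw, m1 - m2)   # Fisher discriminant direction
p1, p2 = X1 @ w, X2 @ w            # 1-D projections of the two classes
print(p1.mean() > p2.mean())       # classes separate along w
```

    Thresholding the projections at the midpoint between the class means yields a near-perfect classifier on this well-separated example.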

  18. Discriminative analysis of non-linear brain connectivity for leukoaraiosis with resting-state fMRI

    NASA Astrophysics Data System (ADS)

    Lai, Youzhi; Xu, Lele; Yao, Li; Wu, Xia

    2015-03-01

    Leukoaraiosis (LA) describes diffuse white matter abnormalities on CT or MR brain scans, often seen in the normal elderly and in association with vascular risk factors such as hypertension, or in the context of cognitive impairment. The mechanism of cognitive dysfunction is still unclear. Recent clinical studies have revealed that the severity of LA does not correspond to the cognitive level, and functional connectivity analysis is an appropriate method to detect the relation between LA and cognitive decline. However, existing functional connectivity analyses of LA have mostly been limited to linear associations. In this investigation, a novel measure utilizing the extended maximal information coefficient (eMIC) was applied to construct non-linear functional connectivity in 44 LA subjects (9 dementia, 25 mild cognitive impairment (MCI) and 10 cognitively normal (CN)). The strength of non-linear functional connections for the first 1% of discriminative power increased in MCI compared with CN and dementia, the opposite of the linear counterpart. Further functional network analysis revealed that the changes in non-linear and linear connectivity have similar, but not identical, spatial distributions in the human brain. In multivariate pattern analysis with multiple classifiers, the non-linear functional connectivity mostly identified dementia, MCI and CN from LA with a higher accuracy rate than the linear measure. Our findings reveal that non-linear functional connectivity provides useful discriminative power in the classification of LA, and that the spatially distributed changes between the non-linear and linear measures may indicate the underlying mechanism of cognitive dysfunction in LA.

  19. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    PubMed Central

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-01-01

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed-model (WGLIMM) was used to deal with the hierarchical structure of the data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1% while the prevalence in four residence groups - urban, second city, suburban, and town/rural, were 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of WGLIMM analysis showed that there was a residence effect (p<0.0001) and residence groups had significant interactions with gender, age group, education level, and employment status (p<0.05). Multiple logistic regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence regions, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in second city and suburban. Infrequent binge drinking was associated with CRC screening in urban and suburban; while current smoking was a protective factor in urban and town/rural groups. Conclusions: Mixed models are useful to deal with the clustered survey data. Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening and the associations were affected by living

  1. Old and New Ideas for Data Screening and Assumption Testing for Exploratory and Confirmatory Factor Analysis

    PubMed Central

    Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip

    2011-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561

  2. A Factorization Approach to the Linear Regulator Quadratic Cost Problem

    NASA Technical Reports Server (NTRS)

    Milman, M. H.

    1985-01-01

    A factorization approach to the linear regulator quadratic cost problem is developed. This approach makes some new connections between optimal control, factorization, Riccati equations and certain Wiener-Hopf operator equations. Applications of the theory to systems describable by evolution equations in Hilbert space and differential delay equations in Euclidean space are presented.

  3. Linear model analysis of the influencing factors of boar longevity in Southern China.

    PubMed

    Wang, Chao; Li, Jia-Lian; Wei, Hong-Kui; Zhou, Yuan-Fei; Jiang, Si-Wen; Peng, Jian

    2017-04-15

    This study aimed to investigate the factors influencing the boar herd life month (BHLM) in Southern China. A total of 1630 records of culled boars from nine artificial insemination centers were collected from January 2013 to May 2016. A logistic regression model and two linear models were used to analyze the effects of breed, housing type, age at herd entry, and seed stock herd on boar removal reason and BHLM, respectively. Boar breed and age at herd entry had significant effects on the removal reasons (P < 0.001). Results of the two linear models (with or without removal reason included) showed that boars raised individually in stalls exhibited shorter BHLM than those raised in pens (P < 0.001). Boars aged 5 and 6 months at herd entry (44.6%) showed shorter BHLM than those aged 8 and 9 months at herd entry (P < 0.05). Approximately 95% of boars were culled for reasons other than old age, and the BHLM of these boars was at least 12.3 months longer than that of boars culled for other reasons (P < 0.001). In conclusion, abnormal removal of boars is a serious issue that negatively affects BHLM. Boar removal reason and BHLM can be affected by breed, housing type, and seed stock herd. Importantly, 8 months is suggested as the most suitable age for boar introduction.

  4. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
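
    Scheme (i), effective-sample-size weighting of Z-scores, can be sketched with illustrative study numbers; the N_eff formula 4/(1/N_cases + 1/N_controls) is the usual convention for handling case-control imbalance:

```python
import numpy as np

# Effective-sample-size weighted Z-score meta-analysis (scheme (i) in the
# abstract). Study counts and Z-scores below are illustrative assumptions.
z = np.array([2.1, 1.4, 2.8])                    # per-study Z-scores
n_cases = np.array([500.0, 2000.0, 800.0])
n_controls = np.array([1500.0, 2000.0, 8000.0])
n_eff = 4.0 / (1.0 / n_cases + 1.0 / n_controls)  # effective sample sizes
w = np.sqrt(n_eff)
z_meta = np.sum(w * z) / np.sqrt(np.sum(w ** 2))  # combined Z-score
print(round(float(z_meta), 2))
```

    Note how the heavily imbalanced third study (800 cases vs 8000 controls) contributes with an effective N of about 2909 rather than its raw total of 8800.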

  5. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    NASA Astrophysics Data System (ADS)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The obtained spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the used literature methods.
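
    Plain NMF (not the paper's linear-quadratic variant) can be sketched with the Lee-Seung multiplicative updates for the Frobenius-norm objective:

```python
import numpy as np

# Minimal NMF sketch: Lee-Seung multiplicative updates minimizing
# ||V - W @ H||_F^2. Illustrates plain NMF only, not the paper's
# linear-quadratic variant. Sizes and rank are illustrative assumptions.
rng = np.random.default_rng(5)
V = rng.uniform(0.1, 1.0, size=(20, 30))   # nonnegative data matrix
r = 4                                      # factorization rank (assumed)
W = rng.uniform(0.1, 1.0, size=(20, r))
H = rng.uniform(0.1, 1.0, size=(r, 30))

err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # epsilon guards against /0
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
err = np.linalg.norm(V - W @ H)
print(err < err0)  # reconstruction error decreases monotonically
```

    Because the updates are multiplicative, W and H stay nonnegative throughout, which is the defining constraint of NMF.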

  6. Factorizing the factorization - a spectral-element solver for elliptic equations with linear operation count

    NASA Astrophysics Data System (ADS)

    Huismann, Immo; Stiller, Jörg; Fröhlich, Jochen

    2017-10-01

    The paper proposes a novel factorization technique for static condensation of a spectral-element discretization matrix that yields a linear operation count of just 13N multiplications for the residual evaluation, where N is the total number of unknowns. In comparison to previous work it saves a factor larger than 3 and outpaces unfactored variants for all polynomial degrees. Using the new technique as a building block for a preconditioned conjugate gradient method yields linear scaling of the runtime with N which is demonstrated for polynomial degrees from 2 to 32. This makes the spectral-element method cost effective even for low polynomial degrees. Moreover, the dependence of the iterative solution on the element aspect ratio is addressed, showing only a slight increase in the number of iterations for aspect ratios up to 128. Hence, the solver is very robust for practical applications.

  7. Analysis of linear energy transfers and quality factors of charged particles produced by spontaneous fission neutrons from 252Cf and 244Pu in the human body.

    PubMed

    Endo, Akira; Sato, Tatsuhiko

    2013-04-01

    Absorbed doses, linear energy transfers (LETs) and quality factors of secondary charged particles in organs and tissues, generated via the interactions of the spontaneous fission neutrons from (252)Cf and (244)Pu within the human body, were studied using the Particle and Heavy Ion Transport Code System (PHITS) coupled with the ICRP Reference Phantom. Both the absorbed doses and the quality factors in target organs generally decrease with increasing distance from the source organ. The analysis of LET distributions of secondary charged particles led to the identification of the relationship between LET spectra and target-source organ locations. A comparison between human body-averaged mean quality factors and fluence-averaged radiation weighting factors showed that the current numerical conventions for the radiation weighting factors of neutrons, updated in ICRP103, and the quality factors for internal exposure are valid.
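    The quality factors discussed here come from the ICRP Q(L) relationship. As an illustrative sketch (the piecewise Q(L) below is the standard ICRP Publication 60 definition, not a formula quoted in the abstract), the per-particle quality factor and a dose-weighted mean over a discretized LET spectrum look like this:

```python
import math

def quality_factor(let_kev_um):
    # ICRP Publication 60 quality factor Q(L); L is the unrestricted LET
    # in keV/µm of water
    if let_kev_um < 10.0:
        return 1.0
    if let_kev_um <= 100.0:
        return 0.32 * let_kev_um - 2.2
    return 300.0 / math.sqrt(let_kev_um)

def mean_quality_factor(lets, absorbed_doses):
    # Dose-weighted mean quality factor over a discretized LET spectrum,
    # i.e. sum(Q(L_i) * D_i) / sum(D_i)
    total = sum(absorbed_doses)
    return sum(quality_factor(l) * d for l, d in zip(lets, absorbed_doses)) / total
```

    The dose-weighted mean is what the abstract compares against the fluence-averaged radiation weighting factors of ICRP 103.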

  8. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity

  9. Linear mixed-effects modeling approach to FMRI group analysis.

    PubMed

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity
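    For intuition about the ICC values both records mention: in a balanced one-way design, the variance components an LME with a random subject intercept would estimate reduce to the classical one-way random-effects ANOVA estimator. A minimal numpy-only sketch (illustrative; not the authors' implementation):

```python
import numpy as np

def icc_oneway(data):
    # data: (n_subjects, k_repeats). ICC(1) from one-way random-effects ANOVA;
    # for balanced data this equals sigma_b^2 / (sigma_b^2 + sigma_w^2), the
    # quantity an LME with a random subject intercept would deliver
    n, k = data.shape
    row_means = data.mean(axis=1)
    msb = k * ((row_means - data.mean()) ** 2).sum() / (n - 1)      # between-subject MS
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)
```

    Subjects whose repeated measurements agree perfectly give ICC = 1; data with no subject effect give ICC at or below zero.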

  10. Lattice Boltzmann methods for global linear instability analysis

    NASA Astrophysics Data System (ADS)

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2017-12-01

    Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time (SRT) and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flow and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement in order to make the proposed methodology competitive with established approaches for global instability analysis are discussed.

  11. Linear microbunching analysis for recirculation machines

    DOE PAGES

    Tsai, C. -Y.; Douglas, D.; Li, R.; ...

    2016-11-28

    Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  12. Linearized unsteady jet analysis

    NASA Technical Reports Server (NTRS)

    Viets, H.; Piatt, M.

    1979-01-01

    The introduction of a time dependency into a jet flow to change the rate at which it mixes with a coflowing stream or ambient condition is investigated. The advantages and disadvantages of the unsteady flow are discussed in terms of steady state mass and momentum transfer. A linear system which is not limited by frequency constraints and evolves through a simplification of the equations of motion is presented for the analysis of the unsteady flow field generated by the time dependent jet.

  13. An exploratory analysis of treatment completion and client and organizational factors using hierarchical linear modeling.

    PubMed

    Woodward, Albert; Das, Abhik; Raskin, Ira E; Morgan-Lopez, Antonio A

    2006-11-01

    Data from the Alcohol and Drug Services Study (ADSS) are used to analyze the structure and operation of the substance abuse treatment industry in the United States. Published literature contains little systematic empirical analysis of the interaction between organizational characteristics and treatment outcomes. This paper addresses that deficit. It develops and tests a hierarchical linear model (HLM) to address questions about the empirical relationship between treatment inputs (industry costs, types and use of counseling and medical personnel, diagnosis mix, patient demographics, and the nature and level of services used in substance abuse treatment), and patient outcomes (retention and treatment completion rates). The paper adds to the literature by demonstrating a direct and statistically significant link between treatment completion and the organizational and staffing structure of the treatment setting. Related reimbursement issues, questions for future analysis, and limitations of the ADSS for this analysis are discussed.

  14. MTF measurement and analysis of linear array HgCdTe infrared detectors

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Lin, Chun; Chen, Honglei; Sun, Changhong; Lin, Jiamu; Wang, Xi

    2018-01-01

    The slanted-edge technique is the main method for measuring detector MTF; however, it is commonly applied to planar array detectors. In this paper the authors present a modified slanted-edge method to measure the MTF of linear array HgCdTe detectors. Crosstalk is one of the major factors that degrade the MTF of such infrared detectors. This paper presents an ion-implantation guard-ring structure designed to effectively absorb photo-carriers that may laterally diffuse between adjacent pixels, thereby suppressing crosstalk. Measurement and analysis of the MTF of the linear array detectors with and without a guard ring were carried out. The experimental results indicated that the ion-implantation guard-ring structure effectively suppresses crosstalk and increases the MTF value.
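    The core of any slanted-edge pipeline, once the supersampled edge-spread function (ESF) has been assembled from the tilted-edge image, is a derivative followed by a Fourier transform. A minimal sketch (illustrative, not the authors' modified method):

```python
import numpy as np

def mtf_from_esf(esf):
    # Slanted-edge pipeline: differentiate the edge-spread function to get the
    # line-spread function (LSF), then take the DFT magnitude, normalized at DC
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```

    An ideal step edge gives a delta-function LSF and hence MTF = 1 at every frequency; blur, or crosstalk between adjacent pixels as discussed above, widens the LSF and pulls the high-frequency MTF down.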

  15. Calibrating Nonlinear Soil Material Properties for Seismic Analysis Using Soil Material Properties Intended for Linear Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Seismic analysis of nuclear structures is routinely performed using guidance provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998).” This document, which is currently under revision, provides detailed guidance on linear seismic soil-structure-interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are considered. For equivalent material properties, the shear stress at peak shear strain and energy absorbed per cycle should match when comparing the linear and nonlinear model hysteresis loops. Consequently, nonlinear material properties are selected based on these criteria.

  16. State-variable analysis of non-linear circuits with a desk computer

    NASA Technical Reports Server (NTRS)

    Cohen, E.

    1981-01-01

    State-variable analysis was used to analyze the transient performance of non-linear circuits on a desk-top computer. The non-linearities considered were not restricted to any particular circuit element. All that is required for the analysis is that the relationship defining each non-linearity be known in terms of points on a curve.
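    The "points on a curve" idea can be sketched as follows: a capacitor discharging through a resistor whose i-v characteristic is known only as sampled points, integrated with classical Runge-Kutta. The circuit, values, and function names are hypothetical, chosen so an ohmic table must reproduce exponential decay.

```python
import math
import numpy as np

def simulate_rc(v0, cap, iv_points, t_end, dt):
    # State equation C dv/dt = -i_R(v); the non-linearity is supplied only as
    # (v, i) points on a curve and evaluated by linear interpolation
    v_tab, i_tab = (np.array(c) for c in zip(*iv_points))
    f = lambda v: -np.interp(v, v_tab, i_tab) / cap
    v, history = v0, [v0]
    for _ in range(round(t_end / dt)):
        # classical 4th-order Runge-Kutta step
        k1 = f(v); k2 = f(v + 0.5 * dt * k1)
        k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
        v += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        history.append(v)
    return history

# Ohmic sanity check: a 1-ohm "curve" must reproduce v(t) = v0 * exp(-t/RC)
trace = simulate_rc(v0=1.0, cap=1.0, iv_points=[(-10.0, -10.0), (10.0, 10.0)],
                    t_end=1.0, dt=0.01)
```

    Swapping in a genuinely nonlinear table (e.g. a diode-like curve) changes nothing in the solver, which is exactly the point the abstract makes.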

  17. A Comparison of Measurement Equivalence Methods Based on Confirmatory Factor Analysis and Item Response Theory.

    ERIC Educational Resources Information Center

    Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.

    Current interest in the assessment of measurement equivalence emphasizes two methods of analysis, linear and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item function or IRT-Based…

  18. Evaluation of beach cleanup effects using linear system analysis.

    PubMed

    Kataoka, Tomoya; Hinata, Hirofumi

    2015-02-15

    We established a method for evaluating beach cleanup effects (BCEs) based on a linear system analysis, and investigated factors determining BCEs. Here we focus on two BCEs: decreasing the total mass of toxic metals that could leach into a beach from marine plastics and preventing the fragmentation of marine plastics on the beach. Both BCEs depend strongly on the average residence time of marine plastics on the beach (τ(r)) and the period of temporal variability of the input flux of marine plastics (T). Cleanups on the beach where τ(r) is longer than T are more effective than those where τ(r) is shorter than T. In addition, both BCEs are the highest near the time when the remnants of plastics reach the local maximum (peak time). Therefore, it is crucial to understand the following three factors for effective cleanups: the average residence time, the plastic input period and the peak time. Copyright © 2014 Elsevier Ltd. All rights reserved.
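    An illustrative reading of the linear-system view (a sketch under assumed forms, not the authors' exact formulation): remnant mass m obeys dm/dt = F(t) − m/τr, with a sinusoidal input flux of period T. When τr ≫ T the beach smooths the input and the remnant mass hovers near τr times the mean flux, which is why cleanup timing relative to the peak matters.

```python
import math

def remnant_mass(tau_r, period, t_end, dt=0.001):
    # Linear-system model of beach litter: dm/dt = F(t) - m/tau_r, with an
    # assumed input flux F(t) = 1 + sin(2*pi*t/period); forward-Euler in time
    m, t, series = 0.0, 0.0, [(0.0, 0.0)]
    for _ in range(round(t_end / dt)):
        flux = 1.0 + math.sin(2.0 * math.pi * t / period)
        m += dt * (flux - m / tau_r)
        t += dt
        series.append((t, m))
    return series

# Residence time long relative to the input period: the response settles
# near tau_r * (mean flux) with only a small ripple
trace = remnant_mass(tau_r=2.0, period=1.0, t_end=40.0)
```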

  19. CFD analysis of linear compressors considering load conditions

    NASA Astrophysics Data System (ADS)

    Bae, Sanghyun; Oh, Wonsik

    2017-08-01

    This paper is a study on computational fluid dynamics (CFD) analysis of a linear compressor considering load conditions. In conventional CFD analyses of the linear compressor, the load condition was not considered in the behaviour of the piston; in some papers, the piston behaviour is assumed to be sinusoidal motion provided by a user-defined function (UDF). In a reciprocating compressor the stroke of the piston is restrained by the rod, while the stroke of a linear compressor is not restrained and changes depending on the load condition. The greater the pressure difference between the discharge refrigerant and the suction refrigerant, the more the centre point of the stroke is pushed backward, and the behaviour of the piston is not a complete sine wave. For this reason, when the load condition changes in a CFD analysis of the linear compressor, the ANSYS code, or even the modelling itself, may have to be changed. In addition, a separate analysis or calculation is required to find a stroke that meets the load condition, which may contain errors. In this study, the coupled mechanical and electrical equations are solved using the UDF, and the behaviour of the piston is solved considering the pressure difference across the piston. Using this method, the stroke of the piston for the motor specification of the analytical model can be calculated according to the input voltage, and the piston behaviour can be realized considering the thrust due to the pressure difference.

  20. On the null distribution of Bayes factors in linear regression

    USDA-ARS?s Scientific Manuscript database

    We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...

  1. Numerical analysis method for linear induction machines.

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
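    The final solve step described here amounts to assembling an impedance matrix from the induced-voltage coefficients plus the mesh resistances and solving the resulting simultaneous equations. A sketch with an entirely hypothetical 4-point mesh (coefficient values invented for illustration):

```python
import numpy as np

# Hypothetical 4-point mesh: mutual[i, j] is the coefficient coupling unit
# current at mesh point j to the voltage induced at mesh point i (henries)
omega = 2.0 * np.pi * 60.0
mutual = 1e-4 * np.array([[4.0, 1.0, 0.5, 0.2],
                          [1.0, 4.0, 1.0, 0.5],
                          [0.5, 1.0, 4.0, 1.0],
                          [0.2, 0.5, 1.0, 4.0]])
resistance = np.diag([0.1, 0.1, 0.1, 0.1])

# Combining the induced-voltage coefficients with the mesh resistances yields
# the simultaneous equations Z @ I = V for the unknown mesh currents
Z = resistance + 1j * omega * mutual
v_applied = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
currents = np.linalg.solve(Z, v_applied)
```

    Arbitrary phase currents or voltages, and point-to-point conductivity variation, enter only through the entries of `Z` and `v_applied`, leaving the solve itself unchanged.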

  2. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    PubMed

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
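    The variance algebra behind the "underestimated standard error" point is simple: for n patients each contributing 2 eyes with within-patient correlation ρ, the grand mean has variance σ²(1 + ρ)/(2n), whereas treating the 2n eyes as independent gives σ²/(2n). A sketch of both formulas (illustrative, not the tutorial's SAS code):

```python
import math

def se_paired(sigma2, rho, n_patients):
    # Correct SE of the grand mean for n patients x 2 eyes with inter-eye
    # correlation rho: Var = sigma^2 * (1 + rho) / (2 * n)
    return math.sqrt(sigma2 * (1.0 + rho) / (2.0 * n_patients))

def se_naive(sigma2, n_patients):
    # Treats the 2n eyes as independent observations; too small when rho > 0
    return math.sqrt(sigma2 / (2.0 * n_patients))
```

    With ρ > 0 the naive SE is always smaller than the correct one, which is exactly why standard regression produced inflated significance in the visual-field example.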

  3. Linearized spectrum correlation analysis for line emission measurements

    NASA Astrophysics Data System (ADS)

    Nishizawa, T.; Nornberg, M. D.; Den Hartog, D. J.; Sarff, J. S.

    2017-08-01

    A new spectral analysis method, Linearized Spectrum Correlation Analysis (LSCA), for charge exchange and passive ion Doppler spectroscopy is introduced to provide a means of measuring fast spectral line shape changes associated with ion-scale micro-instabilities. This analysis method is designed to resolve the fluctuations in the emission line shape from a stationary ion-scale wave. The method linearizes the fluctuations around a time-averaged line shape (e.g., Gaussian) and subdivides the spectral output channels into two sets to reduce contributions from uncorrelated fluctuations without averaging over the fast time dynamics. In principle, small fluctuations in the parameters used for a line shape model can be measured by evaluating the cross spectrum between different channel groupings to isolate a particular fluctuating quantity. High-frequency ion velocity measurements (100-200 kHz) were made by using this method. We also conducted simulations to compare LSCA with a moment analysis technique under a low photon count condition. Both experimental and synthetic measurements demonstrate the effectiveness of LSCA.

  4. Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje

    2009-01-01

    This paper presents the application of Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing stability of adaptive control models, without linearizing the adaptive laws. Metrics-driven adaptive control introduces a notion that adaptation should be driven by some stability metrics to achieve robustness. By the application of bounded linear stability analysis method the adaptive gain is adjusted during the adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a second order system that represents a pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of analysis time-window for BLSA is also evaluated in order to meet the stability margin criteria.

  5. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  6. Linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2011-06-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary; we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.
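    A worked instance of the method for a concrete equation of my choosing: y'' − 3y' + 2y = f(t) factors as (D − 1)(D − 2)y = f, its impulsive response is g(t) = (e^{2t} − e^{t})/(2 − 1) with g(0) = 0, g'(0) = 1, and a particular solution is the convolution of g with f. The sketch checks the convolution numerically by midpoint quadrature.

```python
import math

def impulsive_response(a, b):
    # For (D - a)(D - b) y = f with a != b, the impulsive response is
    # g(t) = (e^{bt} - e^{at}) / (b - a), satisfying g(0) = 0, g'(0) = 1
    return lambda t: (math.exp(b * t) - math.exp(a * t)) / (b - a)

def particular_solution(a, b, f, t, n=4000):
    # y_p(t) = integral_0^t g(t - s) f(s) ds, evaluated by midpoint quadrature
    g = impulsive_response(a, b)
    h = t / n
    return h * sum(g(t - (k + 0.5) * h) * f((k + 0.5) * h) for k in range(n))

# y'' - 3y' + 2y = 2 with zero initial conditions: exact answer 1 - 2e^t + e^{2t}
y1 = particular_solution(1.0, 2.0, lambda s: 2.0, 1.0)
```

    The attraction of the factorization route is visible here: g comes straight from the roots of the characteristic polynomial, with no distribution theory or Laplace transform needed.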

  7. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data

    PubMed Central

    Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-01-01

    Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741

  8. Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    ERIC Educational Resources Information Center

    Camporesi, Roberto

    2011-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary; we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…

  9. Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Boskovic, Jovan D.

    2008-01-01

    This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.

  10. Employment of CB models for non-linear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.

    1990-01-01

    The non-linear dynamic analysis of large structures is always very time-, effort- and CPU-consuming. Whenever possible, reducing the size of the mathematical model involved is of main importance to speed up the computational procedures. Such reduction can be performed for the part of the structure which behaves linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.
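    A minimal numpy-only sketch of the Craig-Bampton reduction itself (the general recipe, not the ASKA implementation): interface DOFs are kept physically via static constraint modes, and the interior is represented by a truncated set of fixed-interface normal modes.

```python
import numpy as np

def craig_bampton(K, M, boundary, n_modes):
    # Split DOFs into boundary (kept physically) and interior (condensed)
    n = K.shape[0]
    interior = [d for d in range(n) if d not in boundary]
    Kii, Kib = K[np.ix_(interior, interior)], K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    psi = -np.linalg.solve(Kii, Kib)          # static constraint modes
    # Fixed-interface normal modes: lowest eigenvectors of Mii^-1 Kii
    lam, vec = np.linalg.eig(np.linalg.solve(Mii, Kii))
    phi = vec[:, np.argsort(lam.real)[:n_modes]].real
    nb = len(boundary)
    T = np.zeros((n, nb + n_modes))           # Craig-Bampton transformation
    T[np.ix_(boundary, range(nb))] = np.eye(nb)
    T[np.ix_(interior, range(nb))] = psi
    T[np.ix_(interior, range(nb, nb + n_modes))] = phi
    return T.T @ K @ T, T.T @ M @ T, T
```

    Truncating `n_modes` is where the "easy selection of the relevant modes" happens: interface behavior is preserved exactly, and only interior dynamics above the retained modes are discarded.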

  11. Method for factor analysis of GC/MS data

    DOEpatents

    Van Benthem, Mark H; Kotula, Paul G; Keenan, Michael R

    2012-09-11

    The method of the present invention provides a fast, robust, and automated multivariate statistical analysis of gas chromatography/mass spectroscopy (GC/MS) data sets. The method can involve systematic elimination of undesired, saturated peak masses to yield data that follow a linear, additive model. The cleaned data can then be subjected to a combination of PCA and orthogonal factor rotation followed by refinement with MCR-ALS to yield highly interpretable results.
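    The MCR-ALS refinement stage can be sketched as plain alternating least squares with nonnegativity imposed by clipping; this is a generic illustration of the technique named in the abstract, not the patented pipeline (which also includes saturated-mass elimination, PCA, and orthogonal factor rotation).

```python
import numpy as np

def mcr_als(X, n_components, n_iter=200, seed=0):
    # Multivariate curve resolution by alternating least squares: X ≈ C @ S,
    # with nonnegativity enforced by clipping after each least-squares step
    rng = np.random.default_rng(seed)
    S = rng.random((n_components, X.shape[1]))
    for _ in range(n_iter):
        C = np.clip(X @ np.linalg.pinv(S), 0.0, None)   # concentration profiles
        S = np.clip(np.linalg.pinv(C) @ X, 0.0, None)   # component mass spectra
    return C, S
```

    Nonnegativity is the key constraint for interpretability here: elution profiles and mass spectra are both physically nonnegative, so X must follow a linear, additive model once saturated masses are removed.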

  12. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    PubMed

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  13. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1997-01-01

A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic responses of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite volume analysis to a far-field eigenanalysis, is also described. The linearized aerodynamic and numerical models have been implemented into a three-dimensional linearized unsteady flow code, called LINFLUX. This code has been applied to selected benchmark unsteady subsonic flows to establish its accuracy and to demonstrate its current capabilities. The unsteady flows considered have been chosen to allow convenient comparisons between the LINFLUX results and those of well-known two-dimensional unsteady flow codes. Detailed numerical results for a helical fan and a three-dimensional version of the 10th Standard Cascade indicate that important progress has been made towards the development of a reliable and useful three-dimensional prediction capability that can be used in aeroelastic and aeroacoustic design studies.

  14. Analysis of linear elasticity and non-linearity due to plasticity and material damage in woven and biaxial braided composites

    NASA Astrophysics Data System (ADS)

    Goyal, Deepak

Textile composites have a wide variety of applications in the aerospace, sports, automobile, marine and medical industries. Due to the availability of a variety of textile architectures and numerous parameters associated with each, optimal design through extensive experimental testing is not practical. Predictive tools are needed to perform virtual experiments of various options. The focus of this research is to develop a better understanding of linear elastic response, plasticity- and material-damage-induced nonlinear behavior and mechanics of load flow in textile composites. Textile composites exhibit multiple scales of complexity. The various textile behaviors are analyzed using two-scale finite element modeling. A framework to allow use of a wide variety of damage initiation and growth models is proposed. Plasticity-induced non-linear behavior of 2x2 braided composites is investigated using a modeling approach based on Hill's yield function for orthotropic materials. The mechanics of load flow in textile composites is demonstrated using special non-standard postprocessing techniques that not only highlight the important details, but also transform the extensive amount of output data into comprehensible modes of behavior. The investigations show that the damage models differ from each other in terms of amount of degradation as well as the properties to be degraded under a particular failure mode. When compared with experimental data, predictions of some models match well for glass/epoxy composites whereas others match well for carbon/epoxy composites. However, all the models predicted very similar response when damage factors were made similar, which shows that the magnitudes of the damage factors are very important. Full 3D as well as equivalent tape laminate predictions lie within the range of the experimental data for a wide variety of braided composites with different material systems, which validated the plasticity analysis. Conclusions about the effect of

  15. The Linear Imperative: An Inventory and Conceptual Analysis of Students Overuse of Linearity

    ERIC Educational Resources Information Center

    Van Dooren, Wim; De Bock, Dirk; Janssens, Dirk; Verschaffel, Lieven

    2008-01-01

The overreliance on linear methods in students' reasoning and problem solving has been documented and discussed by several scholars in the field. So far, however, there have been no attempts to assemble the evidence and to analyze it in a systematic way. This article provides an overview and a conceptual analysis of students' tendency to use…

  16. Multivariate linear regression analysis to identify general factors for quantitative predictions of implant stability quotient values

    PubMed Central

    Huang, Hairong; Xu, Zanzan; Shao, Xianhong; Wismeijer, Daniel; Sun, Ping; Wang, Jingxiao

    2017-01-01

    Objectives This study identified potential general influencing factors for a mathematical prediction of implant stability quotient (ISQ) values in clinical practice. Methods We collected the ISQ values of 557 implants from 2 different brands (SICace and Osstem) placed by 2 surgeons in 336 patients. Surgeon 1 placed 329 SICace implants, and surgeon 2 placed 113 SICace implants and 115 Osstem implants. ISQ measurements were taken at T1 (immediately after implant placement) and T2 (before dental restoration). A multivariate linear regression model was used to analyze the influence of the following 11 candidate factors for stability prediction: sex, age, maxillary/mandibular location, bone type, immediate/delayed implantation, bone grafting, insertion torque, I-stage or II-stage healing pattern, implant diameter, implant length and T1-T2 time interval. Results The need for bone grafting as a predictor significantly influenced ISQ values in all three groups at T1 (weight coefficients ranging from -4 to -5). In contrast, implant diameter consistently influenced the ISQ values in all three groups at T2 (weight coefficients ranging from 3.4 to 4.2). Other factors, such as sex, age, I/II-stage implantation and bone type, did not significantly influence ISQ values at T2, and implant length did not significantly influence ISQ values at T1 or T2. Conclusions These findings provide a rational basis for mathematical models to quantitatively predict the ISQ values of implants in clinical practice. PMID:29084260
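As a minimal illustration of the kind of multivariate linear model described above, the sketch below fits ISQ-style data by ordinary least squares in NumPy. All names and numbers (effect sizes, sample size, noise level) are invented for illustration only and are not the study's data.

```python
import numpy as np

# Hypothetical ISQ-style data: each row is one implant; predictors are
# bone grafting (yes/no) and implant diameter (mm). All values are simulated.
rng = np.random.default_rng(0)
n = 200
bone_graft = rng.integers(0, 2, n)       # 0 = no graft, 1 = graft
diameter = rng.uniform(3.5, 5.0, n)      # implant diameter in mm
noise = rng.normal(0.0, 2.0, n)
# Simulated ISQ: grafting lowers ISQ by ~4.5 units, diameter raises it by ~3.8
isq = 60.0 - 4.5 * bone_graft + 3.8 * diameter + noise

# Design matrix with an intercept column; solve the least-squares problem
X = np.column_stack([np.ones(n), bone_graft, diameter])
coef, *_ = np.linalg.lstsq(X, isq, rcond=None)
print(coef)  # [intercept, graft weight (near -4.5), diameter weight (near +3.8)]
```

The fitted weight coefficients play the same role as the ones reported in the abstract (e.g. a negative weight for bone grafting at T1, a positive weight for diameter at T2), though the numbers here are synthetic.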

  17. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    ERIC Educational Resources Information Center

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  18. Development of a linearized unsteady Euler analysis for turbomachinery blade rows

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Montgomery, Matthew D.; Kousen, Kenneth A.

    1995-01-01

A linearized unsteady aerodynamic analysis for axial-flow turbomachinery blading is described in this report. The linearization is based on the Euler equations of fluid motion and is motivated by the need for an efficient aerodynamic analysis that can be used in predicting the aeroelastic and aeroacoustic responses of blade rows. The field equations and surface conditions required for inviscid, nonlinear and linearized, unsteady aerodynamic analyses of three-dimensional flow through a single blade row operating within a cylindrical duct are derived. An existing numerical algorithm for determining time-accurate solutions of the nonlinear unsteady flow problem is described, and a numerical model, based upon this nonlinear flow solver, is formulated for the first-harmonic linear unsteady problem. The linearized aerodynamic and numerical models have been implemented into a first-harmonic unsteady flow code, called LINFLUX. At present this code applies only to two-dimensional flows, but an extension to three dimensions is planned as future work. The three-dimensional aerodynamic and numerical formulations are described in this report. Numerical results for two-dimensional unsteady cascade flows, excited by prescribed blade motions and prescribed aerodynamic disturbances at inlet and exit, are also provided to illustrate the present capabilities of the LINFLUX analysis.

  19. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    ERIC Educational Resources Information Center

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  20. Local linear discriminant analysis framework using sample neighbors.

    PubMed

    Fan, Zizhu; Xu, Yong; Zhang, David

    2011-07-01

    The linear discriminant analysis (LDA) is a very popular linear feature extraction approach. The algorithms of LDA usually perform well under the following two assumptions. The first assumption is that the global data structure is consistent with the local data structure. The second assumption is that the input data classes are Gaussian distributions. However, in real-world applications, these assumptions are not always satisfied. In this paper, we propose an improved LDA framework, the local LDA (LLDA), which can perform well without needing to satisfy the above two assumptions. Our LLDA framework can effectively capture the local structure of samples. According to different types of local data structure, our LLDA framework incorporates several different forms of linear feature extraction approaches, such as the classical LDA and principal component analysis. The proposed framework includes two LLDA algorithms: a vector-based LLDA algorithm and a matrix-based LLDA (MLLDA) algorithm. MLLDA is directly applicable to image recognition, such as face recognition. Our algorithms need to train only a small portion of the whole training set before testing a sample. They are suitable for learning large-scale databases especially when the input data dimensions are very high and can achieve high classification accuracy. Extensive experiments show that the proposed algorithms can obtain good classification results.
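The classical LDA step that the LLDA framework builds on can be sketched in a few lines of NumPy. This is a generic two-class Fisher discriminant under the Gaussian assumptions the abstract mentions, not the authors' LLDA code; the data and class means are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two Gaussian classes in 2-D (the setting where classical LDA performs well)
X0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
X1 = rng.normal([3.0, 3.0], 1.0, size=(100, 2))

# Fisher discriminant direction: w = Sw^{-1} (mu1 - mu0)
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2.0

# Classify by projecting onto w and comparing with the midpoint threshold
err0 = (X0 @ w > threshold).mean()   # fraction of class 0 misclassified
err1 = (X1 @ w <= threshold).mean()  # fraction of class 1 misclassified
print(err0, err1)
```

The LLDA idea in the abstract replaces this single global direction with directions estimated from local neighborhoods, which is what lets it cope with data violating the global-structure and Gaussianity assumptions.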

  1. Development of a Linear Stirling Model with Varying Heat Inputs

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2007-01-01

The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's non-linear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point; thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.

  2. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
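A common way to implement a robust linear model of the kind advocated here is iteratively reweighted least squares with Huber weights. The sketch below is a generic illustration on simulated data with injected outliers, not the paper's implementation; the tuning constant 1.345 is the usual default for the Huber function.

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """Robust linear regression via iteratively reweighted least squares
    with Huber weights (a generic sketch of the idea)."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        # Weighted least-squares step: solve X'WX beta = X'Wy
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale via MAD
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)       # Huber downweighting
    return beta

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)
y[:15] += 20.0                       # a few gross outliers (atypical samples)
X = np.column_stack([np.ones(n), x])

ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_irls(X, y)
print(ols, rob)  # the robust fit stays near (1, 2); OLS is pulled by the outliers
```

The contrast between `ols` and `rob` mirrors the abstract's point: heavy-tailed noise and outliers distort the conventional fit, while the downweighted fit recovers the underlying effect.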

  3. Linear and Nonlinear Analysis of Brain Dynamics in Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Sajedi, Firoozeh; Ahmadlou, Mehran; Vameghi, Roshanak; Gharib, Masoud; Hemmati, Sahel

    2013-01-01

    This study was carried out to determine linear and nonlinear changes of brain dynamics and their relationships with the motor dysfunctions in CP children. For this purpose power of EEG frequency bands (as a linear analysis) and EEG fractality (as a nonlinear analysis) were computed in eyes-closed resting state and statistically compared between 26…

  4. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than those of the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.

  5. Linear stability analysis of scramjet unstart

    NASA Astrophysics Data System (ADS)

    Jang, Ik; Nichols, Joseph; Moin, Parviz

    2015-11-01

    We investigate the bifurcation structure of unstart and restart events in a dual-mode scramjet using the Reynolds-averaged Navier-Stokes equations. The scramjet of interest (HyShot II, Laurence et al., AIAA2011-2310) operates at a free-stream Mach number of approximately 8, and the length of the combustor chamber is 300mm. A heat-release model is applied to mimic the combustion process. Pseudo-arclength continuation with Newton-Raphson iteration is used to calculate multiple solution branches. Stability analysis based on linearized dynamics about the solution curves reveals a metric that optimally forewarns unstart. By combining direct and adjoint eigenmodes, structural sensitivity analysis suggests strategies for unstart mitigation, including changing the isolator length. This work is supported by DOE/NNSA and AFOSR.

  6. Hyperspectral and multispectral data fusion based on linear-quadratic nonnegative matrix factorization

    NASA Astrophysics Data System (ADS)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2017-04-01

    This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
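The linear-quadratic NMF algorithm itself is too involved for a short sketch, but the multiplicative-update idea it extends is the standard Lee-Seung scheme for linear NMF, shown here on a toy nonnegative matrix. This is an illustration of the building block, not the authors' multisharpening method; the matrix sizes and rank are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy nonnegative data matrix (think pixels x spectral bands); illustrative only
V = rng.random((40, 30))
r = 5                                  # factorization rank
W = rng.random((40, r)) + 0.1          # strictly positive initialization
H = rng.random((r, 30)) + 0.1

eps = 1e-9                             # guards against division by zero
err = [np.linalg.norm(V - W @ H)]
for _ in range(200):
    # Lee-Seung multiplicative updates for the Frobenius-norm objective:
    # each update keeps the factors nonnegative and never increases the error
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    err.append(np.linalg.norm(V - W @ H))
print(err[0], err[-1])  # reconstruction error shrinks monotonically
```

In the multisharpening context described above, analogous updates are applied to unmix the hyperspectral and multispectral images before the extracted features are recombined.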

  7. Using Log Linear Analysis for Categorical Family Variables.

    ERIC Educational Resources Information Center

    Moen, Phyllis

    The Goodman technique of log linear analysis is ideal for family research, because it is designed for categorical (non-quantitative) variables. Variables are dichotomized (for example, married/divorced, childless/with children) or otherwise categorized (for example, level of permissiveness, life cycle stage). Contingency tables are then…

  8. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1996-01-01

A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic response characteristics of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. In addition, a numerical model for linearized inviscid unsteady flow, which is based upon an existing nonlinear, implicit, wave-split, finite volume analysis, is described. These aerodynamic and numerical models have been implemented into an unsteady flow code, called LINFLUX. A preliminary version of the LINFLUX code is applied herein to selected benchmark three-dimensional subsonic unsteady flows to illustrate its current capabilities and to uncover existing problems and deficiencies. The numerical results indicate that good progress has been made toward developing a reliable and useful three-dimensional prediction capability. However, some problems, associated with the implementation of an unsteady displacement field and numerical errors near solid boundaries, still exist. Also, accurate far-field conditions must be incorporated into the LINFLUX analysis, so that this analysis can be applied to unsteady flows driven by external aerodynamic excitations.

  9. Bayesian linearized amplitude-versus-frequency inversion for quality factor and its application

    NASA Astrophysics Data System (ADS)

    Yang, Xinchao; Teng, Long; Li, Jingnan; Cheng, Jiubing

    2018-06-01

We propose a straightforward attenuation inversion method by utilizing the amplitude-versus-frequency (AVF) characteristics of seismic data. A new linearized approximation equation of the angle- and frequency-dependent reflectivity in viscoelastic media is derived. We then use the presented equation to implement the Bayesian linear AVF inversion. The inversion result includes not only P-wave and S-wave velocities and densities, but also P-wave and S-wave quality factors. Synthetic tests show that the AVF inversion surpasses the AVA inversion for quality factor estimation. However, a higher signal-to-noise ratio (SNR) of data is necessary for the AVF inversion. To show its feasibility, we apply both the new Bayesian AVF inversion and conventional AVA inversion to tight gas reservoir data from the Sichuan Basin in China. Considering the SNR of the field data, a combination of AVF inversion for attenuation parameters and AVA inversion for elastic parameters is recommended. The result reveals that attenuation estimations could serve as a useful complement in combination with the AVA inversion results for the detection of tight gas reservoirs.

  10. Robust L1-norm two-dimensional linear discriminant analysis.

    PubMed

    Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang

    2015-05-01

In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transferred to a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and on face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Linear stability analysis of collective neutrino oscillations without spurious modes

    NASA Astrophysics Data System (ADS)

    Morinaga, Taiki; Yamada, Shoichi

    2018-01-01

    Collective neutrino oscillations are induced by the presence of neutrinos themselves. As such, they are intrinsically nonlinear phenomena and are much more complex than linear counterparts such as the vacuum or Mikheyev-Smirnov-Wolfenstein oscillations. They obey integro-differential equations, for which it is also very challenging to obtain numerical solutions. If one focuses on the onset of collective oscillations, on the other hand, the equations can be linearized and the technique of linear analysis can be employed. Unfortunately, however, it is well known that such an analysis, when applied with discretizations of continuous angular distributions, suffers from the appearance of so-called spurious modes: unphysical eigenmodes of the discretized linear equations. In this paper, we analyze in detail the origin of these unphysical modes and present a simple solution to this annoying problem. We find that the spurious modes originate from the artificial production of pole singularities instead of a branch cut on the Riemann surface by the discretizations. The branching point singularities on the Riemann surface for the original nondiscretized equations can be recovered by approximating the angular distributions with polynomials and then performing the integrals analytically. We demonstrate for some examples that this simple prescription does remove the spurious modes. We also propose an even simpler method: a piecewise linear approximation to the angular distribution. It is shown that the same methodology is applicable to the multienergy case as well as to the dispersion relation approach that was proposed very recently.

  12. Application of Local Linear Embedding to Nonlinear Exploratory Latent Structure Analysis

    ERIC Educational Resources Information Center

    Wang, Haonan; Iyer, Hari

    2007-01-01

    In this paper we discuss the use of a recent dimension reduction technique called Locally Linear Embedding, introduced by Roweis and Saul, for performing an exploratory latent structure analysis. The coordinate variables from the locally linear embedding describing the manifold on which the data reside serve as the latent variable scores. We…

  13. Development of a Linear Stirling System Model with Varying Heat Inputs

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2007-01-01

The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's nonlinear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point; thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.

  14. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  15. Calculation of the orientational linear and nonlinear correlation factors of polar liquids from the rotational Dean-Kawasaki equation.

    PubMed

    Déjardin, P M; Cornaton, Y; Ghesquière, P; Caliot, C; Brouzet, R

    2018-01-28

    A calculation of the Kirkwood and Piekara-Kielich correlation factors of polar liquids is presented using the forced rotational diffusion theory of Cugliandolo et al. [Phys. Rev. E 91, 032139 (2015)]. These correlation factors are obtained as a function of density and temperature. Our results compare reasonably well with the experimental temperature dependence of the linear dielectric constant of some simple polar liquids across a wide temperature range. A comparison of our results for the linear dielectric constant and the Kirkwood correlation factor with relevant numerical simulations of liquid water and methanol is given.

  16. Robust linear discriminant analysis with distance based estimators

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and to allocate future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, the LDA yields an optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used in place of the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real data studies were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with the existing robust LDR.

  17. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study].

    PubMed

    Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H

    2017-05-10

    We described the time trend of the acute myocardial infarction (AMI) incidence rate in Tianjin from 1999 to 2013 using both the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (CAT P value < linear regression P value). The statistical power of the CAT test decreased, while the result of the linear regression analysis remained the same, when the population size was reduced by a factor of 100 with the AMI incidence rate unchanged. The two statistical methods have their respective advantages and disadvantages. It is necessary to choose the statistical method according to how well it fits the data, or to analyze the results of the two methods together.
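
    The sample-size behaviour described above can be reproduced with a small sketch of the CAT statistic. The yearly case counts and populations below are invented, not the Tianjin AMI data.

```python
import math

# Hedged sketch of the Cochran-Armitage trend (CAT) Z statistic for a
# rising incidence rate across ordered years; all counts are hypothetical.

def cat_z(cases, totals, scores):
    # Z statistic for a linear trend in proportions across ordered groups.
    n_all, x_all = sum(totals), sum(cases)
    p = x_all / n_all
    t = sum(s * (x - n * p) for x, n, s in zip(cases, totals, scores))
    var = p * (1 - p) * (
        sum(n * s * s for n, s in zip(totals, scores))
        - sum(n * s for n, s in zip(totals, scores)) ** 2 / n_all)
    return t / math.sqrt(var)

scores = [0, 1, 2, 3]        # ordered study years
cases = [30, 36, 41, 50]     # hypothetical yearly case counts
totals = [10000] * 4         # population at risk per year

z1 = cat_z(cases, totals, scores)
z2 = cat_z([c * 100 for c in cases], [n * 100 for n in totals], scores)
# The incidence *rates* are identical in both calls, but the CAT
# statistic grows with population size (z2 = 10 * z1, the square root of
# the 100-fold increase); a regression fitted to the rates alone would
# be unchanged, which is the contrast the abstract reports.
print(round(z1, 2), round(z2, 2))
```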

  18. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  19. Multivariate meta-analysis for non-linear and other multi-parameter associations

    PubMed Central

    Gasparrini, A; Armstrong, B; Kenward, M G

    2012-01-01

    In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043
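
    A minimal sketch of the pooling step underlying this two-stage approach, assuming a fixed-effect model and hypothetical two-parameter (e.g. spline-coefficient) study estimates. The mvmeta package additionally estimates between-study heterogeneity, which is omitted here.

```python
# Fixed-effect multivariate meta-analysis sketch: pool per-study
# coefficient vectors b_i with inverse-variance weights W_i = V_i^-1,
# giving b_pooled = (sum W_i)^-1 sum W_i b_i.  All numbers are invented.

def inv2(m):
    # Inverse of a 2x2 matrix.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def pooled_estimate(betas, covs):
    wsum = [[0.0, 0.0], [0.0, 0.0]]
    bsum = [0.0, 0.0]
    for b, v in zip(betas, covs):
        w = inv2(v)
        wsum = [[wsum[i][j] + w[i][j] for j in range(2)] for i in range(2)]
        wb = matvec(w, b)
        bsum = [bsum[0] + wb[0], bsum[1] + wb[1]]
    return matvec(inv2(wsum), bsum)

betas = [[0.8, -0.2], [1.1, -0.3], [0.9, -0.25]]   # per-study coefficients
covs = [[[0.04, 0.01], [0.01, 0.02]]] * 3          # per-study covariances
pooled = pooled_estimate(betas, covs)
print([round(c, 4) for c in pooled])
```

    With equal study covariances, as here, the pooled vector reduces to the simple mean of the study coefficients; unequal covariances would weight more precise studies more heavily.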

  20. Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis

    PubMed Central

    Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel

    2013-01-01

    This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents a detector of the active mode when the commutation sequences of the linear subsystems are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow its output to be estimated. A methodology for testing the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to illustrate the results. PMID:23447007

  1. On the Relation between the Linear Factor Model and the Latent Profile Model

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul

    2011-01-01

    The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…

  2. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.

  3. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
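
    The mathematics the article builds on can be sketched directly: multiple regression coefficients solve the normal equations (X'X)b = X'y. The toy data below are noise-free and hypothetical, chosen so the known coefficients are recovered exactly.

```python
# Multiple linear regression via the normal equations, solved with
# naive Gaussian elimination (fine for small, well-conditioned systems).

def solve(a, b):
    # Solve a x = b by Gaussian elimination with partial pivoting.
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def fit(xs, ys):
    rows = [[1.0] + x for x in xs]              # prepend intercept column
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)]
           for i in range(p)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    return solve(xtx, xty)

xs = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]]
ys = [0.5 + 2.0 * a + 3.0 * b for a, b in xs]   # y = 0.5 + 2*x1 + 3*x2
print([round(c, 6) for c in fit(xs, ys)])       # recovers [0.5, 2.0, 3.0]
```

    Multicollinearity, discussed in the article, shows up here as a nearly singular X'X matrix: the elimination step then divides by tiny pivots and the coefficients become unstable.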

  4. Describing three-class task performance: three-class linear discriminant analysis and three-class ROC analysis

    NASA Astrophysics Data System (ADS)

    He, Xin; Frey, Eric C.

    2007-03-01

    Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal to noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and 3-HO templates. We conclude that the 3-HO based on decision theory has advantages both in its decision theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision theoretic point of view.

  5. Feasible logic Bell-state analysis with linear optics

    PubMed Central

    Zhou, Lan; Sheng, Yu-Bo

    2016-01-01

    We describe a feasible logic Bell-state analysis protocol by employing the logic entanglement to be the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol only uses polarization beam splitters and half-wave plates, which are available in current experimental technology. We can conveniently identify two of the logic Bell states. This protocol can be easily generalized to the arbitrary C-GHZ state analysis. We can also distinguish two N-logic-qubit C-GHZ states. As the previous theory and experiment both showed that the C-GHZ state has the robustness feature, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled state. PMID:26877208

  6. Feasible logic Bell-state analysis with linear optics.

    PubMed

    Zhou, Lan; Sheng, Yu-Bo

    2016-02-15

    We describe a feasible logic Bell-state analysis protocol by employing the logic entanglement to be the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol only uses polarization beam splitters and half-wave plates, which are available in current experimental technology. We can conveniently identify two of the logic Bell states. This protocol can be easily generalized to the arbitrary C-GHZ state analysis. We can also distinguish two N-logic-qubit C-GHZ states. As the previous theory and experiment both showed that the C-GHZ state has the robustness feature, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled state.

  7. Performance of an Axisymmetric Rocket Based Combined Cycle Engine During Rocket Only Operation Using Linear Regression Analysis

    NASA Technical Reports Server (NTRS)

    Smith, Timothy D.; Steffen, Christopher J., Jr.; Yungster, Shaye; Keller, Dennis J.

    1998-01-01

    The all rocket mode of operation is shown to be a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. An axisymmetric RBCC engine was used to determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and multiple linear regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inlet diameter ratio. A perfect gas computational fluid dynamics analysis, using both the Spalart-Allmaras and k-omega turbulence models, was performed with the NPARC code to obtain values of vacuum specific impulse. Results from the multiple linear regression analysis showed that for both the full flow and gas generator configurations increasing mixer-ejector area ratio and rocket area ratio increase performance, while increasing mixer-ejector inlet area ratio and mixer-ejector length-to-diameter ratio decrease performance. Increasing injected secondary flow increased performance for the gas generator analysis, but was not statistically significant for the full flow analysis. Chamber pressure was found to be not statistically significant.

  8. Linear Combination Fitting (LCF)-XANES analysis of As speciation in selected mine-impacted materials

    EPA Pesticide Factsheets

    This table provides sample identification labels and classification of sample type (tailings, calcinated, grey slime). For each sample, total arsenic and iron concentrations determined by acid digestion and ICP analysis are provided, along with arsenic in-vitro bioaccessibility (As IVBA) values to estimate arsenic risk. Lastly, the table provides linear combination fitting results from synchrotron XANES analysis showing the distribution of arsenic speciation phases present in each sample, along with the fitting error (R-factor). This dataset is associated with the following publication: Ollson, C., E. Smith, K. Scheckel, A. Betts, and A. Juhasz. Assessment of arsenic speciation and bioaccessibility in mine-impacted materials. Journal of Hazardous Materials, 313: 130-137 (2016).

  9. Linear Covariance Analysis for a Lunar Lander

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael

    2017-01-01

    A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.

  10. Linear discriminant analysis with misallocation in training samples

    NASA Technical Reports Server (NTRS)

    Chhikara, R. (Principal Investigator); Mckeon, J.

    1982-01-01

    Linear discriminant analysis for the two-class case is studied in the presence of misallocation in training samples. A general approach to modeling misallocation is formulated, and the mean vectors and covariance matrices of the mixture distributions are derived. The asymptotic distribution of the discriminant boundary is obtained, and the asymptotic first two moments of the two types of error rate are given. Numerical results for the error rates are presented for the random and two non-random misallocation models. It is shown that when the allocation procedure for training samples is objectively formulated, the effect of misallocation on the error rates of the Bayes linear discriminant rule can almost be eliminated. If, however, this is not possible, the Fisher rule may be preferred over the Bayes rule.

  11. Performance bounds for modal analysis using sparse linear arrays

    NASA Astrophysics Data System (ADS)

    Li, Yuanxin; Pezeshki, Ali; Scharf, Louis L.; Chi, Yuejie

    2017-05-01

    We study the performance of modal analysis using sparse linear arrays (SLAs), such as nested and co-prime arrays, in both first-order and second-order measurement models. We treat SLAs as constructed from a subset of sensors in a dense uniform linear array (ULA), and characterize the performance loss of SLAs with respect to the ULA due to using far fewer sensors. In particular, we claim that, given the same aperture, in order to achieve comparable performance in terms of the Cramér-Rao bound (CRB) for modal analysis, SLAs require more snapshots: roughly the number of snapshots used by the ULA multiplied by the compression ratio in the number of sensors. This is shown analytically for the case of one undamped mode, and empirically via extensive numerical experiments for more complex scenarios. Moreover, the misspecified CRB proposed by Richmond and Horowitz is also studied, under which SLAs suffer greater performance loss than their ULA counterpart.

  12. Study on power grid characteristics in summer based on Linear regression analysis

    NASA Astrophysics Data System (ADS)

    Tang, Jin-hui; Liu, You-fei; Liu, Juan; Liu, Qiang; Liu, Zhuan; Xu, Xi

    2018-05-01

    Correlation analysis of power load and temperature is a precondition and foundation for accurate load prediction, and it has been studied extensively. This paper constructs a linear correlation model between temperature and power load, and then investigates the correlation of fault-maintenance work orders with the power load. Temperature, power load, and fault-maintenance work-order data for Jiangxi Province in the summer of 2017 were used for the data analysis and mining. The linear regression models established in this paper can further support electricity load growth forecasting, fault-repair work-order review, distribution network weakness analysis, and related work.

  13. Use of factor scores for predicting body weight from linear body measurements in three South African indigenous chicken breeds.

    PubMed

    Malomane, Dorcus Kholofelo; Norris, David; Banga, Cuthbert B; Ngambi, Jones W

    2014-02-01

    Body weight and the weights of body parts are of economic importance. It is difficult to directly predict body weight from highly correlated morphological traits through multiple regression. Factor analysis was carried out to examine the relationship between body weight and five linear body measurements (body length, body girth, wing length, shank thickness, and shank length) in South African Venda (VN), Naked neck (NN), and Potchefstroom koekoek (PK) indigenous chicken breeds, with a view to identifying those factors that define body conformation. Multiple regression was subsequently performed to predict body weight, using orthogonal traits derived from the factor analysis. Measurements were obtained from 210 chickens, 22 weeks of age, 70 chickens per breed. High correlations were obtained between body weight and all body measurements except for wing length in PK. Two factors extracted after varimax rotation explained 91, 95, and 83% of total variation in VN, NN, and PK, respectively. Factor 1 explained 73, 90, and 64% in VN, NN, and PK, respectively, and was loaded on all body measurements except for wing length in VN and PK. In multiple regression, these two factors accounted for 72% of the variation in body weight in VN, while factor 1 alone accounted for 83 and 74% of the variation in body weight in NN and PK, respectively. The two factors could be used to define body size and conformation of these breeds. Factor 1 could predict body weight in all three breeds. Body measurements can be better selected jointly to improve body weight in these breeds.
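
    The two-step idea (collapse correlated measurements into an orthogonal score, then regress weight on the score) can be sketched as follows, using the first principal component as a simplified stand-in for the varimax-rotated factors of the abstract. All measurements and weights below are invented.

```python
# Factor-score-style prediction sketch: extract one orthogonal score
# from correlated body measurements via power iteration on the
# covariance matrix, then regress body weight on that score.

def first_component(data):
    # First principal component of column-centred data (power iteration).
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    x = [[row[j] - means[j] for j in range(p)] for row in data]
    cov = [[sum(r[i] * r[j] for r in x) / (n - 1) for j in range(p)]
           for i in range(p)]
    v = [1.0] * p
    for _ in range(200):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    scores = [sum(xi[j] * v[j] for j in range(p)) for xi in x]
    return v, scores

# Hypothetical (length, girth, shank) measurements and body weights (kg).
meas = [[18.0, 25.0, 8.0], [20.0, 27.0, 9.0], [22.0, 30.0, 10.0],
        [19.0, 26.0, 8.5], [23.0, 31.0, 10.5], [21.0, 28.0, 9.5]]
weight = [1.4, 1.7, 2.1, 1.55, 2.2, 1.85]

_, score = first_component(meas)
# Simple regression of weight on the (centred) score.
sxx = sum(s * s for s in score)
sxy = sum(s * w for s, w in zip(score, weight))
slope = sxy / sxx
mean_w = sum(weight) / len(weight)
pred = mean_w + slope * score[2]      # predicted weight for bird 3
print("predicted weight for bird 3:", round(pred, 2))
```

    Because the score is a single orthogonal composite, the regression avoids the multicollinearity that makes direct multiple regression on the raw measurements unreliable.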

  14. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of

  15. Linear analysis of a force reflective teleoperator

    NASA Technical Reports Server (NTRS)

    Biggers, Klaus B.; Jacobsen, Stephen C.; Davis, Clark C.

    1989-01-01

    Complex force reflective teleoperation systems are often very difficult to analyze due to the large number of components and control loops involved. One mode of a force reflective teleoperator is described. An analysis of the performance of the system based on a linear analysis of the general full order model is presented. Reduced order models are derived and correlated with the full order models. Basic effects of force feedback and position feedback are examined and the effects of time delays between the master and slave are studied. The results show that with symmetrical position-position control of teleoperators, a basic trade off must be made between the intersystem stiffness of the teleoperator, and the impedance felt by the operator in free space.

  16. Use of linear regression models to determine influence factors on the concentration levels of radon in occupied houses

    NASA Astrophysics Data System (ADS)

    Buermeyer, Jonas; Gundlach, Matthias; Grund, Anna-Lisa; Grimm, Volker; Spizyn, Alexander; Breckow, Joachim

    2016-09-01

    This work is part of an analysis of the effects of constructional energy-saving measures on radon concentration levels in dwellings, performed on behalf of the German Federal Office for Radiation Protection. In parallel with radon measurements in five buildings, both meteorological data outside the buildings and indoor climate factors were recorded. To assess occupancy effects, the amount of carbon dioxide (CO2) was measured. For a statistical linear regression model, the data of one building were chosen as an example. Three dummy variables were extracted from the CO2 concentration time series to provide information on the usage and ventilation of the room. The analysis revealed a highly autoregressive model for the radon concentration, with additional influence from natural environmental factors. The autoregression implies a strong dependency on a radon source, since it reflects a backward dependency in time. At this point of the investigation, it cannot be determined whether the influence of outside factors affects the radon source or the inhabitants' ventilation behavior, either of which would cause variation in the observed concentration levels. In any case, the regression analysis might provide further information that would help to distinguish these effects. In the next step, the influence factors will be weighted according to their impact on the concentration levels. This might lead to a model that enables the prediction of radon concentration levels from CO2 measurements combined with environmental parameters, as well as the development of ventilation advice.

  17. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry I.; Kasimov, Aslan R.

    2018-03-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  18. Comparative analysis of linear motor geometries for Stirling coolers

    NASA Astrophysics Data System (ADS)

    R, Rajesh V.; Kuzhiveli, Biju T.

    2017-12-01

    Compared with rotary-motor-driven Stirling coolers, linear motor coolers are characterized by small volume and long life, making them more suitable for space and military applications. The motor design and operational characteristics have a direct effect on the operation of the cooler, so there is ample scope for understanding the behaviour of linear motor systems. In the present work, the authors compare and analyze different moving-magnet linear motor geometries to identify the most favourable one for Stirling coolers. The required axial force in a linear motor is generated by the interaction between the magnetic fields of a current-carrying coil and a permanent magnet. Compact size, the commercial availability of permanent magnets, and the system's low-weight requirement are the main constraints on the design. Finite element analysis performed using Maxwell software serves as the basic tool to analyze the magnet movement, the flux distribution in the air gap, and the magnetic saturation levels in the core. A number of material combinations are investigated for the core before finalizing the design. The effect of varying the core geometry on the flux produced in the air gap is also analyzed. The electromagnetic analysis of the motor indicates that the permanent magnet height ought to be chosen such that, in the balanced position, the magnet is under the influence of the electromagnetic field of the current-carrying coil as well as the outer core. This is necessary so that sufficient thrust force is developed through efficient utilisation of the air-gap flux density. The outer core ends also need to be designed to leave enough room for the magnet movement under operating conditions.

  19. Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP

    NASA Astrophysics Data System (ADS)

    Russo, A.; Trigo, R. M.

    2003-04-01

    A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. NLPCA allows the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. The method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. NLPCA is first applied to a data set sampled from the Lorenz (1963) attractor. It is found that the NLPCA approximations are more representative of the data than the corresponding PCA approximations. The same methodology was applied to the less well-known Lorenz (1984) attractor; however, the results were not as good as those attained with the famous 'butterfly' attractor, and further work with this model is underway to assess whether NLPCA techniques can be more representative of the data characteristics than the corresponding PCA approximations. The application of NLPCA to relatively simple dynamical systems, such as those proposed by Lorenz, is well understood, but the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the explained variance. Finally, directions for future work are presented.

  20. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
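
    The essence of linear covariance analysis (propagating the estimation-error covariance through the filter gains without simulating any measurements) can be sketched in the scalar case. The model constants below are arbitrary illustrative choices, not values from the paper.

```python
# Scalar covariance-only ("LinCov"-style) recursion with Kalman gains:
# the error variance evolves deterministically, so filter performance
# can be assessed without generating measurement data.

phi, q = 1.0, 0.01   # state transition and process-noise variance
h, r = 1.0, 0.25     # measurement map and measurement-noise variance
p = 1.0              # initial error variance

history = [p]
for _ in range(50):
    p_pred = phi * p * phi + q                 # time update: propagate + Q
    k = p_pred * h / (h * p_pred * h + r)      # Kalman gain
    p = (1.0 - k * h) * p_pred                 # measurement update
    history.append(p)

# The variance settles to a steady state well below the prior.
print(round(history[-1], 4))
```

    In a full LinCov tool the same recursion runs on matrices, and batch (epoch-state) estimators replace the sequential gain; the paper's result identifies when the two covariance histories coincide.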

  1. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  2. Factor Analysis and Counseling Research

    ERIC Educational Resources Information Center

    Weiss, David J.

    1970-01-01

    Topics discussed include factor analysis versus cluster analysis, analysis of Q correlation matrices, ipsativity and factor analysis, and tests for the significance of a correlation matrix prior to application of factor analytic techniques. Techniques for factor extraction discussed include principal components, canonical factor analysis, alpha…

  3. Comparative analysis of risk-based cleanup levels and associated remediation costs using linearized multistage model (cancer slope factor) vs. threshold approach (reference dose) for three chlorinated alkenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, L.J.; Mihalich, J.P.

    1995-12-31

    The chlorinated alkenes 1,1-dichloroethene (1,1-DCE), tetrachloroethene (PCE), and trichloroethene (TCE) are common environmental contaminants found in soil and groundwater at hazardous waste sites. Recent assessment of data from epidemiology and mechanistic studies indicates that although exposure to 1,1-DCE, PCE, and TCE causes tumor formation in rodents, it is unlikely that these chemicals are carcinogenic to humans. Nevertheless, many state and federal agencies continue to regulate these compounds as carcinogens through the use of the linearized multistage model and resulting cancer slope factor (CSF). The available data indicate that 1,1-DCE, PCE, and TCE should be assessed using a threshold (i.e., reference dose [RfD]) approach rather than a CSF. This paper summarizes the available metabolic, toxicologic, and epidemiologic data that question the use of the linearized multistage model (and CSF) for extrapolation from rodents to humans. A comparative analysis of potential risk-based cleanup goals (RBGs) for these three compounds in soil is presented for a hazardous waste site. Goals were calculated using the USEPA CSFs and using a threshold (i.e., RfD) approach. Costs associated with remediation activities required to meet each set of these cleanup goals are presented and compared.
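
    The difference between the two goal-setting approaches can be sketched with the standard residential soil-ingestion form of the risk equations. Every exposure and toxicity input below is a hypothetical placeholder, not a regulatory value for these compounds.

```python
# Back-of-the-envelope comparison of a CSF-based vs an RfD-based
# soil cleanup goal (mg/kg) for incidental soil ingestion.  All inputs
# are illustrative placeholders, not regulatory values.

BW = 70.0          # body weight, kg
IR = 100.0         # soil ingestion rate, mg/day
EF = 350.0         # exposure frequency, days/year
ED = 30.0          # exposure duration, years
KG_PER_MG = 1e-6   # unit conversion for ingested soil

def goal_cancer(csf, target_risk=1e-6, at_years=70.0):
    # CSF approach: concentration at the target lifetime cancer risk,
    # averaged over a 70-year lifetime.
    at = at_years * 365.0
    return target_risk * BW * at / (csf * IR * KG_PER_MG * EF * ED)

def goal_threshold(rfd, hq=1.0, at_years=ED):
    # RfD approach: concentration at the target hazard quotient,
    # averaged over the exposure duration.
    at = at_years * 365.0
    return hq * rfd * BW * at / (IR * KG_PER_MG * EF * ED)

csf = 0.05         # hypothetical slope factor, (mg/kg-day)^-1
rfd = 0.005        # hypothetical reference dose, mg/kg-day

c_cancer = goal_cancer(csf)
c_thresh = goal_threshold(rfd)
print(round(c_cancer, 1), round(c_thresh, 1))
```

    With these placeholder inputs the threshold-based goal is roughly two orders of magnitude higher than the CSF-based goal, which is the mechanism behind the remediation-cost differences the paper quantifies.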

  4. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  5. Mathematical Methods in Wave Propagation: Part 2--Non-Linear Wave Front Analysis

    ERIC Educational Resources Information Center

    Jeffrey, Alan

    1971-01-01

    The paper presents applications and methods of analysis for non-linear hyperbolic partial differential equations. The paper is concluded by an account of wave front analysis as applied to the piston problem of gas dynamics. (JG)

  6. Linear Stability Analysis of an Acoustically Vaporized Droplet

    NASA Astrophysics Data System (ADS)

    Siddiqui, Junaid; Qamar, Adnan; Samtaney, Ravi

    2015-11-01

    Acoustic droplet vaporization (ADV) is a phase-transition phenomenon in which a superheated liquid (Dodecafluoropentane, C5F12) droplet converts to a gaseous bubble, instigated by a high-intensity acoustic pulse. This approach was first studied for imaging applications and is applicable in several therapeutic areas such as gas embolotherapy, thrombus dissolution, and drug delivery. High-speed imaging and theoretical modeling of ADV have elucidated several physical aspects, ranging from bubble nucleation to its subsequent growth. Surface instabilities are known to exist and are considered responsible for evolving bubble shapes (non-spherical growth, bubble splitting, and bubble droplet encapsulation). We present a linear stability analysis of the dynamically evolving interfaces of an acoustically vaporized micro-droplet (liquid A) in an infinite pool of a second liquid (liquid B). We propose a thermal ADV model for the base state. The linear analysis utilizes spherical harmonics (Ynm, of degree n and order m) and, under various physical assumptions, results in a time-dependent ODE for the perturbed interface amplitudes (one at the vapor/liquid A interface and the other at the liquid A/liquid B interface). The perturbation amplitudes are found to grow exponentially and do not depend on m. Supported by KAUST Baseline Research Funds.

  7. On the Use of Equivalent Linearization for High-Cycle Fatigue Analysis of Geometrically Nonlinear Structures

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2003-01-01

    The use of stress predictions from equivalent linearization analyses in the computation of high-cycle fatigue life is examined. Stresses so obtained differ in behavior from the fully nonlinear analysis in both spectral shape and amplitude. Consequently, fatigue life predictions made using this data will be affected. Comparisons of fatigue life predictions based upon the stress response obtained from equivalent linear and numerical simulation analyses are made to determine the range over which the equivalent linear analysis is applicable.

  8. A fresh look at linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary; we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
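
    The factorization idea can be checked numerically on a concrete second-order example (chosen here for illustration; it is not taken from the paper): for y'' - 3y' + 2y = f, the operator factors as (D - 1)(D - 2), the impulsive response is g(t) = e^{2t} - e^{t}, and the particular solution with zero initial data is the convolution of g with f.

```python
# Numerical check of the impulsive-response/convolution formula.
import numpy as np

g = lambda t: np.exp(2 * t) - np.exp(t)   # impulsive response of (D-1)(D-2)

def particular(f, T, m=20000):
    """Trapezoid-rule approximation of (g * f)(T) = integral_0^T g(T-s) f(s) ds."""
    s = np.linspace(0.0, T, m + 1)
    vals = g(T - s) * f(s)
    h = T / m
    return h * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

# For f(t) = 1 the exact particular solution is (e^{2T} - 1)/2 - (e^T - 1),
# which one can verify satisfies y'' - 3y' + 2y = 1 with y(0) = y'(0) = 0.
T = 1.0
exact = (np.exp(2 * T) - 1) / 2 - (np.exp(T) - 1)
approx = particular(lambda t: np.ones_like(t), T)
print(abs(approx - exact))                # small discretization error
```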

  9. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  10. ASTROP2-LE: A Mistuned Aeroelastic Analysis System Based on a Two Dimensional Linearized Euler Solver

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Srivastava, R.; Mehmed, Oral

    2002-01-01

    An aeroelastic analysis system for flutter and forced response analysis of turbomachines, based on a two-dimensional linearized unsteady Euler solver, has been developed. The ASTROP2 code, an aeroelastic stability analysis program for turbomachinery, was used as a basis for this development. The ASTROP2 code uses strip theory to couple a two-dimensional aerodynamic model with a three-dimensional structural model. The code was modified to include forced response capability. The formulation was also modified to include aeroelastic analysis with mistuning. A linearized unsteady Euler solver, LINFLX2D, was added to model the unsteady aerodynamics in ASTROP2. By calculating the unsteady aerodynamic loads using LINFLX2D, it is possible to include the effects of transonic flow on flutter and forced response in the analysis. The stability is inferred from an eigenvalue analysis. The revised code, ASTROP2-LE (ASTROP2 with Linearized Euler aerodynamics), is validated by comparing its predictions with those obtained using linear unsteady aerodynamic solutions.

  11. Development of a Linearized Unsteady Euler Analysis with Application to Wake/Blade-Row Interactions

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Montgomery, Matthew D.; Chuang, H. Andrew

    1999-01-01

    A three-dimensional, linearized, Euler analysis is being developed to provide a comprehensive and efficient unsteady aerodynamic analysis for predicting the aeroacoustic and aeroelastic responses of axial-flow turbomachinery blading. The mathematical models needed to describe nonlinear and linearized, inviscid, unsteady flows through a blade row operating within a cylindrical annular duct are presented in this report. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite volume analysis to far-field eigen analyses, is also described. The linearized aerodynamic and numerical models have been implemented into the three-dimensional unsteady flow code, LINFLUX. This code is applied herein to predict unsteady subsonic flows driven by wake or vortical excitations. The intent is to validate the LINFLUX analysis via numerical results for simple benchmark unsteady flows and to demonstrate this analysis via application to a realistic wake/blade-row interaction. Detailed numerical results for a three-dimensional version of the 10th Standard Cascade and a fan exit guide vane indicate that LINFLUX is becoming a reliable and useful unsteady aerodynamic prediction capability that can be applied, in the future, to assess the three-dimensional flow physics important to blade-row, aeroacoustic and aeroelastic responses.

  12. A Fresh Look at Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    ERIC Educational Resources Information Center

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary; we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as…

  13. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  14. Stability analysis and stabilization strategies for linear supply chains

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi; Helbing, Dirk

    2004-04-01

    Due to delays in the adaptation of production or delivery rates, supply chains can be dynamically unstable with respect to perturbations in the consumption rate, which is known as the “bull-whip effect”. Here, we study several conceivable production strategies to stabilize supply chains, expressed as different specifications of the management function that controls the production speed as a function of the stock levels. In particular, we investigate whether reacting to the stock levels of other producers or suppliers has a stabilizing effect. We also demonstrate that the anticipation of future stock levels can stabilize the supply system, provided the forecast horizon τ is long enough. To show this, we derive linear stability conditions and carry out simulations for different control strategies. The results indicate that linear stability analysis is a helpful tool for judging the stabilization effect, although unexpected deviations can occur in the non-linear regime. There are also signs of phase transitions and chaotic behavior, but these remain to be investigated more thoroughly in the future.
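
    A minimal sketch of such a linear stability analysis, for an assumed single-stage toy model rather than the paper's supply-chain equations: perturbations of inventory n and production rate q obey δn' = δq and δq' = (-k·δn - δq)/τ, where k is the stock-sensitivity of the production target and τ the adaptation delay, and stability is read off the eigenvalues of the Jacobian.

```python
# Assumed toy model: dn' = dq, dq' = (-k*dn - dq)/tau.
import numpy as np

def eigenvalues(k, tau):
    J = np.array([[0.0, 1.0], [-k / tau, -1.0 / tau]])
    return np.linalg.eigvals(J)

lam_fast = eigenvalues(k=0.2, tau=0.5)   # 4*k*tau < 1: overdamped, no oscillation
lam_slow = eigenvalues(k=2.0, tau=2.0)   # 4*k*tau > 1: damped oscillation
print(lam_fast, lam_slow)
```

    Negative real parts mean the perturbation decays; a nonzero imaginary part signals the oscillatory stock swings associated with the bull-whip effect.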

  15. Ultrahigh-Dimensional Multiclass Linear Discriminant Analysis by Pairwise Sure Independence Screening

    PubMed Central

    Pan, Rui; Wang, Hansheng; Li, Runze

    2016-01-01

    This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis in an ultrahigh-dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. PMID:28127109

  16. Linear and nonlinear analysis of fluid slosh dampers

    NASA Astrophysics Data System (ADS)

    Sayar, B. A.; Baumgarten, J. R.

    1982-11-01

    A vibrating structure and a container partially filled with fluid are considered coupled in a free vibration mode. To simplify the mathematical analysis, a pendulum model to duplicate the fluid motion and a mass-spring dashpot representing the vibrating structure are used. The equations of motion are derived by Lagrange's energy approach and expressed in parametric form. For a wide range of parametric values the logarithmic decrements of the main system are calculated from theoretical and experimental response curves in the linear analysis. However, for the nonlinear analysis the theoretical and experimental response curves of the main system are compared. Theoretical predictions are justified by experimental observations with excellent agreement. It is concluded finally that for a proper selection of design parameters, containers partially filled with viscous fluids serve as good vibration dampers.

  17. Analysis and comparison of end effects in linear switched reluctance and hybrid motors

    NASA Astrophysics Data System (ADS)

    Barhoumi, El Manaa; Abo-Khalil, Ahmed Galal; Berrouche, Youcef; Wurtz, Frederic

    2017-03-01

    This paper presents and discusses the longitudinal and transversal end effects that affect the propulsive force of linear motors. Generally, the modeling of linear machines must consider the force distortion due to the specific geometry of linear actuators. Inserting permanent magnets on the stator improves the propulsive force produced by switched reluctance linear motors; in the hybrid structure, the inserted permanent magnets also considerably reduce the end effects observed in linear motors. The analysis was conducted using 2D and 3D finite element methods. The permanent magnet reinforces the flux produced by the winding and reorients it, which modifies the impact of the end effects. The simulations and discussions presented show the value of this study for characterizing the end effects in two different linear motors.

  18. Non-Linear Vibroisolation Pads Design, Numerical FEM Analysis and Introductory Experimental Investigations

    NASA Astrophysics Data System (ADS)

    Zielnica, J.; Ziółkowski, A.; Cempel, C.

    2003-03-01

    The design and the theoretical and experimental investigation of vibroisolation pads with non-linear static and dynamic responses are the objectives of this paper. The analytical investigations are based on non-linear finite element analysis, where the load-deflection response is traced against the shape and material properties of the analysed model of the vibroisolation pad. A new model of vibroisolation pad of antisymmetrical type was designed and analysed by the finite element method based on the second-order theory (large displacements and strains) with the assumption of material non-linearities (Mooney-Rivlin model). The stability loss phenomenon was used in the design of the vibroisolators, and it was proved that it is possible to design a vibroisolator in the form of a continuous pad with the non-linear static and dynamic response typical of vibroisolation purposes. The materials used for the vibroisolator are rubber, elastomers, and similar materials. The results of the theoretical investigations were examined experimentally. A series of models made of soft rubber were designed for test purposes. The experimental investigations of the vibroisolation models, under static and dynamic loads, confirmed the results of the FEM analysis.

  19. Weighted functional linear regression models for gene-based association analysis.

    PubMed

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10^-6) when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
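
    Frequency-dependent weights of this kind are often computed from a beta density evaluated at the minor allele frequency; the sketch below uses the common Beta(1, 25) convention as an assumed example (the paper's exact weighting may differ).

```python
# Assumed Beta(1, 25) weighting by minor allele frequency (SKAT-style convention).
from math import gamma

def beta_pdf(x, a=1.0, b=25.0):
    return gamma(a + b) / (gamma(a) * gamma(b)) * x ** (a - 1) * (1 - x) ** (b - 1)

mafs = [0.001, 0.01, 0.05, 0.2]
weights = [beta_pdf(m) for m in mafs]
print(weights)  # monotonically decreasing: rarer variants get larger weights
```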

  20. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  1. Bayes factors for the linear ballistic accumulator model of decision-making.

    PubMed

    Evans, Nathan J; Brown, Scott D

    2018-04-01

    Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models that assume different parameters cause the observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. To overcome the computational burden of estimating Bayes factors via brute-force integration, we exploit general-purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
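
    The core Monte-Carlo idea, estimating each marginal likelihood by averaging the likelihood over draws from the prior, can be sketched for a toy binomial model (not the LBA; the model, prior, and data below are invented for illustration):

```python
# Toy Monte-Carlo Bayes factor; comb gives the binomial coefficient.
import numpy as np
from math import comb

rng = np.random.default_rng(1)
n, k = 20, 14                             # data: k successes in n trials

def likelihood(p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# M1: p ~ Uniform(0, 1); M0: fixed p = 0.5 (point null).
m1 = likelihood(rng.uniform(0.0, 1.0, 200_000)).mean()  # MC marginal likelihood
m0 = likelihood(0.5)
bf10 = m1 / m0
print(bf10)                               # Bayes factor for M1 over M0
```

    For this toy model the marginal likelihood under the uniform prior is exactly 1/(n+1), which gives a direct check on the Monte-Carlo estimate.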

  2. Runtime Analysis of Linear Temporal Logic Specifications

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus

    2001-01-01

    This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL-to-Büchi-automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
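
    The observer idea can be sketched with a toy finite-trace checker (illustrative Python, not the JPaX translation, which builds automata from LTL formulae): each property is a function evaluated over a finite list of program states.

```python
# Toy finite-trace property checkers; states are dicts of proposition -> bool.
def always(p):
    return lambda trace: all(p(s) for s in trace)

def eventually(p):
    return lambda trace: any(p(s) for s in trace)

def until(p, q):
    """p holds at every state strictly before the first state where q holds."""
    def check(trace):
        for s in trace:
            if q(s):
                return True
            if not p(s):
                return False
        return False                      # q never held on this finite trace
    return check

trace = [{"req": True, "ack": False},
         {"req": True, "ack": False},
         {"req": False, "ack": True}]

prop = until(lambda s: s["req"], lambda s: s["ack"])
print(prop(trace))  # True: req holds until ack finally occurs
```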

  3. Feature-space-based FMRI analysis using the optimal linear transformation.

    PubMed

    Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S

    2010-09-01

    The optimal linear transformation (OLT), a feature-space image analysis technique, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and to compare it with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.

  4. The Langley Stability and Transition Analysis Code (LASTRAC) : LST, Linear and Nonlinear PSE for 2-D, Axisymmetric, and Infinite Swept Wing Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2003-01-01

    During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly owing, in large part, to NASA program support such as the National Aerospace Plane (NASP), High-Speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST) programs. Experimental, theoretical, and computational efforts on various issues, such as receptivity and the linear and nonlinear evolution of instability waves, have broadened our knowledge base for this intricate flow phenomenon. Despite all these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use and yet capable of dealing with a broad range of transition-related issues. LASTRAC was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory (LST) or the linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin-friction rise. Coupled with the built-in receptivity model that is currently under development, the nonlinear PSE method offers a synergistic approach to predict transition onset for a given disturbance environment based on first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC.
Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, non-linear breakdown simulations, and controls of stationary crossflow instability in supersonic swept wing boundary

  5. Linear and nonlinear subspace analysis of hand movements during grasping.

    PubMed

    Cui, Phil Hengjun; Visell, Yon

    2014-01-01

    This study investigated nonlinear patterns of coordination, or synergies, underlying whole-hand grasping kinematics. Prior research has shed considerable light on the roles played by such coordinated degrees-of-freedom (DOF), illuminating how motor control is facilitated by structural and functional specializations in the brain, peripheral nervous system, and musculoskeletal system. However, existing analyses suppose that the patterns of coordination can be captured by means of linear analyses, as linear combinations of nominally independent DOF. In contrast, hand kinematics is itself highly nonlinear in nature. To address this discrepancy, we sought to determine whether nonlinear synergies might serve to more accurately and efficiently explain human grasping kinematics than is possible with linear analyses. We analyzed motion capture data acquired from the hands of individuals as they grasped an array of common objects, using four of the most widely used linear and nonlinear dimensionality reduction algorithms. We compared the results using a recently developed algorithm-agnostic quality measure, which enabled us to assess the quality of the resulting dimensional reductions by measuring the extent to which local neighborhood information in the data was preserved. Although qualitative inspection of the data suggested that nonlinear correlations between kinematic variables were present, we found that linear modeling, in the form of Principal Component Analysis, could perform better than any of the nonlinear techniques we applied.
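
    The linear analysis referred to, PCA, can be sketched on synthetic "hand posture" data generated from a few latent synergies (the dimensions and noise level below are invented for illustration):

```python
# Synthetic data: 50 "postures" of 15 joint angles from 3 latent synergies.
import numpy as np

rng = np.random.default_rng(2)
latent = rng.standard_normal((50, 3))
mixing = rng.standard_normal((3, 15))
X = latent @ mixing + 0.05 * rng.standard_normal((50, 15))

Xc = X - X.mean(axis=0)                   # center each joint angle
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()           # variance explained per component
print(explained[:3].sum())                # nearly 1: three components suffice
```

    When the data really are low-dimensional and nearly linear, the first few principal components capture almost all of the variance, which is the pattern the study reports for grasping kinematics.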

  6. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements (hand length, hand breadth, foot length, and foot breadth), taken on the left side in each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The stature estimated from the multiplication factors and from regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in stature estimation from the regression analysis method is smaller than that of the multiplication factor method, confirming that regression analysis is better than the multiplication factor method for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
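
    The two estimators can be compared on synthetic data (invented numbers, not the North Indian sample). The multiplication factor is the mean stature-to-measurement ratio, while the regression also fits an intercept, so its in-sample squared error can never exceed that of the multiplication factor:

```python
# Synthetic stature/foot-length data (invented coefficients).
import numpy as np

rng = np.random.default_rng(3)
foot = rng.uniform(22.0, 28.0, 100)                       # foot length, cm
stature = 60.0 + 4.0 * foot + rng.normal(0.0, 3.0, 100)   # stature, cm

mf = (stature / foot).mean()                  # multiplication factor
est_mf = mf * foot

A = np.column_stack([np.ones_like(foot), foot])   # regression with intercept
b0, b1 = np.linalg.lstsq(A, stature, rcond=None)[0]
est_reg = b0 + b1 * foot

sse_mf = ((stature - est_mf) ** 2).sum()
sse_reg = ((stature - est_reg) ** 2).sum()
print(sse_reg <= sse_mf)  # True: OLS minimizes in-sample squared error
```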

  7. Under which climate and soil conditions the plant productivity-precipitation relationship is linear or nonlinear?

    PubMed

    Ye, Jian-Sheng; Pei, Jiu-Ying; Fang, Chao

    2018-03-01

    Understanding under which climate and soil conditions the plant productivity-precipitation relationship is linear or nonlinear is useful for accurately predicting the response of ecosystem function to global environmental change. Using long-term (2000-2016) net primary productivity (NPP)-precipitation datasets derived from satellite observations, we identify >5600 pixels in the Northern Hemisphere landmass that fit either linear or nonlinear temporal NPP-precipitation relationships. Differences in climate (precipitation, radiation, ratio of actual to potential evapotranspiration, temperature) and soil factors (nitrogen, phosphorous, organic carbon, field capacity) between the linear and nonlinear types are evaluated. Our analysis shows that both linear and nonlinear types exhibit similar interannual precipitation variabilities and occurrences of extreme precipitation. Permutational multivariate analysis of variance suggests that the linear and nonlinear types differ significantly with regard to radiation, the ratio of actual to potential evapotranspiration, and soil factors. The nonlinear type has lower radiation and/or fewer soil nutrients than the linear type, suggesting that the nonlinear type features a higher degree of limitation by resources other than precipitation. This study identifies several factors that limit the responses of plant productivity to changes in precipitation, thus causing a nonlinear NPP-precipitation pattern. Precipitation manipulation and modeling experiments should be combined with changes in other climate and soil factors to better predict the response of plant productivity under future climate. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Analysis of facial motion patterns during speech using a matrix factorization algorithm

    PubMed Central

    Lucero, Jorge C.; Munhall, Kevin G.

    2008-01-01

    This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consists of three-dimensional displacement records of a set of markers located on a subject’s face while producing speech. A QR factorization with column pivoting algorithm selects a subset of markers with independent motion patterns. The subset is used as a basis to fit the motion of the other facial markers, which determines facial regions of influence of each of the linearly independent markers. Those regions constitute kinematic “eigenregions” whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records. PMID:19062866
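
    A toy version of the marker-selection idea (synthetic data, and a simplified greedy deflation loop rather than a full pivoted-QR implementation): repeatedly pick the column with the largest residual norm, deflate the remaining columns against it, then fit every marker from the selected subset.

```python
# Synthetic marker trajectories: every column mixes k independent patterns.
import numpy as np

rng = np.random.default_rng(4)
T, n_markers, k = 100, 12, 3
basis = rng.standard_normal((T, k))               # k independent motion patterns
X = basis @ rng.standard_normal((k, n_markers))

R = X.copy()
selected = []
for _ in range(k):
    j = int(np.argmax(np.linalg.norm(R, axis=0)))  # most "independent" column
    selected.append(j)
    q = R[:, j] / np.linalg.norm(R[:, j])
    R = R - np.outer(q, q @ R)                     # deflate all columns

# Fit every marker's motion from the selected subset.
coef, *_ = np.linalg.lstsq(X[:, selected], X, rcond=None)
err = np.linalg.norm(X - X[:, selected] @ coef)
print(sorted(selected), err)
```

    Because the synthetic markers really do mix only k independent patterns, the k selected columns reproduce all the others almost exactly.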

  9. Linear regression analysis of survival data with missing censoring indicators.

    PubMed

    Wang, Qihua; Dinse, Gregg E

    2011-04-01

    Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.

  10. Classification and regression tree analysis vs. multivariable linear and logistic regression methods as statistical tools for studying haemophilia.

    PubMed

    Henrard, S; Speybroeck, N; Hermans, C

    2015-11-01

    Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII, for haemophilia A, or IX, for haemophilia B. As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regression, for binary outcomes, and multiple linear regression for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to explain didactically how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method, developed by Breiman in 1984, is non-parametric and non-linear, and is based on the repeated partitioning of a sample into subgroups according to a certain criterion. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few studies using this methodology specifically in haemophilia have been published to date. Two previously published examples of CART analysis in this field are explained didactically and in detail. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, thus facilitating medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.
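
    As a toy illustration of the partitioning criterion at the heart of CART (not the study's software or data), one recursive step chooses the split that minimizes the weighted Gini impurity of the resulting child nodes:

```python
# One CART partitioning step on a binary outcome: search thresholds on a
# single predictor and keep the one with the lowest weighted Gini impurity.
def gini(labels):
    """Gini impurity of a binary label list."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(x, y):
    """Return (threshold, impurity) minimizing weighted child impurity."""
    best = (None, float("inf"))
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best

x = [1, 2, 3, 4, 5, 6]
y = [0, 0, 0, 1, 1, 1]
print(best_split(x, y))   # -> (3, 0.0): a perfect split at x <= 3
```

    A full CART implementation applies this search recursively over all predictors and then prunes the tree; libraries such as scikit-learn automate this.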

  11. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere

    PubMed Central

    Rossi, Sergio; Anfodillo, Tommaso; Čufar, Katarina; Cuny, Henri E.; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gričar, Jožica; Gruber, Andreas; King, Gregory M.; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B. K.

    2013-01-01

    Background and Aims Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Methods Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1–9 years per site from 1998 to 2011. Key Results The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern Conclusions The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond

  12. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere.

    PubMed

    Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K

    2013-12-01

    Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.

  13. A multiple linear regression analysis of factors affecting the simulated Basic Life Support (BLS) performance with Automated External Defibrillator (AED) in Flemish lifeguards.

    PubMed

    Iserbyt, Peter; Schouppe, Gilles; Charlier, Nathalie

    2015-04-01

    Research investigating lifeguards' performance of Basic Life Support (BLS) with an Automated External Defibrillator (AED) is limited. This study assessed simulated BLS/AED performance in Flemish lifeguards and identified factors affecting this performance. Six hundred and sixteen (217 female and 399 male) certified Flemish lifeguards (aged 16-71 years) performed BLS with an AED on a Laerdal ResusciAnne manikin simulating an adult victim of drowning. Stepwise multiple linear regression analysis was conducted with BLS/AED performance as the outcome variable and demographic data as explanatory variables. Mean BLS/AED performance for all lifeguards was 66.5%. Compression rate and depth adhered closely to ERC 2010 guidelines. Ventilation volume and flow rate exceeded the guidelines. A significant regression model, F(6, 415)=25.61, p<.001, ES=.38, explained 27% of the variance in BLS performance (R2=.27). Significant predictors were age (beta=-.31, p<.001), years of certification (beta=-.41, p<.001), time on duty per year (beta=-.25, p<.001), practising BLS skills (beta=.11, p=.011), and being a professional lifeguard (beta=-.13, p=.029). Seventy-one percent of lifeguards reported not practising BLS/AED. Being young, being recently certified, working few days per year, practising BLS skills and not being a professional lifeguard were associated with higher BLS/AED performance. Measures should be taken to prevent BLS/AED performance from decaying with age and longer certification. Refresher courses could include a formal skills test and lifeguards should be encouraged to practise their BLS/AED skills. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of Bounded Linear Stability Analysis (BLSA) method for metrics driven adaptive control. The bounded linear stability analysis method is used for analyzing stability of adaptive control models, without linearizing the adaptive laws. Metrics-driven adaptive control introduces a notion that adaptation should be driven by some stability metrics to achieve robustness. By the application of bounded linear stability analysis method the adaptive gain is adjusted during the adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport model of aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  15. Edmonton obesity staging system among pediatric patients: a validation and obesogenic risk factor analysis.

    PubMed

    Grammatikopoulou, M G; Chourdakis, M; Gkiouras, K; Roumeli, P; Poulimeneas, D; Apostolidou, E; Chountalas, I; Tirodimos, I; Filippou, O; Papadakou-Lagogianni, S; Dardavessis, T

    2018-01-08

    The Edmonton Obesity Staging System for Pediatrics (EOSS-P) is a useful tool, delineating different obesity severity tiers associated with distinct treatment barriers. The aim of the study was to apply the EOSS-P on a Greek pediatric cohort and assess risk factors associated with each stage, compared to normal weight controls. A total of 361 children (2-14 years old), outpatients of an Athenian hospital, participated in this case-control study by forming two groups: the obese (n = 203) and the normoweight controls (n = 158). Anthropometry, blood pressure, blood and biochemical markers, comorbidities and obesogenic lifestyle parameters were recorded and the EOSS-P was applied. Validation of EOSS-P stages was conducted by juxtaposing them with IOTF-defined weight status. Obesogenic risk factors' analysis was conducted by constructing gender-and-age-adjusted (GA) and multivariate logistic models. The majority of obese children were stratified at stage 1 (46.0%), 17.0% were on stage 0, and 37.0% on stage 2. The validation analysis revealed that EOSS-P stages greater than 0 were associated with diastolic blood pressure and levels of glucose, cholesterol, LDL and ALT. Reduced obesity odds were observed among children playing outdoors and increased odds for every screen time hour, both in the GA and in the multivariate analyses (all P < 0.05). Although participation in sports > 2 times/week was associated with reduced obesity odds in the GA analysis (OR = 0.57, 95% CI = 0.33-0.98, P for linear trend = 0.047), it lost its significance in the multivariate analysis (P for linear trend = 0.145). Analogous results were recorded in the analyses of the abovementioned physical activity risk factors for the EOSS-P stages. Linear relationships were observed for fast-food consumption and IOTF-defined obesity and higher than 0 EOSS-P stages. Parental obesity status was associated with all EOSS-P stages and IOTF-defined obesity status. Few outpatients were healthy obese (stage 0), while

  16. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.

  17. Examining Factors Affecting Science Achievement of Hong Kong in PISA 2006 Using Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Lam, Terence Yuk Ping; Lau, Kwok Chi

    2014-01-01

    This study uses hierarchical linear modeling to examine the influence of a range of factors on the science performances of Hong Kong students in PISA 2006. Hong Kong has been consistently ranked highly in international science assessments, such as Programme for International Student Assessment and Trends in International Mathematics and Science…

  18. SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Xiao, Y; Wang, J

    2014-06-15

    Purpose: To develop and implement a failure mode and effect analysis (FMEA) on routine monthly Quality Assurance (QA) tests (physical tests part) of a linear accelerator. Methods: A systematic failure mode and effect analysis was performed for monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk probability number (RPN) was calculated as the product of the probability of occurrence (O), the severity of effect (S), and the detectability of the failure (D). RPN scores range from 1 to 1000, with higher scores indicating stronger correlation to a given influencing factor of a failure mode. Five medical physicists in our institution discussed and defined the O, S, and D values. Results: 15 possible failure modes were identified; the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150, and a checklist of FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with RPN greater than 50 were considered highly correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in the future.
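
    The RPN described in this record is simply the product O × S × D; a minimal sketch with invented failure modes and scores (none taken from the study):

```python
# Illustrative FMEA ranking: RPN = occurrence (O) x severity (S) x
# detectability (D). Failure modes and scores are invented examples.
modes = {
    "output_drift":       {"O": 7, "S": 5, "D": 3},
    "laser_misalignment": {"O": 6, "S": 4, "D": 2},
    "imaging_isocenter":  {"O": 2, "S": 9, "D": 4},
}
rpn = {m: v["O"] * v["S"] * v["D"] for m, v in modes.items()}
for mode in sorted(rpn, key=rpn.get, reverse=True):
    print(mode, rpn[mode])   # highest RPN first -> highest QA priority
```

    Ranking failure modes by RPN is what lets a QA program concentrate testing frequency on the highest-risk items.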

  19. Airfoil stall interpreted through linear stability analysis

    NASA Astrophysics Data System (ADS)

    Busquet, Denis; Juniper, Matthew; Richez, Francois; Marquet, Olivier; Sipp, Denis

    2017-11-01

    Although airfoil stall has been widely investigated, the origin of this phenomenon, which manifests as a sudden drop of lift, is still not clearly understood. In the specific case of static stall, multiple steady solutions have been identified experimentally and numerically around the stall angle. We are interested here in investigating the stability of these steady solutions so as to first model and then control the dynamics. The study is performed on a 2D helicopter blade airfoil OA209 at low Mach number, M = 0.2, and high Reynolds number, Re = 1.8 × 10⁶. Steady RANS computation using a Spalart-Allmaras model is coupled with continuation methods (pseudo-arclength and Newton's method) to obtain steady states for several angles of incidence. The results show one upper branch (high lift) and one lower branch (low lift) connected by a middle branch, characterizing a hysteresis phenomenon. A linear stability analysis performed around these equilibrium states highlights a mode responsible for stall, which starts with a low-frequency oscillation. A bifurcation scenario is deduced from the behaviour of this mode. To shed light on the nonlinear behaviour, a low-order nonlinear model is created with the same linear stability behaviour as that observed for the airfoil.

  20. Mathematical modelling and linear stability analysis of laser fusion cutting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hermanns, Torsten; Schulz, Wolfgang; Vossen, Georg

    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the process setting values and the amount of dynamic behavior in the process.

  1. Analysis of an inventory model for both linearly decreasing demand and holding cost

    NASA Astrophysics Data System (ADS)

    Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.

    2016-03-01

    This study analyses an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The model focuses on commodities with linearly decreasing demand and no shortages. The holding cost does not remain uniform over time because of variation in the time value of money; here we consider a holding cost that decreases with time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example, and a sensitivity analysis is included.

  2. Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Denson, Nida; Seltzer, Michael H.

    2011-01-01

    The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…

  3. Using Horn's Parallel Analysis Method in Exploratory Factor Analysis for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Çokluk, Ömay; Koçak, Duygu

    2016-01-01

    In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
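
    Horn's parallel analysis, the method compared above, retains factors whose sample eigenvalues exceed the mean eigenvalues obtained from random data of the same shape. A minimal sketch on synthetic data with a known two-factor structure (all values illustrative):

```python
# Parallel analysis sketch: compare observed correlation-matrix
# eigenvalues against the average eigenvalues of random-data correlation
# matrices of the same dimensions; retain factors that exceed the average.
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 6
f = rng.normal(size=(n, 2))                     # two latent factors
load = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], float)    # simple loading structure
x = f @ load + 0.5 * rng.normal(size=(n, p))    # observed variables

obs = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
ref = np.mean([np.sort(np.linalg.eigvalsh(
    np.corrcoef(rng.normal(size=(n, p)), rowvar=False)))[::-1]
    for _ in range(200)], axis=0)
n_factors = int(np.sum(obs > ref))
print(n_factors)   # recovers the two planted factors
```

    Unlike the eigenvalue-greater-than-one rule, the random-data benchmark adapts to the sample size and number of variables.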

  4. The use of modified scaling factors in the design of high-power, non-linear, transmitting rod-core antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Dvorak, Steven L.; Sternberg, Ben K.

    2010-10-01

    In this paper, we develop a technique for designing high-power, non-linear, transmitting rod-core antennas by using simple modified scale factors rather than running labor-intensive numerical models. By using modified scale factors, a designer can predict changes in magnetic moment, inductance, core series loss resistance, etc. We define modified scale factors as the case when all physical dimensions of the rod antenna are scaled by p, except for the cross-sectional area of the individual wires or strips that are used to construct the core. This allows one to make measurements on a scaled-down version of the rod antenna using the same core material that will be used in the final antenna design. The modified scale factors were derived from prolate spheroidal analytical expressions for a finite-length rod antenna and were verified with experimental results. The modified scaling factors can only be used if the magnetic flux densities within the two scaled cores are the same. With the magnetic flux density constant, the two scaled cores will operate with the same complex permeability, thus changing the non-linear problem to a quasi-linear problem. We also demonstrate that by holding the number of turns times the drive current constant, while changing the number of turns, the inductance and core series loss resistance change by the number of turns squared. Experimental measurements were made on rod cores made from varying diameters of black oxide, low carbon steel wires and different widths of Metglas foil. Furthermore, we demonstrate that the modified scale factors work even in the presence of eddy currents within the core material.
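
    As a numerical illustration of the turns-squared relation stated at the end of the abstract (the helper function and values below are hypothetical, not the paper's measurements):

```python
# With N*I held constant, inductance and core series loss resistance
# scale with the number of turns squared. Hypothetical reference values.
def scaled_inductance(L_ref, n_ref, n_new):
    """Scale a reference inductance by (n_new / n_ref)**2."""
    return L_ref * (n_new / n_ref) ** 2

# doubling the turns quadruples the inductance
print(scaled_inductance(2.0e-3, 100, 200))   # 0.008 (H)
```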

  5. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  6. A linear programming manual

    NASA Technical Reports Server (NTRS)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
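
    A tiny linear program of the kind such a manual covers, solved here with SciPy's simplex-family solver (the problem itself is an invented example, not one from the manual):

```python
# Maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
from scipy.optimize import linprog

res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 0]], b_ub=[4, 2],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)   # optimum at x=2, y=2 with objective value 10
```

    The dual values and reduced costs mentioned in the record are available from the same solver result (`res.ineqlin.marginals` in recent SciPy versions).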

  7. Relationship between linear type and fertility traits in Nguni cows.

    PubMed

    Zindove, T J; Chimonyo, M; Nephawe, K A

    2015-06-01

    The objective of the study was to assess the dimensionality of seven linear traits (body condition score, body stature, body length, heart girth, navel height, body depth and flank circumference) in Nguni cows using factor analysis and indicate the relationship between the extracted latent variables and calving interval (CI) and age at first calving (AFC). The traits were measured between December 2012 and November 2013 on 1559 Nguni cows kept under thornveld, succulent karoo, grassland and bushveld vegetation types. Low partial correlations (-0.04 to 0.51), high Kaiser statistic for measure of sampling adequacy scores and significance of the Bartlett sphericity test (P<0.05) supported the suitability of the data for factor analysis; factors with eigenvalues >1 were retained. Factor 1 included body condition score, body depth, flank circumference and heart girth and represented body capacity of cows. Factor 2 included body length, body stature and navel height and represented frame size of cows. CI and AFC decreased linearly with increasing factor 1. There was a quadratic increase in AFC as factor 2 increased (P<0.05). It was concluded that the linear type traits under study can be grouped into two distinct factors, one linked to body capacity and the other to the frame size of the cows. Small-framed cows with large body capacities have shorter CI and AFC.

  8. Short-term effects of meteorological factors on hand, foot and mouth disease among children in Shenzhen, China: Non-linearity, threshold and interaction.

    PubMed

    Zhang, Zhen; Xie, Xu; Chen, Xiliang; Li, Yuan; Lu, Yan; Mei, Shujiang; Liao, Yuxue; Lin, Hualiang

    2016-01-01

    Various meteorological factors have been associated with hand, foot and mouth disease (HFMD) among children; however, fewer studies have examined the non-linearity of and interaction among the meteorological factors. A generalized additive model with a log link allowing Poisson auto-regression and over-dispersion was applied to investigate the short-term effects of daily meteorological factors on childhood HFMD with adjustment for potential confounding factors. We found positive effects of mean temperature and wind speed; the excess relative risk (ERR) was 2.75% (95% CI: 1.98%, 3.53%) for a one degree increase in daily mean temperature on lag day 6, and 3.93% (95% CI: 2.16% to 5.73%) for a 1 m/s increase in wind speed on lag day 3. We found a non-linear effect of relative humidity with a low threshold at 45% and a high threshold at 85%, within which the effect was positive; the ERR was 1.06% (95% CI: 0.85% to 1.27%) for a 1 percent increase in relative humidity on lag day 5. No significant effect was observed for rainfall or sunshine duration. For the interactive effects, we found a weak additive interaction between mean temperature and relative humidity, and slightly antagonistic interactions between mean temperature and wind speed, and between relative humidity and wind speed in the additive models, but the interactions were not statistically significant. This study suggests that mean temperature, relative humidity and wind speed might be risk factors for childhood HFMD in Shenzhen, and the interaction analysis indicates that these meteorological factors might have played their roles individually. Copyright © 2015 Elsevier B.V. All rights reserved.
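
    The excess relative risks quoted above follow from fitted log-linear coefficients via ERR = (e^β − 1) × 100; a minimal sketch with a hypothetical coefficient chosen to reproduce the temperature estimate:

```python
# Converting a log-linear (Poisson) regression coefficient into an
# excess relative risk (ERR) per unit increase in the exposure.
# beta is a hypothetical illustration, not a value from the paper.
import math

beta = 0.02713                    # log relative risk per 1 degree C
err = (math.exp(beta) - 1) * 100  # percent excess risk per degree
print(round(err, 2))
```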

  9. A linearized Euler analysis of unsteady flows in turbomachinery

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.; Crawley, Edward F.

    1987-01-01

    A method for calculating unsteady flows in cascades is presented. The model, which is based on the linearized unsteady Euler equations, accounts for blade loading, shock motion, wake motion, and blade geometry. The mean flow through the cascade is determined by solving the full nonlinear Euler equations. Assuming the unsteadiness in the flow is small, the Euler equations are linearized about the mean flow to obtain a set of linear variable-coefficient equations which describe the small-amplitude, harmonic motion of the flow. These equations are discretized on a computational grid via a finite volume operator and solved directly subject to an appropriate set of linearized boundary conditions. The steady flow, which is calculated prior to the unsteady flow, is found via a Newton iteration procedure. An important feature of the analysis is the use of shock fitting to model steady and unsteady shocks. Use of the Euler equations with the unsteady Rankine-Hugoniot shock jump conditions correctly models the generation of steady and unsteady entropy and vorticity at shocks. In particular, the low-frequency shock displacement is correctly predicted. Results of this method are presented for a variety of test cases. Predicted unsteady transonic flows in channels are compared to full nonlinear Euler solutions obtained using time-accurate, time-marching methods. The agreement between the two methods is excellent for small to moderate levels of flow unsteadiness. The method is also used to predict unsteady flows in cascades due to blade motion (flutter problem) and incoming disturbances (gust response problem).

  10. A primer for biomedical scientists on how to execute model II linear regression analysis.

    PubMed

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
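
    A minimal sketch of the ordinary least products (reduced major axis) computation the author recommends, on made-up data: the slope is sign(r) · SD(y)/SD(x) and the line passes through the sample means. (This reproduces the point estimates only; the bootstrapped 95% CI discussed in the record requires a dedicated tool such as smatr.)

```python
# Ordinary least products (Model II) regression point estimates.
# Data below are invented for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.3, 2.9, 4.2, 4.8])

r = np.corrcoef(x, y)[0, 1]
slope = np.sign(r) * (np.std(y, ddof=1) / np.std(x, ddof=1))
intercept = y.mean() - slope * x.mean()
print(round(slope, 3), round(intercept, 2))
```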

  11. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  12. Estimation of the behavior factor of existing RC-MRF buildings

    NASA Astrophysics Data System (ADS)

    Vona, Marco; Mastroberti, Monica

    2018-01-01

    In recent years, several research groups have studied a new generation of analysis methods for seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, for existing buildings it should be highlighted that, due to the low level of knowledge about them, linear elastic analysis is often the only analysis method allowed. The same codes (such as NTC2008, EC8) consider the linear dynamic analysis with behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subject to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor or q factor in some codes) is used to reduce the elastic spectrum ordinates or the forces obtained from a linear analysis in order to take into account the non-linear structural capacities. Behavior factors should be defined based on the several parameters that influence the seismic nonlinear capacity, such as mechanical material characteristics, structural system, irregularity and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, actual behavior factor values coherent with force-based seismic safety assessment procedures have been proposed and compared with the values reported in the Italian seismic code, NTC08.

  13. Meshless analysis of shear deformable shells: the linear model

    NASA Astrophysics Data System (ADS)

    Costa, Jorge C.; Tiago, Carlos M.; Pimenta, Paulo M.

    2013-10-01

    This work develops a kinematically linear shell model departing from a consistent nonlinear theory. The initial geometry is mapped from a flat reference configuration by a stress-free finite deformation, after which, the actual shell motion takes place. The model maintains the features of a complete stress-resultant theory with Reissner-Mindlin kinematics based on an inextensible director. A hybrid displacement variational formulation is presented, where the domain displacements and kinematic boundary reactions are independently approximated. The resort to a flat reference configuration allows the discretization using 2-D Multiple Fixed Least-Squares (MFLS) on the domain. The consistent definition of stress resultants and consequent plane stress assumption led to a neat formulation for the analysis of shells. The consistent linear approximation, combined with MFLS, made possible efficient computations with a desired continuity degree, leading to smooth results for the displacement, strain and stress fields, as shown by several numerical examples.

  14. Determining the Number of Factors in P-Technique Factor Analysis

    ERIC Educational Resources Information Center

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, the question remains of how these methods perform in within-subjects P-technique factor analysis. A…

  15. Analysis of Linear Antibody Epitopes on Factor H and CFHR1 Using Sera of Patients with Autoimmune Atypical Hemolytic Uremic Syndrome.

    PubMed

    Trojnár, Eszter; Józsi, Mihály; Uray, Katalin; Csuka, Dorottya; Szilágyi, Ágnes; Milosevic, Danko; Stojanović, Vesna D; Spasojević, Brankica; Rusai, Krisztina; Müller, Thomas; Arbeiter, Klaus; Kelen, Kata; Szabó, Attila J; Reusz, György S; Hyvärinen, Satu; Jokiranta, T Sakari; Prohászka, Zoltán

    2017-01-01

    In autoimmune atypical hemolytic uremic syndrome (aHUS), the complement regulator factor H (FH) is blocked by FH autoantibodies, while 90% of the patients carry a homozygous deletion of its homolog complement FH-related protein 1 (CFHR1). The functional consequence of FH-blockade is widely established; however, the molecular basis of autoantibody binding and the role of CFHR1 deficiency in disease pathogenesis are still unknown. We performed epitope mapping of FH to provide structural insight in the autoantibody recruitment on FH and potentially CFHR1. Eight anti-FH positive aHUS patients were enrolled in this study. With overlapping synthetic FH and CFHR1 peptides, we located the amino acids (aa) involved in binding of acute and convalescence stage autoantibodies. We confirmed the location of the mapped epitopes using recombinant FH domains 19-20 that carried single-aa substitutions at the suspected antibody binding sites in three of our patients. Location of the linear epitopes and the introduced point mutations was visualized using crystal structures of the corresponding domains of FH and CFHR1. We identified three linear epitopes on FH (aa1157-1171; aa1177-1191; and aa1207-1226) and one on CFHR1 (aa276-290) that are recognized both in the acute and convalescence stages of aHUS. We observed a similar extent of autoantibody binding to the aHUS-specific epitope aa1177-1191 on FH and aa276-290 on CFHR1, despite seven of our patients being deficient for CFHR1. Epitope mapping with the domain constructs validated the location of the linear epitopes on FH with a distinct autoantibody binding motif within aa1183-1198 in line with published observations. According to the results, the linear epitopes we identified are located close to each other on the crystal structure of FH domains 19-20. This tertiary configuration contains the amino acids reported to be involved in C3b and sialic acid binding on the regulator, which may explain the functional deficiency of FH in the…

  16. Preoperative factors affecting cost and length of stay for isolated off-pump coronary artery bypass grafting: hierarchical linear model analysis.

    PubMed

    Shinjo, Daisuke; Fushimi, Kiyohide

    2015-11-17

    To determine the effect of preoperative patient and hospital factors on resource use, cost and length of stay (LOS) among patients undergoing off-pump coronary artery bypass grafting (OPCAB). Observational retrospective study. Data from the Japanese Administrative Database. Patients who underwent isolated, elective OPCAB between April 2011 and March 2012. The primary outcomes of this study were inpatient cost and LOS associated with OPCAB. A two-level hierarchical linear model was used to examine the effects of patient and hospital characteristics on inpatient costs and LOS. The independent variables were patient and hospital factors. We identified 2491 patients who underwent OPCAB at 268 hospitals. The mean cost of OPCAB was $40,665 ± $7,774, and the mean LOS was 23.4 ± 8.2 days. The study found that select patient factors and certain comorbidities were associated with a high cost and long LOS. A high hospital OPCAB volume was associated with a low cost (-6.6%, p=0.024) as well as a short LOS (-17.6%, p<0.001). The hospital OPCAB volume is associated with efficient resource use. The findings of the present study indicate the need to focus on hospital elective OPCAB volume in Japan in order to improve cost and LOS. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  17. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.

  18. Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avron, Haim; Ng, Esmond G.; Toledo, Sivan

    2008-03-21

    We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ‖Ax − b‖₂. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ‖Ax − b‖₂, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
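    The preconditioning idea can be shown with a small dense numpy experiment; this is only a toy illustration of using the R factor of a perturbed matrix as a right preconditioner, not the paper's sparse construction, and appending identity rows is one illustrative choice of perturbation:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 20
# An ill-conditioned least-squares matrix (column scales span 6 decades)
A = rng.normal(size=(m, n)) @ np.diag(np.logspace(0, 6, n))

# Perturbed matrix A_hat: here, A with identity rows appended,
# so that R'R = A'A + I is safely nonsingular
A_hat = np.vstack([A, np.eye(n)])
R = np.linalg.qr(A_hat, mode='r')     # R factor only

# Right-preconditioned operator A R^{-1}: its singular values are
# sigma_i / sqrt(sigma_i^2 + 1), so its condition number is near 1
M = A @ np.linalg.inv(R)
condA, condM = np.linalg.cond(A), np.linalg.cond(M)
```

In practice one would keep R sparse and apply it via triangular solves inside LSQR rather than forming the explicit inverse.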

  19. Triacylglycerol stereospecific analysis and linear discriminant analysis for milk speciation.

    PubMed

    Blasi, Francesca; Lombardi, Germana; Damiani, Pietro; Simonetti, Maria Stella; Giua, Laura; Cossignani, Lina

    2013-05-01

    Product authenticity is an important topic in the dairy sector. Dairy products sold for public consumption must be accurately labelled in accordance with the milk species they contain. Linear discriminant analysis (LDA), a common chemometric procedure, has been applied to fatty acid % composition to classify pure milk samples (cow, ewe, buffalo, donkey, goat). All original grouped cases were correctly classified, while 90% of cross-validated grouped cases were correctly classified. Another objective of this research was the characterisation of cow-ewe milk mixtures in order to reveal a common fraud in the dairy field, that is, the addition of cow milk to ewe milk. Stereospecific analysis of triacylglycerols (TAG), a method based on chemical-enzymatic procedures coupled with chromatographic techniques, was carried out to detect fraudulent milk additions, in particular 1, 3 and 5% cow milk added to ewe milk. When only TAG composition data were used for the elaboration, 75% of original grouped cases were correctly classified, while all samples were correctly classified when both total and intrapositional TAG data were used. The cross-validation results were also better when TAG stereospecific analysis data were used as LDA variables. In particular, 100% of cross-validated grouped cases were obtained when 5% cow milk mixtures were considered.
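    The discriminant step can be sketched with a minimal two-class Fisher discriminant on synthetic data; the "fatty acid" numbers below are made up for illustration and the study itself used five species with cross-validation, so this is only the core of the technique:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher discriminant direction: w = Sw^{-1} (mu1 - mu0),
    where Sw is the pooled within-class scatter matrix."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, m1 - m0)

# Synthetic "fatty acid %" profiles for two milk species (invented values)
rng = np.random.default_rng(2)
cow = rng.normal([30.0, 2.5, 11.0], 0.8, size=(40, 3))
ewe = rng.normal([26.0, 4.5, 10.0], 0.8, size=(40, 3))
w = fisher_lda_direction(cow, ewe)

# Classify by projecting onto w and thresholding at the midpoint of
# the two class-mean projections
thr = (cow @ w).mean() / 2 + (ewe @ w).mean() / 2
accuracy = (np.mean(cow @ w < thr) + np.mean(ewe @ w >= thr)) / 2
```

With well-separated class means, the projected samples split cleanly at the midpoint, mirroring the "correctly classified grouped cases" reported above.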

  20. Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling.

    PubMed

    Beautemps, D; Badin, P; Bailly, G

    2001-05-01

    The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition where some of the loading factors were selected such as jaw and larynx position while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.

  1. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper generalized linear model (GLM) and credibility theory which are frequently used in nonlife insurance pricing are combined for reliability analysis. Using full credibility standard, GLM is associated with limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed by using one-year claim frequency data of a Turkish insurance company and results of credible risk classes are interpreted.

  2. [Relations between biomedical variables: mathematical analysis or linear algebra?].

    PubMed

    Hucher, M; Berlie, J; Brunet, M

    1977-01-01

    After briefly reviewing the structure of a model, the authors emphasize the two possible approaches to the relations linking its variables: the use of functions, which belongs to mathematical analysis, and the use of linear algebra, which benefits from the development and automation of matrix computation. They specify the respective merits of these methods, their limits and the requirements for their use, according to the kind of variables and data involved and the aim of the work, whether understanding phenomena or supporting decision making.

  3. Determination and analysis of non-linear index profiles in electron-beam-deposited MgO-Al2O3-ZrO2 ternary composite thin-film optical coatings

    NASA Astrophysics Data System (ADS)

    Sahoo, N. K.; Thakur, S.; Senthilkumar, M.; Das, N. C.

    2005-02-01

    Thickness-dependent index non-linearity in thin films has been a thought-provoking as well as intriguing topic in the field of optical coatings. The characterization and analysis of such inhomogeneous index profiles pose several degrees of challenges to thin-film researchers depending upon the availability of relevant experimental and process-monitoring-related information. In the present work, a variety of novel experimental non-linear index profiles have been observed in thin films of MgO-Al2O3-ZrO2 ternary composites in solid solution under various electron-beam deposition parameters. Analysis and derivation of these non-linear spectral index profiles have been carried out by an inverse-synthesis approach using a real-time optical monitoring signal and post-deposition transmittance and reflection spectra. Most of the non-linear index functions are observed to fit polynomial equations of order seven or eight very well. In this paper, the application of such a non-linear index function has also been demonstrated in designing electric-field-optimized high-damage-threshold multilayer coatings such as normal- and oblique-incidence edge filters and a broadband beam splitter for p-polarized light. Such designs can also advantageously maintain the microstructural stability of the multilayer structure due to the low stress factor of the non-linear ternary composite layers.

  4. Linear regression analysis of emissions factors when firing fossil fuels and biofuels in a commercial water-tube boiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharon Falcone Miller; Bruce G. Miller

    2007-12-15

    This paper compares the emissions factors for a suite of liquid biofuels (three animal fats, waste restaurant grease, pressed soybean oil, and a biodiesel produced from soybean oil) and four fossil fuels (i.e., natural gas, No. 2 fuel oil, No. 6 fuel oil, and pulverized coal) in Penn State's commercial water-tube boiler to assess their viability as fuels for green heat applications. The data were broken into two subsets, i.e., fossil fuels and biofuels. The regression model for the liquid biofuels (as a subset) did not perform well for all of the gases. In addition, the coefficients in the models showed the EPA method underestimating CO and NOx emissions. No relation could be studied for SO{sub 2} for the liquid biofuels as they contain no sulfur; however, the model showed a good relationship between the two methods for SO{sub 2} in the fossil fuels. AP-42 emissions factors for the fossil fuels were also compared to the mass balance emissions factors and EPA CFR Title 40 emissions factors. Overall, the AP-42 emissions factors for the fossil fuels did not compare well with the mass balance emissions factors or the EPA CFR Title 40 emissions factors. Regression analysis of the AP-42, EPA, and mass balance emissions factors for the fossil fuels showed a significant relationship only for CO{sub 2} and SO{sub 2}. However, the regression models underestimate the SO{sub 2} emissions by 33%. These tests illustrate the importance in performing material balances around boilers to obtain the most accurate emissions levels, especially when dealing with biofuels. The EPA emissions factors were very good at predicting the mass balance emissions factors for the fossil fuels and to a lesser degree the biofuels. While the AP-42 emissions factors and EPA CFR Title 40 emissions factors are easier to perform, especially in large, full-scale systems, this study illustrated the shortcomings of estimation techniques. 23 refs., 3 figs., 8 tabs.

  5. Direct use of linear time-domain aerodynamics in aeroservoelastic analysis: Aerodynamic model

    NASA Technical Reports Server (NTRS)

    Woods, J. A.; Gilbert, Michael G.

    1990-01-01

    The work presented here is the first part of a continuing effort to expand existing capabilities in aeroelasticity by developing the methodology which is necessary to utilize unsteady time-domain aerodynamics directly in aeroservoelastic design and analysis. The ultimate objective is to define a fully integrated state-space model of an aeroelastic vehicle's aerodynamics, structure and controls which may be used to efficiently determine the vehicle's aeroservoelastic stability. Here, the current status of developing a state-space model for linear or near-linear time-domain indicial aerodynamic forces is presented.

  6. Application of variational and Galerkin equations to linear and nonlinear finite element analysis

    NASA Technical Reports Server (NTRS)

    Yu, Y.-Y.

    1974-01-01

    The paper discusses the application of the variational equation to nonlinear finite element analysis. The problem of beam vibration with large deflection is considered. The variational equation is shown to be flexible in both the solution of a general problem and in the finite element formulation. Difficulties are shown to arise when Galerkin's equations are used in the consideration of the finite element formulation of two-dimensional linear elasticity and of the linear classical beam.

  7. Stabilization and robustness of non-linear unity-feedback system - Factorization approach

    NASA Technical Reports Server (NTRS)

    Desoer, C. A.; Kabuli, M. G.

    1988-01-01

    The paper is a self-contained discussion of a right factorization approach in the stability analysis of the nonlinear continuous-time or discrete-time, time-invariant or time-varying, well-posed unity-feedback system S1(P, C). It is shown that a well-posed stable feedback system S1(P, C) implies that P and C have right factorizations. In the case where C is stable, P has a normalized right-coprime factorization. The factorization approach is used in stabilization and simultaneous stabilization results.

  8. Linear modal stability analysis of bowed-strings.

    PubMed

    Debut, V; Antunes, J; Inácio, O

    2017-03-01

    Linearised models are often invoked as a starting point to study complex dynamical systems. Besides their attractive mathematical simplicity, they have a central role for determining the stability properties of static or dynamical states, and can often shed light on the influence of the control parameters on the system dynamical behaviour. While the bowed string dynamics has been thoroughly studied from a number of points of view, mainly by time-domain computer simulations, this paper proposes to explore its dynamical behaviour adopting a linear framework, linearising the friction force near an equilibrium state in steady sliding conditions, and using a modal representation of the string dynamics. Starting from the simplest idealisation of the friction force given by Coulomb's law with a velocity-dependent friction coefficient, the linearised modal equations of the bowed string are presented, and the dynamical changes of the system as a function of the bowing parameters are studied using linear stability analysis. From the computed complex eigenvalues and eigenvectors, several plots of the evolution of the modal frequencies, damping values, and modeshapes with the bowing parameters are produced, as well as stability charts for each system mode. By systematically exploring the influence of the parameters, this approach appears as a preliminary numerical characterisation of the bifurcations of the bowed string dynamics, with the advantage of being very simple compared to sophisticated numerical approaches which demand the regularisation of the nonlinear interaction force. To fix the idea about the potential of the proposed approach, the classic one-degree-of-freedom friction-excited oscillator is first considered, and then the case of the bowed string. Even if the actual stick-slip behaviour is rather far from the linear description adopted here, the results show that essential musical features of bowed string vibrations can be interpreted from this simple approach.
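    The one-degree-of-freedom friction-excited oscillator used above to fix ideas can be sketched directly. This is a generic linearisation about steady sliding, assuming a friction force N·μ(v) with local slope dμ/dv; the parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def eigenvalues(m, c, k, N, dmu_dv):
    """Eigenvalues of a 1-DOF oscillator linearised about steady sliding:
    m x'' + (c + N * dmu/dv) x' + k x = 0, in state-space form."""
    c_eff = c + N * dmu_dv          # friction slope adds (or removes) damping
    A = np.array([[0.0, 1.0],
                  [-k / m, -c_eff / m]])
    return np.linalg.eigvals(A)

# A weakening (negative-slope) friction curve can destabilise the equilibrium
stable = eigenvalues(1.0, 0.1, 100.0, 10.0, +0.05)
unstable = eigenvalues(1.0, 0.1, 100.0, 10.0, -0.05)
```

The sign of the real part of the eigenvalues is exactly the stability criterion the abstract describes: a negative friction-velocity slope strong enough to cancel the structural damping pushes the eigenvalues into the right half-plane.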

  9. Aerodynamic preliminary analysis system. Part 1: Theory. [linearized potential theory

    NASA Technical Reports Server (NTRS)

    Bonner, E.; Clever, W.; Dunn, K.

    1978-01-01

    A comprehensive aerodynamic analysis program based on linearized potential theory is described. The solution treats thickness and attitude problems at subsonic and supersonic speeds. Three dimensional configurations with or without jet flaps having multiple non-planar surfaces of arbitrary planform and open or closed slender bodies of non-circular contour may be analyzed. Longitudinal and lateral-directional static and rotary derivative solutions may be generated. The analysis was implemented on a time sharing system in conjunction with an input tablet digitizer and an interactive graphics input/output display and editing terminal to maximize its responsiveness to the preliminary analysis problem. Nominal case computation time of 45 CPU seconds on the CDC 175 for a 200 panel simulation indicates the program provides an efficient analysis for systematically performing various aerodynamic configuration tradeoff and evaluation studies.

  10. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    NASA Astrophysics Data System (ADS)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

    This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower dimensional equations for the stability and performance analysis of this type of system than what is currently available. In addition, new closed form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.
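    The Kronecker-product machinery behind such analyses can be illustrated with the standard second-moment stability criterion for i.i.d. jump linear systems (a textbook result, not the paper's lower-dimensional symmetric formulation): x_{k+1} = A_{θ_k} x_k is mean-square stable iff the spectral radius of Σ_i p_i (A_i ⊗ A_i) is below 1. The matrices below are an invented nominal/link-failure pair:

```python
import numpy as np

def ms_stable(As, ps):
    """Mean-square stability test for the i.i.d. jump linear system
    x_{k+1} = A_{theta_k} x_k with mode probabilities ps: the spectral
    radius of sum_i p_i (A_i kron A_i) must be strictly less than 1."""
    T = sum(p * np.kron(A, A) for A, p in zip(As, ps))
    return bool(np.max(np.abs(np.linalg.eigvals(T))) < 1.0)

A_nom = np.array([[0.5, 0.1], [0.0, 0.6]])   # nominal closed-loop dynamics
A_fail = np.array([[1.5, 0.2], [0.0, 1.2]])  # dynamics during a link failure

occasional = ms_stable([A_nom, A_fail], [0.95, 0.05])  # rare failures
frequent = ms_stable([A_nom, A_fail], [0.5, 0.5])      # failures half the time
```

Occasional excursions through the unstable failure mode can still leave the second moment contracting, while frequent failures do not; this is the kind of question the fault-tolerant flight control example addresses.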

  11. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis combining data from multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. Copyright © 2015 by the Genetics Society of America.
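    The likelihood-ratio idea underlying the LRT statistic can be sketched for the simplest case, a region-based test comparing nested Gaussian linear models; this is the classical LRT, not the paper's functional linear model machinery, and the toy genotype data are invented:

```python
import numpy as np
from scipy.stats import chi2

def lrt_nested_gaussian(X_full, X_null, y):
    """Likelihood-ratio test for nested Gaussian linear models.
    Statistic = n * log(RSS_null / RSS_full), compared to a chi-square
    with df = difference in number of model columns."""
    n = len(y)
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    stat = n * np.log(rss(X_null) / rss(X_full))
    df = X_full.shape[1] - X_null.shape[1]
    return stat, chi2.sf(stat, df)

# Toy region-based test: a trait depends on 2 of 5 variants in the region
rng = np.random.default_rng(3)
n = 400
G = rng.binomial(2, 0.3, size=(n, 5)).astype(float)  # genotype dosages 0/1/2
y = 0.4 * G[:, 0] - 0.3 * G[:, 3] + rng.normal(size=n)
X_null = np.ones((n, 1))            # intercept-only model
X_full = np.hstack([X_null, G])     # intercept + all variants in the region
stat, p = lrt_nested_gaussian(X_full, X_null, y)
```

Testing all variants in a region jointly, as here, is the basic shape of the region-based association tests the abstract compares against MetaSKAT.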

  12. Linear, multivariable robust control with a mu perspective

    NASA Technical Reports Server (NTRS)

    Packard, Andy; Doyle, John; Balas, Gary

    1993-01-01

    The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. The structured singular value analysis, coupled with approximate synthesis methods, make it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains make it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.

  13. Stress Induced in Periodontal Ligament under Orthodontic Loading (Part II): A Comparison of Linear Versus Non-Linear FEM Study.

    PubMed

    Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B

    2015-09-01

    Simulation of the periodontal ligament (PDL) using non-linear finite element method (FEM) analysis gives better insight into the biology of tooth movement. The stresses in the PDL were evaluated for intrusion and lingual root torque using non-linear properties. A three-dimensional (3D) FEM model of the maxillary incisors was generated using SolidWorks modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software, and the results of linear and non-linear analyses were compared. For intrusive and lingual root torque movements, the distribution of stress over the PDL was within the range of optimal stress values proposed by Lee, but exceeded the force system given by Proffit as the optimum for orthodontic tooth movement with linear properties. When the same force load was applied in the non-linear analysis, stresses were higher than in the linear analysis and were beyond the optimal stress range proposed by Lee for both intrusive and lingual root torque. To obtain the same stress as in the linear analysis, iterations were performed using non-linear properties and the force level was reduced. This shows that the force level required for non-linear analysis is lower than that for linear analysis.

  14. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
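    A simple slope and its standard error, the quantities the computational tools in this article automate for MLR, follow directly from the fitted coefficients and their covariance matrix. As a hedged sketch (function name and data are ours), for the model y = b0 + b1·x + b2·z + b3·x·z, the slope of x at moderator value z0 is b1 + b3·z0:

```python
import numpy as np

def simple_slope(X, y, z0):
    """Simple slope of x at moderator value z0 for the interaction model
    y = b0 + b1*x + b2*z + b3*x*z (columns of X: 1, x, z, x*z),
    with its standard error from the coefficient covariance matrix."""
    b, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    sigma2 = res[0] / (n - p)                # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)    # Var(b-hat)
    slope = b[1] + b[3] * z0
    se = np.sqrt(cov[1, 1] + 2 * z0 * cov[1, 3] + z0 ** 2 * cov[3, 3])
    return slope, se

rng = np.random.default_rng(5)
n = 500
x, z = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.2 * z + 0.8 * x * z + rng.normal(size=n)
X = np.column_stack([np.ones(n), x, z, x * z])
slope, se = simple_slope(X, y, 1.0)   # simple slope of x at z = 1
```

Sweeping z0 and marking where |slope|/se crosses a critical t value traces out the regions of significance discussed above.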

  15. Analysis of key factors influencing the evaporation performances of an oriented linear cutting copper fiber sintered felt

    NASA Astrophysics Data System (ADS)

    Pan, Minqiang; Zhong, Yujian

    2018-01-01

    Porous structures can effectively enhance heat transfer efficiency. A micro vaporizer using an oriented linear cutting copper fiber sintered felt is proposed in this work. Multiple long cutting copper fibers are first fabricated with a multi-tooth tool and then sintered together in parallel to form uniform-thickness metal fiber sintered felts with oriented microchannels. The temperature rise response and thermal conversion efficiency are experimentally investigated to evaluate the influences of porosity, surface structure, feed flow rate and input power on the evaporation characteristics. It is indicated that the temperature rise response of water is mainly affected by input power and feed flow rate. High input power and low feed flow rate give a better temperature rise response of water. Porosity, rather than surface structure, plays an important role in the temperature rise response of water at a relatively high input power. The thermal conversion efficiency is dominated by the input power and surface structure. The oriented linear cutting copper fiber sintered felts at three porosities show better thermal conversion efficiency than the oriented linear copper wire sintered felt when the input power is less than 115 W. All the sintered felts have almost the same thermal conversion performance at a high input power.

  16. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
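    As a minimal illustration of the least-squares mechanics reviewed above (not code from the article; the data are made up), the slope is the covariance of x and y divided by the variance of x, and the intercept forces the fitted line through the sample means:

    ```python
    # Least-squares fit of y = a + b*x, the core of simple linear regression.
    def least_squares(x, y):
        n = len(x)
        mean_x = sum(x) / n
        mean_y = sum(y) / n
        # Slope: covariance(x, y) divided by variance(x)
        b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
            / sum((xi - mean_x) ** 2 for xi in x)
        a = mean_y - b * mean_x  # intercept: line passes through the means
        return a, b

    a, b = least_squares([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
    ```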

  17. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    ERIC Educational Resources Information Center

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
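    A sketch of PCA exactly as the abstract frames it, via the Spectral Theorem applied to the symmetric covariance matrix (illustrative data, not from the article):

    ```python
    import numpy as np

    # PCA as an application of the Spectral Theorem: diagonalize the
    # symmetric covariance matrix and project onto its top eigenvectors.
    def pca(X, k):
        Xc = X - X.mean(axis=0)            # center each column
        cov = Xc.T @ Xc / (len(X) - 1)     # sample covariance matrix
        vals, vecs = np.linalg.eigh(cov)   # eigh: symmetric eigensolver
        order = np.argsort(vals)[::-1]     # sort by descending variance
        return Xc @ vecs[:, order[:k]], vals[order]

    X = np.array([[2.0, 0.1], [4.0, 0.2], [6.0, 0.25], [8.0, 0.4]])
    scores, variances = pca(X, 1)
    ```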

  18. Spherically symmetric analysis on open FLRW solution in non-linear massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Chien-I; Izumi, Keisuke; Chen, Pisin, E-mail: chienichiang@berkeley.edu, E-mail: izumi@phys.ntu.edu.tw, E-mail: chen@slac.stanford.edu

    2012-12-01

    We study non-linear massive gravity in the spherically symmetric context. Our main motivation is to investigate the effect of the helicity-0 mode, which remains elusive after analysis of cosmological perturbations around an open Friedmann-Lemaitre-Robertson-Walker (FLRW) universe. The non-linear form of the effective energy-momentum tensor stemming from the mass term is derived for the spherically symmetric case. Only in the special case where the area of the two-sphere does not deviate from that of the FLRW universe does the effective energy-momentum tensor become identical to that of a cosmological constant. This opens a window for discriminating non-linear massive gravity from general relativity (GR). Indeed, by further solving these spherically symmetric gravitational equations of motion in vacuum to linear order, we obtain a solution with an arbitrary time-dependent parameter. In GR, this parameter is a constant and corresponds to the mass of a star. Our result means that Birkhoff's theorem no longer holds in non-linear massive gravity and suggests that energy can probably be emitted superluminally (with infinite speed) on the self-accelerating background by the helicity-0 mode, which could be a potential plague of this theory.

  19. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    PubMed

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  20. Linear analysis near a steady-state of biochemical networks: control analysis, correlation metrics and circuit theory.

    PubMed

    Heuett, William J; Beard, Daniel A; Qian, Hong

    2008-05-15

    Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady-state can be decomposed as A = SR^T, where R and S are directly related to the elasticity-coefficient matrix for the fluxes and chemical potentials in MCA, respectively; the control-coefficients for the fluxes and chemical potentials can be written in terms of R^T B S and S^T B S respectively, where the matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^(At) plays a central role in CMC. One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether metabolic steady-state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with smallest biochemical conductances and largest flux control-coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA).

  1. The association between meteorological factors and road traffic injuries: a case analysis from Shantou city, China

    PubMed Central

    Gao, Jinghong; Chen, Xiaojun; Woodward, Alistair; Liu, Xiaobo; Wu, Haixia; Lu, Yaogui; Li, Liping; Liu, Qiyong

    2016-01-01

    Few studies have examined the associations of meteorological factors with road traffic injuries (RTIs). The purpose of the present study was to quantify the contributions of meteorological factors to RTI cases treated at a tertiary level hospital in Shantou city, China. A time-series diagram was employed to illustrate the time trends and seasonal variation of RTIs, and correlation analysis and multiple linear regression analysis were conducted to investigate the relationships between meteorological parameters and RTIs. RTIs followed a seasonal pattern, with more cases occurring during summer and winter months. RTIs were positively correlated with temperature and sunshine duration, and negatively associated with wind speed. Temperature, sunshine hours and wind speed were included in the final linear model with regression coefficients of 0.65 (t = 2.36, P = 0.019), 2.23 (t = 2.72, P = 0.007) and −27.66 (t = −5.67, P < 0.001), respectively, accounting for 19.93% of the total variation in RTI cases. The findings help to better understand the associations between meteorological factors and RTIs, and may contribute to the development and implementation of a regional-level, evidence-based, weather-responsive traffic management system in the future. PMID:27853316
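    The final model described is an ordinary least-squares regression of RTI counts on three weather variables. A hedged sketch with synthetic data (the values below are simulated from the published coefficients purely for illustration, so recovering them is by construction, not a reanalysis):

    ```python
    import numpy as np

    # Multiple linear regression via ordinary least squares, analogous to
    # regressing RTI counts on temperature, sunshine hours and wind speed.
    rng = np.random.default_rng(0)
    temp = rng.uniform(10, 35, 100)   # synthetic temperature (deg C)
    sun = rng.uniform(0, 12, 100)     # synthetic sunshine hours
    wind = rng.uniform(0, 10, 100)    # synthetic wind speed
    rti = 50 + 0.65 * temp + 2.23 * sun - 27.66 * wind + rng.normal(0, 1, 100)

    X = np.column_stack([np.ones(100), temp, sun, wind])  # design matrix
    coef, *_ = np.linalg.lstsq(X, rti, rcond=None)        # OLS solution
    ```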

  2. Gain optimization with non-linear controls

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Kandadai, R. D.

    1984-01-01

    An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application in this paper is the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.

  3. Application of factor analysis of infrared spectra for quantitative determination of beta-tricalcium phosphate in calcium hydroxylapatite.

    PubMed

    Arsenyev, P A; Trezvov, V V; Saratovskaya, N V

    1997-01-01

    This work presents a method for determining the phase composition of calcium hydroxylapatite from its infrared spectrum. The method applies factor analysis to the spectral data of a calibration set of samples to determine the minimal number of factors required to reproduce the spectra within experimental error. Multiple linear regression is then applied to establish the correlation between the factor scores of the calibration standards and their properties. The regression equations can be used to predict the property value of an unknown sample. A regression model was built for the determination of beta-tricalcium phosphate content in hydroxylapatite, and the quality of the model was evaluated statistically. Applying factor analysis to the spectral data increases the accuracy of beta-tricalcium phosphate determination and extends the range of determination toward lower concentrations. Reproducibility of the results is retained.
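    The pipeline described, factor scores extracted from spectra followed by multiple linear regression, is essentially principal component regression. A generic sketch on synthetic spectra (not the paper's data or calibration set; all names and values are illustrative):

    ```python
    import numpy as np

    # Principal component regression: compress spectra into factor scores,
    # then fit a linear model from the scores to the property of interest.
    rng = np.random.default_rng(1)
    n_samples, n_channels = 20, 50
    conc = rng.uniform(0, 10, n_samples)          # stand-in "analyte content"
    basis = rng.normal(size=n_channels)           # one spectral component
    spectra = np.outer(conc, basis) + rng.normal(0, 0.01, (n_samples, n_channels))

    centered = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ Vt[:2].T                  # two factor scores per sample
    X = np.column_stack([np.ones(n_samples), scores])
    coef, *_ = np.linalg.lstsq(X, conc, rcond=None)
    pred = X @ coef                               # predicted property values
    ```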

  4. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the changes in PACS measurements with tilt value show no significant correlation (p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative lengths and precise analysis of postoperative improvements through 3D analysis are possible, which is helpful for symmetry correction in facial bone surgery.

  5. Linear Covariance Analysis For Proximity Operations Around Asteroid 2008 EV5

    NASA Technical Reports Server (NTRS)

    Wright, Cinnamon A.; Bhatt, Sagar; Woffinden, David; Strube, Matthew; D'Souza, Chris

    2015-01-01

    The NASA initiative to collect an asteroid, the Asteroid Robotic Redirect Mission (ARRM), is currently investigating the option of retrieving a boulder from an asteroid, demonstrating planetary defense with an enhanced gravity tractor technique, and returning it to a lunar orbit. Techniques for accomplishing this are being investigated by the Satellite Servicing Capabilities Office (SSCO) at NASA GSFC in collaboration with JPL, NASA JSC, LaRC, and Draper Laboratory, Inc. Two critical phases of the mission are the descent to the boulder and the Enhanced Gravity Tractor demonstration. A linear covariance analysis is done for these phases to assess the feasibility of these concepts with the proposed design of the sensor and actuator suite of the Asteroid Redirect Vehicle (ARV). The sensor suite for this analysis includes a wide field of view camera, LiDAR, and an IMU. The proposed asteroid of interest is currently the C-type asteroid 2008 EV5, a carbonaceous chondrite that is of high interest to the scientific community. This paper presents an overview of the linear covariance analysis techniques and simulation tool, provides sensor and actuator models, and addresses the feasibility of descending to the surface of the asteroid within allocated requirements as well as the possibility of maintaining a halo orbit to demonstrate the Enhanced Gravity Tractor technique.

  6. Dense grid sibling frames with linear phase filters

    NASA Astrophysics Data System (ADS)

    Abdelnour, Farras

    2013-09-01

    We introduce new 5-band dyadic sibling frames with a dense time-frequency grid. Given a lowpass filter satisfying certain conditions, the remaining filters are obtained using spectral factorization. The analysis and synthesis filterbanks share the same lowpass and bandpass filters but have different and oversampled highpass filters. This leads to wavelets approximating shift-invariance. The filters are FIR and have linear phase, and the resulting wavelets have vanishing moments. The proposed method leads to smooth limit functions with higher approximation order and computationally stable filterbanks.

  7. Rise time analysis of pulsed klystron-modulator for efficiency improvement of linear colliders

    NASA Astrophysics Data System (ADS)

    Oh, J. S.; Cho, M. H.; Namkung, W.; Chung, K. H.; Shintake, T.; Matsumoto, H.

    2000-04-01

    In linear accelerators, the periods during the rise and fall of a klystron-modulator pulse cannot be used to generate RF power. Thus, these periods need to be minimized to get high efficiency, especially in large-scale machines. In this paper, we present a simplified and generalized voltage rise time function of a pulsed modulator with a high-power klystron load using the equivalent circuit analysis method. The optimum pulse waveform is generated when this pulsed power system is tuned with a damping factor of ~0.85. The normalized rise time chart presented in this paper allows one to predict the rise time and pulse shape of the pulsed power system in general. The results can be summarized as follows: The large distributed capacitance in the pulse tank and the operating parameters Vs × Tp, where Vs is the load voltage and Tp is the pulse width, are the main factors determining the pulse rise time in the high-power RF system. With an RF pulse compression scheme, up to ±3% ripple of the modulator voltage is allowed without serious loss of compressor efficiency, which allows the modulator efficiency to be improved as well. The wiring inductance should be minimized to get the fastest rise time.
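    The role of the damping factor can be illustrated with the step response of a normalized second-order system (a generic sketch, not the paper's circuit model; it shows that a damping factor near 0.85 gives a fast rise with negligible overshoot):

    ```python
    import numpy as np

    # Step response of a normalized second-order system x'' + 2*zeta*x' + x = u,
    # integrated with semi-implicit Euler, to illustrate how the damping
    # factor shapes pulse rise and overshoot.
    def step_response(zeta, t_end=20.0, dt=1e-3):
        n = int(t_end / dt)
        x, v = 0.0, 0.0
        out = np.empty(n)
        for i in range(n):
            a = 1.0 - 2.0 * zeta * v - x   # unit step input
            v += a * dt
            x += v * dt
            out[i] = x
        return out

    resp = step_response(0.85)             # near-optimal damping per the paper
    overshoot = resp.max() - 1.0           # tiny for zeta ~ 0.85
    ```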

  8. An improved multiple linear regression and data analysis computer program package

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  9. Above threshold spectral dependence of linewidth enhancement factor, optical duration and linear chirp of quantum dot lasers.

    PubMed

    Kim, Jimyung; Delfyett, Peter J

    2009-12-07

    The spectral dependence of the linewidth enhancement factor above threshold is experimentally observed from a quantum dot Fabry-Pérot semiconductor laser. The linewidth enhancement factor is found to be reduced when the quantum dot laser operates approximately 10 nm offset to either side of the gain peak. It becomes significantly reduced on the anti-Stokes side as compared to the Stokes side. It is also found that the temporal duration of the optical pulses generated from quantum dot mode-locked lasers is shorter when the laser operates away from the gain peak. In addition, less linear chirp is impressed on the pulse train generated from the anti-Stokes side whereas the pulses generated from the gain peak and Stokes side possess a large linear chirp. These experimental results imply that enhanced performance characteristics of quantum dot lasers can be achieved by operating on the anti-Stokes side, approximately 10 nm away from the gain peak.

  10. Numerical linear analysis of the effects of diamagnetic and shear flow on ballooning modes

    NASA Astrophysics Data System (ADS)

    Yanqing, HUANG; Tianyang, XIA; Bin, GUI

    2018-04-01

    The linear analysis of the influence of diamagnetic effect and toroidal rotation at the edge of tokamak plasmas with BOUT++ is discussed in this paper. This analysis is done by solving the dispersion relation, which is calculated through the numerical integration of the terms with different physics. This method is able to reveal the contributions of the different terms to the total growth rate. The diamagnetic effect stabilizes the ideal ballooning modes through inhibiting the contribution of curvature. The toroidal rotation effect is also able to suppress the curvature-driving term, and the stronger shearing rate leads to a stronger stabilization effect. In addition, through linear analysis using the energy form, the curvature-driving term provides the free energy absorbed by the line-bending term, diamagnetic term and convective term.

  11. Population response to climate change: linear vs. non-linear modeling approaches.

    PubMed

    Ellis, Alicia M; Post, Eric

    2004-03-31

    Research on the ecological consequences of global climate change has elicited a growing interest in the use of time series analysis to investigate population dynamics in a changing climate. Here, we compare linear and non-linear models describing the contribution of climate to the density fluctuations of the population of wolves on Isle Royale, Michigan from 1959 to 1999. The non-linear self-exciting threshold autoregressive (SETAR) model revealed that, due to differences in the strength and nature of density dependence, relatively small and large populations may be differentially affected by future changes in climate. Both linear and non-linear models predict a decrease in the population of wolves with predicted changes in climate. Because specific predictions differed between linear and non-linear models, our study highlights the importance of using non-linear methods that allow the detection of non-linearity in the strength and nature of density dependence. Failure to adopt a non-linear approach to modelling population response to climate change, either exclusively or in addition to linear approaches, may compromise efforts to quantify ecological consequences of future warming.
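    A minimal SETAR sketch, assuming a single lag and two regimes with illustrative coefficients (not the fitted wolf-population model): the AR(1) dynamics switch when the lagged value crosses a threshold, which is how SETAR captures state-dependent density dependence.

    ```python
    # SETAR(2; 1, 1): the AR(1) intercept and coefficient switch when the
    # lagged value crosses a threshold. Parameters are illustrative only.
    def setar_step(x_prev, threshold=0.0, low=(0.2, 0.9), high=(0.5, 0.4)):
        a, b = low if x_prev <= threshold else high
        return a + b * x_prev

    x = [1.0]
    for _ in range(200):
        x.append(setar_step(x[-1]))
    ```

    Starting above the threshold, this trajectory stays in the upper regime and converges to its fixed point 0.5 / (1 - 0.4).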

  12. Frame sequences analysis technique of linear objects movement

    NASA Astrophysics Data System (ADS)

    Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.

    2017-12-01

    Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra. In doing so, quantitative analysis of the movement of the objects being studied becomes an important component of the research. This work discusses the analysis of the motion of linear objects in the two-dimensional plane. The complexity of this problem increases when the frame contains numerous objects whose images may overlap. This study uses a sequence of 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. It was required to determine the average velocity of object motion. This velocity was found as an average over 8-12 objects with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro with subsequent approximation of the data using the Hill equation.

  13. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theoretic system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise, etc., that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.

  14. Linear analysis near a steady-state of biochemical networks: Control analysis, correlation metrics and circuit theory

    PubMed Central

    Heuett, William J; Beard, Daniel A; Qian, Hong

    2008-01-01

    Background Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. Results In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady-state can be decomposed as A = SR^T, where R and S are directly related to the elasticity-coefficient matrix for the fluxes and chemical potentials in MCA, respectively; the control-coefficients for the fluxes and chemical potentials can be written in terms of R^T B S and S^T B S respectively, where the matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^(At) plays a central role in CMC. Conclusion One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether metabolic steady-state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with smallest biochemical conductances and largest flux control-coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA). PMID:18482450

  15. Classical linear-control analysis applied to business-cycle dynamics and stability

    NASA Technical Reports Server (NTRS)

    Wingrove, R. C.

    1983-01-01

    Linear control analysis is applied as an aid in understanding the fluctuations of business cycles in the past, and to examine monetary policies that might improve stabilization. The analysis shows how different policies change the frequency and damping of the economic system dynamics, and how they modify the amplitude of the fluctuations that are caused by random disturbances. Examples are used to show how policy feedbacks and policy lags can be incorporated, and how different monetary strategies for stabilization can be analytically compared. Representative numerical results are used to illustrate the main points.

  16. Dynamic analysis of space-related linear and non-linear structures

    NASA Technical Reports Server (NTRS)

    Bosela, Paul A.; Shaker, Francis J.; Fertis, Demeter G.

    1990-01-01

    In order to be cost effective, space structures must be extremely light weight, and subsequently, very flexible structures. The power system for Space Station Freedom is such a structure. Each array consists of a deployable truss mast and a split blanket of photo-voltaic solar collectors. The solar arrays are deployed in orbit, and the blanket is stretched into position as the mast is extended. Geometric stiffness due to the preload makes this an interesting non-linear problem. The space station will be subjected to various dynamic loads, during shuttle docking, solar tracking, attitude adjustment, etc. Accurate prediction of the natural frequencies and mode shapes of the space station components, including the solar arrays, is critical for determining the structural adequacy of the components, and for designing a dynamic control system. The process used in developing and verifying the finite element dynamic model of the photo-voltaic arrays is documented. Various problems were identified, such as grounding effects due to geometric stiffness, large displacement effects, and pseudo-stiffness (grounding) due to lack of required rigid body modes. Analysis techniques, such as development of rigorous solutions using continuum mechanics, finite element solution sequence altering, equivalent systems using a curvature basis, Craig-Bampton superelement approach, and modal ordering schemes were utilized. The grounding problems associated with the geometric stiffness are emphasized.

  17. Dynamic analysis of space-related linear and non-linear structures

    NASA Technical Reports Server (NTRS)

    Bosela, Paul A.; Shaker, Francis J.; Fertis, Demeter G.

    1990-01-01

    In order to be cost effective, space structures must be extremely light weight, and subsequently, very flexible structures. The power system for Space Station Freedom is such a structure. Each array consists of a deployable truss mast and a split blanket of photovoltaic solar collectors. The solar arrays are deployed in orbit, and the blanket is stretched into position as the mast is extended. Geometric stiffness due to the preload makes this an interesting non-linear problem. The space station will be subjected to various dynamic loads, during shuttle docking, solar tracking, attitude adjustment, etc. Accurate prediction of the natural frequencies and mode shapes of the space station components, including the solar arrays, is critical for determining the structural adequacy of the components, and for designing a dynamic control system. The process used in developing and verifying the finite element dynamic model of the photo-voltaic arrays is documented. Various problems were identified, such as grounding effects due to geometric stiffness, large displacement effects, and pseudo-stiffness (grounding) due to lack of required rigid body modes. Analysis techniques, such as development of rigorous solutions using continuum mechanics, finite element solution sequence altering, equivalent systems using a curvature basis, Craig-Bampton superelement approach, and modal ordering schemes were utilized. The grounding problems associated with the geometric stiffness are emphasized.

  18. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicates that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  19. Local linear regression for function learning: an analysis based on sample discrepancy.

    PubMed

    Cervellera, Cristiano; Macciò, Danilo

    2014-11-01

    Local linear regression models, a kind of nonparametric structure that locally performs a linear estimation of the target function, are analyzed in the context of empirical risk minimization (ERM) for function learning. The analysis is carried out with emphasis on geometric properties of the available data. In particular, the discrepancy of the observation points used both to build the local regression models and to compute the empirical risk is considered. This makes it possible to treat in the same way the case in which the samples come from a random external source and the case in which the input space can be freely explored. Both consistency of the ERM procedure and approximating capabilities of the estimator are analyzed, proving conditions that ensure convergence. Since the theoretical analysis shows that the estimation improves as the discrepancy of the observation points becomes smaller, low-discrepancy sequences, a family of sampling methods commonly employed for efficient numerical integration, are also analyzed. Simulation results involving two different examples of function learning are provided.
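    A sketch of the local linear estimator analyzed above: at each query point a weighted least-squares line is fitted, with a kernel down-weighting distant observation points (the Gaussian kernel and bandwidth below are illustrative assumptions, not choices from the paper):

    ```python
    import numpy as np

    # Local linear regression: fit a weighted least-squares line around the
    # query point x0; the local intercept is the estimate at x0.
    def local_linear(x_train, y_train, x0, bandwidth=0.05):
        w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)  # Gaussian kernel
        X = np.column_stack([np.ones_like(x_train), x_train - x0])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
        return beta[0]

    x_train = np.linspace(0, 1, 101)   # evenly spread observation points
    y_train = np.sin(2 * np.pi * x_train)
    est = local_linear(x_train, y_train, 0.25)  # true value: sin(pi/2) = 1
    ```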

  20. Schizophrenia with prominent catatonic features ('catatonic schizophrenia'). II. Factor analysis of the catatonic syndrome.

    PubMed

    Ungvari, Gabor S; Goggins, William; Leung, Siu-Kau; Gerevich, Jozsef

    2007-03-30

    Previous factor analyses of catatonia have yielded conflicting results for several reasons, including small and/or diagnostically heterogeneous samples and incomparability or lack of standardized assessment. This study examined the factor structure of catatonia in a large, diagnostically homogeneous sample of patients with chronic schizophrenia using standardized rating instruments. A random sample of 225 Chinese inpatients diagnosed with schizophrenia according to DSM-IV criteria were selected from the long-stay wards of a psychiatric hospital. They were assessed with a battery of rating scales measuring psychopathology, extrapyramidal motor status, and level of functioning. Catatonia was rated using the Bush-Francis Catatonia Rating Scale. Factor analysis using principal component analysis and Varimax rotation with Kaiser normalization was performed. Four factors were identified with eigenvalues of 3.27, 2.58, 2.28 and 1.88. The percentage of variance explained by each of the four factors was 15.9%, 12.0%, 11.8% and 10.2% respectively, and together they explained 49.9% of the total variance. Factor 1 loaded on "negative/withdrawn" phenomena, Factor 2 on "automatic" phenomena, Factor 3 on "repetitive/echo" phenomena and Factor 4 on "agitated/resistive" phenomena. In multivariate linear regression analysis, negative symptoms and akinesia were associated with 'negative' catatonic symptoms; antipsychotic doses and atypical antipsychotics with 'automatic' symptoms; length of current admission, severity of psychopathology and younger age at onset with 'repetitive' symptoms; and age, poor functioning and severity of psychopathology with 'agitated' catatonic symptom scores. The results support recent findings that four main factors underlie catatonic signs/symptoms in chronic schizophrenia.

  1. Predicting groundwater redox status on a regional scale using linear discriminant analysis.

    PubMed

    Close, M E; Abraham, P; Humphries, B; Lilburne, L; Cuthill, T; Wilson, S

    2016-08-01

    Reducing conditions are necessary for denitrification; thus, groundwater redox status can be used to identify subsurface zones where potentially significant nitrate reduction can occur. Groundwater chemistry in two contrasting regions of New Zealand was classified with respect to redox status and related to mappable factors, such as geology, topography and soil characteristics, using discriminant analysis. Redox assignment was carried out for water sampled from 568 and 2223 wells in the Waikato and Canterbury regions, respectively. For the Waikato region, 64% of wells sampled indicated oxic conditions in the water; 18% indicated reduced conditions; and 18% had attributes indicating both reducing and oxic conditions, termed "mixed". In Canterbury, 84% of wells indicated oxic conditions; 10% were mixed; and only 5% indicated reduced conditions. The analysis was performed over three well depth ranges: <25 m, 25 to 100 m, and >100 m. For both regions, the percentage of oxidised groundwater decreased with increasing well depth. Linear discriminant analysis was used to develop models to differentiate between the three redox states. Models were derived for each depth and region using 67% of the data, and then validated on the remaining 33%. The average agreement between predicted and measured redox status was 63% and 70% for the Waikato and Canterbury regions, respectively. The models were incorporated into GIS and the prediction of redox status was extended over each whole region, excluding mountainous land. This knowledge improves spatial prediction of reduced groundwater zones and therefore, when combined with groundwater flow paths, improves estimates of denitrification. Copyright © 2016 Elsevier B.V. All rights reserved.
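    A minimal NumPy sketch of the workflow in this abstract: fit linear discriminant functions on 67% of the data and validate on the remaining 33%. The two predictors and three classes below are synthetic stand-ins (the labels mimic the oxic/mixed/reduced assignment); none of the values come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for mappable predictors (e.g. well depth, drainage index);
# classes: 0 = oxic, 1 = mixed, 2 = reduced.
means = np.array([[0.0, 0.0], [4.0, 4.0], [8.0, 0.0]])
X = np.vstack([m + rng.normal(size=(100, 2)) for m in means])
y = np.repeat([0, 1, 2], 100)

# 67% / 33% train/validation split, as in the study.
idx = rng.permutation(len(y))
cut = int(0.67 * len(y))
tr, te = idx[:cut], idx[cut:]

classes = np.unique(y)
mu = np.array([X[tr][y[tr] == c].mean(axis=0) for c in classes])
# Pooled within-class covariance (shared across classes => linear boundaries).
Sw = sum(np.cov(X[tr][y[tr] == c].T) * (np.sum(y[tr] == c) - 1)
         for c in classes) / (len(tr) - len(classes))
Sw_inv = np.linalg.inv(Sw)
priors = np.array([np.mean(y[tr] == c) for c in classes])

def lda_predict(Xnew):
    # Linear discriminant score: x' S⁻¹ mu_c - 0.5 mu_c' S⁻¹ mu_c + log prior_c
    scores = (Xnew @ Sw_inv @ mu.T
              - 0.5 * np.sum(mu @ Sw_inv * mu, axis=1)
              + np.log(priors))
    return classes[np.argmax(scores, axis=1)]

acc = np.mean(lda_predict(X[te]) == y[te])   # agreement on held-out wells
```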

  2. Factors affecting construction performance: exploratory factor analysis

    NASA Astrophysics Data System (ADS)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) applied to the collected data gave rise to 10 factors, covering the 57 items, that affect construction performance. The ten key performance factors are: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multidimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technology aspects. It is important to base a multidimensional performance evaluation framework on all key factors affecting the construction performance of a company, so that management can plan and implement an effective performance development plan that matches the mission and vision of the company.

  3. Multiple Linear Regression Analysis of Factors Affecting Real Property Price Index From Case Study Research In Istanbul/Turkey

    NASA Astrophysics Data System (ADS)

    Denli, H. H.; Koc, Z.

    2015-12-01

    Estimating real property values according to fixed standards is difficult to apply consistently across time and location. Regression analysis constructs mathematical models that describe or explain relationships that may exist between variables. The problem of identifying price differences among properties to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. Applied to real estate valuation, with properties presented in the current market with their characteristics and quantifiers, the method helps to identify the effective factors or variables in the formation of value. In this study, prices of housing for sale in Zeytinburnu, a district in Istanbul, are associated with their characteristics to derive a price index, based on information obtained from a real estate web page. The variables used for the analysis are age, size in m2, number of floors in the building, the floor on which the unit is located, and number of rooms. The price of the estate is the dependent variable, whereas the rest are independent variables. Prices from 60 real estates have been used for the analysis. Locations with the same price level have been identified and plotted on the map, and equivalence curves have been drawn delineating zones of equal value.
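    The hedonic setup described here reduces to ordinary least squares on the listed predictors. A sketch with synthetic listings follows; all coefficient values, units, and variable ranges are invented for illustration (the study's actual data came from a real estate web page).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60  # the study used prices from 60 listings

# Hypothetical predictors: age (yr), size (m2), floors in building,
# floor of the unit, number of rooms.
age    = rng.integers(0, 40, n)
size   = rng.uniform(50, 200, n)
floors = rng.integers(1, 12, n)
floor  = rng.integers(0, 12, n)
rooms  = rng.integers(1, 6, n)

# Invented "true" hedonic coefficients (intercept first) plus noise.
true_beta = np.array([250.0, -2.0, 3.5, 1.0, 0.5, 12.0])
X = np.column_stack([np.ones(n), age, size, floors, floor, rooms])
price = X @ true_beta + rng.normal(0, 5.0, n)

# Ordinary least squares: the fitted coefficients quantify each
# characteristic's contribution to price.
beta_hat, *_ = np.linalg.lstsq(X, price, rcond=None)
```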

  4. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955

  5. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
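    As a rough illustration of one of the multivariate test statistics named above, the Pillai-Bartlett trace for the joint association of several variants with several traits can be computed from the hypothesis and error SSCP matrices of nested regressions. This NumPy sketch uses simulated genotypes and traits, not the functional linear models of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 300, 4, 3   # subjects, variant predictors, quantitative traits

G = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # genotype dosages
B = np.zeros((p, q))
B[0, :] = 0.4                     # variant 1 is pleiotropic: affects all traits
Y = G @ B + rng.normal(size=(n, q))

def residual_sscp(X, Y):
    """Error sums-of-squares-and-cross-products after regressing Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    R = Y - X @ beta
    return R.T @ R

ones = np.ones((n, 1))
E_full = residual_sscp(np.hstack([ones, G]), Y)  # full model
E_red  = residual_sscp(ones, Y)                  # intercept-only model
H = E_red - E_full                               # hypothesis SSCP

# Pillai-Bartlett trace: tr(H (H+E)^-1), bounded above by s = min(p, q).
pillai = np.trace(H @ np.linalg.inv(H + E_full))
```

    In practice the trace is referred to its approximate F distribution; the sketch stops at the statistic itself.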

  6. Design and analysis of linear oscillatory single-phase permanent magnet generator for free-piston Stirling engine systems

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Choi, Jang-Young; Lee, Kyu-Seok; Lee, Sung-Ho

    2017-05-01

    This study focuses on the design and analysis of a linear oscillatory single-phase permanent magnet generator for free-piston Stirling engine (FPSE) systems. To design a linear oscillatory generator (LOG) suitable for FPSEs, we conducted electromagnetic analysis of LOGs with varying design parameters. Detent force analysis was then conducted using an assisted PM; using the assisted PM gives the advantage of exploiting the detent force for mechanical strength. To improve efficiency, we analyzed the eddy-current loss characteristics with respect to PM segmentation. Finally, experimental results were analyzed to confirm the predictions of the FEA.

  7. Extension Procedures for Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Nagy, Gabriel; Brunner, Martin; Lüdtke, Oliver; Greiff, Samuel

    2017-01-01

    We present factor extension procedures for confirmatory factor analysis that provide estimates of the relations of common and unique factors with external variables that do not undergo factor analysis. We present identification strategies that build upon restrictions of the pattern of correlations between unique factors and external variables. The…

  8. Consistent linearization of the element-independent corotational formulation for the structural analysis of general shells

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.

    1988-01-01

    A consistent linearization is provided for the element-independent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.

  9. Linear and non-linear regression analysis for the sorption kinetics of methylene blue onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-10-11

    Batch kinetic experiments were carried out for the sorption of methylene blue onto activated carbon. The experimental kinetics were fitted to pseudo-first-order and pseudo-second-order models by both linear and non-linear methods. Five different linearized types of the Ho pseudo-second-order expression are discussed. A comparison was made between the linear least-squares method and a trial-and-error non-linear method for estimating the pseudo-second-order rate parameters. The sorption process was found to follow both pseudo-first-order and pseudo-second-order kinetic models. The present investigation showed that it is inappropriate to use the type 1 and type 5 pseudo-second-order expressions, as proposed by Ho and by Blanchard et al. respectively, for predicting the kinetic rate constants and the initial sorption rate for the studied system. Three alternative linear expressions (type 2 to type 4) that better predict the initial sorption rate and kinetic rate constants for the studied system (methylene blue/activated carbon) were proposed. The linear method was found only to check the hypothesis rather than verify the kinetic model. Non-linear regression was found to be the more appropriate method for determining the rate parameters.
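    The contrast between a linearized fit and a trial-and-error non-linear fit can be illustrated with the type 1 pseudo-second-order form, t/qt = 1/(k qe^2) + t/qe. The parameter values below are invented for illustration; on noise-free data both routes recover them exactly, and the differences the paper reports only emerge once experimental error enters.

```python
import numpy as np

# Illustrative pseudo-second-order parameters (not from the paper).
qe_true, k_true = 250.0, 4e-4     # mg/g and g/(mg min)
t = np.linspace(5, 300, 25)       # contact time, min
qt = (k_true * qe_true**2 * t) / (1 + k_true * qe_true * t)

# Type 1 linearization: t/qt = 1/(k qe^2) + t/qe  ->  straight line in t.
slope, intercept = np.polyfit(t, t / qt, 1)
qe_lin = 1.0 / slope              # slope = 1/qe
k_lin = slope**2 / intercept      # intercept = 1/(k qe^2)

# Trial-and-error non-linear fit: minimize SSE over a parameter grid.
qe_grid = np.linspace(200.0, 300.0, 201)
k_grid = np.linspace(1e-4, 1e-3, 181)
best = min(((qe, k, np.sum((qt - k*qe**2*t / (1 + k*qe*t))**2))
            for qe in qe_grid for k in k_grid), key=lambda r: r[2])
qe_nl, k_nl = best[0], best[1]
```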

  10. Analysis of Nuclear Factor-κB (NF-κB) Essential Modulator (NEMO) Binding to Linear and Lysine-linked Ubiquitin Chains and Its Role in the Activation of NF-κB*

    PubMed Central

    Kensche, Tobias; Tokunaga, Fuminori; Ikeda, Fumiyo; Goto, Eiji; Iwai, Kazuhiro; Dikic, Ivan

    2012-01-01

    Nuclear factor-κB (NF-κB) essential modulator (NEMO), a component of the inhibitor of κB kinase (IKK) complex, controls NF-κB signaling by binding to ubiquitin chains. Structural studies of NEMO provided a rationale for the specific binding between the UBAN (ubiquitin binding in ABIN and NEMO) domain of NEMO and linear (Met-1-linked) di-ubiquitin chains. Full-length NEMO can also interact with Lys-11-, Lys-48-, and Lys-63-linked ubiquitin chains of varying length in cells. Here, we show that purified full-length NEMO binds preferentially to linear ubiquitin chains in competition with lysine-linked ubiquitin chains of defined length, including long Lys-63-linked deca-ubiquitins. Linear di-ubiquitins were sufficient to activate both the IKK complex in vitro and to trigger maximal NF-κB activation in cells. In TNFα-stimulated cells, NEMO chimeras engineered to bind exclusively to Lys-63-linked ubiquitin chains mediated partial NF-κB activation compared with cells expressing NEMO that binds to linear ubiquitin chains. We propose that NEMO functions as a high affinity receptor for linear ubiquitin chains and a low affinity receptor for long lysine-linked ubiquitin chains. This phenomenon could explain quantitatively distinct NF-κB activation patterns in response to numerous cell stimuli. PMID:22605335

  11. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  12. Identifying Plant Part Composition of Forest Logging Residue Using Infrared Spectral Data and Linear Discriminant Analysis

    PubMed Central

    Acquah, Gifty E.; Via, Brian K.; Billor, Nedret; Fasina, Oladiran O.; Eckhardt, Lori G.

    2016-01-01

    As new markets, technologies and economies evolve in the low carbon bioeconomy, forest logging residue, a largely untapped renewable resource, will play a vital role. The feedstock can, however, be variable depending on plant species and plant part component. This heterogeneity can influence the physical, chemical and thermochemical properties of the material, and thus the final yield and quality of products. Although it is challenging to control the compositional variability of a batch of feedstock, it is feasible to monitor this heterogeneity and make the necessary changes in process parameters. Such a system would be a first step towards optimization, quality assurance and cost-effectiveness of processes in the emerging biofuel/chemical industry. The objective of this study was therefore to qualitatively classify forest logging residue made up of different plant parts using both near infrared spectroscopy (NIRS) and Fourier transform infrared spectroscopy (FTIRS) together with linear discriminant analysis (LDA). Forest logging residue harvested from several Pinus taeda (loblolly pine) plantations in Alabama, USA, was classified into three plant part components: clean wood, wood and bark, and slash (i.e., limbs and foliage). Five-fold cross-validated linear discriminant functions had classification accuracies of over 96% for both NIRS and FTIRS based models. An extra factor/principal component (PC) was however needed to achieve this in FTIRS modeling. Analysis of factor loadings of both NIR and FTIR spectra showed that the statistically different amounts of cellulose in the three plant part components of logging residue contributed to their initial separation. This study demonstrated that NIR or FTIR spectroscopy coupled with PCA and LDA has the potential to be used as a high throughput tool in classifying the plant part makeup of a batch of forest logging residue feedstock. Thus, NIR/FTIR could be employed as a tool to rapidly probe/monitor the variability of forest

  13. Linear stability analysis of the three-dimensional thermally-driven ocean circulation: application to interdecadal oscillations

    NASA Astrophysics Data System (ADS)

    Huck, Thierry; Vallis, Geoffrey K.

    2001-08-01

    What can we learn from performing a linear stability analysis of the large-scale ocean circulation? Can we predict from the basic state the occurrence of interdecadal oscillations, such as might be found in a forward integration of the full equations of motion? If so, do the structure and period of the linearly unstable modes resemble those found in a forward integration? We pursue here a preliminary study of these questions for a case in idealized geometry, in which the full nonlinear behavior can also be explored through forward integrations. Specifically, we perform a three-dimensional linear stability analysis of the thermally-driven circulation of the planetary geostrophic equations. We examine the resulting eigenvalues and eigenfunctions, comparing them with the structure of the interdecadal oscillations found in the fully nonlinear model in various parameter regimes. We obtain a steady state by running the time-dependent, nonlinear model to equilibrium using restoring boundary conditions on surface temperature. If the surface heat fluxes are then diagnosed, and these values applied as constant flux boundary conditions, the nonlinear model switches into a state of perpetual, finite amplitude, interdecadal oscillations. We construct a linearized version of the model by empirically evaluating the tangent linear matrix at the steady state, under both restoring and constant-flux boundary conditions. An eigen-analysis shows there are no unstable eigenmodes of the linearized model with restoring conditions. In contrast, under constant flux conditions, we find a single unstable eigenmode that shows a striking resemblance to the fully-developed oscillations in terms of three-dimensional structure, period and growth rate. The mode may be damped through either surface restoring boundary conditions or sufficiently large horizontal tracer diffusion. The success of this simple numerical method in idealized geometry suggests applications in the study of the stability of
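    The step of "empirically evaluating the tangent linear matrix at the steady state" generalizes a finite-difference Jacobian. A toy sketch on a two-variable system with a known unstable oscillatory mode (not the ocean model itself; the system and its parameters are invented for illustration):

```python
import numpy as np

# Toy nonlinear system dx/dt = f(x) with a steady state at the origin.
# Its true Jacobian there has eigenvalues 0.1 +/- 1j: an unstable oscillation.
def f(x):
    u, v = x
    return np.array([0.1*u - v - u*(u**2 + v**2),
                     u + 0.1*v - v*(u**2 + v**2)])

def tangent_linear(f, x0, eps=1e-6):
    """Empirical tangent linear matrix: perturb each state variable in turn
    and difference the tendencies."""
    n = len(x0)
    A = np.zeros((n, n))
    f0 = f(x0)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx) - f0) / eps
    return A

A = tangent_linear(f, np.zeros(2))
eigvals = np.linalg.eigvals(A)
growth_rate = eigvals.real.max()          # > 0 => linearly unstable mode
period = 2*np.pi / abs(eigvals.imag).max()  # oscillation period of that mode
```

    In the paper's setting the state vector is the discretized temperature field, so the same perturb-and-difference loop is run once per model grid variable.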

  14. Enhanced linear-array photoacoustic beamforming using modified coherence factor.

    PubMed

    Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador

    2018-02-01

    Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, using the DAS beamformer leads to low-resolution images as well as high sidelobes due to the nondesired contribution of off-axis signals. Coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS algebra and propose the use of a delay-multiply-and-sum beamformer instead of the available DAS in the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF yields about 45% and 40% improvement over CF in terms of signal-to-noise ratio and full-width-half-maximum, respectively. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
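    The CF weight, and the idea of swapping its DAS numerator for a delay-multiply-and-sum term, can be sketched per pixel from the delayed aperture samples. This is a schematic reading of the formulas (real-valued samples, no normalization of the DMAS term), not the paper's implementation:

```python
import numpy as np

def coherence_factor(s):
    """Standard CF: |sum of delayed samples|^2 / (N * sum of squares)."""
    s = np.asarray(s, dtype=float)
    return (s.sum()**2) / (len(s) * np.sum(s**2))

def dmas(s):
    """Delay-multiply-and-sum: signed square roots of pairwise products (i < j)."""
    s = np.asarray(s, dtype=float)
    total = 0.0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            p = s[i] * s[j]
            total += np.sign(p) * np.sqrt(abs(p))
    return total

def modified_cf(s):
    """MCF sketch: replace the squared-DAS numerator with a squared-DMAS one."""
    s = np.asarray(s, dtype=float)
    return dmas(s)**2 / (len(s) * np.sum(s**2))

coherent = np.ones(16)                   # perfectly aligned on-axis echo
noisy = np.ones(16)
noisy[::2] *= -1                         # sign-alternating off-axis clutter
```

    A coherent wavefront gets the maximal CF weight of 1, while incoherent clutter is suppressed; the DMAS numerator penalizes sign disagreement across the aperture more aggressively.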

  15. Hybrid PV/diesel solar power system design using multi-level factor analysis optimization

    NASA Astrophysics Data System (ADS)

    Drake, Joshua P.

    Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was determined that there are several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.

  16. Complexity-reduced implementations of complete and null-space-based linear discriminant analysis.

    PubMed

    Lu, Gui-Fu; Zheng, Wenming

    2013-10-01

    Dimensionality reduction has become an important data preprocessing step in a lot of applications. Linear discriminant analysis (LDA) is one of the most well-known dimensionality reduction methods. However, the classical LDA cannot be used directly in the small sample size (SSS) problem, where the within-class scatter matrix is singular. In the past, many generalized LDA methods have been reported to address the SSS problem. Among these methods, complete linear discriminant analysis (CLDA) and null-space-based LDA (NLDA) provide good performance. The existing implementations of CLDA are computationally expensive. In this paper, we propose a new and fast implementation of CLDA, which is theoretically equivalent to the existing implementations while being the most efficient one. Since CLDA is an extension of null-space-based LDA (NLDA), our implementation of CLDA also provides a fast implementation of NLDA. Experiments on some real-world data sets demonstrate the effectiveness of our proposed new CLDA and NLDA algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
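    A compact NumPy sketch of the null-space idea underlying NLDA (and one stage of CLDA): in the small-sample-size regime, project onto the null space of the within-class scatter Sw, then maximize between-class scatter Sb inside it. This is the textbook eigendecomposition-based construction, not the fast algorithm proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Small-sample-size setting: dimension (50) exceeds sample count (3 x 10).
d, n_per, C = 50, 10, 3
centers = rng.normal(scale=5.0, size=(C, d))
X = np.vstack([c + rng.normal(size=(n_per, d)) for c in centers])
y = np.repeat(np.arange(C), n_per)

mean_all = X.mean(axis=0)
Sw = np.zeros((d, d))
Sb = np.zeros((d, d))
for c in range(C):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)                       # within-class scatter
    Sb += n_per * np.outer(mc - mean_all, mc - mean_all)  # between-class scatter

# Null space of Sw: eigenvectors with (numerically) zero eigenvalues.
w_vals, w_vecs = np.linalg.eigh(Sw)
null = w_vecs[:, w_vals < 1e-8 * w_vals.max()]

# Maximize between-class scatter inside that null space.
b_vals, b_vecs = np.linalg.eigh(null.T @ Sb @ null)
W = null @ b_vecs[:, ::-1][:, :C - 1]   # top C-1 null-space discriminants

Z = X @ W   # each class collapses to (nearly) a point, classes stay apart
```

    The expense the paper attacks is visible here: the eigendecomposition of the d x d matrix Sw dominates when d is large.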

  17. Focal spot motion of linear accelerators and its effect on portal image analysis.

    PubMed

    Sonke, Jan-Jakob; Brand, Bob; van Herk, Marcel

    2003-06-01

    The focal spot of a linear accelerator is often considered to have a fully stable position. In practice, however, the beam control loop of a linear accelerator needs to stabilize after the beam is turned on. As a result, some motion of the focal spot might occur during the start-up phase of irradiation. When acquiring portal images, this motion will affect the projected position of anatomy and field edges, especially when low exposures are used. In this paper, the motion of the focal spot and the effect of this motion on portal image analysis are quantified. A slightly tilted narrow slit phantom was placed at the isocenter of several linear accelerators and images were acquired (3.5 frames per second) by means of an amorphous silicon flat panel imager positioned approximately 0.7 m below the isocenter. The motion of the focal spot was determined by converting the tilted slit images to subpixel accurate line spread functions. The error in portal image analysis due to focal spot motion was estimated by a subtraction of the relative displacement of the projected slit from the relative displacement of the field edges. It was found that the motion of the focal spot depends on the control system and design of the accelerator. The shift of the focal spot at the start of irradiation ranges between 0.05-0.7 mm in the gun-target (GT) direction. In the left-right (AB) direction the shift is generally smaller. The resulting error in portal image analysis due to focal spot motion ranges between 0.05-1.1 mm for a dose corresponding to two monitor units (MUs). For 20 MUs, the effect of the focal spot motion reduces to 0.01-0.3 mm. The error in portal image analysis due to focal spot motion can be reduced by reducing the applied dose rate.

  18. Deep data analysis via physically constrained linear unmixing: universal framework, domain examples, and a community-wide platform.

    PubMed

    Kannan, R; Ievlev, A V; Laanait, N; Ziatdinov, M A; Vasudevan, R K; Jesse, S; Kalinin, S V

    2018-01-01

    Many spectral responses in materials science, physics, and chemistry experiments can be characterized as resulting from the superposition of a number of more basic individual spectra. In this context, unmixing is defined as the problem of determining the individual spectra, given measurements of multiple spectra that are spatially resolved across samples, as well as the determination of the corresponding abundance maps indicating the local weighting of each individual spectrum. Matrix factorization is a popular linear unmixing technique that considers that the mixture model between the individual spectra and the spatial maps is linear. Here, we present a tutorial paper targeted at domain scientists to introduce linear unmixing techniques, to facilitate greater understanding of spectroscopic imaging data. We detail a matrix factorization framework that can incorporate different domain information through various parameters of the matrix factorization method. We demonstrate many domain-specific examples to explain the expressivity of the matrix factorization framework and show how the appropriate use of domain-specific constraints such as non-negativity and sum-to-one abundance result in physically meaningful spectral decompositions that are more readily interpretable. Our aim is not only to explain the off-the-shelf available tools, but to add additional constraints when ready-made algorithms are unavailable for the task. All examples use the scalable open source implementation from https://github.com/ramkikannan/nmflibrary that can run from small laptops to supercomputers, creating a user-wide platform for rapid dissemination and adoption across scientific disciplines.
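    The linear mixture model and its factorization can be sketched with the classic Lee-Seung multiplicative updates, which preserve the non-negativity constraint the abstract highlights; this is a generic illustration on synthetic data, not the nmflibrary implementation referenced above:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic spectral-imaging data: 400 pixels x 64 channels, mixed linearly
# from 3 non-negative endmember spectra with sum-to-one abundances.
k, n_pix, n_ch = 3, 400, 64
S_true = rng.random((k, n_ch))              # endmember spectra
A_true = rng.dirichlet(np.ones(k), n_pix)   # abundance maps (rows sum to 1)
D = A_true @ S_true

# Lee-Seung multiplicative updates for D ~ A @ S with A, S >= 0:
# non-negativity is what makes the recovered components interpretable.
A = rng.random((n_pix, k))
S = rng.random((k, n_ch))
eps = 1e-12
err0 = np.linalg.norm(D - A @ S)
for _ in range(200):
    S *= (A.T @ D) / (A.T @ A @ S + eps)
    A *= (D @ S.T) / (A @ S @ S.T + eps)
err = np.linalg.norm(D - A @ S)

# A sum-to-one abundance constraint, as discussed in the paper, can be
# imposed by renormalizing the rows of A and rescaling S accordingly.
```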

  19. Sampling factors influencing accuracy of sperm kinematic analysis.

    PubMed

    Owen, D H; Katz, D F

    1993-01-01

    Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat and the time interval over which these measurements are taken. It was found that values of amplitude of lateral head displacement (ALH) and linearity (LIN) approach their actual values when five or more sample points per beat are taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double-digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. These findings
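    The sampling effect discussed above can be reproduced on an idealized trajectory: sampling the same sinusoidal path at 30 versus 60 frames per second shortens the measured curvilinear path and inflates LIN. All trajectory parameters below are illustrative, not values from the study:

```python
import numpy as np

# Idealized sperm-head trajectory: steady progression along x with a
# sinusoidal lateral beat.
beat_freq, vsl_true, amp = 10.0, 50.0, 4.0   # Hz, um/s, um (illustrative)
duration = 1.0                               # seconds analyzed

def track(fps):
    t = np.arange(0, duration, 1.0 / fps)
    return np.column_stack([vsl_true * t,
                            amp * np.sin(2 * np.pi * beat_freq * t)])

def kinematics(pts, fps):
    elapsed = (len(pts) - 1) / fps
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    vcl = steps.sum() / elapsed                        # curvilinear velocity
    vsl = np.linalg.norm(pts[-1] - pts[0]) / elapsed   # straight-line velocity
    return vcl, vsl, vsl / vcl                         # LIN = VSL / VCL

# 30 fps gives only 3 samples per 10 Hz beat; 60 fps gives 6.
vcl30, _, lin30 = kinematics(track(30), 30)
vcl60, _, lin60 = kinematics(track(60), 60)
```

    Because the coarser polyline cuts across the lateral oscillation, VCL is underestimated at 30 fps and LIN is correspondingly overestimated, matching the direction of the errors reported above.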

  20. Practical Session: Multiple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Three exercises are proposed to illustrate multiple linear regression. The first investigates the influence of several factors on atmospheric pollution; it was proposed by D. Chessel and A.B. Dufour at Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).

  1. Psychometric evaluation of the revised Illness Perception Questionnaire (IPQ-R) in cancer patients: confirmatory factor analysis and Rasch analysis.

    PubMed

    Ashley, Laura; Smith, Adam B; Keding, Ada; Jones, Helen; Velikova, Galina; Wright, Penny

    2013-12-01

    To provide new insights into the psychometrics of the revised Illness Perception Questionnaire (IPQ-R) in cancer patients. To undertake, for the first time using data from breast, colorectal and prostate cancer patients, a confirmatory factor analysis (CFA) to assess the validity of the IPQ-R's core seven-factor structure. Also, for the first time in any illness group, to undertake Rasch analysis to explore the extent to which the IPQ-R factors form unidimensional scales, with linear measurement properties and no Differential Item Functioning (DIF). Patients with potentially curable breast, colorectal or prostate cancer, within 6 months post-diagnosis, completed the IPQ-R online (N=531). CFA was conducted, including multi-sample analysis, and for each IPQ-R factor fit to the Rasch model was assessed by examining, amongst other things, item fit, DIF and unidimensionality. The CFA showed a moderate fit of the data to the IPQ-R model, and stability across diagnosis, although fit was significantly improved following the removal of selected items. All seven factors achieved fit to the Rasch model, and exhibited unidimensionality and minimal DIF, although in most cases this was after some item rescoring and/or deletion. In both analyses, IPQ-R items 12, 18 and 24 were indicated as misfitting and removed. Given the rigorous standard of Rasch measurement, and the generic nature of the IPQ-R, it stood up well to the demands of the Rasch model in this study. Importantly, the results show that with some relatively minor, pragmatic modifications the IPQ-R could possess Rasch-standard measurement in cancer patients. © 2013.

  2. Exploratory Bi-factor Analysis: The Oblique Case.

    PubMed

    Jennrich, Robert I; Bentler, Peter M

    2012-07-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (Psychometrika 47:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (Psychometrika 76:537-549, 2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bifactor rotation criterion designed to produce a rotated loading matrix that has an approximate bi-factor structure. Among other things this can be used as an aid in finding an explicit bi-factor structure for use in a confirmatory bi-factor analysis. They considered only orthogonal rotation. The purpose of this paper is to consider oblique rotation and to compare it to orthogonal rotation. Because there are many more oblique rotations of an initial loading matrix than orthogonal rotations, one expects the oblique results to approximate a bi-factor structure better than orthogonal rotations and this is indeed the case. A surprising result arises when oblique bi-factor rotation methods are applied to ideal data.
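As a point of reference for the orthogonal case, plain orthogonal rotation of an exploratory loading matrix can be sketched as follows. This is a generic varimax implementation on an invented loading matrix, not the bi-factor rotation criterion of Jennrich and Bentler:

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
    # Generic varimax rotation of a p x k loading matrix (orthogonal case).
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        tmp = Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0))
        u, s, vt = np.linalg.svd(L.T @ tmp)
        R = u @ vt                      # orthogonal update maximizing the criterion
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return L @ R

# Two-factor loadings with a simple structure, hidden by an orthogonal mix.
L0 = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
               [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
L_mixed = L0 @ Q                  # rotate away the simple structure
L_rot = varimax(L_mixed)          # varimax approximately recovers it
```

Because the rotation is orthogonal, row communalities are preserved exactly; oblique rotations relax this constraint, which is why they can approximate a bi-factor structure more closely.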

  3. Analysis of the factors creating consumer attributes of roasted beef steaks.

    PubMed

    Guzek, Dominika; Głąbska, Dominika; Gutkowska, Krystyna; Wierzbicki, Jerzy; Woźniak, Alicja; Wierzbicka, Agnieszka

    2015-03-01

    The aim of the study was to analyze the factors creating consumer attributes of roasted beef steaks from various types of animals. Eight cuts from 30 carcasses (characterized by various types of animal, conformation and fat class, rib fat thickness, and ossification score) were selected. Samples were prepared using the roasting method, and consumers rated the tenderness, juiciness, flavor, overall acceptability (rated on a 100-point scale), and satisfaction (rated from 2 to 5) for the analyzed samples. No influence of type of animal, fat class, conformation class or ossification score on the results of consumer analysis was observed. For all analyzed factors, the influence of cut on consumer analysis was observed (the highest values of all consumer attributes were observed for tenderloin - for juiciness significantly higher than for other cuts; for tenderness, flavor and MQ4 comparable only with rump (RMP231); while for overall acceptability and satisfaction - with both rump cuts). The relationship between rib fat thickness and consumer attributes of roasted beef was not linear, but an influence was observed - the highest values of consumer attributes were found at 13 mm rib fat thickness. © 2014 Japanese Society of Animal Science.

  4. Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects

    DTIC Science & Technology

    2017-02-22

    AFRL-AFOSR-UK-TR-2017-0023: Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects. Marco Martorella. Grant number: FA9550-14-1-0183.

  5. A hybrid-stress finite element approach for stress and vibration analysis in linear anisotropic elasticity

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Fly, Gerald W.; Mahadevan, L.

    1987-01-01

    A hybrid stress finite element method is developed for accurate stress and vibration analysis of problems in linear anisotropic elasticity. A modified form of the Hellinger-Reissner principle is formulated for dynamic analysis, and an algorithm for the determination of the anisotropic elastic and compliance constants from experimental data is developed. These schemes were implemented in a finite element program for static and dynamic analysis of linear anisotropic two-dimensional elasticity problems. Specific numerical examples are considered to verify the accuracy of the hybrid stress approach and compare it with that of the standard displacement method, especially for highly anisotropic materials. It is shown that the hybrid stress approach gives much better results than the displacement method. Preliminary work on extensions of this method to three-dimensional elasticity is discussed, and the stress shape functions necessary for this extension are included.

  6. A Linear Variable-[theta] Model for Measuring Individual Differences in Response Precision

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2011-01-01

    Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…

  7. Finite Element Analysis and Optimization of Flexure Bearing for Linear Motor Compressor

    NASA Astrophysics Data System (ADS)

    Khot, Maruti; Gawali, Bajirao

    Nowadays linear motor compressors are commonly used in miniature cryocoolers instead of rotary compressors, because rotary compressors apply large radial forces to the piston, which provide no useful work, cause a large amount of wear and usually require lubrication. Recent trends favour flexure-supported configurations for long life. The present work aims at the design and geometrical optimization of flexure bearings using finite element analysis, and at the development of design charts for selection purposes. The work also covers the manufacture of flexures from different materials and the experimental validation of the finite element analysis results.

  8. voom: precision weights unlock linear model analysis tools for RNA-seq read counts

    PubMed Central

    2014-01-01

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods. PMID:24485249

  9. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    PubMed

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
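The two ingredients described above, a log-CPM transform of the counts and per-observation precision weights entering a linear fit, can be sketched in a toy form. This is not the limma/voom implementation (which estimates the weights from a fitted mean-variance trend); the counts, design, and weights below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(lam=[[20], [200], [2000]] * np.ones((3, 6)))  # 3 genes x 6 samples
lib_size = counts.sum(axis=0)

# voom-style log2 counts-per-million with a 0.5 offset to avoid log(0)
logcpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)

# Weighted least squares: higher-precision observations get more say.
# The two-group design and the weights here are hypothetical; in voom
# the weights come from the estimated mean-variance trend.
design = np.column_stack([np.ones(6), [0, 0, 0, 1, 1, 1]])
w = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
sw = np.sqrt(w)
y = logcpm[0]
beta, *_ = np.linalg.lstsq(design * sw[:, None], y * sw, rcond=None)
print(logcpm.shape, beta)
```

Scaling both sides of the design by the square root of the weights is the standard reduction of weighted to ordinary least squares, which is what lets microarray-style linear modeling machinery be reused on count data.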

  10. Factors affecting the HIV/AIDS epidemic: an ecological analysis of global data.

    PubMed

    Mondal, M N I; Shitan, M

    2013-06-01

    Worldwide, the prevalence of Human Immunodeficiency Virus (HIV)/Acquired Immune Deficiency Syndrome (AIDS) has become a stumbling block to the progress of human civilization and is a huge concern for people everywhere. The aim was to determine the social and health factors that contribute to the size of the HIV epidemic globally. Country-level indicators of HIV prevalence rate, contraceptive prevalence rate, physician density, proportion of Muslim population, adolescent fertility rate, and mean years of schooling were compiled for 187 countries from United Nations (UN) agencies. To extract the major factors from the latter five indicators, backward multiple regression analysis was used as the statistical tool. The national HIV prevalence rate was significantly correlated with almost all the predictors. Backward multiple linear regression analysis identified the proportion of Muslims, physician density, and adolescent fertility rate as the three most prominent factors linked with the national HIV epidemic. The findings support the hypotheses that a higher adolescent fertility rate in the population is an adverse effect of premarital and extramarital sex that leads to a longer period of sexual activity, which increases the risk of HIV infection. On the other hand, cultural restrictions among Muslims and a sufficient number of physicians decelerate the spread of HIV infection in society.
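Backward elimination of this kind can be sketched as follows: repeatedly drop the weakest predictor (smallest |t|) until all remaining |t| statistics exceed roughly 2, a rough 5% rule. The data and variable names below are invented stand-ins, not the study's actual data or software:

```python
import numpy as np

def ols(X, y):
    # OLS coefficients and t statistics via the normal equations.
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(XtX_inv))
    return beta, beta / se

def backward_select(X, y, names, t_crit=2.0):
    keep = list(range(X.shape[1]))
    while True:
        beta, t = ols(X[:, keep], y)
        cand = [i for i, j in enumerate(keep) if names[j] != "intercept"]
        worst = min(cand, key=lambda i: abs(t[i]))
        if abs(t[worst]) >= t_crit or len(cand) == 1:
            return [names[j] for j in keep]
        del keep[worst]

rng = np.random.default_rng(2)
n = 187                                   # one row per country, as in the study
Z = rng.normal(size=(n, 4))               # four standardized candidate predictors
names = ["intercept", "muslim_prop", "physician_density", "adol_fertility", "schooling"]
# Invented outcome: depends on the first three predictors, not the fourth.
y = 1.0 - 0.5 * Z[:, 0] - 0.4 * Z[:, 1] + 0.6 * Z[:, 2] + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), Z])
sel = backward_select(X, y, names)
print(sel)
```

With a strong simulated signal, the three informative predictors survive elimination, mirroring the study's finding that three of five candidate indicators remained in the final model.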

  11. Background recovery via motion-based robust principal component analysis with matrix factorization

    NASA Astrophysics Data System (ADS)

    Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping

    2018-03-01

    Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.
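A generic RPCA decomposition via the inexact augmented Lagrangian method can be sketched as below; it separates a matrix into low-rank plus sparse parts by alternating singular value thresholding and soft thresholding. This is a textbook sketch on synthetic data, not the paper's FM-RPCA/FL-ADM algorithm (which adds matrix factorization and motion-assisted weights):

```python
import numpy as np

def rpca(M, lam=None, tol=1e-7, max_iter=1000):
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())       # common heuristic initial step size
    mu_bar = mu * 1e6
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(max_iter):
        # singular value thresholding for the low-rank part
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # soft thresholding for the sparse part
        S = shrink(M - L + Y / mu, lam / mu)
        resid = M - L - S
        Y += mu * resid
        mu = min(mu * 1.05, mu_bar)            # inexact-ALM style mu growth
        if np.linalg.norm(resid) <= tol * np.linalg.norm(M):
            break
    return L, S

rng = np.random.default_rng(6)
L_true = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))   # rank-2 "background"
S_true = np.zeros((30, 30))
mask = rng.random((30, 30)) < 0.05
S_true[mask] = 10.0 * rng.choice([-1, 1], size=mask.sum())     # sparse "foreground"
L_hat, S_hat = rpca(L_true + S_true)
print(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```

The full SVD in every iteration is exactly the cost that the matrix-factorization variant in the paper is designed to avoid.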

  12. Moment method analysis of linearly tapered slot antennas

    NASA Technical Reports Server (NTRS)

    Koeksal, Adnan

    1993-01-01

    A method of moments (MOM) model for the analysis of the Linearly Tapered Slot Antenna (LTSA) is developed and implemented. The model employs an unequal-size rectangular sectioning for conducting parts of the antenna. Piecewise sinusoidal basis functions are used for the expansion of conductor current. The effect of the dielectric is incorporated in the model by using equivalent volume polarization current density and solving the equivalent problem in free space. The feed section of the antenna, including the microstripline, is handled rigorously in the MOM model by including slotline short-circuit and microstripline currents among the unknowns. Comparison with measurements is made to demonstrate the validity of the model for both the air case and the dielectric case. Validity of the model is also verified by extending it to the analysis of the skew-plate antenna and comparing the results with skew-segmentation modeling results for the same structure and with available data in the literature. Variation of the radiation pattern of the air LTSA with length, height, and taper angle is investigated, and the results are tabulated. Numerical results for the effect of the dielectric thickness and permittivity are presented.

  13. Analysis, design, and testing of a low cost, direct force command linear proof mass actuator for structural control

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Shelley, Stuart; Jacobson, Mark

    1993-01-01

    In this paper, the design, analysis, and test of a low cost, linear proof mass actuator for vibration control is presented. The actuator is based on a linear induction coil from a large computer disk drive. Such disk drives are readily available and provide the linear actuator, current feedback amplifier, and power supply for a highly effective, yet inexpensive, experimental laboratory actuator. The device is implemented as a force command input system, and the performance is virtually the same as other, more sophisticated, linear proof mass systems.

  14. Factors Affecting Online Groupwork Interest: A Multilevel Analysis

    ERIC Educational Resources Information Center

    Du, Jianxia; Xu, Jianzhong; Fan, Xitao

    2013-01-01

    The purpose of the present study is to examine the personal and contextual factors that may affect students' online groupwork interest. Using the data obtained from graduate students in an online course, both student- and group-level predictors for online groupwork interest were analyzed within the framework of hierarchical linear modeling…

  15. The effect of zinc supplementation on linear growth, body composition, and growth factors in preterm infants.

    PubMed

    Díaz-Gómez, N Marta; Doménech, Eduardo; Barroso, Flora; Castells, Silvia; Cortabarria, Carmen; Jiménez, Alejandro

    2003-05-01

    The aim of our study was to evaluate the effect of zinc supplementation on linear growth, body composition, and growth factors in premature infants. Thirty-six preterm infants (gestational age: 32.0 +/- 2.1 weeks, birth weight: 1704 +/- 364 g) participated in a longitudinal double-blind, randomized clinical trial. They were randomly allocated either to the supplemental (S) group fed with a standard term formula supplemented with zinc (final content 10 mg/L) and a small quantity of copper (final content 0.6 mg/L), or to the placebo group fed with the same formula without supplementation (final content of zinc: 5 mg/L and copper: 0.4 mg/L), from 36 weeks postconceptional age until 6 months corrected postnatal age. At each evaluation, anthropometric variables and bioelectrical impedance were measured, a 3-day dietary record was collected, and a blood sample was taken. We analyzed serum levels of total alkaline phosphatase, skeletal alkaline phosphatase (sALP), insulin-like growth factor (IGF)-I, IGF binding protein-3, IGF binding protein-1, zinc and copper, and the concentrations of zinc in erythrocytes. The S group had significantly higher zinc levels in serum and erythrocytes and lower serum copper levels with respect to the placebo group. We found that the S group had a greater linear growth (from baseline to 3 months corrected age: delta length standard deviation score: 1.32 +/- 0.8 vs 0.38 +/- 0.8). The increase in total body water and in serum levels of sALP was also significantly higher in the S group (total body water at 3 months corrected age: 3.8 +/- 0.5 vs 3.5 +/- 0.4 kg; at 6 months corrected age: 4.5 +/- 0.5 vs 4.2 +/- 0.4 kg; sALP at 3 months corrected age: 140.2 +/- 28.7 vs 118.7 +/- 18.8 microg/L). Zinc supplementation has a positive effect on linear growth in premature infants.

  16. Cement Leakage in Percutaneous Vertebral Augmentation for Osteoporotic Vertebral Compression Fractures: Analysis of Risk Factors.

    PubMed

    Xie, Weixing; Jin, Daxiang; Ma, Hui; Ding, Jinyong; Xu, Jixi; Zhang, Shuncong; Liang, De

    2016-05-01

    The risk factors for cement leakage were retrospectively reviewed in 192 patients who underwent percutaneous vertebral augmentation (PVA). To discuss the factors related to the cement leakage in PVA procedure for the treatment of osteoporotic vertebral compression fractures. PVA is widely applied for the treatment of osteoporotic vertebral fractures. Cement leakage is a major complication of this procedure. The risk factors for cement leakage were controversial. A retrospective review of 192 patients who underwent PVA was conducted. The following data were recorded: age, sex, bone density, number of fractured vertebrae before surgery, number of treated vertebrae, severity of the treated vertebrae, operative approach, volume of injected bone cement, preoperative vertebral compression ratio, preoperative local kyphosis angle, intraosseous clefts, preoperative vertebral cortical bone defect, and ratio and type of cement leakage. To study the correlation between each factor and cement leakage ratio, bivariate regression analysis was employed to perform univariate analysis, whereas multivariate linear regression analysis was employed to perform multivariate analysis. The study included 192 patients (282 treated vertebrae), and cement leakage occurred in 100 vertebrae (35.46%). The vertebrae with preoperative cortical bone defects generally exhibited higher cement leakage ratio, and the leakage is typically type C. Vertebrae with intact cortical bones before the procedure tend to experience type S leakage. Univariate analysis showed that patient age, bone density, number of fractured vertebrae before surgery, and vertebral cortical bone were associated with cement leakage ratio (P<0.05). Multivariate analysis showed that the main factors influencing bone cement leakage are bone density and vertebral cortical bone defect, with standardized partial regression coefficients of -0.085 and 0.144, respectively. High bone density and vertebral cortical bone defect are

  17. Application of Linear Discriminant Analysis in Dimensionality Reduction for Hand Motion Classification

    NASA Astrophysics Data System (ADS)

    Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.

    2012-01-01

    The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimension with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive than classical LDA in terms of both classification accuracy and computational cost. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability than a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, with a linear discriminant classifier.
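The core projection step that all these LDA variants build on can be sketched in plain NumPy. This is the textbook Fisher LDA (within- and between-class scatter), not the extended uncorrelated/orthogonal variants compared in the study, and the 6-D synthetic "feature vectors" are invented:

```python
import numpy as np

# Minimal Fisher LDA projection: maximize between-class scatter relative
# to within-class scatter, then project onto the leading eigenvectors.
def lda_projection(X, y, n_comp):
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))            # within-class scatter
    Sb = np.zeros((d, d))            # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * diff @ diff.T
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_comp]]

rng = np.random.default_rng(3)
# Two synthetic 6-D feature classes separated along one coordinate.
X0 = rng.normal(0.0, 1.0, (100, 6))
X1 = rng.normal(0.0, 1.0, (100, 6)); X1[:, 2] += 4.0
X = np.vstack([X0, X1]); y = np.array([0] * 100 + [1] * 100)
W = lda_projection(X, y, 1)          # project 6-D features to 1-D
z = X @ W
print(abs(z[y == 0].mean() - z[y == 1].mean()))
```

For C classes the between-class scatter has rank at most C-1, which is why LDA reduces an EMG feature vector to very few discriminant dimensions.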

  18. Dynamic Stability Analysis of Linear Time-varying Systems via an Extended Modal Identification Approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhisai; Liu, Li; Zhou, Sida; Naets, Frank; Heylen, Ward; Desmet, Wim

    2017-03-01

    The problem of linear time-varying (LTV) system modal analysis is considered based on time-dependent state space representations, as classical modal analysis of linear time-invariant systems and current LTV system modal analysis under the "frozen-time" assumption are not able to determine the dynamic stability of LTV systems. Time-dependent state space representations of LTV systems are first introduced, and the corresponding modal analysis theories are subsequently presented via a stability-preserving state transformation. The time-varying modes of LTV systems are extended in terms of uniqueness, and are further interpreted to determine the system's stability. An extended modal identification is proposed to estimate the time-varying modes, consisting of the estimation of the state transition matrix via a subspace-based method and the extraction of the time-varying modes by the QR decomposition. The proposed approach is numerically validated by three numerical cases, and is experimentally validated by a coupled moving-mass simply supported beam experimental case. The proposed approach is capable of accurately estimating the time-varying modes, and provides a new way to determine the dynamic stability of LTV systems by using the estimated time-varying modes.

  19. Characterising non-linear dynamics in nocturnal breathing patterns of healthy infants using recurrence quantification analysis.

    PubMed

    Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M; Dakin, Carolyn

    2013-05-01

    Breathing dynamics vary between infant sleep states, and are likely to exhibit non-linear behaviour. This study applied the non-linear analytical tool recurrence quantification analysis (RQA) to 400 breath interval periods of REM and N-REM sleep, and then using an overlapping moving window. The RQA variables were different between sleep states, with REM radius 150% greater than N-REM radius, and REM laminarity 79% greater than N-REM laminarity. RQA allowed the observation of temporal variations in non-linear breathing dynamics across a night's sleep at 30s resolution, and provides a basis for quantifying changes in complex breathing dynamics with physiology and pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.
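Two of the RQA quantities mentioned above, recurrence rate and laminarity, can be computed from a thresholded recurrence matrix as sketched below. This is a simplified toy on synthetic 1-D signals (full RQA first embeds the signal in phase space, and the radius and signals here are invented), not the study's pipeline:

```python
import numpy as np

# Toy recurrence quantification analysis: build a thresholded recurrence
# matrix, then compute recurrence rate and laminarity (the share of
# recurrent points lying in vertical runs of length >= 2).
def rqa(x, radius):
    D = np.abs(x[:, None] - x[None, :])        # pairwise distances
    R = (D <= radius).astype(int)
    rr = R.mean()                              # recurrence rate
    lam_pts = 0
    for col in R.T:                            # vertical line structures
        run = 0
        for v in col:
            if v:
                run += 1
            else:
                if run >= 2:
                    lam_pts += run
                run = 0
        if run >= 2:
            lam_pts += run
    laminarity = lam_pts / max(R.sum(), 1)
    return rr, laminarity

t = np.linspace(0, 8 * np.pi, 200)
rr_per, lam_per = rqa(np.sin(t), radius=0.2)                     # regular signal
rr_rand, _ = rqa(np.random.default_rng(4).normal(0, 3, 200), radius=0.2)  # noise
print(rr_per, lam_per, rr_rand)
```

A regular signal revisits the same values often, so its recurrence rate exceeds that of a widely spread noise signal, which is the kind of state-dependent difference (REM vs N-REM radius and laminarity) the study quantifies.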

  20. Quantization of liver tissue in dual kVp computed tomography using linear discriminant analysis

    NASA Astrophysics Data System (ADS)

    Tkaczyk, J. Eric; Langan, David; Wu, Xiaoye; Xu, Daniel; Benson, Thomas; Pack, Jed D.; Schmitz, Andrea; Hara, Amy; Palicek, William; Licato, Paul; Leverentz, Jaynne

    2009-02-01

    Linear discriminant analysis (LDA) is applied to dual kVp CT and used for tissue characterization. The potential to quantitatively model both malignant and benign, hypo-intense liver lesions is evaluated by analysis of portal-phase, intravenous CT scan data obtained on human patients. Masses with an a priori classification are mapped to a distribution of points in basis material space. The degree of localization of tissue types in the material basis space is related to both quantum noise and real compositional differences. The density maps are analyzed with LDA and studied with system simulations to differentiate these factors. The discriminant analysis is formulated so as to incorporate the known statistical properties of the data. Effective kVp separation and mAs relate to the precision of tissue localization. Bias in the material position is related to the degree of X-ray scatter and the partial-volume effect. Experimental data and simulations demonstrate that for single-energy (HU) imaging or image-based decomposition, pixel values of water-like tissues depend on proximity to other iodine-filled bodies. Beam-hardening errors cause a shift in image value on the scale of the difference sought between cancerous and cystic lesions. In contrast, projection-based decomposition, or its equivalent when implemented on a carefully calibrated system, can provide accurate data. On such a system, LDA may provide novel quantitative capabilities for tissue characterization in dual energy CT.

  1. Exploratory Bi-Factor Analysis: The Oblique Case

    ERIC Educational Resources Information Center

    Jennrich, Robert I.; Bentler, Peter M.

    2012-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 47:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…

  2. Bayesian Exploratory Factor Analysis

    PubMed Central

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517

  3. Linear circuit analysis program for IBM 1620 Monitor 2, 1311/1443 data processing system /CIRCS/

    NASA Technical Reports Server (NTRS)

    Hatfield, J.

    1967-01-01

    CIRCS is a modification of the IBSNAP Circuit Analysis Program for use on smaller systems. This data processing system retains the basic dc, transient analysis, and FORTRAN 2 formats. It can be used on the IBM 1620/1311 Monitor I Mod 5 system, and solves a linear network containing 15 nodes and 45 branches.

  4. Linear analysis of auto-organization in Hebbian neural networks.

    PubMed

    Carlos Letelier, J; Mpodozis, J

    1995-01-01

    The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple, small system that contains a non-linearity mimicking a metabolic constraint is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.
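The interplay of a linear Hebbian term with a normalizing non-linearity can be illustrated with Oja's rule, where a decay term keeps total synaptic strength bounded, in the spirit of a metabolic constraint. This is an illustrative stand-in on invented 2-D inputs, not the paper's matricial laminae model:

```python
import numpy as np

# Oja's rule: Hebbian growth (eta * v * x) plus a normalizing decay
# (-eta * v^2 * w) that bounds the synaptic weight vector.
rng = np.random.default_rng(5)
C = np.array([[1.0, 0.9], [0.9, 1.0]])        # inputs correlated along (1, 1)
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    v = w @ x                        # postsynaptic activity (linear unit)
    w += eta * v * (x - v * w)       # Hebbian term minus normalizing decay
print(w, np.linalg.norm(w))
```

The weight vector converges to unit norm and aligns with the principal direction of the input correlations, a simple "order" emerging from an otherwise unstable linear Hebbian dynamic.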

  5. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, i.e., faraway anchors should have smaller influence on the current datum, and 2) flexibility, i.e., balancing the reconstruction of the current datum against locality. In this paper, we address the problem from a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.

  6. Linear Discriminant Analysis on a Spreadsheet.

    ERIC Educational Resources Information Center

    Busbey, Arthur Bresnahan III

    1989-01-01

    Described is a software package, "Trapeze," within which a routine called LinDis can be used. Discussed are teaching methods, the linear discriminant model and equations, the LinDis worksheet, and an example. The set up for this routine is included. (CW)

  7. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed

    2011-02-20

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  8. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    NASA Astrophysics Data System (ADS)

    Wollaber, Allan B.; Larsen, Edward W.

    2011-02-01

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
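
    The role of the time-discretization parameter α is loosely analogous to the θ-parameter of a one-step scheme for the scalar model problem du/dt = −λu. The sketch below illustrates, on that model problem only (not the TRT equations), how a scheme can be unconditionally stable for α ≥ 0.5 and still produce damped, sign-flipping iterates at large time steps:

```python
import numpy as np

def amplification_factor(alpha, lam_dt):
    """One-step theta-scheme for du/dt = -lam*u:
    u_{n+1} = u_n - lam*dt*(alpha*u_{n+1} + (1 - alpha)*u_n),
    so u_{n+1} = g * u_n with the factor g returned here."""
    return (1.0 - (1.0 - alpha) * lam_dt) / (1.0 + alpha * lam_dt)

lam_dt = np.logspace(-2, 3, 200)   # lambda * dt over five decades

# |g| <= 1 for every step size when alpha is in [0.5, 1]: unconditional stability
stable = (np.all(np.abs(amplification_factor(1.0, lam_dt)) <= 1.0)
          and np.all(np.abs(amplification_factor(0.5, lam_dt)) <= 1.0))

# ...yet for alpha = 0.5 a large step gives g < 0: damped but sign-flipping
# (oscillatory) iterates, loosely mirroring the damped oscillations noted above
oscillatory = amplification_factor(0.5, 100.0) < 0.0
```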

  9. A single-degree-of-freedom model for non-linear soil amplification

    USGS Publications Warehouse

    Erdik, Mustafa Ozder

    1979-01-01

    For a proper understanding of soil behavior during earthquakes and assessment of a realistic surface motion, studies of the large-strain dynamic response of non-linear hysteretic soil systems are indispensable. Most of the presently available studies are based on the assumption that the response of a soil deposit is mainly due to the upward propagation of horizontally polarized shear waves from the underlying bedrock. Equivalent-linear procedures, currently in common use in non-linear soil response analysis, provide a simple approach and have compared favorably with actual recorded motions in some particular cases. Strain compatibility in these equivalent-linear approaches is maintained by selecting values of shear moduli and damping ratios in accordance with the average soil strains, in an iterative manner. Truly non-linear constitutive models with complete strain compatibility have also been employed. The equivalent-linear approaches often raise some doubt as to the reliability of their results concerning the system response in high-frequency regions, where they may underestimate the surface motion by as much as a factor of two or more. Although such studies are complete in their methods of analysis, they inevitably provide applications pertaining only to a few specific soil systems and do not lead to general conclusions about soil behavior. This report attempts to provide a general picture of soil response through the use of a single-degree-of-freedom non-linear hysteretic model. Although the investigation is based on a specific type of nonlinearity and a set of dynamic soil properties, the method described is not limited to these assumptions and is equally applicable to other types of nonlinearity and soil parameters.

  10. Graph-based normalization and whitening for non-linear data analysis.

    PubMed

    Aaron, Catherine

    2006-01-01

    In this paper we construct a graph-based normalization algorithm for non-linear data analysis. The principle of this algorithm is to get a spherical average neighborhood with unit radius. First we present a class of global dispersion measures used for "global normalization"; we then adapt these measures using a weighted graph to build a local normalization called "graph-based" normalization. Then we give details of the graph-based normalization algorithm and illustrate some results. In the second part we present a graph-based whitening algorithm built by analogy between the "global" and the "local" problem.
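
    A minimal "global" version of the idea can be sketched as follows: rescale the data so that the average k-nearest-neighbour radius equals one (the paper's graph-based variant makes this local; the value of k and the data below are arbitrary):

```python
import numpy as np

def knn_radii(X, k=5):
    """Distance from each point to its k-th nearest neighbour."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(D, axis=1)[:, k]   # column 0 is the point itself (distance 0)

def normalize_unit_neighborhood(X, k=5):
    """'Global' normalization: rescale so the average k-NN radius equals 1."""
    return X / knn_radii(X, k).mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) * 10.0      # arbitrary, badly scaled data
Xn = normalize_unit_neighborhood(X)
mean_radius = knn_radii(Xn).mean()        # = 1 by construction
```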

  11. Transcription factors, coregulators, and epigenetic marks are linearly correlated and highly redundant

    PubMed Central

    Ahsendorf, Tobias; Müller, Franz-Josef; Topkar, Ved; Gunawardena, Jeremy; Eils, Roland

    2017-01-01

    The DNA microstates that regulate transcription include sequence-specific transcription factors (TFs), coregulatory complexes, nucleosomes, histone modifications, DNA methylation, and parts of the three-dimensional architecture of genomes, which could create an enormous combinatorial complexity across the genome. However, many proteins and epigenetic marks are known to colocalize, suggesting that the information content encoded in these marks can be compressed. It has so far proved difficult to understand this compression in a systematic and quantitative manner. Here, we show that simple linear models can reliably predict the data generated by the ENCODE and Roadmap Epigenomics consortia. Further, we demonstrate that a small number of marks can predict all other marks with high average correlation across the genome, systematically revealing the substantial information compression that is present in different cell lines. We find that the linear models for activating marks are typically cell line-independent, while those for silencing marks are predominantly cell line-specific. Of particular note, a nuclear receptor corepressor, transducin beta-like 1 X-linked receptor 1 (TBLR1), was highly predictive of other marks in two hematopoietic cell lines. The methodology presented here shows how the potentially vast complexity of TFs, coregulators, and epigenetic marks at eukaryotic genes is highly redundant and that the information present can be compressed onto a much smaller subset of marks. These findings could be used to efficiently characterize cell lines and tissues based on a small number of diagnostic marks and suggest how the DNA microstates, which regulate the expression of individual genes, can be specified. PMID:29216191
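
    The core claim, that a few marks linearly predict the rest, can be illustrated on synthetic data (this is not the ENCODE/Roadmap pipeline; the latent-factor construction below is invented to mimic the redundancy described above):

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_marks = 1000, 8

# Synthetic "marks" over genomic bins: a couple of latent activity profiles
# mixed linearly plus noise, mimicking colocalization of marks
latent = rng.normal(size=(n_bins, 2))
mixing = rng.normal(size=(2, n_marks))
marks = latent @ mixing + 0.1 * rng.normal(size=(n_bins, n_marks))

# Predict mark 0 from a small subset (marks 1-3) by ordinary least squares
X = np.column_stack([np.ones(n_bins), marks[:, 1:4]])
beta, *_ = np.linalg.lstsq(X, marks[:, 0], rcond=None)
pred = X @ beta
r = np.corrcoef(pred, marks[:, 0])[0, 1]   # high: the marks are redundant
```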

  12. Design of a transverse-flux permanent-magnet linear generator and controller for use with a free-piston stirling engine

    NASA Astrophysics Data System (ADS)

    Zheng, Jigui; Huang, Yuping; Wu, Hongxing; Zheng, Ping

    2016-07-01

    Transverse-flux machines offer high efficiency and have been applied to Stirling engine and permanent-magnet synchronous linear generator systems; however, their use in larger applications is restricted by a low power factor and a complex manufacturing process. A novel type of cylindrical, non-overlapping, transverse-flux, permanent-magnet linear motor (TFPLM) is investigated, and a structure with high power factor and reduced process complexity is developed. The impact of the magnetic leakage factor on power factor is discussed; using a Finite Element Analysis (FEA) model of the Stirling engine and TFPLM, an optimization method for the electro-magnetic design of the TFPLM is proposed based on the magnetic leakage factor. The relation between power factor and structure parameters is investigated, and a structure parameter optimization method is proposed that takes maximum power factor as its goal. Finally, a test bench is built, starting and generating experiments are performed, and good agreement between simulation and experiment is achieved. The power factor is improved and the process complexity is decreased. This research provides guidance for the design of high-power-factor permanent-magnet linear generators.

  13. Linear ordered collagen scaffolds loaded with collagen-binding basic fibroblast growth factor facilitate recovery of sciatic nerve injury in rats.

    PubMed

    Ma, Fukai; Xiao, Zhifeng; Chen, Bing; Hou, Xianglin; Dai, Jianwu; Xu, Ruxiang

    2014-04-01

    Natural biological functional scaffolds, consisting of biological materials filled with promoting elements, provide a promising strategy for the regeneration of peripheral nerve defects. Collagen conduits have been used widely due to their excellent biological properties. Linear ordered collagen scaffold (LOCS) fibers are good lumen fillers that can guide nerve regeneration in an ordered direction. In addition, basic fibroblast growth factor (bFGF) is important in the recovery of nerve injury. However, the traditional method for delivering bFGF to the lesion site has no long-term effect because of its short half-life and rapid diffusion. Therefore, we fused a specific collagen-binding domain (CBD) peptide to the N-terminal of native basic fibroblast growth factor (NAT-bFGF) to retain bFGF on the collagen scaffolds. In this study, a natural biological functional scaffold was constructed using collagen tubes filled with collagen-binding bFGF (CBD-bFGF)-loaded LOCS to promote regeneration in a 5-mm rat sciatic nerve transection model. Functional evaluation, histological investigation, and morphometric analysis indicated that the natural biological functional scaffold retained more bFGF at the injury site, guided axon growth, and promoted nerve regeneration as well as functional restoration.

  14. Non-linear analysis of wave propagation using transform methods and plates and shells using integral equations

    NASA Astrophysics Data System (ADS)

    Pipkins, Daniel Scott

    Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain, where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact; as a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms, which are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element-level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams; for non-linear systems, a viscoelastic rod and a Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries, loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem, which differ substantially.

  15. Non-linear dynamic analysis of geared systems, part 2

    NASA Technical Reports Server (NTRS)

    Singh, Rajendra; Houser, Donald R.; Kahraman, Ahmet

    1990-01-01

    A good understanding of the steady state dynamic behavior of a geared system is required in order to design reliable and quiet transmissions. This study focuses on a system containing a spur gear pair with backlash and periodically time-varying mesh stiffness, and rolling element bearings with clearance type non-linearities. A dynamic finite element model of the linear time-invariant (LTI) system is developed. Effects of several system parameters, such as torsional and transverse flexibilities of the shafts and prime mover/load inertias, on free and force vibration characteristics are investigated. Several reduced order LTI models are developed and validated by comparing their eigen solution with the finite element model results. Several key system parameters such as mean load and damping ratio are identified and their effects on the non-linear frequency response are evaluated quantitatively. Other fundamental issues such as the dynamic coupling between non-linear modes, dynamic interactions between component non-linearities and time-varying mesh stiffness, and the existence of subharmonic and chaotic solutions including routes to chaos have also been examined in depth.

  16. Optical analysis and thermal management of 2-cell strings linear concentrating photovoltaic system

    NASA Astrophysics Data System (ADS)

    Reddy, K. S.; Kamnapure, Nikhilesh R.

    2015-09-01

    This paper presents optical and thermal analyses of a linear concentrating photovoltaic/thermal collector under different operating conditions. The linear concentrating photovoltaic (CPV) system consists of a highly reflective mirror, a receiver, and a semi-dual axis tracking mechanism. The CPV receiver embodies two strings of triple-junction cells (100 cells in each string) adhered to a mild steel circular tube mounted along the focal line of the trough. This system provides 560 W of electricity and 1580 W of heat, which needs to be dissipated by active cooling. An Al2O3/water nanofluid is used as the heat transfer fluid (HTF) flowing through the circular receiver for CPV cell cooling. Optical analysis of the linear CPV system with a 3.35 m2 aperture and a geometric concentration ratio (CR) of 35 is carried out using the Advanced System Analysis Program (ASAP), an optical simulation tool. A non-uniform intensity distribution model of the solar disk is used to model the sun in ASAP. The impact of random errors, including slope error (σslope), tracking error (σtrack), and the apparent change in the sun's width (σsun), on the optical performance of the collector is shown. The optical simulations give an optical efficiency (ηo) of 88.32% for the 2-cell string CPV concentrator. Thermal analysis of the CPV receiver is carried out with conjugate heat transfer modeling in ANSYS FLUENT-14. Numerical simulations of Al2O3/water nanofluid turbulent forced convection are performed for various parameters such as nanoparticle volume fraction (φ) and Reynolds number (Re). The addition of the nanoparticles to water enhances heat transfer in the range of 3.28% - 35.6% for φ = 1% - 6%. Numerical results are compared with literature data and show reasonable agreement.

  17. A systematic review of methodology: time series regression analysis for environmental factors and infectious diseases.

    PubMed

    Imai, Chisato; Hashizume, Masahiro

    2015-03-01

    Time series analysis is suitable for investigations of relatively direct and short-term effects of exposures on outcomes. In environmental epidemiology studies, this method has been one of the standard approaches for assessing impacts of environmental factors on acute non-infectious diseases (e.g. cardiovascular deaths), conventionally with generalized linear or additive models (GLM and GAM). However, the same analysis practices are often applied to infectious diseases despite substantial differences from non-infectious diseases that may pose analytical challenges. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic review was conducted to elucidate important issues in assessing the associations between environmental factors and infectious diseases using time series analysis with GLM and GAM. Published studies on the associations between weather factors and malaria, cholera, dengue, and influenza were targeted. Our review raised issues regarding the estimation of susceptible populations and exposure lag times, the adequacy of seasonal adjustments, the presence of strong autocorrelations, and the lack of a smaller observation time unit for outcomes (i.e. daily data). These concerns may be attributable to features specific to infectious diseases, such as transmission among individuals and complicated causal mechanisms. The consequence of not taking adequate measures to address these issues is distortion of the risk quantification of exposure factors. Future studies should pay careful attention to these details and examine alternative models or methods that improve time series regression analysis for environmental determinants of infectious diseases.
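
    As a sketch of the conventional approach discussed above, the following fits a Poisson GLM (log link) with a lagged exposure and seasonal terms by iteratively reweighted least squares; the weekly counts and the two-week lag are simulated, not drawn from any reviewed study:

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Poisson GLM (log link) fitted by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu        # working response
        WX = X * mu[:, None]                # Poisson weights: Var(y) = mu
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# Simulated weekly counts driven by a lagged exposure plus seasonality
rng = np.random.default_rng(3)
n = 520
t = np.arange(n)
temp = 10.0 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, n)
lagged = np.roll(temp, 2)                   # exposure acts two weeks later
eta = 1.5 + 0.05 * lagged + 0.3 * np.sin(2 * np.pi * t / 52)
y = rng.poisson(np.exp(eta))

X = np.column_stack([np.ones(n), lagged,
                     np.sin(2 * np.pi * t / 52), np.cos(2 * np.pi * t / 52)])
beta = poisson_irls(X, y)    # beta[1] recovers the lagged exposure effect
```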

  18. Non-linear Analysis of Scalp EEG by Using Bispectra: The Effect of the Reference Choice

    PubMed Central

    Chella, Federico; D'Andrea, Antea; Basti, Alessio; Pizzella, Vittorio; Marzetti, Laura

    2017-01-01

    Bispectral analysis is a signal processing technique that makes it possible to capture the non-linear and non-Gaussian properties of EEG signals. It has found various applications in EEG research and clinical practice, including the assessment of anesthetic depth, the identification of epileptic seizures, and more recently, the evaluation of non-linear cross-frequency brain functional connectivity. However, the validity and reliability of the indices drawn from bispectral analysis of EEG signals are potentially biased by the use of a non-neutral EEG reference. The present study aims at investigating the effects of the reference choice on the analysis of the non-linear features of EEG signals through bicoherence, as well as on the estimation of cross-frequency EEG connectivity through two different non-linear measures, i.e., the cross-bicoherence and the antisymmetric cross-bicoherence. To this end, four commonly used reference schemes were considered: the vertex electrode (Cz), the digitally linked mastoids, the average reference, and the Reference Electrode Standardization Technique (REST). The reference effects were assessed both in simulations and in a real EEG experiment. The simulations allowed us to investigate: (i) the effects of electrode density on the performance of the above references in the estimation of bispectral measures; and (ii) the effects of head model accuracy on the performance of the REST. For real data, the EEG signals recorded from 10 subjects during eyes-open resting state were examined, and the distortions induced by the reference choice in the patterns of alpha-beta bicoherence, cross-bicoherence, and antisymmetric cross-bicoherence were assessed. The results showed significant differences in the findings depending on the chosen reference, with the REST providing superior performance to all the other references in approximating the ideal neutral reference. In conclusion, this study highlights the importance of considering the
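
    A minimal segment-averaged bicoherence estimator (one of several normalizations in use) can be sketched as follows; the quadratically coupled test signal is synthetic and unrelated to the EEG data above:

```python
import numpy as np

def bicoherence(x, nperseg=256):
    """Segment-averaged bicoherence b(f1, f2) (Hann window, no overlap).
    Values near 1 indicate quadratic phase coupling at (f1, f2, f1 + f2)."""
    win = np.hanning(nperseg)
    segs = x[: len(x) // nperseg * nperseg].reshape(-1, nperseg) * win
    F = np.fft.rfft(segs, axis=1)
    nf = F.shape[1] // 2
    idx = np.add.outer(np.arange(nf), np.arange(nf))   # bin of f1 + f2
    num = np.zeros((nf, nf), dtype=complex)
    den1 = np.zeros((nf, nf))
    den2 = np.zeros((nf, nf))
    for S in F:
        pair = S[:nf, None] * S[None, :nf]
        total = S[idx]
        num += pair * np.conj(total)
        den1 += np.abs(pair) ** 2
        den2 += np.abs(total) ** 2
    return np.abs(num) / np.sqrt(den1 * den2 + 1e-30)

# Quadratically coupled triad: a component at f1 + f2 with phase p1 + p2
rng = np.random.default_rng(4)
t = np.arange(256 * 64)
k1, k2 = 13, 28                 # exact FFT bins for nperseg = 256
p1, p2 = rng.uniform(0, 2 * np.pi, 2)
x = (np.cos(2 * np.pi * k1 / 256 * t + p1)
     + np.cos(2 * np.pi * k2 / 256 * t + p2)
     + 0.5 * np.cos(2 * np.pi * (k1 + k2) / 256 * t + p1 + p2)
     + rng.normal(0, 0.5, t.size))
b = bicoherence(x)
peak = b[k1, k2]      # high: (13, 28, 41) is phase-coupled
control = b[k1, k1]   # low: bin 26 contains only noise
```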

  19. Linear system theory

    NASA Technical Reports Server (NTRS)

    Callier, Frank M.; Desoer, Charles A.

    1991-01-01

    The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
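
    Two of the book's basic objects, a discrete-time state-space model and its controllability matrix, can be illustrated numerically (the matrices below are an arbitrary example, not taken from the text):

```python
import numpy as np

# Discrete-time LTI state-space model: x[k+1] = A x[k] + B u[k], y[k] = C x[k]
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability matrix [B, AB, ..., A^{n-1} B]; full rank <=> controllable
n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
controllable = np.linalg.matrix_rank(ctrb) == n

# Unit-step response; it settles at C (I - A)^{-1} B = 20/3
x = np.zeros((2, 1))
y = []
for _ in range(50):
    y.append((C @ x).item())
    x = A @ x + B * 1.0
```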

  20. Sensitivity Analysis of the USLE Soil Erodibility Factor to Its Determining Parameters

    NASA Astrophysics Data System (ADS)

    Mitova, Milena; Rousseva, Svetla

    2014-05-01

    Soil erosion is recognized as one of the most serious soil threats worldwide. Soil erosion prediction is the first step in soil conservation planning. The Universal Soil Loss Equation (USLE) is one of the most widely used models for soil erosion prediction. One of the five USLE predictors is the soil erodibility factor (K-factor), which evaluates the impact of soil characteristics on soil erosion rates. The soil erodibility nomograph defines the K-factor as a function of soil characteristics such as particle size distribution (fractions finer than 0.002 mm and from 0.1 to 0.002 mm), organic matter content, soil structure, and soil profile water permeability. Identifying the soil characteristics that most influence the K-factor would give an opportunity to control soil loss through erosion by controlling the parameters that reduce the K-factor value. The aim of the report is to present the results of an analysis of the relative weight of these soil characteristics in the K-factor values. The relative impact of the soil characteristics on the K-factor was studied through a series of statistical analyses of data from the geographic database for soil erosion risk assessments in Bulgaria. The degree of correlation between K-factor values and the parameters that determine it was studied by correlation analysis. The sensitivity of the K-factor was determined by studying the variance of each parameter within the range between its minimum and maximum possible values, considering average values of the other factors. A normalizing transformation was applied to the data sets because of the different dimensions and orders of variation of the values of the various parameters. The results show that the content of particles finer than 0.002 mm has the most significant relative impact on soil erodibility, followed by the content of particles with size from 0.1 mm to 0.002 mm, the class of water permeability of the soil profile, the content of organic matter, and the aggregation class.
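
    One widely cited algebraic approximation of the erodibility nomograph (Wischmeier-Smith, US customary units) lends itself to a simple one-at-a-time sensitivity sweep; the base values and ranges below are illustrative, not those of the Bulgarian database, so the resulting ranking need not match the report's:

```python
def usle_k(silt_vfs, clay, om, s, p):
    """Wischmeier-Smith nomograph approximation (US customary units).
    silt_vfs: % silt + very fine sand (0.1-0.002 mm); clay: % finer than
    0.002 mm; om: % organic matter; s: structure class 1-4; p: profile
    permeability class 1-6."""
    M = silt_vfs * (100.0 - clay)
    return (2.1e-4 * M**1.14 * (12.0 - om)
            + 3.25 * (s - 2) + 2.5 * (p - 3)) / 100.0

# One-at-a-time sensitivity: sweep each input over a plausible range while
# holding the others at a base value, and record the spread in K
base = dict(silt_vfs=40.0, clay=20.0, om=2.0, s=2, p=3)
ranges = dict(silt_vfs=(5.0, 70.0), clay=(5.0, 60.0), om=(0.5, 4.0),
              s=(1, 4), p=(1, 6))
spread = {}
for name, (lo, hi) in ranges.items():
    k_lo = usle_k(**{**base, name: lo})
    k_hi = usle_k(**{**base, name: hi})
    spread[name] = abs(k_hi - k_lo)
most_sensitive = max(spread, key=spread.get)
```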

  1. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for the elimination of fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than the classical HPLC method.
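
    The reduction of a multivariate (multi-wavelength) calibration to a univariate result can be sketched as follows; the concentrations, slopes, and noise level are invented for illustration and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical calibration set: known concentrations of one analyte and
# simulated peak areas at five detection wavelengths (area ~ slope * conc)
conc = np.array([2.0, 4.0, 8.0, 12.0, 16.0, 20.0])          # mg/L
true_slopes = np.array([1.8, 2.4, 3.1, 2.0, 1.2])           # one per wavelength
areas = conc[:, None] * true_slopes[None, :] + rng.normal(0, 0.3, (6, 5))

# One univariate regression line (area vs. concentration) per wavelength
X = np.column_stack([np.ones_like(conc), conc])
coef, *_ = np.linalg.lstsq(X, areas, rcond=None)            # shape (2, 5)

# Collapse the multivariate calibration to one number for an "unknown" sample
# by averaging the five univariate inverse predictions
unknown_areas = 10.0 * true_slopes                          # true conc = 10
pred = np.mean((unknown_areas - coef[0]) / coef[1])
```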

  2. Factor Analysis of Intern Effectiveness

    ERIC Educational Resources Information Center

    Womack, Sid T.; Hannah, Shellie Louise; Bell, Columbus David

    2012-01-01

    Four factors in teaching intern effectiveness, as measured by a Praxis III-similar instrument, were found among observational data of teaching interns during the 2010 spring semester. Those factors were lesson planning, teacher/student reflection, fairness & safe environment, and professionalism/efficacy. This factor analysis was as much of a…

  3. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.

  4. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
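
    The underlying Neumann-Ulam idea can be sketched for a small system; this is the forward-walk variant (the paper analyzes the adjoint method), and the termination probability and walk counts below are arbitrary choices:

```python
import numpy as np

def neumann_ulam_solve(A, b, walks_per_row=4000, p_kill=0.2, seed=0):
    """Estimate the solution of Ax = b by the (forward) Neumann-Ulam random
    walk method: write x = Hx + b with H = I - A and sample the Neumann
    series sum_k H^k b. Requires the spectral radius of |H| to be below 1."""
    rng = np.random.default_rng(seed)
    n = len(b)
    H = np.eye(n) - A
    P = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)      # transition probs
    W = np.divide(H, P, out=np.zeros_like(H), where=P > 0)    # per-step weights
    x = np.zeros(n)
    for start in range(n):
        acc = 0.0
        for _ in range(walks_per_row):
            i, w = start, 1.0
            est = b[i]
            while rng.random() > p_kill:                      # survival sampling
                j = rng.choice(n, p=P[i])
                w *= W[i, j] / (1.0 - p_kill)
                i = j
                est += w * b[i]
            acc += est
        x[start] = acc / walks_per_row
    return x

# Diagonally dominant test system (spectral radius of |I - A| well below 1)
A = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.2],
              [0.2, 0.1, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x_mc = neumann_ulam_solve(A, b)
err = np.max(np.abs(x_mc - np.linalg.solve(A, b)))
```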

  5. Linear Instability Analysis of non-uniform Bubbly Mixing layer with Two-Fluid model

    NASA Astrophysics Data System (ADS)

    Sharma, Subash; Chetty, Krishna; Lopez de Bertodano, Martin

    We examine the inviscid instability of a non-uniform adiabatic bubbly shear layer with a two-fluid model. The two-fluid model is made well-posed with closure relations for the interfacial forces. First, a characteristic analysis is carried out to study the well-posedness of the model over a range of void fractions, with interfacial forces for virtual mass, interfacial drag, and interfacial pressure. A dispersion analysis then allows us to obtain growth rates and wavelengths. Then, the well-posed two-fluid model is solved using CFD to validate the results obtained with the linear stability analysis. The effect of the void fraction and of the distribution profile on stability is analyzed.

  6. Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report

    NASA Technical Reports Server (NTRS)

    Ahmad, Shahid

    1991-01-01

    An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration, and linear and nonlinear transient dynamic problems involving two- and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For free-vibration analysis, a new real-variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of their complexity they had never been implemented in exact form. In the present work, linear and nonlinear time-domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all existing formulations of the BEM in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for the discretization of geometry and functional variations in space, and higher-order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, such that not only problems of layered media and soil-structure interaction can be analyzed, but a large problem can also be solved by the usual sub-structuring technique. The analyses have been incorporated into a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons

  7. Robust Bayesian Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Yuan, Ke-Hai

    2003-01-01

    Bayesian factor analysis (BFA) assumes the normal distribution of the current sample conditional on the parameters. Practical data in social and behavioral sciences typically have significant skewness and kurtosis. If the normality assumption is not attainable, the posterior analysis will be inaccurate, although the BFA depends less on the current…

  8. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signaling enables better spectral efficiency at the expense of increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption for the interference, which is used when implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression for the bit-error probability that can be used to predict performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  9. Assessing variance components in multilevel linear models using approximate Bayes factors: A case study of ethnic disparities in birthweight

    PubMed Central

    Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.

    2013-01-01

    Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430
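
    The Laplace approximation used above to evaluate the Bayes-factor integrals can be illustrated in one dimension, where the method is exact for a Gaussian integrand:

```python
import math

def laplace_approx(logf, x0, h=1e-5):
    """Laplace's method in one dimension: the integral of exp(logf(x)) dx is
    approximately exp(logf(x0)) * sqrt(2*pi / -logf''(x0)) at the mode x0."""
    d2 = (logf(x0 + h) - 2.0 * logf(x0) + logf(x0 - h)) / h**2  # central diff
    return math.exp(logf(x0)) * math.sqrt(2.0 * math.pi / -d2)

# Sanity check on a Gaussian integrand, where Laplace's method is exact
mu, sigma = 1.3, 0.7
logf = lambda x: -0.5 * ((x - mu) / sigma) ** 2
approx = laplace_approx(logf, x0=mu)
exact = math.sqrt(2.0 * math.pi) * sigma     # closed-form integral
```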

  10. Estimate the contribution of incubation parameters influence egg hatchability using multiple linear regression analysis

    PubMed Central

    Khalil, Mohamed H.; Shebl, Mostafa K.; Kosba, Mohamed A.; El-Sabrout, Karim; Zaki, Nesma

    2016-01-01

    Aim: This research was conducted to determine the parameters most affecting the hatchability of indigenous and improved local chickens' eggs. Materials and Methods: Five parameters were studied (fertility, early and late embryonic mortalities, shape index, egg weight, and egg weight loss) on four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine the most influential one on hatchability. Results: The results showed significant differences in commercial and scientific hatchability among strains. The Alexandria strain had the highest significant commercial hatchability (80.70%). Regarding the studied strains, highly significant differences in hatching chick weight among strains were observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. Conclusion: A prediction of hatchability using multiple regression analysis could be a good tool to improve hatchability percentage in chickens. PMID:27651666
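
    Multiple linear regression of the kind used here can be sketched in a few lines by solving the normal equations. The data below are illustrative stand-ins, not the study's egg records:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. X is a list of rows, with the
    intercept column included explicitly."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # Gaussian elimination with partial pivoting on the augmented system
    A = [row[:] + [Xty[a]] for a, row in enumerate(XtX)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p + 1):
                A[r][k] -= f * A[c][k]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (A[r][p] - sum(A[r][k] * coef[k]
                                 for k in range(r + 1, p))) / A[r][r]
    return coef

# Exact-fit check on a hypothetical model y = 1 + 2*x1 - 0.5*x2
rows = [(1.0, x1, x2) for x1 in (0, 1, 2, 3) for x2 in (0, 1, 2)]
X = [list(r) for r in rows]
y = [1 + 2 * r[1] - 0.5 * r[2] for r in rows]
b = ols(X, y)
```

The percent contributions reported in the abstract come from decomposing the regression fit across predictors; the sketch above covers only the coefficient estimation step.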

  11. Development of a linearized unsteady aerodynamic analysis for cascade gust response predictions

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Hall, Kenneth C.

    1990-01-01

    A method for predicting the unsteady aerodynamic response of a cascade of airfoils to entropic, vortical, and acoustic gust excitations is being developed. Here, the unsteady flow is regarded as a small perturbation of a nonuniform isentropic and irrotational steady background flow. A splitting technique is used to decompose the linearized unsteady velocity into rotational and irrotational parts leading to equations for the complex amplitudes of the linearized unsteady entropy, rotational velocity, and velocity potential that are coupled only sequentially. The entropic and rotational velocity fluctuations are described by transport equations for which closed-form solutions in terms of the mean-flow drift and stream functions can be determined. The potential fluctuation is described by an inhomogeneous convected wave equation in which the source term depends on the rotational velocity field, and is determined using finite-difference procedures. The analytical and numerical techniques used to determine the linearized unsteady flow are outlined. Results are presented to indicate the status of the solution procedure and to demonstrate the impact of blade geometry and mean blade loading on the aerodynamic response of cascades to vortical gust excitations. The analysis described herein leads to very efficient predictions of cascade unsteady aerodynamic response phenomena making it useful for turbomachinery aeroelastic and aeroacoustic design applications.

  12. Non-linear vibrations of sandwich viscoelastic shells

    NASA Astrophysics Data System (ADS)

    Benchouaf, Lahcen; Boutyour, El Hassan; Daya, El Mostafa; Potier-Ferry, Michel

    2018-04-01

    This paper deals with the non-linear vibration of sandwich viscoelastic shell structures. Coupling a harmonic balance method with Galerkin's procedure, one obtains an amplitude equation depending on two complex coefficients. The latter are determined by solving a classical eigenvalue problem and two linear ones. This permits obtaining the non-linear frequency and the non-linear loss factor as functions of the displacement amplitude. To validate our approach, these relationships are illustrated in the case of a circular sandwich ring.

  13. Comparative analysis of linear and non-linear method of estimating the sorption isotherm parameters for malachite green onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-08-21

    The experimental equilibrium data of malachite green onto activated carbon were fitted to the Freundlich, Langmuir, and Redlich-Peterson isotherms by linear and non-linear methods. A comparison between the linear and non-linear methods of estimating the isotherm parameters was discussed. The four different linearized forms of the Langmuir isotherm were also discussed. The results confirmed that the non-linear method is a better way to obtain the isotherm parameters. The best-fitting isotherms were the Langmuir and Redlich-Peterson isotherms. Redlich-Peterson is a special case of Langmuir when the Redlich-Peterson isotherm constant g is unity.
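
    The contrast between the linearized and non-linear fits can be illustrated with the Langmuir model q = qm·K·C/(1 + K·C), whose reciprocal form 1/q = 1/qm + 1/(qm·K·C) is a straight line in 1/C. A minimal sketch with made-up, noise-free data (a case where both approaches agree exactly; with measurement noise, the linearization distorts the error structure, which is the paper's point):

```python
def simple_linreg(x, y):
    """Least-squares slope and intercept of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope  # (intercept, slope)

qm_true, K_true = 40.0, 0.25  # hypothetical capacity and constant
C = [1.0, 2.0, 5.0, 10.0, 20.0]
q = [qm_true * K_true * c / (1 + K_true * c) for c in C]

# Lineweaver-Burk-type linearization: 1/q = 1/qm + (1/(qm*K)) * (1/C)
icpt, slope = simple_linreg([1 / c for c in C], [1 / qi for qi in q])
qm_fit = 1 / icpt
K_fit = icpt / slope
```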

  14. High-throughput quantitative biochemical characterization of algal biomass by NIR spectroscopy; multiple linear regression and multivariate linear regression analysis.

    PubMed

    Laurens, L M L; Wolfrum, E J

    2013-12-18

    One of the challenges associated with microalgal biomass characterization and the comparison of microalgal strains and conversion processes is the rapid determination of the composition of algae. We have developed and applied a high-throughput screening technology based on near-infrared (NIR) spectroscopy for the rapid and accurate determination of algal biomass composition. We show that NIR spectroscopy can accurately predict the full composition using multivariate linear regression analysis of varying lipid, protein, and carbohydrate content of algal biomass samples from three strains. We also demonstrate a high quality of predictions of an independent validation set. A high-throughput 96-well configuration for spectroscopy gives equally good prediction relative to a ring-cup configuration, and thus, spectra can be obtained from as little as 10-20 mg of material. We found that lipids exhibit a dominant, distinct, and unique fingerprint in the NIR spectrum that allows for the use of single and multiple linear regression of respective wavelengths for the prediction of the biomass lipid content. This is not the case for carbohydrate and protein content, and thus, the use of multivariate statistical modeling approaches remains necessary.

  15. Stability analysis of piecewise non-linear systems and its application to chaotic synchronisation with intermittent control

    NASA Astrophysics Data System (ADS)

    Wang, Qingzhi; Tan, Guanzheng; He, Yong; Wu, Min

    2017-10-01

    This paper considers a stability analysis issue of piecewise non-linear systems and applies it to intermittent synchronisation of chaotic systems. First, based on piecewise Lyapunov function methods, more general and less conservative stability criteria of piecewise non-linear systems in periodic and aperiodic cases are presented, respectively. Next, intermittent synchronisation conditions of chaotic systems are derived which extend existing results. Finally, Chua's circuit is taken as an example to verify the validity of our methods.

  16. Social inequality, lifestyles and health - a non-linear canonical correlation analysis based on the approach of Pierre Bourdieu.

    PubMed

    Grosse Frie, Kirstin; Janssen, Christian

    2009-01-01

    Based on the theoretical and empirical approach of Pierre Bourdieu, a multivariate non-linear method is introduced as an alternative way to analyse the complex relationships between social determinants and health. The analysis is based on face-to-face interviews with 695 randomly selected respondents aged 30 to 59. Variables regarding socio-economic status, life circumstances, lifestyles, health-related behaviour and health were chosen for the analysis. In order to determine whether the respondents can be differentiated and described based on these variables, a non-linear canonical correlation analysis (OVERALS) was performed. The results can be described on three dimensions; the eigenvalues add up to a fit of 1.444, which can be interpreted as approximately 50% of explained variance. The three-dimensional space illustrates correspondences between variables and provides a framework for interpretation based on latent dimensions, which can be described by age, education, income and gender. Using non-linear canonical correlation analysis, health characteristics can be analysed in conjunction with socio-economic conditions and lifestyles. Based on Bourdieu's theoretical approach, the complex correlations between these variables can be more substantially interpreted and presented.

  17. Analysis of ERTS-1 linear features in New York State

    NASA Technical Reports Server (NTRS)

    Isachsen, Y. W. (Principal Investigator); Fakundiny, R. H.; Forster, S. W.

    1974-01-01

    The author has identified the following significant results. All ERTS-1 linears confirmed to date have topographic expression although they may appear as featureless tonal linears on the imagery. A bias is unavoidably introduced against any linears which may parallel raster lines, lithological trends, or the azimuth of solar illumination. Ground study of ERTS-1 topographic lineaments in the Adirondacks indicates: outcrops along linears are even more rare than expected, fault breccias are found along some NNE lineaments, chloritization and slickensiding without brecciation characterize one EW lineament whereas closely-spaced jointing plus a zone of plastic shear define another. Field work in the Catskills suggests that the prominent new NNE lineaments may be surface manifestations of normal faulting in the basement, and that it may become possible to map major joint sets over extensive plateau regions directly on the imagery. Fall and winter images each display some unique linears, and long linears on the fall image commonly appear as aligned segments on the winter scene. A computer-processed color composite image permitted the extraction of additional information on the shaded side of mountains.

  18. Stability Analysis of Finite Difference Schemes for Hyperbolic Systems, and Problems in Applied and Computational Linear Algebra.

    DTIC Science & Technology

    FINITE DIFFERENCE THEORY, *LINEAR ALGEBRA, APPLIED MATHEMATICS, APPROXIMATION(MATHEMATICS), BOUNDARY VALUE PROBLEMS, COMPUTATIONS, HYPERBOLAS, MATHEMATICAL MODELS, NUMERICAL ANALYSIS, PARTIAL DIFFERENTIAL EQUATIONS, STABILITY.

  19. Linear discriminant analysis based on L1-norm maximization.

    PubMed

    Zhong, Fujin; Zhang, Jiashu

    2013-08-01

    Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion and the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singular problem of the within-class scatter matrix for conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
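
    For reference, the conventional L2-norm Fisher criterion that the proposed L1 variant replaces yields the direction w = Sw⁻¹(m1 − m0), where Sw is the pooled within-class scatter. A small two-class, two-dimensional sketch with toy data (not the paper's experiments):

```python
def lda_direction(class0, class1):
    """Fisher discriminant direction w = Sw^{-1}(m1 - m0) for 2-D data,
    i.e. the classical L2-norm criterion."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    m0, m1 = mean(class0), mean(class1)
    # pooled within-class scatter matrix
    S = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class0, m0), (class1, m1)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for a in range(2):
                for b in range(2):
                    S[a][b] += d[a] * d[b]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    # 2x2 inverse applied to the mean difference
    return [(S[1][1] * dm[0] - S[0][1] * dm[1]) / det,
            (-S[1][0] * dm[0] + S[0][0] * dm[1]) / det]

# Two classes separated along the x-axis: w should point along x
class0 = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
class1 = [(3.0, 0.0), (4.0, 1.0), (3.0, 1.0), (4.0, 0.0)]
w = lda_direction(class0, class1)
```

An outlier in either class inflates the L2-based scatter quadratically, which is precisely the sensitivity the L1-norm formulation is designed to reduce.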

  20. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    PubMed

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, making sICA difficult to apply to data in which interdependent sources and confounding factors exist. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a pure model-based method when estimating activation induced by each task as well as both tasks.

  1. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  2. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
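
    The complex-step method mentioned above yields an exact linearization because the derivative appears in the imaginary part of the perturbed function, with no subtractive cancellation: f'(x) ≈ Im f(x + ih)/h for tiny h. A one-variable sketch (the CFD implementation is of course far more involved):

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step differentiation: f'(x) ~ Im(f(x + i*h))/h.
    Unlike finite differences, no subtraction of nearly equal numbers
    occurs, so h can be taken absurdly small with no loss of accuracy."""
    return f(x + 1j * h).imag / h

# Test function with a known derivative: d/dx [e^x sin x] = e^x (sin x + cos x)
d = complex_step_derivative(lambda z: cmath.exp(z) * cmath.sin(z), 1.0)
expected = math.exp(1.0) * (math.sin(1.0) + math.cos(1.0))
```

The same trick applied to a residual routine gives machine-precision Jacobian-vector products, which is what makes the ETE linearization "exact" with minimal source changes.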

  3. Simulating the effect of non-linear mode coupling in cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Kiessling, A.; Taylor, A. N.; Heavens, A. F.

    2011-09-01

    Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy that cosmological parameters can be measured with a given experiment and to optimize the design of experiments. However, the standard approach usually assumes both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence
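
    For a Gaussian likelihood with a model linear in its parameters and known noise, the Fisher forecast being compared here reduces to F_jk = Σ_i (∂μ_i/∂θ_j)(∂μ_i/∂θ_k)/σ². A toy two-parameter sketch, which also shows how parameter correlations inflate marginal errors relative to conditional ones:

```python
import math

def fisher_matrix(xs, sigma):
    """Fisher matrix for a Gaussian-likelihood linear model mu_i = a + b*x_i:
    F_jk = sum_i (dmu_i/dtheta_j)(dmu_i/dtheta_k) / sigma^2."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for x in xs:
        g = [1.0, x]  # derivatives of mu with respect to (a, b)
        for j in range(2):
            for k in range(2):
                F[j][k] += g[j] * g[k] / sigma**2
    return F

def marginal_errors(F):
    """Marginal 1-sigma errors sqrt((F^{-1})_ii) for a 2x2 Fisher matrix."""
    det = F[0][0] * F[1][1] - F[0][1] ** 2
    return [math.sqrt(F[1][1] / det), math.sqrt(F[0][0] / det)]

F = fisher_matrix([0.0, 1.0, 2.0, 3.0], sigma=1.0)
marg = marginal_errors(F)
cond = [1.0 / math.sqrt(F[i][i]) for i in range(2)]
# marginal errors exceed conditional ones whenever parameters are correlated,
# mirroring the paper's contrast between confidence areas and marginal errors
```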

  4. A Systematic Review of Methodology: Time Series Regression Analysis for Environmental Factors and Infectious Diseases

    PubMed Central

    Imai, Chisato; Hashizume, Masahiro

    2015-01-01

    Background: Time series analysis is suitable for investigations of relatively direct and short-term effects of exposures on outcomes. In environmental epidemiology studies, this method has been one of the standard approaches to assess impacts of environmental factors on acute non-infectious diseases (e.g. cardiovascular deaths), conventionally with generalized linear or additive models (GLM and GAM). However, the same analysis practices are often observed with infectious diseases despite substantial differences from non-infectious diseases that may result in analytical challenges. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic review was conducted to elucidate important issues in assessing the associations between environmental factors and infectious diseases using time series analysis with GLM and GAM. Published studies on the associations between weather factors and malaria, cholera, dengue, and influenza were targeted. Findings: Our review raised issues regarding the estimation of susceptible population and exposure lag times, the adequacy of seasonal adjustments, the presence of strong autocorrelations, and the lack of a smaller observation time unit of outcomes (i.e. daily data). These concerns may be attributable to features specific to infectious diseases, such as transmission among individuals and complicated causal mechanisms. Conclusion: The consequence of not taking adequate measures to address these issues is distortion of the appropriate risk quantifications of exposure factors. Future studies should pay careful attention to details and examine alternative models or methods that improve studies using time series regression analysis for environmental determinants of infectious diseases. PMID:25859149

  5. Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.

    ERIC Educational Resources Information Center

    Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.

    This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
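
    Of the retention criteria compared, the Kaiser rule is the simplest to state: retain factors whose eigenvalue exceeds 1. A minimal illustration using a two-variable correlation matrix, whose eigenvalues are 1 ± r (this sketches just one of the ten criteria, not the study's simulations):

```python
def kaiser_retained(eigenvalues):
    """Kaiser rule: count factors whose eigenvalue exceeds 1.0."""
    return sum(1 for ev in eigenvalues if ev > 1.0)

# For a 2-variable correlation matrix [[1, r], [r, 1]] the eigenvalues
# are 1 + r and 1 - r, so the rule keeps exactly one factor when r > 0.
r = 0.6
eigs = [1 + r, 1 - r]
retained = kaiser_retained(eigs)
```

Parallel analysis refines this by comparing each observed eigenvalue against the distribution of eigenvalues from random data of the same dimensions, rather than against the fixed threshold of 1.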

  6. Nanoimaging of resonating hyperbolic polaritons in linear boron nitride antennas

    PubMed Central

    Alfaro-Mozaz, F. J.; Alonso-González, P.; Vélez, S.; Dolado, I.; Autore, M.; Mastel, S.; Casanova, F.; Hueso, L. E.; Li, P.; Nikitin, A. Y.; Hillenbrand, R.

    2017-01-01

    Polaritons in layered materials—including van der Waals materials—exhibit hyperbolic dispersion and strong field confinement, which makes them highly attractive for applications including optical nanofocusing, sensing and control of spontaneous emission. Here we report a near-field study of polaritonic Fabry–Perot resonances in linear antennas made of a hyperbolic material. Specifically, we study hyperbolic phonon–polaritons in rectangular waveguide antennas made of hexagonal boron nitride (h-BN, a prototypical van der Waals crystal). Infrared nanospectroscopy and nanoimaging experiments reveal sharp resonances with large quality factors around 100, exhibiting atypical modal near-field patterns that have no analogue in conventional linear antennas. By performing a detailed mode analysis, we can assign the antenna resonances to a single waveguide mode originating from the hybridization of hyperbolic surface phonon–polaritons (Dyakonov polaritons) that propagate along the edges of the h-BN waveguide. Our work establishes the basis for the understanding and design of linear waveguides, resonators, sensors and metasurface elements based on hyperbolic materials and metamaterials. PMID:28589941

  7. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    PubMed

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.

  8. Generation of wavy structure on lipid membrane by peripheral proteins: a linear elastic analysis.

    PubMed

    Mahata, Paritosh; Das, Sovan Lal

    2017-05-01

    We carry out a linear elastic analysis to study wavy structure generation on lipid membrane by peripheral membrane proteins. We model the lipid membrane as linearly elastic and anisotropic material. The hydrophobic insertion by proteins into the lipid membrane has been idealized as penetration of rigid rod-like inclusions into the membrane and the electrostatic interaction between protein and membrane has been modeled by a distributed surface traction acting on the membrane surface. With the proposed model we study curvature generation by several binding domains of peripheral membrane proteins containing BAR domains and amphipathic alpha-helices. It is observed that electrostatic interaction is essential for curvature generation by the BAR domains. © 2017 Federation of European Biochemical Societies.

  9. Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.

    PubMed

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
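
    The GLM contrast test that this work carries over to CCA has the familiar form t = c′β̂ / √(σ̂² c′(X′X)⁻¹c). A self-contained sketch for a two-column design matrix, with toy data rather than fMRI time series:

```python
import math

def glm_contrast_t(X, y, c):
    """t statistic for a general linear contrast c'beta in the GLM,
    t = c'b / sqrt(s2 * c'(X'X)^{-1} c), for a 2-column design matrix X."""
    n = len(X)
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(2)]
           for a in range(2)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(2)]
    det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
    inv = [[XtX[1][1] / det, -XtX[0][1] / det],
           [-XtX[1][0] / det, XtX[0][0] / det]]
    b = [inv[r][0] * Xty[0] + inv[r][1] * Xty[1] for r in range(2)]
    resid = [y[i] - (X[i][0] * b[0] + X[i][1] * b[1]) for i in range(n)]
    s2 = sum(e * e for e in resid) / (n - 2)  # residual variance estimate
    var_c = sum(c[j] * inv[j][k] * c[k] for j in range(2) for k in range(2))
    return (c[0] * b[0] + c[1] * b[1]) / math.sqrt(s2 * var_c)

# Hypothetical regressor with a strong effect: y = 1 + 2x plus small noise
X = [[1.0, float(x)] for x in range(6)]
noise = [0.1, -0.1, 0.1, -0.1, 0.1, -0.1]
y = [1.0 + 2.0 * X[i][1] + noise[i] for i in range(6)]
t = glm_contrast_t(X, y, c=[0.0, 1.0])  # test the slope coefficient
```

The paper's contribution is to obtain an analogous directional statistic within CCA without re-fitting for each contrast; the GLM case above is the baseline it generalizes.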

  10. Refining and end use study of coal liquids II - linear programming analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowe, C.; Tam, S.

    1995-12-31

    A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, then the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.

  11. Dynamic analysis of geometrically non-linear three-dimensional beams under moving mass

    NASA Astrophysics Data System (ADS)

    Zupan, E.; Zupan, D.

    2018-01-01

    In this paper, we present a coupled dynamic analysis of a moving particle on a deformable three-dimensional frame. The presented numerical model is capable of considering arbitrarily curved and twisted initial geometry of the beam and takes into account geometric non-linearity of the structure. Coupled with the dynamic equations of the structure, the equations of the moving particle are solved. The moving particle represents the dynamic load and varies the mass distribution of the structure, while its path adapts due to the deformability of the structure. The coupled geometrically non-linear behaviour of beam and particle is studied. The equation of motion of the particle is added to the system of beam dynamic equations, and an additional unknown representing the coordinate of the curvilinear path of the particle is introduced. The specially designed finite-element formulation of the three-dimensional beam based on the weak form of consistency conditions is employed, where only the boundary conditions are affected by the contact forces.

  12. Computing Linear Mathematical Models Of Aircraft

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1991-01-01

    Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended for use in software tool to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.
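
    Linearization of the kind LINEAR performs can be sketched numerically: perturb each state and control in turn and difference the non-linear dynamics to build the A and B matrices of x' ≈ A·dx + B·du. LINEAR itself uses aircraft-specific aerodynamic models; the pendulum below is purely illustrative:

```python
import math

def linearize(f, x0, u0, eps=1e-6):
    """Numerically linearize x' = f(x, u) about (x0, u0) via central
    differences, returning the state matrix A and input matrix B."""
    n, m = len(x0), len(u0)
    def col(fun, vec, j):
        # derivative of every component of fun with respect to vec[j]
        hi, lo = vec[:], vec[:]
        hi[j] += eps
        lo[j] -= eps
        return [(a - b) / (2 * eps) for a, b in zip(fun(hi), fun(lo))]
    cols_A = [col(lambda x: f(x, u0), x0, j) for j in range(n)]
    cols_B = [col(lambda u: f(x0, u), u0, j) for j in range(m)]
    A = [list(row) for row in zip(*cols_A)]
    B = [list(row) for row in zip(*cols_B)]
    return A, B

# Pendulum-like toy dynamics: x = (theta, omega), u = (torque,)
def f(x, u):
    return [x[1], -math.sin(x[0]) + u[0]]

A, B = linearize(f, [0.0, 0.0], [0.0])
# about the origin we expect A ~ [[0, 1], [-1, 0]] and B ~ [[0], [1]]
```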

  13. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially when computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples, thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
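
    The forward-backward idea can be illustrated at prediction order 1: the forward criterion predicts each sample only from its past, while the forward-backward criterion also predicts each sample from its future neighbour, so every sample contributes to the error. A sketch on a sampled sinusoid, for which the optimal order-1 coefficient is cos ω (this is a generic illustration, not the QCP-FB weighting scheme itself):

```python
import math

def lpc1_forward(x):
    """Order-1 forward linear prediction: minimize sum (x[n] - a*x[n-1])^2."""
    num = sum(x[n] * x[n - 1] for n in range(1, len(x)))
    den = sum(x[n - 1] ** 2 for n in range(1, len(x)))
    return num / den

def lpc1_forward_backward(x):
    """Order-1 forward-backward prediction: minimize the sum of forward
    errors (x[n] - a*x[n-1])^2 and backward errors (x[n-1] - a*x[n])^2."""
    num = 2.0 * sum(x[n] * x[n - 1] for n in range(1, len(x)))
    den = sum(x[n] ** 2 + x[n - 1] ** 2 for n in range(1, len(x)))
    return num / den

# A sampled sinusoid spanning whole periods
N, w = 200, 2 * math.pi / 40
x = [math.cos(w * n) for n in range(N)]
af = lpc1_forward(x)
afb = lpc1_forward_backward(x)
```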

  14. A FORTRAN program for the analysis of linear continuous and sample-data systems

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.

    1976-01-01

    A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputting explicit data matrices or matrices constructed in user-written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.

  15. Linear and non-linear Modified Gravity forecasts with future surveys

    NASA Astrophysics Data System (ADS)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way, that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.

  16. Linearity assumption in soil-to-plant transfer factors of natural uranium and radium in Helianthus annuus L.

    PubMed

    Rodríguez, P Blanco; Tomé, F Vera; Fernández, M Pérez; Lozano, J C

    2006-05-15

    The validity of the linearity assumption for soil-to-plant transfer factors of natural uranium and (226)Ra was tested using Helianthus annuus L. (sunflower) grown in a hydroponic medium. Transfer of natural uranium and (226)Ra was tested both in the aerial fraction of the plants and in the whole seedlings (roots and shoots). The results show that the linearity assumption can be considered valid in the hydroponic growth of sunflowers for the radionuclides studied. The ability of sunflowers to translocate uranium and (226)Ra was also investigated, as well as the feasibility of using sunflower plants to remove uranium and radium from contaminated water and, by extension, their potential for phytoextraction. In this sense, the removal percentages obtained for natural uranium and (226)Ra were 24% and 42%, respectively. Practically all the uranium is accumulated in the roots. However, 86% of the (226)Ra activity concentration in roots was translocated to the aerial part.
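
    The transfer factor and removal percentage reported above are simple concentration ratios. A minimal sketch of how such quantities are computed (the function names and the concentration values below are hypothetical, not taken from the study):

```python
def transfer_factor(c_plant, c_medium):
    """Soil(or solution)-to-plant transfer factor: ratio of the activity
    concentration in plant tissue to that in the growth medium
    (dimensionless when both are in the same units, e.g. Bq/kg)."""
    return c_plant / c_medium

def removal_percent(c_initial, c_final):
    """Percentage of activity removed from the growth medium by the plant."""
    return 100.0 * (c_initial - c_final) / c_initial

# Illustration of the linearity assumption: the transfer factor stays
# constant as the medium concentration varies (hypothetical numbers).
tf_low = transfer_factor(c_plant=12.0, c_medium=100.0)    # 0.12
tf_high = transfer_factor(c_plant=60.0, c_medium=500.0)   # 0.12
```

    Under linearity, plant uptake scales proportionally with the medium concentration, so a single transfer factor characterizes the system across concentrations.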

  17. Spatial Bayesian Latent Factor Regression Modeling of Coordinate-based Meta-analysis Data

    PubMed Central

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D.; Nichols, Thomas E.

    2017-01-01

    Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the paper are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to 1) identify areas of consistent activation; and 2) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterised as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. PMID:28498564

  18. Spatial Bayesian latent factor regression modeling of coordinate-based meta-analysis data.

    PubMed

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D; Nichols, Thomas E

    2018-03-01

    Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the article are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to (i) identify areas of consistent activation; and (ii) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterized as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. © 2017, The International Biometric Society.
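
    As a rough illustration of the generative side of such a model (not the authors' implementation), the sketch below draws "foci" from a one-dimensional doubly stochastic Poisson process whose log intensity is a linear combination of Gaussian basis functions, using the standard thinning algorithm. All names, basis choices, and parameter values are illustrative assumptions:

```python
import math
import random

def log_intensity(x, centers, coefs, width=0.1):
    """Log intensity as a linear combination of Gaussian basis functions."""
    return sum(c * math.exp(-((x - m) ** 2) / (2.0 * width ** 2))
               for m, c in zip(centers, coefs))

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_foci(centers, coefs, lam_max, rng, domain=(0.0, 1.0)):
    """Sample an inhomogeneous Poisson process on `domain` by thinning:
    draw a homogeneous process at rate lam_max, then keep each point x
    with probability intensity(x) / lam_max."""
    a, b = domain
    n = poisson(lam_max * (b - a), rng)
    pts = [a + (b - a) * rng.random() for _ in range(n)]
    return [x for x in pts
            if rng.random() < math.exp(log_intensity(x, centers, coefs)) / lam_max]

rng = random.Random(0)
# Two "active regions" via two basis coefficients; lam_max = 8 bounds the
# intensity since exp(log_intensity) <= exp(2.0) here.
foci = sample_foci(centers=[0.3, 0.7], coefs=[2.0, 1.0], lam_max=8.0, rng=rng)
```

    The latent factor step in the paper then places a low-dimensional structure on the basis coefficients across studies; that part is omitted here.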

  19. Human factors evaluation of teletherapy: Function and task analysis. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaye, R.D.; Henriksen, K.; Jones, R.

    1995-07-01

    As a treatment methodology, teletherapy selectively destroys cancerous and other tissue by exposure to an external beam of ionizing radiation. Sources of radiation are either a radioactive isotope, typically Cobalt-60 (Co-60), or a linear accelerator. Records maintained by the NRC have identified instances of teletherapy misadministration where the delivered radiation dose has differed from the radiation prescription (e.g., instances where fractions were delivered to the wrong patient, to the wrong body part, or were too great or too little with respect to the defined treatment volume). Both human error and machine malfunction have led to misadministrations. Effective and safe treatment requires a concern for precision and consistency of human-human and human-machine interactions throughout the course of therapy. The present study is the first part of a series of human factors evaluations for identifying the root causes that lead to human error in the teletherapy environment. The human factors evaluations included: (1) a function and task analysis of teletherapy activities, (2) an evaluation of the human-system interfaces, (3) an evaluation of procedures used by teletherapy staff, (4) an evaluation of the training and qualifications of treatment staff (excluding the oncologists), (5) an evaluation of organizational practices and policies, and (6) an identification of problems and alternative approaches for NRC and industry attention. The present report addresses the function and task analysis of teletherapy activities and provides the foundation for the conduct of the subsequent evaluations. The report includes sections on background, methodology, a description of the function and task analysis, and use of the task analysis findings for the subsequent tasks. The function and task analysis database is also included.

  20. Analysis of friction and instability by the centre manifold theory for a non-linear sprag-slip model

    NASA Astrophysics Data System (ADS)

    Sinou, J.-J.; Thouverez, F.; Jezequel, L.

    2003-08-01

    This paper presents research devoted to the study of instability phenomena in a non-linear model with a constant brake friction coefficient. Indeed, the impact of unstable oscillations can be catastrophic. It can cause vehicle control problems and component degradation. Accordingly, complex stability analysis is required. This paper outlines stability analysis and the centre manifold approach for studying instability problems. More precisely, one considers brake vibrations and more specifically heavy-truck judder, where the dynamic characteristics of the whole front axle assembly are concerned, even if the source of judder is located in the brake system. The modelling introduces the sprag-slip mechanism based on dynamic coupling due to buttressing. The non-linearity is expressed as a polynomial with quadratic and cubic terms. This model does not require a negative brake friction coefficient in order to predict the instability phenomena. Finally, the centre manifold approach is used to obtain equations for the limit cycle amplitudes. The centre manifold theory allows the reduction of the number of equations of the original system in order to obtain a simplified system, without losing the dynamics of the original system or the contributions of the non-linear terms. The goal is the study of the stability analysis and the validation of the centre manifold approach for a complex non-linear model by comparing results obtained by solving the full system and by using the centre manifold approach. The brake friction coefficient is used as an unfolding parameter of the fundamental Hopf bifurcation point.

  1. Analysis of factors that inhibit the implementation of an Information Security Management System (ISMS) based on ISO 27001

    NASA Astrophysics Data System (ADS)

    Tatiara, R.; Fajar, A. N.; Siregar, B.; Gunawan, W.

    2018-03-01

    The purpose of this research is to determine the factors that inhibit the implementation of an ISMS based on ISO 27001, and to propose follow-up recommendations on those factors. Data were collected through questionnaires distributed to 182 respondents among users in data center operation (DCO) at bca, Indonesian telecommunication international (telin), and the data centre division at the Indonesian Ministry of Health. We analysed the data with multiple linear regression analysis and paired t-tests. The results identify multiple factors that inhibit ISMS implementation in the three organizations that have implemented and operate an ISMS, relating to ISMS documentation management and continual improvement. From this research, we conclude that successful, continuous implementation of an ISMS requires the involvement of all parties.
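
    A minimal pure-Python sketch of the multiple linear regression step used in such an analysis, via the normal equations. The design matrix and response values below are toy data, not the questionnaire results:

```python
def ols(X, y):
    """Multiple linear regression via the normal equations (X'X)b = X'y.
    X is a list of rows, each starting with 1.0 for the intercept."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gauss-Jordan elimination with partial pivoting on [XtX | Xty]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Toy design matrix: intercept column plus two hypothetical inhibiting-factor
# scores; y generated exactly as 1 + 2*x1 - 3*x2.
X = [[1.0, 0.0, 0.0],
     [1.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 1.0],
     [1.0, 2.0, 1.0]]
y = [1.0, 3.0, -2.0, 0.0, 2.0]
coefs = ols(X, y)   # recovers [1.0, 2.0, -3.0]
```

    In practice one would use a statistics package for the coefficient tests; the sketch only shows the estimation step.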

  2. Non-Linear Analysis of Mode II Fracture in the end Notched Flexure Beam

    NASA Astrophysics Data System (ADS)

    Rizov, V.

    2016-03-01

    An analysis is carried out of fracture in the End Notched Flexure (ENF) beam configuration, taking into account the material non-linearity. For this purpose, the J-integral approach is applied. A non-linear model based on the classical beam theory is used. The mechanical behaviour of the ENF configuration is described by the Ramberg-Osgood stress-strain curve. It is assumed that the material possesses the same properties in tension and compression. The influence of the material constants in the Ramberg-Osgood stress-strain equation on the fracture behaviour is evaluated. The effect of the crack length on the J-integral value is investigated, too. The analytical approach developed in the present paper is very useful for parametric analyses, since the simple formulae obtained capture the essentials of the non-linear fracture in the ENF configuration.

  3. A linear and non-linear polynomial neural network modeling of dissolved oxygen content in surface water: Inter- and extrapolation performance with inputs' significance analysis.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-01-01

    Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, besides interpolation, were the determination of the extrapolation performance of an ANN model, which was developed for the prediction of DO content in the Danube River, and the assessment of the relationship between the significance of inputs and prediction error in the presence of values which were out of the range of training. The applied ANN is a polynomial neural network (PNN) which performs embedded selection of the most important inputs during learning, and provides a model in the form of linear and non-linear polynomial functions, which can then be used for a detailed analysis of the significance of inputs. The available dataset, which contained 1912 monitoring records for 17 water quality parameters, was split into a "regular" subset that contains normally distributed and low variability data, and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R² = 0.82), but it was not robust in extrapolation (R² = 0.63). The analysis of extrapolation results has shown that the prediction errors are correlated with the significance of inputs. Namely, the out-of-training-range values of the inputs with low importance do not affect significantly the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. It was observed that DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over p

  4. Linear control of a boiler-turbine unit: analysis and design.

    PubMed

    Tan, Wen; Fang, Fang; Tian, Liang; Fu, Caifen; Liu, Jizhen

    2008-04-01

    Linear control of a boiler-turbine unit is discussed in this paper. Based on the nonlinear model of the unit, this paper analyzes the nonlinearity of the unit, and selects the appropriate operating points so that the linear controller can achieve wide-range performance. Simulation and experimental results at the No. 4 Unit at the Dalate Power Plant show that the linear controller can achieve the desired performance under a specific range of load variations.

  5. Psychometric properties of the Intrinsic Motivation Inventory in a competitive sport setting: a confirmatory factor analysis.

    PubMed

    McAuley, E; Duncan, T; Tammen, V V

    1989-03-01

    The present study was designed to assess selected psychometric properties of the Intrinsic Motivation Inventory (IMI) (Ryan, 1982), a multidimensional measure of subjects' experience with regard to experimental tasks. Subjects (N = 116) competed in a basketball free-throw shooting game, following which they completed the IMI. The LISREL VI computer program was employed to conduct a confirmatory factor analysis to assess the tenability of a five factor hierarchical model representing four first-order factors or dimensions and a second-order general factor representing intrinsic motivation. Indices of model acceptability tentatively suggest that the sport data adequately fit the hypothesized five factor hierarchical model. Alternative models were tested but did not result in significant improvements in the goodness-of-fit indices, suggesting the proposed model to be the most accurate of the models tested. Coefficient alphas for the four dimensions and the overall scale indicated adequate reliability. The results are discussed with regard to the importance of accurate assessment of psychological constructs and the use of linear structural equations in confirming the factor structures of measures.

  6. Linearized blade row compression component model. Stability and frequency response analysis of a J85-13 compressor

    NASA Technical Reports Server (NTRS)

    Tesch, W. A.; Moszee, R. H.; Steenken, W. G.

    1976-01-01

    NASA developed stability and frequency response analysis techniques were applied to a dynamic blade row compression component stability model to provide a more economic approach to surge line and frequency response determination than that provided by time-dependent methods. This blade row model was linearized and the Jacobian matrix was formed. The clean-inlet-flow stability characteristics of the compressors of two J85-13 engines were predicted by applying the alternate Routh-Hurwitz stability criterion to the Jacobian matrix. The predicted surge line agreed with the clean-inlet-flow surge line predicted by the time-dependent method to a high degree except for one engine at 94% corrected speed. No satisfactory explanation of this discrepancy was found. The frequency response of the linearized system was determined by evaluating its Laplace transfer function. The results of the linearized-frequency-response analysis agree with the time-dependent results when the time-dependent inlet total-pressure and exit-flow function amplitude boundary conditions are less than 1 percent and 3 percent, respectively. The stability analysis technique was extended to a two-sector parallel compressor model with and without interstage crossflow and predictions were carried out for total-pressure distortion extents of 180 deg, 90 deg, 60 deg, and 30 deg.

  7. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    PubMed

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  8. A Secondary Antibody-Detecting Molecular Weight Marker with Mouse and Rabbit IgG Fc Linear Epitopes for Western Blot Analysis.

    PubMed

    Lin, Wen-Wei; Chen, I-Ju; Cheng, Ta-Chun; Tung, Yi-Ching; Chu, Pei-Yu; Chuang, Chih-Hung; Hsieh, Yuan-Chin; Huang, Chien-Chiao; Wang, Yeng-Tseng; Kao, Chien-Han; Roffler, Steve R; Cheng, Tian-Lu

    2016-01-01

    Molecular weight markers that can tolerate denaturing conditions and be auto-detected by secondary antibodies offer great efficacy and convenience for Western Blotting. Here, we describe M&R LE protein markers which contain linear epitopes derived from the heavy chain constant regions of mouse and rabbit immunoglobulin G (IgG Fc LE). These markers can be directly recognized and stained by a wide range of anti-mouse and anti-rabbit secondary antibodies. We selected three mouse (M1, M2 and M3) linear IgG1 and three rabbit (R1, R2 and R3) linear IgG heavy chain epitope candidates based on their respective crystal structures. Western blot analysis indicated that the M2 and R2 linear epitopes are effectively recognized by anti-mouse and anti-rabbit secondary antibodies, respectively. We fused the M2 and R2 epitopes (M&R LE) and incorporated the polypeptide in a range of 15-120 kDa auto-detecting markers (M&R LE protein marker). The M&R LE protein marker can be auto-detected by anti-mouse and anti-rabbit IgG secondary antibodies in standard immunoblots. Linear regression analysis of the M&R LE protein marker, plotted as gel mobility versus the log of the marker molecular weights, revealed good linearity with a coefficient of determination (R²) of 0.9965, indicating that the M&R LE protein marker displays high accuracy for determining protein molecular weights. This accurate, regular and auto-detected M&R LE protein marker may provide a simple, efficient and economical tool for protein analysis.
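
    The calibration described (gel mobility against the log of molecular weight) is an ordinary least-squares line fit. A minimal sketch; the mobility values and ladder weights below are invented for illustration, not the study's data:

```python
import math

def fit_line(xs, ys):
    """Least-squares fit ys ≈ a + b*xs; returns (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical marker ladder: relative gel mobility vs known MW (kDa).
mobility = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
mw_kda = [120, 85, 60, 40, 25, 15]
a, b, r2 = fit_line(mobility, [math.log10(m) for m in mw_kda])

def estimate_mw(mob):
    """Estimate an unknown band's MW (kDa) from its mobility."""
    return 10 ** (a + b * mob)
```

    An unknown band's mobility is then read off the gel and converted back through the calibration line, which is the standard way such markers are used to size proteins.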

  9. LINEAR LATTICE AND TRAJECTORY RECONSTRUCTION AND CORRECTION AT FAST LINEAR ACCELERATOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, A.; Edstrom, D.; Halavanau, A.

    2017-07-16

    The low energy part of the FAST linear accelerator, based on 1.3 GHz superconducting RF cavities, was successfully commissioned [1]. During commissioning, beam-based model-dependent methods were used to correct the linear lattice and trajectory. The lattice correction algorithm is based on analysis of beam shapes from profile monitors and of trajectory responses to dipole correctors. Trajectory responses to field gradient variations in quadrupoles and phase variations in superconducting RF cavities were used to correct bunch offsets in quadrupoles and accelerating cavities relative to their magnetic axes. Details of the methods used and experimental results are presented.

  10. Linear stability analysis of particle-laden hypopycnal plumes

    NASA Astrophysics Data System (ADS)

    Farenzena, Bruno Avila; Silvestrini, Jorge Hugo

    2017-12-01

    Gravity-driven riverine outflows are responsible for carrying sediments to coastal waters. The turbulent mixing in these flows is associated with shear and gravitational instabilities such as Kelvin-Helmholtz, Holmboe, and Rayleigh-Taylor. Results from a temporal linear stability analysis of a two-layer stratified flow are presented, investigating the influence of settling particles and of the mixing region thickness on the flow stability in the presence of ambient shear. The particles are considered suspended in the transport fluid, and their sedimentation is modeled with a constant settling velocity. Three scenarios regarding the mixing region thickness were identified: a poorly mixed environment, a strongly mixed environment, and an intermediate scenario. In the first scenario, the Kelvin-Helmholtz and settling convection modes are the two fastest growing modes, depending on the particles' settling velocity and the total Richardson number. The second scenario presents a modified Rayleigh-Taylor instability, which is the dominant mode. The third case can have Kelvin-Helmholtz, settling convection, and modified Rayleigh-Taylor modes as the fastest growing mode, depending on the combination of parameters.

  11. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  12. A 3-D Magnetic Analysis of a Stirling Convertor Linear Alternator Under Load

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schwarze, Gene E.; Niedra, Janis M.; Regan, Timothy F.

    2001-01-01

    The NASA Glenn Research Center (GRC), the Department of Energy (DOE), and the Stirling Technology Company (STC) are developing Stirling convertors for Stirling Radioisotope Power Systems (SRPS) to provide electrical power for future NASA deep space missions. STC is developing the 55-We Technology Demonstration Convertor (TDC) under contract to DOE. Of critical importance to the successful development of the Stirling convertor for space power applications is the development of a lightweight and highly efficient linear alternator. This paper presents a 3-dimensional finite element method (FEM) approach for evaluating Stirling convertor linear alternators. The model extends a magnetostatic analysis previously reported at the 35th Intersociety Energy Conversion Engineering Conference (IECEC) to include the effects of the load current. STC's 55-We linear alternator design was selected to validate the model. Spatial plots of magnetic field strength (H) are presented in the region of the exciting permanent magnets. The margin for permanent magnet demagnetization is calculated at the expected magnet operating temperature for the near earth environment and for various average magnet temperatures. These thermal conditions were selected to represent a worst-case condition for the planned deep space missions. This paper presents plots that identify regions of high H where the potential to alter the magnetic moment of the magnets exists.

  13. Analysis of b quark pair production signal from neutral 2HDM Higgs bosons at future linear colliders

    NASA Astrophysics Data System (ADS)

    Hashemi, Majid; MahdaviKhorrami, Mostafa

    2018-06-01

    In this paper, b quark pair production events are analyzed as a source of neutral Higgs bosons of the two Higgs doublet model type I at linear colliders. The production mechanism is e⁺e⁻ → Z(*) → HA → bb̄bb̄, assuming a fully hadronic final state. The aim of the analysis is to identify both CP-even and CP-odd Higgs bosons in different benchmark points accommodating moderate boson masses. Due to the pair production of Higgs bosons, the analysis is most suitable for a linear collider operating at √s = 1 TeV. Results show that in the selected benchmark points, signal peaks are observable in the b-jet pair invariant mass distributions at an integrated luminosity of 500 fb⁻¹.

  14. Quasi-likelihood generalized linear regression analysis of fatality risk data

    DOT National Transportation Integrated Search

    2009-01-01

    Transportation-related fatality risk is a function of many interacting human, vehicle, and environmental factors. Statistically valid analysis of such data is challenged both by the complexity of plausible structural models relating fatality rates t...

  15. Linear bubble plume model for hypolimnetic oxygenation: Full-scale validation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Singleton, V. L.; Gantzer, P.; Little, J. C.

    2007-02-01

    An existing linear bubble plume model was improved, and data collected from a full-scale diffuser installed in Spring Hollow Reservoir, Virginia, were used to validate the model. The depth of maximum plume rise was simulated well for two of the three diffuser tests. Temperature predictions deviated from measured profiles near the maximum plume rise height, but predicted dissolved oxygen profiles compared very well with observations. A sensitivity analysis was performed. The gas flow rate had the greatest effect on predicted plume rise height and induced water flow rate, both of which were directly proportional to gas flow rate. Oxygen transfer within the hypolimnion was independent of all parameters except initial bubble radius and was inversely proportional for radii greater than approximately 1 mm. The results of this work suggest that plume dynamics and oxygen transfer can successfully be predicted for linear bubble plumes using the discrete-bubble approach.

  16. Serosal Laceration During Firing of Powered Linear Stapler Is a Predictor of Staple Malformation.

    PubMed

    Matsuzawa, Fumihiko; Homma, Shigenori; Yoshida, Tadashi; Konishi, Yuji; Shibasaki, Susumu; Ishikawa, Takahisa; Kawamura, Hideki; Takahashi, Norihiko; Iijima, Hiroaki; Taketomi, Akinobu

    2017-12-01

    Although several types of staplers have been developed, staple-line leaks have been a great problem in gastrointestinal surgery. Powered linear staplers were recently developed to further reduce the risk of tissue trauma during laparoscopic surgery. The aim of this study was to identify the factors that predict staple malformation and determine the effect of precompression and slow firing on the staple formation of this novel powered stapling method. Porcine stomachs were divided using an endoscopic powered linear stapler with gold reloads. We divided the specimens into 9 groups according to the precompression time (0/60/180 seconds) and firing time (0/60/180 seconds). The occurrence and length of laceration and the shape of the staples were evaluated. We examined the factors influencing successful stapling and investigated the key factors for staple malformation. Precompression significantly decreased the occurrence and length of serosal laceration. Precompression and slow firing significantly improved the optimal stapling formation rate. Univariate analysis showed that the precompression time (0 seconds), firing time (0 seconds), and presence of serosal laceration were significantly associated with a low optimal formation rate. Multivariate analysis showed that these three factors were associated independently with low optimal formation rate and that the presence of serosal laceration was the only factor that could be detected during the stapling procedure. We have shown that serosal laceration is a predictor of staple malformation and demonstrated the importance of precompression and slow stapling when using the powered stapling method.

  17. Phylogenetic Factor Analysis.

    PubMed

    Tolkoff, Max R; Alfaro, Michael E; Baele, Guy; Lemey, Philippe; Suchard, Marc A

    2018-05-01

    Phylogenetic comparative methods explore the relationships between quantitative traits adjusting for shared evolutionary history. This adjustment often occurs through a Brownian diffusion process along the branches of the phylogeny that generates model residuals or the traits themselves. For high-dimensional traits, inferring all pair-wise correlations within the multivariate diffusion is limiting. To circumvent this problem, we propose phylogenetic factor analysis (PFA) that assumes a small unknown number of independent evolutionary factors arise along the phylogeny and these factors generate clusters of dependent traits. Set in a Bayesian framework, PFA provides measures of uncertainty on the factor number and groupings, combines both continuous and discrete traits, integrates over missing measurements and incorporates phylogenetic uncertainty with the help of molecular sequences. We develop Gibbs samplers based on dynamic programming to estimate the PFA posterior distribution, over 3-fold faster than for multivariate diffusion and a further order-of-magnitude more efficiently in the presence of latent traits. We further propose a novel marginal likelihood estimator for previously impractical models with discrete data and find that PFA also provides a better fit than multivariate diffusion in evolutionary questions in columbine flower development, placental reproduction transitions and triggerfish fin morphometry.

  18. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
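
    A minimal sketch of the quantities involved: speckle contrast is the ratio of the standard deviation to the mean intensity over a pixel window, and the spatial-averaging loss is undone with a single multiplicative system factor, as validated above. The system-factor value and intensity data here are hypothetical (the factor would come from calibration):

```python
from statistics import mean, pstdev

def speckle_contrast(window):
    """Speckle contrast K = sigma / <I> over a pixel window."""
    return pstdev(window) / mean(window)

def correct_contrast(k_measured, system_factor):
    """Linear system-factor correction for pixel-area spatial averaging:
    K_corrected = K_measured / system_factor, with system_factor < 1
    because averaging over the pixel area reduces the measured contrast."""
    return k_measured / system_factor

window = [8.0, 12.0, 10.0, 6.0, 14.0]   # pixel intensities, arbitrary units
k_raw = speckle_contrast(window)
k_true = correct_contrast(k_raw, system_factor=0.8)
```

    The linearity result in the paper is what licenses applying one fixed factor across the measured contrast range.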

  19. Linear fixed-field multipass arcs for recirculating linear accelerators

    DOE PAGES

    Morozov, V. S.; Bogacz, S. A.; Roblin, Y. R.; ...

    2012-06-14

    Recirculating Linear Accelerators (RLAs) provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituting linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dog-bone RLA capable of transporting two beam passes with momenta different by a factor of two. Finally, we present the results of tracking simulations of the two passes and lay out the path to end-to-end design and simulation of a complete dog-bone RLA.

  20. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin / M_Qmsr^fmsr with Monte Carlo (MC) based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond corrections for CyberKnife field sizes of 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators, respectively, of the LGK Perfexion. Finally, the PTW microDiamond M_Qclin^fclin / M_Qmsr^fmsr for the linear accelerator varied from MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
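    The Alfonso-formalism arithmetic behind these corrections is simple; the numerical readings below are hypothetical stand-ins, not the study's measurements:

```python
def correction_factor(omega_ref, M_clin, M_msr):
    """Small-field correction k = Omega_{Qclin,Qmsr}^{fclin,fmsr} / (M_clin / M_msr):
    the reference (e.g. MC-based) field output factor divided by the detector's
    measured reading ratio."""
    return omega_ref / (M_clin / M_msr)

# Hypothetical example: a detector over-responds by ~2% in a small field,
# so its reading ratio exceeds the reference output factor and k < 1.
k = correction_factor(omega_ref=0.6485, M_clin=0.612, M_msr=0.925)
print(round(k, 4))   # a correction of about 2%
```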

  1. An analysis of value function learning with piecewise linear control

    NASA Astrophysics Data System (ADS)

    Tutsoy, Onder; Brown, Martin

    2016-05-01

    Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, which is known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed form solution for the value function is calculated and this is represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher order basis associated with the effects of controller switching (saturated to linear control or terminating an experiment) apart from the time of the switch. This leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur, and this is a function of the value function discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
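    As a concrete (if much simpler) illustration of iterative value-function estimation, here is TD(0) with a linear (tabular) basis on a toy deterministic chain; the chain, policy, and learning settings are my own assumptions, not the paper's test problem:

```python
import numpy as np

# TD(0) with a linear (here tabular) basis on a toy deterministic chain:
# move right from state 0 to the terminal state, reward 1 on reaching it.
n_states, gamma, alpha = 5, 0.9, 0.1
phi = np.eye(n_states)          # one basis function per state
w = np.zeros(n_states)          # value-function parameters

for episode in range(2000):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        v_next = w @ phi[s_next] if s_next < n_states - 1 else 0.0  # terminal value is 0
        delta = r + gamma * v_next - w @ phi[s]   # temporal-difference error
        w += alpha * delta * phi[s]
        s = s_next

print(np.round(w, 3))   # approaches [gamma**3, gamma**2, gamma, 1, 0]
```

    In this fully deterministic setting the parameters converge geometrically; the paper's analysis concerns the richer case where controller switching introduces a null space and slow convergence directions.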

  2. A Secondary Antibody-Detecting Molecular Weight Marker with Mouse and Rabbit IgG Fc Linear Epitopes for Western Blot Analysis

    PubMed Central

    Cheng, Ta-Chun; Tung, Yi-Ching; Chu, Pei-Yu; Chuang, Chih-Hung; Hsieh, Yuan-Chin; Huang, Chien-Chiao; Wang, Yeng-Tseng; Kao, Chien-Han; Roffler, Steve R.; Cheng, Tian-Lu

    2016-01-01

    Molecular weight markers that can tolerate denaturing conditions and be auto-detected by secondary antibodies offer great efficacy and convenience for Western blotting. Here, we describe M&R LE protein markers which contain linear epitopes derived from the heavy chain constant regions of mouse and rabbit immunoglobulin G (IgG Fc LE). These markers can be directly recognized and stained by a wide range of anti-mouse and anti-rabbit secondary antibodies. We selected three mouse (M1, M2 and M3) linear IgG1 and three rabbit (R1, R2 and R3) linear IgG heavy chain epitope candidates based on their respective crystal structures. Western blot analysis indicated that M2 and R2 linear epitopes are effectively recognized by anti-mouse and anti-rabbit secondary antibodies, respectively. We fused the M2 and R2 epitopes (M&R LE) and incorporated the polypeptide in a range of 15–120 kDa auto-detecting markers (M&R LE protein marker). The M&R LE protein marker can be auto-detected by anti-mouse and anti-rabbit IgG secondary antibodies in standard immunoblots. Linear regression analysis of the M&R LE protein marker, plotted as gel mobility versus the log of the marker molecular weights, revealed good linearity with a coefficient of determination (R²) of 0.9965, indicating that the M&R LE protein marker displays high accuracy for determining protein molecular weights. This accurate, regular and auto-detected M&R LE protein marker may provide a simple, efficient and economical tool for protein analysis. PMID:27494183
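    The calibration the record describes, with gel mobility approximately linear in the log of molecular weight, can be sketched as follows; the mobility values are invented for illustration:

```python
import numpy as np

# Calibration of gel mobility against log10(molecular weight) for marker bands;
# the mobility values here are hypothetical, not the paper's measurements.
mw_kda   = np.array([15.0, 25, 35, 50, 70, 100, 120])
mobility = np.array([0.95, 0.80, 0.68, 0.55, 0.42, 0.28, 0.21])

slope, intercept = np.polyfit(np.log10(mw_kda), mobility, 1)
r2 = np.corrcoef(np.log10(mw_kda), mobility)[0, 1]**2
print(round(r2, 4))          # high r2: mobility is nearly linear in log10(MW)

def estimate_mw(m):
    """Invert the calibration line: molecular weight (kDa) from a measured mobility."""
    return 10 ** ((m - intercept) / slope)

print(round(estimate_mw(0.55), 1))   # near the 50 kDa marker band
```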

  3. Applications of multivariate modeling to neuroimaging group analysis: a comprehensive alternative to univariate general linear model.

    PubMed

    Chen, Gang; Adleman, Nancy E; Saad, Ziad S; Leibenluft, Ellen; Cox, Robert W

    2014-10-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance-covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse-Geisser and Huynh-Feldt) with MVT-WS. To validate the MVM methodology, we performed simulations to assess the controllability for false positives and power achievement. A real FMRI dataset was analyzed to demonstrate the capability of the MVM approach. The methodology has been implemented into an open source program 3dMVM in AFNI, and all the statistical tests can be performed through symbolic coding with variable names instead of the tedious process of dummy coding. Our data indicates that the severity of sphericity violation varies substantially across brain regions. 
The differences among various modeling methodologies were addressed through direct comparisons between the MVM approach and some of the GLM implementations in

  4. Linear morphoea follows Blaschko's lines.

    PubMed

    Weibel, L; Harper, J I

    2008-07-01

    The aetiology of morphoea (or localized scleroderma) remains unknown. It has previously been suggested that lesions of linear morphoea may follow Blaschko's lines and thus reflect an embryological origin. However, the distribution of linear morphoea has never been accurately evaluated. We aimed to identify common patterns of clinical presentation in children with linear morphoea and to establish whether linear morphoea follows the lines of Blaschko. A retrospective chart review of 65 children with linear morphoea was performed. According to clinical photographs the skin lesions of these patients were plotted on to standardized head and body charts. With the aid of Adobe Illustrator a final figure was produced including an overlay of all individual lesions, which was used for comparison with the published lines of Blaschko. Thirty-four (53%) patients had the en coup de sabre subtype, 27 (41%) presented with linear morphoea on the trunk and/or limbs and four (6%) children had a combination of the two. In 55 (85%) children the skin lesions were confined to one side of the body, showing no preference for either left or right side. On comparing the overlays of all body and head lesions with the original lines of Blaschko there was an excellent correlation. Our data indicate that linear morphoea follows the lines of Blaschko. We hypothesize that in patients with linear morphoea susceptible cells are present in a mosaic state and that exposure to some trigger factor may result in the development of this condition.

  5. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is a full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
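    A minimal LP of the kind ALPS solves, sketched here with SciPy rather than the original APL2 implementation; the objective and constraints are invented for illustration:

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum at (4, 0) with objective value 12
```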

  6. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  7. A single factor underlies the metabolic syndrome: a confirmatory factor analysis.

    PubMed

    Pladevall, Manel; Singal, Bonita; Williams, L Keoki; Brotons, Carlos; Guyer, Heidi; Sadurni, Josep; Falces, Carles; Serrano-Rios, Manuel; Gabriel, Rafael; Shaw, Jonathan E; Zimmet, Paul Z; Haffner, Steven

    2006-01-01

    Confirmatory factor analysis (CFA) was used to test the hypothesis that the components of the metabolic syndrome are manifestations of a single common factor. Three different datasets were used to test and validate the model. The Spanish study included 207 men and 203 women, and the Mauritian study included 1,411 men and 1,650 women. A third analytical dataset including 847 men was obtained from a previously published CFA of a U.S. population. The one-factor model included the metabolic syndrome core components (central obesity, insulin resistance, blood pressure, and lipid measurements). We also tested an expanded one-factor model that included uric acid and leptin levels. Finally, we used CFA to compare the goodness of fit of one-factor models with the fit of two previously published four-factor models. The simplest one-factor model showed the best goodness-of-fit indexes (comparative fit index = 1.00; root mean-square error of approximation = 0.00). Comparisons of one-factor with four-factor models in the three datasets favored the one-factor model structure. The selection of variables to represent the different metabolic syndrome components and model specification explained why previous exploratory and confirmatory factor analyses, respectively, failed to identify a single factor for the metabolic syndrome. These analyses support the current clinical definition of the metabolic syndrome, as well as the existence of a single factor that links all of the core components.
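    A full CFA requires an SEM package (e.g. lavaan or semopy). As a hedged sketch of the underlying single-factor idea, the snippet below simulates four indicators driven by one latent factor and checks that a one-factor fit recovers the structure; all numbers are simulated, not the study's data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated sketch only: four standardized "syndrome component" indicators
# generated from one latent factor plus independent noise. An exploratory
# single-factor fit then recovers the simulated loadings.
rng = np.random.default_rng(42)
n = 2000
latent = rng.normal(size=n)                        # the single common factor
loadings_true = np.array([0.8, 0.7, 0.6, 0.75])    # hypothetical loadings
X = latent[:, None] * loadings_true + rng.normal(scale=0.5, size=(n, 4))

fa = FactorAnalysis(n_components=1, random_state=0).fit(X)
loadings_est = np.abs(fa.components_.ravel())      # sign of a factor is arbitrary
print(np.round(loadings_est, 2))                   # close to loadings_true
```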

  8. Comparison of various error functions in predicting the optimum isotherm by linear and non-linear regression analysis for the sorption of basic red 9 by activated carbon.

    PubMed

    Kumar, K Vasanth; Porkodi, K; Rocha, F

    2008-01-15

    A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made using the experimental equilibrium data of basic red 9 sorption by activated carbon. The r² was used to select the best fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
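    A sketch of fitting a two-parameter (Langmuir) isotherm by non-linear regression under two of the error functions named above; the equilibrium data are hypothetical, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import minimize

def langmuir(Ce, qm, KL):
    """Two-parameter Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * KL * Ce / (1 + KL * Ce)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g).
Ce = np.array([5.0, 10, 20, 40, 80, 160])
qe = np.array([67.0, 95, 121, 136, 148, 154])

def errsq(p):   # sum of the errors squared (ERRSQ)
    return np.sum((qe - langmuir(Ce, *p))**2)

def are(p):     # average relative error (ARE), in percent
    return 100 / len(qe) * np.sum(np.abs((qe - langmuir(Ce, *p)) / qe))

fits = {name: minimize(fn, x0=[150.0, 0.1], method="Nelder-Mead").x
        for name, fn in [("ERRSQ", errsq), ("ARE", are)]}
for name, (qm, KL) in fits.items():
    print(name, round(qm, 1), round(KL, 4))   # each error function gives slightly different parameters
```

    Minimizing different error functions weights the data points differently, which is exactly why the optimum parameter set (and sometimes the optimum isotherm) depends on the error function chosen.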

  9. Wavelets, non-linearity and turbulence in fusion plasmas

    NASA Astrophysics Data System (ADS)

    van Milligen, B. Ph.

    Contents: Introduction; Linear spectral analysis tools; Wavelet analysis; Wavelet spectra and coherence; Joint wavelet phase-frequency spectra; Non-linear spectral analysis tools; Wavelet bispectra and bicoherence; Interpretation of the bicoherence; Analysis of computer-generated data; Coupled van der Pol oscillators; A large eddy simulation model for two-fluid plasma turbulence; A long wavelength plasma drift wave model; Analysis of plasma edge turbulence from Langmuir probe data; Radial coherence observed on the TJ-IU torsatron; Bicoherence profile at the L/H transition on CCT; Conclusions

  10. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    PubMed Central

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic. PMID:22461786
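    For reference, the conventional GLM contrast t-test that this CCA extension generalizes looks like the following on simulated data; the design, contrast, and noise level are all invented here:

```python
import numpy as np

# Conventional GLM contrast t-test: t = c'beta / sqrt(sigma2 * c'(X'X)^-1 c).
# The design (two sinusoidal "task" regressors) and noise are illustrative.
rng = np.random.default_rng(3)
n = 120
t_axis = np.linspace(0, 8 * np.pi, n)
X = np.column_stack([np.ones(n), np.sin(t_axis), np.cos(t_axis)])
beta_true = np.array([1.0, 0.8, 0.2])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])      # unbiased noise variance estimate
c = np.array([0.0, 1.0, -1.0])                 # contrast: regressor 1 minus regressor 2
t_stat = (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
print(round(t_stat, 2))                        # clearly significant for this effect size
```

    The CCA extension in the record replaces the single response `y` with a local multivariate response while still supporting arbitrary contrast vectors `c` without refitting per contrast.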

  11. Quantifying the linear and nonlinear relations between the urban form fragmentation and the carbon emission distribution

    NASA Astrophysics Data System (ADS)

    Zuo, S.; Dai, S.; Ren, Y.; Yu, Z.

    2017-12-01

    Scientifically revealing the spatial heterogeneity and the relationship between the fragmentation of the urban landscape and the direct carbon emissions is of great significance to land management and urban planning. Both linear and nonlinear effects among the various factors shape the spatial pattern of carbon emissions. However, there is a lack of studies on the direct and indirect relations between the carbon emissions and changes in the city's functional spatial form, which are not reflected by land use change alone. The linear strength and direction of a single factor can be calculated through correlation and Geographically Weighted Regression (GWR) analysis, while the nonlinear power of one factor and the interaction power of each pair of factors can be quantified by Geodetector analysis. Therefore, we compared the landscape fragmentation metrics of the urban land cover and functional district patches to characterize the landscape form and then revealed the relations between the landscape fragmentation level and the direct carbon emissions based on the three methods. The results showed that fragmentation decreased and the fragmented patches clustered at the coarser resolution. The direct CO2 emission density and the population density increased as the fragmentation level aggregated. The correlation analysis indicated a weak linear relation between them. The spatial variation of the GWR output indicated that the fragmentation indicator (MESH) had a positive influence on the carbon emissions in the relatively high emission regions, while regions with negative effects accounted for a small part of the area.
    The Geodetector, which explores nonlinear relations, identified DIVISION and MESH as the most powerful direct factors for the land cover patches, and NP and PD for the functional district patches, and the interactions between the fragmentation indicator (MESH) and urban sprawl metrics (PUA and DIS) had greatly increased explanation powers on the
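    The Geodetector's measure of a factor's explanatory power is the q-statistic, q = 1 − Σ_h N_h σ_h² / (N σ²), where h indexes the strata of the explanatory factor. A minimal sketch with invented numbers:

```python
import numpy as np

# Geodetector q-statistic: q = 1 - SSW/SST, with SSW = sum_h N_h * var_h over
# the strata h of a factor and SST = N * var_total. Values below are invented.
def geodetector_q(values, strata):
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    sst = len(values) * values.var()
    ssw = 0.0
    for h in np.unique(strata):
        grp = values[strata == h]
        ssw += len(grp) * grp.var()
    return 1.0 - ssw / sst

emissions = np.array([1.0, 1.2, 0.9, 5.0, 5.3, 4.8, 9.1, 9.0, 8.7])
frag_level = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])   # strata of a fragmentation factor
q = geodetector_q(emissions, frag_level)
print(round(q, 3))   # near 1: the stratification explains almost all the variance
```

    Unlike a correlation coefficient, q makes no linearity assumption, which is why the study pairs it with correlation and GWR.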

  12. LCFIPlus: A framework for jet analysis in linear collider studies

    NASA Astrophysics Data System (ADS)

    Suehara, Taikan; Tanabe, Tomohiko

    2016-02-01

    We report on the progress in flavor identification tools developed for a future e+e- linear collider such as the International Linear Collider (ILC) and Compact Linear Collider (CLIC). Building on the work carried out by the LCFIVertex collaboration, we employ new strategies in vertex finding and jet finding, and introduce new discriminating variables for jet flavor identification. We present the performance of the new algorithms in the conditions simulated using a detector concept designed for the ILC. The algorithms have been successfully used in ILC physics simulation studies, such as those presented in the ILC Technical Design Report.

  13. Anthropometric data reduction using confirmatory factor analysis.

    PubMed

    Rohani, Jafri Mohd; Olusegun, Akanbi Gabriel; Rani, Mat Rebi Abdul

    2014-01-01

    The unavailability of anthropometric data, especially in developing countries, has remained a limiting factor in the design of learning facilities with sufficient ergonomic consideration. Attempts to use anthropometric data from developed countries have led to the provision of school facilities unfit for the users. The purpose of this paper is to use factor analysis to investigate the suitability of the collected anthropometric data as a database for school design in Nigerian tertiary institutions. Anthropometric data were collected from 288 male students, aged 18-25 years, in a Federal Polytechnic in the North-West of Nigeria. Nine vertical anthropometric dimensions related to heights were collected using conventional equipment. Exploratory factor analysis was used to categorize the variables into a model consisting of two factors. Thereafter, confirmatory factor analysis was used to investigate the fit of the data to the proposed model. A just-identified model, made of two factors, each with three variables, was developed. The variables within the model accounted for 81% of the total variation of the entire data. The model was found to demonstrate adequate validity and reliability. Various measuring indices were used to verify that the model fits the data properly. The final model reveals that stature height and eye height sitting were the most stable variables for designs that have to do with standing and sitting constructs. The study has shown the application of factor analysis in anthropometric data analysis. It highlighted the relevance of these statistical tools for investigating variability in anthropometric data involving diverse populations, tools which have not been widely used for analyzing previous anthropometric data. The collected data are therefore suitable for use in designing for Nigerian students.

  14. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  15. Linear transmitter design for MSAT terminals

    NASA Technical Reports Server (NTRS)

    Wilkinson, Ross; Macleod, John; Beach, Mark; Bateman, Andrew

    1990-01-01

    One of the factors that will undoubtedly influence the choice of modulation format for mobile satellites is the availability of cheap, power-efficient, linear amplifiers for mobile terminal equipment operating in the 1.5-1.7 GHz band. Transmitter linearity is not easily achieved at these frequencies, although high power (20W) class A/AB devices are becoming available. However, these components are expensive and require careful design to achieve a modest degree of linearity. In this paper an alternative approach to radio frequency (RF) power amplifier design for mobile satellite (MSAT) terminals is presented, using readily-available, power-efficient, and cheap class C devices in a feedback amplifier architecture.

  16. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints.

    PubMed

    Klamt, Steffen; Regensburger, Georg; Gerstl, Matthias P; Jungreuthmayer, Christian; Schuster, Stefan; Mahadevan, Radhakrishnan; Zanghellini, Jürgen; Müller, Stefan

    2017-04-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks.
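    EFMs and EFVs live inside the steady-state constraint S·v = 0 (plus irreversibility and, for EFVs, the inhomogeneous constraints). The sketch below only illustrates that steady-state constraint on a toy three-reaction chain; it does not enumerate EFMs/EFVs, which dedicated tools such as efmtool perform:

```python
import numpy as np
from scipy.linalg import null_space

# Steady-state constraint S v = 0 for a toy linear pathway ->(r1) A ->(r2) B ->(r3).
S = np.array([[1.0, -1.0,  0.0],    # metabolite A: made by r1, consumed by r2
              [0.0,  1.0, -1.0]])   # metabolite B: made by r2, consumed by r3
ns = null_space(S)                  # basis of {v : S v = 0}
v = ns[:, 0] / ns[0, 0]             # scale so the uptake flux r1 equals 1
print(np.round(v, 3))               # [1, 1, 1]: the single through-pathway mode
```

    Adding inhomogeneous bounds such as v1 ≤ v_max would turn this flux cone into a flux polyhedron, the setting in which EFVs rather than EFMs apply.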

  17. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints

    PubMed Central

    Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan

    2017-01-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903

  18. Determinants of linear growth in Malaysian children with cerebral palsy.

    PubMed

    Zainah, S H; Ong, L C; Sofiah, A; Poh, B K; Hussain, I H

    2001-08-01

    To compare the linear growth and nutritional parameters of a group of Malaysian children with cerebral palsy (CP) against a group of controls, and to determine the nutritional, medical and sociodemographic factors associated with poor growth in children with CP. The linear growth of 101 children with CP and of their healthy controls matched for age, sex and ethnicity was measured using upper-arm length (UAL). Nutritional parameters of weight, triceps skin-fold thickness and mid-arm circumference were also measured. Total caloric intake was assessed using a 24-h recall of a 3-day food intake and calculated as a percentage of the Recommended Daily Allowance. Multiple regression analysis was used to determine nutritional, medical and sociodemographic factors associated with poor growth (using z-scores of UAL) in children with CP. Compared with the controls, children with CP had significantly lower mean UAL measurements (difference between means -1.1, 95% confidence interval -1.65 to -0.59), weight (difference between means -6.0, 95% CI -7.66 to -4.34), mid-arm circumference (difference between means -1.3, 95% CI -2.06 to -0.56) and triceps skin-fold thickness (difference between means -2.5, 95% CI -3.5 to -1.43). Factors associated with low z-scores of UAL were a lower percentage of median weight (P < 0.001), tube feeding (P < 0.001) and increasing age (P < 0.001). A large proportion of Malaysian children with CP have poor nutritional status and linear growth. Nutritional assessment and management at an early age might help this group of children achieve adequate growth.

  19. Source Apportionment and Influencing Factor Analysis of Residential Indoor PM2.5 in Beijing

    PubMed Central

    Yang, Yibing; Liu, Liu; Xu, Chunyu; Li, Na; Liu, Zhe; Wang, Qin; Xu, Dongqun

    2018-01-01

    In order to identify the sources of indoor PM2.5 and to check which factors influence the concentrations of indoor PM2.5 and its chemical elements, indoor concentrations of PM2.5 and its related elements in residential houses in Beijing were explored. Indoor and outdoor PM2.5 samples that were monitored continuously for one week were collected. Indoor and outdoor concentrations of PM2.5 and 15 elements (Al, As, Ca, Cd, Cu, Fe, K, Mg, Mn, Na, Pb, Se, Tl, V, Zn) were calculated and compared. The median indoor concentration of PM2.5 was 57.64 μg/m3. For elements in indoor PM2.5, Cd and As may be sensitive to indoor smoking; Zn, Ca and Al may be related to indoor sources other than smoking; and Pb, V and Se may mainly come from outdoors. Five factors were extracted for indoor PM2.5 by factor analysis, explaining 76.8% of total variance; outdoor sources contributed more than indoor sources. Multiple linear regression analysis for indoor PM2.5, Cd and Pb was performed. Indoor PM2.5 was influenced by factors including outdoor PM2.5, smoking during sampling, outdoor temperature and time of air conditioner use. Indoor Cd was affected by factors including smoking during sampling, outdoor Cd and building age. Indoor Pb concentration was associated with factors including outdoor Pb, time of window opening per day, building age and RH. In conclusion, indoor PM2.5 mainly comes from outdoor sources, but the contributions of indoor sources also cannot be ignored. Factors associated with indoor and outdoor air exchange can influence the concentrations of indoor PM2.5 and its constituents. PMID:29621164

  20. Source Apportionment and Influencing Factor Analysis of Residential Indoor PM2.5 in Beijing.

    PubMed

    Yang, Yibing; Liu, Liu; Xu, Chunyu; Li, Na; Liu, Zhe; Wang, Qin; Xu, Dongqun

    2018-04-05

    In order to identify the sources of indoor PM2.5 and to check which factors influence the concentrations of indoor PM2.5 and its chemical elements, indoor concentrations of PM2.5 and its related elements in residential houses in Beijing were explored. Indoor and outdoor PM2.5 samples that were monitored continuously for one week were collected. Indoor and outdoor concentrations of PM2.5 and 15 elements (Al, As, Ca, Cd, Cu, Fe, K, Mg, Mn, Na, Pb, Se, Tl, V, Zn) were calculated and compared. The median indoor concentration of PM2.5 was 57.64 μg/m³. For elements in indoor PM2.5, Cd and As may be sensitive to indoor smoking; Zn, Ca and Al may be related to indoor sources other than smoking; and Pb, V and Se may mainly come from outdoors. Five factors were extracted for indoor PM2.5 by factor analysis, explaining 76.8% of total variance; outdoor sources contributed more than indoor sources. Multiple linear regression analysis for indoor PM2.5, Cd and Pb was performed. Indoor PM2.5 was influenced by factors including outdoor PM2.5, smoking during sampling, outdoor temperature and time of air conditioner use. Indoor Cd was affected by factors including smoking during sampling, outdoor Cd and building age. Indoor Pb concentration was associated with factors including outdoor Pb, time of window opening per day, building age and RH. In conclusion, indoor PM2.5 mainly comes from outdoor sources, but the contributions of indoor sources also cannot be ignored. Factors associated with indoor and outdoor air exchange can influence the concentrations of indoor PM2.5 and its constituents.

  1. Investigation of charge weight and shock factor effect on non-linear transient structural response of rectangular plates subjected to underwater explosion (UNDEX) shock loading

    NASA Astrophysics Data System (ADS)

    Demir, Ozgur; Sahin, Abdurrahman; Yilmaz, Tamer

    2012-09-01

    Underwater explosion induced shock loads are capable of causing considerable structural damage. Investigations of underwater explosion (UNDEX) effects on structures have seen continuous development because of security risks. Most of the early experimental investigations were performed by the military since World War I. Subsequently, Cole [1] established mathematical relations for modeling underwater explosion shock loading, which were the outcome of many experimental investigations. This study predicts and establishes the transient responses of a panel structure to underwater explosion shock loads using the non-linear finite element code LS-DYNA. Accordingly, a new MATLAB code has been developed for predicting the shock loading profile for different weights of explosive and different shock factors. Numerical analysis was performed for various test conditions, and the results are compared with Ramajeyathilagam's experimental study [8].
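
    The empirical similitude form that Cole's relations take can be sketched as follows; the coefficients below are illustrative values of the order commonly quoted for TNT, not the ones used in the study, and real applications should take them from the UNDEX literature for the specific explosive.

```python
import math

# Empirical UNDEX similitude law (Cole): P(t) = Pmax * exp(-t / theta).
# The constants below are illustrative, of the order commonly quoted for TNT;
# real values depend on the explosive type and must be taken from the literature.
K1, A1 = 52.4, 1.13      # peak-pressure coefficient [MPa] and exponent (assumed)
K2, A2 = 0.084, -0.23    # decay-time coefficient [ms] and exponent (assumed)

def peak_pressure_mpa(charge_kg, standoff_m):
    """Peak shock pressure [MPa] for charge weight W [kg] at standoff R [m]."""
    z = charge_kg ** (1.0 / 3.0) / standoff_m
    return K1 * z ** A1

def decay_time_ms(charge_kg, standoff_m):
    """Exponential decay constant theta [ms]."""
    w3 = charge_kg ** (1.0 / 3.0)
    return K2 * w3 * (w3 / standoff_m) ** A2

def pressure_mpa(t_ms, charge_kg, standoff_m):
    """Shock pressure [MPa] at time t [ms] after shock arrival."""
    return peak_pressure_mpa(charge_kg, standoff_m) * math.exp(
        -t_ms / decay_time_ms(charge_kg, standoff_m))
```

    Varying the charge weight and standoff in such a profile generator is exactly what is needed to drive a parametric study over different shock factors.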

  2. Beyond linear methods of data analysis: time series analysis and its applications in renal research.

    PubMed

    Gupta, Ashwani K; Udrea, Andreea

    2013-01-01

    Analysis of temporal trends in medicine is needed to understand normal physiology and to study the evolution of disease processes. It is also useful for monitoring response to drugs and interventions, and for accountability and tracking of health care resources. In this review, we discuss what makes time series analysis unique for the purposes of renal research and its limitations. We also introduce nonlinear time series analysis methods and provide examples where these have advantages over linear methods. We review areas where these computational methods have found applications in nephrology ranging from basic physiology to health services research. Some examples include noninvasive assessment of autonomic function in patients with chronic kidney disease, dialysis-dependent renal failure and renal transplantation. Time series models and analysis methods have been utilized in the characterization of mechanisms of renal autoregulation and to identify the interaction between different rhythms of nephron pressure flow regulation. They have also been used in the study of trends in health care delivery. Time series are everywhere in nephrology and analyzing them can lead to valuable knowledge discovery. The study of time trends of vital signs, laboratory parameters and the health status of patients is inherent to our everyday clinical practice, yet formal models and methods for time series analysis are not fully utilized. With this review, we hope to familiarize the reader with these techniques in order to assist in their proper use where appropriate.

  3. New dual asymmetric CEC linear Fresnel concentrator for evacuated tubular receivers

    NASA Astrophysics Data System (ADS)

    Canavarro, Diogo; Chaves, Julio; Collares-Pereira, Manuel

    2017-06-01

    Linear Fresnel Reflector concentrators (LFR) are a potential solution for low-cost electricity production. Nevertheless, in order to become more competitive with other CSP (Concentrated Solar Power) technologies, in particular with the Parabolic Trough concentrator, their overall solar-to-electricity efficiencies must increase. A possible path to achieve this goal is to increase the concentration factor, hence increasing the working temperatures for higher thermodynamic efficiency (more energy collection) and decreasing the total number of rows of the solar field (less parasitic losses and corresponding cost reduction). This paper presents a dual asymmetric CEC-type (Compound Elliptical Concentrator) LFR for evacuated tubular receivers. The concentrator is designed for a high concentration factor, presenting an asymmetric configuration that enables a very compact solution. The CEC-type secondary mirror is introduced to accommodate very high concentration values with a wide enough acceptance angle (augmenting optical tolerances) for simple mechanical tracking solutions, achieving a higher CAP (Concentration Acceptance Product) in comparison with conventional LFR solutions. The paper presents an optical and thermal analysis of the concentrator for two different locations, Faro (Portugal) and Hurghada (Egypt).

  4. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
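
    The first stage described above, extracting factors by principal component analysis and regressing the target on them, can be sketched with synthetic data. The linear toy model and all dimensions below are assumptions; the paper's sufficient forecasting generalizes this to nonlinear forecasting functions.

```python
import numpy as np

# Sketch of factor-based forecasting: estimate latent factors from a
# high-dimensional predictor panel by PCA (via SVD), then regress the target
# on the estimated factors. Dimensions and effect sizes are invented.
rng = np.random.default_rng(0)
T, p, k = 200, 50, 3                  # time points, predictors, true factors
F = rng.standard_normal((T, k))       # latent factors
L = rng.standard_normal((p, k))       # factor loadings
X = F @ L.T + 0.1 * rng.standard_normal((T, p))            # observed predictors
y = F @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(T)

# PCA: the top-k left singular vectors of the centered panel estimate the factors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :k] * s[:k]              # estimated factor scores (up to rotation)

# Regress the target on the estimated factors (with an intercept).
design = np.column_stack([np.ones(T), F_hat])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
y_hat = design @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

    Because the regression is on estimated factors rather than on the p raw predictors, the fit remains well posed even when p exceeds T.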

  5. Recursive Factorization of the Inverse Overlap Matrix in Linear-Scaling Quantum Molecular Dynamics Simulations.

    PubMed

    Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N

    2016-07-12

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
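
    The refinement idea can be illustrated with a generic dense Newton-Schulz iteration for Z ≈ S^(-1/2); this is only a sketch under simplifying assumptions, and it omits the paper's thresholded sparse algebra and its extended-Lagrangian seeding of the initial guess.

```python
import numpy as np

# Generic (dense) Newton-Schulz iteration converging to the inverse overlap
# factor Z ~ S^(-1/2) for a symmetric positive-definite S. The coupled
# iterates satisfy Y_k -> (S/c)^(1/2) and Z_k -> (S/c)^(-1/2).
rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)           # SPD stand-in for an overlap matrix

c = np.linalg.norm(S)                 # scale so eigenvalues of S/c lie in (0, 1]
Y = S / c
Z = np.eye(n)
for _ in range(30):
    T = 0.5 * (3.0 * np.eye(n) - Z @ Y)
    Y, Z = Y @ T, T @ Z

Z = Z / np.sqrt(c)                    # undo the scaling: Z ~ S^(-1/2)
# Z S Z should now be close to the identity.
```

    In the paper's setting the same refinement is applied to sparse, thresholded matrices and warm-started from previous MD steps, which is where the reduced complexity comes from.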

  6. Recursive Factorization of the Inverse Overlap Matrix in Linear Scaling Quantum Molecular Dynamics Simulations

    DOE PAGES

    Negre, Christian F. A; Mniszewski, Susan M.; Cawkwell, Marc Jon; ...

    2016-06-06

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive iterative refinement of an initial guess Z of the inverse overlap matrix S. The initial guess of Z is obtained beforehand either by using an approximate divide-and-conquer technique or dynamically, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under incomplete approximate iterative refinement of Z. Linear scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared memory parallelization. As we show in this article using self-consistent density functional based tight-binding MD, our approach is faster than conventional methods based on the direct diagonalization of the overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4,158 atom water-solvated polyalanine system we find an average speedup factor of 122 for the computation of Z in each MD step.

  7. Comparisons of Exploratory and Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Daniel, Larry G.

    Historically, most researchers conducting factor analysis have used exploratory methods. However, more recently, confirmatory factor analytic methods have been developed that can directly test theory either during factor rotation using "best fit" rotation methods or during factor extraction, as with the LISREL computer programs developed…

  8. Assessing risk factors for periodontitis using regression

    NASA Astrophysics Data System (ADS)

    Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa

    2013-10-01

    Multivariate statistical analysis is indispensable for assessing the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. The multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL); the regression coefficients are obtained along with the p-values from the corresponding significance tests. The case classification adopted in the logistic model was the extent of destruction of periodontal tissues, defined as an Attachment Loss of at least 4 mm in at least 25% of the sites surveyed (AL ≥ 4 mm/≥ 25%). The association measures include the Odds Ratios together with the corresponding 95% confidence intervals.
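
    Both model types can be sketched on synthetic data (the variable names and effect sizes below are invented for illustration): an ordinary least-squares fit for mean AL, and a logistic fit, via Newton/IRLS iterations, whose exponentiated coefficients give odds ratios.

```python
import numpy as np

# Toy periodontitis-style risk model: OLS for a continuous outcome (mean AL)
# and Newton-Raphson/IRLS logistic regression for a binary case definition.
# All effect sizes are invented for illustration.
rng = np.random.default_rng(2)
n = 500
age = rng.uniform(20, 70, n)
smoker = rng.integers(0, 2, n).astype(float)
plaque = rng.uniform(0, 3, n)
X = np.column_stack([np.ones(n), age, smoker, plaque])

# Linear model: AL increases with age, smoking and plaque index (assumed).
al = 1.0 + 0.03 * age + 0.8 * smoker + 0.5 * plaque + rng.normal(0, 0.5, n)
beta_ols, *_ = np.linalg.lstsq(X, al, rcond=None)

# Logistic model for case status (e.g. AL >= 4 mm in >= 25% of sites).
logit = -4.0 + 0.05 * age + 1.0 * smoker + 0.7 * plaque
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)
b = np.zeros(X.shape[1])
for _ in range(25):                       # Newton-Raphson / IRLS iterations
    p = 1 / (1 + np.exp(-X @ b))
    W = p * (1 - p)
    b = b + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
odds_ratios = np.exp(b[1:])               # per-unit ORs for age, smoker, plaque
```

    Confidence intervals for the odds ratios would come from the inverse of the final weighted information matrix, which the Newton step already computes.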

  9. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
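
    The contrast between the two coefficients can be illustrated in a few lines: Spearman's rho is simply the Pearson coefficient computed on ranks, so it reaches 1 for any perfectly monotone relationship, while the Pearson coefficient stays below 1 unless the relationship is linear. A minimal sketch:

```python
import numpy as np

# Pearson measures linear association; Spearman's rho is Pearson applied to
# ranks, so it also captures monotone non-linear relationships. Illustrated
# on a noiseless exponential relationship (monotone, but far from linear).
x = np.linspace(0.0, 4.0, 50)
y = np.exp(x)

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def spearman(a, b):
    ranks = lambda v: np.argsort(np.argsort(v)).astype(float)  # no ties here
    return pearson(ranks(a), ranks(b))

r_pearson = pearson(x, y)      # noticeably below 1: relationship is not linear
r_spearman = spearman(x, y)    # equal to 1: relationship is perfectly monotone
```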

  10. Detection of non-milk fat in milk fat by gas chromatography and linear discriminant analysis.

    PubMed

    Gutiérrez, R; Vega, S; Díaz, G; Sánchez, J; Coronado, M; Ramírez, A; Pérez, J; González, M; Schettino, B

    2009-05-01

    Gas chromatography was utilized to determine triacylglycerol profiles in milk and non-milk fat. The values of triacylglycerol were subjected to linear discriminant analysis to detect and quantify non-milk fat in milk fat. Two groups of milk fat were analyzed: A) raw milk fat from the central region of Mexico (n = 216) and B) ultrapasteurized milk fat from 3 industries (n = 36), as well as pork lard (n = 2), bovine tallow (n = 2), fish oil (n = 2), peanut (n = 2), corn (n = 2), olive (n = 2), and soy (n = 2). The samples of raw milk fat were adulterated with non-milk fats in proportions of 0, 5, 10, 15, and 20% to form 5 groups. The first function obtained from the linear discriminant analysis allowed the correct classification of 94.4% of the samples with levels <10% of adulteration. The triacylglycerol values of the ultrapasteurized milk fats were evaluated with the discriminant function, demonstrating that one industry added non-milk fat to its product in 80% of the samples analyzed.
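
    The underlying idea, projecting each sample onto a single discriminant direction, can be sketched with a two-class Fisher discriminant on synthetic data; the feature values and class shift below are invented, whereas the study's real inputs are GC triacylglycerol profiles.

```python
import numpy as np

# Two-class Fisher linear discriminant: project profiles onto the direction
# w = Sw^-1 (m1 - m0) that best separates "pure" from "adulterated" samples.
# Synthetic 5-feature stand-ins for triacylglycerol fractions.
rng = np.random.default_rng(3)
d, n = 5, 120
pure = rng.normal(0.0, 1.0, (n, d))
shift = np.array([2.5, -2.0, 1.5, 0.0, 1.0])   # assumed adulteration effect
adulterated = rng.normal(0.0, 1.0, (n, d)) + shift

m0, m1 = pure.mean(axis=0), adulterated.mean(axis=0)
Sw = np.cov(pure, rowvar=False) + np.cov(adulterated, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)               # Fisher discriminant direction
threshold = 0.5 * (pure @ w).mean() + 0.5 * (adulterated @ w).mean()

pred_pure = (pure @ w) > threshold             # should be mostly False
pred_adul = (adulterated @ w) > threshold      # should be mostly True
accuracy = 0.5 * ((~pred_pure).mean() + pred_adul.mean())
```

    The study's reported 94.4% correct classification corresponds to the same kind of thresholding of the first discriminant function.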

  11. Non-linear analysis and the design of Pumpkin Balloons: stress, stability and viscoelasticity

    NASA Astrophysics Data System (ADS)

    Rand, J. L.; Wakefield, D. S.

    Tensys have a long-established background in the shape generation and load analysis of architectural stressed membrane structures. Founded upon their inTENS finite element analysis suite, these activities have broadened to encompass lighter-than-air structures such as aerostats, hybrid air-vehicles and stratospheric balloons. Winzen Engineering couple many years of practical balloon design and fabrication experience with both academic and practical knowledge of the characterisation of the non-linear viscoelastic response of the polymeric films typically used for high-altitude scientific balloons. Both companies have provided consulting services to the NASA Ultra Long Duration Balloon (ULDB) Program. Early implementations of pumpkin balloons have shown problems of geometric instability, characterised by improper deployment, and these difficulties have been reproduced numerically using inTENS. The solution lies both in the shapes of the membrane lobes and in the need to generate a biaxial stress field in order to mobilise in-plane shear stiffness. Balloons undergo significant temperature and pressure variations in flight. The different thermal characteristics of tendons and film can lead to significant meridional stress. Fabrication tolerances can lead to significant local hoop stress concentrations, particularly adjacent to the base and apex end fittings. The non-linear viscoelastic response of the envelope film acts positively to help dissipate stress concentrations. However, creep over time may produce lobe geometry variations that may

  12. Linear least-squares method for global luminescent oil film skin friction field analysis

    NASA Astrophysics Data System (ADS)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
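
    The core contrast between the LLS and SSA approaches can be sketched on a toy linear model (the "physics" below is not the thin oil film equation): rather than solving each snapshot's noisy system separately and averaging the solutions, all snapshot equations are stacked into one overdetermined system and solved once in the least-squares sense.

```python
import numpy as np

# Toy version of the global LLS idea: stack the linear equations contributed
# by every snapshot into one overdetermined system A_all x = b_all and solve
# it once by least squares. x_true stands in for local skin-friction unknowns.
rng = np.random.default_rng(4)
n_unknowns, n_snapshots, rows_per_snapshot = 4, 40, 6
x_true = rng.standard_normal(n_unknowns)

A_blocks, b_blocks = [], []
for _ in range(n_snapshots):
    A = rng.standard_normal((rows_per_snapshot, n_unknowns))
    b = A @ x_true + 0.05 * rng.standard_normal(rows_per_snapshot)  # noisy data
    A_blocks.append(A)
    b_blocks.append(b)

# One global least-squares solve over all stacked snapshot equations.
A_all = np.vstack(A_blocks)
b_all = np.concatenate(b_blocks)
x_lls, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
```

    Pooling all equations before solving is what lets the noise average out inside a single estimate instead of across many poorly conditioned per-snapshot solves.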

  13. The effect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion modelling

    NASA Astrophysics Data System (ADS)

    Sulistyo, Bambang

    2016-11-01

    The research was aimed at studying the effect of choosing three different C factor formulae derived from NDVI on fully raster-based erosion modelling with the USLE, using remote sensing data and GIS techniques. The method was to analyse all factors affecting erosion such that all data were in raster form: the R, K, LS, C and P factors. The monthly R factor was evaluated with the formula developed by Abdurachman. The K factor was determined using a modified formula used by the Ministry of Forestry, based on soil samples taken in the field. The LS factor was derived from a Digital Elevation Model. The three C factors used were all derived from NDVI and were developed by Suriyaprasit (non-linear) and by Sulistyo (linear and non-linear). The P factor was derived from the combination of slope data and land-cover classification interpreted from Landsat 7 ETM+. A map of bulk density was also created and used to convert the erosion unit. To assess model accuracy, model validation was done by applying statistical analysis and by comparing Emodel with Eactual, with a threshold value of ≥ 0.80 (80%) chosen as the criterion. The results showed that the Emodel values obtained with the three C factor formulae all have correlation coefficients > 0.8. Analysis of variance showed a significant difference between Emodel and Eactual when using the C factor formulae developed by Suriyaprasit and by Sulistyo (non-linear). Among the three formulae, only the Emodel using the C factor formula developed by Sulistyo (linear) reached an accuracy of 81.13%, while the others reached only 56.02% (Sulistyo, non-linear) and 4.70% (Suriyaprasit).
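
    A fully raster-based USLE computation can be sketched as a cell-wise product of factor grids; the linear C-NDVI relation below is a hypothetical stand-in, since the actual formulae of Suriyaprasit and Sulistyo are not reproduced here.

```python
import numpy as np

# Fully raster-based USLE sketch: every factor is a grid and erosion is the
# cell-wise product E = R * K * LS * C * P. The relation C = 0.5 - 0.4 * NDVI
# (clipped to [0, 1]) is hypothetical, as are all the value ranges below.
rng = np.random.default_rng(5)
shape = (100, 100)                       # raster grid
R = rng.uniform(1000, 3000, shape)       # rainfall erosivity
K = rng.uniform(0.1, 0.4, shape)         # soil erodibility
LS = rng.uniform(0.5, 5.0, shape)        # slope length-steepness
P = rng.uniform(0.5, 1.0, shape)         # conservation practice
ndvi = rng.uniform(-0.1, 0.9, shape)     # would come from red/NIR bands

C = np.clip(0.5 - 0.4 * ndvi, 0.0, 1.0)  # hypothetical linear C-NDVI model
E = R * K * LS * C * P                   # erosion per cell (model units)
```

    Swapping in a different C formula changes only the one line defining `C`, which is exactly the comparison the study performs across its three NDVI-based formulae.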

  14. Factor Scores, Structure and Communality Coefficients: A Primer

    ERIC Educational Resources Information Center

    Odum, Mary

    2011-01-01

    (Purpose) The purpose of this paper is to present an easy-to-understand primer on three important concepts of factor analysis: Factor scores, structure coefficients, and communality coefficients. Given that statistical analyses are a part of a global general linear model (GLM), and utilize weights as an integral part of analyses (Thompson, 2006;…

  15. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  16. Linear and Non-linear Information Flows In Rainfall Field

    NASA Astrophysics Data System (ADS)

    Molini, A.; La Barbera, P.; Lanza, L. G.

    The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time, as well as the strong dependence of these properties on the scale of observation. The understanding and quantification of how the non-linearity of the generating process influences single rain events are relevant research issues in hydro-meteorology, especially in those applications where timely and effective forecasting of heavy rain events can reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of the survey are the search for regular structures in the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between different locations in space, different instants in time and, unless the hypothesis of scale invariance is verified a priori, the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods; then a survey of the information flows within the field is developed by means of techniques borrowed from Information Theory; finally, an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from

  17. A 3-D Magnetic Analysis of a Linear Alternator For a Stirling Power System

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schwarze, Gene E.; Niedra, Janis M.

    2000-01-01

    The NASA Glenn Research Center and the Department of Energy (DOE) are developing advanced radioisotope Stirling convertors, under contract with Stirling Technology Company (STC), for space applications. Of critical importance to the successful development of the Stirling convertor for space power applications is the development of a lightweight and highly efficient linear alternator. This paper presents a 3-D finite element method (FEM) approach for evaluating Stirling convertor linear alternators. Preliminary correlations with open-circuit voltage measurements provide an encouraging level of confidence in the model. Spatial plots of magnetic field strength (H) are presented in the region of the exciting permanent magnets. These plots identify regions of high H, where at elevated temperature and under electrical load, the potential to alter the magnetic moment of the magnets exists. This implies the need for further testing and analysis.

  18. Non-linear programming in shakedown analysis with plasticity and friction

    NASA Astrophysics Data System (ADS)

    Spagnoli, A.; Terzano, M.; Barber, J. R.; Klarbring, A.

    2017-07-01

    Complete frictional contacts, when subjected to cyclic loading, may sometimes develop a favourable situation where slip ceases after a few cycles, an occurrence commonly known as frictional shakedown. Its resemblance to shakedown in plasticity has prompted scholars to apply direct methods, derived from the classical theorems of limit analysis, in order to assess a safe limit to the external loads applied on the system. In circumstances where zones of plastic deformation develop in the material (e.g., because of the large stress concentrations near the sharp edges of a complete contact), it is reasonable to expect an effect of mutual interaction of frictional slip and plastic strains on the load limit below which the global behaviour is non-dissipative, i.e., both slip and plastic strains go to zero after some dissipative load cycles. In this paper, shakedown of general two-dimensional discrete systems, involving both friction and plasticity, is discussed and the shakedown limit load is calculated using a non-linear programming algorithm based on the static theorem of limit analysis. An illustrative example related to an elastic-plastic solid containing a frictional crack is provided.

  19. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem: it fails when the scatter matrices are singular. Many LDA extensions were proposed in the past to overcome this singularity problem. Among these extensions, PCA+LDA, a two-stage method, has received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
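
    The two stages can be sketched on synthetic data (toy dimensions; the exact scatter definitions of Ye and Li's algorithm are simplified here): a QR decomposition of the small d × k centroid matrix yields the reduced space, cheap even when the dimension far exceeds the sample size, and classical LDA is then applied there.

```python
import numpy as np

# Sketch of a QR-based two-stage LDA. Stage 1: QR of the d x k class-centroid
# matrix gives an orthonormal basis of a k-dim subspace. Stage 2: classical
# LDA on the projected data, where the scatter matrices are tiny (k x k) and
# no longer singular even though d >> number of samples per class.
rng = np.random.default_rng(6)
d, k, n_per = 200, 3, 30
means = rng.standard_normal((k, d)) * 3.0
X = np.vstack([rng.standard_normal((n_per, d)) + m for m in means])
y = np.repeat(np.arange(k), n_per)

# Stage 1: QR of the centroid matrix (d x k).
centroids = np.stack([X[y == c].mean(axis=0) for c in range(k)], axis=1)
Q, _ = np.linalg.qr(centroids)                    # d x k, orthonormal columns
Zr = X @ Q                                        # data in the k-dim subspace

# Stage 2: classical LDA in the reduced space.
mu = Zr.mean(axis=0)
Sw = sum(np.cov(Zr[y == c], rowvar=False) for c in range(k))
Sb = sum(np.outer(Zr[y == c].mean(axis=0) - mu,
                  Zr[y == c].mean(axis=0) - mu) for c in range(k))
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1]
G = evecs[:, order[:k - 1]].real                  # discriminant directions

# Nearest-centroid classification in the discriminant space.
D = Zr @ G
dc = np.stack([D[y == c].mean(axis=0) for c in range(k)])
pred = np.argmin(((D[:, None, :] - dc[None, :, :]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```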

  20. Effects of Initial Geometric Imperfections On the Non-Linear Response of the Space Shuttle Superlightweight Liquid-Oxygen Tank

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.; Young, Richard D.; Collins, Timothy J.; Starnes, James H., Jr.

    2002-01-01

    The results of an analytical study of the elastic buckling and nonlinear behavior of the liquid-oxygen tank for the new Space Shuttle superlightweight external fuel tank are presented. Selected results that illustrate three distinctly different types of non-linear response phenomena for thin-walled shells subjected to combined mechanical and thermal loads are presented. These response phenomena consist of a bifurcation-type buckling response, a short-wavelength non-linear bending response and a non-linear collapse or "snap-through" response associated with a limit point. The effects of initial geometric imperfections on the response characteristics are emphasized. The results illustrate that the buckling and non-linear response of a geometrically imperfect shell structure subjected to complex loading conditions may not be adequately characterized by an elastic linear bifurcation buckling analysis, and that the traditional industry practice of applying a buckling-load knock-down factor can result in an ultraconservative design. Results are also presented that show that a fluid-filled shell can be highly sensitive to initial geometric imperfections, and that the use of a buckling-load knock-down factor is needed in this case.

  1. Theoretical analysis of linearized acoustics and aerodynamics of advanced supersonic propellers

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1985-01-01

    The derivation of a formula for prediction of the noise of supersonic propellers using time domain analysis is presented. This formula is a solution of the Ffowcs Williams-Hawkings equation and does not have the Doppler singularity of some other formulations. The result presented involves some surface integrals over the blade and line integrals over the leading and trailing edges. The blade geometry, motion and surface pressure are needed for noise calculation. To obtain the blade surface pressure, the observer is moved onto the blade surface and a linear singular integral equation is derived which can be solved numerically. Two examples of acoustic calculations using a computer program are currently under development.

  2. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
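
    The deterministic skeleton being accelerated is the preconditioned Richardson iteration x_{k+1} = x_k + M^{-1}(b - A x_k), arising from a splitting A = M - N. A minimal sketch with a Jacobi (diagonal) preconditioner on a 1-D Poisson matrix follows; the Monte Carlo acceleration itself is not reproduced.

```python
import numpy as np

# Preconditioned Richardson iteration from the splitting A = M - N, with
# M = diag(A) (Jacobi). A is the standard 1-D Poisson (discrete Laplacian)
# matrix, for which this stationary iteration is convergent.
n = 32
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))        # inverse of the diagonal part

x = np.zeros(n)
for _ in range(10000):
    r = b - A @ x
    x = x + M_inv @ r                    # x_{k+1} = x_k + M^-1 (b - A x_k)
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break

residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

    The hybrid schemes of the paper replace the exact application of the iteration with Monte Carlo estimates, keeping the same fixed-point structure.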

  3. Linearity, Bias, and Precision of Hepatic Proton Density Fat Fraction Measurements by Using MR Imaging: A Meta-Analysis.

    PubMed

    Yokoo, Takeshi; Serai, Suraj D; Pirasteh, Ali; Bashir, Mustafa R; Hamilton, Gavin; Hernando, Diego; Hu, Houchun H; Hetterich, Holger; Kühn, Jens-Peter; Kukuk, Guido M; Loomba, Rohit; Middleton, Michael S; Obuchowski, Nancy A; Song, Ji Soo; Tang, An; Wu, Xinhuai; Reeder, Scott B; Sirlin, Claude B

    2018-02-01

    Purpose To determine the linearity, bias, and precision of hepatic proton density fat fraction (PDFF) measurements by using magnetic resonance (MR) imaging across different field strengths, imager manufacturers, and reconstruction methods. Materials and Methods This meta-analysis was performed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A systematic literature search identified studies that evaluated the linearity and/or bias of hepatic PDFF measurements by using MR imaging (hereafter, MR imaging-PDFF) against PDFF measurements by using colocalized MR spectroscopy (hereafter, MR spectroscopy-PDFF) or the precision of MR imaging-PDFF. The quality of each study was evaluated by using the Quality Assessment of Studies of Diagnostic Accuracy 2 tool. De-identified original data sets from the selected studies were pooled. Linearity was evaluated by using linear regression between MR imaging-PDFF and MR spectroscopy-PDFF measurements. Bias, defined as the mean difference between MR imaging-PDFF and MR spectroscopy-PDFF measurements, was evaluated by using Bland-Altman analysis. Precision, defined as the agreement between repeated MR imaging-PDFF measurements, was evaluated by using a linear mixed-effects model, with field strength, imager manufacturer, reconstruction method, and region of interest as random effects. Results Twenty-three studies (1679 participants) were selected for linearity and bias analyses and 11 studies (425 participants) were selected for precision analyses. MR imaging-PDFF was linear with MR spectroscopy-PDFF (R² = 0.96). Regression slope (0.97; P < .001) and mean Bland-Altman bias (-0.13%; 95% limits of agreement: -3.95%, 3.40%) indicated minimal underestimation by using MR imaging-PDFF. MR imaging-PDFF was precise at the region-of-interest level, with repeatability and reproducibility coefficients of 2.99% and 4.12%, respectively. Field strength, imager manufacturer, and reconstruction method

  4. How Factor Analysis Can Be Used in Classification.

    ERIC Educational Resources Information Center

    Harman, Harry H.

    This is a methodological study that suggests a taxometric technique for objective classification of yeasts. It makes use of the minres method of factor analysis and groups strains of yeast according to their factor profiles. The similarities are judged in the higher-dimensional space determined by the factor analysis, but otherwise rely on the…

  5. CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.

    PubMed

    Zahery, Mahsa; Maes, Hermine H; Neale, Michael C

    2017-08-01

    We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation, available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
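
    A non-linearly constrained problem of the kind these SQP optimizers solve can be sketched with SciPy's SLSQP implementation (a different wrapper of the same algorithm family; CSOLNP itself ships with OpenMx). The toy problem below is illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize (x-1)^2 + (y-2)^2 subject to the non-linear equality
# constraint x^2 + y^2 = 2: the closest point to (1, 2) on a circle
# of radius sqrt(2). SQP solves a quadratic subproblem per iterate.
objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
constraints = [{"type": "eq", "fun": lambda v: v[0] ** 2 + v[1] ** 2 - 2.0}]

result = minimize(objective, x0=np.array([0.5, 0.5]),
                  method="SLSQP", constraints=constraints)
x, y = result.x
print(f"optimum: ({x:.4f}, {y:.4f}), objective {result.fun:.4f}")
```

    The analytic optimum is sqrt(2/5)·(1, 2) ≈ (0.6325, 1.2649), which the SQP iterates reach to solver tolerance.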

  6. Classification of electroencephalograph signals using time-frequency decomposition and linear discriminant analysis

    NASA Astrophysics Data System (ADS)

    Szuflitowska, B.; Orlowski, P.

    2017-08-01

    An automated detection system consists of two key steps: extraction of features from EEG signals and classification for detection of pathological activity. The EEG sequences were analyzed using the Short-Time Fourier Transform, and the classification was performed using Linear Discriminant Analysis. The accuracy of the technique was tested on three sets of EEG signals: epilepsy, healthy, and Alzheimer's Disease. A classification error below 10% was considered a success. Higher accuracy was obtained for new data of unknown classes than for the testing data. The methodology can be helpful in differentiating epileptic seizures from disturbances in the EEG signal in Alzheimer's Disease.
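
    The two-step pipeline (time-frequency features, then LDA) can be sketched on synthetic signals; the sampling rate, epoch length, and class structure below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.signal import stft
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
fs = 128  # assumed sampling rate (Hz); all data here are synthetic

def epoch(freq):
    """One 2 s epoch: a dominant rhythm at `freq` Hz plus noise."""
    t = np.arange(2 * fs) / fs
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=t.size)

# Two toy classes differing in dominant rhythm, stand-ins for
# pathological vs. normal activity.
signals = [epoch(3.0) for _ in range(40)] + [epoch(10.0) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)

def features(x):
    """Mean STFT magnitude per frequency bin (time-frequency features)."""
    _, _, Z = stft(x, fs=fs, nperseg=64)
    return np.abs(Z).mean(axis=1)

X = np.array([features(s) for s in signals])

# Train on even-indexed epochs, evaluate on held-out odd-indexed ones.
clf = LinearDiscriminantAnalysis().fit(X[::2], labels[::2])
accuracy = clf.score(X[1::2], labels[1::2])
print(f"held-out accuracy: {accuracy:.2f}")
```

    Averaging the STFT magnitude over time collapses each epoch to one spectral feature vector, which is the simplest bridge from time-frequency decomposition to a linear classifier.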

  7. Estimation of motion fields by non-linear registration for local lung motion analysis in 4D CT image data.

    PubMed

    Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Heiss, Anabell; Handels, Heinz

    2010-11-01

    Motivated by radiotherapy of lung cancer, non-linear registration is applied to estimate 3D motion fields for local lung motion analysis in thoracic 4D CT images. Reliability of analysis results depends on the registration accuracy. Therefore, our study consists of two parts: optimization and evaluation of a non-linear registration scheme for motion field estimation, followed by a registration-based analysis of lung motion patterns. The study is based on 4D CT data of 17 patients. Different distance measures and force terms for thoracic CT registration are implemented and compared: sum of squared differences versus a force term related to Thirion's demons registration; masked versus unmasked force computation. The most accurate approach is applied to local lung motion analysis. Masked Thirion forces outperform the other force terms. The mean target registration error is 1.3 ± 0.2 mm, which is in the order of voxel size. Based on the resulting motion fields and inter-patient normalization of inner lung coordinates and breathing depths, a non-linear dependency between inner lung position and corresponding strength of motion is identified. The dependency is observed for all patients without or with only small tumors. Quantitative evaluation of the estimated motion fields indicates high spatial registration accuracy. It allows for reliable registration-based local lung motion analysis. The large amount of information encoded in the motion fields makes it possible to draw detailed conclusions, e.g., to identify the dependency of inner lung localization and motion. Our examinations illustrate the potential of registration-based motion analysis.

  8. Indoor calibration of Sky Quality Meters: Linearity, spectral responsivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Pravettoni, M.; Strepparava, D.; Cereghetti, N.; Klett, S.; Andretta, M.; Steiger, M.

    2016-09-01

    The indoor calibration of brightness sensors requires extremely low values of irradiance in the most accurate and reproducible way. In this work the testing equipment of an ISO 17025 accredited laboratory for electrical testing, qualification and type approval of solar photovoltaic modules was modified in order to test the linearity of the instruments from a few mW/cm2 down to fractions of nW/cm2, corresponding to levels of simulated brightness from 6 to 19 mag/arcsec2. Sixteen Sky Quality Meters (SQMs) produced by Unihedron, a Canadian manufacturer, were tested, also assessing the impact of the ageing of their protective glasses on the calibration coefficients and the drift of the instruments. The instruments are in operation at measurement points and observatories at different sites and altitudes in Southern Switzerland, within the framework of OASI, the Environmental Observatory of Southern Switzerland. The authors present the results of the calibration campaign: linearity; brightness calibration, with and without protective glasses; transmittance measurement of the glasses; and spectral responsivity of the devices. A detailed uncertainty analysis is also provided, according to the ISO 17025 standard.

  9. Evaluation of site effects on ground motions based on equivalent linear site response analysis and liquefaction potential in Chennai, south India

    NASA Astrophysics Data System (ADS)

    Nampally, Subhadra; Padhy, Simanchal; Trupti, S.; Prabhakar Prasad, P.; Seshunarayana, T.

    2018-05-01

    We study local site effects with detailed geotechnical and geophysical site characterization to evaluate the site-specific seismic hazard for the seismic microzonation of the Chennai city in South India. A Maximum Credible Earthquake (MCE) of magnitude 6.0 is considered based on the available seismotectonic and geological information of the study area. We synthesized strong ground motion records for this target event using a stochastic finite-fault technique, based on a dynamic corner frequency approach, at different sites in the city, with the model parameters for the source, site, and path (attenuation) most appropriately selected for this region. We tested the influence of several model parameters on the characteristics of ground motion through simulations and found that stress drop largely influences both the amplitude and frequency of ground motion. To minimize its influence, we estimated stress drop after finite bandwidth correction, as expected from an M6 earthquake in the Indian peninsular shield, for accurately predicting the level of ground motion. Estimates of shear wave velocity averaged over the top 30 m of soil (VS30) are obtained from multichannel analysis of surface waves (MASW) at 210 sites at depths of 30 to 60 m below the ground surface. Using these VS30 values, along with the available geotechnical information and the synthetic ground motion database obtained, an equivalent linear one-dimensional site response analysis that approximates the nonlinear soil behavior within the linear analysis framework was performed using the computer program SHAKE2000. Fundamental natural frequency, Peak Ground Acceleration (PGA) at surface and rock levels, response spectrum at surface level for different damping coefficients, and amplification factors are presented at different sites of the city. A liquefaction study was done based on the VS30 and PGA values obtained. 
The major findings show that the northeast part of the city is characterized by (i) low VS30 values

  10. PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems

    NASA Technical Reports Server (NTRS)

    Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.

    1995-01-01

    PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.

  11. Mode switching and linear stability analysis of resonant acoustic flows

    NASA Astrophysics Data System (ADS)

    Panickar, Praveen

    Resonant acoustic flows occur in a wide variety of practical, aerospace-related applications and are a rich source of complex flow-physics. The primary concern associated with these types of flows is the high-amplitude fluctuating pressures associated with the resonant tones that could lead to sonic fatigue failure of sensitive components in the vicinity of such flows. However, before attempting to devise methods to suppress the resonant tones, it is imperative to understand the physics governing these flows in the hope that such an understanding will lead to more robust and effective suppression techniques. To this end, an in-depth study of various resonant acoustic flows was undertaken in this thesis, the main aim being to bring about a better understanding of such flows by revealing physically relevant information. Starting with the resonant acoustic mechanism in underexpanded jets from two-dimensional nozzles, it was shown that, for a variety of flow situations (geometries, shock-cell structures and orientations) in such jets, the nonlinear interaction density acted as a faithful precursor to a, hitherto unpredictable, spanwise instability mode switch. Following this, a study of the occurrence of, previously undocumented and theoretically unexpected, helical instabilities in subsonic impinging jets was undertaken. Using metrics from linear stability analysis, it was shown that the presence of the helical modes was justified. The results from this study on impinging jets are directly applicable to modern Stationary Take-Off and Vertical Landing (STOVL) aircraft that have twin, closely spaced exhausts. Finally, a novel technique that yielded dramatic suppression of resonant acoustic tones using high frequency excitation, in subsonic flows over open cavities, was investigated. 
Linear stability calculations of the experimentally measured baseline and excited velocity profiles showed that the instability of the high frequency excitation corresponded to a spatially

  12. Complete characterization of fourth-order symplectic integrators with extended-linear coefficients.

    PubMed

    Chin, Siu A

    2006-02-01

    The structure of symplectic integrators up to fourth order can be completely and analytically understood when the factorization (split) coefficients are related linearly but with a uniform nonlinear proportional factor. The analytic form of these extended-linear symplectic integrators greatly simplified proofs of their general properties and allowed easy construction of both forward and nonforward fourth-order algorithms with an arbitrary number of operators. Most fourth-order forward integrators can now be derived analytically from this extended-linear formulation without the use of symbolic algebra.
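
    A classic concrete instance of such a fourth-order split-operator factorization is the Forest-Ruth scheme, sketched below for a separable Hamiltonian H = p²/2 + V(q). This is a standard non-forward integrator of the family discussed, not the paper's own extended-linear derivation:

```python
import math

# Forest-Ruth fourth-order symplectic coefficients. With
# theta = 1/(2 - 2^(1/3)), the middle kick coefficient
# 1 - 2*theta equals -(2^(1/3))*theta, and both c and d sum to 1.
theta = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
c = [theta / 2, (1 - theta) / 2, (1 - theta) / 2, theta / 2]  # drifts
d = [theta, 1 - 2 * theta, theta, 0.0]                        # kicks

def step(q, p, h, force):
    """One fourth-order drift-kick step for H = p^2/2 + V(q)."""
    for ci, di in zip(c, d):
        q += ci * h * p          # drift: advance position
        p += di * h * force(q)   # kick: advance momentum
    return q, p

# Harmonic oscillator V(q) = q^2/2, force = -q; a symplectic scheme
# keeps the energy error bounded (here at fourth order in h).
q, p, h = 1.0, 0.0, 0.05
for _ in range(10000):
    q, p = step(q, p, h, lambda x: -x)
energy = 0.5 * p * p + 0.5 * q * q  # exact value is 0.5
print(f"energy after 10000 steps: {energy:.8f}")
```

    Note the linear relation among the coefficients (c is built from theta, the middle kick is a fixed multiple of theta), which is the kind of structure the extended-linear formulation parametrizes.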

  13. Double Linear Damage Rule for Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Halford, G.; Manson, S.

    1985-01-01

    The Double Linear Damage Rule (DLDR) is a method for use by structural designers to determine fatigue-crack-initiation life when a structure is subjected to unsteady, variable-amplitude cyclic loadings. The method calculates, in advance of service, how many loading cycles can be imposed on a structural component before a macroscopic crack initiates. The approach may eventually be used in the design of high performance systems and incorporated into design handbooks and codes.

  14. Study of free-piston Stirling engine driven linear alternators

    NASA Technical Reports Server (NTRS)

    Nasar, S. A.; Chen, C.

    1987-01-01

    The analysis, design and operation of a single-phase, single-slot tubular permanent magnet linear alternator are presented. Included are the no-load and on-load magnetic field investigation, the permanent magnet's leakage field analysis, parameter identification, design guidelines and an optimal design of a permanent magnet linear alternator. For analysis of the magnetic field, a simplified magnetic circuit is utilized. The analysis accounts for saturation, leakage and armature reaction.

  15. The non-linear power spectrum of the Lyman alpha forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arinyo-i-Prats, Andreu; Miralda-Escudé, Jordi; Viel, Matteo

    2015-12-01

    The Lyman alpha forest power spectrum has been measured on large scales by the BOSS survey in SDSS-III at z ∼ 2.3, has been shown to agree well with linear theory predictions, and has provided the first measurement of Baryon Acoustic Oscillations at this redshift. However, the power at small scales, affected by non-linearities, has not been well examined so far. We present results from a variety of hydrodynamic simulations to predict the redshift space non-linear power spectrum of the Lyα transmission for several models, testing the dependence on resolution and box size. A new fitting formula is introduced to facilitate the comparison of our simulation results with observations and other simulations. The non-linear power spectrum has a generic shape determined by a transition scale from linear to non-linear anisotropy, and a Jeans scale below which the power drops rapidly. In addition, we predict the two linear bias factors of the Lyα forest and provide a better physical interpretation of their values and redshift evolution. The dependence of these bias factors and the non-linear power on the amplitude and slope of the primordial fluctuations power spectrum, the temperature-density relation of the intergalactic medium, and the mean Lyα transmission, as well as the redshift evolution, is investigated and discussed in detail. A preliminary comparison to the observations shows that the predicted redshift distortion parameter is in good agreement with the recent determination of Blomqvist et al., but the density bias factor is lower than observed. We make all our results publicly available in the form of tables of the non-linear power spectrum that is directly obtained from all our simulations, and parameters of our fitting formula.

  16. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with a weighted least-squares regression algorithm.
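
    The weighting rule can be illustrated with NumPy on synthetic calibration data (hypothetical response factor and noise model): `numpy.polyfit` multiplies each residual by its weight before squaring, so passing w = 1/x applies the 1/x² weighting of squared residuals discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration standards where the response noise is
# proportional to concentration (sigma ∝ x): the case calling for
# a 1/x^2 weighting factor.
x = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
y = 0.02 * x + rng.normal(0.0, 0.002 * x)  # ~10% relative noise

# polyfit minimizes sum((w_i * (y_i - f(x_i)))**2), so w = 1/x
# weights each squared residual by 1/x^2.
slope_w, icept_w = np.polyfit(x, y, 1, w=1.0 / x)
slope_u, icept_u = np.polyfit(x, y, 1)  # unweighted, for contrast

print(f"weighted slope {slope_w:.4f}, unweighted slope {slope_u:.4f}")
```

    The unweighted fit lets the highest standards dominate, degrading accuracy at the low end of the curve; the 1/x² fit balances the relative error across the range.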

  17. Large power factor and anomalous Hall effect and their correlation with observed linear magneto resistance in Co-doped Bi2Se3 3D topological insulator

    NASA Astrophysics Data System (ADS)

    Singh, Rahul; Shukla, K. K.; Kumar, A.; Okram, G. S.; Singh, D.; Ganeshan, V.; Lakhani, Archana; Ghosh, A. K.; Chatterjee, Sandip

    2016-09-01

    Magnetoresistance (MR), thermopower, magnetization and Hall effect measurements have been performed on Co-doped Bi2Se3 topological insulators. The undoped sample shows the maximum MR; destructive interference due to a π-Berry phase leads to a decrease of MR. As Co is doped, the linearity in MR increases. The observed MR of Bi2Se3 can be explained with the classical model. The low temperature MR behavior of Co-doped samples cannot be explained with the same model, but can be explained with the quantum linear MR model. Magnetization behavior indicates the establishment of ferromagnetic ordering with Co doping. Hall effect data also support the establishment of ferromagnetic ordering in Co-doped Bi2Se3 samples by showing the anomalous Hall effect. Furthermore, when spectral weight suppression is insignificant, Bi2Se3 behaves as a dilute magnetic semiconductor. Moreover, the maximum power factor is observed when time reversal symmetry (TRS) is maintained. As the TRS is broken, the power factor decreases, which indicates that with the rise of the Dirac cone above the Fermi level the anomalous Hall effect and linearity in MR increase and the power factor decreases.

  18. Estimating linear effects in ANOVA designs: the easy way.

    PubMed

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
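
    The linear-contrast approach the note advocates can be sketched on toy repeated-measures data (the participant count, levels, and true slope below are illustrative assumptions): testing the per-participant contrast scores against zero is equivalent to the linear trend test in a repeated measures ANOVA, while the per-participant slopes give the effect size in original units.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy data: 20 participants x 4 distance levels, with an assumed
# true linear trend of -10 ms per level plus participant noise.
levels = np.array([1.0, 2.0, 3.0, 4.0])
rt = 500 - 10 * levels + rng.normal(0, 15, size=(20, 4))

# Linear contrast weights: centered level codes.
w = levels - levels.mean()        # [-1.5, -0.5, 0.5, 1.5]
contrast = rt @ w                 # one contrast score per participant

# One-sample t test of the contrast against zero = linear trend test.
t, p = stats.ttest_1samp(contrast, 0.0)

# Per-participant slopes (Lorch & Myers / Fias et al. style):
# slope_i = sum(w * y_i) / sum(w^2) for centered weights.
slopes = contrast / (w @ w)
print(f"mean slope = {slopes.mean():.1f} ms/level, t = {t:.2f}, p = {p:.4f}")
```

    Because the weights are centered, the contrast score divided by the sum of squared weights is exactly the least-squares slope, linking the ANOVA contrast to the regression formulation in one line.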

  19. Applicability of hybrid linear ion trap-high resolution mass spectrometry and quadrupole-linear ion trap-mass spectrometry for mycotoxin analysis in baby food.

    PubMed

    Rubert, Josep; James, Kevin J; Mañes, Jordi; Soler, Carla

    2012-02-03

    Recent developments in mass spectrometers have created a paradoxical situation; different mass spectrometers are available, each of them with their specific strengths and drawbacks. Hybrid instruments try to unify several advantages in one instrument. In this study, two widely used hybrid instruments were compared: the hybrid quadrupole-linear ion trap mass spectrometer (QTRAP®) and the hybrid linear ion trap-high resolution mass spectrometer (LTQ-Orbitrap®). Both instruments were applied to detect the presence of 18 selected mycotoxins in baby food. Analytical parameters were validated according to 2002/657/EC. Limits of quantification (LOQs) obtained by the QTRAP® instrument ranged from 0.45 to 45 μg kg⁻¹, while the lower limits of quantification (LLOQs) obtained by the LTQ-Orbitrap® were 7-70 μg kg⁻¹. The correlation coefficients (r) in both cases were greater than 0.989. These values highlighted that the two instruments are complementary for the analysis of mycotoxins in baby food; while the QTRAP® reached the best sensitivity and selectivity, the LTQ-Orbitrap® allowed the identification of non-target and unknown compounds.

  20. A Review of CEFA Software: Comprehensive Exploratory Factor Analysis Program

    ERIC Educational Resources Information Center

    Lee, Soon-Mook

    2010-01-01

    CEFA 3.02 (Browne, Cudeck, Tateneni, & Mels, 2008) is a factor analysis computer program designed to perform exploratory factor analysis. It provides the main properties that are needed for exploratory factor analysis, namely a variety of factoring methods employing eight different discrepancy functions to be minimized to yield initial…

  1. Axial displacement of external and internal implant-abutment connection evaluated by linear mixed model analysis.

    PubMed

    Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo

    2015-01-01

    To analyze the axial displacement of external and internal implant-abutment connections after cyclic loading. Three groups were prepared: external abutments (Ext group), an internal tapered one-piece-type abutment (Int-1 group), and an internal tapered two-piece-type abutment (Int-2 group). Cyclic loading was applied to implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values (RTVs) were measured. Both a repeated measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles). However, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading. The Int-2 group's rate of axial displacement slowed after 100,000 cycles.

  2. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Santoso, Noviyanti; Wibowo, Wahyu

    2018-03-01

    Financial distress is an early stage before bankruptcy. Bankruptcies caused by financial distress can be seen from the financial statements of the company. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds prediction models of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) combined with a variable selection technique. The result of this research is that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability and model stability than the other models.
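
    A minimal LDA-versus-SVM comparison can be sketched in scikit-learn on synthetic stand-in data (not Indonesian financial statements, and omitting the stepwise variable-selection step the study combines with SVM):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for financial-ratio data: 300 firms, 10 ratios,
# two classes (distressed vs. healthy), 4 informative features.
X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=0)

accs = {}
for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("SVM", make_pipeline(StandardScaler(), SVC()))]:
    accs[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy {accs[name]:.3f}")
```

    Scaling before the SVM matters because the RBF kernel is distance-based; LDA is scale-equivariant and needs no such step.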

  3. Source apportionment of PAH in Hamilton Harbour suspended sediments: comparison of two factor analysis methods.

    PubMed

    Sofowote, Uwayemi M; McCarry, Brian E; Marvin, Christopher H

    2008-08-15

    A total of 26 suspended sediment samples collected over a 5-year period in Hamilton Harbour, Ontario, Canada and surrounding creeks were analyzed for a suite of polycyclic aromatic hydrocarbons and sulfur heterocycles. Hamilton Harbour sediments contain relatively high levels of polycyclic aromatic compounds and heavy metals due to emissions from industrial and mobile sources. Two receptor modeling methods using factor analyses were compared to determine the profiles and relative contributions of pollution sources to the harbor; these methods are principal component analyses (PCA) with multiple linear regression analysis (MLR) and positive matrix factorization (PMF). Both methods identified four factors and gave excellent correlation coefficients between predicted and measured levels of 25 aromatic compounds; both methods predicted similar contributions from coal tar/coal combustion sources to the harbor (19 and 26%, respectively). One PCA factor was identified as contributions from vehicular emissions (61%); PMF was able to differentiate vehicular emissions into two factors, one attributed to gasoline emissions sources (28%) and the other to diesel emissions sources (24%). Overall, PMF afforded better source identification than PCA with MLR. This work constitutes one of the few examples of the application of PMF to the source apportionment of sediments; the addition of sulfur heterocycles to the analyte list greatly aided in the source identification process.
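
    The PCA-with-MLR receptor modeling step can be sketched on a synthetic two-source mixing problem (hypothetical source profiles and contributions, not the Hamilton Harbour data): PCA identifies the factors, and regressing the total concentration on the factor scores is the MLR apportionment step.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Synthetic mixing model: 26 samples, 2 hypothetical sources with
# fixed profiles over 4 analytes, random contributions plus noise.
profiles = np.array([[0.6, 0.3, 0.1, 0.0],
                     [0.1, 0.2, 0.3, 0.4]])
contrib = rng.uniform(0.5, 2.0, size=(26, 2))
conc = contrib @ profiles + rng.normal(0, 0.02, size=(26, 4))

# PCA on the standardized concentration matrix extracts the factors;
# MLR of total concentration on the factor scores apportions it.
scores = PCA(n_components=2).fit_transform(
    (conc - conc.mean(axis=0)) / conc.std(axis=0))
total = conc.sum(axis=1)
mlr = LinearRegression().fit(scores, total)
r2 = mlr.score(scores, total)
print(f"variance of total concentration explained by 2 factors: {r2:.2f}")
```

    With two true sources, two retained components explain almost all of the variance in total concentration; in real apportionment work the regression coefficients (converted to absolute scores) give each factor's percentage contribution.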

  4. Monte Carlo simulation for Neptun 10 PC medical linear accelerator and calculations of output factor for electron beam

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Momennezhad, Mehdi; Hashemi, Seyed Mohammad

    2012-01-01

    Aim: Exact knowledge of dosimetric parameters is an essential pre-requisite of an effective treatment in radiotherapy. In order to fulfill this consideration, different techniques have been used, one of which is Monte Carlo simulation. Materials and methods: This study used the MCNP-4C code to simulate electron beams from the Neptun 10 PC medical linear accelerator. Output factors for 6, 8 and 10 MeV electrons applied to eleven different conventional fields were both measured and calculated. Results: The measurements were carried out with a Wellhofler-Scanditronix dose scanning system. Our findings revealed that output factors acquired by MCNP-4C simulation and the corresponding values obtained by direct measurements are in very good agreement. Conclusion: In general, the very good consistency of simulated and measured results is good proof that the goal of this work has been accomplished. PMID:24377010

  5. Study on magnetic circuit of moving magnet linear compressor

    NASA Astrophysics Data System (ADS)

    Xia, Ming; Chen, Xiaoping; Chen, Jun

    2015-05-01

    Moving magnet linear compressors are very popular in tactical miniature Stirling cryocoolers. The magnetic circuit of the LFC3600 moving magnet linear compressor, manufactured by the Kunming Institute of Physics, was studied in this work. Three approaches were applied in the analysis process: theoretical analysis, numerical calculation and experimental study. Formulas for the magnetic reluctance and magnetomotive force were given in the theoretical analysis model. The magnetic flux density and magnetic flux lines were analyzed in the numerical analysis model. A testing method was designed to measure the magnetic flux density of the linear compressor. When the piston of the motor was in the equilibrium position, the magnetic flux density reached its maximum value of 0.27 T. The results were almost equal to the ones from the numerical analysis.

  6. Physics Metacognition Inventory Part II: Confirmatory factor analysis and Rasch analysis

    NASA Astrophysics Data System (ADS)

    Taasoobshirazi, Gita; Bailey, MarLynn; Farley, John

    2015-11-01

    The Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. In one of our earlier studies, an exploratory factor analysis provided evidence of preliminary construct validity, revealing six components of students' metacognition when solving physics problems including knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. The college students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. However, the results of the exploratory factor analysis indicated that the questionnaire could be revised to improve its construct validity. The goal of this study was to revise the questionnaire and establish its construct validity through a confirmatory factor analysis. In addition, a Rasch analysis was applied to the data to better understand the psychometric properties of the inventory and to further evaluate the construct validity. Results indicated that the final, revised inventory is a valid, reliable, and efficient tool for assessing student metacognition for physics problem solving.

  7. Multiscale analysis of information dynamics for linear multivariate processes.

    PubMed

    Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele

    2016-08-01

    In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.

  8. Introducing linear functions: an alternative statistical approach

    NASA Astrophysics Data System (ADS)

    Nolan, Caroline; Herbert, Sandra

    2015-12-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be `threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to a concept-by-concept analysis of the mean test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
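
    The calculator capability the approach relies on, fitting a least-squares line to real-life data, amounts to the following (the data values here are invented for illustration):

```python
import numpy as np

# hypothetical class dataset: hours of screen time per day vs. hours of sleep
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([8.9, 8.2, 7.8, 7.1, 6.4, 6.0])

# least-squares line y = m*x + c, as a graphics or CAS calculator would fit it
m, c = np.polyfit(x, y, 1)
print(f"slope = {m:.3f}, intercept = {c:.3f}")
```

    The fitted slope and intercept then become concrete instances of the parameters of a linear function, grounded in data the students collected themselves.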

  9. The consistency of positive fully fuzzy linear system

    NASA Astrophysics Data System (ADS)

    Malkawi, Ghassan O.; Alfifi, Hassan Y.

    2017-11-01

    In this paper, the consistency of the fuzziness of the positive solution of the n × n fully fuzzy linear system (P - FFLS) is studied based on its associated linear system (P - ALS), which contains all the entries of the triangular fuzzy numbers of the original system but involves no fuzzy operations. The nature of the solution is distinguished among three cases: a fuzzy solution, a non-fuzzy solution and a fuzzy non-positive solution. Moreover, the analysis reveals that the P - ALS can also provide an infinite set of solutions. Numerical examples are presented to illustrate the proposed analysis.

  10. Confirmatory factor analysis of the Chinese Breast Cancer Screening Beliefs Questionnaire.

    PubMed

    Kwok, Cannas; Fethney, Judith; White, Kate

    2012-01-01

    Chinese women have been consistently reported as having low breast cancer screening practices. The Chinese Breast Cancer Screening Beliefs Questionnaire (CBCSB) was designed to assess Chinese Australian women's beliefs, knowledge, and attitudes toward breast cancer and screening practices. The objectives of the study were to confirm the factor structure of the CBCSB with a new, larger sample of immigrant Chinese Australian women and to report its clinical validity. A convenience sample of 785 Chinese Australian women was recruited from Chinese community organizations and shopping malls. Cronbach α was used to assess internal consistency reliability, and Amos v18 was used for confirmatory factor analysis. Clinical validity was assessed through linear regression using SPSS v18. The 3-factor structure of the CBCSB was confirmed, although the model required respecification to arrive at a suitable model fit as measured by the goodness-of-fit index (0.98), adjusted goodness-of-fit index (0.97), normed fit index (0.95), and root mean square error of approximation (0.031). Internal consistency reliability coefficients were satisfactory (>.6). Women who engaged in all 3 types of screening had more proactive attitudes to health checkups and perceived fewer barriers to mammographic screening. The CBCSB is a valid and reliable tool for assessing Chinese women's beliefs, knowledge, and attitudes about breast cancer and breast cancer screening practices. The CBCSB can be used for providing practicing nurses with insights into the provision of culturally sensitive breast health education.

  11. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
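
    The factor-of-N claim for unbiased, uncorrelated classifiers is easy to verify numerically. In this sketch the boundary errors are drawn as independent Gaussians, an idealization of the chapter's assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# each classifier places its decision boundary at the Bayes optimum plus an
# unbiased, uncorrelated error; averaging N outputs averages the offsets
sigma, N, trials = 1.0, 10, 200000
offsets = rng.normal(0.0, sigma, size=(trials, N))
single_var = offsets[:, 0].var()
ensemble_var = offsets.mean(axis=1).var()

# the 'added' error is proportional to this variance, so it shrinks by ~N
print(single_var / ensemble_var)
```

    With correlated or biased classifiers the reduction is smaller, which is exactly the regime the derived expressions quantify.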

  12. A Linear Electromagnetic Piston Pump

    NASA Astrophysics Data System (ADS)

    Hogan, Paul H.

    Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also had good agreement with FEA; at zero piston displacement, the magnetic circuit calculates optimized motor force within 10% of FEA in less than 1/1000 the computational time. This makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. Optimized results suggest that an improvement of 400% over the state-of-the-art power density is attainable with a net efficiency as high as 85%. This demonstrates that a linear electromagnetic piston pump has potential to serve as a more compact and efficient supply of fluid power for the human scale.

  13. Non-Linearity in Wide Dynamic Range CMOS Image Sensors Utilizing a Partial Charge Transfer Technique.

    PubMed

    Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan

    2009-01-01

    The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique has been carried out, and the relationship between dynamic range and the non-linearity is studied. The results show that the non-linearity is caused by two factors: the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, from which the error in the high illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, an increase in the saturation level of the photodiodes also increases the error in the high illumination region.

  14. Intrinsic noise analysis and stochastic simulation on transforming growth factor beta signal pathway

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Ouyang, Qi

    2010-10-01

    A typical biological cell lives in a small volume at room temperature; the noise effect on the cell signal transduction pathway may play an important role in its dynamics. Here, using the transforming growth factor-β signal transduction pathway as an example, we report our stochastic simulations of the dynamics of the pathway and introduce a linear noise approximation method to calculate the transient intrinsic noise of pathway components. We compare the numerical solutions of the linear noise approximation with the statistical results of chemical Langevin equations, and find that they are quantitatively in agreement with each other. When the transforming growth factor-β dose decreases to a low level, the time evolution of the noise fluctuation of the nuclear Smad2-Smad4 complex indicates an abnormal enhancement in the transient signal activation process.
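
    The linear noise approximation can be contrasted with exact stochastic simulation on a much simpler system than the pathway studied here. The sketch below uses a one-species birth-death process with made-up rates, for which the LNA's stationary mean and variance are both k/g:

```python
import numpy as np

rng = np.random.default_rng(2)

# birth-death process: 0 --k--> X, X --g*n--> 0 (a stand-in for one pathway species)
k, g = 50.0, 1.0

# linear noise approximation: stationary mean k/g and variance k/g (Fano factor 1)
lna_mean, lna_var = k / g, k / g

# exact Gillespie simulation, sampled after a burn-in period
n, t, samples = 0, 0.0, []
while t < 500.0:
    birth, death = k, g * n
    total = birth + death
    t += rng.exponential(1.0 / total)
    n += 1 if rng.random() * total < birth else -1
    if t > 50.0:
        samples.append(n)

sim = np.array(samples, dtype=float)
print(lna_mean, sim.mean())   # both near k/g
print(lna_var, sim.var())     # both near k/g
```

    For this linear system the LNA is exact at stationarity; for nonlinear pathways such as TGF-β signalling it is an approximation, which is why the paper checks it against chemical Langevin simulations.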

  15. Design and Analysis of a Novel Fully Decoupled Tri-axis Linear Vibratory Gyroscope with Matched Modes.

    PubMed

    Xia, Dunzhu; Kong, Lun; Gao, Haiyu

    2015-07-13

    We present in this paper a novel fully decoupled silicon micromachined tri-axis linear vibratory gyroscope. The proposed gyroscope structure is highly symmetrical and can be limited to an area of about 8.5 mm × 8.5 mm. It can differentially detect three axes' angular velocities at the same time. By elaborately arranging different beams, anchors and sensing frames, the drive and sense modes are fully decoupled from each other. Moreover, the quadrature error correction and frequency tuning functions are taken into consideration in the structure design for all the sense modes. Since there exists an unwanted in-plane rotational mode, theoretical analysis is implemented to eliminate it. To accelerate the mode matching process, the particle swarm optimization (PSO) algorithm is adopted and a frequency split of 149 Hz is first achieved by this method. Then, after two steps of manual adjustment of the springs' dimensions, the frequency gap is further decreased to 3 Hz. With the help of the finite element method (FEM) software ANSYS, the natural frequencies of drive, yaw, and pitch/roll modes are found to be 14,017 Hz, 14,018 Hz and 14,020 Hz, respectively. The cross-axis effect and scale factor of each mode are also simulated. All the simulation results are in good accordance with the theoretical analysis, which means the design is effective and worthy of further investigation on the integration of tri-axis accelerometers on the same single chip to form an inertial measurement unit.

  16. A comparison study on detection of key geochemical variables and factors through three different types of factor analysis

    NASA Astrophysics Data System (ADS)

    Hoseinzade, Zohre; Mokhtari, Ahmad Reza

    2017-10-01

    Large numbers of variables have been measured to explain different phenomena. Factor analysis has widely been used in order to reduce the dimension of datasets. Additionally, the technique has been employed to highlight underlying factors hidden in a complex system. As geochemical studies benefit from multivariate assays, application of this method is widespread in geochemistry. However, the conventional protocols for implementing factor analysis have some drawbacks in spite of their advantages. In the present study, a geochemical dataset including 804 soil samples, collected from a mining area in central Iran in order to search for MVT type Pb-Zn deposits, was used to compare three types of factor analysis. Routine factor analysis, sequential factor analysis, and staged factor analysis were applied to the dataset, after opening the data with an alr (additive log-ratio) transformation, to extract the mineralization factor. A comparison between these methods indicated that sequential factor analysis most clearly revealed the MVT paragenesis elements in surface samples, with nearly 50% of the variation in F1. In addition, staged factor analysis gave acceptable results while being easy to apply: it could detect mineralization-related elements, and it assigns larger factor loadings to these elements, resulting in a more pronounced expression of mineralization.

  17. Job compensable factors and factor weights derived from job analysis data.

    PubMed

    Chi, Chia-Fen; Chang, Tin-Chang; Hsia, Ping-Ling; Song, Jen-Chieh

    2007-06-01

    Government data on 1,039 job titles in Taiwan were analyzed to assess possible relationships between job attributes and compensation. For each job title, 79 specific variables in six major classes (required education and experience, aptitude, interest, work temperament, physical demands, task environment) were coded to derive the statistical predictors of wage for managers, professionals, technical, clerical, service, farm, craft, operatives, and other workers. Of the 79 variables, only 23 significantly related to pay rate were subjected to a factor and multiple regression analysis for predicting monthly wages. Given the heterogeneous nature of collected job titles, a 4-factor solution (occupational knowledge and skills, human relations skills, work schedule hardships, physical hardships) explaining 43.8% of the total variance but predicting only 23.7% of the monthly pay rate was derived. On the other hand, multiple regression with 9 job analysis items (required education, professional training, professional certificate, professional experience, coordinating, leadership and directing, demand on hearing, proportion of shift working indoors, outdoors and others, rotating shift) better predicted pay and explained 32.5% of the variance. A direct comparison of factors and subfactors of job evaluation plans indicated mental effort and responsibility (accountability) had not been measured with the current job analysis data. Cross-validation of job evaluation factors and ratings with the wage rates is required to calibrate both.

  18. Analysis and correction of linear optics errors, and operational improvements in the Indus-2 storage ring

    NASA Astrophysics Data System (ADS)

    Husain, Riyasat; Ghodke, A. D.

    2017-08-01

    Estimation and correction of the optics errors in an operational storage ring is always vital to achieve the design performance. To achieve this task, the most suitable and widely used technique, called linear optics from closed orbit (LOCO) is used in almost all storage ring based synchrotron radiation sources. In this technique, based on the response matrix fit, errors in the quadrupole strengths, beam position monitor (BPM) gains, orbit corrector calibration factors etc. can be obtained. For correction of the optics, suitable changes in the quadrupole strengths can be applied through the driving currents of the quadrupole power supplies to achieve the desired optics. The LOCO code has been used at the Indus-2 storage ring for the first time. The estimation of linear beam optics errors and their correction to minimize the distortion of linear beam dynamical parameters by using the installed number of quadrupole power supplies is discussed. After the optics correction, the performance of the storage ring is improved in terms of better beam injection/accumulation, reduced beam loss during energy ramping, and improvement in beam lifetime. It is also useful in controlling the leakage in the orbit bump required for machine studies or for commissioning of new beamlines.

  19. Chaos as an intermittently forced linear system.

    PubMed

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kaiser, Eurika; Kutz, J Nathan

    2017-05-30

    Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. This analysis is applied to the Lorenz system and real-world examples including Earth's magnetic field reversal and measles outbreaks. In each case, forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase space regions where the dynamics are approximately linear from those that are strongly nonlinear. The huge amount of data generated in fields like neuroscience or finance calls for effective strategies that mine data to reveal underlying dynamics. Here Brunton et al. develop a data-driven technique to analyze chaotic systems and predict their dynamics in terms of a forced linear model.
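
    The HAVOK pipeline (delay embedding, SVD, then a linear fit in the leading delay coordinates with the last coordinate treated as forcing) can be sketched as follows. This is a bare-bones, discrete-time illustration on the Lorenz system; the embedding depth, rank and step size are arbitrary choices, not the paper's values:

```python
import numpy as np

def lorenz_traj(n, dt=0.01):
    """Integrate the Lorenz system with RK4 and return the x-coordinate series."""
    def f(s):
        x, y, z = s
        return np.array([10.0*(y - x), x*(28.0 - z) - y, x*y - (8.0/3.0)*z])
    s = np.array([1.0, 1.0, 1.0])
    out = np.empty(n)
    for i in range(n):
        k1 = f(s); k2 = f(s + 0.5*dt*k1); k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
        s = s + dt/6.0*(k1 + 2*k2 + 2*k3 + k4)
        out[i] = s[0]
    return out

x = lorenz_traj(5000)

# delay embedding: Hankel matrix of lagged windows
q = 100
H = np.array([x[i:i + q] for i in range(len(x) - q)]).T   # shape (q, m)

# SVD yields the delay coordinates v (rows of Vt)
U, S, Vt = np.linalg.svd(H, full_matrices=False)
r = 15
v = Vt[:r]

# HAVOK-style fit: v_1..v_{r-1} evolve linearly, with v_r acting as forcing
X, Y = v[:, :-1].T, v[:r - 1, 1:].T
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ coef
r2 = 1 - ((Y - pred)**2).sum() / ((Y - Y.mean(0))**2).sum()
print(f"one-step R^2 of the forced linear model: {r2:.4f}")
```

    The interesting object in the full method is the forcing coordinate v_r itself, whose intermittent bursts flag the strongly nonlinear episodes (lobe switching in the Lorenz case).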

  20. Non-linear behavior of fiber composite laminates

    NASA Technical Reports Server (NTRS)

    Hashin, Z.; Bagchi, D.; Rosen, B. W.

    1974-01-01

    The non-linear behavior of fiber composite laminates which results from lamina non-linear characteristics was examined. The analysis uses a Ramberg-Osgood representation of the lamina transverse and shear stress strain curves in conjunction with deformation theory to describe the resultant laminate non-linear behavior. A laminate having an arbitrary number of oriented layers and subjected to a general state of membrane stress was treated. Parametric results and comparison with experimental data and prior theoretical results are presented.

  1. Analysis of a Linear System for Variable-Thrust Control in the Terminal Phase of Rendezvous

    NASA Technical Reports Server (NTRS)

    Hord, Richard A.; Durling, Barbara J.

    1961-01-01

    A linear system for applying thrust to a ferry vehicle in the terminal phase of rendezvous with a satellite is analyzed. This system requires that the ferry thrust vector per unit mass be variable and equal to a suitable linear combination of the measured position and velocity vectors of the ferry relative to the satellite. The variations of the ferry position, speed, acceleration, and mass ratio are examined for several combinations of the initial conditions and two basic control parameters analogous to the undamped natural frequency and the fraction of critical damping. Upon making a desirable selection of one control parameter and requiring minimum fuel expenditure for given terminal-phase initial conditions, a simplified analysis in one dimension practically fixes the choice of the remaining control parameter. The system can be implemented by an automatic controller or by a pilot.
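
    The control law described, thrust per unit mass as a linear combination of relative position and velocity, can be sketched in one dimension. The gains are written in terms of the two control parameters the abstract names (natural frequency and fraction of critical damping); all numerical values are illustrative, not from the report:

```python
# per-unit-mass thrust command: a = -(wn^2)*x - 2*zeta*wn*v
# one-dimensional sketch; orbital coupling terms are neglected
wn, zeta = 0.05, 1.0             # control parameters: natural frequency, damping fraction
dt, x, v = 0.1, 10000.0, -20.0   # illustrative initial range [m] and closing speed [m/s]

for _ in range(20000):           # 2000 s of flight
    a = -wn**2 * x - 2.0 * zeta * wn * v
    v += a * dt
    x += v * dt

print(abs(x), abs(v))            # both driven toward zero: a soft rendezvous
```

    Critical damping (zeta = 1) gives an approach without overshoot, which is why the choice of the damping parameter interacts with fuel expenditure in the report's analysis.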

  2. Linear Inverse Modeling and Scaling Analysis of Drainage Inventories.

    NASA Astrophysics Data System (ADS)

    O'Malley, C.; White, N. J.

    2016-12-01

    It is widely accepted that the stream power law can be used to describe the evolution of longitudinal river profiles. Over the last 5 years, this phenomenological law has been used to develop non-linear and linear inversion algorithms that enable uplift rate histories to be calculated by minimizing the misfit between observed and calculated river profiles. Substantial, continent-wide inventories of river profiles have been successfully inverted to yield uplift as a function of time and space. Erosional parameters can be determined by independent geological calibration. Our results help to illuminate empirical scaling laws that are well known to the geomorphological community. Here we present an analysis of river profiles from Asia. The timing and magnitude of uplift events across Asia, including the Himalayas and Tibet, have long been debated. River profile analyses have played an important role in clarifying the timing of uplift events. However, no attempt has yet been made to invert a comprehensive database of river profiles from the entire region. Asian rivers contain information which allows us to investigate putative uplift events quantitatively and to determine a cumulative uplift history for Asia. Long wavelength shapes of river profiles are governed by regional uplift and moderated by erosional processes. These processes are parameterised using the stream power law in the form of an advective-diffusive equation. Our non-negative, least-squares inversion scheme was applied to an inventory of 3722 Asian river profiles. We calibrate the key erosional parameters by predicting solid sedimentary flux for a set of Asian rivers and by comparing the flux predictions against published depositional histories for major river deltas. The resultant cumulative uplift history is compared with a range of published geological constraints for uplift and palaeoelevation. We have found good agreement for many regions across Asia. Surprisingly, single values of erosional

  3. Analysis of the effects of geological and geomorphological factors on earthquake triggered landslides using artificial neural networks (ANN)

    NASA Astrophysics Data System (ADS)

    Kawabata, D.; Bandibas, J.

    2007-12-01

    The occurrence of landslides is the result of the interaction of complex and diverse environmental factors. Geomorphic and geologic features, rock types and vegetative cover are important base factors of landslide occurrence. However, determining the relationship between these factors and landslide occurrence is very difficult using conventional mathematical analysis, so the use of an advanced computing technique for this kind of analysis is very important. The artificial neural network (ANN) has recently been included in the list of analytical tools for a wide range of applications in the natural sciences research fields. One of the advantages of using an ANN for pattern recognition is that it can handle data at any measurement scale, from nominal and ordinal to interval and ratio, and any form of data distribution (Wang et al., 1995). In addition, it can easily handle qualitative variables, making it widely used in the integrated analysis of spatial data from multiple sources for prediction and classification. This study focuses on the definition of the relationship between geological factors and landslide occurrence using artificial neural networks. The study also examines the effect of the DTM used (e.g. ASTER DTM, ALSM, DTMs digitized from paper maps, and digital photogrammetric measurement data). The main aim of the study is to generate a landslide susceptibility index map from the relationship defined using the ANN. Landslide data from the Chuetsu region, where the 2004 earthquake triggered many landslides, were used in this research. The initial results of the study showed that the ANN is more accurate in defining the relationship between geological and geomorphological factors and landslide occurrence. It also determined the best combination of geological and geomorphological factors that is directly related to landslide occurrence.

  4. Extracting factors for interest rate scenarios

    NASA Astrophysics Data System (ADS)

    Molgedey, L.; Galic, E.

    2001-04-01

    Factor based interest rate models are widely used for risk managing purposes, for option pricing and for identifying and capturing yield curve anomalies. The movements of a term structure of interest rates are commonly assumed to be driven by a small number of orthogonal factors such as SHIFT, TWIST and BUTTERFLY (BOW). These factors are usually obtained by a Principal Component Analysis (PCA) of historical bond prices (interest rates). Although PCA diagonalizes the covariance matrix of either the interest rates or the interest rate changes, it does not use both covariance matrices simultaneously. Furthermore, higher linear and nonlinear correlations are neglected. These correlations, as well as the mean reverting properties of the interest rates, become crucial if one is interested in a longer time horizon (infrequent hedging or trading). We will show that Independent Component Analysis (ICA) is a more appropriate tool than PCA, since ICA uses the covariance matrix of the interest rates as well as the covariance matrix of the interest rate changes simultaneously. Additionally, higher linear and nonlinear correlations may be easily incorporated. The resulting factors are uncorrelated for various time delays, approximately independent but nonorthogonal. This is in contrast to the factors obtained from the PCA, which are orthogonal and uncorrelated for identical times only. Although factors from the ICA are nonorthogonal, it is sufficient to consider only a few factors in order to explain most of the variation in the original data. Finally, we will present examples showing that ICA-based hedges outperform PCA-based hedges, specifically if the portfolio is sensitive to structural changes of the yield curve.
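
    The PCA baseline that the article argues against can be made concrete with synthetic data. The sketch below generates yield-curve changes from made-up SHIFT, TWIST and BUTTERFLY loadings and recovers the factors' dominance from the eigendecomposition of the covariance matrix; it does not implement the proposed ICA:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic term structure: 10 maturities driven by shift, twist, butterfly
mats = np.linspace(1, 10, 10)
shift = np.ones(10)
twist = (mats - mats.mean()) / mats.std()
bow = twist**2 - (twist**2).mean()

# factor returns with decreasing volatilities, plus small idiosyncratic noise
n = 5000
f = rng.normal(size=(n, 3)) * [0.10, 0.05, 0.02]
dr = f @ np.vstack([shift, twist, bow]) + 0.001 * rng.normal(size=(n, 10))

# PCA of rate changes: eigendecomposition of the covariance matrix
evals, evecs = np.linalg.eigh(np.cov(dr.T))
explained = np.sort(evals)[::-1] / evals.sum()
print(explained[:3])   # three components dominate, ordered shift > twist > bow
```

    Because PCA here diagonalizes only the covariance of the changes at identical times, any lagged or nonlinear structure in the factors would be invisible to it, which is the gap ICA is meant to fill.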

  5. Q-mode versus R-mode principal component analysis for linear discriminant analysis (LDA)

    NASA Astrophysics Data System (ADS)

    Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz

    2017-05-01

    Much of the literature applies Principal Component Analysis (PCA) as a preliminary visualization method, a variable construction method, or both. The focus of PCA can be on the samples (R-mode PCA) or the variables (Q-mode PCA). Traditionally, R-mode PCA has been the usual approach to reduce high-dimensional data before the application of Linear Discriminant Analysis (LDA) to solve classification problems. The output from PCA is composed of two new matrices, known as the loadings and scores matrices. Each matrix can then be used to produce a plot: the loadings plot aids identification of important variables, whereas the scores plot presents the spatial distribution of samples on new axes, also known as Principal Components (PCs). Fundamentally, the scores matrix is always the input for building the classification model. A recent paper uses Q-mode PCA, but the focus of the analysis was not on the variables but instead on the samples. As a result, the authors exchanged the use of the loadings and scores plots: clustering of samples was studied using the loadings plot, whereas the scores plot was used to identify important manifest variables. Therefore, the aim of this study is to statistically validate the proposed practice. Evaluation is based on the external error of LDA models as a function of the number of PCs. In addition, bootstrapping was conducted to evaluate the external error of each of the LDA models. Results show that LDA models produced from the PCs of R-mode PCA give logical performance, and the matching external errors are unbiased, whereas the models produced with Q-mode PCA show the opposite. We therefore conclude that PCs produced from Q-mode PCA are not statistically stable and should not be applied to problems of classifying samples, but only of classifying variables. We hope this paper will provide some insights into these disputable issues.
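
    The conventional R-mode pipeline the study endorses, PCA scores feeding a two-class LDA, can be sketched with plain numpy on synthetic data (dimensions, class separation and the number of retained PCs are all arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# two classes of high-dimensional samples (rows = samples, columns = variables)
n, p = 100, 30
X0 = rng.normal(0.0, 1.0, size=(n, p))
X1 = rng.normal(0.8, 1.0, size=(n, p))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# R-mode PCA: centre the variables, project samples onto leading PCs (scores matrix)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T          # first 5 PCs become the LDA inputs

# two-class LDA on the scores: w = pooled_cov^-1 (mu1 - mu0)
m0, m1 = scores[y == 0].mean(0), scores[y == 1].mean(0)
Sw = np.cov(scores[y == 0].T) + np.cov(scores[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
z = scores @ w
thresh = 0.5 * (z[y == 0].mean() + z[y == 1].mean())
acc = ((z > thresh).astype(int) == y).mean()
print(f"training accuracy with scores from R-mode PCA: {acc:.2f}")
```

    The study's point is that swapping in the Q-mode loadings matrix at the `scores` step breaks the statistical stability of the resulting LDA model, even though the plots may look superficially similar.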

  6. Using BMDP and SPSS for a Q factor analysis.

    PubMed

    Tanner, B A; Koning, S M

    1980-12-01

    While Euclidean distances and Q factor analysis may sometimes be preferred to correlation coefficients and cluster analysis for developing a typology, commercially available software does not always facilitate their use. Commands are provided for using BMDP and SPSS in a Q factor analysis with Euclidean distances.

  7. Pseudo-second order models for the adsorption of safranin onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth

    2007-04-02

    Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second order models of Ho, of Sobkowsk and Czerwinski, of Blanchard et al. and of Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and the Ritchie pseudo-second order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho had similar ideas on the pseudo-second order model, but with different assumptions. The best fit of the experimental data to Ho's pseudo-second order expression by the linear and non-linear regression methods showed that Ho's model was the better kinetic expression when compared with the other pseudo-second order kinetic expressions.
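
    Why linearization can distort the fitted parameters is easy to demonstrate. The sketch below fits Ho's pseudo-second-order form q(t) = qe^2·k·t / (1 + qe·k·t) to synthetic noisy data both ways, using a brute-force grid search in place of a proper non-linear optimizer; all rates and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ho's pseudo-second-order model: q(t) = qe^2*k*t / (1 + qe*k*t)
qe_true, k_true = 40.0, 0.01
t = np.linspace(2, 120, 15)
q = qe_true**2 * k_true * t / (1 + qe_true * k_true * t) * (1 + 0.03 * rng.standard_normal(15))

# linear method: regress t/q on t  ->  t/q = 1/(k*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin, k_lin = 1.0 / slope, slope**2 / intercept

# non-linear method: least squares over a parameter grid on the untransformed data
qes = np.linspace(20, 60, 201)
ks = np.linspace(0.002, 0.03, 201)
sse = [((qe**2 * k * t / (1 + qe * k * t) - q)**2).sum() for qe in qes for k in ks]
i = int(np.argmin(sse))
qe_nl, k_nl = qes[i // 201], ks[i % 201]

print(qe_lin, k_lin)   # the t/q transform reweights the errors
print(qe_nl, k_nl)     # the direct fit minimizes error in q itself
```

    The two methods minimize different error structures (errors in t/q versus errors in q), which is the core of the article's argument for non-linear regression.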

  8. Linearized-moment analysis of the temperature jump and temperature defect in the Knudsen layer of a rarefied gas.

    PubMed

    Gu, Xiao-Jun; Emerson, David R

    2014-06-01

    Understanding the thermal behavior of a rarefied gas remains a fundamental problem. In the present study, we investigate the predictive capabilities of the regularized 13 and 26 moment equations. In this paper, we consider low-speed problems with small gradients, and to simplify the analysis, a linearized set of moment equations is derived to explore a classic temperature problem. Analytical solutions obtained for the linearized 26 moment equations are compared with available kinetic models and can reliably capture all qualitative trends for the temperature-jump coefficient and the associated temperature defect in the thermal Knudsen layer. In contrast, the linearized 13 moment equations lack the necessary physics to capture these effects and consistently underpredict kinetic theory. The deviation from kinetic theory for the 13 moment equations increases significantly for specular reflection of gas molecules, whereas the 26 moment equations compare well with results from kinetic theory. To improve engineering analyses, expressions for the effective thermal conductivity and Prandtl number in the Knudsen layer are derived with the linearized 26 moment equations.

  9. A Spreadsheet for a 2 x 3 x 2 Log-Linear Analysis. AIR 1991 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Saupe, Joe L.

    This paper describes a personal computer spreadsheet set up to carry out hierarchical log-linear analyses, a type of analysis useful for institutional research into multidimensional frequency tables formed from categorical variables such as faculty rank, student class level, gender, or retention status. The spreadsheet provides a concrete vehicle…

  10. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  11. STICAP: A linear circuit analysis program with stiff systems capability. Volume 1: Theory manual. [network analysis

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1975-01-01

    STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.

  12. Small field detector correction factors: effects of the flattening filter for Elekta and Varian linear accelerators

    PubMed Central

    Liu, Paul Z.Y.; Lee, Christopher; McKenzie, David R.; Suchowerska, Natalka

    2016-01-01

    Flattening filter-free (FFF) beams are becoming the preferred beam type for stereotactic radiosurgery (SRS) and stereotactic ablative radiation therapy (SABR), as they enable an increase in dose rate and a decrease in treatment time. This work assesses the effects of the flattening filter on small field output factors for 6 MV beams generated by both Elekta and Varian linear accelerators, and determines differences between detector response in flattened (FF) and FFF beams. Relative output factors were measured with a range of detectors (diodes, ionization chambers, radiochromic film, and microDiamond) and referenced to the relative output factors measured with an air core fiber optic dosimeter (FOD), a scintillation dosimeter developed at Chris O'Brien Lifehouse, Sydney. Small field correction factors were generated for both FF and FFF beams. Diode-measured detector response was compared with a recently published mathematical relation to predict diode response corrections in small fields. The effect of flattening filter removal on detector response was quantified using a ratio of relative detector responses in FFF and FF fields of the same field size. The removal of the flattening filter was found to have a small but measurable effect on ionization chamber response, with maximum deviations of less than ±0.9% across all field sizes measured. Solid-state detectors showed an increased dependence on the flattening filter of up to ±1.6%. Measured diode response was within ±1.1% of the published mathematical relation for all fields up to 30 mm, independent of linac type and of the presence or absence of a flattening filter. For 6 MV beams, detector correction factors are interchangeable between the FF and FFF modes of a linac, provided that an additional uncertainty of up to ±1.6% is accepted. PACS number(s): 87.55.km, 87.56.bd, 87.56.Da PMID:27167280

  13. The application of two-step linear temperature program to thermal analysis for monitoring the lipid induction of Nostoc sp. KNUA003 in large scale cultivation.

    PubMed

    Kang, Bongmun; Yoon, Ho-Sung

    2015-02-01

    Recently, microalgae have been considered a source of renewable fuel because their production is nonseasonal and can take place on nonarable land. Despite these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algal productivity and in compositional analysis, especially of the total lipid content. Thus, there is considerable interest in the accurate determination of total lipid content during the biotechnological process. For these reasons, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without pretreatment. However, these methods have difficulty measuring the total lipid content of wet-form microalgae obtained from large-scale production. In the present study, thermal analysis with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C by Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved at 310-351 °C (HE) and the total lipid content of wet Nostoc cells cultivated in a raceway. A linear relationship was found between the HE value and the total lipid content of Nostoc sp. KNUA003; in particular, this relationship accounted for 98% of the variation between the HE value and the total lipid content of the tested microorganism. Based on this relationship, the total lipid content converted from the heat evolved by wet Nostoc sp. KNUA003 can be used to monitor lipid induction in large-scale cultivation. Copyright © 2014 Elsevier Inc. All rights reserved.
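
    The monitoring step described above amounts to fitting a least-squares calibration line relating the heat evolved (HE) to total lipid content. A minimal sketch of such a calibration, using made-up HE/lipid pairs rather than the paper's data:

    ```python
    # Least-squares calibration line relating heat evolved (HE) to lipid content.
    # The HE/lipid pairs below are illustrative placeholders, not the paper's data.

    def fit_line(x, y):
        """Ordinary least squares for y = slope * x + intercept."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        return slope, my - slope * mx

    he = [10.0, 20.0, 30.0, 40.0]           # heat evolved (arbitrary units)
    lipid = [7.0, 12.0, 17.0, 22.0]         # total lipid content (% dry weight)
    slope, intercept = fit_line(he, lipid)  # exactly linear here: slope 0.5, intercept 2.0
    predict = lambda h: slope * h + intercept
    ```

    Once calibrated, `predict` converts an HE measurement from a new wet-cell sample into an estimated lipid content, which is the conversion the abstract proposes for large-scale monitoring.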

  14. Design and analysis of an unconventional permanent magnet linear machine for energy harvesting

    NASA Astrophysics Data System (ADS)

    Zeng, Peng

    This Ph.D. dissertation proposes an unconventional high-power-density linear electromagnetic kinetic energy harvester and high-performance two-stage interface power electronics that maintain maximum power extraction from the energy source and charge the Li-ion battery load with constant current. The proposed machine architecture is composed of a double-sided flat-type silicon steel stator with winding slots, a permanent magnet mover, coil windings, a linear motion guide, and an adjustable spring bearing. The unconventional aspect of the design is that the NdFeB magnet bars in the mover are placed with their magnetic fields in the horizontal direction instead of the vertical direction, with like magnetic poles facing each other. The derived magnetic equivalent circuit model shows that the average air-gap flux density of the novel topology is as high as 0.73 T, a 17.7% improvement over that of the conventional topology at the given geometric dimensions of the proof-of-concept machine; improved output voltage and power follow. The dynamic model of the linear generator is also developed, and analytical equations for the maximum output power are derived for driving vibrations with amplitudes equal to, smaller than, and larger than the relative displacement between the mover and the stator of the machine. Furthermore, a finite element analysis (FEA) model has been simulated to confirm the derived analytical results and the improved power generation capability. An optimization framework is also explored to extend the approach to multi-degree-of-freedom (n-DOF) vibration-based linear energy harvesting devices. Moreover, a boost-buck cascaded switch-mode converter with a current controller is designed to extract the maximum power from the harvester and charge the Li-ion battery with trickle current, and a maximum power point tracking (MPPT) algorithm is proposed and optimized for low-frequency driving vibrations. Finally, a proof

  15. What School Psychologists Need to Know about Factor Analysis

    ERIC Educational Resources Information Center

    McGill, Ryan J.; Dombrowski, Stefan C.

    2017-01-01

    Factor analysis is a versatile class of psychometric techniques used by researchers to provide insight into the psychological dimensions (factors) that may account for the relationships among variables in a given dataset. The primary goal of a factor analysis is to determine a more parsimonious set of variables (i.e., fewer than the number of…

  16. Item Factor Analysis: Current Approaches and Future Directions

    ERIC Educational Resources Information Center

    Wirth, R. J.; Edwards, Michael C.

    2007-01-01

    The rationale underlying factor analysis applies to continuous and categorical variables alike; however, the models and estimation methods for continuous (i.e., interval or ratio scale) data are not appropriate for item-level data that are categorical in nature. The authors provide a targeted review and synthesis of the item factor analysis (IFA)…

  17. Q-Type Factor Analysis of Healthy Aged Men.

    ERIC Educational Resources Information Center

    Kleban, Morton H.

    Q-type factor analysis was used to re-analyze baseline data collected in 1957, on 47 men aged 65-91. Q-type analysis is the use of factor methods to study persons rather than tests. Although 550 variables were originally studied involving psychiatry, medicine, cerebral metabolism and chemistry, personality, audiometry, dichotic and diotic memory,…

  18. Determinants of linear judgment: a meta-analysis of lens model studies.

    PubMed

    Karelaia, Natalia; Hogarth, Robin M

    2008-05-01

    The mathematical representation of E. Brunswik's (1952) lens model has been used extensively to study human judgment and provides a unique opportunity to conduct a meta-analysis of studies that covers roughly 5 decades. Specifically, the authors analyzed statistics of the "lens model equation" (L. R. Tucker, 1964) associated with 249 different task environments obtained from 86 articles. On average, fairly high levels of judgmental achievement were found, and people were seen to be capable of achieving similar levels of cognitive performance in noisy and predictable environments. Further, the effects of task characteristics that influence judgment (numbers and types of cues, inter-cue redundancy, function forms and cue weights in the ecology, laboratory versus field studies, and experience with the task) were identified and estimated. A detailed analysis of learning studies revealed that the most effective form of feedback was information about the task. The authors also analyzed empirically under what conditions the application of bootstrapping--or replacing judges by their linear models--is advantageous. Finally, the authors note shortcomings of the kinds of studies conducted to date, limitations in the lens model methodology, and possibilities for future research. © 2008 APA, all rights reserved.
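
    Tucker's lens model equation decomposes judgmental achievement into a modeled component and a residual component: r_a = G·R_e·R_s + C·√(1−R_e²)·√(1−R_s²), where G is the correlation between the linear models of the ecology and the judge, R_e is environmental predictability, R_s is the judge's consistency, and C is the residual correlation. A minimal sketch with illustrative parameter values (not figures from the meta-analysis):

    ```python
    import math

    def lens_model_achievement(G, Re, Rs, C):
        """Tucker's lens model equation for judgmental achievement r_a.
        G  = correlation between the linear models of ecology and judge,
        Re = environmental predictability, Rs = judge's consistency,
        C  = correlation between the residuals of the two linear models."""
        return G * Re * Rs + C * math.sqrt(1 - Re**2) * math.sqrt(1 - Rs**2)

    # Illustrative values only, not taken from the meta-analysis:
    ra = lens_model_achievement(G=0.9, Re=0.8, Rs=0.7, C=0.1)
    ```

    The first term is the achievement obtainable from the matched linear models alone; setting C = 0 gives the achievement of a "bootstrapped" judge replaced by his or her own linear model, which is the comparison the authors analyze.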

  19. Linear analysis of the Richtmyer-Meshkov instability in shock-flame interactions

    NASA Astrophysics Data System (ADS)

    Massa, L.; Jha, P.

    2012-05-01

    Shock-flame interactions enhance supersonic mixing and detonation formation. Therefore, their analysis is important to explosion safety, internal combustion engine performance, and supersonic combustor design. The fundamental process at the basis of the interaction is the Richtmyer-Meshkov instability supported by the density difference between burnt and fresh mixtures. In the present study we analyze the effect of reactivity on the Richtmyer-Meshkov instability with particular emphasis on combustion lengths that typify the scaling between perturbation growth and induction. The results of the present linear analysis study show that reactivity changes the perturbation growth rate by developing a pressure gradient at the flame surface. The baroclinic torque based on the density gradient across the flame acts to slow down the instability growth of high wave-number perturbations. A gasdynamic flame representation leads to the definition of a Peclet number representing the scaling between perturbation and thermal diffusion lengths within the flame. Peclet number effects on perturbation growth are observed to be marginal. The gasdynamic model also considers a finite flame Mach number that supports a separation between flame and contact discontinuity. Such a separation destabilizes the interface growth by augmenting the tangential shear.

  20. On summary measure analysis of linear trend repeated measures data: performance comparison with two competing methods.

    PubMed

    Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh

    2012-03-22

    The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare performances of SMA, linear mixed model (LMM), and unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of SMA vs. LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was not often robust and led to non-sensible results when the covariance structure for errors was misspecified. The results emphasized discarding the UMA which often yielded extremely conservative inferences as to such data. It was shown that summary measure is a simple, safe and powerful approach in which the loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice to reliably analyze the linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
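
    The summary-measure idea for linear-trend data is to collapse each subject's repeated measurements into one number, here the least-squares slope over time, and then compare those slopes between groups with an ordinary two-sample test. A minimal sketch with fabricated data (two subjects per group, purely for illustration):

    ```python
    def ols_slope(times, values):
        """Least-squares slope of a subject's values regressed on time."""
        n = len(times)
        mt, mv = sum(times) / n, sum(values) / n
        num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
        den = sum((t - mt) ** 2 for t in times)
        return num / den

    times = [0, 1, 2, 3]
    # Fabricated repeated measures: two subjects per group.
    group_a = [[1.0, 2.0, 3.0, 4.0], [0.5, 1.6, 2.4, 3.5]]
    group_b = [[1.0, 1.1, 1.2, 1.3], [2.0, 2.1, 2.3, 2.4]]

    slopes_a = [ols_slope(times, y) for y in group_a]
    slopes_b = [ols_slope(times, y) for y in group_b]
    # The group-by-time interaction is then tested by comparing the mean
    # slopes with a two-sample t-test; the time effect uses the overall
    # mean slope, and the group effect uses each subject's mean response.
    mean_a = sum(slopes_a) / len(slopes_a)
    mean_b = sum(slopes_b) / len(slopes_b)
    ```

    This is the "least squares regression slope" summary measure from the abstract; the companion summary measure, the mean response over time per subject, is computed the same way with `sum(values) / len(values)`.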

  1. Text mining factor analysis (TFA) in green tea patent data

    NASA Astrophysics Data System (ADS)

    Rahmawati, Sela; Suprijadi, Jadi; Zulhanif

    2017-03-01

    Factor analysis has become one of the most widely used multivariate statistical procedures in applied research across a multitude of domains. There are two main types of analysis based on factor analysis: Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). Both EFA and CFA aim to model the relationships among a group of observed indicators and a latent variable, but they differ fundamentally in the a priori restrictions placed on the factor model. This method is applied to patent data from the green tea technology sector to characterize the development of green tea technology worldwide. Patent analysis is useful for identifying future technological trends in a specific field of technology. The patent database was obtained from the European Patent Office (EPO). In this paper, the CFA model is applied to nominal data obtained from a presence-absence matrix; the CFA for nominal data is based on the tetrachoric correlation matrix. Meanwhile, the EFA model is applied to titles from the dominant technology sector, which are first pre-processed using text mining.
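
    As a toy illustration of the pre-processing step, the sketch below builds a presence-absence (binary) term-document matrix from a few hypothetical patent titles; the real study would tokenize EPO records and compute tetrachoric correlations of such a matrix as input to the CFA:

    ```python
    # Toy presence-absence matrix from hypothetical patent titles (not EPO data).
    titles = [
        "green tea extraction process",
        "tea polyphenol beverage",
        "green tea polyphenol extraction",
    ]

    # Simple whitespace tokenization; a real text-mining pipeline would also
    # lowercase, stem, and remove stop words.
    vocab = sorted({word for title in titles for word in title.split()})
    matrix = [[1 if word in title.split() else 0 for word in vocab]
              for title in titles]
    ```

    Each row is a patent, each column a term, and each cell records presence (1) or absence (0), which is exactly the nominal data structure the abstract feeds into the tetrachoric-matrix CFA.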

  2. Mixture Factor Analysis for Approximating a Nonnormally Distributed Continuous Latent Factor with Continuous and Dichotomous Observed Variables

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo

    2012-01-01

    Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…

  3. Linear Spectral Analysis of Plume Emissions Using an Optical Matrix Processor

    NASA Technical Reports Server (NTRS)

    Gary, C. K.

    1992-01-01

    Plume spectrometry provides a means to monitor the health of a burning rocket engine, and optical matrix processors provide a means to analyze the plume spectra in real time. By observing the spectrum of the exhaust plume of a rocket engine, researchers have detected anomalous behavior of the engine and have even determined the failure of some equipment before it would normally have been noticed. The spectrum of the plume is analyzed by isolating information in the spectrum about the various materials present to estimate what materials are being burned in the engine. Scientists at the Marshall Space Flight Center (MSFC) have implemented a high resolution spectrometer to discriminate the spectral peaks of the many species present in the plume. Researchers at the Stennis Space Center Demonstration Testbed Facility (DTF) have implemented a high resolution spectrometer observing a 1200-lb. thrust engine. At this facility, known concentrations of contaminants can be introduced into the burn, allowing for the confirmation of diagnostic algorithms. While the high resolution of the measured spectra has allowed greatly increased insight into the functioning of the engine, the large data flows generated limit the ability to perform real-time processing. The use of an optical matrix processor and the linear analysis technique described below may allow for the detailed real-time analysis of the engine's health. A small optical matrix processor can perform the required mathematical analysis both quicker and with less energy than a large electronic computer dedicated to the same spectral analysis routine.

  4. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis, that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
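
    The re-expression step can be sketched numerically: discretized functional covariates are reduced to their leading principal component scores, and the null of no association is tested with the standard F statistic of the resulting finite linear model. The data below are fabricated, and plain SVD on a dense grid stands in for a full FPCA implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Fabricated functional covariates: 40 curves on a dense grid over [0, 1].
    t = np.linspace(0.0, 1.0, 101)
    n = 40
    curves = (rng.standard_normal((n, 1)) * np.sin(np.pi * t)
              + rng.standard_normal((n, 1)) * np.cos(np.pi * t))
    beta = np.sin(np.pi * t)                     # true coefficient function
    y = curves @ beta / len(t) + 0.1 * rng.standard_normal(n)

    # FPCA via SVD of the centered curves; keep the first K scores.
    centered = curves - curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    K = 2
    scores = U[:, :K] * s[:K]

    # Re-expressed standard linear model y ~ scores, and the F statistic
    # for the null hypothesis that all score coefficients are zero.
    X = np.column_stack([np.ones(n), scores])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ coef) ** 2)
    rss0 = np.sum((y - y.mean()) ** 2)
    F = ((rss0 - rss1) / K) / (rss1 / (n - K - 1))
    ```

    Under the null, F follows an F(K, n − K − 1) distribution; the paper's contribution is the theory for letting the number of retained components diverge and the corresponding sample size calculations.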

  5. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis, that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155

  6. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    PubMed

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
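
    Density matrix purification can be illustrated with the classic McWeeny iteration, P ← 3P² − 2P³, which drives a trial density matrix toward idempotency; combined with zeroing of small matrix elements, this is the kind of thresholded sparse iteration that yields linear scaling. The sketch below is a generic textbook version, not the paper's specific scheme:

    ```python
    import numpy as np

    def mcweeny_purify(P, steps=30, threshold=1e-8):
        """Generic McWeeny purification P <- 3 P^2 - 2 P^3, driving a trial
        density matrix toward idempotency (P @ P == P).  Elements below
        `threshold` are zeroed, mimicking the numerical truncation that
        keeps the matrices sparse (the paper's actual scheme may differ)."""
        for _ in range(steps):
            P2 = P @ P
            P = 3 * P2 - 2 * P2 @ P
            P[np.abs(P) < threshold] = 0.0
        return P

    # Trial occupations between 0 and 1 converge to 1 (occupied) or 0 (empty).
    P0 = np.diag([0.9, 0.8, 0.2, 0.1])
    P = mcweeny_purify(P0)
    ```

    With sparse matrix algebra each iteration costs O(N) for matrices whose number of nonzeros per row is bounded, which is the source of the linear scaling claimed in the abstract.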

  7. Common factor analysis versus principal component analysis: choice for symptom cluster research.

    PubMed

    Kim, Hee-Ju

    2008-03-01

    The purpose of this paper is to examine differences between two factor analytical methods and their relevance for symptom cluster research: common factor analysis (CFA) versus principal component analysis (PCA). Literature was critically reviewed to elucidate the differences between CFA and PCA. A secondary analysis (N = 84) was utilized to show the actual result differences from the two methods. CFA analyzes only the reliable common variance of data, while PCA analyzes all the variance of data. An underlying hypothetical process or construct is involved in CFA but not in PCA. PCA tends to increase factor loadings especially in a study with a small number of variables and/or low estimated communality. Thus, PCA is not appropriate for examining the structure of data. If the study purpose is to explain correlations among variables and to examine the structure of the data (this is usual for most cases in symptom cluster research), CFA provides a more accurate result. If the purpose of a study is to summarize data with a smaller number of variables, PCA is the choice. PCA can also be used as an initial step in CFA because it provides information regarding the maximum number and nature of factors. In using factor analysis for symptom cluster research, several issues need to be considered, including subjectivity of solution, sample size, symptom selection, and level of measure.
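
    The loading inflation mentioned above can be seen numerically. The sketch below contrasts PCA, which eigendecomposes the full correlation matrix (diagonal of ones, all variance), with one common-factor method, principal-axis factoring, which replaces the diagonal with squared multiple correlations so that only common variance is analyzed. The correlation matrix is illustrative:

    ```python
    import numpy as np

    # Illustrative 3x3 correlation matrix (equal correlations of 0.5).
    R = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])

    def first_loading(M):
        """First-factor loadings: sqrt(largest eigenvalue) times its eigenvector."""
        vals, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
        return np.sqrt(vals[-1]) * np.abs(vecs[:, -1])

    # PCA analyzes all the variance: the diagonal of R stays at 1.
    pca_loadings = first_loading(R)

    # Principal-axis factoring (a common factor method) analyzes only the
    # common variance: diagonal replaced by squared multiple correlations.
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)
    paf_loadings = first_loading(R_reduced)
    ```

    For this matrix the PCA loadings (≈0.82) exceed the common-factor loadings (≈0.67), matching the abstract's point that PCA inflates loadings, especially with few variables and low communalities.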

  8. QALMA: A computational toolkit for the analysis of quality protocols for medical linear accelerators in radiation therapy

    NASA Astrophysics Data System (ADS)

    Rahman, Md Mushfiqur; Lei, Yu; Kalantzis, Georgios

    2018-01-01

    Quality assurance (QA) for medical linear accelerators (linacs) is one of the primary concerns in external beam radiation therapy. Continued advancements in clinical accelerators and computer control technology make QA procedures more complex and time consuming, often requiring adequate software accompanied by specific phantoms. To ameliorate that matter, we introduce QALMA (Quality Assurance for Linac with MATLAB), a MATLAB toolkit which aims to simplify the quantitative analysis of linac QA, including Star-Shot analysis, the Picket Fence test, the Winston-Lutz test, multileaf collimator (MLC) log file analysis, and verification of the light and radiation field coincidence test.

  9. Automatic Assessment and Reduction of Noise using Edge Pattern Analysis in Non-Linear Image Enhancement

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.

  10. Linearized Unsteady Aerodynamic Analysis of the Acoustic Response to Wake/Blade-Row Interaction

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Huff, Dennis L. (Technical Monitor)

    2001-01-01

    The three-dimensional, linearized Euler analysis, LINFLUX, is being developed to provide a comprehensive and efficient unsteady aerodynamic scheme for predicting the aeroacoustic and aeroelastic responses of axial-flow turbomachinery blading. LINFLUX couples a near-field, implicit, wave-split, finite-volume solution to far-field acoustic eigensolutions, to predict the aerodynamic responses of a blade row to prescribed structural and aerodynamic excitations. It is applied herein to predict the acoustic responses of a fan exit guide vane (FEGV) to rotor wake excitations. The intent is to demonstrate and assess the LINFLUX analysis via application to realistic wake/blade-row interactions. Numerical results are given for the unsteady pressure responses of the FEGV, including the modal pressure responses at inlet and exit. In addition, predictions for the modal and total acoustic power levels at the FEGV exit are compared with measurements. The present results indicate that the LINFLUX analysis should be useful in the aeroacoustic design process, and for understanding the three-dimensional flow physics relevant to blade-row noise generation and propagation.

  11. Neck-focused panic attacks among Cambodian refugees; a logistic and linear regression analysis.

    PubMed

    Hinton, Devon E; Chhean, Dara; Pich, Vuth; Um, Khin; Fama, Jeanne M; Pollack, Mark H

    2006-01-01

    Consecutive Cambodian refugees attending a psychiatric clinic were assessed for the presence and severity of current--i.e., at least one episode in the last month--neck-focused panic. Among the whole sample (N=130), in a logistic regression analysis, the Anxiety Sensitivity Index (ASI; odds ratio=3.70) and the Clinician-Administered PTSD Scale (CAPS; odds ratio=2.61) significantly predicted the presence of current neck panic (NP). Among the neck panic patients (N=60), in the linear regression analysis, NP severity was significantly predicted by NP-associated flashbacks (beta=.42), NP-associated catastrophic cognitions (beta=.22), and CAPS score (beta=.28). Further analysis revealed the effect of the CAPS score to be significantly mediated (Sobel test [Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182]) by both NP-associated flashbacks and catastrophic cognitions. In the care of traumatized Cambodian refugees, NP severity, as well as NP-associated flashbacks and catastrophic cognitions, should be specifically assessed and treated.

  12. Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2001-01-01

    This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.

  13. Indirect synthesis of multi-degree of freedom transient systems. [linear programming for a kinematically linear system

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Chen, Y. H.

    1974-01-01

    An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.

  14. Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
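
    For an equispaced linear array, the exact array correlation matrix is Toeplitz: every element on a given diagonal corresponds to the same sensor spacing. The structured estimate is obtained by averaging the noisy sample matrix along its diagonals. A sketch with a hypothetical noisy estimate (real ACMs are complex-valued and Hermitian; a real symmetric case is shown for simplicity):

    ```python
    import numpy as np

    def structured_acm(R):
        """Average a sample array correlation matrix along its diagonals to
        impose the Toeplitz structure of an equispaced linear array."""
        n = R.shape[0]
        S = np.zeros_like(R)
        for k in range(-(n - 1), n):
            avg = np.mean(np.diagonal(R, offset=k))
            S += np.diag(np.full(n - abs(k), avg), k=k)
        return S

    rng = np.random.default_rng(0)
    true_acm = np.array([[2.0, 0.5, 0.1],
                         [0.5, 2.0, 0.5],
                         [0.1, 0.5, 2.0]])          # Toeplitz ground truth
    noisy = true_acm + 0.05 * rng.standard_normal((3, 3))
    smoothed = structured_acm(noisy)
    ```

    The smoothed matrix is exactly Toeplitz by construction; using it in place of the raw sample matrix is what makes the structured gradient estimate less sensitive to the look-direction signal than the standard LMS gradient.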

  15. Factors influencing crime rates: an econometric analysis approach

    NASA Astrophysics Data System (ADS)

    Bothos, John M. A.; Thomopoulos, Stelios C. A.

    2016-05-01

    The scope of the present study is to research the dynamics that determine the commission of crimes in US society. Our study is part of a model we are developing to understand urban crime dynamics and to enhance citizens' "perception of security" in large urban environments. The main targets of our research are to highlight the dependence of crime rates on certain social and economic factors and on basic elements of state anticrime policies. In conducting our research, we use as guides previous studies that performed similar quantitative analyses of the dependence of crime on social and economic factors using statistics and econometric modelling. Our first approach consists of conceptual state-space dynamic cross-sectional econometric models that incorporate a feedback loop describing crime as a feedback process. To define the model variables dynamically, we apply statistical analysis to crime records and to records of social and economic conditions and policing characteristics (such as police force size and policing results, e.g. crime arrests), in order to determine their influence as independent variables on crime, the dependent variable of our model. The econometric models we apply in this first approach are an exponential log-linear model and a logit model. In a second approach, we study the evolution of violent crime through time in the US, independently as an autonomous social phenomenon, using autoregressive and moving-average time-series econometric models. Our findings show that there are certain social and economic characteristics that affect the formation of crime rates in the US, either positively or negatively. Furthermore, the results of our time-series econometric modelling show that violent crime, viewed solely as a social phenomenon, correlates with previous years' crime rates and depends on the social and economic conditions of previous years.
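    The autoregressive idea behind the second approach can be illustrated with a minimal sketch. The data and coefficients below are synthetic (this is not the study's dataset): an AR(1) model is fitted by ordinary least squares, regressing this period's value on the previous period's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series: y_t = c + phi * y_{t-1} + noise.
T = 200
phi_true, c_true = 0.8, 2.0
y = np.zeros(T)
for t in range(1, T):
    y[t] = c_true + phi_true * y[t - 1] + rng.standard_normal()

# AR(1) by ordinary least squares: regress y_t on [1, y_{t-1}].
X = np.column_stack([np.ones(T - 1), y[:-1]])
c_hat, phi_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
# phi_hat near 0.8 indicates strong dependence on the previous period,
# the kind of persistence the abstract reports for violent crime.
```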

  16. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  17. Fast linear feature detection using multiple directional non-maximum suppression.

    PubMed

    Sun, C; Vallotton, P

    2009-05-01

    The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.
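    The core suppression step can be sketched in a few lines. This is a simplified illustration, not the paper's algorithm: it applies non-maximum suppression along a single direction to a synthetic image containing one horizontal line; the full method repeats this over multiple orientations and adds symmetry checking and gap linking.

```python
import numpy as np

def nms_1d(profile):
    # Keep only strict local maxima of a 1-D intensity profile.
    keep = np.zeros_like(profile, dtype=bool)
    keep[1:-1] = (profile[1:-1] > profile[:-2]) & (profile[1:-1] >= profile[2:])
    return keep

# A bright horizontal line in a dim image: suppress along the vertical
# direction (perpendicular to the line) so only the ridge crest survives.
img = 0.05 * np.ones((9, 9))
img[4, :] = 1.0   # the linear feature
crest = np.array([nms_1d(img[:, j]) for j in range(9)]).T
# Only row 4 (the crest of the line) is retained in every column.
```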

  18. Derived Basic Ability Factors: A Factor Analysis Replication Study.

    ERIC Educational Resources Information Center

    Lee, Mickey M.; Lee, Lynda Newby

    The purpose of this study was to replicate the study conducted by Potter, Sagraves, and McDonald to determine whether their recommended analysis could separate criterion variables into similar factors that were stable from year to year and from school to school. The replication samples consisted of all students attending Louisiana State University…

  19. Logistic regression analysis of risk factors for postoperative recurrence of spinal tumors and analysis of prognostic factors.

    PubMed

    Zhang, Shanyong; Yang, Lili; Peng, Chuangang; Wu, Minfei

    2018-02-01

    The aim of the present study was to investigate the risk factors for postoperative recurrence of spinal tumors by logistic regression analysis, together with an analysis of prognostic factors. In total, 77 male and 48 female patients with spinal tumor were selected in our hospital from January, 2010 to December, 2015 and divided into the benign (n=76) and malignant (n=49) groups. All the patients underwent microsurgical resection of spinal tumors and were reviewed regularly 3 months after operation. The McCormick grading system was used to evaluate postoperative spinal cord function. Data were subjected to statistical analysis. Of the 125 cases, 63 showed improvement after operation, 50 were stable, and deterioration was found in 12. The improvement rate of patients with cervical spine tumor, at 56.3%, was the highest. Fifty-two cases of sensory disturbance, 34 cases of pain, 30 cases of inability to exercise, 26 cases of ataxia, and 12 cases of sphincter disorders were found after operation. Seventy-two cases (57.6%) underwent total resection, 18 (14.4%) received subtotal resection, 23 (18.4%) received partial resection, and 12 (9.6%) were treated only with biopsy/decompression. Postoperative recurrence was found in 57 cases (45.6%). The mean recurrence time was 27.49±6.09 months in the malignant group and 40.62±4.34 months in the benign group, a significant difference (P<0.001). Recurrence was found in 18 cases of the benign group and 39 cases of the malignant group, also a significant difference (P<0.001). Time to recurrence was shorter in patients with a higher McCormick grade (P<0.001). Recurrence was found in 13 patients with resection and in all patients with partial resection or biopsy/decompression, a significant difference (P<0.001). Logistic regression analysis of total resection-related factors showed that total resection…
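    The logistic regression machinery used in this kind of risk-factor analysis can be sketched on synthetic data. Everything below is assumed for illustration (the predictor names merely echo the paper's factors; the coefficients and sample are invented): a binary recurrence outcome is modeled via Newton-Raphson maximum likelihood, and fitted coefficients are read as odds ratios.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cohort: two binary predictors and a recurrence outcome.
n = 400
tumor_malignant = rng.integers(0, 2, n)        # 1 = malignant
total_resection = rng.integers(0, 2, n)        # 1 = total resection achieved
logit = -1.0 + 2.0 * tumor_malignant - 1.5 * total_resection
recurrence = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), tumor_malignant, total_resection])
beta = np.zeros(3)
for _ in range(25):                            # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (recurrence - p)              # score
    H = X.T @ (X * W[:, None])                 # observed information
    beta += np.linalg.solve(H, grad)

odds_ratio_malignant = np.exp(beta[1])   # >1: malignancy raises recurrence odds
odds_ratio_resection = np.exp(beta[2])   # <1: total resection is protective
```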

  20. Logistic regression analysis of risk factors for postoperative recurrence of spinal tumors and analysis of prognostic factors

    PubMed Central

    Zhang, Shanyong; Yang, Lili; Peng, Chuangang; Wu, Minfei

    2018-01-01

    The aim of the present study was to investigate the risk factors for postoperative recurrence of spinal tumors by logistic regression analysis, together with an analysis of prognostic factors. In total, 77 male and 48 female patients with spinal tumor were selected in our hospital from January, 2010 to December, 2015 and divided into the benign (n=76) and malignant (n=49) groups. All the patients underwent microsurgical resection of spinal tumors and were reviewed regularly 3 months after operation. The McCormick grading system was used to evaluate postoperative spinal cord function. Data were subjected to statistical analysis. Of the 125 cases, 63 showed improvement after operation, 50 were stable, and deterioration was found in 12. The improvement rate of patients with cervical spine tumor, at 56.3%, was the highest. Fifty-two cases of sensory disturbance, 34 cases of pain, 30 cases of inability to exercise, 26 cases of ataxia, and 12 cases of sphincter disorders were found after operation. Seventy-two cases (57.6%) underwent total resection, 18 (14.4%) received subtotal resection, 23 (18.4%) received partial resection, and 12 (9.6%) were treated only with biopsy/decompression. Postoperative recurrence was found in 57 cases (45.6%). The mean recurrence time was 27.49±6.09 months in the malignant group and 40.62±4.34 months in the benign group, a significant difference (P<0.001). Recurrence was found in 18 cases of the benign group and 39 cases of the malignant group, also a significant difference (P<0.001). Time to recurrence was shorter in patients with a higher McCormick grade (P<0.001). Recurrence was found in 13 patients with resection and in all patients with partial resection or biopsy/decompression, a significant difference (P<0.001). Logistic regression analysis of total resection-related factors showed that total resection…

  1. Linear and nonlinear analysis of kinetic Alfven waves in quantum magneto-plasmas with arbitrary temperature degeneracy

    NASA Astrophysics Data System (ADS)

    Sadiq, Nauman; Ahmad, Mushtaq; Farooq, M.; Jan, Qasim

    2018-06-01

    Linear and nonlinear kinetic Alfven waves (KAWs) are studied in collisionless, non-relativistic two-fluid quantum magneto-plasmas by considering arbitrary temperature degeneracy. A general coupling parameter is applied to discuss the range of validity of the proposed model in the nearly degenerate and nearly non-degenerate plasma limits. Linear analysis of KAWs shows an increase (decrease) in frequency with increasing parameter ζ (δ) in the nearly non-degenerate (nearly degenerate) plasma limit. The energy integral equation in the form of a Sagdeev potential is obtained using the Lorentz transformation approach. The analysis reveals that the amplitude of the Sagdeev potential curves and soliton structures remains the same, but the potential depth and the width of the soliton structure change in both limiting cases. It is further observed that only density hump structures are formed in the sub-Alfvénic region for K_z^2 > 1. The effects of the parameters ζ and δ on the nonlinear properties of KAWs are shown in graphical plots. New results for comparison with earlier work have also been highlighted. The significance of this work to astrophysical plasmas is also emphasized.

  2. Linearized T-Matrix and Mie Scattering Computations

    NASA Technical Reports Server (NTRS)

    Spurr, R.; Wang, J.; Zeng, J.; Mishchenko, M. I.

    2011-01-01

    We present a new linearization of T-Matrix and Mie computations for light scattering by non-spherical and spherical particles, respectively. In addition to the usual extinction and scattering cross-sections and the scattering matrix outputs, the linearized models will generate analytical derivatives of these optical properties with respect to the real and imaginary parts of the particle refractive index, and (for non-spherical scatterers) with respect to the ''shape'' parameter (the spheroid aspect ratio, cylinder diameter/height ratio, Chebyshev particle deformation factor). These derivatives are based on the essential linearity of Maxwell's theory. Analytical derivatives are also available for polydisperse particle size distribution parameters such as the mode radius. The T-matrix formulation is based on the NASA Goddard Institute for Space Studies FORTRAN 77 code developed in the 1990s. The linearized scattering codes presented here are in FORTRAN 90 and will be made publicly available.

  3. Focal activation of primary visual cortex following supra-choroidal electrical stimulation of the retina: Intrinsic signal imaging and linear model analysis.

    PubMed

    Cloherty, Shaun L; Hietanen, Markus A; Suaning, Gregg J; Ibbotson, Michael R

    2010-01-01

    We performed optical intrinsic signal imaging of cat primary visual cortex (Areas 17 and 18) while delivering bipolar electrical stimulation to the retina by way of a supra-choroidal electrode array. Using a general linear model (GLM) analysis we identified statistically significant (p < 0.01) activation in a localized region of cortex following supra-threshold electrical stimulation at a single retinal locus. These results (1) demonstrate that intrinsic signal imaging combined with linear model analysis provides a powerful tool for assessing cortical responses to prosthetic stimulation, and (2) confirm that supra-choroidal electrical stimulation can achieve localized activation of the cortex consistent with focal activation of the retina.

  4. On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai

    2007-01-01

    In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…

  5. A Brief History of the Philosophical Foundations of Exploratory Factor Analysis.

    ERIC Educational Resources Information Center

    Mulaik, Stanley A.

    1987-01-01

    Exploratory factor analysis derives its key ideas from many sources, including Aristotle, Francis Bacon, Descartes, Pearson and Yule, and Kant. The conclusions of exploratory factor analysis are never complete without subsequent confirmatory factor analysis. (Author/GDC)

  6. Biostatistics Series Module 6: Correlation and Linear Regression.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r² denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of a linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous.
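    The quantities defined above can be worked through on a small numeric example (the data are illustrative): Pearson's r, the least-squares line y = a + bx, and the coefficient of determination r².

```python
import numpy as np

# Illustrative paired data with a strong linear relationship.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

r = np.corrcoef(x, y)[0, 1]                          # Pearson's r
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # least-squares slope
a = y.mean() - b * x.mean()                          # intercept
r_squared = r ** 2        # share of Var(y) explained by the linear relation
```

    With these data, r is close to 1 and the fitted slope is close to 2, so nearly all of the variability in y is attributed to its linear relation with x.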

  7. Biostatistics Series Module 6: Correlation and Linear Regression

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r² denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of a linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous. PMID:27904175

  8. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    PubMed

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN correlations: r = 0.89, RMSE: 1.07-1.08 METs; linear model correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN correlation: r = 0.88, RMSE: 1.12 METs; linear model correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear model correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction…
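    The comparison logic of the study can be sketched with stand-in data. The sketch is hypothetical: a simple quadratic fit stands in for the ANN, and the "accelerometer" and "EE" values are synthetic, but it shows the same pattern the paper reports, namely that when the underlying relationship is nonlinear, a nonlinear model achieves lower RMSE than a linear fit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic nonlinear relationship between a stand-in accelerometer
# signal and energy expenditure (EE), plus measurement noise.
accel = rng.uniform(0, 2, 150)
ee = 1.0 + 0.5 * accel + 1.2 * accel ** 2 + 0.1 * rng.standard_normal(150)

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Linear model vs a nonlinear (quadratic) model, both by least squares.
X_lin = np.column_stack([np.ones_like(accel), accel])
X_quad = np.column_stack([np.ones_like(accel), accel, accel ** 2])
lin_pred = X_lin @ np.linalg.lstsq(X_lin, ee, rcond=None)[0]
quad_pred = X_quad @ np.linalg.lstsq(X_quad, ee, rcond=None)[0]

rmse_lin, rmse_quad = rmse(lin_pred, ee), rmse(quad_pred, ee)
# rmse_quad is close to the noise floor; rmse_lin carries lack-of-fit error.
```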

  9. The effect of changes in sea surface temperature on linear growth of Porites coral in Ambon Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corvianawatie, Corry, E-mail: corvianawatie@students.itb.ac.id; Putri, Mutiara R., E-mail: mutiara.putri@fitb.itb.ac.id; Cahyarini, Sri Y., E-mail: yuda@geotek.lipi.go.id

    Coral is one of the most important organisms in the coral reef ecosystem. Several factors affect coral growth, one of them being changes in sea surface temperature (SST). The purpose of this research is to understand the influence of SST variability on the annual linear growth of Porites coral taken from Ambon Bay. The annual coral linear growth was calculated and compared to the annual SST from the Extended Reconstructed Sea Surface Temperature version 3b (ERSST v3b) model. Coral growth was calculated using the Coral X-radiograph Density System (CoralXDS) software, with coral sample X-radiographs as input data. Chronology was developed by counting the coral's annual growth bands: a pair of high- and low-density bands observed in the X-radiograph represents one year of coral growth. The results of this study show that the Porites coral record extends from 2001 to 2009, with an average growth rate of 1.46 cm/year. Statistical analysis shows that the annual coral linear growth declined by 0.015 cm/year while the annual SST declined by 0.013°C/year. SST and the annual linear growth of Porites coral in Ambon Bay are insignificantly correlated, with r=0.304 (n=9, p>0.05). This indicates that annual SST variability does not significantly influence the linear growth of Porites coral from Ambon Bay. It is suggested that sedimentation load, salinity, pH or other environmental factors may affect annual linear coral growth.

  10. Large Spatial and Temporal Separations of Cause and Effect in Policy Making - Dealing with Non-linear Effects

    NASA Astrophysics Data System (ADS)

    McCaskill, John

    There can be large spatial and temporal separations of cause and effect in policy making. Determining the correct linkage between policy inputs and outcomes can be highly impractical in the complex environments faced by policy makers. In attempting to see and plan for probable outcomes, standard linear models often overlook, ignore, or are unable to predict catastrophic events that seem improbable only because of multiple feedback loops. There are several issues with the makeup and behaviors of complex systems that explain the difficulty many mathematical models (factor analysis/structural equation modeling) have in dealing with non-linear effects. This chapter highlights those problem issues and offers insights into the usefulness of agent-based modeling (ABM) in dealing with non-linear effects in complex policy-making environments.

  11. Hand function evaluation: a factor analysis study.

    PubMed

    Jarus, T; Poremba, R

    1993-05-01

    The purpose of this study was to investigate hand function evaluations. Factor analysis with varimax rotation was used to assess the fundamental characteristics of the items included in the Jebsen Hand Function Test and the Smith Hand Function Evaluation. The study sample consisted of 144 subjects without disabilities and 22 subjects with Colles fracture. Results suggest a four factor solution: Factor I--pinch movement; Factor II--grasp; Factor III--target accuracy; and Factor IV--activities of daily living. These categories differentiated the subjects without Colles fracture from the subjects with Colles fracture. A hand function evaluation consisting of these four factors would be useful. Such an evaluation that can be used for current clinical purposes is provided.
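    The varimax rotation used in the study can be sketched with the standard SVD-based iteration. The loading matrix below is invented for illustration: a "simple structure" (each item loading on one factor) is deliberately obscured by a rotation, and varimax is used to recover an interpretable solution.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Minimal varimax rotation of a factor-loading matrix via the
    classical SVD-based iteration (Kaiser's criterion)."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        # Gradient of the varimax criterion w.r.t. the rotation.
        G = loadings.T @ (L ** 3 - L @ np.diag(np.mean(L ** 2, axis=0)))
        u, s, vt = np.linalg.svd(G)
        R = u @ vt
        var_new = s.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ R

def vcrit(L):
    # Varimax criterion: total variance of squared loadings per factor.
    return float(np.sum(np.var(L ** 2, axis=0)))

# Hypothetical simple-structure loadings (4 items, 2 factors), obscured
# by a 30-degree rotation, then recovered by varimax.
simple = np.array([[0.9, 0.0], [0.8, 0.1], [0.0, 0.85], [0.1, 0.9]])
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
mixed = simple @ rot
recovered = varimax(mixed)
```

    Because varimax is an orthogonal rotation, the communalities (row sums of squared loadings) are unchanged; only the criterion value, and hence interpretability, improves.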

  12. Vestibular Coriolis effect differences modeled with three-dimensional linear-angular interactions.

    PubMed

    Holly, Jan E

    2004-01-01

    The vestibular Coriolis (or "cross-coupling") effect is traditionally explained by cross-coupled angular vectors, which, however, do not explain the differences in perceptual disturbance under different acceleration conditions. For example, during head roll tilt in a rotating chair, the magnitude of perceptual disturbance is affected by a number of factors, including acceleration or deceleration of the chair rotation or a zero-g environment. Therefore, it has been suggested that linear-angular interactions play a role. The present research investigated whether these perceptual differences and others involving linear Coriolis accelerations could be explained under one common framework: the laws of motion in three dimensions, which include all linear-angular interactions among all six components of motion (three angular and three linear). The results show that the three-dimensional laws of motion predict the differences in perceptual disturbance. No special properties of the vestibular system or nervous system are required. In addition, simulations were performed with angular, linear, and tilt time constants inserted into the model, giving the same predictions. Three-dimensional graphics were used to highlight the manner in which linear-angular interaction causes perceptual disturbance, and a crucial component is the Stretch Factor, which measures the "unexpected" linear component.

  13. Towards Stability Analysis of Jump Linear Systems with State-Dependent and Stochastic Switching

    NASA Technical Reports Server (NTRS)

    Tejada, Arturo; Gonzalez, Oscar R.; Gray, W. Steven

    2004-01-01

    This paper analyzes the stability of hierarchical jump linear systems where the supervisor is driven by a Markovian stochastic process and by the values of the supervised jump linear system's states. The stability framework for this class of systems is developed over infinite and finite time horizons. The framework is then used to derive sufficient stability conditions for a specific class of hybrid jump linear systems with performance supervision. New sufficient stochastic stability conditions for discrete-time jump linear systems are also presented.
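    The object of study can be illustrated with a minimal simulation, assuming a toy system rather than anything from the paper: a discrete-time jump linear system x_{k+1} = A_{r_k} x_k whose mode r_k follows a Markov chain. One mode is unstable on its own, yet frequent switching into a strongly stable mode keeps the trajectory decaying, which is the kind of behavior stochastic stability conditions characterize.

```python
import numpy as np

rng = np.random.default_rng(4)

A = [np.array([[1.1, 0.0], [0.0, 1.05]]),    # mildly unstable mode
     np.array([[0.3, 0.0], [0.0, 0.3]])]     # strongly stable mode
P = np.array([[0.5, 0.5],                    # Markov mode-transition
              [0.5, 0.5]])                   # probabilities

x = np.array([1.0, 1.0])
mode = 0
norms = []
for _ in range(300):
    x = A[mode] @ x                          # jump linear dynamics
    norms.append(np.linalg.norm(x))
    mode = rng.choice(2, p=P[mode])          # Markov switching

# The per-step geometric-mean growth factor is below 1, so the state
# norm decays despite visits to the unstable mode.
```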

  14. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has strong anti-noise and anti-interference ability, as it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to solve the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while that of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.

  15. Comparison of linear and non-linear method in estimating the sorption isotherm parameters for safranin onto activated carbon.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-08-31

    A comparative analysis of the linear least-squares method and the non-linear method for estimating isotherm parameters was made using experimental equilibrium data for safranin onto activated carbon at two solution temperatures, 305 and 313 K. Equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations fitted the experimental equilibrium data well. The results showed that the non-linear method may be the better way to obtain the isotherm parameters. The Redlich-Peterson isotherm reduces to the Langmuir isotherm when the Redlich-Peterson constant g is unity.
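    The linear-versus-nonlinear comparison can be sketched on synthetic Langmuir data (the true parameters, noise level, and concentration range below are assumed, not the paper's measurements). The linearized route fits the transformed line Ce/qe = Ce/qm + 1/(qm·b); the nonlinear route minimizes the sum of squared errors on the original equation, here by a coarse grid search standing in for an iterative optimizer such as Levenberg-Marquardt.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic Langmuir data: qe = qm * b * Ce / (1 + b * Ce), noisy qe.
qm_true, b_true = 50.0, 0.4
Ce = np.linspace(0.5, 20, 15)
qe = qm_true * b_true * Ce / (1 + b_true * Ce) * (1 + 0.03 * rng.standard_normal(15))

def langmuir(Ce, qm, b):
    return qm * b * Ce / (1 + b * Ce)

# (1) Linearized fit: Ce/qe is a straight line in Ce.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_lin, b_lin = 1 / slope, slope / intercept

# (2) Non-linear fit: minimize SSE on the untransformed equation.
qms = np.linspace(30, 70, 201)
bs = np.linspace(0.1, 1.0, 181)
sse = [((qe - langmuir(Ce, q, bb)) ** 2).sum() for q in qms for bb in bs]
i = int(np.argmin(sse))
qm_nl, b_nl = qms[i // 181], bs[i % 181]
```

    The nonlinear fit avoids the error-distorting transformation of the linearized form, which is the abstract's point about the non-linear method being the better way to obtain the parameters.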

  16. Performance analysis of a GPS equipment by general linear models approach

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena; Gonçalves, Fernando M.; Correia, Anacleto

    2017-06-01

    One of the major challenges in processing high-accuracy long baselines is the presence of un-modelled ionospheric and tropospheric delays. There are effective mitigation strategies for ionospheric biases, such as the ionosphere-free linear combination of the L1 and L2 carrier-phases, which can remove about 98% of the first-order ionospheric biases. With few exceptions, this was the solution found by LGO for the 11760 baselines processed in this research. Therefore, for successful results, an appropriate approach to the mitigation of biases due to tropospheric delays is vital. The main aim of the investigations presented in this work was to evaluate whether the rate of successfully produced baselines improves when an advanced tropospheric bias mitigation strategy is adopted instead of a simple one. In both cases LGO uses the simplified Hopfield model as the a priori tropospheric model, improved in the first case with a zenith tropospheric scale factor per station. Since the 1D and 2D components present different behaviors, each case is analyzed individually with each strategy.
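    The ionosphere-free combination mentioned above can be written out directly. The numbers below are illustrative (the range and delay values are assumed), but the frequencies and the combination itself are standard: because the first-order ionospheric delay scales as 1/f², the combination (f1²·L1 − f2²·L2)/(f1² − f2²) cancels it exactly.

```python
# GPS L1/L2 carrier frequencies, Hz.
f1, f2 = 1575.42e6, 1227.60e6

rho = 20_200_000.0                        # geometric range (m), illustrative
iono_L1 = 5.0                             # first-order iono delay on L1 (m)
L1 = rho + iono_L1
L2 = rho + iono_L1 * (f1 / f2) ** 2       # the delay scales with 1/f^2

# Ionosphere-free linear combination: the iono terms cancel, leaving rho.
L_if = (f1**2 * L1 - f2**2 * L2) / (f1**2 - f2**2)
```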

  17. Conical Pendulum--Linearization Analyses

    ERIC Educational Resources Information Center

    Dean, Kevin; Mathew, Jyothi

    2016-01-01

    A theoretical analysis is presented, showing the derivations of seven different linearization equations for the conical pendulum period "T", as a function of radial and angular parameters. Experimental data obtained over a large range of fixed conical pendulum lengths (0.435 m-2.130 m) are plotted with the theoretical lines and…
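    One linearization of the kind the paper derives can be demonstrated numerically. Assuming the standard conical pendulum period T = 2π√(L·cosθ/g) and a cone half-angle chosen for illustration, squaring gives T² = (4π²/g)·L·cosθ, so plotting T² against L·cosθ yields a straight line whose slope recovers g.

```python
import numpy as np

g = 9.81
L = np.linspace(0.435, 2.130, 8)          # the paper's range of lengths (m)
theta = np.deg2rad(25.0)                  # assumed cone half-angle

# Noise-free "measurements" from the conical pendulum period formula.
T = 2 * np.pi * np.sqrt(L * np.cos(theta) / g)

# Linearized form: T^2 is linear in L*cos(theta) with slope 4*pi^2/g.
slope, intercept = np.polyfit(L * np.cos(theta), T ** 2, 1)
g_est = 4 * np.pi ** 2 / slope            # recover g from the fitted slope
```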

  18. On structural identifiability analysis of the cascaded linear dynamic systems in isotopically non-stationary 13C labelling experiments.

    PubMed

    Lin, Weilu; Wang, Zejian; Huang, Mingzhi; Zhuang, Yingping; Zhang, Siliang

    2018-06-01

    The isotopically non-stationary 13C labelling experiments, as an emerging experimental technique, can estimate the intracellular fluxes of the cell culture under an isotopic transient period. However, to the best of our knowledge, the issue of the structural identifiability analysis of non-stationary isotope experiments is not well addressed in the literature. In this work, the local structural identifiability analysis for non-stationary cumomer balance equations is conducted based on the Taylor series approach. The numerical rank of the Jacobian matrices of the finite extended time derivatives of the measured fractions with respect to the free parameters is taken as the criterion. It turns out that only one single time point is necessary to achieve the structural identifiability analysis of the cascaded linear dynamic system of non-stationary isotope experiments. The equivalence between the local structural identifiability of the cascaded linear dynamic systems and the local optimum condition of the nonlinear least squares problem is elucidated in the work. Optimal measurement sets can then be determined for the metabolic network. Two simulated metabolic networks are adopted to demonstrate the utility of the proposed method.
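    The Taylor-series rank test described above can be sketched on a toy model, not the paper's network: for y(t) = p1·p2·exp(−p3·t), the parameters p1 and p2 enter only as a product, so the Jacobian of the Taylor coefficients of y with respect to (p1, p2, p3) is rank-deficient and the model is not locally structurally identifiable.

```python
import math
import numpy as np

p = np.array([1.5, 2.0, 0.7])   # assumed parameter values (p1, p2, p3)

def taylor_coeffs(p, n=4):
    # k-th Taylor coefficient of y(t) = p1*p2*exp(-p3*t) at t=0.
    p1, p2, p3 = p
    return np.array([p1 * p2 * (-p3) ** k / math.factorial(k) for k in range(n)])

# Jacobian of the Taylor coefficients w.r.t. the parameters, by forward
# differences (standing in for analytical sensitivities).
eps = 1e-6
base = taylor_coeffs(p)
J = np.column_stack([(taylor_coeffs(p + eps * np.eye(3)[i]) - base) / eps
                     for i in range(3)])

rank = np.linalg.matrix_rank(J, tol=1e-4)
# rank == 2 < 3: (p1, p2, p3) is not locally identifiable from y alone,
# while the product p1*p2 together with p3 would be.
```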

  19. Multivariate analysis of prognostic factors in synovial sarcoma.

    PubMed

    Koh, Kyoung Hwan; Cho, Eun Yoon; Kim, Dong Wook; Seo, Sung Wook

    2009-11-01

    Many studies have described the diversity of synovial sarcoma in terms of its biological characteristics and clinical features. Moreover, much effort has been expended on the identification of prognostic factors because of the unpredictable behavior of synovial sarcomas. However, with the exception of tumor size, published results have been inconsistent. We attempted to identify independent risk factors using survival analysis. Forty-one consecutive patients with synovial sarcoma were prospectively followed from January 1997 to March 2008. Overall and progression-free survival for age, sex, tumor size, tumor location, metastasis at presentation, histologic subtype, chemotherapy, radiation therapy, and resection margin were analyzed, and standard multivariate Cox proportional hazard regression analysis was used to evaluate potential prognostic factors. Tumor size (>5 cm), nonlimb-based tumors, metastasis at presentation, and a monophasic subtype were associated with poorer overall survival. Multivariate analysis showed that metastasis at presentation and monophasic tumor subtype affected overall survival. For progression-free survival, the monophasic subtype was the only prognostic factor. The study confirmed that histologic subtype is the single most important independent prognostic factor of synovial sarcoma regardless of tumor stage.

  20. Modified global and modified linear contrast stretching algorithms: new colour contrast enhancement techniques for microscopic analysis of malaria slide images.

    PubMed

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida

    2012-01-01

    Malaria is one of the most serious global health problems, causing widespread suffering and deaths in various parts of the world. With the large number of cases diagnosed every year, early detection and accurate diagnosis, which facilitates prompt treatment, is an essential requirement to control malaria. For centuries now, manual microscopic examination of blood slides has remained the gold standard for malaria diagnosis. However, the low contrast of the malaria parasites and variable smear quality are some factors that may influence the accuracy of interpretation by microbiologists. In order to reduce this problem, this paper investigates the performance of the proposed contrast enhancement techniques, namely modified global and modified linear contrast stretching, as well as the conventional global and linear contrast stretching, applied to malaria images of the P. vivax species. The results show that the proposed modified global and modified linear contrast stretching techniques successfully increase the contrast of the parasites and the infected red blood cells compared with the conventional global and linear contrast stretching. Hence, the resultant images would be useful to microbiologists for identification of the various stages and species of malaria.
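    The conventional linear contrast stretching that the modified techniques build on can be sketched in a few lines. This shows only the base method on a tiny grayscale patch; the paper's modified variants, which adjust the stretching range per colour component, are not reproduced here.

```python
import numpy as np

def linear_contrast_stretch(img, out_min=0, out_max=255):
    """Conventional global linear contrast stretching: map the image's
    intensity range [img.min(), img.max()] linearly onto
    [out_min, out_max]."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:                      # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=np.uint8)
    stretched = (img.astype(float) - lo) * (out_max - out_min) / (hi - lo) + out_min
    return stretched.round().astype(np.uint8)

# A low-contrast patch (values packed into 100..150) spreads to 0..255.
patch = np.array([[100, 120], [135, 150]], dtype=np.uint8)
enhanced = linear_contrast_stretch(patch)
```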

  1. Analysis of periodically excited non-linear systems by a parametric continuation technique

    NASA Astrophysics Data System (ADS)

    Padmanabhan, C.; Singh, R.

    1995-07-01

    The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems have been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained quite efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations, and with some modifications obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem for both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. Also, this work has filled in some of the solution regimes for an impact pair, which were missed previously in the literature. Finally, one main limitation associated with the…

  2. 1r2dinv: A finite-difference model for inverse analysis of two dimensional linear or radial groundwater flow

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Butler, J.J.

    2001-01-01

    We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
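    The Levenberg-Marquardt step at the heart of 1r2dinv (fit model parameters to observed heads) can be illustrated on a deliberately simplified stand-in: a 1-D steady linear flow model with one unknown conductivity, rather than the program's 2-D finite-difference model. The model, values, and noise below are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy 1-D steady linear flow: h(x) = h0 - (q / K) * x.
    # Estimate hydraulic conductivity K from noisy head observations
    # using a Levenberg-Marquardt least-squares fit, in the spirit of
    # the inversion step described for 1r2dinv.
    rng = np.random.default_rng(0)
    h0, q, K_true = 10.0, 2.0, 5.0
    x = np.linspace(0.0, 100.0, 20)
    heads_obs = h0 - (q / K_true) * x + rng.normal(0.0, 0.05, x.size)

    def residuals(p):
        (K,) = p
        return (h0 - (q / K) * x) - heads_obs

    fit = least_squares(residuals, x0=[1.0], method="lm")
    print(round(fit.x[0], 2))   # recovered K, close to the true value 5.0
    ```

    The real program estimates several zoned parameters simultaneously from multiple tests, but the fitting machinery is the same: minimize the misfit between simulated and observed heads.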

  3. Design and analysis of linear cascade DNA hybridization chain reactions using DNA hairpins

    NASA Astrophysics Data System (ADS)

    Bui, Hieu; Garg, Sudhanshu; Miao, Vincent; Song, Tianqi; Mokhtar, Reem; Reif, John

    2017-01-01

    DNA self-assembly has been employed non-conventionally to construct nanoscale structures and dynamic nanoscale machines. The technique of hybridization chain reactions by triggered self-assembly has been shown to form various interesting nanoscale structures ranging from simple linear DNA oligomers to dendritic DNA structures. Inspired by earlier triggered self-assembly works, we present a system for controlled self-assembly of linear cascade DNA hybridization chain reactions using nine distinct DNA hairpins. NUPACK is employed to assist in designing DNA sequences and Matlab has been used to simulate DNA hairpin interactions. Gel electrophoresis and ensemble fluorescence reaction kinetics data indicate strong evidence of linear cascade DNA hybridization chain reactions. The half-completion time of the proposed linear cascade reactions shows a linear dependence on the number of hairpins.

  4. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.
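    The mechanism the paper exploits — adding equality constraints that tie LP variables together to shrink the problem — can be shown on a generic toy LP (this is not the actual Lee-code LP; the constraint below just plays the role of an invariance-style tie between variables):

    ```python
    from scipy.optimize import linprog

    # Maximize x1 + 2*x2 + x3 subject to x1 + x2 + x3 <= 4, x >= 0,
    # plus an equality constraint x1 == x2 standing in for the
    # invariance-type ties the paper derives for Lee-compositions.
    c = [-1.0, -2.0, -1.0]                 # linprog minimizes, so negate
    A_ub = [[1.0, 1.0, 1.0]]
    b_ub = [4.0]
    A_eq = [[1.0, -1.0, 0.0]]              # x1 - x2 == 0
    b_eq = [0.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 3, method="highs")
    print(res.x)                           # optimal point [2, 2, 0]
    ```

    In the paper's setting, each such equality constraint removes redundant degrees of freedom from the LP, which is what makes the compact formulation fast for large code parameters.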

  5. Analyzing linear spatial features in ecology.

    PubMed

    Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W

    2018-06-01

    The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of mathematical methods of physics to problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyses of fiber processes in space. © 2018 by the Ecological Society of America.
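    The vector-sum approach mentioned in the abstract can be sketched directly: treat each mapped log as a unit vector along its fall azimuth, sum the vectors, and compare the resultant direction with the plot's downhill azimuth. The azimuths below are invented for illustration, not the study's data:

    ```python
    import numpy as np

    # Fall azimuths (degrees clockwise from north) of mapped logs,
    # and the downhill azimuth of the plot - illustrative values only.
    fall_az = np.radians([80, 95, 100, 110, 85, 90, 105])
    downhill_az = np.radians(95)

    # Vector sum of unit vectors (east, north components)
    east, north = np.sin(fall_az).sum(), np.cos(fall_az).sum()
    mean_az = np.arctan2(east, north)            # resultant (mean) fall azimuth

    # Dot-product alignment with the downhill direction: 1 = perfectly downhill
    alignment = np.cos(mean_az - downhill_az)
    print(round(np.degrees(mean_az), 1), round(alignment, 3))
    ```

    The resultant's length (relative to the number of logs) additionally measures how concentrated the fall directions are, which is the kind of statistic one would test against slope steepness.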

  6. Factors associated with postoperative C5 palsy after expansive open-door laminoplasty: retrospective cohort study using multivariable analysis.

    PubMed

    Tsuji, Takashi; Matsumoto, Morio; Nakamura, Masaya; Ishii, Ken; Fujita, Nobuyuki; Chiba, Kazuhiro; Watanabe, Kota

    2017-09-01

    The aim of the present study was to investigate the factors associated with C5 palsy by focusing on radiological parameters using multivariable analysis. The authors retrospectively assessed 190 patients with cervical spondylotic myelopathy treated by open-door laminoplasty. Four radiographic parameters-the number of expanded lamina, C3-C7 angle, lamina open angle and space anterior to the spinal cord-were evaluated to clarify the factors associated with C5 palsy. Of the 190 patients, 11 developed C5 palsy, giving an overall incidence of 5.8%. Although the number of expanded lamina, lamina open angle and space anterior to the spinal cord were significantly larger in the C5 palsy group than in the non-palsy group, a multiple logistic regression analysis revealed that only the space anterior to the spinal cord (odds ratio 2.60) was a significant independent factor associated with C5 palsy. A multiple linear regression analysis indicated that the lamina open angle was associated with the space anterior to the spinal cord, and the analysis identified the following equation: space anterior to the spinal cord (mm) = 1.54 + 0.09 × lamina open angle (degree). A cut-off value of 53.5° for the lamina open angle predicted the development of C5 palsy with a sensitivity of 72.7% and a specificity of 83.2%. The larger postoperative space anterior to the spinal cord, which was associated with the lamina open angle, was positively correlated with the higher incidence of C5 palsy.
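    The cutoff statistics reported above (sensitivity/specificity at a 53.5° lamina open angle) follow from a simple 2×2 classification count. A sketch with hypothetical angles and outcomes — not the study's patient data, so the resulting numbers differ from the paper's 72.7%/83.2%:

    ```python
    # Sensitivity and specificity of a threshold rule "predict C5 palsy
    # when lamina open angle > cutoff", on hypothetical example data.
    def cutoff_performance(angles, palsy, cutoff=53.5):
        tp = sum(a > cutoff and p for a, p in zip(angles, palsy))
        fn = sum(a <= cutoff and p for a, p in zip(angles, palsy))
        tn = sum(a <= cutoff and not p for a, p in zip(angles, palsy))
        fp = sum(a > cutoff and not p for a, p in zip(angles, palsy))
        return tp / (tp + fn), tn / (tn + fp)

    angles = [40, 45, 50, 55, 60, 65, 48, 58]                 # degrees
    palsy  = [False, False, False, True, True, True, False, False]
    sens, spec = cutoff_performance(angles, palsy)
    print(sens, spec)   # 1.0 0.8 on this toy data
    ```

    The paper's reported regression, space (mm) = 1.54 + 0.09 × angle, predicts about 6.4 mm of space anterior to the cord at the 53.5° cutoff.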

  7. Numerical linear algebra in data mining

    NASA Astrophysics Data System (ADS)

    Eldén, Lars

    Ideas and algorithms from numerical linear algebra are important in several areas of data mining. We give an overview of linear algebra methods in text mining (information retrieval), pattern recognition (classification of handwritten digits), and PageRank computations for web search engines. The emphasis is on rank reduction as a method of extracting information from a data matrix, low-rank approximation of matrices using the singular value decomposition and clustering, and on eigenvalue methods for network analysis.
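    The rank-reduction idea emphasized here — low-rank approximation via the truncated singular value decomposition — can be shown in a few lines. The matrix is a toy stand-in for a term-document or digit-image matrix, not data from the overview:

    ```python
    import numpy as np

    # Best rank-k approximation of a matrix via truncated SVD
    # (Eckart-Young theorem): keep the k largest singular triplets.
    A = np.array([[3.0,  1.0, 1.0],
                  [-1.0, 3.0, 1.0]])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 1
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # The Frobenius-norm error equals the norm of the discarded
    # singular values - here just s[1], since only one is dropped.
    err = np.linalg.norm(A - A_k)
    print(np.linalg.matrix_rank(A_k), round(err - s[1], 12))
    ```

    In text mining this truncation is the core of latent semantic indexing; in classification it compresses each class of digit images to a small basis.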

  8. Evaluation of linear discriminant analysis for automated Raman histological mapping of esophageal high-grade dysplasia

    NASA Astrophysics Data System (ADS)

    Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas

    2010-11-01

    Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model resulting in Raman images, demonstrating good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing the training datasets using mapping data, despite lengthy mapping times, due to the additional morphological information gained, and could facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future; however, larger pixel sizes (and faster mapping) may be more feasible for clinical application.
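    The PC-fed LDA pipeline described above — reduce each spectrum to principal-component scores, then find the discriminant direction in PC space — can be sketched in plain NumPy on synthetic two-class "spectra" (the real study used Raman spectra of esophageal tissue; everything below is simulated):

    ```python
    import numpy as np

    # Synthetic two-class "spectra": 50 channels, classes offset in mean
    rng = np.random.default_rng(1)
    class0 = rng.normal(0.0, 1.0, (100, 50))
    class1 = rng.normal(0.6, 1.0, (100, 50))
    X = np.vstack([class0, class1])
    y = np.array([0] * 100 + [1] * 100)

    # PCA step: project onto the top 10 principal components
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:10].T

    # Fisher LDA in PC space: w = Sw^-1 (m1 - m0), threshold at midpoint
    m0, m1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
    Sw = np.cov(scores[y == 0].T) + np.cov(scores[y == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = (m0 + m1) @ w / 2.0
    pred = (scores @ w > threshold).astype(int)
    print((pred == y).mean())        # training accuracy, high on this toy data
    ```

    As the abstract cautions, a model like this forces every test spectrum into one of the training groups, so validation on held-out spectra is essential.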

  9. Response of Non-Linear Shock Absorbers-Boundary Value Problem Analysis

    NASA Astrophysics Data System (ADS)

    Rahman, M. A.; Ahmed, U.; Uddin, M. S.

    2013-08-01

    A nonlinear boundary value problem of two degrees-of-freedom (DOF) untuned vibration damper systems using nonlinear springs and dampers has been numerically studied. As far as the untuned damper is concerned, sixteen different combinations of linear and nonlinear springs and dampers have been comprehensively analyzed taking into account transient terms. For different cases, a comparative study is made of response versus time for different spring and damper types at three important frequency ratios: one at r = 1, one at r > 1 and one at r < 1. The response of the system is changed because of the spring and damper nonlinearities; the change is different for different cases. Accordingly, an initially stable absorber may become unstable with time and vice versa. The analysis also shows that higher nonlinearity terms make the system more unstable. Numerical simulation includes transient vibrations. Although the problems are much more complicated compared to those for a tuned absorber, a comparison of the results generated by the present numerical scheme with the exact one shows quite reasonable agreement.

  10. A Double-Sided Linear Primary Permanent Magnet Vernier Machine

    PubMed Central

    2015-01-01

    The purpose of this paper is to present a new double-sided linear primary permanent magnet (PM) vernier (DSLPPMV) machine, which can offer high thrust force, low detent force, and improved power factor. Both PMs and windings of the proposed machine are on the short translator, while the long stator is designed as a double-sided simple iron core with salient teeth so that it is very robust to transmit high thrust force. The key of this new machine is the introduction of double stator and the elimination of translator yoke, so that the inductance and the volume of the machine can be reduced. Hence, the proposed machine offers improved power factor and thrust force density. The electromagnetic performances of the proposed machine are analyzed including flux, no-load EMF, thrust force density, and inductance. Based on using the finite element analysis, the characteristics and performances of the proposed machine are assessed. PMID:25874250

  11. A double-sided linear primary permanent magnet vernier machine.

    PubMed

    Du, Yi; Zou, Chunhua; Liu, Xianxing

    2015-01-01

    The purpose of this paper is to present a new double-sided linear primary permanent magnet (PM) vernier (DSLPPMV) machine, which can offer high thrust force, low detent force, and improved power factor. Both PMs and windings of the proposed machine are on the short translator, while the long stator is designed as a double-sided simple iron core with salient teeth so that it is very robust to transmit high thrust force. The key of this new machine is the introduction of double stator and the elimination of translator yoke, so that the inductance and the volume of the machine can be reduced. Hence, the proposed machine offers improved power factor and thrust force density. The electromagnetic performances of the proposed machine are analyzed including flux, no-load EMF, thrust force density, and inductance. Based on using the finite element analysis, the characteristics and performances of the proposed machine are assessed.

  12. Dual-range linearized transimpedance amplifier system

    DOEpatents

    Wessendorf, Kurt O.

    2010-11-02

    A transimpedance amplifier system is disclosed which simultaneously generates a low-gain output signal and a high-gain output signal from an input current signal using a single transimpedance amplifier having two different feedback loops with different amplification factors to generate two different output voltage signals. One of the feedback loops includes a resistor, and the other feedback loop includes another resistor in series with one or more diodes. The transimpedance amplifier system includes a signal linearizer to linearize one or both of the low- and high-gain output signals by scaling and adding the two output voltage signals from the transimpedance amplifier. The signal linearizer can be formed either as an analog device using one or two summing amplifiers, or alternatively as a digital device using two analog-to-digital converters and a digital signal processor (e.g. a microprocessor or a computer).
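    A numerical sketch of the digital-linearizer idea: the high-gain channel is compressed by the diode feedback, so the signal processor can invert an assumed compression law and fall back to the linear low-gain channel near saturation. The compression model, component values, and switching threshold below are illustrative assumptions, not taken from the patent:

    ```python
    import numpy as np

    # Dual-range recovery of an input current from two channels:
    # a linear low-gain channel and a diode-compressed high-gain channel.
    i_in = np.array([1e-6, 1e-5, 1e-4, 1e-3])      # input current, A
    R_low, R_high, c = 1e3, 1e5, 0.26              # c: toy compression knee, V

    v_low = i_in * R_low                                  # linear channel
    v_high = i_in * R_high / (1 + i_in * R_high / c)      # compressed channel

    # Linearizer: invert the assumed compression law for the high-gain
    # channel; near its knee, trust the low-gain channel instead.
    i_high = v_high / (R_high * (1 - v_high / c))
    i_rec = np.where(v_high < 0.9 * c, i_high, v_low / R_low)
    print(np.allclose(i_rec, i_in, rtol=1e-6))
    ```

    The patent's analog variant achieves the same scale-and-add combination with summing amplifiers instead of a DSP.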

  13. Development of a Low Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations, and compares the predictions with experimental data for one of the configurations that has been built and is currently being tested.

  14. Development of a Low-Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations. Additionally, one of the configurations was built and tested at GRC, and the experimental data is compared with the predictions.

  15. Factor Analysis for Clustered Observations.

    ERIC Educational Resources Information Center

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)

  16. Train repathing in emergencies based on fuzzy linear programming.

    PubMed

    Meng, Xuelei; Cui, Bingmou

    2014-01-01

    Train pathing is a typical problem in which train trips are assigned to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem, determining the paths of train trips in emergencies. We analyze the influencing factors of train pathing, such as transfer cost, running cost, and social adverse-effect cost. With overall consideration of the segment and station capacity constraints, we build a fuzzy linear programming model to solve the train pathing problem. We design fuzzy membership functions to describe the fuzzy coefficients. Furthermore, contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of those ranges. We propose a method based on triangular fuzzy coefficients and transform the train pathing problem (the fuzzy linear programming model) into a determinate linear model, which solves the fuzzy linear programming problem. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway. The model was solved, and the computational results demonstrate the validity of the model and the efficiency of the algorithm.
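    The core transformation described — triangular fuzzy cost coefficients, scaled by a contraction-expansion factor and reduced to crisp numbers so a determinate model can be solved — can be sketched as follows. The membership functions, factor values, and path costs below are illustrative, not the paper's model:

    ```python
    # Defuzzify a triangular fuzzy number (l, m, u) after scaling its
    # spread around the modal value m by a contraction-expansion factor k,
    # using the centroid of the resulting triangle as the crisp value.
    def defuzzify(tri, k=1.0):
        l, m, u = tri
        lk = m - k * (m - l)       # contracted/expanded lower bound
        uk = m + k * (u - m)       # contracted/expanded upper bound
        return (lk + m + uk) / 3.0

    # Fuzzy (transfer, running, social) costs for two candidate paths;
    # with crisp coefficients, choosing the cheaper path is determinate.
    path_a = [defuzzify(c, k=0.5) for c in [(2, 3, 5), (1, 2, 3), (4, 5, 7)]]
    path_b = [defuzzify(c, k=0.5) for c in [(3, 4, 6), (2, 3, 4), (2, 3, 4)]]
    total_a, total_b = sum(path_a), sum(path_b)
    best = min(("A", total_a), ("B", total_b), key=lambda t: t[1])
    print(best[0])
    ```

    In the full model these crisp coefficients enter a linear program over segment and station capacity constraints rather than a simple two-path comparison.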

  17. Can Linear Superiorization Be Useful for Linear Optimization Problems?

    PubMed Central

    Censor, Yair

    2017-01-01

    Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660

  18. The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.

    1997-01-01

    We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low order perturbations. This analysis provides the explanation for the stability problems associated with the split field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing independent of frequency and angle of incidence of the wave in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques are combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through computation of benchmark problems in aero-acoustics.

  19. Using factor analysis to identify neuromuscular synergies during treadmill walking

    NASA Technical Reports Server (NTRS)

    Merkle, L. A.; Layne, C. S.; Bloomberg, J. J.; Zhang, J. J.

    1998-01-01

    Neuroscientists are often interested in grouping variables to facilitate understanding of a particular phenomenon. Factor analysis is a powerful statistical technique that groups variables into conceptually meaningful clusters, but remains underutilized by neuroscience researchers presumably due to its complicated concepts and procedures. This paper illustrates an application of factor analysis to identify coordinated patterns of whole-body muscle activation during treadmill walking. Ten male subjects walked on a treadmill (6.4 km/h) for 20 s during which surface electromyographic (EMG) activity was obtained from the left side sternocleidomastoid, neck extensors, erector spinae, and right side biceps femoris, rectus femoris, tibialis anterior, and medial gastrocnemius. Factor analysis revealed 65% of the variance of seven muscles sampled aligned with two orthogonal factors, labeled 'transition control' and 'loading'. These two factors describe coordinated patterns of muscular activity across body segments that would not be evident by evaluating individual muscle patterns. The results show that factor analysis can be effectively used to explore relationships among muscle patterns across all body segments to increase understanding of the complex coordination necessary for smooth and efficient locomotion. We encourage neuroscientists to consider using factor analysis to identify coordinated patterns of neuromuscular activation that would be obscured using more traditional EMG analyses.
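    The analysis above — seven EMG channels reduced to two orthogonal factors explaining about 65% of the variance — can be mimicked on simulated data: generate channels from two latent activation patterns plus noise, then check how much variance the two leading components of the correlation matrix capture. The muscle setup echoes the abstract, but the data here are simulated, not the study's EMG:

    ```python
    import numpy as np

    # Simulate 7 "muscle" channels driven by 2 latent factors
    # (cf. the paper's "transition control" and "loading" factors).
    rng = np.random.default_rng(2)
    n = 200
    f1 = rng.normal(size=n)                    # latent factor 1
    f2 = rng.normal(size=n)                    # latent factor 2
    loadings = np.array([[0.8, 0.1], [0.7, 0.2], [0.9, 0.0], [0.1, 0.8],
                         [0.2, 0.7], [0.0, 0.9], [0.5, 0.5]])
    emg = np.column_stack([f1, f2]) @ loadings.T + rng.normal(0, 0.5, (n, 7))

    # Principal-factor extraction: eigenvalues of the correlation matrix
    R = np.corrcoef(emg, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    explained = eigvals[:2].sum() / eigvals.sum()
    print(round(explained, 2))                 # roughly 0.6-0.8 here
    ```

    A full factor analysis would additionally rotate the two retained factors (e.g., varimax) to make the loadings interpretable as coordinated muscle groupings.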

  20. Healthcare Expenditures Associated with Depression Among Individuals with Osteoarthritis: Post-Regression Linear Decomposition Approach.

    PubMed

    Agarwal, Parul; Sambamoorthi, Usha

    2015-12-01

    Depression is common among individuals with osteoarthritis and leads to increased healthcare burden. The objective of this study was to examine excess total healthcare expenditures associated with depression among individuals with osteoarthritis in the US. Adults with self-reported osteoarthritis (n = 1881) were identified using data from the 2010 Medical Expenditure Panel Survey (MEPS). Among those with osteoarthritis, chi-square tests and ordinary least squares (OLS) regressions were used to examine differences in healthcare expenditures between those with and without depression. A post-regression linear decomposition technique was used to estimate the relative contribution of different constructs of Andersen's behavioral model, i.e., predisposing, enabling, need, personal healthcare practices, and external environment factors, to the excess expenditures associated with depression among individuals with osteoarthritis. All analyses accounted for the complex survey design of MEPS. Depression coexisted among 20.6 % of adults with osteoarthritis. The average total healthcare expenditures were $13,684 among adults with depression compared to $9284 among those without depression. Multivariable OLS regression revealed that adults with depression had 38.8 % higher healthcare expenditures (p < 0.001) compared to those without depression. Post-regression linear decomposition analysis indicated that 50 % of the differences in expenditures between adults with and without depression can be explained by differences in need factors. Among individuals with coexisting osteoarthritis and depression, excess healthcare expenditures associated with depression were mainly due to comorbid anxiety, chronic conditions and poor health status. These expenditures may potentially be reduced by providing timely intervention for need factors or by providing care under a collaborative care model.
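    A post-regression linear decomposition of a mean outcome gap (in the Blinder-Oaxaca spirit) splits the gap into a part explained by group differences in covariates and a residual. A minimal two-group sketch with one simulated "need factor" standing in for comorbidity burden — these are not MEPS data:

    ```python
    import numpy as np

    # Simulate two groups: group 1 ("depressed") has a higher mean of the
    # need factor x (comorbidity count) and a higher intercept in y
    # (a stand-in for log healthcare expenditures).
    rng = np.random.default_rng(3)
    n = 500
    x0 = rng.normal(2.0, 1.0, n)
    x1 = rng.normal(3.0, 1.0, n)
    y0 = 5.0 + 2.0 * x0 + rng.normal(0, 1, n)
    y1 = 6.0 + 2.0 * x1 + rng.normal(0, 1, n)

    def ols(x, y):
        """Intercept and slope by least squares."""
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    b0 = ols(x0, y0)
    gap = y1.mean() - y0.mean()
    # "Explained" part: covariate differences weighted by reference-group
    # coefficients; the remainder is the unexplained (coefficient) part.
    explained = (x1.mean() - x0.mean()) * b0[1]
    share = explained / gap
    print(round(share, 2))       # share of the gap explained by the need factor
    ```

    The study's analogous finding is that about half of the expenditure gap is attributable to differences in need factors rather than to differences in how those factors translate into spending.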

  1. Comparing Consider-Covariance Analysis with Sigma-Point Consider Filter and Linear-Theory Consider Filter Formulations

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E.

    2007-01-01

    Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called "unscented") formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [c.f. 1, 2, 3]. Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation not requiring derivation or implementation of any partial derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider parameterized Kalman filters (c.f. [5]). While in [4] the SPCF and linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to

  2. Validation of a Custom Instrumented Retainer Form Factor for Measuring Linear and Angular Head Impact Kinematics.

    PubMed

    Miller, Logan E; Kuo, Calvin; Wu, Lyndia C; Urban, Jillian E; Camarillo, David B; Stitzel, Joel D

    2018-05-01

    Head impact exposure in popular contact sports is not well understood, especially in the youth population, despite recent advances in impact-sensing technology which has allowed widespread collection of real-time head impact data. Previous studies indicate that a custom-instrumented mouthpiece is a superior method for collecting accurate head acceleration data. The objective of this study was to evaluate the efficacy of mounting a sensor device inside an acrylic retainer form factor to measure six-degrees-of-freedom (6DOF) head kinematic response. This study compares 6DOF mouthpiece kinematics at the head center of gravity (CG) to kinematics measured by an anthropomorphic test device (ATD). This study found that when instrumentation is mounted in the rigid retainer form factor, there is good coupling with the upper dentition and highly accurate kinematic results compared to the ATD. Peak head kinematics were correlated with r2 > 0.98 for both rotational velocity and linear acceleration and r2 = 0.93 for rotational acceleration. These results indicate that a rigid retainer-based form factor is an accurate and promising method of collecting head impact data. This device can be used to study head impacts in helmeted contact sports such as football, hockey, and lacrosse as well as nonhelmeted sports such as soccer and basketball. Understanding the magnitude and frequency of impacts sustained in various sports using an accurate head impact sensor, such as the one presented in this study, will improve our understanding of head impact exposure and sports-related concussion.

  3. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

    In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept in order to enforce the nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using the Radial Point Interpolators, with some differences that modify the method performance. In the construction of the NNRPIM interpolation functions no polynomial basis is required, and the Radial Basis Function (RBF) used is the Multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the aims of this work is to present the validation of the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used in order to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicated that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.

  4. A Factor Analysis of the BSRI and the PAQ.

    ERIC Educational Resources Information Center

    Edwards, Teresa A.; And Others

    Factor analysis of the Bem Sex Role Inventory (BSRI) and the Personality Attributes Questionnaire (PAQ) was undertaken to study the independence of the masculine and feminine scales within each instrument. Both instruments were administered to undergraduate education majors. Analysis of primary first and second order factors of the BSRI indicated…

  5. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    NASA Astrophysics Data System (ADS)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959 summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.
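
    The conserved-component idea can be sketched numerically (an illustration, not De Waal's full procedure; end-member compositions and element names here are invented): when component Z occurs only in one end member, regressing any other element against Z across a suite of mixtures recovers the Z-free end member's composition from the intercept.

```python
import numpy as np

# Two end members mix in varying proportions f; component Z occurs
# only in end member 1 (its concentration in end member 2 is zero).
rng = np.random.default_rng(0)
c1 = {"Z": 10.0, "E": 4.0}   # hypothetical end member 1 (wt%)
c2 = {"Z": 0.0,  "E": 12.0}  # hypothetical end member 2 (wt%)

f = rng.uniform(0.1, 0.9, 40)                 # mixing fractions
obs_Z = f * c1["Z"]                           # Z comes only from end member 1
obs_E = f * c1["E"] + (1 - f) * c2["E"]       # E comes from both

# Because obs_E = c2_E + (c1_E - c2_E)/c1_Z * obs_Z, a straight-line fit
# of E against the conserved component Z recovers the composition of the
# end member that does NOT contain Z from the intercept.
slope, intercept = np.polyfit(obs_Z, obs_E, 1)
print(f"estimated E in end member 2: {intercept:.2f} wt%")
```

    With real analyses, the scatter of points about this line is what the linear-subtraction statistics would then test.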

  6. RF linearity in low-dimensional nanowire MOSFETs

    NASA Astrophysics Data System (ADS)

    Razavieh, Ali

    The ever-decreasing cost of electronics, driven by the unique scaling potential of today's VLSI processes such as CMOS technology, along with innovations in RF devices, circuits, and architectures, has made wireless communication an inseparable part of everyday life. This rapid transition of communication systems toward wireless technologies over the last couple of decades has resulted in the operation of numerous standards within a small frequency window. More traffic in adjacent frequency ranges imposes more constraints on the linearity of RF front-end stages and increases the need for more effective linearization techniques. Long-established ways to improve linearity in DSM CMOS technology focus on system-level methods, which require complex circuit design techniques owing to challenges such as nonlinear output conductance and mobility degradation, especially when a low supply voltage is a key factor. These constraints have shifted attention toward improving linearity at the device level in order to simplify existing linearization techniques. This dissertation discusses the possibility of employing nanostructures, particularly nanowires, to achieve and improve RF linearity at the device level by making a connection between the electronic transport properties of nanowires and their circuit-level RF characteristics (RF linearity). The focus of this work is mainly on transconductance (gm) linearity for the following reasons: 1) owing to good electrostatics, nanowire transistors show good current saturation at very small supply voltages, and good current saturation minimizes output-conductance nonlinearities; 2) nonlinearity due to the gate-to-source capacitance (Cgs) can be ignored at today's operating frequencies because of the small gate capacitance values.
If three criteria: i) operation in the quantum capacitance limit (QCL), ii) one-dimensional (1-D) transport, and iii) operation in the ballistic transport regime are met at the same time, a MOSFET will exhibit an ideal

  7. Analysis of linear and cyclic oligomers in polyamide-6 without sample preparation by liquid chromatography using the sandwich injection method. II. Methods of detection and quantification and overall long-term performance.

    PubMed

    Mengerink, Y; Peters, R; Kerkhoff, M; Hellenbrand, J; Omloo, H; Andrien, J; Vestjens, M; van der Wal, S

    2000-05-05

    By separating the first six linear and cyclic oligomers of polyamide-6 on a reversed-phase high-performance liquid chromatographic system after sandwich injection, quantitative determination of these oligomers becomes feasible. Low-wavelength UV detection of the different oligomers and selective post-column reaction detection of the linear oligomers with o-phthalic dicarboxaldehyde (OPA) and 3-mercaptopropionic acid (3-MPA) are discussed. A general methodology for quantification of oligomers in polymers was developed. It is demonstrated that the empirically determined group-equivalent absorption coefficients and quench factors are a convenient way of quantifying linear and cyclic oligomers of nylon-6. The overall long-term performance of the method was studied by monitoring a reference sample and the calibration factors of the linear and cyclic oligomers.

  8. Establishing Factor Validity Using Variable Reduction in Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Hofmann, Rich

    1995-01-01

    Using a 21-statement attitude-type instrument, an iterative procedure for improving confirmatory model fit is demonstrated within the context of the EQS program of P. M. Bentler and maximum likelihood factor analysis. Each iteration systematically eliminates the poorest-fitting statement, as identified by a variable fit index. (SLD)
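
    The elimination loop can be sketched as follows (an illustration only: synthetic responses, a principal-axis-style extraction in plain numpy, and per-item communality standing in for the EQS variable fit index):

```python
import numpy as np

# Synthetic responses to 21 hypothetical items driven by 2 common factors.
rng = np.random.default_rng(1)
n, p, k = 300, 21, 2
factors = rng.normal(size=(n, k))
loadings = rng.uniform(0.4, 0.9, size=(p, k))
data = factors @ loadings.T + 0.6 * rng.normal(size=(n, p))
data[:, -1] = rng.normal(size=n)  # one item unrelated to the factors

def communalities(x, k):
    # Principal-axis-style loadings from the correlation matrix; the
    # communality is the variance of each item explained by k factors.
    corr = np.corrcoef(x, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    top = np.argsort(vals)[::-1][:k]
    load = vecs[:, top] * np.sqrt(vals[top])
    return (load ** 2).sum(axis=1)

kept = list(range(p))
for _ in range(3):  # drop the three poorest-fitting items, one per pass
    h2 = communalities(data[:, kept], k)
    kept.pop(int(np.argmin(h2)))

print(len(kept))
```

    Refitting after each removal matters: dropping one item changes the factor solution, and hence the fit of every remaining item.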

  9. Internal and external environmental factors affecting the performance of hospital-based home nursing care.

    PubMed

    Noh, J-W; Kwon, Y-D; Yoon, S-J; Hwang, J-I

    2011-06-01

    Numerous studies of hospital-based home nursing care (HNC) services have addressed their need, efficiency, and effectiveness. However, no study has determined the critical factors associated with HNC's positive results, despite the many positive studies of the service. This study included all 89 training hospitals that were practising HNC service in Korea as of November 2006. The input factors affecting performance were classified as either internal or external environmental factors, and the analysis was conducted to understand the impact these factors had on performance. Data were analysed using multiple linear regression. In univariate analysis, both internal and external environmental variables affected HNC performance. In the multiple linear regression analysis, however, the significant variables were internal environmental factors, specifically managerial resources (the number of operating beds and the outpatient/inpatient ratio). The importance of organizational culture (the passion of HNC nurses) was also significant. This study, considering the limited market size of Korea, illustrates that the critical factors for the development of hospital-led HNC lie with internal environmental factors rather than external ones. Among the internal environmental factors, the hospitals' managerial resource-related factors and, specifically, the passion of nurses were the most important contributing elements. © 2011 The Authors. International Nursing Review © 2011 International Council of Nurses.
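
    The multiple linear regression step used above is the standard ordinary-least-squares fit; a minimal sketch with synthetic data (the predictor names, scales, and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 89  # one row per hospital, as in the study
beds = rng.uniform(100, 800, n)            # operating beds (assumed scale)
op_ip_ratio = rng.uniform(0.5, 3.0, n)     # outpatient/inpatient ratio
nurse_passion = rng.uniform(1, 5, n)       # survey score (assumed scale)

# Synthetic "true" relationship used to generate the outcome.
performance = (0.01 * beds + 2.0 * op_ip_ratio + 3.0 * nurse_passion
               + rng.normal(0, 0.5, n))

# OLS via least squares: column of ones gives the intercept.
X = np.column_stack([np.ones(n), beds, op_ip_ratio, nurse_passion])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(coef)  # intercept followed by one slope per factor
```

    Each slope estimates the change in performance per unit change in that factor, holding the others fixed.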

  10. Suppression of stimulated Brillouin scattering in optical fibers using a linearly chirped diode laser.

    PubMed

    White, J O; Vasilyev, A; Cahill, J P; Satyan, N; Okusaga, O; Rakuljic, G; Mungan, C E; Yariv, A

    2012-07-02

    The output of high power fiber amplifiers is typically limited by stimulated Brillouin scattering (SBS). An analysis of SBS with a chirped pump laser indicates that a chirp of 2.5 × 10^15 Hz/s could raise, by an order of magnitude, the SBS threshold of a 20-m fiber. A diode laser with a constant output power and a linear chirp of 5 × 10^15 Hz/s has been previously demonstrated. In a low-power proof-of-concept experiment, the threshold for SBS in a 6-km fiber is increased by a factor of 100 with a chirp of 5 × 10^14 Hz/s. A linear chirp will enable straightforward coherent combination of multiple fiber amplifiers, with electronic compensation of path length differences on the order of 0.2 m.

  11. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    NASA Astrophysics Data System (ADS)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.

  12. Attitudes Toward Seeking Professional Psychological Help: Factor Structure and Socio-Demographic Predictors

    PubMed Central

    Picco, Louisa; Abdin, Edimanysah; Chong, Siow Ann; Pang, Shirlene; Shafie, Saleha; Chua, Boon Yiang; Vaingankar, Janhavi A.; Ong, Lue Ping; Tay, Jenny; Subramaniam, Mythily

    2016-01-01

    Attitudes toward seeking professional psychological help (ATSPPH) are complex. Help-seeking preferences are influenced by various attitudinal and socio-demographic factors and can often result in unmet needs, treatment gaps, and delays in help-seeking. The aims of the current study were to explore the factor structure of the ATSPPH Short Form (ATSPPH-SF) scale and determine whether any significant socio-demographic differences exist in terms of help-seeking attitudes. Data were extracted from a population-based survey conducted among Singapore residents aged 18–65 years. Respondents provided socio-demographic information and were administered the ATSPPH-SF. Weighted means and standard errors of the mean were calculated for continuous variables, and frequencies and percentages for categorical variables. Confirmatory factor analysis and exploratory factor analysis were performed to establish the validity of the factor structure of the ATSPPH-SF scale. Multivariable linear regressions were conducted to examine predictors of each of the ATSPPH-SF factors. The factor analysis revealed that the ATSPPH-SF formed three distinct dimensions: "Openness to seeking professional help," "Value in seeking professional help," and "Preference to cope on one's own." Multiple linear regression analyses showed that age, ethnicity, marital status, education, and income were significantly associated with the ATSPPH-SF factors. Population subgroups that were less open to or saw less value in seeking psychological help should be targeted via culturally appropriate education campaigns and tailored, supportive interventions. PMID:27199794

  13. Caffeine Increases the Linearity of the Visual BOLD Response

    PubMed Central

    Liu, Thomas T.; Liau, Joy

    2009-01-01

    Although the blood oxygenation level dependent (BOLD) signal used in most functional magnetic resonance imaging (fMRI) studies has been shown to exhibit nonlinear characteristics, most analyses assume that the BOLD signal responds in a linear fashion to a stimulus. This assumption of linearity can lead to errors in the estimation of the BOLD response, especially for rapid event-related fMRI studies. In this study, we used a rapid event-related design and Volterra kernel analysis to assess the effect of a 200 mg oral dose of caffeine on the linearity of the visual BOLD response. The caffeine dose significantly (p < 0.02) increased the linearity of the BOLD response in a sample of 11 healthy volunteers studied on a 3 Tesla MRI system. In addition, the agreement between nonlinear and linear estimates of the hemodynamic response function was significantly increased (p = 0.013) with the caffeine dose. These findings indicate that differences in caffeine usage should be considered as a potential source of bias in the analysis of rapid event-related fMRI studies. PMID:19854278

  14. Analysis of blood pressure signal in patients with different ventricular ejection fraction using linear and non-linear methods.

    PubMed

    Arcentales, Andres; Rivera, Patricio; Caminal, Pere; Voss, Andreas; Bayes-Genis, Antonio; Giraldo, Beatriz F

    2016-08-01

    Changes in left ventricular function produce alternans in the hemodynamic and electric behavior of the cardiovascular system. A total of 49 cardiomyopathy patients were studied based on the blood pressure (BP) signal and were classified according to the left ventricular ejection fraction (LVEF) into low-risk (LR: LVEF>35%, 17 patients) and high-risk (HR: LVEF≤35%, 32 patients) groups. We propose to characterize these patients using a linear and a nonlinear method, based on spectral estimation and the recurrence plot (RP), respectively. From the BP signal, we extracted each systolic time interval (STI), upward systolic slope (BPsl), and the difference between systolic and diastolic BP, defined as pulse pressure (PP). Then, the best subset of parameters was obtained through the sequential feature selection (SFS) method. According to the results, the best classification was obtained using a combination of linear and nonlinear features from the STI and PP parameters. For STI, the best combination was obtained considering the frequency peak and the diagonal structures of the RP, with an area under the curve (AUC) of 79%. The same results were obtained when comparing PP values. Consequently, the use of combined linear and nonlinear parameters could improve the risk stratification of cardiomyopathy patients.
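
    A recurrence plot of the kind mentioned above is simply a thresholded distance matrix; a minimal sketch (a noisy sine stands in for a beat-to-beat series, and the threshold is an assumed value):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.normal(size=200)

# Recurrence matrix: points i and j "recur" if they lie within eps of
# each other. Diagonal line structures in this matrix are what carry
# the determinism-type features used for classification.
eps = 0.3
dist = np.abs(x[:, None] - x[None, :])
rp = (dist < eps).astype(int)

# Recurrence rate: fraction of recurrent point pairs (excluding the
# trivial self-matches on the main diagonal).
n = len(x)
rr = (rp.sum() - n) / (n * n - n)
print(f"recurrence rate = {rr:.3f}")
```

    In practice the series would be delay-embedded before computing distances; this sketch omits embedding for brevity.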

  15. Photoneutron Flux Measurement via Neutron Activation Analysis in a Radiotherapy Bunker with an 18 MV Linear Accelerator

    NASA Astrophysics Data System (ADS)

    Çeçen, Yiğit; Gülümser, Tuğçe; Yazgan, Çağrı; Dapo, Haris; Üstün, Mahmut; Boztosun, Ismail

    2017-09-01

    In cancer treatment, high energy X-rays produced by linear accelerators (LINACs) are used. If the energy of these beams is over 8 MeV, photonuclear reactions occur between the bremsstrahlung photons and the metallic parts of the LINAC. As a result of these interactions, neutrons are also produced as secondary radiation products (γ,n), called photoneutrons. The study aims to map the photoneutron flux distribution within the LINAC bunker via neutron activation analysis (NAA) using indium-cadmium foils. Irradiations were made at different gantry angles (0°, 90°, 180° and 270°) at a total of 91 positions in the Philips SLI-25 linear accelerator treatment room, and the location-based distribution of thermal neutron flux was obtained. Gamma spectrum analysis was carried out with a high-purity germanium (HPGe) detector. The results of the analysis showed that the maximum neutron flux in the room occurred just above the LINAC head (1.2×10^5 neutrons/cm²·s), which is comparable to an americium-beryllium (Am-Be) neutron source. There was a 90% decrease in flux at the walls and at the start of the maze with respect to the maximum neutron flux. Just in front of the LINAC door, inside the room, the neutron flux was measured to be less than 1% of the maximum.

  16. Non-linear effects of the built environment on automobile-involved pedestrian crash frequency: A machine learning approach.

    PubMed

    Ding, Chuan; Chen, Peng; Jiao, Junfeng

    2018-03-01

    Although a growing body of literature focuses on the relationship between the built environment and pedestrian crashes, limited evidence is provided about the relative importance of many built environment attributes by accounting for their mutual interaction effects and their non-linear effects on automobile-involved pedestrian crashes. This study adopts the approach of Multiple Additive Poisson Regression Trees (MAPRT) to fill such gaps using pedestrian collision data collected from Seattle, Washington. Traffic analysis zones are chosen as the analytical unit. The factors investigated for their effects on pedestrian crash frequency include characteristics of the road network, street elements, land use patterns, and traffic demand. Density and the degree of mixed land use have major effects on pedestrian crash frequency, accounting for approximately 66% of the effects in total. More importantly, some factors show clear non-linear relationships with pedestrian crash frequency, challenging the linearity assumption commonly used in existing studies that employ statistical models. With various accurately identified non-linear relationships between the built environment and pedestrian crashes, this study suggests that local agencies adopt geospatially differentiated policies to establish a safe walking environment. These findings, especially the effective ranges of the built environment, provide evidence to support transport and land use planning, policy recommendations, and road safety programs. Copyright © 2018 Elsevier Ltd. All rights reserved.
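
    How boosted trees expose such non-linear effects can be sketched with synthetic data (a generic gradient-boosted regressor stands in for the paper's MAPRT, and the saturating density effect is invented; partial dependence is computed manually by averaging predictions over a grid):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic data: crash frequency responds non-linearly (saturating)
# to a hypothetical density variable, plus a linear land-use-mix effect.
rng = np.random.default_rng(5)
n = 1000
density = rng.uniform(0, 10, n)
mix = rng.uniform(0, 1, n)
crashes = 5 * (1 - np.exp(-0.5 * density)) + 2 * mix + rng.normal(0, 0.3, n)

X = np.column_stack([density, mix])
model = GradientBoostingRegressor(random_state=0).fit(X, crashes)

# Manual partial dependence of crashes on density: sweep density over a
# grid while averaging over the observed values of the other feature.
grid = np.linspace(0.5, 9.5, 10)
pd_curve = [model.predict(np.column_stack([np.full(n, g), mix])).mean()
            for g in grid]
print(np.round(pd_curve, 2))
```

    The resulting curve rises steeply and then flattens, which is exactly the kind of "effective range" a linear model would miss.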

  17. Identification of key factors affecting the water pollutant concentration in the sluice-controlled river reaches of the Shaying River in China via statistical analysis methods.

    PubMed

    Dou, Ming; Zhang, Yan; Zuo, Qiting; Mi, Qingbin

    2015-08-01

    The construction of sluices creates a strong disturbance in water environmental factors within a river. The change in water pollutant concentrations of sluice-controlled river reaches (SCRRs) is more complex than that of natural river segments. To determine the key factors affecting water pollutant concentration changes in SCRRs, river reaches near the Huaidian Sluice in the Shaying River of China were selected as a case study, and water quality monitoring experiments based on different regulating modes were implemented in 2009 and 2010. To identify the key factors affecting the change rates of the permanganate chemical oxygen demand (CODMn) and ammonia nitrogen (NH3-N) concentrations in the SCRRs of the Huaidian Sluice, partial correlation analysis, principal component analysis and principal factor analysis were used. The results indicate four factors, i.e., the inflow quantity from upper reaches, the opening size of the sluice gates, the water pollutant concentration from upper reaches, and the turbidity before the sluice, which are the common key factors for the CODMn and NH3-N concentration change rates. Moreover, the dissolved oxygen before the sluice is a key factor for the CODMn concentration change rate, and the water depth before the sluice is a key factor for the NH3-N concentration change rate. Multiple linear regressions between the water pollutant concentration change rates and the key factors were established via multiple linear regression analyses, and the quantitative relationship between the CODMn and NH3-N concentration change rates and the key affecting factors was analyzed. Finally, the mechanism of action of the key factors affecting the water pollutant concentration changes was analyzed. 
The results reveal that the inflow quantity from upper reaches, the opening size of the sluice gates, the CODMn concentration from upper reaches and the dissolved oxygen before the sluice have a negative influence and the turbidity before the sluice has a positive
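
    The partial correlation step used above can be illustrated with synthetic data (variable names and effect sizes are invented): it measures the association between two variables after regressing out a shared confounder.

```python
import numpy as np

# Synthetic example: pollutant change rate vs inflow, where both are
# influenced by a third variable (gate opening size).
rng = np.random.default_rng(6)
n = 60
gate_opening = rng.normal(size=n)
inflow = 0.8 * gate_opening + rng.normal(size=n)     # confounded with gate
change_rate = -0.6 * inflow + 0.5 * gate_opening + rng.normal(0, 0.3, n)

def residualize(y, z):
    # Residuals of y after least-squares regression on z (with intercept).
    Z = np.column_stack([np.ones(len(z)), z])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ beta

# Partial correlation = correlation of the two residual series.
r_partial = np.corrcoef(residualize(change_rate, gate_opening),
                        residualize(inflow, gate_opening))[0, 1]
print(f"partial correlation = {r_partial:.2f}")
```

    Repeating this for each candidate factor, while controlling for the others, is how the key factors are ranked.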

  18. High-Speed Linear Raman Spectroscopy for Instability Analysis of a Bluff Body Flame

    NASA Technical Reports Server (NTRS)

    Kojima, Jun; Fischer, David

    2013-01-01

    We report a high-speed laser diagnostics technique based on point-wise linear Raman spectroscopy for measuring the frequency content of a CH4-air premixed flame stabilized behind a circular bluff body. The technique, which primarily employs a Nd:YLF pulsed laser and a fast image-intensified CCD camera, successfully measures the time evolution of scalar parameters (N2, O2, CH4, and H2O) in the vortex-induced flame instability at a data rate of 1 kHz. Oscillation of the V-shaped flame front is quantified through frequency analysis of the combustion species data and their correlations. This technique promises to be a useful diagnostic tool for combustion instability studies.
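
    The frequency analysis of a 1 kHz scalar time series amounts to locating the dominant peak in its spectrum; a minimal sketch (the 240 Hz oscillation and noise level are assumed, not the paper's values):

```python
import numpy as np

fs = 1000.0               # 1 kHz data rate, as in the measurement
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(7)
signal = np.sin(2 * np.pi * 240 * t) + 0.5 * rng.normal(size=t.size)

# FFT of the mean-removed series; the dominant bin gives the
# oscillation frequency of the scalar.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak_freq:.0f} Hz")
```

    Cross-spectra between species (e.g., CH4 vs H2O) would similarly quantify the correlations mentioned above.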

  19. Equivalent Linearization Analysis of Geometrically Nonlinear Random Vibrations Using Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2002-01-01

    Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
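
    The force-error-minimization idea can be shown on the textbook single-degree-of-freedom case (a Duffing oscillator under white noise; the parameter values are illustrative assumptions, and this scalar iteration stands in for the multi-DOF finite element implementations described above):

```python
import numpy as np

# Duffing oscillator y'' + 2*zeta*w0*y' + w0^2*(y + gamma*y^3) = w(t)
# under white noise of (two-sided) PSD S0. For a Gaussian response,
# force-error minimization gives the equivalent linear stiffness
# we^2 = w0^2 * (1 + 3*gamma*sigma^2); iterate to a self-consistent
# response variance sigma^2.
w0, zeta, gamma, S0 = 2 * np.pi * 10, 0.02, 0.5, 1e3

sigma2 = np.pi * S0 / (2 * zeta * w0 * w0 ** 2)  # linear (gamma = 0) start
for _ in range(100):
    we2 = w0 ** 2 * (1 + 3 * gamma * sigma2)     # equivalent stiffness
    sigma2_new = np.pi * S0 / (2 * zeta * w0 * we2)
    if abs(sigma2_new - sigma2) < 1e-12:
        break
    sigma2 = sigma2_new

print(f"RMS displacement: {np.sqrt(sigma2):.4f}")
```

    The hardening cubic term stiffens the equivalent system, so the converged RMS response is lower than the purely linear prediction, which is the qualitative effect the plate results above exhibit.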

  20. Inviscid linear stability analysis of two vertical columns of different densities in a gravitational acceleration field

    DOE PAGES

    Prathama, Aditya Heru; Pantano, Carlos

    2017-08-09

    Here, we study the inviscid linear stability of a vertical interface separating two fluids of different densities and subject to a gravitational acceleration field parallel to the interface. In this arrangement, the two free streams are constantly accelerated, which means that the linear stability analysis is not amenable to Fourier or Laplace solution in time. Instead, we derive the equations analytically by the initial-value problem method and express the solution in terms of the well-known parabolic cylinder function. The results, which can be classified as an accelerating Kelvin–Helmholtz configuration, show that even in the presence of surface tension, the interface is unconditionally unstable at all wavemodes. This is a consequence of the ever increasing momentum of the free streams, as gravity accelerates them indefinitely. The instability can be shown to grow as the exponential of a quadratic function of time.