21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Methods of analysis. 2.19 Section 2.19 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL GENERAL ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis...
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Methods of analysis. 2.19 Section 2.19 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL GENERAL ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis...
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Methods of analysis. 2.19 Section 2.19 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL GENERAL ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis...
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
A Generalized Method of Image Analysis from an Intercorrelation Matrix which May Be Singular.
ERIC Educational Resources Information Center
Yanai, Haruo; Mukherjee, Bishwa Nath
1987-01-01
This generalized image analysis method is applicable to singular and non-singular correlation matrices (CMs). Using the orthogonal projector and a weaker generalized inverse matrix, image and anti-image covariance matrices can be derived from a singular CM. (SLD)
Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.
ERIC Educational Resources Information Center
Raymond, Margaret; And Others
1983-01-01
Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Takane, Yoshio
2004-01-01
We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
An analysis of general chain systems
NASA Technical Reports Server (NTRS)
Passerello, C. E.; Huston, R. L.
1972-01-01
A general analysis of dynamic systems consisting of connected rigid bodies is presented. The number of bodies and their manner of connection is arbitrary so long as no closed loops are formed. The analysis represents a dynamic finite element method, which is computer-oriented and designed so that nonworking, internal constraint forces are automatically eliminated. The method is based upon Lagrange's form of d'Alembert's principle. Shifter matrix transformations are used with the geometrical aspects of the analysis. The method is illustrated with a space manipulator.
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Methods of analysis. 2.19 Section 2.19 Food and... ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis... enforcement programs to utilize the methods of analysis of the AOAC INTERNATIONAL (AOAC) as published in the...
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Methods of analysis. 2.19 Section 2.19 Food and... ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis... enforcement programs to utilize the methods of analysis of the AOAC INTERNATIONAL (AOAC) as published in the...
Eleventh NASTRAN User's Colloquium
NASA Technical Reports Server (NTRS)
1983-01-01
NASTRAN (NASA STRUCTURAL ANALYSIS) is a large, comprehensive, nonproprietary, general purpose finite element computer code for structural analysis which was developed under NASA sponsorship. The Eleventh Colloquium provides some comprehensive general papers on the application of finite element methods in engineering, comparisons with other approaches, unique applications, pre- and post-processing or auxiliary programs, and new methods of analysis with NASTRAN.
ERIC Educational Resources Information Center
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
ERIC Educational Resources Information Center
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S.
2012-01-01
We propose a new method of structural equation modeling (SEM) for longitudinal and time series data, named Dynamic GSCA (Generalized Structured Component Analysis). The proposed method extends the original GSCA by incorporating a multivariate autoregressive model to account for the dynamic nature of data taken over time. Dynamic GSCA also…
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Methods of analysis. 163.5 Section 163.5 Food and... CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in cacao products shall be determined by the following methods of analysis prescribed in “Official Methods...
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
2004-08-01
ethnography, phenomenological study, grounded theory study and content analysis. THE HISTORICAL METHOD Methods I. Qualitative Research Methods ... Phenomenological Study 4. Grounded Theory Study 5. Content Analysis II. Quantitative Research Methods A... A. The Historical Method B. General Qualitative
Complexity analysis based on generalized deviation for financial markets
NASA Astrophysics Data System (ADS)
Li, Chao; Shang, Pengjian
2018-03-01
In this paper, a new modified method is proposed as a measure to investigate the correlation between past price and future volatility for financial time series, known as the complexity analysis based on generalized deviation. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function presents an exhaustive way of quantifying the rules of the financial market. Robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After the data analysis of the experimental model, we found that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.
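As an illustration of the exponential fitting step mentioned in the abstract above, the following Python sketch fits an exponential model to a volatility-versus-lag curve with scipy; the deviation measure and the data are synthetic stand-ins, not the authors' generalized deviation function.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    returns = rng.standard_normal(5000) * 0.01          # stand-in for daily returns
    lags = np.arange(1, 60)

    # Toy "deviation" measure: mean absolute return damped with lag
    # (a placeholder for the paper's generalized deviation function).
    volatility = np.array([np.mean(np.abs(returns[k:])) * np.exp(-k / 40.0)
                           for k in lags])

    def exp_decay(k, a, tau, c):
        """Exponential model a*exp(-k/tau) + c used to fit the volatility curve."""
        return a * np.exp(-k / tau) + c

    params, _ = curve_fit(exp_decay, lags, volatility, p0=(0.01, 20.0, 0.0))
    print("fitted a, tau, c:", params)

Here tau summarizes how quickly the toy volatility curve decays with lag.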
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Methods of analysis. 133.5 Section 133.5 Food and... CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture, milkfat, and phosphatase levels in cheeses will be determined by the following methods of analysis from...
Nonlinear analysis of structures. [within framework of finite element method
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H.; Pifko, A.; Levy, A.
1974-01-01
The development of nonlinear analysis techniques within the framework of the finite-element method is reported. Although the emphasis is concerned with those nonlinearities associated with material behavior, a general treatment of geometric nonlinearity, alone or in combination with plasticity is included, and applications presented for a class of problems categorized as axisymmetric shells of revolution. The scope of the nonlinear analysis capabilities includes: (1) a membrane stress analysis, (2) bending and membrane stress analysis, (3) analysis of thick and thin axisymmetric bodies of revolution, (4) a general three dimensional analysis, and (5) analysis of laminated composites. Applications of the methods are made to a number of sample structures. Correlation with available analytic or experimental data range from good to excellent.
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
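As a concrete starting point for the mixed-effects approach advocated above, the sketch below fits a random-intercept, random-slope model on simulated longitudinal data with the statsmodels package; the data frame, column names, and effect sizes are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_subj, n_visits = 30, 5
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_visits),
        "time": np.tile(np.arange(n_visits), n_subj),
    })
    # Simulated outcome: subject-specific intercepts/slopes plus noise.
    intercepts = rng.normal(50, 5, n_subj)
    slopes = rng.normal(-1.0, 0.5, n_subj)
    df["score"] = (intercepts[df["subject"]] + slopes[df["subject"]] * df["time"]
                   + rng.normal(0, 2, len(df)))

    # Mixed-effects model: fixed effect of time, random intercept and slope per subject.
    model = smf.mixedlm("score ~ time", df, groups=df["subject"], re_formula="~time")
    result = model.fit()
    print(result.summary())

result.summary() reports the fixed effect of time together with the estimated between-subject variance components.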
NASA Technical Reports Server (NTRS)
Stretchberry, D. M.; Hein, G. F.
1972-01-01
The general concepts of costing, budgeting, and benefit-cost ratio and cost-effectiveness analysis are discussed. The three common methods of costing are presented. Budgeting distributions are discussed. The use of discounting procedures is outlined. The benefit-cost ratio and cost-effectiveness analysis are defined and their current application to NASA planning is pointed out. Specific practices and techniques are discussed, and actual costing and budgeting procedures are outlined. The recommended method of calculating benefit-cost ratios is described. A standardized method of cost-effectiveness analysis and long-range planning are also discussed.
Regional frequency analysis of extreme rainfalls using partial L moments method
NASA Astrophysics Data System (ADS)
Zakaria, Zahrahtul Amani; Shabri, Ani
2013-07-01
An approach based on regional frequency analysis using L moments and LH moments is revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results were then compared with those obtained using the method of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. The PL moment ratio diagram and Z test were employed in determining the best-fit distribution. Comparison between the three approaches showed that the GLO and GEV distributions were identified as the suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments would outperform the L and LH moments methods for estimation of large return period events.
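For readers unfamiliar with the L-moment machinery, the sketch below computes ordinary sample L-moments and L-moment ratios from probability-weighted moments on synthetic annual-maximum rainfall; it shows only the standard estimators, not the partial (PL) modification or the LH moments used in the study.

    import numpy as np

    def sample_l_moments(x):
        """Return the first four sample L-moments (l1..l4) via probability-weighted moments."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((i - 1) / (n - 1) * x) / n
        b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
        b3 = np.sum((i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
        l1 = b0
        l2 = 2 * b1 - b0
        l3 = 6 * b2 - 6 * b1 + b0
        l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
        return l1, l2, l3, l4

    rainfall = np.random.default_rng(2).gumbel(40.0, 12.0, size=200)  # synthetic annual maxima
    l1, l2, l3, l4 = sample_l_moments(rainfall)
    print("L-CV:", l2 / l1, "L-skewness:", l3 / l2, "L-kurtosis:", l4 / l2)

The ratios l2/l1 (L-CV), l3/l2 (L-skewness) and l4/l2 (L-kurtosis) are the quantities plotted on moment-ratio diagrams such as the one used above.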
A weak Galerkin generalized multiscale finite element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
2016-03-31
In this study, we propose a general framework for weak Galerkin generalized multiscale (WG-GMS) finite element method for the elliptic problems with rapidly oscillating or high contrast coefficients. This general WG-GMS method features in high order accuracy on general meshes and can work with multiscale basis derived by different numerical schemes. A special case is studied under this WG-GMS framework in which the multiscale basis functions are obtained by solving local problem with the weak Galerkin finite element method. Convergence analysis and numerical experiments are obtained for the special case.
Acoustic prediction methods for the NASA generalized advanced propeller analysis system (GAPAS)
NASA Technical Reports Server (NTRS)
Padula, S. L.; Block, P. J. W.
1984-01-01
Classical methods of propeller performance analysis are coupled with state-of-the-art Aircraft Noise Prediction Program (ANOPP) techniques to yield a versatile design tool, the NASA Generalized Advanced Propeller Analysis System (GAPAS), for novel quiet and efficient propellers. ANOPP is a collection of modular specialized programs. GAPAS as a whole addresses blade geometry and aerodynamics, rotor performance and loading, and subsonic propeller noise.
Transient analysis of 1D inhomogeneous media by dynamic inhomogeneous finite element method
NASA Astrophysics Data System (ADS)
Yang, Zailin; Wang, Yao; Hei, Baoping
2013-12-01
The dynamic inhomogeneous finite element method is studied for use in the transient analysis of one-dimensional inhomogeneous media. The general formula of the inhomogeneous consistent mass matrix is established based on the shape function. In order to research the advantages of this method, it is compared with the general finite element method. A linear bar element is chosen for the discretization tests of material parameters with two fictitious distributions. A numerical example is then solved to observe the differences in the results between these two methods. Some characteristics of the dynamic inhomogeneous finite element method that demonstrate its advantages are obtained through comparison with the general finite element method. It is found that the method can be used to solve elastic wave motion problems with a large element scale and a large number of iteration steps.
Time-dependent inertia analysis of vehicle mechanisms
NASA Astrophysics Data System (ADS)
Salmon, James Lee
Two methods for performing transient inertia analysis of vehicle hardware systems are developed in this dissertation. The analysis techniques can be used to predict the response of vehicle mechanism systems to the accelerations associated with vehicle impacts. General analytical methods for evaluating translational or rotational system dynamics are generated and evaluated for various system characteristics. The utility of the derived techniques is demonstrated by applying the generalized methods to two vehicle systems. Time-dependent accelerations measured during a vehicle-to-vehicle impact are used as input to perform a dynamic analysis of an automobile liftgate latch and outside door handle. Generalized Lagrange equations for a non-conservative system are used to formulate a second-order nonlinear differential equation defining the response of the components to the transient input. The differential equation is solved by employing the fourth-order Runge-Kutta method. The events are then analyzed using commercially available two-dimensional rigid body dynamic analysis software. The results of the two analytical techniques are compared to experimental data generated by high-speed film analysis of tests of the two components performed on a high-G acceleration sled at Ford Motor Company.
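The abstract describes reducing Lagrange's equations to a second-order nonlinear differential equation and integrating it with the fourth-order Runge-Kutta method. The sketch below shows that pattern for a generic single-degree-of-freedom mechanism driven by a base-acceleration pulse; the equation of motion, pulse shape, and coefficients are hypothetical stand-ins, not the liftgate latch or door handle models.

    import numpy as np

    def accel(t, theta, omega):
        """Hypothetical nonlinear equation of motion: theta'' = f(t, theta, theta')."""
        base_accel = 200.0 * np.exp(-((t - 0.05) / 0.02) ** 2)   # stand-in crash pulse (m/s^2)
        return -50.0 * np.sin(theta) - 2.0 * omega + 0.1 * base_accel

    def rk4_step(t, theta, omega, h):
        """One fourth-order Runge-Kutta step for the first-order system (theta, omega)."""
        k1t, k1w = omega, accel(t, theta, omega)
        k2t, k2w = omega + 0.5 * h * k1w, accel(t + 0.5 * h, theta + 0.5 * h * k1t, omega + 0.5 * h * k1w)
        k3t, k3w = omega + 0.5 * h * k2w, accel(t + 0.5 * h, theta + 0.5 * h * k2t, omega + 0.5 * h * k2w)
        k4t, k4w = omega + h * k3w, accel(t + h, theta + h * k3t, omega + h * k3w)
        theta += h / 6.0 * (k1t + 2 * k2t + 2 * k3t + k4t)
        omega += h / 6.0 * (k1w + 2 * k2w + 2 * k3w + k4w)
        return theta, omega

    t, theta, omega, h = 0.0, 0.0, 0.0, 1e-4
    while t < 0.2:                       # integrate over a 200 ms event
        theta, omega = rk4_step(t, theta, omega, h)
        t += h
    print("final angle (rad):", theta)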
Probabilistic boundary element method
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Raveendra, S. T.
1989-01-01
The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM) which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results to a set of random variables. As such, PBEM performs analogously to other structural analysis codes such as finite elements in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible and therefore volume modeling is generally required.
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.
2011-04-20
While considerable advance has been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
ERIC Educational Resources Information Center
Thompson, Bruce
The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…
Iron Analysis by Redox Titration. A General Chemistry Experiment.
ERIC Educational Resources Information Center
Kaufman, Samuel; DeVoe, Howard
1988-01-01
Describes a simplified redox method for total iron analysis suitable for execution in a three-hour laboratory period by general chemistry students. Discusses materials, procedures, analyses, and student performance. (CW)
Picture of All Solutions of Successive 2-Block Maxbet Problems
ERIC Educational Resources Information Center
Choulakian, Vartan
2011-01-01
The Maxbet method is a generalized principal components analysis of a data set, where the group structure of the variables is taken into account. Similarly, the 3-block [12,13] partial Maxdiff method is a generalization of covariance analysis, where only the covariances between blocks (1, 2) and (1, 3) are taken into account. The aim of this paper is…
NASA Technical Reports Server (NTRS)
Frocht, M. M.; Guernsey, R., Jr.
1953-01-01
The method of strain measurement after annealing is reviewed and found to be satisfactory for the materials available in this country. A new general method is described for the photoelastic determination of the principal stresses at any point of a general body subjected to arbitrary load. The method has been applied to a sphere subjected to diametrical compressive loads. The results show possibilities of high accuracy.
A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment
ERIC Educational Resources Information Center
Finch, Holmes; Monahan, Patrick
2008-01-01
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
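A minimal sketch of the parallel-analysis idea behind MPA: retain dimensions whose observed eigenvalues exceed a reference distribution obtained by resampling. This is classical parallel analysis on a Pearson correlation matrix, not the Marginal Maximum Likelihood nonlinear factor analysis version described in the article, and the item data are simulated.

    import numpy as np

    def parallel_analysis(data, n_boot=200, seed=0):
        """Compare observed correlation-matrix eigenvalues to resampled reference eigenvalues."""
        rng = np.random.default_rng(seed)
        n, p = data.shape
        observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        reference = np.empty((n_boot, p))
        for b in range(n_boot):
            # Independently permute each column to break the correlation structure.
            resampled = np.column_stack([rng.permutation(data[:, j]) for j in range(p)])
            reference[b] = np.linalg.eigvalsh(np.corrcoef(resampled, rowvar=False))[::-1]
        threshold = np.percentile(reference, 95, axis=0)
        return int(np.sum(observed > threshold))   # retained dimensions

    rng = np.random.default_rng(3)
    latent = rng.standard_normal((500, 1))
    items = latent @ rng.uniform(0.5, 0.9, (1, 8)) + rng.standard_normal((500, 8))
    print("suggested number of dimensions:", parallel_analysis(items))

With one dominant simulated factor, the procedure should suggest a single dimension.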
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Methods of analysis. 133.5 Section 133.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture...
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Methods of analysis. 133.5 Section 133.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture...
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Methods of analysis. 133.5 Section 133.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture...
Incorporating general race and housing flexibility and deadband in rolling element bearing analysis
NASA Technical Reports Server (NTRS)
Davis, R. R.; Vallance, C. S.
1989-01-01
Methods for including the effects of general race and housing compliance and outer race-to-housing deadband (clearance) in rolling element bearing mechanics analysis are presented. It is shown that these effects can cause significant changes in bearing stiffness characteristics, which are of major importance in rotordynamic response of turbomachinery and other rotating systems. Preloading analysis is demonstrated with the finite element/contact mechanics hybrid method applied to a 45 mm angular contact ball bearing.
NASA Technical Reports Server (NTRS)
Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.
1975-01-01
Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.
Discourse analysis in general practice: a sociolinguistic approach.
Nessa, J; Malterud, K
1990-06-01
It is a simple but important fact that as general practitioners we talk to our patients. The quality of the conversation is of vital importance for the outcome of the consultation. The purpose of this article is to discuss a methodological tool borrowed from sociolinguistics--discourse analysis. To assess the suitability of this method for analysis of general practice consultations, the authors have performed a discourse analysis of one single consultation. Our experiences are presented here.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
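One of the derivative-free approximations mentioned above, re-evaluating a generalized Rayleigh quotient with the baseline eigenvector but perturbed matrices, can be sketched as follows. The matrices here are small symmetric examples for simplicity, whereas the paper treats non-hermitian problems, and the perturbation is arbitrary rather than a structural design change.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(4)
    n = 6
    A0 = rng.standard_normal((n, n)); A0 = A0 + A0.T             # baseline "stiffness-like" matrix
    B0 = np.eye(n) + 0.1 * np.diag(rng.random(n))                 # baseline "mass-like" matrix (SPD)
    dA = 0.05 * (lambda M: M + M.T)(rng.standard_normal((n, n)))  # perturbation from a design change

    vals0, vecs0 = eigh(A0, B0)            # baseline generalized eigenproblem A x = lambda B x
    x0 = vecs0[:, 0]                       # baseline eigenvector of interest

    # Rayleigh-quotient approximation: reuse x0 with the perturbed matrices.
    A1 = A0 + dA
    approx = (x0 @ A1 @ x0) / (x0 @ B0 @ x0)
    exact = eigh(A1, B0, eigvals_only=True)[0]
    print("approx:", approx, "exact:", exact)

For small perturbations the quotient tracks the exact perturbed eigenvalue without recomputing eigenvectors or derivatives.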
The Analysis of Seawater: A Laboratory-Centered Learning Project in General Chemistry.
ERIC Educational Resources Information Center
Selco, Jodye I.; Roberts, Julian L., Jr.; Wacks, Daniel B.
2003-01-01
Describes a sea-water analysis project that introduces qualitative and quantitative analysis methods and laboratory methods such as gravimetric analysis, potentiometric titration, ion-selective electrodes, and the use of calibration curves. Uses a problem-based cooperative teaching approach. (Contains 24 references.) (YDS)
Generalized Full-Information Item Bifactor Analysis
ERIC Educational Resources Information Center
Cai, Li; Yang, Ji Seung; Hansen, Mark
2011-01-01
Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single-group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of…
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Methods of analysis. 163.5 Section 163.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in...
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 2 2014-04-01 2014-04-01 false Methods of analysis. 163.5 Section 163.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in...
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Methods of analysis. 163.5 Section 163.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in...
21 CFR 163.5 - Methods of analysis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Methods of analysis. 163.5 Section 163.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in...
Utility-preserving anonymization for health data publishing.
Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn
2017-07-11
Publishing raw electronic health records (EHRs) may be considered as a breach of the privacy of individuals because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing, and thus satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and thus various methods have been proposed to reduce information loss. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method. Our anonymization algorithm applies a full-domain generalization algorithm. We evaluate our method in comparison with an existing method on two aspects: information loss measured through various quality metrics, and the error rate of analysis results. With all the different types of quality metrics, our proposed method shows lower information loss than the existing method. In the real-world EHR analysis, the results show only a small error between the data anonymized by the proposed method and the original data. We propose a new utility-preserving anonymization method and an anonymization algorithm using the proposed method. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
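To make the generalization step concrete, here is a toy full-domain generalization sketch: every value of each quasi-identifier is mapped to the same hierarchy level, and the level is raised until k-anonymity holds. The hierarchies, attributes, and k are hypothetical, and the counterfeit-record insertion and catalog parts of the proposed method are not reproduced.

    from collections import Counter

    # Generalization hierarchies: level 0 is the raw value, higher levels are coarser.
    def gen_age(age, level):
        if level == 0:
            return str(age)
        if level == 1:
            return f"{(age // 10) * 10}-{(age // 10) * 10 + 9}"   # 10-year bands
        return "*"

    def gen_zip(zipcode, level):
        return zipcode[: max(0, 5 - level)] + "*" * min(5, level)  # truncate trailing digits

    def full_domain_generalize(records, k):
        """Raise both hierarchy levels together until every equivalence class has >= k records."""
        for level in range(0, 3):
            keys = [(gen_age(a, level), gen_zip(z, level)) for a, z in records]
            if min(Counter(keys).values()) >= k:
                return level, list(zip(keys, records))
        return None

    records = [(34, "02139"), (36, "02139"), (35, "02138"), (52, "02139"), (58, "02141")]
    print(full_domain_generalize(records, k=2))

Even in this tiny example, a couple of sparsely populated classes push the whole table to the coarsest level, which is the kind of excessive information loss the proposed method is designed to avoid.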
Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains
Krumin, Michael; Shoham, Shy
2010-01-01
Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis which have proven to be very powerful in the modeling and analysis of continuous neural signals like EEG signals. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting "hidden" Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method. PMID:20454705
Teaching General Principles and Applications of Dendrogeomorphology.
ERIC Educational Resources Information Center
Butler, David R.
1987-01-01
Tree-ring analysis in geomorphology can be incorporated into a number of undergraduate methods in order to reconstruct the history of a variety of geomorphic processes. Discusses dendrochronology, general principles of dendrogeomorphology, field sampling methods, laboratory techniques, and examples of applications. (TW)
Code of Federal Regulations, 2010 CFR
2010-01-01
... Requirements; State Agricultural Loan Mediation Programs; Right of First Refusal § 614.4510 General. Direct... for maintaining control, for the proper analysis of such data, and prompt action as needed; (ii... objectives, financing programs, organizational structure, and operating methods, and appropriate analysis of...
Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method of network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs the genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.
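The genetic-algorithm side of the approach can be sketched as follows: candidate messages evolve under mutation and crossover, selected by a fitness function that stands in for measured code coverage. The coverage function below is a toy placeholder rather than an instrumented target, and the message format is hypothetical.

    import random

    random.seed(0)
    FIELDS = [b"GET", b"PUT", b"DEL"]

    def coverage(msg):
        """Toy stand-in for measured code coverage of the target protocol parser."""
        score = 0
        score += 2 if msg.startswith(b"PUT ") else 0
        score += 3 if b"\xff" in msg else 0
        score += min(len(msg), 64) / 64.0
        return score

    def mutate(msg):
        msg = bytearray(msg)
        pos = random.randrange(len(msg))
        msg[pos] = random.randrange(256)             # random byte flip
        return bytes(msg)

    def crossover(a, b):
        cut = random.randrange(1, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    population = [random.choice(FIELDS) + b" /path " + bytes([random.randrange(32, 127)]) * 8
                  for _ in range(20)]
    for generation in range(50):
        population.sort(key=coverage, reverse=True)          # select by coverage fitness
        parents = population[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        population = parents + children
    population.sort(key=coverage, reverse=True)
    print("best test case:", population[0], "fitness:", coverage(population[0]))

Replacing the toy fitness with branch coverage reported by an instrumented binary gives the coverage-guided behavior described above.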
A General Method for Targeted Quantitative Cross-Linking Mass Spectrometry.
Chavez, Juan D; Eng, Jimmy K; Schweppe, Devin K; Cilia, Michelle; Rivera, Keith; Zhong, Xuefei; Wu, Xia; Allen, Terrence; Khurgel, Moshe; Kumar, Akhilesh; Lampropoulos, Athanasios; Larsson, Mårten; Maity, Shuvadeep; Morozov, Yaroslav; Pathmasiri, Wimal; Perez-Neut, Mathew; Pineyro-Ruiz, Coriness; Polina, Elizabeth; Post, Stephanie; Rider, Mark; Tokmina-Roszyk, Dorota; Tyson, Katherine; Vieira Parrine Sant'Ana, Debora; Bruce, James E
2016-01-01
Chemical cross-linking mass spectrometry (XL-MS) provides protein structural information by identifying covalently linked proximal amino acid residues on protein surfaces. The information gained by this technique is complementary to other structural biology methods such as x-ray crystallography, NMR and cryo-electron microscopy[1]. The extension of traditional quantitative proteomics methods with chemical cross-linking can provide information on the structural dynamics of protein structures and protein complexes. The identification and quantitation of cross-linked peptides remains challenging for the general community, requiring specialized expertise ultimately limiting more widespread adoption of the technique. We describe a general method for targeted quantitative mass spectrometric analysis of cross-linked peptide pairs. We report the adaptation of the widely used, open source software package Skyline, for the analysis of quantitative XL-MS data as a means for data analysis and sharing of methods. We demonstrate the utility and robustness of the method with a cross-laboratory study and present data that is supported by and validates previously published data on quantified cross-linked peptide pairs. This advance provides an easy to use resource so that any lab with access to a LC-MS system capable of performing targeted quantitative analysis can quickly and accurately measure dynamic changes in protein structure and protein interactions.
Dielectrophoresis-Based Sample Handling in General-Purpose Programmable Diagnostic Instruments
Gascoyne, Peter R. C.; Vykoukal, Jody V.
2009-01-01
As the molecular origins of disease are better understood, the need for affordable, rapid, and automated technologies that enable microscale molecular diagnostics has become apparent. Widespread use of microsystems that perform sample preparation and molecular analysis could ensure that the benefits of new biomedical discoveries are realized by a maximum number of people, even those in environments lacking any infrastructure. While progress has been made in developing miniaturized diagnostic systems, samples are generally processed off-device using labor-intensive and time-consuming traditional sample preparation methods. We present the concept of an integrated programmable general-purpose sample analysis processor (GSAP) architecture where raw samples are routed to separation and analysis functional blocks contained within a single device. Several dielectrophoresis-based methods that could serve as the foundation for building GSAP functional blocks are reviewed including methods for cell and particle sorting, cell focusing, cell ac impedance analysis, cell lysis, and the manipulation of molecules and reagent droplets. PMID:19684877
7 CFR 58.930 - Official test methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., GENERAL SPECIFICATIONS FOR APPROVED PLANTS AND STANDARDS FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Operations and Operating Procedures § 58.930 Official test methods. (a) Chemical. Chemical analysis, except where otherwise prescribed...
7 CFR 58.930 - Official test methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., GENERAL SPECIFICATIONS FOR APPROVED PLANTS AND STANDARDS FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Operations and Operating Procedures § 58.930 Official test methods. (a) Chemical. Chemical analysis, except where otherwise prescribed...
7 CFR 58.930 - Official test methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., GENERAL SPECIFICATIONS FOR APPROVED PLANTS AND STANDARDS FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Operations and Operating Procedures § 58.930 Official test methods. (a) Chemical. Chemical analysis, except where otherwise prescribed...
7 CFR 58.930 - Official test methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., GENERAL SPECIFICATIONS FOR APPROVED PLANTS AND STANDARDS FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Operations and Operating Procedures § 58.930 Official test methods. (a) Chemical. Chemical analysis, except where otherwise prescribed...
7 CFR 58.930 - Official test methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., GENERAL SPECIFICATIONS FOR APPROVED PLANTS AND STANDARDS FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Operations and Operating Procedures § 58.930 Official test methods. (a) Chemical. Chemical analysis, except where otherwise prescribed...
Generalized Appended Product Indicator Procedure for Nonlinear Structural Equation Analysis.
ERIC Educational Resources Information Center
Wall, Melanie M.; Amemiya, Yasuo
2001-01-01
Considers the estimation of polynomial structural models and shows a limitation of an existing method. Introduces a new procedure, the generalized appended product indicator procedure, for nonlinear structural equation analysis. Addresses statistical issues associated with the procedure through simulation. (SLD)
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
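To illustrate why the likelihood choice matters for non-Gaussian residuals, the sketch below compares a Gaussian log-likelihood with a heavier-tailed Student-t log-likelihood on the same residuals; the Student-t form is only a stand-in, not the formal generalized likelihood of Schoups and Vrugt (2010), which also handles skew, heteroscedasticity, and autocorrelation.

    import numpy as np
    from scipy import stats

    # Synthetic residuals that are heavy-tailed rather than Gaussian.
    residuals = stats.t.rvs(df=3, scale=0.5, size=300, random_state=42)

    def gaussian_loglik(res, sigma):
        return np.sum(stats.norm.logpdf(res, scale=sigma))

    def student_t_loglik(res, sigma, df):
        return np.sum(stats.t.logpdf(res, df=df, scale=sigma))

    sigma_hat = residuals.std(ddof=1)
    print("Gaussian  log-likelihood:", gaussian_loglik(residuals, sigma_hat))
    print("Student-t log-likelihood:", student_t_loglik(residuals, sigma_hat, df=3))

For heavy-tailed residuals the Student-t log-likelihood is typically higher, signalling that the Gaussian assumption understates the probability of large residuals.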
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1975-01-01
An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.
Commercialization of NESSUS: Status
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Millwater, Harry R.
1991-01-01
A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.
Structural Embeddings: Mechanization with Method
NASA Technical Reports Server (NTRS)
Munoz, Cesar; Rushby, John
1999-01-01
The most powerful tools for analysis of formal specifications are general-purpose theorem provers and model checkers, but these tools provide scant methodological support. Conversely, those approaches that do provide a well-developed method generally have less powerful automation. It is natural, therefore, to try to combine the better-developed methods with the more powerful general-purpose tools. An obstacle is that the methods and the tools often employ very different logics. We argue that methods are separable from their logics and are largely concerned with the structure and organization of specifications. We propose a technique called structural embedding that allows the structural elements of a method to be supported by a general-purpose tool, while substituting the logic of the tool for that of the method. We have found this technique quite effective and we provide some examples of its application. We also suggest how general-purpose systems could be restructured to support this activity better.
2001-10-25
Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18,5,17,6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for
Statistical energy analysis computer program, user's guide
NASA Technical Reports Server (NTRS)
Trudell, R. W.; Yano, L. I.
1981-01-01
A high frequency random vibration analysis (the statistical energy analysis (SEA) method) is examined. The SEA method accomplishes high frequency prediction of arbitrary structural configurations. A general SEA computer program is described. A summary of SEA theory, example problems of SEA program application, and a complete program listing are presented.
21 CFR 133.5 - Methods of analysis.
Code of Federal Regulations, 2014 CFR
2014-04-01
... CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS General Provisions § 133.5 Methods of analysis. Moisture...: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html): (a) Moisture content—section 16.233 “Method I (52)—Official Final Action”, under the heading “Moisture”. (b) Milkfat...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Wei; Reddy, T. A.; Gurian, Patrick
2007-01-31
A companion paper to Jiang and Reddy that presents a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach.
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
Clustering "N" Objects into "K" Groups under Optimal Scaling of Variables.
ERIC Educational Resources Information Center
van Buuren, Stef; Heiser, Willem J.
1989-01-01
A method based on homogeneity analysis (multiple correspondence analysis or multiple scaling) is proposed to reduce many categorical variables to one variable with "k" categories. The method is a generalization of the sum of squared distances cluster analysis problem to the case of mixed measurement level variables. (SLD)
Rapid iterative reanalysis for automated design
NASA Technical Reports Server (NTRS)
Bhatia, K. G.
1973-01-01
A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained by using a commonly applied analysis procedure used as a reference. In general, the results are in good agreement. A comparison of the computer times required for the use of the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
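A minimal numerical sketch of the reanalysis idea: expand the (reduced) generalized stiffness matrix to first order in a design parameter and recompute natural frequencies from the approximate matrix instead of reassembling the model. The 2-degree-of-freedom matrices and the sensitivity below are arbitrary illustrations, not the platelike models of the report.

    import numpy as np
    from scipy.linalg import eigh

    # Baseline generalized stiffness and mass matrices (already reduced / condensed).
    K0 = np.array([[ 2.0, -1.0],
                   [-1.0,  2.0]]) * 1.0e4
    M0 = np.diag([1.0, 1.5])

    # First-order sensitivity of K with respect to a design parameter p (e.g. a thickness).
    dK_dp = np.array([[ 0.8, -0.2],
                      [-0.2,  0.4]]) * 1.0e4

    def frequencies(K, M):
        """Natural frequencies (Hz) from the generalized eigenproblem K x = w^2 M x."""
        w2 = eigh(K, M, eigvals_only=True)
        return np.sqrt(w2) / (2.0 * np.pi)

    dp = 0.05                                   # small design change
    K_taylor = K0 + dK_dp * dp                  # linear Taylor expansion of K(p)
    print("baseline  :", frequencies(K0, M0))
    print("reanalysis:", frequencies(K_taylor, M0))

Only the low-order reduced matrices are updated, which is where the reported time savings come from.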
AN EULERIAN-LAGRANGIAN LOCALIZED ADJOINT METHOD FOR THE ADVECTION-DIFFUSION EQUATION
Many numerical methods use characteristic analysis to accommodate the advective component of transport. Such characteristic methods include Eulerian-Lagrangian methods (ELM), modified method of characteristics (MMOC), and operator splitting methods. A generalization of characteri...
ERIC Educational Resources Information Center
Usher, Wayne
2011-01-01
Introduction: To identify health website recommendation trends by Gold Coast (Australia) general practitioners (GPs) to their patients. Method: A mixed method approach to data collection and analysis was employed. Quantitative data were collected using a prepaid postal survey, consisting of 17 questions, mailed to 250 (61 per cent) of 410 GPs on…
Generalized fictitious methods for fluid-structure interactions: Analysis and simulations
NASA Astrophysics Data System (ADS)
Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em
2013-07-01
We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.
Privacy-preserving data cube for electronic medical records: An experimental evaluation.
Kim, Soohyung; Lee, Hyukki; Chung, Yon Dohn
2017-01-01
The aim of this study is to evaluate the effectiveness and efficiency of privacy-preserving data cubes of electronic medical records (EMRs). An EMR data cube is a complex of EMR statistics that are summarized or aggregated by all possible combinations of attributes. Data cubes are widely utilized for efficient big data analysis and also have great potential for EMR analysis. For safe data analysis without privacy breaches, we must consider the privacy preservation characteristics of the EMR data cube. In this paper, we introduce a design for a privacy-preserving EMR data cube and the anonymization methods needed to achieve data privacy. We further focus on changes in efficiency and effectiveness that are caused by the anonymization process for privacy preservation. Thus, we experimentally evaluate various types of privacy-preserving EMR data cubes using several practical metrics and discuss the applicability of each anonymization method with consideration for the EMR analysis environment. We construct privacy-preserving EMR data cubes from anonymized EMR datasets. A real EMR dataset and demographic dataset are used for the evaluation. There are a large number of anonymization methods to preserve EMR privacy, and the methods are classified into three categories (i.e., global generalization, local generalization, and bucketization) by anonymization rules. According to this classification, three types of privacy-preserving EMR data cubes were constructed for the evaluation. We perform a comparative analysis by measuring the data size, cell overlap, and information loss of the EMR data cubes. Global generalization considerably reduced the size of the EMR data cube and did not cause the data cube cells to overlap, but incurred a large amount of information loss. Local generalization maintained the data size and generated only moderate information loss, but there were cell overlaps that could decrease the search performance. Bucketization did not cause cells to overlap and generated little information loss; however, the method considerably inflated the size of the EMR data cubes. The utility of anonymized EMR data cubes varies widely according to the anonymization method, and the applicability of the anonymization method depends on the features of the EMR analysis environment. The findings help to adopt the optimal anonymization method considering the EMR analysis environment and goal of the EMR analysis.
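The cube-construction step itself can be sketched with pandas: aggregate counts over every combination of (possibly generalized) attributes. The columns, the age-band generalization, and the records are hypothetical, and the k-anonymity checks and the three anonymization families compared in the paper are not reproduced.

    from itertools import combinations
    import pandas as pd

    emr = pd.DataFrame({
        "age":       [34, 36, 52, 58, 41, 44],
        "sex":       ["F", "M", "F", "F", "M", "M"],
        "diagnosis": ["flu", "flu", "copd", "copd", "flu", "asthma"],
    })
    # Global generalization of a quasi-identifier before cube construction.
    emr["age_band"] = (emr["age"] // 10 * 10).astype(str) + "s"

    dims = ["age_band", "sex", "diagnosis"]
    cube = {}
    for r in range(1, len(dims) + 1):
        for combo in combinations(dims, r):             # every cuboid of the data cube
            cube[combo] = emr.groupby(list(combo)).size().rename("count").reset_index()

    print(cube[("age_band", "diagnosis")])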
An Analysis of the Algebraic Method for Balancing Chemical Reactions.
ERIC Educational Resources Information Center
Olson, John A.
1997-01-01
Analyzes the algebraic method for balancing chemical reactions. Introduces a third general condition that involves a balance between the total amount of oxidation and reduction. Requires the specification of oxidation states for all elements throughout the reaction. Describes the general conditions, the mathematical treatment, redox reactions, and…
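The algebraic method summarized above reduces balancing to a linear-algebra problem: one equation per element, one unknown per species. A hedged sketch of that computation, using SymPy's nullspace on an illustrative reaction (methane combustion, not an example from the article):

```python
# Illustrative sketch of the algebraic balancing method: write one row per
# element, one column per species (products negated), and take the nullspace.
from sympy import Matrix, lcm

# Columns: CH4, O2, CO2, H2O; rows: C, H, O.  Product columns carry a minus sign.
A = Matrix([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
])

null = A.nullspace()[0]                     # one-dimensional nullspace here
denoms = [term.q for term in null]          # clear fractions
coeffs = null * lcm(denoms)
print(coeffs.T)                             # [1, 2, 1, 2] -> CH4 + 2 O2 -> CO2 + 2 H2O
```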
Tremblay, Marie-Claude; Brousselle, Astrid; Richard, Lucie; Beaudet, Nicole
2013-10-01
Program designers and evaluators should make a point of testing the validity of a program's intervention theory before investing either in implementation or in any type of evaluation. In this context, logic analysis can be a particularly useful option, since it can be used to test the plausibility of a program's intervention theory using scientific knowledge. Professional development in public health is one field among several that would truly benefit from logic analysis, as it appears to be generally lacking in theorization and evaluation. This article presents the application of this analysis method to an innovative public health professional development program, the Health Promotion Laboratory. More specifically, this paper aims to (1) define the logic analysis approach and differentiate it from similar evaluative methods; (2) illustrate the application of this method by a concrete example (logic analysis of a professional development program); and (3) reflect on the requirements of each phase of logic analysis, as well as on the advantages and disadvantages of such an evaluation method. Using logic analysis to evaluate the Health Promotion Laboratory showed that, generally speaking, the program's intervention theory appeared to have been well designed. By testing and critically discussing logic analysis, this article also contributes to further improving and clarifying the method. Copyright © 2013 Elsevier Ltd. All rights reserved.
Generalized sample entropy analysis for traffic signals based on similarity measure
NASA Astrophysics Data System (ADS)
Shang, Du; Xu, Mengjia; Shang, Pengjian
2017-05-01
Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method based on similarity distance presents a different way of signals patterns match showing distinct behaviors of complexity. Simulations are conducted over synthetic data and traffic signals for providing the comparative study, which is provided to show the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first one is that it overcomes the limitation about the relationship between the dimension parameter and the length of series. The second one is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similar measure.
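For reference, a minimal sketch of conventional sample entropy, the Chebyshev-distance baseline that the modified, similarity-based method above generalizes, is given below; the parameters m and r and the test series are illustrative.

```python
# Minimal sample entropy sketch (standard Chebyshev-distance variant, not the
# paper's similarity-measure modification), for a 1-D series x.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    r *= x.std()                                  # tolerance as a fraction of SD
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        # exclude self-matches on the diagonal
        return (dists <= r).sum() - len(templates)
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
print(sample_entropy(series))
```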
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
ERIC Educational Resources Information Center
Prevost, A. Toby; Mason, Dan; Griffin, Simon; Kinmonth, Ann-Louise; Sutton, Stephen; Spiegelhalter, David
2007-01-01
Practical meta-analysis of correlation matrices generally ignores covariances (and hence correlations) between correlation estimates. The authors consider various methods for allowing for covariances, including generalized least squares, maximum marginal likelihood, and Bayesian approaches, illustrated using a 6-dimensional response in a series of…
Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials
Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana
2011-01-01
This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis, and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061
Methods for synthesizing findings on moderation effects across multiple randomized trials.
Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana
2013-04-01
This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.
Search automation of the generalized method of device operational characteristics improvement
NASA Astrophysics Data System (ADS)
Petrova, I. Yu; Puchkova, A. A.; Zaripova, V. M.
2017-01-01
The article briefly presents an analysis of existing methods for searching the closest patents, which can be applied to determine generalized methods of improving device operational characteristics. The most widespread clustering algorithms and metrics for determining the degree of proximity between two documents are reviewed. The article proposes a technique for determining generalized methods; it has two implementation variants and consists of seven steps. This technique has been implemented in the “Patents search” subsystem of the “Intellect” system. The article also gives an example of the use of the proposed technique.
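The proximity metrics referred to above are not specified in the abstract; as an illustration only, one widely used choice for document proximity is cosine similarity between TF-IDF vectors, sketched below with scikit-learn and made-up patent snippets.

```python
# Hedged sketch of one common proximity metric for patent texts: TF-IDF
# vectors compared with cosine similarity (scikit-learn); the documents and
# the clustering step are illustrative, not the subsystem described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patents = [
    "device for improving heat dissipation of a sensor housing",
    "sensor housing with improved thermal management",
    "method of encoding video streams for low latency transmission",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(patents)
similarity = cosine_similarity(tfidf)        # pairwise proximity matrix
print(similarity.round(2))                   # nearby patents score close to 1
```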
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
General Quality Control (QC) Guidelines for SAM Methods
Learn more about quality control guidelines and recommendations for the analysis of samples using the methods listed in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).
NASA Astrophysics Data System (ADS)
Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming
2018-06-01
With the development of large rotary machines for faster and more integrated performance, the condition monitoring and fault diagnosis for them are becoming more challenging. Since the time-frequency (TF) pattern of the vibration signal from the rotary machine often contains condition information and fault feature, the methods based on TF analysis have been widely-used to solve these two problems in the industrial community. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is achieved by inserting a rotation operator and a shift operator in the short-time Fourier transform. This method can produce a high-concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed based on it. The estimation for the IF of every component is accomplished by defining a spectrum concentration index (SCI). Moreover, such an IF estimation process is iteratively operated until all the components are extracted. The tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of our method.
A Comparison of Imputation Methods for Bayesian Factor Analysis Models
ERIC Educational Resources Information Center
Merkle, Edgar C.
2011-01-01
Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…
The Precision Efficacy Analysis for Regression Sample Size Method.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, L.; Burton, A.; Lu, H.X.
Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
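A toy sketch of the least-squares depth-misfit idea follows: a single effective velocity is tied to well formation tops so that the squared differences between predicted and logged depths are minimized. The one-parameter model and all numbers are illustrative, not the paper's full inversion.

```python
# Toy sketch of the depth-misfit idea: pick a single effective velocity v that
# minimizes sum_i (v * t_i / 2 - d_i)^2, where t_i are two-way times of seismic
# picks and d_i are formation depths from the well (all numbers illustrative).
import numpy as np

t = np.array([0.80, 1.10, 1.45])         # two-way traveltimes at the well (s)
d = np.array([1180.0, 1650.0, 2210.0])   # formation tops from well logs (m)

half_t = t / 2.0
v_opt = (half_t @ d) / (half_t @ half_t) # closed-form least-squares solution
print(f"optimal effective velocity: {v_opt:.0f} m/s")
print("residual depth misfit (m):", np.round(v_opt * half_t - d, 1))
```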
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, the methods share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically-motivated reconstruction algorithms, however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more-advanced methods of image acquisition and reconstruction. The proposed frame theoretic framework for single-pixel imaging results in improved noise robustness, decrease in acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single element detector can be developed to realize the full potential associated with single-pixel imaging.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1987-01-01
The validity of the modified equation stability analysis introduced by Warming and Hyett was investigated. It is shown that the procedure used in the derivation of the modified equation is flawed and generally leads to invalid results. Moreover, the interpretation of the modified equation as the exact partial differential equation solved by a finite-difference method generally cannot be justified even if spatial periodicity is assumed. For a two-level scheme, due to a series of mathematical quirks, the connection between the modified equation approach and the von Neumann method established by Warming and Hyett turns out to be correct despite its questionable original derivation. However, this connection is only partially valid for a scheme involving more than two time levels. In the von Neumann analysis, the complex error multiplication factor associated with a wave number generally has (L-1) roots for an L-level scheme. It is shown that the modified equation provides information about only one of these roots.
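As a reminder of the von Neumann procedure referred to above, the following sketch evaluates the amplification factor of a simple two-level scheme (first-order upwind for linear advection); the scheme is a textbook example, not one analyzed in the report.

```python
# Worked von Neumann example (not from the report): amplification factor of the
# two-level first-order upwind scheme for u_t + a u_x = 0 with Courant number c.
import numpy as np

def upwind_amplification(theta, c):
    """G(theta) for u_j^{n+1} = u_j^n - c (u_j^n - u_{j-1}^n)."""
    return 1.0 - c * (1.0 - np.exp(-1j * theta))

theta = np.linspace(0.0, np.pi, 181)
for c in (0.5, 1.0, 1.2):
    g = np.abs(upwind_amplification(theta, c)).max()
    verdict = "stable" if g <= 1 + 1e-12 else "unstable"
    print(f"Courant number {c}: max |G| = {g:.3f} -> {verdict}")
```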
COMPENDIUM OF SELECTED METHODS FOR SAMPLING AND ANALYSIS AT GEOTHERMAL FACILITIES
The establishment of generally accepted methods for characterizing geothermal emissions has been hampered by the independent natures of both geothermal industrial development and sampling/analysis procedures despite three workshops on the latter (Las Vegas 1975, 1977, 1980). An i...
Measuring Efficiency of Secondary Healthcare Providers in Slovenia
Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej
2017-01-01
The chief aim of this study was to analyze secondary healthcare providers' efficiency, focusing on the efficiency analysis of Slovene general hospitals. We intended to present a complete picture of technical, allocative, and cost or economic efficiency of general hospitals. Methods: We researched the aspects of efficiency with two econometric methods. First, we calculated the necessary efficiency quotients with stochastic frontier analysis (SFA), which are obtained by econometric estimation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated the necessary quotients based on the linear programming method. Results: The two chosen methods produced different conclusions. The SFA method identified Celje General Hospital as the most efficient general hospital, whereas the DEA method identified Brežice General Hospital as the most efficient. Conclusion: Our results are a useful tool that can help managers, payers, and designers of healthcare policy better understand how general hospitals operate. The participants can accordingly decide with less difficulty on any further business operations of general hospitals, having the best practices of general hospitals at their disposal. PMID:28730180
NASA Astrophysics Data System (ADS)
Bialas, A.
2004-02-01
It is shown that the method of eliminating the statistical fluctuations from event-by-event analysis proposed recently by Fu and Liu can be rewritten in a compact form involving the generalized factorial moments.
[Evaluation of PAE and AE for identifying generalized tracks using snakes in Hidalgo, Mexico].
Montiel-Canales, Gustavo; Mayer-Goyenechea, Irene Goyenechea; Fernández Badillo, Leonardo; Castillo Cerón, Jesús M
2016-12-01
One of the most important concepts in Panbiogeography is the generalized track, which represents an ancestral biota fragmented by geological events that can be recovered through several methods, including Parsimony analysis of endemicity (PAE) and endemicity analysis (EA). PAE has been frequently used to identify generalized tracks, while EA is primarily designed to find areas of endemicity, but has been recently proposed for identifying generalized tracks as well. In this study we evaluated these methods to find generalized tracks using the distribution of the 84 snake species of Hidalgo. PAE found one generalized track from three individual tracks (Agkistrodon taylori, Crotalus totonacus and Pliocercus elapoides), supported by 89 % of Bootstrap, and EA identified two generalized tracks, with endemicity index values of 2.71-2.96 and 2.84-3.09, respectively. Those areas were transformed to generalized tracks. The first generalized track was retrieved from three individual tracks (Micrurus bernadi, Rhadinaea marcellae and R. quinquelineata), and the second was recovered from two individual tracks (Geophis mutitorques and Thamnophis sumichrasti). These generalized tracks can be considered a unique distribution pattern, because they resembled each other and agreed in shape. When comparing both methods, we noted that both are useful for identifying generalized tracks, and although they can be used independently, we suggest their complementary use. Nevertheless, to obtain accurate results, it is useful to consider theoretical bases of both methods, along with an appropriate choice of the size of the area. Results using small-grid size in EA are ideal for searching biogeographical patterns within geopolitical limits. Furthermore, they can be used for conservation proposals at state level where endemic species become irreplaceable, and where losing them would imply the extinction of unique lineages.
A general numerical analysis of the superconducting quasiparticle mixer
NASA Technical Reports Server (NTRS)
Hicks, R. G.; Feldman, M. J.; Kerr, A. R.
1985-01-01
For very low noise millimeter-wave receivers, the superconductor-insulator-superconductor (SIS) quasiparticle mixer is now competitive with conventional Schottky mixers. Tucker (1979, 1980) has developed a quantum theory of mixing which has provided a basis for the rapid improvement in SIS mixer performance. The present paper is concerned with a general method of numerical analysis for SIS mixers which allows arbitrary terminating impedances for all the harmonic frequencies. This analysis provides an approach for an examination of the range of validity of the three-frequency results of the quantum mixer theory. The new method has been implemented with the aid of a Fortran computer program.
Flow assignment model for quantitative analysis of diverting bulk freight from road to railway
Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian
2017-01-01
Since railway transport possesses the advantage of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then general functions which embody the factors which the shippers will pay attention to when choosing mode and path are formulated. The general functions contain the congestion cost on road, the capacity constraints of railways and freight stations. Based on the general network and general cost function, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose transportation mode and path independently. Since the model is nonlinear and challenging, we adopt a method that uses tangent lines to constitute envelope curve to linearize it. Finally, a numerical example is presented to test the model and show the method of making quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536
Analysis of Parasite and Other Skewed Counts
Alexander, Neal
2012-01-01
Objective To review methods for the statistical analysis of parasite and other skewed count data. Methods Statistical methods for skewed count data are described and compared, with reference to those used over a ten year period of Tropical Medicine and International Health. Two parasitological datasets are used for illustration. Results Ninety papers were identified, 89 with descriptive and 60 with inferential analysis. A lack of clarity is noted in identifying measures of location, in particular the Williams and geometric mean. The different measures are compared, emphasizing the legitimacy of the arithmetic mean for skewed data. In the published papers, the t test and related methods were often used on untransformed data, which is likely to be invalid. Several approaches to inferential analysis are described, emphasizing 1) non-parametric methods, while noting that they are not simply comparisons of medians, and 2) generalized linear modelling, in particular with the negative binomial distribution. Additional methods, such as the bootstrap, with potential for greater use are described. Conclusions Clarity is recommended when describing transformations and measures of location. It is suggested that non-parametric methods and generalized linear models are likely to be sufficient for most analyses. PMID:22943299
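As an illustration of the negative binomial modelling recommended above, a hedged statsmodels sketch on simulated overdispersed counts is shown below; the dataset, group effect, and dispersion value are made up.

```python
# Hedged illustration of one recommended approach: a negative-binomial GLM for
# skewed parasite-count data, fitted with statsmodels on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=200)                  # e.g. treated vs control
mu = np.exp(1.5 + 0.8 * group)                        # group-specific mean counts
counts = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

X = sm.add_constant(group)
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5))
result = model.fit()
print(result.summary().tables[1])                     # coefficients on the log scale
```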
Systems and methods for sample analysis
Cooks, Robert Graham; Li, Guangtao; Li, Xin; Ouyang, Zheng
2015-01-13
The invention generally relates to systems and methods for sample analysis. In certain embodiments, the invention provides a system for analyzing a sample that includes a probe including a material connected to a high voltage source, a device for generating a heated gas, and a mass analyzer.
2008-09-01
…schemes in which the dependence on the state gradient is handled by introducing additional state variables are generally asymptotically dual consistent. Numerical results are presented to confirm the results of the analysis.
Comprehensive rotorcraft analysis methods
NASA Technical Reports Server (NTRS)
Stephens, Wendell B.; Austin, Edward E.
1988-01-01
The development and application of comprehensive rotorcraft analysis methods in the field of rotorcraft technology are described. These large scale analyses and the resulting computer programs are intended to treat the complex aeromechanical phenomena that describe the behavior of rotorcraft. They may be used to predict rotor aerodynamics, acoustic, performance, stability and control, handling qualities, loads and vibrations, structures, dynamics, and aeroelastic stability characteristics for a variety of applications including research, preliminary and detail design, and evaluation and treatment of field problems. The principal comprehensive methods developed or under development in recent years and generally available to the rotorcraft community because of US Army Aviation Research and Technology Activity (ARTA) sponsorship of all or part of the software systems are the Rotorcraft Flight Simulation (C81), Dynamic System Coupler (DYSCO), Coupled Rotor/Airframe Vibration Analysis Program (SIMVIB), Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD), General Rotorcraft Aeromechanical Stability Program (GRASP), and Second Generation Comprehensive Helicopter Analysis System (2GCHAS).
Generalized Full-Information Item Bifactor Analysis
Cai, Li; Yang, Ji Seung; Hansen, Mark
2011-01-01
Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of multidimensional item response theory models for an arbitrary mixing of dichotomous, ordinal, and nominal items. The extended item bifactor model also enables the estimation of latent variable means and variances when data from more than one group are present. Generalized user-defined parameter restrictions are permitted within or across groups. We derive an efficient full-information maximum marginal likelihood estimator. Our estimation method achieves substantial computational savings by extending Gibbons and Hedeker’s (1992) bifactor dimension reduction method so that the optimization of the marginal log-likelihood only requires two-dimensional integration regardless of the dimensionality of the latent variables. We use simulation studies to demonstrate the flexibility and accuracy of the proposed methods. We apply the model to study cross-country differences, including differential item functioning, using data from a large international education survey on mathematics literacy. PMID:21534682
NASA Technical Reports Server (NTRS)
Johnson, F. T.
1980-01-01
A method for solving the linear integral equations of incompressible potential flow in three dimensions is presented. Both analysis (Neumann) and design (Dirichlet) boundary conditions are treated in a unified approach to the general flow problem. The method is an influence coefficient scheme which employs source and doublet panels as boundary surfaces. Curved panels possessing singularity strengths, which vary as polynomials are used, and all influence coefficients are derived in closed form. These and other features combine to produce an efficient scheme which is not only versatile but eminently suited to the practical realities of a user-oriented environment. A wide variety of numerical results demonstrating the method is presented.
Classification of Phase Transitions by Microcanonical Inflection-Point Analysis
NASA Astrophysics Data System (ADS)
Qi, Kai; Bachmann, Michael
2018-05-01
By means of the principle of minimal sensitivity we generalize the microcanonical inflection-point analysis method by probing derivatives of the microcanonical entropy for signals of transitions in complex systems. A strategy of systematically identifying and locating independent and dependent phase transitions of any order is proposed. The power of the generalized method is demonstrated in applications to the ferromagnetic Ising model and a coarse-grained model for polymer adsorption onto a substrate. The results shed new light on the intrinsic phase structure of systems with cooperative behavior.
NASA Astrophysics Data System (ADS)
Mao, Jin-Jin; Tian, Shou-Fu; Zou, Li; Zhang, Tian-Tian
2018-05-01
In this paper, we consider a generalized Hirota equation with a bounded potential, which can be used to describe the propagation properties of optical soliton solutions. By employing the hypothetical method and the sub-equation method, we construct the bright soliton, dark soliton, complexitons and Gaussian soliton solutions of the Hirota equation. Moreover, we explicitly derive the power series solutions with their convergence analysis. Finally, we provide the graphical analysis of such soliton solutions in order to better understand their dynamical behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutmacher, R.; Crawford, R.
This comprehensive guide to the analytical capabilities of Lawrence Livermore Laboratory's General Chemistry Division describes each analytical method in terms of its principle, field of application, and qualitative and quantitative uses. Also described are the state and quantity of sample required for analysis, processing time, available instrumentation, and responsible personnel.
A Simple Demonstration of a General Rule for the Variation of Magnetic Field with Distance
ERIC Educational Resources Information Center
Kodama, K.
2009-01-01
We describe a simple experiment demonstrating the variation in magnitude of a magnetic field with distance. The method described requires only an ordinary magnetic compass and a permanent magnet. The proposed graphical analysis illustrates a unique method for deducing a general rule of magnetostatics. (Contains 1 table and 6 figures.)
ERIC Educational Resources Information Center
Murakami, Yusuke
2013-01-01
There are two types of qualitative research that analyze a small number of cases or a single case: idiographic differentiation and nomothetic/generalization. There are few case studies of generalization. This is because theoretical inclination is weak in the field of education, and the binary framework of quantitative versus qualitative research…
NASA Technical Reports Server (NTRS)
Aboudi, Jacob; Pindera, Marek-Jerzy
1992-01-01
A user's guide for the program gmc.f is presented. The program is based on the generalized method of cells model (GMC) which is capable via a micromechanical analysis, of predicting the overall, inelastic behavior of unidirectional, multi-phase composites from the knowledge of the properties of the viscoplastic constituents. In particular, the program is sufficiently general to predict the response of unidirectional composites having variable fiber shapes and arrays.
An advanced probabilistic structural analysis method for implicit performance functions
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
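For context, the mean-based second-moment baseline that the AMV method improves upon can be sketched as follows: linearize the performance function at the input means and estimate the failure probability from the resulting normal approximation. The limit state and distributions below are hypothetical, and the sketch does not implement the AMV correction itself.

```python
# Mean-value, first-order sketch (the baseline the AMV method improves upon):
# linearize the performance function g(X) at the mean of the random inputs and
# estimate P[g < 0] from the resulting normal approximation.  Illustrative only.
import numpy as np
from math import erf, sqrt

def g(x):                                     # hypothetical limit state: capacity - load
    capacity, load = x
    return capacity - 1.3 * load

mean = np.array([10.0, 5.0])
std = np.array([1.0, 0.8])

# finite-difference gradient of g at the mean
eps = 1e-6
grad = np.array([(g(mean + eps * e) - g(mean - eps * e)) / (2 * eps)
                 for e in np.eye(2)])

g_mean = g(mean)
g_std = np.sqrt(np.sum((grad * std) ** 2))    # first-order variance propagation
beta = g_mean / g_std                         # reliability index
p_fail = 0.5 * (1.0 - erf(beta / sqrt(2.0)))  # standard normal CDF at -beta
print(f"beta = {beta:.2f}, approximate P_f = {p_fail:.3e}")
```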
Development of Composite Materials with High Passive Damping Properties
2006-05-15
…frequency response function analysis. Sound transmission through sandwich panels was studied using statistical energy analysis (SEA). Modal density… sheets and the core. Finite element models are generally only efficient for problems at low and middle frequencies…
Computer Graphics-aided systems analysis: application to well completion design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detamore, J.E.; Sarma, M.P.
1985-03-01
The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The development of the method is based on integrating the concepts of "Systems Analysis" with the techniques of "Computer Graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method in solving gas well completion design problems. The use of the method will save time and improve the efficiency of such design and analysis work. The method can be extended to other design and analysis aspects of oil and gas wells.
Generalized method calculating the effective diffusion coefficient in periodic channels.
Kalinay, Pavol
2015-01-07
The method calculating the effective diffusion coefficient in an arbitrary periodic two-dimensional channel, presented in our previous paper [P. Kalinay, J. Chem. Phys. 141, 144101 (2014)], is generalized to 3D channels of cylindrical symmetry, as well as to 2D or 3D channels with particles driven by a constant longitudinal external driving force. The next possible extensions are also indicated. The former calculation was based on calculus in the complex plane, suitable for the stationary diffusion in 2D domains. The method is reformulated here using standard tools of functional analysis, enabling the generalization.
ERIC Educational Resources Information Center
Finn, Jerry; Dillon, Caroline
2007-01-01
This paper describes methods for teaching content analysis as part of the Research sequence in social work education. Teaching content analysis is used to develop research skills as well as to promote students' knowledge and critical thinking and about new information technology resources that are being increasingly used by the general public. The…
Estimating optical imaging system performance for space applications
NASA Technical Reports Server (NTRS)
Sinclair, K. F.
1972-01-01
The critical system elements of an optical imaging system are identified and a method for an initial assessment of system performance is presented. A generalized imaging system is defined. A system analysis is considered, followed by a component analysis. An example of the method is given using a film imaging system.
Twelfth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1984-01-01
NASTRAN is a large, comprehensive, nonproprietary, general purpose finite element computer code for structural analysis. The Twelfth Users' Colloquim provides some comprehensive papers on the application of finite element methods in engineering, comparisons with other approaches, unique applications, pre and post processing or auxiliary programs, and new methods of analysis with NASTRAN.
Relative contributions of three descriptive methods: implications for behavioral assessment.
Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H
2009-01-01
This study compared the outcomes of three descriptive analysis methods-the ABC method, the conditional probability method, and the conditional and background probability method-to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations.
NASA Technical Reports Server (NTRS)
Moncada, Albert M.; Chattopadhyay, Aditi; Bednarcyk, Brett A.; Arnold, Steven M.
2008-01-01
Predicting failure in a composite can be done with ply level mechanisms and/or micro level mechanisms. This paper uses the Generalized Method of Cells and High-Fidelity Generalized Method of Cells micromechanics theories, coupled with classical lamination theory, as implemented within NASA's Micromechanics Analysis Code with Generalized Method of Cells. The code is able to implement different failure theories on the level of both the fiber and the matrix constituents within a laminate. A comparison is made among maximum stress, maximum strain, Tsai-Hill, and Tsai-Wu failure theories. To verify the failure theories the Worldwide Failure Exercise (WWFE) experiments have been used. The WWFE is a comprehensive study that covers a wide range of polymer matrix composite laminates. The numerical results indicate good correlation with the experimental results for most of the composite layups, but also point to the need for more accurate resin damage progression models.
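Of the criteria compared above, the Tsai-Wu criterion is straightforward to evaluate at the lamina level; a hedged plane-stress sketch with assumed strength values (not those of any WWFE material system) follows.

```python
# Plane-stress Tsai-Wu failure index for a single lamina.  The strength values
# are illustrative, not those of any WWFE material system.
import numpy as np

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Return the Tsai-Wu index; failure is predicted when the index >= 1."""
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * np.sqrt(F11 * F22)           # common default interaction term
    return F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*t12**2 + 2*F12*s1*s2

# Stresses in the material frame (MPa) and assumed lamina strengths (MPa).
print(tsai_wu_index(s1=800.0, s2=20.0, t12=35.0,
                    Xt=1500.0, Xc=1200.0, Yt=40.0, Yc=150.0, S=70.0))
```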
NASA Astrophysics Data System (ADS)
Aoun, Bachir; Yu, Cun; Fan, Longlong; Chen, Zonghai; Amine, Khalil; Ren, Yang
2015-04-01
A generalized method is introduced to extract critical information from series of ranked correlated data. The method is generally applicable to all types of spectra evolving as a function of any arbitrary parameter. This approach is based on correlation functions and statistical scedasticity formalism. Numerous challenges in analyzing high throughput experimental data can be tackled using the herein proposed method. We applied this method to understand the reactivity pathway and formation mechanism of a Li-ion battery cathode material during high temperature synthesis using in-situ high-energy X-ray diffraction. We demonstrate that Pearson's correlation function can easily unravel all major phase transition and, more importantly, the minor structural changes which cannot be revealed by conventionally inspecting the series of diffraction patterns. Furthermore, a two-dimensional (2D) reactivity pattern calculated as the scedasticity along all measured reciprocal space of all successive diffraction pattern pairs unveils clearly the structural evolution path and the active areas of interest during the synthesis. The methods described here can be readily used for on-the-fly data analysis during various in-situ operando experiments in order to quickly evaluate and optimize experimental conditions, as well as for post data analysis and large data mining where considerable amount of data hinders the feasibility of the investigation through point-by-point inspection.
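The core of the correlation step can be illustrated with a short sketch: Pearson's r between successive (synthetic) patterns drops sharply where the structure changes. The simulated Gaussian peaks below stand in for diffraction patterns and are not the study's data.

```python
# Sketch of the correlation idea: Pearson's r between each pair of successive
# patterns highlights where the structure changes during a parametric series.
import numpy as np

rng = np.random.default_rng(2)
q = np.linspace(1.0, 6.0, 400)                       # scattering-vector axis
patterns = [np.exp(-(q - 2.0)**2 / 0.01) + 0.02 * rng.standard_normal(q.size)
            for _ in range(30)]                      # phase A
patterns += [np.exp(-(q - 2.3)**2 / 0.01) + 0.02 * rng.standard_normal(q.size)
             for _ in range(30)]                     # phase B appears at frame 30

r = np.array([np.corrcoef(patterns[i], patterns[i + 1])[0, 1]
              for i in range(len(patterns) - 1)])
print("frame of largest change:", int(np.argmin(r)))  # expected near frame 29
```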
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aoun, Bachir; Yu, Cun; Fan, Longlong
A generalized method is introduced to extract critical information from series of ranked correlated data. The method is generally applicable to all types of spectra evolving as a function of any arbitrary parameter. This approach is based on correlation functions and statistical scedasticity formalism. Numerous challenges in analyzing high throughput experimental data can be tackled using the herein proposed method. We applied this method to understand the reactivity pathway and formation mechanism of a Li-ion battery cathode material during high temperature synthesis using in-situ high-energy X-ray diffraction. We demonstrate that Pearson's correlation function can easily unravel all major phase transitions and, more importantly, the minor structural changes which cannot be revealed by conventionally inspecting the series of diffraction patterns. Furthermore, a two-dimensional (2D) reactivity pattern calculated as the scedasticity along all measured reciprocal space of all successive diffraction pattern pairs unveils clearly the structural evolution path and the active areas of interest during the synthesis. The methods described here can be readily used for on-the-fly data analysis during various in-situ operando experiments in order to quickly evaluate and optimize experimental conditions, as well as for post data analysis and large data mining where considerable amount of data hinders the feasibility of the investigation through point-by-point inspection.
Influence analysis in quantitative trait loci detection.
Dou, Xiaoling; Kuriki, Satoshi; Maeno, Akiteru; Takada, Toyoyuki; Shiroishi, Toshihiko
2014-07-01
This paper presents systematic methods for the detection of influential individuals that affect the log odds (LOD) score curve. We derive general formulas of influence functions for profile likelihoods and introduce them into two standard quantitative trait locus detection methods-the interval mapping method and single marker analysis. Besides influence analysis on specific LOD scores, we also develop influence analysis methods on the shape of the LOD score curves. A simulation-based method is proposed to assess the significance of the influence of the individuals. These methods are shown useful in the influence analysis of a real dataset of an experimental population from an F2 mouse cross. By receiver operating characteristic analysis, we confirm that the proposed methods show better performance than existing diagnostics. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, significant improvement is achieved in terms of performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
Automatic computation and solution of generalized harmonic balance equations
NASA Astrophysics Data System (ADS)
Peyton Jones, J. C.; Yaser, K. S. A.; Stevenson, J.
2018-02-01
Generalized methods are presented for generating and solving the harmonic balance equations for a broad class of nonlinear differential or difference equations and for a general set of harmonics chosen by the user. In particular, a new algorithm for automatically generating the Jacobian of the balance equations enables efficient solution of these equations using continuation methods. Efficient numeric validation techniques are also presented, and the combined algorithm is applied to the analysis of dc, fundamental, second and third harmonic response of a nonlinear automotive damper.
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.
1974-01-01
A general study of the stability of nonlinear as compared to linear control systems is presented. The analysis is general and, therefore, applies to other types of nonlinear biological control systems as well as the cardiovascular control system models. Both inherent and numerical stability are discussed for corresponding analytical and graphic methods and numerical methods.
Methods of analyzing crude oil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooks, Robert Graham; Jjunju, Fred Paul Mark; Li, Anyin
The invention generally relates to methods of analyzing crude oil. In certain embodiments, methods of the invention involve obtaining a crude oil sample, and subjecting the crude oil sample to mass spectrometry analysis. In certain embodiments, the method is performed without any sample pre-purification steps.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Waas, Anthony M.; Berdnarcyk, Brett A.; Arnold, Steven M.; Collier, Craig S.
2009-01-01
This preliminary report demonstrates the capabilities of the recently developed software implementation that links the Generalized Method of Cells to explicit finite element analysis by extending a previous development which tied the generalized method of cells to implicit finite elements. The multiscale framework, which uses explicit finite elements at the global-scale and the generalized method of cells at the microscale is detailed. This implementation is suitable for both dynamic mechanics problems and static problems exhibiting drastic and sudden changes in material properties, which often encounter convergence issues with commercial implicit solvers. Progressive failure analysis of stiffened and un-stiffened fiber-reinforced laminates subjected to normal blast pressure loads was performed and is used to demonstrate the capabilities of this framework. The focus of this report is to document the development of the software implementation; thus, no comparison between the results of the models and experimental data is drawn. However, the validity of the results are assessed qualitatively through the observation of failure paths, stress contours, and the distribution of system energies.
Dangers in Using Analysis of Covariance Procedures.
ERIC Educational Resources Information Center
Campbell, Kathleen T.
Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After the convergence was achieved the results of each method were compared. In particular, a discussion on peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn diagram based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method is proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended-FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
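Of the three methods compared above, the SRC step is the simplest to illustrate; the sketch below computes standardized regression coefficients from a Monte Carlo sample of a toy linear model (the stormwater model itself is not reproduced). A sum of squared SRCs near one indicates that the linear approximation, and hence the method, is applicable.

```python
# Hedged sketch of the SRC step only: standardized regression coefficients from
# a Monte Carlo sample of a toy model (the drainage model itself is not shown).
import numpy as np

rng = np.random.default_rng(3)
n = 2000
X = rng.uniform(0.0, 1.0, size=(n, 3))                # three model factors
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(n)  # toy output

Xs = (X - X.mean(axis=0)) / X.std(axis=0)             # standardize factors
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)         # standardized coefficients
print("SRC:", np.round(src, 3), " sum of squares:", round(float(src @ src), 3))
```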
General Framework for Meta-analysis of Rare Variants in Sequencing Association Studies
Lee, Seunggeun; Teslovich, Tanya M.; Boehnke, Michael; Lin, Xihong
2013-01-01
We propose a general statistical framework for meta-analysis of gene- or region-based multimarker rare variant association tests in sequencing association studies. In genome-wide association studies, single-marker meta-analysis has been widely used to increase statistical power by combining results via regression coefficients and standard errors from different studies. In analysis of rare variants in sequencing studies, region-based multimarker tests are often used to increase power. We propose meta-analysis methods for commonly used gene- or region-based rare variants tests, such as burden tests and variance component tests. Because estimation of regression coefficients of individual rare variants is often unstable or not feasible, the proposed method avoids this difficulty by calculating score statistics instead that only require fitting the null model for each study and then aggregating these score statistics across studies. Our proposed meta-analysis rare variant association tests are conducted based on study-specific summary statistics, specifically score statistics for each variant and between-variant covariance-type (linkage disequilibrium) relationship statistics for each gene or region. The proposed methods are able to incorporate different levels of heterogeneity of genetic effects across studies and are applicable to meta-analysis of multiple ancestry groups. We show that the proposed methods are essentially as powerful as joint analysis by directly pooling individual level genotype data. We conduct extensive simulations to evaluate the performance of our methods by varying levels of heterogeneity across studies, and we apply the proposed methods to meta-analysis of rare variant effects in a multicohort study of the genetics of blood lipid levels. PMID:23768515
Johnson, Quentin R; Lindsay, Richard J; Shen, Tongye
2018-02-21
A computational method which extracts the dominant motions from an ensemble of biomolecular conformations via a correlation analysis of residue-residue contacts is presented. The algorithm first renders the structural information into contact matrices, then constructs the collective modes based on the correlated dynamics of a selected set of dynamic contacts. Associated programs can bridge the results for further visualization using graphics software. The aim of this method is to provide an analysis of conformations of biopolymers from the contact viewpoint. It may assist a systematical uncovering of conformational switching mechanisms existing in proteins and biopolymer systems in general by statistical analysis of simulation snapshots. In contrast to conventional correlation analyses of Cartesian coordinates (such as distance covariance analysis and Cartesian principal component analysis), this program also provides an alternative way to locate essential collective motions in general. Herein, we detail the algorithm in a stepwise manner and comment on the importance of the method as applied to decoding allosteric mechanisms. © 2018 Wiley Periodicals, Inc. © 2018 Wiley Periodicals, Inc.
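A minimal sketch of the general idea, not the authors' program: flatten per-frame contact maps, form the contact-contact covariance, and take its leading eigenvector as the dominant collective mode. The synthetic trajectory below simply switches one block of contacts halfway through.

```python
# Minimal sketch of the idea (not the authors' code): flatten per-frame contact
# maps, then extract dominant collective modes by PCA of the contact covariance.
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_contacts = 200, 50
# Synthetic trajectory of binary contacts with one switching group of contacts.
base = rng.random((n_frames, n_contacts)) < 0.2
switch = (np.arange(n_frames) > 100)[:, None] & (np.arange(n_contacts) < 10)[None, :]
contacts = (base | switch).astype(float)

centered = contacts - contacts.mean(axis=0)
cov = np.cov(centered, rowvar=False)                  # contact-contact covariance
eigval, eigvec = np.linalg.eigh(cov)
mode = eigvec[:, -1]                                  # dominant collective mode
projection = centered @ mode                          # per-frame mode amplitude
print("top-mode variance fraction:", round(float(eigval[-1] / eigval.sum()), 2))
print("mean amplitude before/after the switch:",
      round(float(projection[:100].mean()), 2), round(float(projection[100:].mean()), 2))
```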
Functional Techniques for Data Analysis
NASA Technical Reports Server (NTRS)
Tomlinson, John R.
1997-01-01
This dissertation develops a new general method of solving Prony's problem. Two special cases of this new method have been developed previously: the Matrix Pencil and Osculatory Interpolation. The dissertation shows that they are instances of a more general solution type which allows a wide-ranging class of linear functionals to be used in the solution of the problem. This class provides a continuum of functionals which give rise to new methods that can be used to solve Prony's problem.
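For orientation, the classical (polynomial) formulation of Prony's problem that the special cases above extend can be sketched in a few lines: a linear-prediction fit gives the poles, and a second least-squares fit gives the amplitudes. The test signal is illustrative.

```python
# Classical (polynomial) Prony sketch for x_n = sum_k A_k z_k**n -- a baseline,
# not the dissertation's general functional framework.
import numpy as np

def prony(x, p):
    """Fit x[n] ~ sum_k A_k * z_k**n using the classical two-step Prony method."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # 1) linear-prediction coefficients: x[n] = -(a1 x[n-1] + ... + ap x[n-p])
    M = np.column_stack([x[p - m:N - m] for m in range(1, p + 1)])
    a = np.linalg.lstsq(M, -x[p:], rcond=None)[0]
    # 2) poles are the roots of the prediction polynomial
    z = np.roots(np.concatenate(([1.0 + 0j], a)))
    # 3) amplitudes from a linear least-squares fit of the model to the data
    V = np.vander(z, N, increasing=True).T           # V[n, k] = z_k**n
    A = np.linalg.lstsq(V, x, rcond=None)[0]
    return z, A

n = np.arange(60)
signal = 2.0 * 0.95**n + 1.0 * (0.8 * np.exp(1j * 0.3))**n
z_est, A_est = prony(signal, p=2)
print(np.round(z_est, 3), np.round(A_est, 3))
```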
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
Story units are extracted by general tempo analysis, including the tempos of audio and visual information, in this research. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video such as sports or news, we can explore models with the specific application domain knowledge. For movie contents, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
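The regularization idea can be illustrated generically with a Tikhonov (smoothness-penalized) fit: among the infinitely many fields matching sparse, noisy samples, pick the smoothest. The 1-D toy problem below is not a tidal inversion, and the penalty weight is arbitrary.

```python
# Generic Tikhonov-regularization sketch of the "a priori assumption" idea:
# among the infinitely many fields fitting sparse, noisy data, pick the
# smoothest one.  Toy 1-D example, not an actual tidal inversion.
import numpy as np

rng = np.random.default_rng(5)
n = 100
x_true = np.sin(np.linspace(0, 2 * np.pi, n))          # "true" field
obs_idx = rng.choice(n, size=12, replace=False)        # sparse observation points
H = np.eye(n)[obs_idx]                                 # sampling (observation) operator
y = H @ x_true + 0.05 * rng.standard_normal(len(obs_idx))

# first-difference operator penalizing rough solutions
D = np.diff(np.eye(n), axis=0)
alpha = 0.5
x_hat = np.linalg.solve(H.T @ H + alpha * D.T @ D, H.T @ y)
rms = float(np.sqrt(np.mean((x_hat - x_true) ** 2)))
print("rms error of regularized estimate:", round(rms, 3))
```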
Integrated Processing in Planning and Understanding.
1986-12-01
to language analysis seemed necessary. The second observation was the rather commonsense one that it is easier to understand a foreign language ...syntactic analysis Probably the most widely employed method for natural language analysis is augmea ted transition network parsing, or ATNs (Thorne, Bratley...accomplished. It is for this reason that the programming language Prolog, which implements that general method , has proven so well-stilted to writing ATN
Forestry sector analysis for developing countries: issues and methods.
R.W. Haynes
1993-01-01
A satellite meeting of the 10th Forestry World Congress focused on the methods used for forest sector analysis and their applications in both developed and developing countries. The results of that meeting are summarized, and a general approach for forest sector modeling is proposed. The approach includes models derived from the existing...
Zhang, Lei; Zeng, Zhi; Ji, Qiang
2011-09-01
Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis is very limited due to lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to CG model with more general topology and the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.
Beta-function B-spline smoothing on triangulations
NASA Astrophysics Data System (ADS)
Dechevsky, Lubomir T.; Zanaty, Peter
2013-03-01
In this work we investigate a novel family of Ck-smooth rational basis functions on triangulations for fitting, smoothing, and denoising geometric data. The introduced basis function is closely related to a recently introduced general method utilizing generalized expo-rational B-splines, which provides Ck-smooth convex resolutions of unity on very general disjoint partitions and overlapping covers of multidimensional domains with complex geometry. One of the major advantages of this new triangular construction is its locality with respect to the star-1 neighborhood of the vertex on which the said basis provides Hermite interpolation. This locality of the basis functions can in turn be utilized in adaptive methods, where, for instance, a local refinement of the underlying triangular mesh affects only the refined domain, whereas in other methods one needs to investigate what changes occur outside of the refined domain. Both the triangular and the general smooth constructions have the potential to become a new versatile tool of Computer Aided Geometric Design (CAGD), Finite and Boundary Element Analysis (FEA/BEA) and Iso-geometric Analysis (IGA).
Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia
Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.
2009-01-01
Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations with the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered the one with the smallest mean square error (MSE). MSE is defined as the following quantity summed and divided by the number of peaks: the square of the difference of an individual logarithm (base 10) of peak flow less the mean of all individual logarithms of peak flow. Contouring of station skews was the best method for determining generalized skew for West Virginia, with a MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
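Written out, the mean square error defined verbally above is simply the mean squared deviation of the individual log-transformed peaks from their mean (this is a transcription of the sentence above, not a formula quoted from Bulletin 17B):

MSE = \frac{1}{N}\sum_{i=1}^{N}\left(\log_{10} Q_i - \overline{\log_{10} Q}\right)^{2}, \qquad \overline{\log_{10} Q} = \frac{1}{N}\sum_{i=1}^{N}\log_{10} Q_i ,

where Q_i denotes an individual annual peak flow and N is the number of peaks.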
ERIC Educational Resources Information Center
Tisdell, C. C.
2017-01-01
Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740) and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem…
ERIC Educational Resources Information Center
Wang, Chia-Yu
2015-01-01
The purpose of this study was to use multiple assessments to investigate the general versus task-specific characteristics of metacognition in dissimilar chemistry topics. This mixed-method approach investigated the nature of undergraduate general chemistry students' metacognition using four assessments: a self-report questionnaire, assessment of…
Gradient optimization and nonlinear control
NASA Technical Reports Server (NTRS)
Hasdorff, L.
1976-01-01
The book represents an introduction to computation in control by an iterative, gradient, numerical method, where linearity is not assumed. The general language and approach used are those of elementary functional analysis. The particular gradient method that is emphasized and used is conjugate gradient descent, a well known method exhibiting quadratic convergence while requiring very little more computation than simple steepest descent. Constraints are not dealt with directly, but rather the approach is to introduce them as penalty terms in the criterion. General conjugate gradient descent methods are developed and applied to problems in control.
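As a rough illustration of the approach described above (gradient iteration with constraints folded into the criterion as penalty terms), the sketch below minimizes a penalized cost with Fletcher-Reeves nonlinear conjugate gradient descent. The cost function, penalty weight and line-search choice are hypothetical and are not taken from the book.

import numpy as np
from scipy.optimize import minimize_scalar

def penalized_cost(u, mu=100.0):
    # Hypothetical criterion: quadratic control cost plus a penalty term
    # enforcing the (hypothetical) constraint u[0] + u[1] = 1.
    return 0.5 * u @ u + mu * (u[0] + u[1] - 1.0) ** 2

def grad(f, u, h=1e-6):
    # Central-difference gradient, to keep the sketch self-contained.
    g = np.zeros_like(u)
    for i in range(u.size):
        e = np.zeros_like(u); e[i] = h
        g[i] = (f(u + e) - f(u - e)) / (2 * h)
    return g

def conjugate_gradient_descent(f, u0, iters=50):
    u = u0.copy()
    g = grad(f, u)
    d = -g                                   # initial direction: steepest descent
    for _ in range(iters):
        alpha = minimize_scalar(lambda a: f(u + a * d)).x   # 1-D line search along d
        u = u + alpha * d
        g_new = grad(f, u)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return u

print(conjugate_gradient_descent(penalized_cost, np.array([3.0, -2.0, 0.5])))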
MAGMA: Generalized Gene-Set Analysis of GWAS Data
de Leeuw, Christiaan A.; Mooij, Joris M.; Heskes, Tom; Posthuma, Danielle
2015-01-01
By aggregating data for complex traits in a biologically meaningful way, gene and gene-set analysis constitute a valuable addition to single-marker analysis. However, although various methods for gene and gene-set analysis currently exist, they generally suffer from a number of issues. Statistical power for most methods is strongly affected by linkage disequilibrium between markers, multi-marker associations are often hard to detect, and the reliance on permutation to compute p-values tends to make the analysis computationally very expensive. To address these issues we have developed MAGMA, a novel tool for gene and gene-set analysis. The gene analysis is based on a multiple regression model, to provide better statistical performance. The gene-set analysis is built as a separate layer around the gene analysis for additional flexibility. This gene-set analysis also uses a regression structure to allow generalization to analysis of continuous properties of genes and simultaneous analysis of multiple gene sets and other gene properties. Simulations and an analysis of Crohn’s Disease data are used to evaluate the performance of MAGMA and to compare it to a number of other gene and gene-set analysis tools. The results show that MAGMA has significantly more power than other tools for both the gene and the gene-set analysis, identifying more genes and gene sets associated with Crohn’s Disease while maintaining a correct type 1 error rate. Moreover, the MAGMA analysis of the Crohn’s Disease data was found to be considerably faster as well. PMID:25885710
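The gene-set layer described above relies on a regression structure rather than permutation. A minimal sketch of that idea (not the actual MAGMA implementation; the gene-level Z-scores, gene-set membership and the gene-size covariate below are all made up) regresses gene-level association statistics on gene-set membership plus a gene-level covariate and tests for a positive membership coefficient:

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n_genes = 5000
z = rng.standard_normal(n_genes)              # hypothetical gene-level Z-scores
in_set = rng.random(n_genes) < 0.02           # hypothetical gene-set membership
gene_size = rng.integers(1, 200, n_genes)     # hypothetical confounder (e.g. SNPs per gene)
z[in_set] += 0.3                              # simulate a truly enriched gene set

X = sm.add_constant(np.column_stack([in_set.astype(float), np.log(gene_size)]))
fit = sm.OLS(z, X).fit()
beta, se = fit.params[1], fit.bse[1]
p_one_sided = stats.norm.sf(beta / se)        # one-sided test of enrichment
print(f"beta={beta:.3f}, one-sided p={p_one_sided:.2e}")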
Regional analysis of annual maximum rainfall using TL-moments method
NASA Astrophysics Data System (ADS)
Shabri, Ani Bin; Daud, Zalina Mohd; Ariff, Noratiqah Mohd
2011-06-01
Information related to distributions of rainfall amounts are of great importance for designs of water-related structures. One of the concerns of hydrologists and engineers is the probability distribution for modeling of regional data. In this study, a novel approach to regional frequency analysis using L-moments is revisited. Subsequently, an alternative regional frequency analysis using the TL-moments method is employed. The results from both methods were then compared. The analysis was based on daily annual maximum rainfall data from 40 stations in Selangor Malaysia. TL-moments for the generalized extreme value (GEV) and generalized logistic (GLO) distributions were derived and used to develop the regional frequency analysis procedure. TL-moment ratio diagram and Z-test were employed in determining the best-fit distribution. Comparison between the two approaches showed that the L-moments and TL-moments produced equivalent results. GLO and GEV distributions were identified as the most suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation was used for performance evaluation, and it showed that the method of TL-moments was more efficient for lower quantile estimation compared with the L-moments.
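For readers unfamiliar with the moment definitions used above, the sketch below computes ordinary sample L-moments from probability-weighted moments; TL-moments differ in that the smallest and largest observations are trimmed before the analogous combinations are formed. The data here are synthetic, not the Selangor rainfall series.

import numpy as np

def sample_l_moments(x):
    """First four sample L-moments via probability-weighted moments b0..b3."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skewness, L-kurtosis

rng = np.random.default_rng(1)
annual_max = rng.gumbel(loc=80.0, scale=25.0, size=40)   # hypothetical annual maxima (mm)
print(sample_l_moments(annual_max))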
ERIC Educational Resources Information Center
Aguado, Jaume; Campbell, Alistair; Ascaso, Carlos; Navarro, Purificacion; Garcia-Esteve, Lluisa; Luciano, Juan V.
2012-01-01
In this study, the authors tested alternative factor models of the 12-item General Health Questionnaire (GHQ-12) in a sample of Spanish postpartum women, using confirmatory factor analysis. The authors report the results of modeling three different methods for scoring the GHQ-12 using estimation methods recommended for categorical and binary data.…
2010-01-01
Background Cluster analysis, and in particular hierarchical clustering, is widely used to extract information from gene expression data. The aim is to discover new classes, or sub-classes, of either individuals or genes. Performing a cluster analysis commonly involves decisions on how to handle missing values, standardize the data and select genes. In addition, pre-processing, involving various types of filtration and normalization procedures, can have an effect on the ability to discover biologically relevant classes. Here we consider cluster analysis in a broad sense and perform a comprehensive evaluation that covers several aspects of cluster analyses, including normalization. Results We evaluated 2780 cluster analysis methods on seven publicly available 2-channel microarray data sets with common reference designs. Each cluster analysis method differed in data normalization (5 normalizations were considered), missing value imputation (2), standardization of data (2), gene selection (19) or clustering method (11). The cluster analyses are evaluated using known classes, such as cancer types, and the adjusted Rand index. The performances of the different analyses vary between the data sets and it is difficult to give general recommendations. However, normalization, gene selection and clustering method are all variables that have a significant impact on the performance. In particular, gene selection is important and it is generally necessary to include a relatively large number of genes in order to get good performance. Selecting genes with high standard deviation or using principal component analysis are shown to be the preferred gene selection methods. Hierarchical clustering using Ward's method, k-means clustering and Mclust are the clustering methods considered in this paper that achieve the highest adjusted Rand index. Normalization can have a significant positive impact on the ability to cluster individuals, and there are indications that background correction is preferable, in particular if the gene selection is successful. However, this is an area that needs to be studied further in order to draw any general conclusions. Conclusions The choice of cluster analysis, and in particular gene selection, has a large impact on the ability to cluster individuals correctly based on expression profiles. Normalization has a positive effect, but the relative performance of different normalizations is an area that needs more research. In summary, although clustering, gene selection and normalization are considered standard methods in bioinformatics, our comprehensive analysis shows that selecting the right methods, and the right combinations of methods, is far from trivial and that much is still unexplored in what is considered to be the most basic analysis of genomic data. PMID:20937082
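A minimal sketch of the evaluation loop described above: cluster the samples with several methods and score each partition against the known classes with the adjusted Rand index. The data are synthetic; the full study additionally varied normalization, imputation, standardization and many more gene-selection and clustering choices.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

# Synthetic "expression" matrix: samples x genes, with 3 known classes.
X, y_true = make_blobs(n_samples=60, n_features=500, centers=3, random_state=0)

# Gene selection by highest standard deviation, then PCA, as in the preferred pipelines.
top = np.argsort(X.std(axis=0))[-100:]
X_sel = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(X[:, top]))

methods = {
    "ward": AgglomerativeClustering(n_clusters=3, linkage="ward"),
    "kmeans": KMeans(n_clusters=3, n_init=10, random_state=0),
}
for name, model in methods.items():
    labels = model.fit_predict(X_sel)
    print(name, round(adjusted_rand_score(y_true, labels), 3))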
Survey of methods for calculating sensitivity of general eigenproblems
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Haftka, Raphael T.
1987-01-01
A survey of methods for sensitivity analysis of the algebraic eigenvalue problem for non-Hermitian matrices is presented. In addition, a modification of one method based on a better normalizing condition is proposed. Methods are classified as Direct or Adjoint and are evaluated for efficiency. Operation counts are presented in terms of matrix size, number of design variables and number of eigenvalues and eigenvectors of interest. The effect of the sparsity of the matrix and its derivatives is also considered, and typical solution times are given. General guidelines are established for the selection of the most efficient method.
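A small numerical illustration of the adjoint-type sensitivity formula for a simple (non-defective) eigenvalue of a non-Hermitian matrix, dlambda/dp = y^H (dA/dp) x / (y^H x), checked against a finite difference. The matrix and its dependence on the design variable p are invented for the example.

import numpy as np
from scipy.linalg import eig

def A(p):
    # Hypothetical non-Hermitian matrix depending on a design variable p.
    return np.array([[2.0 + p, 1.0, 0.0],
                     [0.5,     3.0, p  ],
                     [0.0,     1.0, 4.0]])

p0, dp = 0.3, 1e-6
w, vl, vr = eig(A(p0), left=True, right=True)
i = np.argmax(w.real)                       # track the eigenvalue of interest
dA = (A(p0 + dp) - A(p0 - dp)) / (2 * dp)   # dA/dp (constant here, in fact)
y, x = vl[:, i], vr[:, i]
dlam_adjoint = (y.conj() @ dA @ x) / (y.conj() @ x)

w_plus = eig(A(p0 + dp), right=False)
w_minus = eig(A(p0 - dp), right=False)
dlam_fd = (np.max(w_plus.real) - np.max(w_minus.real)) / (2 * dp)
print(dlam_adjoint.real, dlam_fd)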
Computer-Aided Design of Low-Noise Microwave Circuits
NASA Astrophysics Data System (ADS)
Wedge, Scott William
1991-02-01
Devoid of most natural and manmade noise, microwave frequencies have detection sensitivities limited by internally generated receiver noise. Low-noise amplifiers are therefore critical components in radio astronomical antennas, communications links, radar systems, and even home satellite dishes. A general technique to accurately predict the noise performance of microwave circuits has been lacking. Current noise analysis methods have been limited to specific circuit topologies or neglect correlation, a strong effect in microwave devices. Presented here are generalized methods, developed for computer-aided design implementation, for the analysis of linear noisy microwave circuits comprised of arbitrarily interconnected components. Included are descriptions of efficient algorithms for the simultaneous analysis of noisy and deterministic circuit parameters based on a wave variable approach. The methods are therefore particularly suited to microwave and millimeter-wave circuits. Noise contributions from lossy passive components and active components with electronic noise are considered. Also presented is a new technique for the measurement of device noise characteristics that offers several advantages over current measurement methods.
NASA Technical Reports Server (NTRS)
Eggleston, John M; Mathews, Charles W
1954-01-01
In the process of analyzing the longitudinal frequency-response characteristics of aircraft, information on some of the methods of analysis has been obtained by the Langley Aeronautical Laboratory of the National Advisory Committee for Aeronautics. In the investigation of these methods, the practical applications and limitations were stressed. In general, the methods considered may be classed as: (1) analysis of sinusoidal response, (2) analysis of transient response as to harmonic content through determination of the Fourier integral by manual or machine methods, and (3) analysis of the transient through the use of least-squares solutions of the coefficients of an assumed equation for either the transient time response or frequency response (sometimes referred to as curve-fitting methods). (author)
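Method (2) above, extracting the frequency response from a transient via the Fourier integral, is easy to reproduce numerically: the frequency response is estimated as the ratio of the Fourier transforms of the output and input transients. A sketch on a synthetic second-order system (hypothetical dynamics and pulse input, not NACA data):

import numpy as np
from scipy.signal import lsim, TransferFunction

# Hypothetical short-period-like second-order dynamics.
sys = TransferFunction([4.0], [1.0, 1.2, 4.0])

t = np.linspace(0.0, 30.0, 3000)
u = np.exp(-((t - 1.0) / 0.2) ** 2)          # pulse-type control input (transient)
_, y, _ = lsim(sys, U=u, T=t)

dt = t[1] - t[0]
freqs = np.fft.rfftfreq(t.size, dt)
H = np.fft.rfft(y) / np.fft.rfft(u)          # estimated frequency response
print(freqs[:5], np.abs(H[:5]))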
Methods for Estimating Payload/Vehicle Design Loads
NASA Technical Reports Server (NTRS)
Chen, J. C.; Garba, J. A.; Salama, M. A.; Trubert, M. R.
1983-01-01
Several methods compared with respect to accuracy, design conservatism, and cost. Objective of survey: reduce time and expense of load calculation by selecting approximate method having sufficient accuracy for problem at hand. Methods generally applicable to dynamic load analysis in other aerospace and other vehicle/payload systems.
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray studies due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis could make it a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health.
Sugihara, Masahiro
2010-01-01
In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare the group differences of IPTW Kaplan-Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. As causal treatment effects, the hazard ratio can be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance to pravastatin treatment is important for the prevention of CHD. (c) 2009 John Wiley & Sons, Ltd.
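A rough sketch of the weighting idea for multi-valued treatments: estimate generalized propensity scores with a multinomial logistic regression, form inverse probability of treatment weights, and compute a weighted Kaplan-Meier curve per treatment level. The data are simulated, and the exact IPTW log-rank statistic derived in the paper is omitted here.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
x = rng.standard_normal((n, 2))                       # covariates
treat = rng.integers(0, 3, n)                         # hypothetical 3-level treatment
time = rng.exponential(5.0 / (1 + 0.3 * treat), n)    # survival times
event = rng.random(n) < 0.8                           # True = event, False = censored

# Generalized propensity score: P(treatment = k | x) from multinomial logistic regression.
ps = LogisticRegression(max_iter=1000).fit(x, treat).predict_proba(x)
w = 1.0 / ps[np.arange(n), treat]                     # IPTW weights

def weighted_km(t, d, w):
    order = np.argsort(t)
    t, d, w = t[order], d[order], w[order]
    at_risk = np.cumsum(w[::-1])[::-1]                # weighted number at risk
    surv, s = [], 1.0
    for ti in np.unique(t[d]):
        mask = (t == ti) & d
        s *= 1.0 - w[mask].sum() / at_risk[np.searchsorted(t, ti)]
        surv.append((ti, s))
    return surv

for k in range(3):
    m = treat == k
    print(k, weighted_km(time[m], event[m], w[m])[:3])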
Detecting spatio-temporal modes in multivariate data by entropy field decomposition
NASA Astrophysics Data System (ADS)
Frank, Lawrence R.; Galinsky, Vitaly L.
2016-09-01
A new data analysis method that addresses a general problem of detecting spatio-temporal variations in multivariate data is presented. The method utilizes two recent and complementary general approaches to data analysis, information field theory (IFT) and entropy spectrum pathways (ESPs). Both methods reformulate and incorporate Bayesian theory, thus using prior information to uncover the underlying structure of the unknown signal. Unification of ESP and IFT creates an approach that is non-Gaussian and nonlinear by construction and is found to produce unique spatio-temporal modes of signal behavior that can be ranked according to their significance, from which space-time trajectories of parameter variations can be constructed and quantified. Two brief examples of real world applications of the theory to the analysis of data of completely different, unrelated nature, lacking any underlying similarity, are also presented. The first example provides an analysis of resting state functional magnetic resonance imaging data that allowed us to create an efficient and accurate computational method for assessing and categorizing brain activity. The second example demonstrates the potential of the method in the application to the analysis of a strong atmospheric storm circulation system during the complicated stage of tornado development and formation using data recorded by a mobile Doppler radar. Reference implementation of the method will be made available as a part of the QUEST toolkit that is currently under development at the Center for Scientific Computation in Imaging.
Ellipsoidal analysis of coordination polyhedra
Cumby, James; Attfield, J. Paul
2017-01-01
The idea of the coordination polyhedron is essential to understanding chemical structure. Simple polyhedra in crystalline compounds are often deformed due to structural complexity or electronic instabilities so distortion analysis methods are useful. Here we demonstrate that analysis of the minimum bounding ellipsoid of a coordination polyhedron provides a general method for studying distortion, yielding parameters that are sensitive to various orders in metal oxide examples. Ellipsoidal analysis leads to discovery of a general switching of polyhedral distortions at symmetry-disallowed transitions in perovskites that may evidence underlying coordination bistability, and reveals a weak off-centre ‘d5 effect' for Fe3+ ions that could be exploited in multiferroics. Separating electronic distortions from intrinsic deformations within the low temperature superstructure of magnetite provides new insights into the charge and trimeron orders. Ellipsoidal analysis can be useful for exploring local structure in many materials such as coordination complexes and frameworks, organometallics and organic molecules. PMID:28146146
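A minimal sketch of computing the minimum bounding (volume) ellipsoid of a coordination polyhedron from its ligand coordinates, using the standard Khachiyan iteration; this is a generic MVEE routine, not the authors' code, and the FeO6-like geometry below is invented. The sorted semi-axis lengths then serve as distortion parameters.

import numpy as np

def min_bounding_ellipsoid(points, tol=1e-7):
    """Khachiyan algorithm: the ellipsoid (x-c)^T A (x-c) <= 1 encloses all points."""
    P = np.asarray(points, float)
    n, d = P.shape
    Q = np.column_stack([P, np.ones(n)]).T        # (d+1, n) lifted points
    u = np.full(n, 1.0 / n)
    while True:
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum('ij,ji->i', Q.T, np.linalg.solve(X, Q))
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        u_new = (1.0 - step) * u
        u_new[j] += step
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    c = P.T @ u
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    axes = 1.0 / np.sqrt(np.linalg.eigvalsh(A))   # semi-axis lengths
    return c, np.sort(axes)

# Hypothetical slightly distorted octahedral coordination (Angstrom).
lig = np.array([[2.05, 0, 0], [-1.95, 0, 0], [0, 2.1, 0],
                [0, -2.0, 0], [0, 0, 2.0], [0, 0, -2.02]])
print(min_bounding_ellipsoid(lig))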
NASA Technical Reports Server (NTRS)
Trubert, M.; Salama, M.
1979-01-01
Unlike an earlier shock spectra approach, the generalization permits an accurate treatment of the elastic interaction between the spacecraft and launch vehicle to obtain accurate bounds on the spacecraft response and structural loads. In addition, the modal response from a previous launch vehicle transient analysis - with or without a dummy spacecraft - is exploited to define a modal impulse as a simple idealization of the actual forcing function. The idealized modal forcing function is then used to derive explicit expressions for an estimate of the bound on the spacecraft structural response and forces. Greater accuracy is achieved with the present method over the earlier shock spectra, while saving much computational effort over the transient analysis.
Data handling and analysis for the 1971 corn blight watch experiment.
NASA Technical Reports Server (NTRS)
Anuta, P. E.; Phillips, T. L.; Landgrebe, D. A.
1972-01-01
Review of the data handling and analysis methods used in the near-operational test of remote sensing systems provided by the 1971 corn blight watch experiment. The general data analysis techniques and, particularly, the statistical multispectral pattern recognition methods for automatic computer analysis of aircraft scanner data are described. Some of the results obtained are examined, and the implications of the experiment for future data communication requirements of earth resource survey systems are discussed.
Seismic Hazard Analysis — Quo vadis?
NASA Astrophysics Data System (ADS)
Klügel, Jens-Uwe
2008-05-01
The paper is dedicated to the review of methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications such as the design of critical and general (non-critical) civil infrastructures, technical and financial risk analysis. A set of criteria is developed for and applied to an objective assessment of the capabilities of different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies, thus limiting their practical applications. These deficiencies have their roots in the use of inadequate probabilistic models and insufficient understanding of modern concepts of risk analysis, as has been revealed in some recent large scale studies. These deficiencies result in an inability to correctly treat dependencies between physical parameters and, finally, in an incorrect treatment of uncertainties. As a consequence, results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by a systematic use of expert elicitation has, so far, not resulted in any improvement of the situation. It is also shown that scenario-earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of technical as well as financial risks associated with potential earthquake damage requires a risk analysis, current methods are based on a probabilistic approach with its unsolved deficiencies. Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and in general robust design basis for applications such as the design of critical infrastructures, especially with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in the safety factors. These factors are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis. They are related to seismic sources discovered by geological, geomorphologic, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better informed risk model that is suitable for risk-informed decision making.
Simplified half-life methods for the analysis of kinetic data
NASA Technical Reports Server (NTRS)
Eberhart, J. G.; Levin, E.
1988-01-01
The analysis of reaction rate data has as its goal the determination of the order and rate constant which characterize the data. Chemical reactions with one reactant are considered, and simplified methods for accomplishing this goal are presented. The approaches presented involve the use of half-lives or other fractional lives. These methods are particularly useful for the more elementary discussions of kinetics found in general and physical chemistry courses.
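For a single-reactant reaction of order n, the standard half-life relations show how the order follows directly from the scaling of the half-life with initial concentration (included here for reference; these are textbook kinetics results, not formulas quoted from the report):

t_{1/2} = \frac{\ln 2}{k} \quad (n = 1), \qquad
t_{1/2} = \frac{2^{\,n-1} - 1}{(n-1)\,k\,[A]_0^{\,n-1}} \quad (n \neq 1),

so that \log t_{1/2} = \mathrm{const} - (n-1)\log [A]_0, and the order is obtained from the slope of a plot of \log t_{1/2} against \log [A]_0 as n = 1 - \text{slope}.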
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bass, B.R.; Bryan, R.H.; Bryson, J.W.
This paper summarizes the capabilities and applications of the general-purpose and special-purpose computer programs that have been developed for use in fracture mechanics analyses of HSST pressure vessel experiments. Emphasis is placed on the OCA/USA code, which is designed for analysis of pressurized-thermal-shock (PTS) conditions, and on the ORMGEN/ADINA/ORVIRT system which is used for more general analysis. Fundamental features of these programs are discussed, along with applications to pressure vessel experiments.
Micromechanical analysis of thermo-inelastic multiphase short-fiber composites
NASA Technical Reports Server (NTRS)
Aboudi, Jacob
1994-01-01
A micromechanical formulation is presented for the prediction of the overall thermo-inelastic behavior of multiphase composites which consist of short fibers. The analysis is an extension of the generalized method of cells that was previously derived for inelastic composites with continuous fibers, and the reliability of which was critically examined in several situations. The resulting three dimensional formulation is extremely general, wherein the analysis of thermo-inelastic composites with continuous fibers as well as particulate and porous inelastic materials are merely special cases.
Prediction and analysis of beta-turns in proteins by support vector machine.
Pham, Tho Hoan; Satou, Kenji; Ho, Tu Bao
2003-01-01
The tight turn has long been recognized as one of the three important features of proteins, after the alpha-helix and beta-sheet. Tight turns play an important role in globular proteins from both the structural and functional points of view. More than 90% of tight turns are beta-turns. Analysis and prediction of beta-turns in particular and tight turns in general are very useful for the design of new molecules such as drugs, pesticides, and antigens. In this paper, we introduce a support vector machine (SVM) approach to prediction and analysis of beta-turns. We have investigated two aspects of applying SVM to the prediction and analysis of beta-turns. First, we developed a new SVM method, called BTSVM, which predicts beta-turns of a protein from its sequence. The prediction results on the dataset of 426 non-homologous protein chains by sevenfold cross-validation technique showed that our method is superior to the other previous methods. Second, we analyzed how amino acid positions support (or prevent) the formation of beta-turns based on the "multivariable" classification model of a linear SVM. This model is more general than the other ones of previous statistical methods. Our analysis results are more comprehensive and easier to use than previously published analysis results.
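A schematic of the sequence-window SVM idea described above: residues in a window are one-hot encoded and a linear SVM classifies the central position as beta-turn or not, scored by sevenfold cross-validation. The sequences and labels below are fabricated placeholders; the real BTSVM used the 426-chain dataset and richer encodings.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"
WIN = 7  # window length centred on the residue being classified

def encode_window(window):
    """One-hot encode a window of residues into a flat feature vector."""
    v = np.zeros(len(window) * len(AA))
    for i, aa in enumerate(window):
        v[i * len(AA) + AA.index(aa)] = 1.0
    return v

# Fabricated toy data: random windows with random turn / non-turn labels.
rng = np.random.default_rng(0)
windows = ["".join(rng.choice(list(AA), WIN)) for _ in range(400)]
labels = rng.integers(0, 2, 400)

X = np.array([encode_window(w) for w in windows])
clf = LinearSVC(C=1.0, max_iter=5000)
print(cross_val_score(clf, X, labels, cv=7).mean())   # sevenfold cross-validation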
Astronautic Structures Manual, Volume 3
NASA Technical Reports Server (NTRS)
1975-01-01
This document (Volumes I, II, and III) presents a compilation of industry-wide methods in aerospace strength analysis that can be carried out by hand, that are general enough in scope to cover most structures encountered, and that are sophisticated enough to give accurate estimates of the actual strength expected. It provides analysis techniques for the elastic and inelastic stress ranges. It serves not only as a catalog of methods not usually available, but also as a reference source for the background of the methods themselves. An overview of the manual is as follows: Section A is a general introduction of methods used and includes sections on loads, combined stresses, and interaction curves; Section B is devoted to methods of strength analysis; Section C is devoted to the topic of structural stability; Section D is on thermal stresses; Section E is on fatigue and fracture mechanics; Section F is on composites; Section G is on rotating machinery; and Section H is on statistics. These three volumes supersede Volumes I and II, NASA TM X-60041 and NASA TM X-60042, respectively.
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
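For the simplest single-species case, the equilibrium sensitivities referred to above have a closed form: with colonization probability gamma and extinction probability epsilon, the two-state Markov chain has equilibrium occupancy psi* = gamma/(gamma+epsilon), so d psi*/d gamma = epsilon/(gamma+epsilon)^2 and d psi*/d epsilon = -gamma/(gamma+epsilon)^2. The snippet below checks this against a numerically computed stationary distribution; the parameter values are arbitrary, and the paper's multistate and lower-level-parameter cases generalize this.

import numpy as np

gamma, eps = 0.25, 0.10   # arbitrary colonization / extinction probabilities

def equilibrium_occupancy(g, e):
    # Stationary distribution of the 2-state chain (unoccupied, occupied).
    P = np.array([[1 - g, g],
                  [e, 1 - e]])
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()
    return pi[1]

psi = equilibrium_occupancy(gamma, eps)
h = 1e-6
d_gamma_num = (equilibrium_occupancy(gamma + h, eps) - psi) / h
print(psi, gamma / (gamma + eps))                     # both ~0.714
print(d_gamma_num, eps / (gamma + eps) ** 2)          # both ~0.816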
ERIC Educational Resources Information Center
Krishen, Anoop
1989-01-01
This review covers methods for identification, characterization, and determination of rubber and materials in rubber. Topics include: general information, nuclear magnetic resonance spectroscopy, infrared spectroscopy, thermal methods, gel permeation chromatography, size exclusion chromatography, analysis related to safety and health, and…
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.
A general method for handling missing binary outcome data in randomized controlled trials
Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen
2014-01-01
Aims The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design We propose a sensitivity analysis where standard analyses, which could include ‘missing = smoking’ and ‘last observation carried forward’, are embedded in a wider class of models. Setting We apply our general method to data from two smoking cessation trials. Participants A total of 489 and 1758 participants from two smoking cessation trials. Measurements The abstinence outcomes were obtained using telephone interviews. Findings The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. Conclusions A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. PMID:25171441
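A bare-bones version of the sensitivity-analysis idea: embed assumptions such as 'missing = smoking' in a family indexed by a parameter delta, here the log-odds shift of abstinence among missing participants relative to observed ones in the same arm, and trace the estimated risk difference over a grid of delta. Everything below (data, delta range, the exact parameterization) is illustrative and is not the class of models or the trials analysed in the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 500
arm = rng.integers(0, 2, n)                      # 0 = control, 1 = intervention
p_abst = np.where(arm == 1, 0.35, 0.25)
abst = (rng.random(n) < p_abst).astype(float)
abst[rng.random(n) < 0.2] = np.nan               # ~20% missing outcomes

def risk_difference(delta):
    est = []
    for a in (0, 1):
        y = abst[arm == a]
        obs = y[~np.isnan(y)]
        p_obs = obs.mean()
        # Shift the observed log-odds by delta for the missing participants.
        logit = np.log(p_obs / (1 - p_obs)) + delta
        p_mis = 1 / (1 + np.exp(-logit))
        n_mis = np.isnan(y).sum()
        est.append((obs.sum() + n_mis * p_mis) / y.size)
    return est[1] - est[0]

for delta in (-np.inf, -2, -1, 0, 1):            # -inf reproduces "missing = smoking"
    print(delta, round(risk_difference(delta), 3))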
A New DEM Generalization Method Based on Watershed and Tree Structure
Chen, Yonggang; Ma, Tianwu; Chen, Xiaoyin; Chen, Zhende; Yang, Chunju; Lin, Chenzhi; Shan, Ligang
2016-01-01
DEM generalization is the basis of multi-dimensional observation and of expressing and analyzing the terrain. DEM is also the core of building the Multi-Scale Geographic Database. Thus, many researchers have studied both the theory and the method of DEM generalization. This paper proposes a new method of generalizing terrain, which extracts feature points based on a tree model construction that considers the nested relationship of watershed characteristics. The paper used the 5 m resolution DEM of the Jiuyuan gully watersheds in the Loess Plateau as the original data and extracted the feature points in every single watershed to reconstruct the DEM. The paper has achieved generalization from 1:10000 DEM to 1:50000 DEM by computing the best threshold. The best threshold is 0.06. In the last part of the paper, the height accuracy of the generalized DEM is analyzed by comparing it with some other classic methods, such as aggregation, resampling, and VIP, based on the original 1:50000 DEM. The outcome shows that the method performed well. The method can choose the best threshold according to the target generalization scale to decide the density of the feature points in the watershed. Meanwhile, this method can preserve the skeleton of the terrain, which can meet the needs of different levels of generalization. Additionally, through overlapped contour contrast, elevation statistical parameters and slope and aspect analysis, we found that the W8D algorithm performed well and effectively in terrain representation. PMID:27517296
Perturbation Selection and Local Influence Analysis for Nonlinear Structural Equation Model
ERIC Educational Resources Information Center
Chen, Fei; Zhu, Hong-Tu; Lee, Sik-Yum
2009-01-01
Local influence analysis is an important statistical method for studying the sensitivity of a proposed model to model inputs. One of its important issues is related to the appropriate choice of a perturbation vector. In this paper, we develop a general method to select an appropriate perturbation vector and a second-order local influence measure…
Application of software technology to automatic test data analysis
NASA Technical Reports Server (NTRS)
Stagner, J. R.
1991-01-01
The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.
Economists Concoct New Method for Comparing Graduation Rates
ERIC Educational Resources Information Center
Glenn, David
2007-01-01
A pair of economists at the College of William and Mary have devised a new way of comparing colleges' graduation rates--a method, borrowed from business analysis, that they believe is fairer and more useful than the techniques used by "U.S. News & World Report" and the Education Trust. That general technique of regression analysis underlies the…
[Screening for cancer - economic consideration and cost-effectiveness].
Kjellberg, Jakob
2014-06-09
Cost-effectiveness analysis has become an accepted method to evaluate medical technology and allocate scarce health-care resources. Published decision analyses show that screening for cancer in general is cost-effective. However, cost-effectiveness analyses are only as good as the clinical data and the results are sensitive to the chosen methods and perspective of the analysis.
A Comparison of Component and Factor Patterns: A Monte Carlo Approach.
ERIC Educational Resources Information Center
Velicer, Wayne F.; And Others
1982-01-01
Factor analysis, image analysis, and principal component analysis are compared with respect to the factor patterns they would produce under various conditions. The general conclusion that is reached is that the three methods produce results that are equivalent. (Author/JKS)
Theoretical and software considerations for nonlinear dynamic analysis
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1983-01-01
In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements of structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.
Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images
Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049
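For comparison with the higher-order method proposed above, ordinary (linear) canonical correlation analysis between two data sets takes only a few lines with scikit-learn; the matrices here are random stand-ins, not image patches under two illuminations.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
shared = rng.standard_normal((1000, 3))                   # latent structure shared by both sets
X = shared @ rng.standard_normal((3, 20)) + 0.5 * rng.standard_normal((1000, 20))
Y = shared @ rng.standard_normal((3, 15)) + 0.5 * rng.standard_normal((1000, 15))

cca = CCA(n_components=3).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
canon_corrs = [np.corrcoef(Xc[:, k], Yc[:, k])[0, 1] for k in range(3)]
print(np.round(canon_corrs, 3))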
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
Hilbert's axiomatic method and Carnap's general axiomatics.
Stöltzner, Michael
2015-10-01
This paper compares the axiomatic method of David Hilbert and his school with Rudolf Carnap's general axiomatics that was developed in the late 1920s, and that influenced his understanding of logic of science throughout the 1930s, when his logical pluralism developed. The distinct perspectives become visible most clearly in how Richard Baldus, along the lines of Hilbert, and Carnap and Friedrich Bachmann analyzed the axiom system of Hilbert's Foundations of Geometry—the paradigmatic example for the axiomatization of science. Whereas Hilbert's axiomatic method started from a local analysis of individual axiom systems in which the foundations of mathematics as a whole entered only when establishing the system's consistency, Carnap and his Vienna Circle colleague Hans Hahn instead advocated a global analysis of axiom systems in general. A primary goal was to evade, or formalize ex post, mathematicians' 'material' talk about axiom systems for such talk was held to be error-prone and susceptible to metaphysics. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lallemand, Pierre; Luo, Li-Shi
2000-01-01
The generalized hydrodynamics (the wave vector dependence of the transport coefficients) of a generalized lattice Boltzmann equation (LBE) is studied in detail. The generalized lattice Boltzmann equation is constructed in moment space rather than in discrete velocity space. The generalized hydrodynamics of the model is obtained by solving the dispersion equation of the linearized LBE either analytically by using perturbation technique or numerically. The proposed LBE model has a maximum number of adjustable parameters for the given set of discrete velocities. Generalized hydrodynamics characterizes dispersion, dissipation (hyper-viscosities), anisotropy, and lack of Galilean invariance of the model, and can be applied to select the values of the adjustable parameters which optimize the properties of the model. The proposed generalized hydrodynamic analysis also provides some insights into stability and proper initial conditions for LBE simulations. The stability properties of some 2D LBE models are analyzed and compared with each other in the parameter space of the mean streaming velocity and the viscous relaxation time. The procedure described in this work can be applied to analyze other LBE models. As examples, LBE models with various interpolation schemes are analyzed. Numerical results on shear flow with an initially discontinuous velocity profile (shock) with or without a constant streaming velocity are shown to demonstrate the dispersion effects in the LBE model; the results compare favorably with our theoretical analysis. We also show that whereas linear analysis of the LBE evolution operator is equivalent to Chapman-Enskog analysis in the long wave-length limit (wave vector k = 0), it can also provide results for large values of k. Such results are important for the stability and other hydrodynamic properties of the LBE method and cannot be obtained through Chapman-Enskog analysis.
Efficient Analysis of Complex Structures
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.
2000-01-01
The various accomplishments achieved during this project are: (1) A survey of Neural Network (NN) applications using the MATLAB NN Toolbox on structural engineering, especially on equivalent continuum models (Appendix A). (2) Application of NN and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs. Calculation of all sorts of test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second order sensitivities on simulating wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishing a general methodology of simulating the modal responses by direct application of NN and by sensitivity techniques, in a design space composed of a number of design points. Comparison is made through examples using these two methods (Appendix E). (6) Establishing a general methodology of efficient analysis of complex wing structures by indirect application of NN: the NN-aided Equivalent Plate Analysis. Training of the Neural Networks for this purpose in several cases of design spaces, which can be applicable for actual design of complex wings (Appendix F).
Andrić, Filip; Héberger, Károly
2015-02-06
Lipophilicity (logP) represents one of the most studied and most frequently used fundamental physicochemical properties. At present there are several possibilities for its quantitative expression and many of them stem from chromatographic experiments. Numerous attempts have been made to compare different computational methods, chromatographic methods vs. computational approaches, as well as chromatographic methods and the direct shake-flask procedure, without definitive results, or the findings have not been generally accepted. In the present work numerous chromatographically derived lipophilicity measures in combination with diverse computational methods were ranked and clustered using the novel variable discrimination and ranking approaches based on the sum of ranking differences and the generalized pair correlation method. Available literature logP data measured on HILIC and classical reversed-phase systems, covering different classes of compounds, have been compared using the most frequently used multivariate data analysis techniques (principal component and hierarchical cluster analysis) as well as with the conclusions in the original sources. Chromatographic lipophilicity measures obtained under typical reversed-phase conditions outperform the majority of computationally estimated logPs. In contrast, in the case of HILIC none of the many proposed chromatographic indices outperforms any of the computationally assessed logPs. Only two of them (logkmin and kmin) may be selected as recommended chromatographic lipophilicity measures. Both ranking approaches, sum of ranking differences and generalized pair correlation method, although based on different backgrounds, provide highly similar variable ordering and grouping, leading to the same conclusions. Copyright © 2015. Published by Elsevier B.V.
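The sum of ranking differences (SRD) used above is simple to compute: rank each method's values over the compounds, rank a reference column (here the row-wise average, a common choice), and sum the absolute rank differences per method, smaller being better. The sketch below uses a made-up compounds-by-methods lipophilicity table and omits the randomization-based validation step that accompanies a full SRD analysis.

import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
compounds, methods = 25, 6
logp = rng.normal(2.0, 1.0, (compounds, 1)) + 0.3 * rng.standard_normal((compounds, methods))

reference = logp.mean(axis=1)                   # row-average reference values
ref_rank = rankdata(reference)
srd = [np.abs(rankdata(logp[:, j]) - ref_rank).sum() for j in range(methods)]
print(np.argsort(srd), np.sort(srd))            # methods ordered from closest to reference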
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
NASA Technical Reports Server (NTRS)
Noah, S. T.; Kim, Y. B.
1991-01-01
A general approach is developed for determining the periodic solutions and their stability of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, therefore enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur. This led to the identification of parameter ranges in which chaotic response occurred.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.
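The advantage of analytic derivatives over finite differencing in the static case comes from differentiating K u = f with respect to a design variable p, which gives du/dp = -K^{-1} (dK/dp) u. A toy check on a two-degree-of-freedom spring model with a hypothetical design variable p scaling one stiffness (not the SPAR implementation):

import numpy as np

f = np.array([0.0, 1.0])

def K(p):
    # Two springs in series; p scales the stiffness of the first spring.
    k1, k2 = 10.0 * p, 5.0
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

p0, dp = 1.0, 1e-6
u = np.linalg.solve(K(p0), f)
dK = (K(p0 + dp) - K(p0 - dp)) / (2 * dp)          # dK/dp
du_analytic = -np.linalg.solve(K(p0), dK @ u)
du_fd = (np.linalg.solve(K(p0 + dp), f) - np.linalg.solve(K(p0 - dp), f)) / (2 * dp)
print(du_analytic, du_fd)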
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.
2014-01-01
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625
The formulation and estimation of a spatial skew-normal generalized ordered-response model.
DOT National Transportation Integrated Search
2016-06-01
This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an : associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal : distribut...
Guillén-Riquelme, Alejandro; Buela-Casal, Gualberto
2014-01-01
Since its creation the STAI has been cited in more than 14,000 documents, with more than 60 adaptations in different countries. In some adaptations this instrument has no clinical scores. The aim of this work is to determine if the State-Trait Anxiety Inventory (STAI) has higher scores in patients diagnosed with anxiety than in the general population. In addition, we want to examine if the internal consistency is adequate in anxious patient samples. We performed a literature search in Tripdatabase, Cochrane, Web of Knowledge, Scopus, PsycINFO and Google Scholar for documents published between 2008 and 2012. We selected 131 scientific articles to compare patients diagnosed with anxiety with the general population, and 25 for the generalization of reliability. For the analysis we used Cohen's d for means comparisons (random-effects method) and Cronbach's alpha for the reliability generalization (fixed-effects method). In the group comparisons, the differences in state anxiety (d=1.39; 95% CI: 1.22-1.56) and in trait anxiety (d=1.74; 95% CI: 1.56-1.91) were significant. The reliability for patients with an anxiety disorder was between 0.87 and 0.93. So it seems that the STAI is sensitive to the level of anxiety of the individual and reliable for patients with a diagnosis of panic attack, specific phobia, social phobia, generalized social phobia, generalized anxiety disorder, post-traumatic stress disorder, obsessive compulsive disorder or acute stress disorder.
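The random-effects pooling of Cohen's d values mentioned above can be sketched as a standard DerSimonian-Laird computation; the effect sizes and variances below are invented, not the 131 studies analysed.

import numpy as np

d = np.array([1.2, 1.6, 1.1, 1.8, 1.4])       # hypothetical study-level Cohen's d
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])  # their sampling variances

w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)       # between-study variance (DL estimator)

w_star = 1.0 / (v + tau2)
d_pooled = np.sum(w_star * d) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(round(d_pooled, 3), (round(d_pooled - 1.96 * se, 3), round(d_pooled + 1.96 * se, 3)))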
A simple method of calculating Stirling engines for engine design optimization
NASA Technical Reports Server (NTRS)
Martini, W. R.
1978-01-01
A calculation method is presented for a rhombic drive Stirling engine with a tubular heater and cooler and a screen type regenerator. Generally the equations presented describe power generation and consumption and heat losses. It is the simplest type of analysis that takes into account the conflicting requirements inherent in Stirling engine design. The method itemizes the power and heat losses for intelligent engine optimization. The results of engine analysis of the GPU-3 Stirling engine are compared with more complicated engine analysis and with engine measurements.
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indices are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and the test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
NASA Technical Reports Server (NTRS)
Allen, G.
1972-01-01
The use of the theta-operator method and generalized hypergeometric functions in obtaining solutions to nth-order linear ordinary differential equations is explained. For completeness, the analysis of the differential equation to determine whether the point of expansion is an ordinary point or a regular singular point is included. The superiority of the two methods shown over the standard method is demonstrated by using all three of the methods to work out several examples. Also included is a compendium of formulae and properties of the theta operator and generalized hypergeometric functions which is complete enough to make the report self-contained.
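For reference, the standard definitions underlying the theta-operator and generalized hypergeometric approaches can be written as follows; this is standard notation, not taken from the report itself.

% Theta operator and generalized hypergeometric series (standard definitions).
\theta \equiv x\frac{d}{dx}, \qquad \theta\,x^{k} = k\,x^{k},
\qquad
{}_{p}F_{q}\!\left(\begin{matrix}a_1,\dots,a_p\\ b_1,\dots,b_q\end{matrix};x\right)
 = \sum_{n=0}^{\infty}
   \frac{(a_1)_n\cdots(a_p)_n}{(b_1)_n\cdots(b_q)_n}\,\frac{x^{n}}{n!},
\qquad (a)_n = a(a+1)\cdots(a+n-1).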
Aprea, C; Sciarra, G; Bozzi, N
1997-01-01
Two methods for the quantitative analysis of 2,4-dichlorophenoxyacetic acid (2,4-D) and 2-methyl-4-chlorophenoxyacetic acid (MCPA) in urine were compared. The first was a high-performance liquid chromatography method using a C8 column with ion suppression and diode array detection. The urine extracts were first purified by solid-phase extraction (SPE) on silica capillary columns. The detection limit of the method was 15 micrograms/L for both compounds. The percentage coefficient of variation of the whole analysis evaluated at a concentration of 125.0 micrograms/L was 6.2% for 2,4-D and 6.8% for MCPA. The mean recovery of analysis was 81% for 2,4-D and 85% for MCPA. The second was a gas chromatographic (GC) method in which the compounds were first derivatized with pentafluorobenzylbromide to pentafluorobenzyl esters, which were determined with a slightly polar capillary column and electron capture detection. Before GC analysis, the urine extracts were purified by SPE on silica capillary columns. This method had a detection limit of 1 microgram/L for both compounds and a percentage coefficient of variation of the whole analysis, evaluated at a concentration of 30.0 micrograms/L, of 8% for 2,4-D, and of 5.5% for MCPA. The mean recovery was 87% for 2,4-D and 94% for MCPA. The low detection limit made the second method suitable for assaying the two herbicides in the general population. Duplicate analysis of ten urine samples from occupationally exposed subjects by the two methods gave identical results for a wide range of concentrations.
NASA Technical Reports Server (NTRS)
Press, Harry; Mazelsky, Bernard
1954-01-01
The applicability of some results from the theory of generalized harmonic analysis (or power-spectral analysis) to the analysis of gust loads on airplanes in continuous rough air is examined. The general relations for linear systems between power spectrums of a random input disturbance and an output response are used to relate the spectrum of airplane load in rough air to the spectrum of atmospheric gust velocity. The power spectrum of loads is shown to provide a measure of the load intensity in terms of the standard deviation (root mean square) of the load distribution for an airplane in flight through continuous rough air. For the case of a load output having a normal distribution, which appears from experimental evidence to apply to homogeneous rough air, the standard deviation is shown to describe the probability distribution of loads or the proportion of total time that the load has given values. Thus, for an airplane in flight through homogeneous rough air, the probability distribution of loads may be determined from a power-spectral analysis. In order to illustrate the application of power-spectral analysis to gust-load analysis and to obtain an insight into the relations between loads and airplane gust-response characteristics, two selected series of calculations are presented. The results indicate that both methods of analysis yield results that are consistent to a first approximation.
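The input-output relation the abstract refers to can be written compactly as below; the symbols are generic choices (gust-velocity spectrum, frequency-response function, load spectrum) rather than the report's own notation.

% Linear-system relation between gust input and load output spectra,
% the resulting load variance, and the exceedance probability for a
% zero-mean normally distributed load.
\Phi_L(\omega) = \lvert H(\omega)\rvert^{2}\,\Phi_g(\omega),
\qquad
\sigma_L^{2} = \int_{0}^{\infty} \Phi_L(\omega)\, d\omega ,
\qquad
P\bigl(\lvert L \rvert > \ell\bigr)
  = \operatorname{erfc}\!\left(\frac{\ell}{\sigma_L\sqrt{2}}\right).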
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties endowed in geofields such as geochemical and geophysical anomalies, and they are commonly investigated by using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method was proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrated that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density-value in density-area fractal modeling of singular geochemical distributions. Accordingly, we presented a novel local singularity analysis (LSA) using the WMD algorithm which extends the conventional moving average to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated using a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving-average method, the novel LSA using WMD better identified weak geochemical anomalies associated with mineralization in the covered area.
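As a rough illustration of the conventional window-based (moving-average) local singularity analysis that the paper improves upon, the following Python sketch estimates a singularity index on a gridded map. The window sizes, the scaling relation used, and all names are assumptions, and the WMD kernel version is not reproduced here.

# Window-based local singularity analysis (LSA) on a gridded geochemical map.
# Assumes strictly positive concentration values.
import numpy as np

def local_singularity(grid, half_widths=(1, 2, 3, 4, 5)):
    """Estimate a singularity index alpha at every cell of a 2-D array.
    The mean concentration rho(eps) in square windows of increasing size eps
    is assumed to scale as rho ~ eps**(alpha - 2)."""
    ny, nx = grid.shape
    alpha = np.full_like(grid, np.nan, dtype=float)
    log_eps = np.log([2 * h + 1 for h in half_widths])
    for i in range(ny):
        for j in range(nx):
            log_rho = []
            for h in half_widths:
                win = grid[max(0, i - h):i + h + 1, max(0, j - h):j + h + 1]
                log_rho.append(np.log(win.mean()))
            slope = np.polyfit(log_eps, log_rho, 1)[0]
            alpha[i, j] = slope + 2.0   # alpha < 2 flags local enrichment
    return alpha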
Dutheil, Julien; Gaillard, Sylvain; Bazin, Eric; Glémin, Sylvain; Ranwez, Vincent; Galtier, Nicolas; Belkhir, Khalid
2006-04-04
A large number of bioinformatics applications in the fields of bio-sequence analysis, molecular evolution and population genetics typically share input/output methods, data storage requirements and data analysis algorithms. Such common features may be conveniently bundled into re-usable libraries, which enable the rapid development of new methods and robust applications. We present Bio++, a set of Object Oriented libraries written in C++. Available components include classes for data storage and handling (nucleotide/amino-acid/codon sequences, trees, distance matrices, population genetics datasets), various input/output formats, basic sequence manipulation (concatenation, transcription, translation, etc.), phylogenetic analysis (maximum parsimony, Markov models, distance methods, likelihood computation and maximization), population genetics/genomics (diversity statistics, neutrality tests, various multi-locus analyses) and various algorithms for numerical calculus. Implementation of methods aims at being both efficient and user-friendly. Special attention was given to the library design to enable easy extension and new methods development. We defined a general hierarchy of classes that allows developers to implement their own algorithms while remaining compatible with the rest of the libraries. Bio++ source code is distributed free of charge under the CeCILL general public licence from its website http://kimura.univ-montp2.fr/BioPP.
The Fourier decomposition method for nonlinear and non-stationary time series analysis.
Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-03-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms.
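The "generalized Fourier expansion with variable amplitudes and variable frequencies" can be summarized schematically as follows; the symbols are assumed, not the authors' exact notation.

% Schematic decomposition into Fourier intrinsic band functions (FIBFs)
% with time-varying amplitude and non-negative instantaneous frequency.
x(t) \;=\; a_0 \;+\; \sum_{i=1}^{M} a_i(t)\cos\phi_i(t),
\qquad
\omega_i(t) \;=\; \frac{d\phi_i(t)}{dt} \;\ge\; 0 .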
The Fourier decomposition method for nonlinear and non-stationary time series analysis
Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-01-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of ‘Fourier intrinsic band functions’ (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time–frequency–energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms. PMID:28413352
Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying
2017-11-01
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.
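A plausible schematic of the ridge idea described in the abstract, in assumed notation (r the vector of polychoric correlations, rho(theta) its model counterpart, Gamma-hat the estimated asymptotic covariance of r); the authors' exact formulation may differ.

% Ridge-GLS estimator (schematic): a ridge term kappa*I stabilizes the
% weight matrix; kappa = 0 recovers AGLS, large kappa approaches LS.
\hat{\theta}_{\kappa}
  \;=\; \arg\min_{\theta}\;
  \bigl(r - \rho(\theta)\bigr)^{\top}
  \bigl(\hat{\Gamma} + \kappa I\bigr)^{-1}
  \bigl(r - \rho(\theta)\bigr),
\qquad \kappa \ge 0 .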
ERIC Educational Resources Information Center
D'Amelia, Ronald P.; Stracuzzi, Vincent; Nirode, William F.
2008-01-01
Today's general chemistry students are introduced to many of the principles and concepts of thermodynamics. In first-year general chemistry undergraduate courses, thermodynamic properties such as heat capacity are frequently discussed. Classical calorimetric methods of analysis and thermal equilibrium experiments are used to determine heat…
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
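A schematic of the quadrivariate random-effects structure described above, in assumed notation; the authors' parameterization may differ in detail.

% For study j, the logit-transformed sensitivities and specificities of the
% two tests share a 4-dimensional normal random effect; the quantities of
% interest are differences of back-transformed means.
\bigl(\operatorname{logit} Se_{1j},\ \operatorname{logit} Sp_{1j},\
      \operatorname{logit} Se_{2j},\ \operatorname{logit} Sp_{2j}\bigr)^{\top}
  = \mu + b_j, \qquad b_j \sim N_4(0,\Sigma),
\qquad
\Delta_{Se} = \operatorname{expit}(\mu_3) - \operatorname{expit}(\mu_1),\quad
\Delta_{Sp} = \operatorname{expit}(\mu_4) - \operatorname{expit}(\mu_2).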
ERIC Educational Resources Information Center
Neman, Robert Lynn
This study was designed to assess the effects of the problem-oriented method compared to those of the traditional approach in general chemistry at the college level. The problem-oriented course included topics such as air and water pollution, drug addiction and analysis, tetraethyl-lead additives, insecticides in the environment, and recycling of…
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two-stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations and then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates variations in the two-stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
Detecting Spatio-Temporal Modes in Multivariate Data by Entropy Field Decomposition
Frank, Lawrence R.; Galinsky, Vitaly L.
2016-01-01
A new data analysis method that addresses a general problem of detecting spatio-temporal variations in multivariate data is presented. The method utilizes two recent and complementary general approaches to data analysis, information field theory (IFT) and entropy spectrum pathways (ESP). Both methods reformulate and incorporate Bayesian theory, and thus use prior information to uncover underlying structure of the unknown signal. Unification of ESP and IFT creates an approach that is non-Gaussian and non-linear by construction and is found to produce unique spatio-temporal modes of signal behavior that can be ranked according to their significance, from which space-time trajectories of parameter variations can be constructed and quantified. Two brief examples of real-world applications of the theory to the analysis of data of completely different, unrelated natures, lacking any underlying similarity, are also presented. The first example provides an analysis of resting state functional magnetic resonance imaging (rsFMRI) data that allowed us to create an efficient and accurate computational method for assessing and categorizing brain activity. The second example demonstrates the potential of the method in the application to the analysis of a strong atmospheric storm circulation system during the complicated stage of tornado development and formation using data recorded by a mobile Doppler radar. Reference implementation of the method will be made available as a part of the QUEST toolkit that is currently under development at the Center for Scientific Computation in Imaging. PMID:27695512
NASA Astrophysics Data System (ADS)
Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2015-06-01
When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.
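A simplified Python sketch of the partial-correlation idea behind DPXA: regress the common driver out of both series, then apply detrended cross-correlation analysis (DCCA) to the residual profiles. It compresses several steps of the published algorithm, and the window scheme, detrending order, and names are assumptions.

# Simplified DPXA-style coefficient: remove a common driver z from x and y,
# then compute the DCCA cross-correlation coefficient of the residuals.
import numpy as np

def _profile(u):
    return np.cumsum(u - u.mean())

def _detrended(profile, start, size):
    seg = profile[start:start + size]
    t = np.arange(size)
    trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
    return seg - trend

def dpxa_coefficient(x, y, z, size):
    """Partial DCCA coefficient of x and y at scale `size`, controlling for z."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # regress out z from x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # regress out z from y
    px, py = _profile(rx), _profile(ry)
    f2_xy, f2_xx, f2_yy = [], [], []
    for start in range(0, len(px) - size + 1, size):
        ex, ey = _detrended(px, start, size), _detrended(py, start, size)
        f2_xy.append(np.mean(ex * ey))
        f2_xx.append(np.mean(ex * ex))
        f2_yy.append(np.mean(ey * ey))
    return np.mean(f2_xy) / np.sqrt(np.mean(f2_xx) * np.mean(f2_yy))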
7 CFR 160.1 - Definitions of general terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Analysis. Any examination by physical, chemical, or sensory methods. (m) Classification. Designation as to... Administrator has sufficient and proper interest in the analysis, classification, grading, or sale of naval... provisions of the act and the provisions in this part to show the results of any examination, analysis...
Mohammadkhani, Parvaneh; Pourshahbaz, Abbas; Kami, Maryam; Mazidi, Mahdi; Abasi, Imaneh
2016-01-01
Objective: Generalized anxiety disorder is one of the most common anxiety disorders in the general population. Several studies suggest that anxiety sensitivity is a vulnerability factor in generalized anxiety severity. However, some other studies suggest that negative repetitive thinking and experiential avoidance as response factors can explain this relationship. Therefore, this study aimed to investigate the mediating role of experiential avoidance and negative repetitive thinking in the relationship between anxiety sensitivity and generalized anxiety severity. Method: This was a cross-sectional and correlational study. A sample of 475 university students was selected through a stratified sampling method. The participants completed the Anxiety Sensitivity Inventory-3, Acceptance and Action Questionnaire-II, Perseverative Thinking Questionnaire, and Generalized Anxiety Disorder 7-item Scale. Data were analyzed by Pearson correlation, multiple regression analysis and path analysis. Results: The results revealed a positive relationship between anxiety sensitivity, particularly cognitive anxiety sensitivity, experiential avoidance, repetitive thinking and generalized anxiety severity. In addition, findings showed that repetitive thinking, but not experiential avoidance, fully mediated the relationship between cognitive anxiety sensitivity and generalized anxiety severity. The significance level was set at p<0.005. Conclusion: Consistent with the trans-diagnostic hypothesis, anxiety sensitivity predicts generalized anxiety severity, but its effect operates through the generation of repetitive negative thoughts. PMID:27928245
ERIC Educational Resources Information Center
Ling, Guangming; Rijmen, Frank
2011-01-01
The factorial structure of the Time Management (TM) scale of the Student 360: Insight Program (S360) was evaluated based on a national sample. A general procedure with a variety of methods was introduced and implemented, including the computation of descriptive statistics, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA).…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-12-01
The purpose of this Handbook is to establish general training program guidelines for training personnel in developing training for operation, maintenance, and technical support personnel at Department of Energy (DOE) nuclear facilities. Table-top job analysis (TTJA) is not the only method of job analysis; however, when conducted properly TTJA can be cost effective, efficient, and self-validating, and represents an effective method of defining job requirements. The table-top job analysis is suggested in the DOE Training Accreditation Program manuals as an acceptable alternative to traditional methods of analyzing job requirements. DOE 5480-20A strongly endorses and recommends it as the preferred method for analyzing jobs for positions addressed by the Order.
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970's, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960's, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960's and 1970's respectively, formed the basis of multivariate cluster analysis methodology for many years. However, several shortcomings of these methods include strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values. In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic-algorithm-based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We will showcase how these algorithms can be used to process multivariate data from astronomical observations.
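As an illustration of information-based model selection replacing stepwise regression, the following Python sketch fits every subset of candidate predictors by ordinary least squares and ranks the models by AIC. The specific information score used in the thesis is not stated, so this is an assumed, generic variant.

# All-subsets OLS ranked by AIC, as an alternative to stepwise regression.
import itertools
import numpy as np

def best_subset_by_aic(y, X, names):
    """y: (n,) response; X: (n, p) candidate predictors; names: p labels."""
    n, p = X.shape
    results = []
    for k in range(p + 1):
        for cols in itertools.combinations(range(p), k):
            design = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            rss = np.sum((y - design @ beta) ** 2)
            aic = n * np.log(rss / n) + 2 * (k + 1)   # +1 for the intercept
            results.append((aic, [names[c] for c in cols]))
    return min(results, key=lambda r: r[0])           # lowest AIC wins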
A Unified Development of Basis Reduction Methods for Rotor Blade Analysis
NASA Technical Reports Server (NTRS)
Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)
2001-01-01
The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades. These methods are the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they cannot accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means for accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.
Visual Analytics of integrated Data Systems for Space Weather Purposes
NASA Astrophysics Data System (ADS)
Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo
Analysis of information from multiple data sources obtained through high resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is key for studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this novel representation approach, each generalized numerical lattice brings post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size and post-analytical measure (e.g., the autocorrelation, Hurst exponent, etc.) [1]. From this representation generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application in space science data, highlighting the possibility of a real-time analysis expert system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed in the framework of an automatic radio burst monitoring system. Our results may characterize the evolution of the variability pattern by computing the DFA scaling exponent in a short window scanned along the time series before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented. The prototype for visual analytics is implemented in the Compute Unified Device Architecture (CUDA) using K20 Nvidia graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al. doi: 10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al. doi:http://dx.doi.org/10.1016/j.jastp.2010.09.030, 2011.
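A compact Python sketch of detrended fluctuation analysis (DFA), the scaling-exponent estimator mentioned above; the window sizes and linear detrending order are illustrative choices, not those of the cited implementation.

# Detrended fluctuation analysis: slope of log F(n) versus log n.
import numpy as np

def dfa_exponent(series, window_sizes=(16, 32, 64, 128, 256)):
    profile = np.cumsum(series - np.mean(series))     # integrated series
    log_n, log_f = [], []
    for n in window_sizes:
        rms = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        log_n.append(np.log(n))
        log_f.append(np.log(np.mean(rms)))
    return np.polyfit(log_n, log_f, 1)[0]             # DFA scaling exponent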
Extreme learning machine for ranking: generalization analysis and applications.
Chen, Hong; Peng, Jiangtao; Zhou, Yicong; Li, Luoqing; Pan, Zhibin
2014-05-01
The extreme learning machine (ELM) has attracted increasing attention recently with its successful applications in classification and regression. In this paper, we investigate the generalization performance of ELM-based ranking. A new regularized ranking algorithm is proposed based on the combinations of activation functions in ELM. The generalization analysis is established for the ELM-based ranking (ELMRank) in terms of the covering numbers of hypothesis space. Empirical results on the benchmark datasets show the competitive performance of the ELMRank over the state-of-the-art ranking methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
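A minimal Python sketch of an extreme learning machine regressor (random hidden weights, sigmoid hidden layer, ridge-regularized least-squares output weights); the ranking-specific loss and activation-function combinations of ELMRank are not reproduced, and the class and parameter names are assumptions.

# Minimal extreme learning machine with a sigmoid hidden layer.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))   # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        A = H.T @ H + self.reg * np.eye(self.n_hidden)      # ridge-regularized
        self.beta = np.linalg.solve(A, H.T @ y)              # output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta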
Generalized causal mediation and path analysis: Extensions and practical considerations.
Albert, Jeffrey M; Cho, Jang Ik; Liu, Yiying; Nelson, Suchitra
2018-01-01
Causal mediation analysis seeks to decompose the effect of a treatment or exposure among multiple possible paths and provide causally interpretable path-specific effect estimates. Recent advances have extended causal mediation analysis to situations with a sequence of mediators or multiple contemporaneous mediators. However, available methods still have limitations, and computational and other challenges remain. The present paper provides an extended causal mediation and path analysis methodology. The new method, implemented in the new R package gmediation (described in a companion paper), accommodates both a sequence (two stages) of mediators and multiple mediators at each stage, and allows for multiple types of outcomes following generalized linear models. The methodology can also handle unsaturated models and clustered data. Addressing other practical issues, we provide new guidelines for the choice of a decomposition, and for the choice of a reference group multiplier for the reduction of Monte Carlo error in mediation formula computations. The new method is applied to data from a cohort study to illuminate the contribution of alternative biological and behavioral paths in the effect of socioeconomic status on dental caries in adolescence.
Wen, Cheng; Dallimer, Martin; Carver, Steve; Ziv, Guy
2018-05-06
Despite the great potential of mitigating carbon emission, development of wind farms is often opposed by local communities due to the visual impact on landscape. A growing number of studies have applied nonmarket valuation methods like Choice Experiments (CE) to value the visual impact by eliciting respondents' willingness to pay (WTP) or willingness to accept (WTA) for hypothetical wind farms through survey questions. Several meta-analyses have been found in the literature to synthesize results from different valuation studies, but they have various limitations related to the use of the prevailing multivariate meta-regression analysis. In this paper, we propose a new meta-analysis method to establish general functions for the relationships between the estimated WTP or WTA and three wind farm attributes, namely the distance to residential/coastal areas, the number of turbines and turbine height. This method involves establishing WTA or WTP functions for individual studies, fitting the average derivative functions and deriving the general integral functions of WTP or WTA against wind farm attributes. Results indicate that respondents in different studies consistently showed increasing WTP for moving wind farms to greater distances, which can be fitted by non-linear (natural logarithm) functions. However, divergent preferences for the number of turbines and turbine height were found in different studies. We argue that the new analysis method proposed in this paper is an alternative to the mainstream multivariate meta-regression analysis for synthesizing CE studies and the general integral functions of WTP or WTA against wind farm attributes are useful for future spatial modelling and benefit transfer studies. We also suggest that future multivariate meta-analyses should include non-linear components in the regression functions. Copyright © 2018. Published by Elsevier B.V.
RELATIVE CONTRIBUTIONS OF THREE DESCRIPTIVE METHODS: IMPLICATIONS FOR BEHAVIORAL ASSESSMENT
Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H
2009-01-01
This study compared the outcomes of three descriptive analysis methods—the ABC method, the conditional probability method, and the conditional and background probability method—to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations. PMID:19949536
DOT National Transportation Integrated Search
2011-12-01
Current AASHTO provisions for the conventional load rating of flat slab bridges rely on the equivalent strip method of analysis for determining live load effects; this is generally regarded as overly conservative by many professional engineers. A...
Generalized Analysis Tools for Multi-Spacecraft Missions
NASA Astrophysics Data System (ADS)
Chanteur, G. M.
2011-12-01
Analysis tools for multi-spacecraft missions like CLUSTER or MMS have been designed since the end of the 90's to estimate gradients of fields or to characterize discontinuities crossed by a cluster of spacecraft. Different approaches have been presented and discussed in the book "Analysis Methods for Multi-Spacecraft Data" published as Scientific Report 001 of the International Space Science Institute in Bern, Switzerland (G. Paschmann and P. Daly Eds., 1998). On one hand the approach using methods of least squares has the advantage to apply to any number of spacecraft [1] but is not convenient to perform analytical computation especially when considering the error analysis. On the other hand the barycentric approach is powerful as it provides simple analytical formulas involving the reciprocal vectors of the tetrahedron [2] but appears limited to clusters of four spacecraft. Moreover the barycentric approach allows to derive theoretical formulas for errors affecting the estimators built from the reciprocal vectors [2,3,4]. Following a first generalization of reciprocal vectors proposed by Vogt et al [4] and despite the present lack of projects with more than four spacecraft we present generalized reciprocal vectors for a cluster made of any number of spacecraft : each spacecraft is given a positive or nul weight. The non-coplanarity of at least four spacecraft with strictly positive weights is a necessary and sufficient condition for this analysis to be enabled. Weights given to spacecraft allow to minimize the influence of some spacecraft if its location or the quality of its data are not appropriate, or simply to extract subsets of spacecraft from the cluster. Estimators presented in [2] are generalized within this new frame except for the error analysis which is still under investigation. References [1] Harvey, C. C.: Spatial Gradients and the Volumetric Tensor, in: Analysis Methods for Multi-Spacecraft Data, G. Paschmann and P. Daly (eds.), pp. 307-322, ISSI SR-001, 1998. [2] Chanteur, G.: Spatial Interpolation for Four Spacecraft: Theory, in: Analysis Methods for Multi-Spacecraft Data, G. Paschmann and P. Daly (eds.), pp. 371-393, ISSI SR-001, 1998. [3] Chanteur, G.: Accuracy of field gradient estimations by Cluster: Explanation of its dependency upon elongation and planarity of the tetrahedron, pp. 265-268, ESA SP-449, 2000. [4] Vogt, J., Paschmann, G., and Chanteur, G.: Reciprocal Vectors, pp. 33-46, ISSI SR-008, 2008.
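A small Python sketch of the four-spacecraft reciprocal-vector gradient estimator from the barycentric approach cited above ([2]); the weighted generalization to clusters of more than four spacecraft discussed in the abstract is not reproduced, and the function names are assumptions.

# Reciprocal vectors of a spacecraft tetrahedron and the linear gradient estimator.
import numpy as np

def reciprocal_vectors(r):
    """r: (4, 3) array of spacecraft positions. Returns the 4 reciprocal vectors."""
    k = np.zeros((4, 3))
    for a in range(4):
        b, c, d = [i for i in range(4) if i != a]
        cross = np.cross(r[c] - r[b], r[d] - r[b])
        k[a] = cross / np.dot(r[a] - r[b], cross)
    return k   # for a non-degenerate tetrahedron, k.sum(axis=0) is ~0

def linear_gradient(r, f):
    """Estimate grad(f) from scalar values f measured at the 4 positions r."""
    f = np.asarray(f, float)
    return (reciprocal_vectors(np.asarray(r, float)) * f[:, None]).sum(axis=0)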
Seminar on Understanding Digital Control and Analysis in Vibration Test Systems
NASA Technical Reports Server (NTRS)
1975-01-01
The advantages of the digital methods over the analog vibration methods are demonstrated. The following topics are covered: (1) methods of computer-controlled random vibration and reverberation acoustic testing, (2) methods of computer-controlled sinewave vibration testing, and (3) methods of computer-controlled shock testing. General algorithms are described in the form of block diagrams and flow diagrams.
A general probabilistic model for group independent component analysis and its estimation methods
Guo, Ying
2012-01-01
Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first method is an exact EM algorithm which provides an exact E-step and an explicit noniterative M-step. The second method is a variational approximation EM algorithm which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789
Quantitative mass spectrometry methods for pharmaceutical analysis
Loos, Glenn; Van Schepdael, Ann
2016-01-01
Quantitative pharmaceutical analysis is nowadays frequently executed using mass spectrometry. Electrospray ionization coupled to a (hybrid) triple quadrupole mass spectrometer is generally used in combination with solid-phase extraction and liquid chromatography. Furthermore, isotopically labelled standards are often used to correct for ion suppression. The challenges in producing sensitive but reliable quantitative data depend on the instrumentation, sample preparation and hyphenated techniques. In this contribution, different approaches to enhance the ionization efficiencies using modified source geometries and improved ion guidance are provided. Furthermore, possibilities to minimize, assess and correct for matrix interferences caused by co-eluting substances are described. With the focus on pharmaceuticals in the environment and bioanalysis, different separation techniques, trends in liquid chromatography and sample preparation methods to minimize matrix effects and increase sensitivity are discussed. Although highly sensitive methods are generally aimed for to provide automated multi-residue analysis, (less sensitive) miniaturized set-ups have a great potential due to their ability for in-field usage. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644982
ERIC Educational Resources Information Center
de Laat, Maarten; Lally, Vic; Lipponen, Lasse; Simons, Robert-Jan
2007-01-01
The focus of this study is to explore the advances that Social Network Analysis (SNA) can bring, in combination with other methods, when studying Networked Learning/Computer-Supported Collaborative Learning (NL/CSCL). We present a general overview of how SNA is applied in NL/CSCL research; we then go on to illustrate how this research method can…
Traditional and Cognitive Job Analyses as Tools for Understanding the Skills Gap.
ERIC Educational Resources Information Center
Hanser, Lawrence M.
Traditional methods of job and task analysis may be categorized as worker-oriented methods focusing on general human behaviors performed by workers in jobs or as job-oriented methods focusing on the technologies involved in jobs. The ability of both types of traditional methods to identify, understand, and communicate the skills needed in high…
A general method for handling missing binary outcome data in randomized controlled trials.
Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen
2014-12-01
The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. We propose a sensitivity analysis where standard analyses, which could include 'missing = smoking' and 'last observation carried forward', are embedded in a wider class of models. We apply our general method to data from two smoking cessation trials. The two trials included 489 and 1758 participants, respectively. The abstinence outcomes were obtained using telephone interviews. The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. © 2014 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
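One simple instance of this kind of sensitivity analysis is sketched below in Python: missing outcomes in each arm are imputed under assumptions indexed by an odds multiplier delta, with delta = 0 corresponding to 'missing = smoking'. It is a generic illustration with made-up counts, not the authors' exact model.

# Sensitivity analysis for missing binary (abstinence) outcomes, indexed by
# delta, the assumed odds of abstinence among missing vs observed participants.
import numpy as np

def risk_difference_under_delta(events, observed, randomized, delta):
    """events/observed/randomized: length-2 sequences (control, intervention)."""
    events, observed, randomized = map(np.asarray, (events, observed, randomized))
    p_obs = events / observed
    odds_missing = delta * p_obs / (1.0 - p_obs)
    p_missing = odds_missing / (1.0 + odds_missing)
    n_missing = randomized - observed
    expected = events + n_missing * p_missing      # expected abstainers overall
    risk = expected / randomized
    return risk[1] - risk[0]

# Example sweep with made-up counts: delta = 0 is 'missing = smoking',
# delta = 1 assumes missing participants behave like observed ones.
for delta in (0.0, 0.25, 0.5, 1.0):
    rd = risk_difference_under_delta([40, 70], [200, 210], [250, 250], delta)
    print(f"delta={delta:>4}: risk difference = {rd:.3f}")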
NASA Astrophysics Data System (ADS)
Tisdell, C. C.
2017-08-01
Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740), and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem through a substitution. The purpose of this note is to present an alternative approach using 'exact methods', illustrating that a substitution and linearization of the problem is unnecessary. The ideas may be seen as forming a complementary and arguably simpler approach to Azevedo and Valentino that has the potential to be assimilated and adapted to the pedagogical needs of those learning and teaching exact differential equations in schools, colleges, universities and polytechnics. We illustrate how to apply the ideas through an analysis of the Gompertz equation, which is of interest in biomathematical models of tumour growth.
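As a worked illustration of the exact-equation viewpoint in assumed notation (not necessarily the substitution used in the note), the exactness condition and its application to the Gompertz law are as follows.

% Exactness condition for M dx + N dy = 0 and the Gompertz growth law
% (constants r, K > 0; A = ln(K/N_0) for initial size 0 < N_0 < K).
M(x,y)\,dx + N(x,y)\,dy = 0 \ \text{is exact} \iff
\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x},
\quad\text{with solution } F(x,y)=C \text{ where } F_x = M,\ F_y = N .
\\[4pt]
\text{Gompertz: } \frac{dN}{dt} = rN\ln\!\frac{K}{N}
\;\Longrightarrow\;
\frac{dN}{N\ln(K/N)} - r\,dt = 0
\;\Longrightarrow\;
N(t) = K\exp\!\bigl(-A e^{-rt}\bigr),\qquad A = \ln\!\frac{K}{N_0}.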
Lie Symmetry Analysis and Conservation Laws of a Generalized Time Fractional Foam Drainage Equation
NASA Astrophysics Data System (ADS)
Wang, Li; Tian, Shou-Fu; Zhao, Zhen-Tao; Song, Xiao-Qiu
2016-07-01
In this paper, a generalized time fractional nonlinear foam drainage equation is investigated by means of the Lie group analysis method. Based on the Riemann—Liouville derivative, the Lie point symmetries and symmetry reductions of the equation are derived, respectively. Furthermore, conservation laws with two kinds of independent variables of the equation are performed by making use of the nonlinear self-adjointness method. Supported by the National Training Programs of Innovation and Entrepreneurship for Undergraduates under Grant No. 201410290039, the Fundamental Research Funds for the Central Universities under Grant Nos. 2015QNA53 and 2015XKQY14, the Fundamental Research Funds for Postdoctoral at the Key Laboratory of Gas and Fire Control for Coal Mines, the General Financial Grant from the China Postdoctoral Science Foundation under Grant No. 2015M570498, and Natural Sciences Foundation of China under Grant No. 11301527
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction: For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta-analysis at each threshold. A standard meta-analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods: The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results: Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between-study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions: The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta-analysis of test accuracy studies. PMID:29052347
Multidisciplinary optimization of a controlled space structure using 150 design variables
NASA Technical Reports Server (NTRS)
James, Benjamin B.
1992-01-01
A general optimization-based method for the design of large space platforms through integration of the disciplines of structural dynamics and control is presented. The method uses the global sensitivity equations approach and is especially appropriate for preliminary design problems in which the structural and control analyses are tightly coupled. The method is capable of coordinating general purpose structural analysis, multivariable control, and optimization codes, and thus, can be adapted to a variety of controls-structures integrated design projects. The method is used to minimize the total weight of a space platform while maintaining a specified vibration decay rate after slewing maneuvers.
Efficient solution of the simplified P N equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Moreover, power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as on an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
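To make the power-iteration baseline concrete, here is a generic Python sketch for a k-eigenvalue problem of the form A·phi = (1/k)·B·phi on small dense matrices; the actual SPN matrices, multigrid preconditioners, and the generalized Davidson solver are not reproduced here.

# Generic power iteration for the dominant eigenpair of inv(A) @ B.
import numpy as np

def power_iteration_k(A, B, tol=1e-10, max_iter=10000):
    n = A.shape[0]
    phi = np.ones(n) / np.sqrt(n)
    k = 0.0
    for _ in range(max_iter):
        psi = np.linalg.solve(A, B @ phi)   # apply inv(A) B to the current iterate
        k_new = phi @ psi                   # Rayleigh-quotient eigenvalue estimate
        psi = psi / np.linalg.norm(psi)
        if abs(k_new - k) < tol * max(abs(k_new), 1.0):
            return k_new, psi
        phi, k = psi, k_new
    return k, phi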
Ardiana, Febry; Suciati; Indrayanto, Gunawan
2015-01-01
Valsartan is an antihypertensive drug which selectively inhibits angiotensin receptor type II. Generally, valsartan is available as film-coated tablets. This review summarizes thermal analysis, spectroscopic characteristics (UV, IR, MS, and NMR), polymorphic forms, impurities, and related compounds of valsartan. The methods of analysis of valsartan in pharmaceutical dosage forms and in biological fluids using spectrophotometric, CE, TLC, and HPLC methods are discussed in detail. Both official and nonofficial methods are described. It is recommended to use an LC-MS method for analyzing valsartan in complex matrices such as biological fluids and herbal preparations; in this case, MRM is preferred over SIM. © 2015 Elsevier Inc. All rights reserved.
Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)
2001-01-01
Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are interfaced. This capability rapidly provides the high-fidelity results needed in the early design phase. Moreover, the capability is applicable to the general field of engineering science and mechanics. Hence, it provides a collaborative capability that accounts for interactions among engineering analysis methods.
Buckling analysis for anisotropic laminated plates under combined inplane loads
NASA Technical Reports Server (NTRS)
Viswanathan, A. V.; Tamekuni, M.; Baker, L. L.
1974-01-01
The buckling analysis presented considers rectangular flat or curved general laminates subjected to combined inplane normal and shear loads. Linear theory is used in the analysis. All prebuckling deformations and any initial imperfections are ignored. The analysis method can be readily extended to longitudinally stiffened structures subjected to combined inplane normal and shear loads.
ERIC Educational Resources Information Center
Hussmann, Katja; Grande, Marion; Meffert, Elisabeth; Christoph, Swetlana; Piefke, Martina; Willmes, Klaus; Huber, Walter
2012-01-01
Although generally accepted as an important part of aphasia assessment, detailed analysis of spontaneous speech is rarely carried out in clinical practice mostly due to time limitations. The Aachener Sprachanalyse (ASPA; Aachen Speech Analysis) is a computer-assisted method for the quantitative analysis of German spontaneous speech that allows for…
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
NASA Technical Reports Server (NTRS)
1994-01-01
General Purpose Boundary Element Solution Technology (GPBEST) software employs the boundary element method of mechanical engineering analysis, as opposed to finite element. It is, according to one of its developers, 10 times faster in data preparation and more accurate than other methods. Its use results in less expensive products because the time between design and manufacturing is shortened. A commercial derivative of a NASA-developed computer code, it is marketed by Best Corporation to solve problems in stress analysis, heat transfer, fluid analysis and yielding and cracking of solids. Other applications include designing tractor and auto parts, household appliances and acoustic analysis.
Chiu, Huai-Hsuan; Liao, Hsiao-Wei; Shao, Yu-Yun; Lu, Yen-Shen; Lin, Ching-Hung; Tsai, I-Lin; Kuo, Ching-Hua
2018-08-17
Monoclonal antibody (mAb) drugs have generated much interest in recent years for treating various diseases. Immunoglobulin G (IgG) represents a high percentage of mAb drugs that have been approved by the Food and Drug Administration (FDA). To facilitate therapeutic drug monitoring and pharmacokinetic/pharmacodynamic studies, we developed a general liquid chromatography-tandem mass spectrometry (LC-MS/MS) method to quantify the concentration of IgG-based mAbs in human plasma. Three IgG-based drugs (bevacizumab, nivolumab and pembrolizumab) were selected to demonstrate our method. Protein G beads were used for sample pretreatment due to their universal ability to trap IgG-based drugs. Surrogate peptides that were obtained after trypsin digestion were quantified by using LC-MS/MS. To calibrate sample preparation errors and matrix effects that occur during LC-MS/MS analysis, we used a two-internal-standard (IS) method that includes the IgG-based drug IS tocilizumab and a post-column infused IS. Using two internal standards was found to effectively improve quantification accuracy, which was within 15% for all mAb drugs that were tested at three different concentrations. This general method was validated in terms of its precision, accuracy, linearity and sensitivity for the three demonstration mAb drugs. The successful application of the method to clinical samples demonstrated its applicability in clinical analysis. It is anticipated that this general method could be applied to other mAb-based drugs for use in precision medicine and clinical studies. Copyright © 2018 Elsevier B.V. All rights reserved.
Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen
2010-03-01
The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.
[A competency model of rural general practitioners: theory construction and empirical study].
Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei
2015-04-01
To perform theory construction and empirical study of the competency model of rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, a questionnaire of rural general practitioner competency was constructed. A total of 1458 rural general practitioners in 6 central provinces were surveyed with the questionnaire. The common factors were constructed using the principal component method of exploratory factor analysis and confirmatory factor analysis. The influence of the competency characteristics on working performance was analyzed using regression analysis. The Cronbach's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions included basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits, and psychological adaptability. The explained cumulative total variance was 76.855%. The model fit indices were χ²/df = 1.88, GFI = 0.94, NFI = 0.96, NNFI = 0.98, PNFI = 0.91, RMSEA = 0.068, CFI = 0.97, IFI = 0.97, RFI = 0.96, suggesting good model fit. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioner competency model provides a reference for rural doctor training, order-oriented (targeted) cultivation of medical students for rural areas, and competency-based performance management of rural general practitioners.
NASA Astrophysics Data System (ADS)
Pueyo, Laurent
2016-01-01
A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but they generally create systematic biases in the observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle-subtraction problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the case of the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our novel perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies of ongoing direct imaging surveys.
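For orientation, the following is a minimal sketch of the classical KLIP step that the perturbation analysis above builds on: Karhunen-Loeve modes are formed from the eigen-decomposition of the reference-image covariance matrix, the target image is projected onto the leading modes, and the projection is subtracted. It does not implement the KLIP-FM forward-modeling correction described in the abstract; the function and variable names are illustrative, assuming NumPy only.

import numpy as np

def klip_subtract(target, references, n_modes=5):
    """Classical KLIP: project a target image onto the leading
    Karhunen-Loeve modes of a reference stack and subtract the
    reconstructed speckle pattern.  Images are flattened 1-D arrays."""
    R = np.array(references, dtype=float)            # (n_ref, n_pix), copy
    R -= R.mean(axis=1, keepdims=True)               # remove per-image mean
    cov = R @ R.T                                    # (n_ref, n_ref) covariance
    vals, vecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_modes]         # keep the largest modes
    # Karhunen-Loeve modes in pixel space, normalized by sqrt(eigenvalue)
    Z = (vecs[:, order].T @ R) / np.sqrt(vals[order])[:, None]
    t = target - target.mean()
    model = (t @ Z.T) @ Z                            # projection onto KL modes
    return t - model                                 # residual (speckles removed)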
MAC/GMC Code Enhanced for Coupled Electromagnetothermoelastic Analysis of Smart Composites
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.; Aboudi, Jacob
2002-01-01
Intelligent materials are those that exhibit coupling between their electromagnetic response and their thermomechanical response. This coupling allows smart materials to react mechanically (e.g., an induced displacement) to applied electrical or magnetic fields (for instance). These materials find many important applications in sensors, actuators, and transducers. Recently interest has arisen in the development of smart composites that are formed via the combination of two or more phases, one or more of which is a smart material. To design with and utilize smart composites, designers need theories that predict the coupled smart behavior of these materials from the electromagnetothermoelastic properties of the individual phases. The micromechanics model known as the generalized method of cells (GMC) has recently been extended to provide this important capability. This coupled electromagnetothermoelastic theory has recently been incorporated within NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC). This software package is user friendly and has many additional features that render it useful as a design and analysis tool for composite materials in general, and with its new capabilities, for smart composites as well.
Conformal mapping for multiple terminals
Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao
2016-01-01
Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746
Rock, Adam J.; Coventry, William L.; Morgan, Methuen I.; Loi, Natasha M.
2016-01-01
Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal et al., 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof et al., 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to discuss critically how using eLearning systems might engage psychology students in research methods and statistics. First, we critically appraise definitions of eLearning. Second, we examine numerous important pedagogical principles associated with effectively teaching research methods and statistics using eLearning systems. Subsequently, we provide practical examples of our own eLearning-based class activities designed to engage psychology students to learn statistical concepts such as Factor Analysis and Discriminant Function Analysis. Finally, we discuss general trends in eLearning and possible futures that are pertinent to teachers of research methods and statistics in psychology. PMID:27014147
NASA Astrophysics Data System (ADS)
Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.
2016-05-01
Impervious surfaces are useful indicators of urbanization impacts on water resources. Effective impervious area (EIA), the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter for determining actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating the EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least squares model and a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least squares (OLS) and weighted least squares (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to directly connected impervious area (DCIA) measured from maps and are shown to be consistent with DCIA values. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the current method's analysis of rainfall-runoff data. The WLS method is more robust than the OLS method and generates results that are different from, and more precise than, the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.
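A minimal sketch of the kind of weighted least-squares fit described above, assuming paired event rainfall and runoff depths and a simple 1/P weighting as a stand-in for whatever heteroscedasticity model the authors actually use; the slope of the fitted line is read as the EIA fraction. Function names and the example depths are illustrative, not from the paper.

import numpy as np

def estimate_eia_wls(P, Q, weights=None):
    """Fit Q = f*P + b by weighted least squares; the slope f is read as the
    effective-impervious-area fraction.  P, Q are event rainfall and runoff
    depths (same units) for events judged to be EIA-dominated."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w = np.ones_like(P) if weights is None else np.asarray(weights, float)
    X = np.column_stack([P, np.ones_like(P)])
    W = np.diag(w)
    f, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ Q)
    return f, b

# Example with made-up event depths (mm); weights ~ 1/P damp the
# larger-variance big events (one plausible heteroscedasticity model).
P = np.array([5.0, 12.0, 20.0, 8.0, 30.0])
Q = np.array([1.1, 2.9, 5.2, 1.8, 7.9])
f, b = estimate_eia_wls(P, Q, weights=1.0 / P)
print(f"EIA fraction ~ {f:.2f}")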
Malacrida, Leonel; Gratton, Enrico; Jameson, David M
2016-01-01
In this note, we present a discussion of the advantages and scope of model-free analysis methods applied to the popular solvatochromic probe LAURDAN, which is widely used as an environmental probe to study dynamics and structure in membranes. In particular, we compare and contrast the generalized polarization approach with the spectral phasor approach. To illustrate our points we utilize several model membrane systems containing pure lipid phases and, in some cases, cholesterol or surfactants. We demonstrate that the spectral phasor method offers definitive advantages in the case of complex systems. PMID:27182438
Event by event analysis and entropy of multiparticle systems
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.
2000-04-01
The coincidence method of measuring the entropy of a system, proposed some time ago by Ma, is generalized to include systems out of equilibrium. It is suggested that the method can be adapted to analyze multiparticle states produced in high-energy collisions.
Large scale intercomparison of aerosol trace element analysis by different analytical methods
NASA Astrophysics Data System (ADS)
Bombelka, E.; Richter, F.-W.; Ries, H.; Wätjen, U.
1984-04-01
The general agreement of PIXE analysis with other methods (INAA, XRF, AAS, OES-ICP, and PhAA) is very good based on the analysis of filter pieces taken from 250 aerosol samples. It is better than 5% for Pb and Zn, better than 10% for V, Cr, and Mn, indicating that the accuracy of PIXE analysis can be within 10%. For elements such as Cd and Sb, difficult to analyze by PIXE because of their low mass content in the sample, the agreement is given mainly by the reproducibility of the method (20% to 30%). Similar agreement is found for sulfur, after taking account of the depth distribution of the aerosol in the filter.
NASA Astrophysics Data System (ADS)
He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong
2016-09-01
We construct high-order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high-order processed method allows a larger time step size in numerical integrations.
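The paper's high-order processed schemes are not reproduced here, but the following nonrelativistic sketch illustrates the underlying splitting idea: the Lorentz-force flow is decomposed into a drift, an electric kick, and an exact rotation about the magnetic field, each of which preserves phase-space volume, so their symmetric composition does as well. The field values, sign convention, and function name are assumptions for illustration only.

import numpy as np

def lorentz_split_step(x, v, E, B, q_over_m, dt):
    """One symmetric (Strang-type) split step for dx/dt = v,
    dv/dt = (q/m)(E + v x B).  Each sub-flow (drift, E-kick, B-rotation)
    preserves phase-space volume, hence so does the composition."""
    x = x + 0.5 * dt * v                      # half drift
    v = v + 0.5 * dt * q_over_m * E           # half electric kick
    b = np.linalg.norm(B)
    if b > 0:                                 # exact gyration about B
        n = B / b
        theta = q_over_m * b * dt             # gyration angle (sign per charge)
        v_par = np.dot(v, n) * n
        v_perp = v - v_par
        v = v_par + np.cos(theta) * v_perp - np.sin(theta) * np.cross(n, v_perp)
    v = v + 0.5 * dt * q_over_m * E           # half electric kick
    x = x + 0.5 * dt * v                      # half drift
    return x, v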
Uncertainties in predicting solar panel power output
NASA Technical Reports Server (NTRS)
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
Electrostatics of crossed arrays of strips.
Danicki, Eugene
2010-07-01
The BIS-expansion method is widely applied in the analysis of SAW devices. Its generalization is presented for two planar periodic systems of perfectly conducting strips arranged perpendicularly on both sides of a dielectric layer. The generalized method can be applied in the evaluation of capacitances of strips on printed circuit boards and certain microwave devices, but primarily it may help in the evaluation of 2-D piezoelectric sensors and actuators, with row and column addressing of their elements, and also piezoelectric bulk wave resonators.
Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne
2016-01-05
In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodology work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including the multivariate analysis of variance (MANOVA), principal component analysis (PCA), generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and to specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
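For reference, the classical Fisher combination test mentioned above can be sketched in a few lines; it assumes independent per-phenotype p-values, which is exactly the assumption the proposed method relaxes. The example p-values are illustrative.

import numpy as np
from scipy import stats

def fisher_combination(pvalues):
    """Classical Fisher combination: T = -2*sum(ln p_i) ~ chi-square with
    2k degrees of freedom under independence of the k p-values."""
    p = np.asarray(pvalues, float)
    T = -2.0 * np.sum(np.log(p))
    df = 2 * p.size
    return T, stats.chi2.sf(T, df)

# One SNP tested against three phenotypes (illustrative values)
T, p_comb = fisher_combination([0.04, 0.11, 0.008])
print(T, p_comb)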
Stability analysis of nonlinear autonomous systems - General theory and application to flutter
NASA Technical Reports Server (NTRS)
Smith, L. L.; Morino, L.
1975-01-01
The analysis makes use of a singular perturbation method, the multiple time scaling. Concepts of stable and unstable limit cycles are introduced. The solution is obtained in the form of an asymptotic expansion. Numerical results are presented for the nonlinear flutter of panels and airfoils in supersonic flow. The approach used is an extension of a method for analyzing nonlinear panel flutter reported by Morino (1969).
A kernel regression approach to gene-gene interaction detection for case-control studies.
Larson, Nicholas B; Schaid, Daniel J
2013-11-01
Gene-gene interactions are increasingly being addressed as a potentially important contributor to the variability of complex traits. Consequently, attention has moved beyond single-locus analysis of association to more complex genetic models. Although several single-marker approaches toward interaction analysis have been developed, such methods suffer from very high testing dimensionality and do not take advantage of existing information, notably the definition of genes as functional units. Here, we propose a comprehensive family of gene-level score tests for identifying genetic elements of disease risk, in particular pairwise gene-gene interactions. Using kernel machine methods, we devise score-based variance component tests under a generalized linear mixed model framework. We conducted simulations based upon coalescent genetic models to evaluate the performance of our approach under a variety of disease models. These simulations indicate that our methods are generally higher powered than alternative gene-level approaches and at worst competitive with exhaustive SNP-level (where SNP is single-nucleotide polymorphism) analyses. Furthermore, we observe that simulated epistatic effects resulted in significant marginal testing results for the involved genes regardless of whether or not true main effects were present. We detail the benefits of our methods and discuss potential genome-wide analysis strategies for gene-gene interaction analysis in a case-control study design. © 2013 WILEY PERIODICALS, INC.
Modeling and Analysis of Wrinkled Membranes: An Overview
NASA Technical Reports Server (NTRS)
Yang, B.; Ding, H.; Lou, M.; Fang, H.; Broduer, Steve (Technical Monitor)
2001-01-01
Thin-film membranes are basic elements of a variety of space inflatable/deployable structures. Wrinkling degrades the performance and reliability of these membrane structures, and hence has been a topic of continued interest. Wrinkling analysis of membranes with general geometry and arbitrary boundary conditions is quite challenging. The objective of this presentation is twofold. First, the existing models of wrinkled membranes and related numerical solution methods are reviewed. The important issues to be discussed are the capability of a membrane model to characterize taut, wrinkled and slack states of membranes in a consistent and physically reasonable manner; the ability of a wrinkling analysis method to predict the formation and growth of wrinkled regions, and to determine out-of-plane deformation and wrinkle waves; the convergence of a numerical solution method for wrinkling analysis; and the compatibility of a wrinkling analysis with general-purpose finite element codes. Based on this review, several open issues in modeling and analysis of wrinkled membranes to be addressed in future research are summarized. The second objective of this presentation is to introduce a newly developed membrane model with two variable parameters (2-VP model) and an associated parametric finite element method (PFEM) for wrinkling analysis. The innovations and advantages of the proposed membrane model and PFEM-based wrinkling analysis are: (1) via a unified stress-strain relation, the 2-VP model treats the taut, wrinkled, and slack states of membranes consistently; (2) the PFEM-based wrinkling analysis has guaranteed convergence; (3) the 2-VP model along with PFEM is capable of predicting membrane out-of-plane deformations; and (4) the PFEM can be integrated into any existing finite element code. Preliminary numerical examples are also included in this presentation to demonstrate the 2-VP model and the PFEM-based wrinkling analysis approach.
Engine dynamic analysis with general nonlinear finite element codes
NASA Technical Reports Server (NTRS)
Adams, M. L.; Padovan, J.; Fertis, D. G.
1991-01-01
A general engine dynamic analysis as a standard design study computational tool is described for the prediction and understanding of complex engine dynamic behavior. Improved definition of engine dynamic response provides valuable information and insights leading to reduced maintenance and overhaul costs on existing engine configurations. Application of advanced engine dynamic simulation methods provides a considerable cost reduction in the development of new engine designs by eliminating some of the trial and error process done with engine hardware development.
Austin, Peter C
2018-01-01
Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures (e.g., active treatment vs. control). The generalized propensity score is an extension of the propensity score for use with quantitative exposures (e.g., dose or quantity of medication, income, years of education). A crucial component of any propensity score analysis is balance assessment. This entails assessing the degree to which conditioning on the propensity score (via matching, weighting, or stratification) has balanced measured baseline covariates between exposure groups. Methods for balance assessment have been well described and are frequently implemented when using the propensity score with binary exposures. However, there is a paucity of information on how to assess baseline covariate balance when using the generalized propensity score. We describe how methods based on the standardized difference can be adapted for use with quantitative exposures when using the generalized propensity score. We also describe a method based on assessing the correlation between the quantitative exposure and each covariate in the sample when weighted using generalized propensity score-based weights. We conducted a series of Monte Carlo simulations to evaluate the performance of these methods. We also compared two different methods of estimating the generalized propensity score: ordinary least squares regression and the covariate balancing propensity score method. We illustrate the application of these methods using data on patients hospitalized with a heart attack, with the quantitative exposure being creatinine level.
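A rough sketch of the correlation-based balance diagnostic described above, assuming the generalized propensity score is estimated with an ordinary least-squares normal model for the exposure and that stabilized weights are formed from the marginal and conditional exposure densities. The covariate balancing propensity score alternative is not shown; variable names and the simulated data are illustrative only.

import numpy as np
from scipy import stats

def gps_weights(exposure, covariates):
    """Stabilized weights from a normal linear model for a quantitative
    exposure (one common way of estimating the generalized propensity score)."""
    X = np.column_stack([np.ones(len(exposure)), covariates])
    beta, *_ = np.linalg.lstsq(X, exposure, rcond=None)
    resid = exposure - X @ beta
    sigma = resid.std(ddof=X.shape[1])
    dens_cond = stats.norm.pdf(exposure, loc=X @ beta, scale=sigma)
    dens_marg = stats.norm.pdf(exposure, loc=exposure.mean(),
                               scale=exposure.std(ddof=1))
    return dens_marg / dens_cond

def weighted_corr(x, y, w):
    """Weighted Pearson correlation; values near zero indicate balance."""
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    return cov / np.sqrt(np.average((x - mx) ** 2, weights=w) *
                         np.average((y - my) ** 2, weights=w))

# Balance check: weighted exposure-covariate correlations on simulated data
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 3))                      # baseline covariates
A = Z @ [0.5, -0.3, 0.2] + rng.normal(size=500)    # quantitative exposure
w = gps_weights(A, Z)
print([round(weighted_corr(A, Z[:, j], w), 3) for j in range(3)])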
The Regular Education Initiative: What Do Three Groups of Education Professionals Think?
ERIC Educational Resources Information Center
Davis, Jane C.; Maheady, Larry
1991-01-01
A survey of general education teachers, special education teachers, and building principals in Michigan assessed their agreement with the Regular Education Initiative (REI) goals and methods. Analysis of the 605 responses indicated general agreement with REI goals and procedures. Most educators believed that pragmatic factors posed the greatest…
Demographic Accounting and Model-Building. Education and Development Technical Reports.
ERIC Educational Resources Information Center
Stone, Richard
This report describes and develops a model for coordinating a variety of demographic and social statistics within a single framework. The framework proposed, together with its associated methods of analysis, serves both general and specific functions. The general aim of these functions is to give numerical definition to the pattern of society and…
Explicit Solutions and Bifurcations for a Class of Generalized Boussinesq Wave Equation
NASA Astrophysics Data System (ADS)
Ma, Zhi-Min; Sun, Yu-Huai; Liu, Fu-Sheng
2013-03-01
In this paper, the generalized Boussinesq wave equation $u_{tt} - u_{xx} + a(u^m)_{xx} + b u_{xxxx} = 0$ is investigated using bifurcation theory and the method of phase portrait analysis. Under different parameter conditions, exact explicit parametric representations for solitary wave solutions and periodic wave solutions are obtained.
On insomnia analysis using methods of artificial intelligence
NASA Astrophysics Data System (ADS)
Wasiewicz, P.; Skalski, M.
2011-10-01
Insomnia is generally defined as a subjective report of difficulty falling asleep, difficulty staying asleep, early awakening, or nonrestorative sleep. It is one of the most common health complaints among the general population. In this paper we try to find relationships between different insomnia cases and predisposing, precipitating, and perpetuating factors, followed by pharmacological treatment.
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs a response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: (i) identification of unknown parameters, and (ii) identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
ERIC Educational Resources Information Center
Smith, Walter T., Jr.; Patterson, John M.
1980-01-01
Discusses analytical methods selected from current research articles. Groups information by topics of general interest, including acids, aldehydes and ketones, nitro compounds, phenols, and thiols. Cites 97 references. (CS)
Zheng, Jieru; Kang, Youn K; Therien, Michael J; Beratan, David N
2005-08-17
Donor-acceptor interactions were investigated in a series of unusually rigid, cofacially compressed pi-stacked porphyrin-bridge-quinone systems. The two-state generalized Mulliken-Hush (GMH) approach was used to compute the coupling matrix elements. The theoretical coupling values evaluated with the GMH method were obtained from configuration interaction calculations using the INDO/S method. The results of this analysis are consistent with the comparatively soft distance dependences observed for both the charge separation and charge recombination reactions. Theoretical studies of model structures indicate that the phenyl units dominate the mediation of the donor-acceptor coupling and that the relatively weak exponential decay of rate with distance arises from the compression of this pi-electron stack.
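For reference, the standard two-state GMH expression for the donor-acceptor electronic coupling is reproduced below in generic notation (not quoted from the paper): $\Delta E_{12}$ is the adiabatic vertical energy gap, $\mu_{12}$ the adiabatic transition dipole moment, and $\Delta\mu_{12}$ the difference of the adiabatic state dipole moments.

\[ H_{DA} \;=\; \frac{\mu_{12}\,\Delta E_{12}}{\sqrt{(\Delta\mu_{12})^{2} + 4\,\mu_{12}^{2}}} \]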
The bench scientist's guide to RNA-Seq analysis
USDA-ARS?s Scientific Manuscript database
RNA sequencing (RNA-Seq) is emerging as a highly accurate method to quantify transcript abundance. However, analyses of the large data sets obtained by sequencing the entire transcriptome of organisms have generally been performed by bioinformatic specialists. Here we outline a methods strategy desi...
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general-purpose software system are emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
Sidorov, V L; Shvetsova, I V; Isakova, I V
2007-01-01
The authors give a comparative analysis of Russian and foreign forensic medical methods for species identification of blood from stains on material evidence and bone fragments. It is shown that for this purpose it is feasible to apply human immunoglobulin G (IgG) and solid-phase enzyme immunoassay (EIA) with the kit "IgG general-EIA-BEST". In comparison with the methods used in Russia, this method is more sensitive and convenient for objective registration and computer processing. The results of the experiments showed that it is possible to use the kit "IgG general-EIA-BEST" in forensic medicine for species identification of blood from stains on material evidence and bone fragments.
The response analysis of fractional-order stochastic system via generalized cell mapping method.
Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei
2018-01-01
This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundary, saddle, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of probability density functions of the response. The fractional-order ϕ^6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to give the implementations of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.
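A compact sketch of the generalized cell mapping idea referred to above, written for a one-dimensional stochastic map rather than the fractional-order oscillators of the paper: each cell is sampled, the samples are mapped one step, and the landing fractions define a Markov transition matrix whose iteration evolves the probability density over cells. All names and the stand-in dynamics are illustrative.

import numpy as np

def gcm_transition_matrix(f, lo, hi, n_cells=100, samples=20, rng=None):
    """Generalized cell mapping for a 1-D stochastic map x_{k+1} = f(x_k, rng):
    sample points in each cell, map them once, and record the fractions
    landing in each image cell to build a Markov transition matrix P."""
    rng = rng or np.random.default_rng(0)
    edges = np.linspace(lo, hi, n_cells + 1)
    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        x = rng.uniform(edges[i], edges[i + 1], samples)
        y = np.clip(f(x, rng), lo, hi - 1e-12)       # keep images in the domain
        idx = np.digitize(y, edges) - 1
        for j in idx:
            P[i, j] += 1.0 / samples
    return P

# Stand-in dynamics: a noisy map on [0, 1] (not the fractional oscillator)
f = lambda x, rng: 3.8 * x * (1 - x) + 0.01 * rng.normal(size=x.shape)
P = gcm_transition_matrix(f, 0.0, 1.0)
p = np.full(P.shape[0], 1.0 / P.shape[0])            # initial cell probabilities
for _ in range(200):                                  # evolve the density
    p = p @ P
print(p.argmax())                                     # most probable cell index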
NASA Astrophysics Data System (ADS)
Pan'kov, A. A.
1997-05-01
The feasibility of using a generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures has been examined. Using this method, the problem is reduced to the solution of simpler special averaged problems for composites with single inclusions and corresponding transition layers in the medium examined. The dimensions of the transition layers are defined by correlation radii of the random structure of the composite, while the heterogeneous elastic properties of the transition layers take account of the probabilities of variation in the size and configuration of the inclusions using averaged special indicator functions. Results are given for a numerical calculation of the averaged indicator functions and an analysis of the effect of micropores in the matrix-fiber interface region on the effective elastic properties of unidirectional fiberglass-epoxy using the generalized self-consistent method, and are compared with experimental data and reported solutions.
Analysis of cohort studies with multivariate and partially observed disease classification data.
Chatterjee, Nilanjan; Sinha, Samiran; Diver, W Ryan; Feigelson, Heather Spencer
2010-09-01
Complex diseases like cancers can often be classified into subtypes using various pathological and molecular traits of the disease. In this article, we develop methods for analysis of disease incidence in cohort studies incorporating data on multiple disease traits using a two-stage semiparametric Cox proportional hazards regression model that allows one to examine the heterogeneity in the effect of the covariates by the levels of the different disease traits. For inference in the presence of missing disease traits, we propose a generalization of an estimating equation approach for handling missing cause of failure in competing-risk data. We prove asymptotic unbiasedness of the estimating equation method under a general missing-at-random assumption and propose a novel influence-function-based sandwich variance estimator. The methods are illustrated using simulation studies and a real data application involving the Cancer Prevention Study II nutrition cohort.
Postmus, Douwe; Tervonen, Tommi; van Valkenhoef, Gert; Hillege, Hans L; Buskens, Erik
2014-09-01
A standard practice in health economic evaluation is to monetize health effects by assuming a certain societal willingness-to-pay per unit of health gain. Although the resulting net monetary benefit (NMB) is easy to compute, the use of a single willingness-to-pay threshold assumes expressibility of the health effects on a single non-monetary scale. To relax this assumption, this article proves that the NMB framework is a special case of the more general stochastic multi-criteria acceptability analysis (SMAA) method. Specifically, as SMAA does not restrict the number of criteria to two and also does not require the marginal rates of substitution to be constant, there are problem instances for which the use of this more general method may result in a better understanding of the trade-offs underlying the reimbursement decision-making problem. This is illustrated by applying both methods in a case study related to infertility treatment.
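For context, the conventional incremental net monetary benefit used in the comparison can be written as follows, with λ the societal willingness-to-pay per unit of health gain and ΔE, ΔC the incremental effects and costs; the new strategy is preferred when the quantity is positive. The notation is chosen here for illustration and is not quoted from the article.

\[ \mathrm{NMB} \;=\; \lambda\,\Delta E \;-\; \Delta C \]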
NASA Astrophysics Data System (ADS)
Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup
2017-06-01
This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. Traditional multi-scale Retinex commonly employs three scales, which limits its application scenarios. We extend this to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
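A minimal sketch of a multi-scale Retinex with an arbitrary number of scales followed by histogram truncation, in the spirit of the abstract. The Gaussian surround scales, weights, and clipping fraction are illustrative defaults rather than the paper's values, and SciPy's gaussian_filter is assumed for the surround function.

import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250, 500), weights=None, clip=0.01):
    """Multi-scale Retinex with an arbitrary number of scales followed by
    histogram truncation.  img is a 2-D float array with non-negative values."""
    img = np.asarray(img, float) + 1.0           # avoid log(0)
    w = (np.full(len(sigmas), 1.0 / len(sigmas))
         if weights is None else np.asarray(weights, float))
    msr = np.zeros_like(img)
    for wi, s in zip(w, sigmas):                 # sum of single-scale Retinex outputs
        msr += wi * (np.log(img) - np.log(gaussian_filter(img, s) + 1e-6))
    # histogram truncation: clip the extreme tails, then remap to [0, 255]
    lo, hi = np.quantile(msr, [clip, 1.0 - clip])
    msr = np.clip(msr, lo, hi)
    return (msr - lo) / (hi - lo + 1e-12) * 255.0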
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
15 CFR 806.13 - Miscellaneous.
Code of Federal Regulations, 2010 CFR
2010-01-01
... ECONOMIC ANALYSIS, DEPARTMENT OF COMMERCE DIRECT INVESTMENT SURVEYS § 806.13 Miscellaneous. (a) Accounting methods and records. Generally accepted U.S. accounting principles should be followed. Corporations should... filed with the Bureau of Economic Analysis; this should be the copy with the address label if such a...
Global Optimality of the Successive Maxbet Algorithm.
ERIC Educational Resources Information Center
Hanafi, Mohamed; ten Berge, Jos M. F.
2003-01-01
It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)
Methods for Human Dehydration Measurement
NASA Astrophysics Data System (ADS)
Trenz, Florian; Weigel, Robert; Hagelauer, Amelie
2018-03-01
The aim of this article is to give a broad overview of current methods for the identification and quantification of the human dehydration level. Starting from the most common clinical assessments, including vital parameters and general patient appearance, more quantifiable results from chemical laboratory and electromagnetic measurement methods are reviewed. Different analysis methods throughout the electromagnetic spectrum, ranging from direct current (DC) conductivity measurements up to neutron activation analysis (NAA), are discussed on the basis of published results. Finally, promising technologies that allow a dehydration assessment system to be integrated in a compact and portable way are highlighted.
Suggestions for presenting the results of data analyses
Anderson, David R.; Link, William A.; Johnson, Douglas H.; Burnham, Kenneth P.
2001-01-01
We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management.
Analysis of a boron-carbide-drum-controlled critical reactor experiment
NASA Technical Reports Server (NTRS)
Mayo, W. T.
1972-01-01
In order to validate methods and cross sections used in the neutronic design of compact fast-spectrum reactors for generating electric power in space, an analysis of a boron-carbide-drum-controlled critical reactor was made. For this reactor the transport analysis gave generally satisfactory results. The calculated multiplication factor for the most detailed calculation was only 0.7-percent Delta k too high. Calculated reactivity worth of the control drums was $11.61 compared to measurements of $11.58 by the inverse kinetics methods and $11.98 by the inverse counting method. Calculated radial and axial power distributions were in good agreement with experiment.
2017-03-23
solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or... method flag: designates the method by which the changed/new assignment problem instance is solved. methodFlag = 0: SMAWarmstart returns a matching... of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified
Structural synthesis: Precursor and catalyst
NASA Technical Reports Server (NTRS)
Schmit, L. A.
1984-01-01
More than twenty five years have elapsed since it was recognized that a rather general class of structural design optimization tasks could be properly posed as an inequality constrained minimization problem. It is suggested that, independent of primary discipline area, it will be useful to think about: (1) posing design problems in terms of an objective function and inequality constraints; (2) generating design oriented approximate analysis methods (giving special attention to behavior sensitivity analysis); (3) distinguishing between decisions that lead to an analysis model and those that lead to a design model; (4) finding ways to generate a sequence of approximate design optimization problems that capture the essential characteristics of the primary problem, while still having an explicit algebraic form that is matched to one or more of the established optimization algorithms; (5) examining the potential of optimum design sensitivity analysis to facilitate quantitative trade-off studies as well as participation in multilevel design activities. It should be kept in mind that multilevel methods are inherently well suited to a parallel mode of operation in computer terms or to a division of labor between task groups in organizational terms. Based on structural experience with multilevel methods general guidelines are suggested.
Soil Sampling Operating Procedure
EPA Region 4 Science and Ecosystem Support Division (SESD) document that describes general and specific procedures, methods, and considerations when collecting soil samples for field screening or laboratory analysis.
Sediment Sampling Operating Procedure
EPA Region 4 Science and Ecosystem Support Division (SESD) document that describes general and specific procedures, methods, and considerations when collecting sediment samples for field screening or laboratory analysis.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
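As a small illustration of the efficient model-discrimination statistics named above, the Gaussian-error forms of AICc, BIC, and generalized cross-validation can be computed directly from each model's error sum of squares; the exact variants and observation weighting used in the paper may differ, and the model names and SSE values below are hypothetical.

import numpy as np

def information_criteria(sse, n, k):
    """AICc, BIC, and generalized cross-validation for a fitted model with k
    parameters and n observations (Gaussian-error forms; constants common to
    all candidate models are dropped)."""
    aic = n * np.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(sse / n) + k * np.log(n)
    gcv = (sse / n) / (1.0 - k / n) ** 2
    return aicc, bic, gcv

# Rank two hypothetical alternative models of the same data set
for name, sse, k in [("homogeneous K", 12.4, 3), ("zoned K", 9.8, 6)]:
    print(name, information_criteria(sse, n=50, k=k))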
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions and representations of hydrological behavior. Meanwhile, this trend is accompanied by increased model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE) has been widely used in uncertainty analysis for hydrological models, combining Monte Carlo sampling with Bayesian estimation. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms that use iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
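A minimal GLUE sketch along the lines described above, using uniform random sampling, a Nash-Sutcliffe informal likelihood, and a behavioural threshold; in the study's adaptive variant the uniform draw would be replaced by candidates generated with the genetic, differential evolution, or shuffled complex evolution algorithms. The simulate(theta) interface, threshold, and quantile levels are assumptions made for illustration.

import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of values under normalized weights."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(q, cdf, v)

def glue(simulate, bounds, observed, n_samples=2000, threshold=0.5, rng=None):
    """Minimal GLUE: uniform prior sampling, Nash-Sutcliffe efficiency as the
    informal likelihood, rejection of non-behavioural parameter sets, and
    likelihood-weighted predictive bounds.  simulate(theta) must return a
    series the same length as observed; assumes at least one behavioural set."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds, float).T          # bounds = [(lo_i, hi_i), ...]
    obs = np.asarray(observed, float)
    keep_theta, keep_like, keep_sim = [], [], []
    for _ in range(n_samples):
        theta = rng.uniform(lo, hi)
        sim = np.asarray(simulate(theta), float)
        nse = 1.0 - np.mean((sim - obs) ** 2) / np.var(obs)
        if nse > threshold:                        # behavioural set
            keep_theta.append(theta); keep_like.append(nse); keep_sim.append(sim)
    w = np.asarray(keep_like)
    sims = np.asarray(keep_sim)
    lower = [weighted_quantile(sims[:, t], w, 0.05) for t in range(sims.shape[1])]
    upper = [weighted_quantile(sims[:, t], w, 0.95) for t in range(sims.shape[1])]
    return np.asarray(keep_theta), w, np.asarray(lower), np.asarray(upper)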
1992-04-10
and passive tracer concentrations, and their cross correlations have generally been used to estimate the magnitude of dispersive atmospheric transport... of gravity waves and turbulence... unstable, i.e., strange. For waves or even limit cycle motion about fixed attractors, self-similarity does not occur. Pertinent to time series analysis, this
ERIC Educational Resources Information Center
Wetzel, Angela Payne
2011-01-01
Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across…
Intelligence, Surveillance, and Reconnaissance Fusion for Coalition Operations
2008-07-01
classification of the targets of interest. The MMI features extracted in this manner have two properties that provide a sound justification for... are generalizations of well-known feature extraction methods such as Principal Components Analysis (PCA) and Independent Component Analysis (ICA)... augment (without degrading performance) a large class of generic fusion processes. Keywords: Ontologies, Classifications, Feature extraction, Feature analysis
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
From Image Analysis to Computer Vision: Motives, Methods, and Milestones.
1998-07-01
images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial...photographs; but by the 1960’s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence...scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision
Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N
2014-12-01
Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields conclusions similar to those of the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessarily high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
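The reference analysis mentioned above is a variance-based (Sobol) sensitivity analysis estimated with Saltelli-type sampling. A self-contained sketch of that estimator is given below, assuming a hypothetical three-parameter test function in place of the pulse wave propagation model.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Hypothetical (Ishigami-like) test function standing in for the real model."""
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

d, n = 3, 20000
A = rng.uniform(-np.pi, np.pi, (n, d))          # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # replace column i of A by column i of B
    fABi = model(ABi)
    S1.append(np.mean(fB * (fABi - fA)) / var)       # first-order index (Saltelli 2010 estimator)
    ST.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # total-order index (Jansen estimator)

print("first-order:", np.round(S1, 3), "total-order:", np.round(ST, 3))
```

Parameters with small total-order indices are candidates for fixing; large first-order indices flag parameters to prioritize during personalization.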
NASA Astrophysics Data System (ADS)
Suparmi, A.; Cari, C.; Lilis Elviyanti, Isnaini
2018-04-01
The relativistic energy and wave functions of zero-spin particles described by the Klein-Gordon equation with a separable noncentral cylindrical potential were analysed using the asymptotic iteration method (AIM). In cylindrical coordinates, the Klein-Gordon equation for the spin-symmetry case was reduced, by separation of variables, to three one-dimensional Schrodinger-like equations. The relativistic energy was calculated numerically with Matlab software, and the general unnormalized wave function was expressed in terms of hypergeometric functions.
A study of commuter airplane design optimization
NASA Technical Reports Server (NTRS)
Keppel, B. V.; Eysink, H.; Hammer, J.; Hawley, K.; Meredith, P.; Roskam, J.
1978-01-01
The usability of the general aviation synthesis program (GASP) was enhanced by the development of separate computer subroutines which can be added as a package to this assembly of computerized design methods or used as a separate subroutine program to compute the dynamic longitudinal, lateral-directional stability characteristics for a given airplane. Currently available analysis methods were evaluated to ascertain those most appropriate for the design functions which the GASP computerized design program performs. Methods for providing proper constraint and/or analysis functions for GASP were developed as well as the appropriate subroutines.
Review of Recent Methodological Developments in Group-Randomized Trials: Part 2-Analysis.
Turner, Elizabeth L; Prague, Melanie; Gallis, John A; Li, Fan; Murray, David M
2017-07-01
In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have updated that review with developments in analysis of the past 13 years, with a companion article to focus on developments in design. We discuss developments in the topics of the earlier review (e.g., methods for parallel-arm GRTs, individually randomized group-treatment trials, and missing data) and in new topics, including methods to account for multiple-level clustering and alternative estimation methods (e.g., augmented generalized estimating equations, targeted maximum likelihood, and quadratic inference functions). In addition, we describe developments in analysis of alternative group designs (including stepped-wedge GRTs, network-randomized trials, and pseudocluster randomized trials), which require clustering to be accounted for in their design and analysis.
A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays
Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.
2013-01-01
Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without need for a priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often arise in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method for the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
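As a hedged illustration of the general idea (not the paper's own derivation), the first-order "delta method" below propagates an input covariance matrix, including a nonzero input correlation, through a hypothetical model and checks the result against correlated Monte Carlo sampling.

```python
import numpy as np

def f(x):
    """Hypothetical model response; stands in for a general model y = f(x1, x2, x3)."""
    return x[0] ** 2 + 3.0 * x[0] * x[1] + np.exp(0.5 * x[2])

mu = np.array([1.0, 2.0, 0.5])
Sigma = np.array([[0.04, 0.01, 0.00],   # input covariance with x1-x2 correlation
                  [0.01, 0.09, 0.00],
                  [0.00, 0.00, 0.01]])

# First-order propagation: Var[f] ~= g^T Sigma g, with g the gradient of f at mu
eps = 1e-6
g = np.array([(f(mu + eps * np.eye(3)[i]) - f(mu - eps * np.eye(3)[i])) / (2 * eps)
              for i in range(3)])
var_first_order = g @ Sigma @ g

# Monte Carlo check using correlated (multivariate normal) input samples
rng = np.random.default_rng(2)
samples = rng.multivariate_normal(mu, Sigma, size=100_000)
var_mc = np.var(np.apply_along_axis(f, 1, samples))
print(var_first_order, var_mc)
```

Setting the off-diagonal covariance to zero and re-running shows directly how much of the response variance is attributable to the input correlation.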
NASA Technical Reports Server (NTRS)
Bratanow, T.; Ecer, A.
1973-01-01
A general computational method for analyzing unsteady flow around pitching and plunging airfoils was developed. The finite element method was applied in developing an efficient numerical procedure for the solution of equations describing the flow around airfoils. The numerical results were employed in conjunction with computer graphics techniques to produce visualization of the flow. The investigation involved mathematical model studies of flow in two phases: (1) analysis of a potential flow formulation and (2) analysis of an incompressible, unsteady, viscous flow from Navier-Stokes equations.
Kettler, Susanne; Kennedy, Marc; McNamara, Cronan; Oberdörfer, Regina; O'Mahony, Cian; Schnabel, Jürgen; Smith, Benjamin; Sprong, Corinne; Faludi, Roland; Tennant, David
2015-08-01
Uncertainty analysis is an important component of dietary exposure assessments in order to understand correctly the strength and limits of their results. Often, standard screening procedures are applied in a first step, which results in conservative estimates. If those screening procedures indicate a potential exceedance of health-based guidance values, more refined models are applied within the tiered approach. However, the sources and types of uncertainties in deterministic and probabilistic models can vary or differ. A key objective of this work has been the mapping of different sources and types of uncertainties to better understand how to best use uncertainty analysis to generate a more realistic comprehension of dietary exposure. In dietary exposure assessments, uncertainties can be introduced by knowledge gaps about the exposure scenario, the parameters and the model itself. With this mapping, general and model-independent uncertainties have been identified and described, as well as those which can be introduced and influenced by the specific model during the tiered approach. This analysis identifies general uncertainties common to point estimates (screening or deterministic methods) and probabilistic exposure assessment methods. To provide further clarity, general sources of uncertainty affecting many dietary exposure assessments should be separated from model-specific uncertainties. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
SPAR improved structure-fluid dynamic analysis capability, phase 2
NASA Technical Reports Server (NTRS)
Pearson, M. L.
1984-01-01
An efficient and general method of analyzing a coupled dynamic system of fluid flow and elastic structures is investigated. The improvement of Structural Performance Analysis and Redesign (SPAR) code is summarized. All error codes are documented and the SPAR processor/subroutine cross reference is included.
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs for conducting nonlinear exponential regression analysis, developed for two general types of exponential models, are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.
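The Taylor-series linearization described above is the core of a Gauss-Newton iteration. A minimal modern sketch, assuming a single-exponential model y = a*exp(b*t) and synthetic data rather than the original programs:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 4, 40)
y = 2.5 * np.exp(-0.7 * t) + rng.normal(0, 0.02, t.size)   # synthetic observations

a, b = 1.0, -0.1                         # initial guess
for _ in range(20):
    r = y - a * np.exp(b * t)            # residuals of the current fit
    J = np.column_stack([np.exp(b * t),              # dy/da  (first-order Taylor term)
                         a * t * np.exp(b * t)])     # dy/db
    delta, *_ = np.linalg.lstsq(J, r, rcond=None)    # linearized least squares step
    a, b = a + delta[0], b + delta[1]

print(a, b)   # should recover approximately (2.5, -0.7)
```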
Compensation of hospital-based physicians.
Steinwald, B
1983-01-01
This study is concerned with methods of compensating hospital-based physicians (HBPs) in five medical specialties: anesthesiology, pathology, radiology, cardiology, and emergency medicine. Data on 2232 nonfederal, short-term general hospitals came from a mail questionnaire survey conducted in Fall 1979. The data indicate that numerous compensation methods exist but these methods, without much loss of precision, can be reduced to salary, percentage of department revenue, and fee-for-service. When HBPs are compensated by salary or percentage methods, most patient billing is conducted by the hospital. In contrast, most fee-for-service HBPs bill their patients directly. Determinants of HBP compensation methods are investigated via multinomial logit analysis. This analysis indicates that choice of HBP compensation methods is sensitive to a number of hospital characteristics and attributes of both the hospital and physicians' services markets. The empirical findings are discussed in light of past conceptual and empirical research on physician compensation, and current policy issues in the health services sector. PMID:6841112
Multivariate generalized multifactor dimensionality reduction to detect gene-gene interactions
2013-01-01
Background Recently, one of the greatest challenges in genome-wide association studies is to detect gene-gene and/or gene-environment interactions for common complex human diseases. Ritchie et al. (2001) proposed multifactor dimensionality reduction (MDR) method for interaction analysis. MDR is a combinatorial approach to reduce multi-locus genotypes into high-risk and low-risk groups. Although MDR has been widely used for case-control studies with binary phenotypes, several extensions have been proposed. One of these methods, a generalized MDR (GMDR) proposed by Lou et al. (2007), allows adjusting for covariates and applying to both dichotomous and continuous phenotypes. GMDR uses the residual score of a generalized linear model of phenotypes to assign either high-risk or low-risk group, while MDR uses the ratio of cases to controls. Methods In this study, we propose multivariate GMDR, an extension of GMDR for multivariate phenotypes. Jointly analysing correlated multivariate phenotypes may have more power to detect susceptible genes and gene-gene interactions. We construct generalized estimating equations (GEE) with multivariate phenotypes to extend generalized linear models. Using the score vectors from GEE we discriminate high-risk from low-risk groups. We applied the multivariate GMDR method to the blood pressure data of the 7,546 subjects from the Korean Association Resource study: systolic blood pressure (SBP) and diastolic blood pressure (DBP). We compare the results of multivariate GMDR for SBP and DBP to the results from separate univariate GMDR for SBP and DBP, respectively. We also applied the multivariate GMDR method to the repeatedly measured hypertension status from 5,466 subjects and compared its result with those of univariate GMDR at each time point. Results Results from the univariate GMDR and multivariate GMDR in two-locus model with both blood pressures and hypertension phenotypes indicate best combinations of SNPs whose interaction has significant association with risk for high blood pressures or hypertension. Although the test balanced accuracy (BA) of multivariate analysis was not always greater than that of univariate analysis, the multivariate BAs were more stable with smaller standard deviations. Conclusions In this study, we have developed multivariate GMDR method using GEE approach. It is useful to use multivariate GMDR with correlated multiple phenotypes of interests. PMID:24565370
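A hedged sketch of the core GMDR step described above - pooling covariate-adjusted residual scores within each multi-locus genotype cell and labelling cells high- or low-risk - using simulated data and a plain linear model in place of the full GEE machinery (all column names and effect sizes are illustrative assumptions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "snp1": rng.integers(0, 3, n),       # genotypes coded 0/1/2
    "snp2": rng.integers(0, 3, n),
    "age":  rng.normal(50, 10, n),
})
# hypothetical continuous phenotype with an snp1 x snp2 interaction effect
df["y"] = 0.02 * df["age"] + 0.4 * (df["snp1"] * df["snp2"] == 4) + rng.normal(0, 1, n)

# Step 1: residual scores from a covariate-only model (the GLM step of GMDR)
X = np.column_stack([np.ones(n), df["age"]])
beta, *_ = np.linalg.lstsq(X, df["y"], rcond=None)
df["score"] = df["y"] - X @ beta

# Step 2: pool scores within each two-locus genotype cell and label the cell
cells = df.groupby(["snp1", "snp2"])["score"].sum()
high_risk = cells[cells > 0].index       # positive pooled score -> high-risk cell
print(high_risk.tolist())
```

In the multivariate extension described in the abstract, the per-subject score would be the GEE score vector for the correlated phenotypes rather than a single linear-model residual.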
Element analysis: a wavelet-based method for analysing time-localized events in noisy time series.
Lilly, Jonathan M
2017-04-01
A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized 'events'. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event's 'region of influence' within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry.
Lee, Kam L; Ireland, Timothy A; Bernardo, Michael
2016-06-01
This is the first part of a two-part study in benchmarking the performance of fixed digital radiographic general X-ray systems. This paper concentrates on reporting findings related to quantitative analysis techniques used to establish comparative image quality metrics. A systematic technical comparison of the evaluated systems is presented in part two of this study. A novel quantitative image quality analysis method is presented with technical considerations addressed for peer review. The novel method was applied to seven general radiographic systems with four different makes of radiographic image receptor (12 image receptors in total). For the System Modulation Transfer Function (sMTF), the use of a grid was found to reduce veiling glare and decrease roll-off. The major contributor to sMTF degradation was found to be focal spot blurring. For the System Normalised Noise Power Spectrum (sNNPS), it was found that all systems examined had similar sNNPS responses. A mathematical model is presented to explain how the use of a stationary grid may cause a difference between horizontal and vertical sNNPS responses.
Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1998-01-01
This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification by using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration became reduced. The closed-loop simulation considered both local-model identification, with measured vibration feedback, and global-model identification, with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
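For illustration, a minimal sketch of the LMS-filter idea applied to identifying a frequency-domain transfer matrix, assuming a hypothetical 2x2 complex "plant" driven by random harmonic inputs (not the report's rotorcraft data or its exact formulation):

```python
import numpy as np

rng = np.random.default_rng(5)
T_true = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # unknown transfer matrix

T_hat = np.zeros((2, 2), dtype=complex)   # identified model
mu = 0.05                                  # LMS step size
for _ in range(2000):
    u = rng.normal(size=2) + 1j * rng.normal(size=2)                # harmonic control input
    noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))
    z = T_true @ u + noise                                          # measured vibration response
    e = z - T_hat @ u                                               # prediction error
    T_hat = T_hat + mu * np.outer(e, np.conj(u))                    # LMS gradient update
print("max identification error:", np.abs(T_true - T_hat).max())
```

A Kalman-filter variant would replace the fixed step size with a gain computed from propagated error covariances, which is what makes it better suited to a slowly varying transfer matrix.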
Analysis of Electrowetting Dynamics with Level Set Method
NASA Astrophysics Data System (ADS)
Park, Jun Kwon; Hong, Jiwoo; Kang, Kwan Hyoung
2009-11-01
Electrowetting is a versatile tool to handle tiny droplets and forms a backbone of digital microfluidics. Numerical analysis is necessary to fully understand the dynamics of electrowetting, especially in designing electrowetting-based liquid lenses and reflective displays. We developed a numerical method to analyze the general contact-line problems, incorporating dynamic contact angle models. The method was applied to the analysis of spreading process of a sessile droplet for step input voltages in electrowetting. The result was compared with experimental data and analytical result which is based on the spectral method. It is shown that contact line friction significantly affects the contact line motion and the oscillation amplitude. The pinning process of contact line was well represented by including the hysteresis effect in the contact angle models.
Johnson, Heath E; Haugh, Jason M
2013-12-02
This unit focuses on the use of total internal reflection fluorescence (TIRF) microscopy and image analysis methods to study the dynamics of signal transduction mediated by class I phosphoinositide 3-kinases (PI3Ks) in mammalian cells. The first four protocols cover live-cell imaging experiments, image acquisition parameters, and basic image processing and segmentation. These methods are generally applicable to live-cell TIRF experiments. The remaining protocols outline more advanced image analysis methods, which were developed in our laboratory for the purpose of characterizing the spatiotemporal dynamics of PI3K signaling. These methods may be extended to analyze other cellular processes monitored using fluorescent biosensors. Copyright © 2013 John Wiley & Sons, Inc.
ERIC Educational Resources Information Center
Gomez, Rapson
2012-01-01
Objective: Generalized partial credit model, which is based on item response theory (IRT), was used to test differential item functioning (DIF) for the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.), inattention (IA), and hyperactivity/impulsivity (HI) symptoms across boys and girls. Method: To accomplish this, parents completed…
ERIC Educational Resources Information Center
Broderick, John
Suggestions are offered to help college-level teachers of sociology develop and implement programs which are consistent with the recent trend toward traditionalism in general higher education--a renewed interest in the traditional disciplines such as history, economics, and language studies. Suggestions center around two teaching methods--critical…
Coping as a Predictor of Burnout and General Health in Therapists Working in ABA Schools
ERIC Educational Resources Information Center
Griffith, G. M.; Barbakou, A.; Hastings, R. P.
2014-01-01
Background: Little is known about the work-related well-being of applied behaviour analysis (ABA) therapists who work in school-based contexts and deliver ABA interventions to children with autism. Methods: A questionnaire on work-related stress (burnout), general distress, perceived supervisor support and coping was completed by 45 ABA therapists…
ERIC Educational Resources Information Center
D'Amelia, Ronald; Franks, Thomas; Nirode, William F.
2007-01-01
In first-year general chemistry undergraduate courses, thermodynamics and thermal properties such as melting points and changes in enthalpy ([Delta]H) and entropy ([Delta]S) of phase changes are frequently discussed. Typically, classical calorimetric methods of analysis are used to determine [Delta]H of reactions. Differential scanning calorimetry…
A generalization of random matrix theory and its application to statistical physics.
Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H
2017-02-01
To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples, taken from inflation rates and from air pressure data for 95 US cities.
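A small numerical illustration of the problem ARRMT addresses (a sketch, not the authors' implementation): independent AR(1) series produce correlation-matrix eigenvalues that spill past the i.i.d. Marchenko-Pastur bound, so auto-correlation must be accounted for before interpreting large eigenvalues as genuine cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, phi = 50, 500, 0.6        # 50 series, 500 time points, AR(1) coefficient

# Simulate mutually independent AR(1) series: any cross-correlation is pure noise
x = np.zeros((N, T))
eps = rng.normal(size=(N, T))
for t in range(1, T):
    x[:, t] = phi * x[:, t - 1] + eps[:, t]

C = np.corrcoef(x)                      # sample cross-correlation matrix
eig = np.linalg.eigvalsh(C)

# Classical Marchenko-Pastur upper edge, valid for i.i.d. data (phi = 0)
q = N / T
lam_max_iid = (1 + np.sqrt(q)) ** 2
print("largest eigenvalue:", eig.max(), " i.i.d. MP bound:", lam_max_iid)
# With phi > 0 the empirical spectrum extends beyond the i.i.d. bound; this is
# the effect the autoregressive correction in ARRMT is meant to absorb.
```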
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Fikes, R. E.; Chaitin, L. J.; Hart, P. E.; Duda, R. O.; Nilsson, N. J.
1971-01-01
A program of research in the field of artificial intelligence is presented. The research areas discussed include automatic theorem proving, representations of real-world environments, problem-solving methods, the design of a programming system for problem-solving research, techniques for general scene analysis based upon television data, and the problems of assembling an integrated robot system. Major accomplishments include the development of a new problem-solving system that uses both formal logical inference and informal heuristic methods, the development of a method of automatic learning by generalization, and the design of the overall structure of a new complete robot system. Eight appendices to the report contain extensive technical details of the work described.
Progressive matrix cracking in off-axis plies of a general symmetric laminate
NASA Technical Reports Server (NTRS)
Thomas, David J.; Wetherhold, Robert C.
1993-01-01
A generalized shear-lag model is derived to determine the average through-the-thickness stress state present in a layer undergoing transverse matrix cracking, by extending the method of Lee and Daniels (1991) to a general symmetric multilayered system. The model is capable of considering cracking in layers of arbitrary orientation, states of general in-plane applied loading, and laminates with a general symmetric stacking sequence. The model is included in a computer program designed for probabilistic laminate analysis, and the results are compared to those determined with the ply drop-off technique.
Soil Gas Sampling Operating Procedure
EPA Region 4 Science and Ecosystem Support Division (SESD) document that describes general and specific procedures, methods, and considerations when collecting soil gas samples for field screening or laboratory analysis.
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational-methods (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
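The costate (adjoint) idea can be illustrated on a small discrete analogue: one extra linear solve yields the sensitivity of a functional with respect to a parameter, which is then checked against finite differences. This is a generic sketch with an assumed toy system, not the Euler-equation formulation of the report.

```python
import numpy as np

# State equation A(p) u = b, functional J(p) = c^T u(p).
def A(p):
    return np.array([[2.0 + p, -1.0, 0.0],
                     [-1.0, 2.0 + p, -1.0],
                     [0.0, -1.0, 2.0 + p]])

b = np.array([1.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 0.0])
p = 0.3

u = np.linalg.solve(A(p), b)           # state solve
lam = np.linalg.solve(A(p).T, c)       # costate (adjoint) solve
dA_dp = np.eye(3)                      # dA/dp for this toy parameterization
dJ_dp_adjoint = -lam @ dA_dp @ u       # dJ/dp = -lambda^T (dA/dp) u

# Finite-difference check (the "discrete sensitivity analysis" alternative)
h = 1e-6
dJ_dp_fd = (c @ np.linalg.solve(A(p + h), b) - c @ np.linalg.solve(A(p - h), b)) / (2 * h)
print(dJ_dp_adjoint, dJ_dp_fd)
```

The computational advantage mirrors the one reported in the abstract: the adjoint route needs one extra solve per functional regardless of the number of design parameters, whereas finite differencing needs one (or two) solves per parameter.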
Neuzil, C.E.; Cooley, C.; Silliman, Stephen E.; Bredehoeft, J.D.; Hsieh, P.A.
1981-01-01
In Part I a general analytical solution for the transient pulse test was presented. Part II presents a graphical method for analyzing data from a test to obtain the hydraulic properties of the sample. The general solution depends on both hydraulic conductivity and specific storage and, in theory, analysis of the data can provide values for both of these hydraulic properties. However, in practice, one of two limiting cases may apply in which case it is possible to calculate only hydraulic conductivity or the product of hydraulic conductivity times specific storage. In this paper we examine the conditions when both hydraulic parameters can be calculated. The analyses of data from two tests are presented. In Appendix I the general solution presented in Part I is compared with an earlier analysis, in which compressive storage in the sample is assumed negligible, and the error in calculated hydraulic conductivity due to this simplifying assumption is examined. © 1981.
Vibration analysis of rotor systems using reduced subsystem models
NASA Technical Reports Server (NTRS)
Fan, Uei-Jiun; Noah, Sherif T.
1989-01-01
A general impedance method using reduced submodels has been developed for the linear dynamic analysis of rotor systems. Formulated in terms of either modal or physical coordinates of the subsystems, the method enables imbalance responses at specific locations of the rotor systems to be efficiently determined from a small number of 'master' degrees of freedom. To demonstrate the capability of this impedance approach, the Space Shuttle Main Engine high-pressure oxygen turbopump has been investigated to determine the bearing loads due to imbalance. Based on the same formulation, an eigenvalue analysis has been performed to study the system stability. A small 5-DOF model has been utilized to illustrate the application of the method to eigenvalue analysis. Because of its inherent characteristics of allowing formulation of reduced submodels, the impedance method can significantly increase the computational speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in
2016-10-01
In this work, we present a consistent Hamiltonian analysis of cosmological perturbations for generalized non-canonical scalar fields. In order to do so, we introduce a new phase-space variable that is uniquely defined for different non-canonical scalar fields. We also show that this is the simplest and most efficient way of expressing the Hamiltonian. We extend the Hamiltonian approach of [1] to non-canonical scalar fields and obtain a unique expression for the speed of sound in terms of the phase-space variable. In order to invert the generalized phase-space Hamilton's equations to the Euler-Lagrange equations of motion, we prescribe a general inversion formula and show that our approach for non-canonical scalar fields is consistent. We also obtain the third and fourth order interaction Hamiltonians for generalized non-canonical scalar fields and briefly discuss the extension of our method to generalized Galilean scalar fields.
General aviation aircraft interior noise problem: Some suggested solutions
NASA Technical Reports Server (NTRS)
Roskam, J.; Navaneethan, R.
1984-01-01
Laboratory investigation of sound transmission through panels and the use of modern data analysis techniques applied to actual aircraft is used to determine methods to reduce general aviation interior noise. The experimental noise reduction characteristics of stiffened flat and curved panels with damping treatment are discussed. The experimental results of double-wall panels used in the general aviation industry are given. The effects of skin panel material, fiberglass insulation and trim panel material on the noise reduction characteristics of double-wall panels are investigated. With few modifications, the classical sound transmission theory can be used to design the interior noise control treatment of aircraft. Acoustic intensity and analysis procedures are included.
Proceedings of the 6. international conference on stability and handling of liquid fuels. Volume 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giles, H.N.
Volume 2 of these proceedings contain 42 papers arranged under the following topical sections: Fuel blending and compatibility; Middle distillates; Microbiology; Alternative fuels; General topics (analytical methods, tank remediation, fuel additives, storage stability); and Poster presentations (analysis methods, oxidation kinetics, health problems).
New Method for Analysis of Multiple Anthelmintic Residues in Animal Tissue
USDA-ARS?s Scientific Manuscript database
For the first time, 39 of the major anthelmintics can be detected in one rapid and sensitive LC-MS/MS method, including the flukicides, which have been generally overlooked in surveillance programs. Utilizing the QuEChERS approach, residues were extracted from liver and milk using acetonitrile, sod...
The use of the wavelet cluster analysis for asteroid family determination
NASA Technical Reports Server (NTRS)
Benjoya, Phillippe; Slezak, E.; Froeschle, Claude
1992-01-01
Asteroid family determination has long been dependent on the analysis method used. A new cluster analysis based on the wavelet transform allows an automatic definition of families with a degree of significance versus randomness. This method is in fact rather general and can be applied to any kind of structural analysis; here we concentrate on its main features. The analysis was performed on the set of 4100 asteroid proper elements computed by Milani and Knezevic (see Milani and Knezevic 1990). Twenty-one families were found, and the influence of the chosen metric was tested. The results were compared to those of Zappala et al. (see Zappala et al. 1990), obtained with a completely different method applied to the same set of data. For the first time, good agreement was found between the results of both methods, not only for the large, well-known families but also for the smallest ones.
The computation of three-dimensional flows using unstructured grids
NASA Technical Reports Server (NTRS)
Morgan, K.; Peraire, J.; Peiro, J.; Hassan, O.
1991-01-01
A general method is described for automatically discretizing, into unstructured assemblies of tetrahedra, the three-dimensional solution domains of complex shape which are of interest in practical computational aerodynamics. An algorithm for the solution of the compressible Euler equations which can be implemented on such general unstructured tetrahedral grids is described. This is an explicit cell-vertex scheme which follows a general Taylor-Galerkin philosophy. The approach is employed to compute a transonic inviscid flow over a standard wing and the results are shown to compare favorably with experimental observations. As a more practical demonstration, the method is then applied to the analysis of inviscid flow over a complete modern fighter configuration. The effect of using mesh adaptivity is illustrated when the method is applied to the solution of high speed flow in an engine inlet.
Proposal for a recovery prediction method for patients affected by acute mediastinitis
2012-01-01
Background This study attempts to find a method for predicting the risk of death in patients affected by acute mediastinitis; no such tool is described in the available literature for this serious disease. Methods The study comprised 44 consecutive cases of acute mediastinitis. General anamnesis and biochemical data were included. Factor analysis was used to extract the risk characteristics of the patients. The most valuable results were obtained for 8 parameters, all collected within a few hours of admission, which were selected for further statistical analysis. Three factors reached an eigenvalue >1. The clinical interpretations of these combined statistical factors are: Factor 1 - protein status (serum total protein, albumin, and hemoglobin level); Factor 2 - inflammatory status (white blood cells, CRP, procalcitonin); and Factor 3 - general risk (age, number of coexisting diseases). Threshold values of the prediction factors were estimated by means of statistical analysis (factor analysis, Statgraphics Centurion XVI). Results The final prediction for a patient is constructed as a simultaneous evaluation of all factor scores. A high probability of death is predicted if the Factor 1 value decreases with a simultaneous increase in Factors 2 and 3. The diagnostic power of the proposed method was high [sensitivity = 90%, specificity = 64%]; for Factor 1 [SNC = 87%, SPC = 79%], for Factor 2 [SNC = 87%, SPC = 50%], and for Factor 3 [SNC = 73%, SPC = 71%]. Conclusion The proposed prediction method appears to be a useful emergency signal in the management of patients affected by acute mediastinitis. PMID:22574625
Measurement of residual stresses by the moire method
NASA Astrophysics Data System (ADS)
Sciammarella, C. A.; Albertazzi, A., Jr.
Three different applications of the moire method to the determination of residual stresses and strains are presented. The three applications take advantage of the property of gratings to record changes of the surface they are printed on. One application deals with thermal residual stresses, another with contact residual stresses, and the third is a generalization of the blind-hole technique. This last application is based on a computer-assisted moire technique and on a generalization of the quasi-heterodyne techniques of fringe pattern analysis.
Kahan, Brennan C; Harhay, Michael O
2015-12-01
Adjustment for center in multicenter trials is recommended when there are between-center differences or when randomization has been stratified by center. However, common methods of analysis (such as fixed-effects, Mantel-Haenszel, or stratified Cox models) often require a large number of patients or events per center to perform well. We reviewed 206 multicenter randomized trials published in four general medical journals to assess the average number of patients and events per center and determine whether appropriate methods of analysis were used in trials with few patients or events per center. The median number of events per center/treatment arm combination for trials using a binary or survival outcome was 3 (interquartile range, 1-10). Sixteen percent of trials had less than 1 event per center/treatment combination, 50% fewer than 3, and 63% fewer than 5. Of the trials which adjusted for center using a method of analysis which requires a large number of events per center, 6% had less than 1 event per center-treatment combination, 25% fewer than 3, and 50% fewer than 5. Methods of analysis that allow for few events per center, such as random-effects models or generalized estimating equations (GEEs), were rarely used. Many multicenter trials contain few events per center. Adjustment for center using random-effects models or GEE with model-based (non-robust) standard errors may be beneficial in these scenarios. Copyright © 2015 Elsevier Inc. All rights reserved.
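As a hedged illustration of one of the approaches recommended above, the sketch below fits a GEE with an exchangeable working correlation and model-based (naive) standard errors to simulated multicenter binary-outcome data using statsmodels; the data, column names, and effect sizes are illustrative assumptions, not taken from the review.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_centers, per_center = 30, 8                        # deliberately few patients per center
center = np.repeat(np.arange(n_centers), per_center)
treat = rng.integers(0, 2, center.size)
center_eff = rng.normal(0, 0.5, n_centers)[center]   # between-center differences
p = 1 / (1 + np.exp(-(-2.0 + 0.5 * treat + center_eff)))   # low event rate -> few events per center
y = rng.binomial(1, p)
df = pd.DataFrame({"y": y, "treat": treat, "center": center})

# GEE accounting for clustering by center; "naive" gives model-based (non-robust) SEs
model = smf.gee("y ~ treat", groups="center", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit(cov_type="naive")
print(res.summary())
```

A random-effects (mixed) logistic model would be the other option the review highlights for trials with few events per center.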
A general multiblock Euler code for propulsion integration. Volume 1: Theory document
NASA Technical Reports Server (NTRS)
Chen, H. C.; Su, T. Y.; Kao, T. J.
1991-01-01
A general multiblock Euler solver was developed for the analysis of flow fields over geometrically complex configurations either in free air or in a wind tunnel. In this approach, the external space around a complex configuration was divided into a number of topologically simple blocks, so that surface-fitted grids and an efficient flow solution algorithm could be easily applied in each block. The computational grid in each block is generated using a combination of algebraic and elliptic methods. A grid generation/flow solver interface program was developed to facilitate the establishment of block-to-block relations and the boundary conditions for each block. The flow solver utilizes a finite volume formulation and an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. The generality of the method was demonstrated through the analysis of two complex configurations at various flow conditions. Results were compared to available test data. Two accompanying volumes, user manuals for the preparation of multi-block grids (vol. 2) and for the Euler flow solver (vol. 3), provide information on input data format and program execution.
Edge detection and localization with edge pattern analysis and inflection characterization
NASA Astrophysics Data System (ADS)
Jiang, Bo
2012-05-01
In general, edges are considered to be abrupt changes or discontinuities in two-dimensional image intensity distributions. The accuracy of front-end edge detection methods in image processing impacts the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions found in natural images, this research on one-dimensional edge pattern analysis proposes an edge detection algorithm built on three basic edge patterns: ramp, impulse, and step. Based on mathematical analysis, general rules for edge representation, founded on the classification of edge types into the three categories ramp, impulse, and step (RIS), are developed to reduce detection and localization errors, in particular the "double edge" effect that is an important drawback of the derivative method. However, when applying one-dimensional edge patterns to two-dimensional image processing, a new issue naturally arises: the edge detector should correctly mark inflections or junctions of edges. Research on human visual perception of objects, together with information theory, indicates that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line. Research on scene perception likewise suggests that contours carrying more information are a more important factor in determining the success of scene categorization. Inflections and junctions are therefore extremely useful features, whose accurate description and reconstruction are significant for solving correspondence problems in computer vision. Accordingly, in addition to edge pattern analysis, inflection and junction characterization is used to extend the traditional derivative edge detection algorithm. Experiments were conducted to test these propositions about edge detection and localization accuracy, and the results support the idea that the proposed improvements are effective in enhancing the accuracy of edge detection and localization.
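A small one-dimensional sketch of the derivative-based localiser that the RIS rules improve upon, using assumed synthetic step, ramp and impulse profiles: the impulse profile yields two gradient maxima, which is exactly the "double edge" effect mentioned above.

```python
import numpy as np

x = np.arange(200)
step = (x >= 100).astype(float)
ramp = np.clip((x - 90) / 20.0, 0.0, 1.0)
impulse = np.exp(-0.5 * ((x - 100) / 2.0) ** 2)

def gradient_maxima(signal, sigma=3.0, margin=20):
    """Classical derivative method: maxima of a derivative-of-Gaussian response."""
    t = np.arange(-15, 16)
    g = np.exp(-0.5 * (t / sigma) ** 2)
    dg = -t / sigma ** 2 * g                       # derivative-of-Gaussian kernel
    mag = np.abs(np.convolve(signal, dg, mode="same"))
    peaks = np.where((mag[1:-1] > mag[:-2]) &
                     (mag[1:-1] >= mag[2:]) &
                     (mag[1:-1] > 0.1 * mag.max()))[0] + 1
    # ignore zero-padding artefacts at the array boundaries
    return peaks[(peaks > margin) & (peaks < signal.size - margin)]

for name, s in [("step", step), ("ramp", ramp), ("impulse", impulse)]:
    print(name, gradient_maxima(s))
```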
Studies of Horst's Procedure for Binary Data Analysis.
ERIC Educational Resources Information Center
Gray, William M.; Hofmann, Richard J.
Most responses to educational and psychological test items may be represented in binary form. However, such dichotomously scored items present special problems when an analysis of correlational interrelationships among the items is attempted. Two general methods of analyzing binary data are proposed by Horst to partial out the effects of…
ERIC Educational Resources Information Center
Campbell, Dean J.; Xia, Younan
2007-01-01
The physical phenomenon of plasmons and the techniques that build upon them are discussed. Plasmon-enhanced applications are well-suited for introduction in physical chemistry and instrumental analysis classes and some methods of fabrication and analysis of plasmon-producing structures are simple for use in labs in general, physical and inorganic…
Analysis of a Suspected Drug Sample
ERIC Educational Resources Information Center
Schurter, Eric J.; Zook-Gerdau, Lois Anne; Szalay, Paul
2011-01-01
This general chemistry laboratory uses differences in solubility to separate a mixture of caffeine and aspirin while introducing the instrumental analysis methods of GCMS and FTIR. The drug mixture is separated by partitioning aspirin and caffeine between dichloromethane and aqueous base. TLC and reference standards are used to identify aspirin…
ERIC Educational Resources Information Center
Temel, Senar
2016-01-01
This study aims to analyse prospective chemistry teachers' cognitive structures related to the subject of oxidation and reduction through a flow map method. Purposeful sampling method was employed in this study, and 8 prospective chemistry teachers from a group of students who had taken general chemistry and analytical chemistry courses were…
Field Branches Quality System and Technical Procedures: This document describes general and specific procedures, methods and considerations to be used and observed when collecting soil gas samples for field screening or laboratory analysis.
Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
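For context, the one-dimensional analysis that Part II extends can be sketched in a few lines: for a Fourier mode on a uniform grid, the discrete symbols of the central-difference advection and diffusion operators give the numerical phase speed and effective (discrete) diffusivity relative to their exact counterparts. This is a generic illustration, not the paper's code.

```python
import numpy as np

# Semi-discrete Fourier analysis of u_t + c u_x = nu u_xx with second-order
# central differences on a uniform grid of spacing h.
c, nu, h = 1.0, 0.01, 0.1
kh = np.linspace(1e-6, np.pi, 200)                  # non-dimensional wavenumber k*h

adv_symbol = 1j * np.sin(kh) / h                    # symbol of central difference for u_x
diff_symbol = -4.0 * np.sin(kh / 2) ** 2 / h ** 2   # symbol of central difference for u_xx

# Ratios of numerical to exact phase speed and diffusivity
phase_speed_ratio = (adv_symbol.imag * h) / kh      # c*/c  = sin(kh)/(kh)
diffusivity_ratio = (-diff_symbol * h ** 2) / kh ** 2   # nu*/nu = [sin(kh/2)/(kh/2)]^2

for frac in (0.25, 0.5, 0.75):
    i = int(frac * (kh.size - 1))
    print(f"kh = {kh[i]:.2f}   c*/c = {phase_speed_ratio[i]:.3f}   nu*/nu = {diffusivity_ratio[i]:.3f}")
```

Both ratios fall below one as kh grows, i.e. poorly resolved waves travel too slowly and are under-diffused; the two-dimensional analysis in this paper adds the dependence on propagation direction and grid aspect ratio.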
A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems
NASA Astrophysics Data System (ADS)
Liu, X.; Banerjee, J. R.
2017-03-01
A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Any arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.
Variational asymptotic modeling of composite dimensionally reducible structures
NASA Astrophysics Data System (ADS)
Yu, Wenbin
A general framework to construct accurate reduced models for composite dimensionally reducible structures (beams, plates and shells) was formulated based on two theoretical foundations: decomposition of the rotation tensor and the variational asymptotic method. Two engineering software systems, Variational Asymptotic Beam Sectional Analysis (VABS, new version) and Variational Asymptotic Plate and Shell Analysis (VAPAS), were developed. Several restrictions found in previous work on beam modeling were removed in the present effort. A general formulation of Timoshenko-like cross-sectional analysis was developed, through which the shear center coordinates and a consistent Vlasov model can be obtained. Recovery relations are given to recover the asymptotic approximations for the three-dimensional field variables. A new version of VABS has been developed, which is a much improved program in comparison to the old one. Numerous examples are given for validation. A Reissner-like model being as asymptotically correct as possible was obtained for composite plates and shells. After formulating the three-dimensional elasticity problem in intrinsic form, the variational asymptotic method was used to systematically reduce the dimensionality of the problem by taking advantage of the smallness of the thickness. The through-the-thickness analysis is solved by a one-dimensional finite element method to provide the stiffnesses as input for the two-dimensional nonlinear plate or shell analysis as well as recovery relations to approximately express the three-dimensional results. The known fact that there exists more than one theory that is asymptotically correct to a given order is adopted to cast the refined energy into a Reissner-like form. A two-dimensional nonlinear shell theory consistent with the present modeling process was developed. The engineering computer code VAPAS was developed and inserted into DYMORE to provide an efficient and accurate analysis of composite plates and shells. Numerical results are compared with the exact solutions, and the excellent agreement proves that one can use VAPAS to analyze composite plates and shells efficiently and accurately. In conclusion, rigorous modeling approaches were developed for composite beams, plates and shells within a general framework. No such consistent and general treatment is found in the literature. The associated computer programs VABS and VAPAS are envisioned to have many applications in industry.
Williams Element with Generalized Degrees of Freedom for Fracture Analysis of Multiple-Cracked Beam
NASA Astrophysics Data System (ADS)
Xu, Hua; Wei, Quyang; Yang, Lufeng
2017-10-01
In this paper, the finite element method with generalized degrees of freedom (FEDOFs) is used to calculate the stress intensity factor (SIF) of a multiple-cracked beam and to analyse the effect of minor cracks on the main-crack SIF in different cases. The Williams element is insensitive to the size of the singular region, so computational efficiency is greatly improved. Example analyses confirm that the SIF near the crack tip can be obtained directly through FEDOFs; the results agree well with ANSYS solutions and have satisfactory accuracy.
Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis
2015-01-01
We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440
NASA Astrophysics Data System (ADS)
Black, Joshua A.; Knowles, Peter J.
2018-06-01
The performance of quasi-variational coupled-cluster (QV) theory applied to the calculation of activation and reaction energies has been investigated. A statistical analysis of results obtained for six different sets of reactions has been carried out, and the results have been compared to those from standard single-reference methods. In general, the QV methods lead to increased activation energies and larger absolute reaction energies compared to those obtained with traditional coupled-cluster theory.
Probabilistic structural analysis methods and applications
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.
1988-01-01
An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
Uncertainty characterization approaches for risk assessment of DBPs in drinking water: a review.
Chowdhury, Shakhawat; Champagne, Pascale; McLellan, P James
2009-04-01
The management of risk from disinfection by-products (DBPs) in drinking water has become a critical issue over the last three decades. The areas of concern for risk management studies include (i) human health risk from DBPs, (ii) disinfection performance, (iii) technical feasibility (maintenance, management and operation) of treatment and disinfection approaches, and (iv) cost. Human health risk assessment is typically considered to be the most important phase of the risk-based decision-making or risk management studies. The factors associated with health risk assessment and other attributes are generally prone to considerable uncertainty. Probabilistic and non-probabilistic approaches have both been employed to characterize uncertainties associated with risk assessment. The probabilistic approaches include sampling-based methods (typically Monte Carlo simulation and stratified sampling) and asymptotic (approximate) reliability analysis (first- and second-order reliability methods). Non-probabilistic approaches include interval analysis, fuzzy set theory and possibility theory. However, it is generally accepted that no single method is suitable for the entire spectrum of problems encountered in uncertainty analyses for risk assessment. Each method has its own set of advantages and limitations. In this paper, the feasibility and limitations of different uncertainty analysis approaches are outlined for risk management studies of drinking water supply systems. The findings assist in the selection of suitable approaches for uncertainty analysis in risk management studies associated with DBPs and human health risk.
Markov Logic Networks in the Analysis of Genetic Data
Sakhanenko, Nikita A.
2010-01-01
Abstract Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of influences of each gene and often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying mechanisms. Modeling approaches from the artificial intelligence (AI) field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs we can replicate the results of traditional statistical methods, but we also show that we are able to go beyond finding independent markers linked to a phenotype by using joint inference without an independence assumption. The method is applied to genetic data on yeast sporulation, a complex phenotype with gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method identifies four loci with smaller effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics. PMID:20958249
Wang, Wei; Ackland, David C; McClelland, Jodie A; Webster, Kate E; Halgamuge, Saman
2018-01-01
Quantitative gait analysis is an important tool in objective assessment and management of total knee arthroplasty (TKA) patients. Studies evaluating gait patterns in TKA patients have tended to focus on discrete data such as spatiotemporal information, joint range of motion and peak values of kinematics and kinetics, or consider selected principal components of gait waveforms for analysis. These strategies may not have the capacity to capture small variations in gait patterns associated with each joint across an entire gait cycle, and may ultimately limit the accuracy of gait classification. The aim of this study was to develop an automatic feature extraction method to analyse patterns from high-dimensional autocorrelated gait waveforms. A general linear feature extraction framework was proposed and a hierarchical partial least squares method derived for discriminant analysis of multiple gait waveforms. The effectiveness of this strategy was verified using a dataset of joint angle and ground reaction force waveforms from 43 patients after TKA surgery and 31 healthy control subjects. Compared with principal component analysis and partial least squares methods, the hierarchical partial least squares method achieved generally better classification performance on all possible combinations of waveforms, with the highest classification accuracy. The novel hierarchical partial least squares method proposed is capable of capturing virtually all significant differences between TKA patients and the controls, and provides new insights into data visualization. The proposed framework presents a foundation for more rigorous classification of gait, and may ultimately be used to evaluate the effects of interventions such as surgery and rehabilitation.
NASA Astrophysics Data System (ADS)
Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.
2014-12-01
The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
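A minimal numerical sketch of the idea, assuming a straight-line model y = a*x + b with independent Gaussian errors in both coordinates and using the small-error limit in which the x uncertainty is folded into an effective variance; the fiducial parameters and error levels are invented, and the full latent-variable marginalization described in the abstract is not reproduced.

    # Sketch: Fisher matrix for y = a*x + b with errors in both coordinates,
    # in the small-sigma_x limit where sigma_eff^2 = sigma_y^2 + a^2 * sigma_x^2
    # (a simplified special case of the general treatment in the paper).
    import numpy as np

    a, b = 1.3, 0.4                          # fiducial parameters (illustrative)
    x = np.linspace(0.0, 10.0, 25)
    sigma_x, sigma_y = 0.1, 0.5
    sigma_eff2 = sigma_y**2 + a**2 * sigma_x**2

    # derivatives of the model with respect to (a, b)
    dmu = np.vstack([x, np.ones_like(x)])    # shape (2, N)
    F = dmu @ dmu.T / sigma_eff2             # 2x2 Fisher matrix
    cov = np.linalg.inv(F)
    print("marginalized parameter errors:", np.sqrt(np.diag(cov)))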
Review of Hull Structural Monitoring Systems for Navy Ships
2013-05-01
generally based on the same basic form of S-N curve, different correction methods are used by the various classification societies. ii. Methods for...Likewise there are a number of different methods employed for temperature compensation and these vary depending on the type of gauge, although typically...Analysis, Inc.[30] Figure 8. Examples of different methods of temperature compensation of fibre-optic strain sensors. It is noted in NATO
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
ERIC Educational Resources Information Center
Mosier, Nancy R.
Financial analysis techniques are tools that help managers make sound financial decisions that contribute to general corporate objectives. A literature review reveals that the most commonly used financial analysis techniques are payback time, average rate of return, present value or present worth, and internal rate of return. Despite the success…
Self-learning Monte Carlo method and cumulative update in fermion systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Junwei; Shen, Huitao; Qi, Yang
2017-06-07
In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub “cumulative update”, to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation, while remaining statistically exact. Remarkably, its computational complexity is far less than that of the conventional algorithm with local updates.
2009-01-01
Background Decisions about interim analysis and early stopping of clinical trials, as based on recommendations of Data Monitoring Committees (DMCs), have far reaching consequences for the scientific validity and clinical impact of a trial. Our aim was to evaluate the frequency and quality of the reporting on DMC composition and roles, interim analysis and early termination in pediatric trials. Methods We conducted a systematic review of randomized controlled clinical trials published from 2005 to 2007 in a sample of four general and four pediatric journals. We used full-text databases to identify trials which reported on DMCs, interim analysis or early termination, and included children or adolescents. Information was extracted on general trial characteristics, risk of bias, and a set of parameters regarding DMC composition and roles, interim analysis and early termination. Results 110 of the 648 pediatric trials in this sample (17%) reported on DMC or interim analysis or early stopping, and were included; 68 from general and 42 from pediatric journals. The presence of DMCs was reported in 89 of the 110 included trials (81%); 62 papers, including 46 of the 89 that reported on DMCs (52%), also presented information about interim analysis. No paper adequately reported all DMC parameters, and nine (15%) reported all interim analysis details. Of 32 trials which terminated early, 22 (69%) did not report predefined stopping guidelines and 15 (47%) did not provide information on statistical monitoring methods. Conclusions Reporting on DMC composition and roles, on interim analysis results and on early termination of pediatric trials is incomplete and heterogeneous. We propose a minimal set of reporting parameters that will allow the reader to assess the validity of trial results. PMID:20003383
A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.
Ferrari, Alberto; Comelli, Mario
2016-12-01
In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. Such clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of some methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers; in addition, we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, by contrast, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
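A small simulation in the spirit of the comparison is sketched below: beta-binomial (clustered) counts are generated and fitted with an ordinary binomial GLM in statsmodels, whose Pearson chi-square to degrees-of-freedom ratio then exposes the overdispersion that GLMMs or beta-binomial regression are designed to absorb. The sample sizes, effect size and dispersion parameter are assumptions, not the authors' settings.

    # Sketch: simulate beta-binomial ("clustered" binary) data and fit a plain
    # binomial GLM; a Pearson chi2 / df ratio well above 1 signals overdispersion.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n_subj, n_trials = 40, 20
    group = np.repeat([0, 1], n_subj // 2)
    mu = 1 / (1 + np.exp(-(-0.3 + 0.5 * group)))       # assumed group effect
    p_subj = rng.beta(mu * 8, (1 - mu) * 8)            # subject-level heterogeneity
    successes = rng.binomial(n_trials, p_subj)

    endog = np.column_stack([successes, n_trials - successes])
    exog = sm.add_constant(group.astype(float))
    fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
    print(fit.params)
    print("overdispersion:", fit.pearson_chi2 / fit.df_resid)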
Generality in nanotechnologies and its relationship to economic performance
NASA Astrophysics Data System (ADS)
Gomez Baquero, Fernando
In the history of economic analysis there is perhaps no more important question than the one of how economic development is achieved. For more than a century, economists have explored the role of technology in economic growth but there is still much to be learned about the effect that technologies, in particular emerging ones, have on economic growth and productivity. The objective of this research is to understand the relationship between nanotechnologies and economic growth and productivity, using the theory of General Purpose Technology (GPT)-driven economic growth. To do so, the Generality Index (calculated from patent data) was used to understand the relative pervasiveness of nanotechnologies. The analysis of trends and patterns of the Generality Index, using the largest group of patents since the publication of the NBER Patent Database, indicates that nanotechnologies possess a higher average Generality than other technological groups. Next, the relationship between the Generality Index and Total Factor Productivity (TFP) was studied using econometric analysis. Model estimates indicate that the variation in Generality for the group of nanotechnologies can explain a large proportion of the variation in TFP. However, the explanatory power of the entire set of patents (not just nanotechnologies) is larger and corresponds better to the expected theoretical models. Additionally, there is a negative short-run relationship between Generality and TFP, conflicting with part of the theoretical GPT-models. Finally, the relationship between the Generality of nanotechnologies and policy-driven investment events, such as R&D investments and grant awards, was studied using econometric methods. The statistical evidence suggests that NSF awards are related to technologies with higher Generality, while NIH awards and NNI investments are related to a lower average Generality. Overall, results of this research work indicate that the introduction of pervasive technologies into an economic system sets in motion an interesting series of events that can both increase and decrease productivity and therefore economic growth. The metrics and methods developed in this work emphasize the importance of developing and using new metrics for strategic decision making, both in the private sector and in the public sector.
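The Generality Index referred to here is conventionally computed as one minus the Herfindahl concentration of a patent's forward citations across technology classes. The pandas sketch below shows that calculation on invented citation records; the column names and example patents are assumptions, not the NBER data used in the study.

    # Sketch: Generality Index of a patent = 1 - sum_j s_j^2, where s_j is the
    # share of its forward citations falling in technology class j
    # (the citation records below are invented for illustration).
    import pandas as pd

    citations = pd.DataFrame({
        "cited_patent": ["A", "A", "A", "A", "B", "B", "B"],
        "citing_class": ["C01", "C01", "H01", "B82", "C01", "C01", "C01"],
    })
    counts = citations.groupby(["cited_patent", "citing_class"]).size()
    shares = counts / counts.groupby(level=0).transform("sum")
    generality = 1.0 - (shares ** 2).groupby(level=0).sum()
    print(generality)   # patent A is cited from several classes -> higher generality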
NASA Technical Reports Server (NTRS)
Price J. M.; Ortega, R.
1998-01-01
Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.
A generalized analytical approach to the coupled effect of SMA actuation and elastica deflection
NASA Astrophysics Data System (ADS)
Sreekumar, M.; Singaperumal, M.
2009-11-01
A compliant miniature parallel manipulator made of superelastic nitinol pipe as its central pillar and actuated by three symmetrically attached shape memory alloy (SMA) wires is under development. The mobility of the platform is obtained by the selective actuation of one or two wires at a time. If one wire is actuated, the other two unactuated wires provide the counter effect. Similarly, if two wires are actuated simultaneously or in a differential manner, the third unactuated wire resists the movement of the platform. In earlier work by the authors, the static displacement analysis was presented without considering the effect of unactuated wires. In this contribution, the force-displacement analysis is presented considering the effect of both actuated and unactuated wires. Subsequently, an attempt has been made to obtain a generalized approach from which six types of actuation methods are identified using a group of conditional parameters. Each method leads to a set of large deflection expressions suitable for a particular actuation method. As the large deflection expressions derived for the mechanism are nonlinear and involve interdependent parameters, their simplified form using a parametric approximation has also been obtained using Howell's algorithm. The generalized approach and the solution algorithm developed can be applied to any kind of compliant mechanism having large deflection capabilities, including planar and spatial MEMS devices and stability analysis of long slender columns supported by wires or cables. The procedure developed is also suitable for the static analysis of spatial compliant mechanisms actuated by multiple SMA actuators.
Mechanistic approach to generalized technical analysis of share prices and stock market indices
NASA Astrophysics Data System (ADS)
Ausloos, M.; Ivanova, K.
2002-05-01
Classical technical analysis methods of stock evolution are recalled, i.e. the notion of moving averages and momentum indicators. The moving averages lead to define death and gold crosses, resistance and support lines. Momentum indicators lead the price trend, thus give signals before the price trend turns over. The classical technical analysis investment strategy is thereby sketched. Next, we present a generalization of these tricks drawing on physical principles, i.e. taking into account not only the price of a stock but also the volume of transactions. The latter becomes a time dependent generalized mass. The notion of pressure, acceleration and force are deduced. A generalized (kinetic) energy is easily defined. It is understood that the momentum indicators take into account the sign of the fluctuations, while the energy is geared toward the absolute value of the fluctuations. They have different patterns which are checked by searching for the crossing points of their respective moving averages. The case of IBM evolution over 1990-2000 is used for illustrations.
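A short pandas sketch of the quantities involved is given below: short and long moving averages (whose crossings define the gold and death crosses), a momentum indicator, and a volume-weighted kinetic-energy-like term with volume playing the role of a generalized mass. The synthetic price and volume series, window lengths and lags are assumptions, not the IBM 1990-2000 data used in the paper.

    # Sketch of the generalized technical-analysis quantities on a synthetic series.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    n = 500
    price = pd.Series(100 + np.cumsum(rng.normal(0, 1, n)))
    volume = pd.Series(rng.integers(1_000, 5_000, n).astype(float))

    short_ma = price.rolling(20).mean()              # short moving average
    long_ma = price.rolling(100).mean()              # long moving average
    momentum = price.diff(10)                        # leads the price trend
    energy = 0.5 * volume * price.diff() ** 2        # volume as a generalized mass

    crossings = (np.sign(short_ma - long_ma).diff().abs() == 2)   # gold/death crosses
    print("moving-average crossings:", int(crossings.sum()))
    print("mean generalized kinetic energy:", float(energy.mean()))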
40 CFR 436.31 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., the general definitions, abbreviations and methods of analysis set forth in part 401 of this chapter... may be obtained from the National Climatic Center of the Environmental Data Service, National Oceanic...
puma: a Bioconductor package for propagating uncertainty in microarray analysis.
Pearson, Richard D; Liu, Xuejun; Sanguinetti, Guido; Milo, Marta; Lawrence, Neil D; Rattray, Magnus
2009-07-09
Most analyses of microarray data are based on point estimates of expression levels and ignore the uncertainty of such estimates. By determining uncertainties from Affymetrix GeneChip data and propagating these uncertainties to downstream analyses it has been shown that we can improve results of differential expression detection, principal component analysis and clustering. Previously, implementations of these uncertainty propagation methods have only been available as separate packages, written in different languages. Previous implementations have also suffered from being very costly to compute, and in the case of differential expression detection, have been limited in the experimental designs to which they can be applied. puma is a Bioconductor package incorporating a suite of analysis methods for use on Affymetrix GeneChip data. puma extends the differential expression detection methods of previous work from the 2-class case to the multi-factorial case. puma can be used to automatically create design and contrast matrices for typical experimental designs, which can be used both within the package itself but also in other Bioconductor packages. The implementation of differential expression detection methods has been parallelised leading to significant decreases in processing time on a range of computer architectures. puma incorporates the first R implementation of an uncertainty propagation version of principal component analysis, and an implementation of a clustering method based on uncertainty propagation. All of these techniques are brought together in a single, easy-to-use package with clear, task-based documentation. For the first time, the puma package makes a suite of uncertainty propagation methods available to a general audience. These methods can be used to improve results from more traditional analyses of microarray data. puma also offers improvements in terms of scope and speed of execution over previously available methods. puma is recommended for anyone working with the Affymetrix GeneChip platform for gene expression analysis and can also be applied more generally.
NASA Astrophysics Data System (ADS)
Salvato, Steven Walter
The purpose of this study was to analyze questions within the chapters of a nontraditional general chemistry textbook and the four general chemistry textbooks most widely used by Texas community colleges in order to determine if the questions require higher- or lower-order thinking according to Bloom's taxonomy. The study employed quantitative methods. Bloom's taxonomy (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) was utilized as the main instrument in the study. Additional tools were used to help classify the questions into the proper category of the taxonomy (McBeath, 1992; Metfessel, Michael, & Kirsner, 1969). The top four general chemistry textbooks used in Texas community colleges and Chemistry: A Project of the American Chemical Society (Bell et al., 2005) were analyzed during the fall semester of 2010 in order to categorize the questions within the chapters into one of the six levels of Bloom's taxonomy. Two coders were used to assess reliability. The data were analyzed using descriptive and inferential methods. The descriptive method involved calculation of the frequencies and percentages of coded questions from the books as belonging to the six categories of the taxonomy. Questions were dichotomized into higher- and lower-order thinking questions. The inferential methods involved chi-square tests of association to determine if there were statistically significant differences among the four traditional college general chemistry textbooks in the proportions of higher- and lower-order questions and if there were statistically significant differences between the nontraditional chemistry textbook and the four traditional general chemistry textbooks. Findings indicated statistically significant differences among the four textbooks frequently used in Texas community colleges in the number of higher- and lower-level questions. Statistically significant differences were also found among the four textbooks and the nontraditional textbook. After the analysis of the data, conclusions were drawn, implications for practice were delineated, and recommendations for future research were given.
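The kind of chi-square test of association used in the study can be sketched as follows with scipy; the higher- and lower-order question counts in the table are invented for illustration, not the study's data.

    # Sketch: chi-square test of association between textbook and the proportion
    # of higher- vs lower-order questions (counts invented).
    from scipy.stats import chi2_contingency

    #                 higher, lower
    counts = [[120, 480],    # textbook 1
              [ 95, 505],    # textbook 2
              [150, 450],    # textbook 3
              [ 80, 520]]    # textbook 4
    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")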
Karelin, A O; Lomtev, A Yu; Mozzhukhina, N A; Yeremin, G B; Nikonov, V A
Inhalation of fine particulate matter (PM and PM) poses a threat to the health of the population. The purpose of the study was to analyse the monitoring of fine particulate matter in the atmospheric air of Saint-Petersburg and to identify the main problems of that monitoring. Research methods: scientific hypothetical-deductive cognition, sanitary-statistical methods, and general logical methods and approaches of research: analysis, synthesis, abstraction, generalization, induction. Results. The article presents the analysis of the monitoring of fine particulate matter in the atmospheric air of Saint-Petersburg. Fine particulate matter is controlled in only 11 of the 22 automatic monitoring stations: in 7 - PM and PM, and in 4 - PM. The average annual concentrations were below the MAC at all stations. The maximum concentrations reached 3 MAC, but cases in which concentrations exceeded the MAC were very rare. Averaged over the city, concentrations of PM decreased from 0.8 MAC in 2006 and 1.1 MAC in 2007 to 0.5 MAC in 2013-14. The analysis revealed the main problems of the monitoring of fine particulate matter in the Russian Federation. These include the absence, until March 1, 2016, of officially approved methods for the control of PM and PM in the atmospheric air, and the lack of modern equipment for the measurement of fine particulate matter. Conclusions. The state of the monitoring of fine particulate matter in the atmospheric air in the Russian Federation cannot therefore be considered satisfactory. It is necessary to improve the monitoring system and to develop modern Russian instruments, methods and means for measuring fine particulate matter concentrations in the atmospheric air.
High-frequency asymptotic methods for analyzing the EM scattering by open-ended waveguide cavities
NASA Technical Reports Server (NTRS)
Burkholder, R. J.; Pathak, P. H.
1989-01-01
Four high-frequency methods are described for analyzing the electromagnetic (EM) scattering by electrically large open-ended cavities. They are: (1) a hybrid combination of waveguide modal analysis and high-frequency asymptotics, (2) geometrical optics (GO) ray shooting, (3) Gaussian beam (GB) shooting, and (4) the generalized ray expansion (GRE) method. The hybrid modal method gives very accurate results but is limited to cavities which are made up of sections of uniform waveguides for which the modal fields are known. The GO ray shooting method can be applied to much more arbitrary cavity geometries and can handle absorber treated interior walls, but it generally only predicts the major trends of the RCS pattern and not the details. Also, a very large number of rays need to be tracked for each new incidence angle. Like the GO ray shooting method, the GB shooting method can handle more arbitrary cavities, but it is much more efficient and generally more accurate than the GO method because it includes the fields diffracted by the rim at the open end which enter the cavity. However, due to beam divergence effects the GB method is limited to cavities which are not very long compared to their width. The GRE method overcomes the length-to-width limitation of the GB method by replacing the GB's with GO ray tubes which are launched in the same manner as the GB's to include the interior rim diffracted field. This method gives good accuracy and is generally more efficient than the GO method, but a large number of ray tubes needs to be tracked.
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
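A quick numerical illustration of the point, assuming two synthetic groups: the independent-samples t-test and an OLS regression on a single dummy-coded predictor return the same t statistic and p-value.

    # Sketch: the t-test as a special case of regression with one dummy predictor.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(4)
    g1 = rng.normal(10.0, 2.0, 40)
    g2 = rng.normal(11.0, 2.0, 40)

    t, p = ttest_ind(g2, g1)                         # classical two-sample t-test
    y = np.concatenate([g1, g2])
    dummy = np.r_[np.zeros(40), np.ones(40)]
    ols = sm.OLS(y, sm.add_constant(dummy)).fit()    # same model as a regression
    print(t, p)
    print(ols.tvalues[1], ols.pvalues[1])            # identical up to rounding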
Generalized Feature Extraction for Wrist Pulse Analysis: From 1-D Time Series to 2-D Matrix.
Dimin Wang; Zhang, David; Guangming Lu
2017-07-01
Traditional Chinese pulse diagnosis, known as an empirical science, depends on the subjective experience. Inconsistent diagnostic results may be obtained among different practitioners. A scientific way of studying the pulse should be to analyze the objectified wrist pulse waveforms. In recent years, many pulse acquisition platforms have been developed with the advances in sensor and computer technology. And the pulse diagnosis using pattern recognition theories is also increasingly attracting attentions. Though many literatures on pulse feature extraction have been published, they just handle the pulse signals as simple 1-D time series and ignore the information within the class. This paper presents a generalized method of pulse feature extraction, extending the feature dimension from 1-D time series to 2-D matrix. The conventional wrist pulse features correspond to a particular case of the generalized models. The proposed method is validated through pattern classification on actual pulse records. Both quantitative and qualitative results relative to the 1-D pulse features are given through diabetes diagnosis. The experimental results show that the generalized 2-D matrix feature is effective in extracting both the periodic and nonperiodic information. And it is practical for wrist pulse analysis.
Integrative omics analysis. A study based on Plasmodium falciparum mRNA and protein data.
Tomescu, Oana A; Mattanovich, Diethard; Thallinger, Gerhard G
2014-01-01
Technological improvements have shifted the focus from data generation to data analysis. The availability of large amounts of data from transcriptomics, proteomics and metabolomics experiments raises new questions concerning suitable integrative analysis methods. We compare three integrative analysis techniques (co-inertia analysis, generalized singular value decomposition and integrative biclustering) by applying them to gene and protein abundance data from the six life cycle stages of Plasmodium falciparum. Co-inertia analysis is an analysis method used to visualize and explore gene and protein data. The generalized singular value decomposition has shown its potential in the analysis of two transcriptome data sets. Integrative Biclustering applies biclustering to gene and protein data. Using CIA, we visualize the six life cycle stages of Plasmodium falciparum, as well as GO terms in a 2D plane and interpret the spatial configuration. With GSVD, we decompose the transcriptomic and proteomic data sets into matrices with biologically meaningful interpretations and explore the processes captured by the data sets. IBC identifies groups of genes, proteins, GO Terms and life cycle stages of Plasmodium falciparum. We show method-specific results as well as a network view of the life cycle stages based on the results common to all three methods. Additionally, by combining the results of the three methods, we create a three-fold validated network of life cycle stage specific GO terms: Sporozoites are associated with transcription and transport; merozoites with entry into host cell as well as biosynthetic and metabolic processes; rings with oxidation-reduction processes; trophozoites with glycolysis and energy production; schizonts with antigenic variation and immune response; gametocytes with DNA packaging and mitochondrial transport. Furthermore, the network connectivity underlines the separation of the intraerythrocytic cycle from the gametocyte and sporozoite stages. Using integrative analysis techniques, we can integrate knowledge from different levels and obtain a wider view of the system under study. The overlap between method-specific and common results is considerable, even if the basic mathematical assumptions are very different. The three-fold validated network of life cycle stage characteristics of Plasmodium falciparum could identify a large amount of the known associations from literature in only one study.
Integrative omics analysis. A study based on Plasmodium falciparum mRNA and protein data
2014-01-01
Background Technological improvements have shifted the focus from data generation to data analysis. The availability of large amounts of data from transcriptomics, proteomics and metabolomics experiments raises new questions concerning suitable integrative analysis methods. We compare three integrative analysis techniques (co-inertia analysis, generalized singular value decomposition and integrative biclustering) by applying them to gene and protein abundance data from the six life cycle stages of Plasmodium falciparum. Co-inertia analysis is an analysis method used to visualize and explore gene and protein data. The generalized singular value decomposition has shown its potential in the analysis of two transcriptome data sets. Integrative Biclustering applies biclustering to gene and protein data. Results Using CIA, we visualize the six life cycle stages of Plasmodium falciparum, as well as GO terms in a 2D plane and interpret the spatial configuration. With GSVD, we decompose the transcriptomic and proteomic data sets into matrices with biologically meaningful interpretations and explore the processes captured by the data sets. IBC identifies groups of genes, proteins, GO Terms and life cycle stages of Plasmodium falciparum. We show method-specific results as well as a network view of the life cycle stages based on the results common to all three methods. Additionally, by combining the results of the three methods, we create a three-fold validated network of life cycle stage specific GO terms: Sporozoites are associated with transcription and transport; merozoites with entry into host cell as well as biosynthetic and metabolic processes; rings with oxidation-reduction processes; trophozoites with glycolysis and energy production; schizonts with antigenic variation and immune response; gametocytes with DNA packaging and mitochondrial transport. Furthermore, the network connectivity underlines the separation of the intraerythrocytic cycle from the gametocyte and sporozoite stages. Conclusion Using integrative analysis techniques, we can integrate knowledge from different levels and obtain a wider view of the system under study. The overlap between method-specific and common results is considerable, even if the basic mathematical assumptions are very different. The three-fold validated network of life cycle stage characteristics of Plasmodium falciparum could identify a large amount of the known associations from literature in only one study. PMID:25033389
Trees, B-series and G-symplectic methods
NASA Astrophysics Data System (ADS)
Butcher, J. C.
2017-07-01
The order conditions for Runge-Kutta methods are intimately connected with the graphs known as rooted trees. The conditions can be expressed in terms of Taylor expansions written as weighted sums of elementary differentials, that is as B-series. Polish notation provides a unifying structure for representing many of the quantities appearing in this theory. Applications include the analysis of general linear methods with special reference to G-symplectic methods. A new order 6 method has recently been constructed.
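As a concrete example of the rooted-tree order conditions, the sketch below checks the eight conditions through order 4 for the classical RK4 tableau; the tree labels used as dictionary keys are informal shorthand.

    # Sketch: the rooted-tree (B-series) order conditions through order 4,
    # checked numerically for the classical RK4 tableau.
    import numpy as np

    A = np.array([[0.0, 0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    b = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0
    c = A.sum(axis=1)

    conditions = {
        "tau":          (b.sum(),            1.0),       # order 1
        "[tau]":        (b @ c,              1 / 2),     # order 2
        "[tau,tau]":    (b @ c**2,           1 / 3),     # order 3
        "[[tau]]":      (b @ (A @ c),        1 / 6),
        "[tau^3]":      (b @ c**3,           1 / 4),     # order 4
        "[tau,[tau]]":  (b @ (c * (A @ c)),  1 / 8),
        "[[tau^2]]":    (b @ (A @ c**2),     1 / 12),
        "[[[tau]]]":    (b @ (A @ (A @ c)),  1 / 24),
    }
    for tree, (lhs, rhs) in conditions.items():
        print(f"{tree:12s} {lhs:.6f} = {rhs:.6f}")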
5'to 3' nucleic acid synthesis using 3'-photoremovable protecting group
Pirrung, Michael C.; Shuey, Steven W.; Bradley, Jean-Claude
1999-01-01
The present invention relates, in general, to a method of synthesizing a nucleic acid, and, in particular, to a method of effecting 5' to 3' nucleic acid synthesis. The method can be used to prepare arrays of oligomers bound to a support via their 5' end. The invention also relates to a method of effecting mutation analysis using such arrays. The invention further relates to compounds and compositions suitable for use in such methods.
Ice Growth Measurements from Image Data to Support Ice Crystal and Mixed-Phase Accretion Testing
NASA Technical Reports Server (NTRS)
Struk, Peter M.; Lynch, Christopher J.
2012-01-01
This paper describes the imaging techniques as well as the analysis methods used to measure the ice thickness and growth rate in support of ice-crystal icing tests performed at the National Research Council of Canada (NRC) Research Altitude Test Facility (RATFac). A detailed description of the camera setup, which involves both still and video cameras, as well as the analysis methods using the NASA Spotlight software, are presented. Two cases, one from two different test entries, showing significant ice growth are analyzed in detail describing the ice thickness and growth rate which is generally linear. Estimates of the bias uncertainty are presented for all measurements. Finally some of the challenges related to the imaging and analysis methods are discussed as well as methods used to overcome them.
Ice Growth Measurements from Image Data to Support Ice-Crystal and Mixed-Phase Accretion Testing
NASA Technical Reports Server (NTRS)
Struk, Peter M.; Lynch, Christopher J.
2012-01-01
This paper describes the imaging techniques as well as the analysis methods used to measure the ice thickness and growth rate in support of ice-crystal icing tests performed at the National Research Council of Canada (NRC) Research Altitude Test Facility (RATFac). A detailed description of the camera setup, which involves both still and video cameras, as well as the analysis methods using the NASA Spotlight software, are presented. Two cases, one from two different test entries, showing significant ice growth are analyzed in detail describing the ice thickness and growth rate which is generally linear. Estimates of the bias uncertainty are presented for all measurements. Finally some of the challenges related to the imaging and analysis methods are discussed as well as methods used to overcome them.
Yamazaki, Hiroshi; Slingsby, Brian Taylor; Takahashi, Miyako; Hayashi, Yoko; Sugimori, Hiroki; Nakayama, Takeo
2009-12-01
Although qualitative studies have increased since the 1990s, some reports note that relatively few influential journals published them up until 2000. This study critically reviewed the characteristics of qualitative studies published in top tier medical journals since 2000. We assessed full texts of qualitative studies published between 2000 and 2004 in the Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine. We found 80 qualitative studies, of which 73 (91%) were published in BMJ. Only 10 studies (13%) combined qualitative and quantitative methods. Sixty-two studies (78%) used only one method of data collection. Interviews dominated the choice of data collection. The median sample size was 36 (range: 9-383). Thirty-three studies (41%) did not specify the type of analysis used but rather described the analytic process in detail. The rest indicated the mode of data analysis, in which the most prevalent methods were the constant comparative method (23%) and the grounded theory approach (22%). Qualitative data analysis software was used by 33 studies (41%). Among influential journals of general medicine, only BMJ consistently published an average of 15 qualitative study reports between 2000 and 2004. These findings lend insight into what qualities and characteristics make a qualitative study worthy of consideration to be published in an influential journal, primarily BMJ.
Finnveden, Göran; Björklund, Anna; Moberg, Asa; Ekvall, Tomas
2007-06-01
A large number of methods and approaches that can be used for supporting waste management decisions at different levels in society have been developed. In this paper an overview of methods is provided and preliminary guidelines for the choice of methods are presented. The methods introduced include: Environmental Impact Assessment, Strategic Environmental Assessment, Life Cycle Assessment, Cost-Benefit Analysis, Cost-effectiveness Analysis, Life-cycle Costing, Risk Assessment, Material Flow Accounting, Substance Flow Analysis, Energy Analysis, Exergy Analysis, Entropy Analysis, Environmental Management Systems, and Environmental Auditing. The characteristics used are the types of impacts included, the objects under study and whether the method is procedural or analytical. The different methods can be described as systems analysis methods. Waste management systems thinking is receiving increasing attention. This is, for example, evidenced by the suggested thematic strategy on waste by the European Commission where life-cycle analysis and life-cycle thinking get prominent positions. Indeed, life-cycle analyses have been shown to provide policy-relevant and consistent results. However, it is also clear that the studies will always be open to criticism since they are simplifications of reality and include uncertainties. This is something all systems analysis methods have in common. Assumptions can be challenged and it may be difficult to generalize from case studies to policies. This suggests that if decisions are going to be made, they are likely to be made on a less than perfect basis.
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performances. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using the approximation model (response surface). The Response Surface Method (RSM) is generally used to predict the system performance in engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of response surface, and optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using the Backpropagation Artificial Neural Network (BPANN) which is considered as Neuro-Response Surface Method (NRSM). The optimization is done for the generated response surface by non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of marine system and ship structure (substructure of floating offshore wind turbine considering hydrodynamics performances and bulk carrier bottom stiffened panels considering structure performance), we have confirmed the applicability of the proposed method for multi-objective side constraint optimization problems.
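A minimal sketch of the neuro-response-surface idea follows, assuming a stand-in analytic objective in place of the hydrodynamic and structural solvers and a scikit-learn MLP as the approximation model; the NSGA-II multi-objective search used in the paper is omitted and replaced by a simple evaluation of the surrogate on random samples.

    # Sketch: a neural-network response surface trained on a few "expensive"
    # evaluations; the objective below is a placeholder for the real analysis code.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_performance(x):            # stand-in for the real solver
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(5)
    X_train = rng.uniform(-1, 1, size=(60, 2))       # sampled design variables
    y_train = expensive_performance(X_train)

    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                             random_state=0).fit(X_train, y_train)

    X_test = rng.uniform(-1, 1, size=(1000, 2))      # cheap surrogate evaluations
    y_pred = surrogate.predict(X_test)
    best = X_test[np.argmin(y_pred)]
    print("surrogate-optimal design variables:", best)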
SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.
Software Safety Progress in NASA
NASA Technical Reports Server (NTRS)
Radley, Charles F.
1995-01-01
NASA has developed guidelines for development and analysis of safety-critical software. These guidelines have been documented in a Guidebook for Safety Critical Software Development and Analysis. The guidelines represent a practical 'how to' approach, to assist software developers and safety analysts in cost effective methods for software safety. They provide guidance in the implementation of the recent NASA Software Safety Standard NSS-1740.13 which was released as 'Interim' version in June 1994, scheduled for formal adoption late 1995. This paper is a survey of the methods in general use, resulting in the NASA guidelines for safety critical software development and analysis.
Aeroelastic analysis of a troposkien-type wind turbine blade
NASA Technical Reports Server (NTRS)
Nitzsche, F.
1981-01-01
The linear aeroelastic equations for one curved blade of a vertical axis wind turbine in state vector form are presented. The method is based on a simple integrating matrix scheme together with the transfer matrix idea. The method is proposed as a convenient way of solving the associated eigenvalue problem for general support conditions.
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
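A minimal Monte Carlo power calculation for a single-mediator model, using the joint-significance test rather than the latent-variable and growth-curve models discussed in the article, could look like the following; the path coefficients, sample size and replication count are assumptions.

    # Sketch: Monte Carlo power for the indirect effect a*b in a simple
    # single-mediator model, using the joint-significance test.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n, a, b, n_rep, alpha = 100, 0.3, 0.3, 2000, 0.05

    hits = 0
    for _ in range(n_rep):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        p_a = sm.OLS(m, sm.add_constant(x)).fit().pvalues[1]
        p_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().pvalues[1]
        hits += (p_a < alpha) and (p_b < alpha)
    print("estimated power:", hits / n_rep)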
A comparison of three approaches to non-stationary flood frequency analysis
NASA Astrophysics Data System (ADS)
Debele, S. E.; Strupczewski, W. G.; Bogdanowicz, E.
2017-08-01
Non-stationary flood frequency analysis (FFA) is applied to statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely, maximum likelihood (ML), two stage (WLS/TS) and GAMLSS (generalized additive model for location, scale and shape parameters), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. The use of a multi-model approach is recommended to reduce errors in the magnitude of quantiles due to model misspecification. The results of calculations based on observed seasonal daily flow maxima and computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error in the estimates of trend in the standard deviation and the constant shape parameter, while WLS/TS provided better accuracy in the estimates of trend in the mean value. Among the three compared methods, WLS/TS is recommended for dealing with non-stationarity in short time series. Some practical aspects of the GAMLSS package application are also presented. A detailed discussion of general issues related to the consequences of climate change for FFA is presented in the second part of the article entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".
NASA Astrophysics Data System (ADS)
Wang, Qingzhi; Tan, Guanzheng; He, Yong; Wu, Min
2017-10-01
This paper considers a stability analysis issue of piecewise non-linear systems and applies it to intermittent synchronisation of chaotic systems. First, based on piecewise Lyapunov function methods, more general and less conservative stability criteria of piecewise non-linear systems in periodic and aperiodic cases are presented, respectively. Next, intermittent synchronisation conditions of chaotic systems are derived which extend existing results. Finally, Chua's circuit is taken as an example to verify the validity of our methods.
An improved silver staining procedure for schizodeme analysis in polyacrylamide gradient gels.
Gonçalves, A M; Nehme, N S; Morel, C M
1990-01-01
A simple protocol is described for the silver staining of polyacrylamide gradient gels used for the separation of restriction fragments of kinetoplast DNA [schizodeme analysis of trypanosomatids (Morel et al., 1980)]. The method overcomes the problems of non-uniform staining and strong background color which are frequently encountered when conventional protocols for silver staining of linear gels are applied to gradient gels. The method described has proven to be of general applicability for DNA, RNA and protein separations in gradient gels.
A managerial accounting analysis of hospital costs.
Frank, W G
1976-01-01
Variance analysis, an accounting technique, is applied to an eight-component model of hospital costs to determine the contribution each component makes to cost increases. The method is illustrated by application to data on total costs from 1950 to 1973 for all U.S. nongovernmental not-for-profit short-term general hospitals. The costs of a single hospital are analyzed and compared to the group costs. The potential uses and limitations of the method as a planning and research tool are discussed. PMID:965233
A managerial accounting analysis of hospital costs.
Frank, W G
1976-01-01
Variance analysis, an accounting technique, is applied to an eight-component model of hospital costs to determine the contribution each component makes to cost increases. The method is illustrated by application to data on total costs from 1950 to 1973 for all U.S. nongovernmental not-for-profit short-term general hospitals. The costs of a single hospital are analyzed and compared to the group costs. The potential uses and limitations of the method as a planning and research tool are discussed.
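The accounting decomposition underlying this kind of variance analysis can be sketched in a few lines: the change in a cost component is split into a price variance and a volume variance. The quantities and unit costs below are invented, not the 1950-1973 hospital data analyzed in the article.

    # Sketch: classic accounting variance analysis splitting a cost change into
    # price and volume variances (figures invented for illustration).
    def variance_analysis(q0, p0, q1, p1):
        price_variance = (p1 - p0) * q1      # change in unit cost at the new volume
        volume_variance = (q1 - q0) * p0     # change in volume at the old unit cost
        total = q1 * p1 - q0 * p0
        return price_variance, volume_variance, total

    # e.g. nursing hours per year and hourly wage, two years apart
    pv, vv, total = variance_analysis(q0=100_000, p0=4.0, q1=110_000, p1=5.5)
    print(f"price {pv:,.0f} + volume {vv:,.0f} = {pv + vv:,.0f} (total change {total:,.0f})")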
2015-08-19
laboratory analysis using EPA TO-15, and collection of gas samples in sorbent tubes for later analysis of aldehydes using NIOSH Method 2016. Total VOCs...measurement can be a general qualitative indicator of IAQ problems; formaldehyde and other aldehydes are common organic gases emitted from OSB; and...table in the middle of the hut. 5.1.2.3 Formaldehyde and other aldehydes Aldehydes were measured using both Dräger-tubes and by NIOSH Method 2016. The
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
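The sketch below shows an ordinal-pattern (permutation) entropy function of the kind used as a state feature; the embedding order and delay are assumptions, and the multi-scale extension, VMD decomposition and generalized hidden Markov classifier described in the paper are omitted.

    # Sketch: normalized permutation entropy of a 1-D signal, the kind of feature
    # extracted from each decomposed mode before classification.
    import numpy as np
    from collections import Counter
    from math import factorial, log

    def permutation_entropy(signal, order=3, delay=1):
        patterns = Counter()
        for i in range(len(signal) - (order - 1) * delay):
            window = signal[i:i + order * delay:delay]
            patterns[tuple(np.argsort(window))] += 1
        probs = np.array(list(patterns.values()), dtype=float)
        probs /= probs.sum()
        return -np.sum(probs * np.log(probs)) / log(factorial(order))

    rng = np.random.default_rng(7)
    t = np.linspace(0, 10, 2000)
    print(permutation_entropy(np.sin(2 * np.pi * t)))    # regular signal -> lower entropy
    print(permutation_entropy(rng.normal(size=2000)))    # white noise    -> close to 1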
Shape optimization using a NURBS-based interface-enriched generalized FEM
Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...
2016-11-26
This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of shape functions and their spatial derivatives. Verification and illustrative problems are solved to demonstrate the precision and capability of the method.
Plasticity - Theory and finite element applications.
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H. S.
1972-01-01
A unified presentation is given of the development and distinctions associated with various incremental solution procedures used to solve the equations governing the nonlinear behavior of structures, and this is discussed within the framework of the finite-element method. Although the primary emphasis here is on material nonlinearities, consideration is also given to geometric nonlinearities acting separately or in combination with nonlinear material behavior. The methods discussed here are applicable to a broad spectrum of structures, ranging from simple beams to general three-dimensional bodies. The finite-element analysis methods for material nonlinearity are general in the sense that any of the available plasticity theories can be incorporated to treat strain hardening or ideally plastic behavior.
The method of fundamental solutions for computing acoustic interior transmission eigenvalues
NASA Astrophysics Data System (ADS)
Kleefeld, Andreas; Pieronek, Lukas
2018-03-01
We analyze the method of fundamental solutions (MFS) in two different versions with focus on the computation of approximate acoustic interior transmission eigenvalues in 2D for homogeneous media. Our approach is mesh- and integration free, but suffers in general from the ill-conditioning effects of the discretized eigenoperator, which we could then successfully balance using an approved stabilization scheme. Our numerical examples cover many of the common scattering objects and prove to be very competitive in accuracy with the standard methods for PDE-related eigenvalue problems. We finally give an approximation analysis for our framework and provide error estimates, which bound interior transmission eigenvalue deviations in terms of some generalized MFS output.
Error Ratio Analysis: Alternate Mathematics Assessment for General and Special Educators.
ERIC Educational Resources Information Center
Miller, James H.; Carr, Sonya C.
1997-01-01
Eighty-seven elementary students in grades four, five, and six were administered a 30-item multiplication instrument to assess performance in computation across grade levels. An interpretation of student performance using error ratio analysis is provided and the use of this method with groups of students for instructional decision making is…
A Study of Multigrid Preconditioners Using Eigensystem Analysis
NASA Technical Reports Server (NTRS)
Roberts, Thomas W.; Swanson, R. C.
2005-01-01
The convergence properties of numerical schemes for partial differential equations are studied by examining the eigensystem of the discrete operator. This method of analysis is very general, and allows the effects of boundary conditions and grid nonuniformities to be examined directly. Algorithms for the Laplace equation and a two equation model hyperbolic system are examined.
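A minimal example of this style of eigensystem analysis, assuming the 1-D Laplace operator discretized with second-order central differences and Dirichlet boundaries on a uniform grid, where the discrete spectrum can be compared directly against the known analytic eigenvalues:

    # Sketch: eigensystem analysis of a discrete operator, here the 1-D Laplacian
    # with Dirichlet boundaries; analytic eigenvalues are -(2/h^2)(1 - cos(k*pi*h)).
    import numpy as np

    n = 50
    h = 1.0 / (n + 1)
    L = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2

    numeric = np.sort(np.linalg.eigvalsh(L))
    k = np.arange(1, n + 1)
    analytic = np.sort(-(2 / h**2) * (1 - np.cos(k * np.pi * h)))
    print("max eigenvalue error:", np.max(np.abs(numeric - analytic)))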
ERIC Educational Resources Information Center
Bain, Ryan M.; Pulliam, Christopher J.; Yan, Xin; Moore, Kassandra F.; Mu¨ller, Thomas; Cooks, R. Graham
2014-01-01
Undergraduate laboratories generally teach an understanding of chemical reactivity using bulk or semimicroscale experiments with product isolation and subsequent chemical and spectroscopic analysis. In this study students were exposed to mass spectrometry as a means of chemical synthesis as well as analysis. The ionization method used, paper…
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of flight dynamic equations. The general flight dynamic equations are numerically integrated and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Paper-based chromatic toxicity bioassay by analysis of bacterial ferricyanide reduction.
Pujol-Vila, F; Vigués, N; Guerrero-Navarro, A; Jiménez, S; Gómez, D; Fernández, M; Bori, J; Vallès, B; Riva, M C; Muñoz-Berbel, X; Mas, J
2016-03-03
Water quality assessment requires a continuous and strict analysis of samples to guarantee compliance with established standards. Nowadays, the increasing number of pollutants and their synergistic effects lead to the development of general toxicity bioassays capable of analysing water pollution as a whole. Current general toxicity methods, e.g. Microtox(®), rely on long operation protocols, the use of complex and expensive instrumentation, and sample pre-treatment, with samples transported to the laboratory for analysis. These requirements delay sample analysis and, hence, the response needed to avert an environmental catastrophe. In an attempt to solve this, a fast (15 min) and low-cost toxicity bioassay based on the chromatic changes associated with bacterial ferricyanide reduction is presented here. E. coli cells (used as model bacteria) were stably trapped on low-cost paper matrices (cellulose-based paper discs, PDs) and remained viable for long times (1 month at -20 °C). Apart from serving as the bacterial carrier, the paper matrices also acted as a fluidic element, allowing fluid management without the need for external pumps. Bioassay evaluation was performed using copper as a model toxic agent. Chromatic changes associated with bacterial ferricyanide reduction were determined by three different transduction methods, i.e. (i) optical reflectometry (as reference method), (ii) image analysis and (iii) visual inspection. In all cases, bioassay results (in terms of half maximal effective concentrations, EC50) were in agreement with already reported data, confirming the good performance of the bioassay. The validation of the bioassay was performed by analysis of real samples from natural sources, which were analysed and compared with a reference method (i.e. Microtox). Obtained results showed agreement for about 70% of toxic samples and 80% of non-toxic samples, which may validate the use of this simple and quick protocol in the determination of general toxicity. The minimum instrumentation requirements and the simplicity of the bioassay open the possibility of in-situ water toxicity assessment with a fast and low-cost protocol. Copyright © 2016 Elsevier B.V. All rights reserved.
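The EC50 values reported by such bioassays are typically obtained from a dose-response fit; a minimal sketch using a four-parameter logistic curve with scipy follows, where the copper concentrations and response values are invented and the parameterization is an assumption, not the paper's calibration.

    # Sketch: estimating EC50 from a toxicant dose-response with a four-parameter
    # logistic curve (data invented for illustration).
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic4(c, bottom, top, ec50, hill):
        return bottom + (top - bottom) / (1 + (c / ec50) ** hill)

    conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])          # mg/L (assumed)
    response = np.array([0.98, 0.95, 0.88, 0.70, 0.45, 0.20, 0.08])  # relative activity

    popt, _ = curve_fit(logistic4, conc, response, p0=[0.0, 1.0, 0.5, 1.0])
    print(f"EC50 estimate: {popt[2]:.2f} mg/L")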
Stein, Richard R; Bucci, Vanni; Toussaint, Nora C; Buffie, Charlie G; Rätsch, Gunnar; Pamer, Eric G; Sander, Chris; Xavier, João B
2013-01-01
The intestinal microbiota is a microbial ecosystem of crucial importance to human health. Understanding how the microbiota confers resistance against enteric pathogens and how antibiotics disrupt that resistance is key to the prevention and cure of intestinal infections. We present a novel method to infer microbial community ecology directly from time-resolved metagenomics. This method extends generalized Lotka-Volterra dynamics to account for external perturbations. Data from recent experiments on antibiotic-mediated Clostridium difficile infection is analyzed to quantify microbial interactions, commensal-pathogen interactions, and the effect of the antibiotic on the community. Stability analysis reveals that the microbiota is intrinsically stable, explaining how antibiotic perturbations and C. difficile inoculation can produce catastrophic shifts that persist even after removal of the perturbations. Importantly, the analysis suggests a subnetwork of bacterial groups implicated in protection against C. difficile. Due to its generality, our method can be applied to any high-resolution ecological time-series data to infer community structure and response to external stimuli.
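A minimal sketch of the model class named above (not the authors' inference code) is shown below: generalized Lotka-Volterra dynamics with an external perturbation term, dx_i/dt = x_i (mu_i + sum_j A_ij x_j + eps_i u(t)), integrated numerically. The growth rates, interaction matrix, susceptibilities, and perturbation window are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = np.array([0.3, 0.2, 0.1])                 # intrinsic growth rates (assumed)
A = np.array([[-1.0, -0.2,  0.1],
              [-0.1, -0.8, -0.3],
              [ 0.2, -0.1, -0.9]])             # interaction matrix (assumed)
eps = np.array([-0.5, -1.0, 0.2])              # susceptibilities to the perturbation (assumed)

def antibiotic(t):
    """External perturbation u(t): 'antibiotic' present between t = 5 and t = 10."""
    return 1.0 if 5.0 <= t <= 10.0 else 0.0

def glv(t, x):
    return x * (mu + A @ x + eps * antibiotic(t))

sol = solve_ivp(glv, (0.0, 30.0), [0.1, 0.1, 0.1], max_step=0.05)
print(sol.y[:, -1])   # community composition after the perturbation has been removed
```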
NASA Astrophysics Data System (ADS)
Ghassemi, Aazam; Yazdani, Mostafa; Hedayati, Mohamad
2017-12-01
In this work, based on the First Order Shear Deformation Theory (FSDT), an attempt is made to explore the applicability and accuracy of the Generalized Differential Quadrature Method (GDQM) for the bending analysis of composite sandwich plates under static loading. Comparative studies of the bending behavior of composite sandwich plates are made between two types of boundary conditions for different cases. The effects of fiber orientation, the thickness-to-length ratio of the plate, and the ratio of core thickness to face-sheet thickness on the transverse displacement and moment resultants are studied. As shown in this study, the role of the core thickness in the deformation of these plates can be reversed depending on the stiffness of the core relative to the face sheets. The resulting graphs are useful for the optimum design of sandwich plates. In comparison with existing solutions, fast convergence rates and highly accurate results are achieved by the GDQ method.
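The GDQ ingredient behind such an analysis can be sketched as follows, using the standard (Shu) formulation of the first-derivative weighting coefficients on a Chebyshev-Gauss-Lobatto grid; the plate equations themselves are not reproduced, and the test function is only a sanity check on the derivative matrix.

```python
import numpy as np

def gdq_first_derivative_matrix(x):
    """First-derivative GDQ weighting coefficients for an arbitrary 1-D grid x."""
    n = len(x)
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i]) for i in range(n)])
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        a[i, i] = -np.sum(a[i, :])      # row sums vanish for the first derivative
    return a

n = 15
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))   # Chebyshev-Gauss-Lobatto points on [0, 1]
D = gdq_first_derivative_matrix(x)
f = np.sin(np.pi * x)
print(np.max(np.abs(D @ f - np.pi * np.cos(np.pi * x))))   # should be very small (spectral accuracy)
```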
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
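As a hedged example of one member of this inflated-mixture family, the sketch below fits a zero-inflated Poisson model by quasi-Newton (BFGS) maximization of the log-likelihood on synthetic data; the parameterization and data are illustrative, not the article's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(0)
n = 2000
structural_zero = rng.random(n) < 0.3                       # 30% inflated zeros (assumed)
y = np.where(structural_zero, 0, rng.poisson(2.5, size=n))  # otherwise Poisson(2.5)

def negloglik(theta):
    pi, lam = expit(theta[0]), np.exp(theta[1])             # keep parameters in valid ranges
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))          # zeros: mixture of both sources
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(negloglik, x0=[0.0, 0.0], method="BFGS")     # quasi-Newton estimation
pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
print(f"pi = {pi_hat:.3f}, lambda = {lam_hat:.3f}")
```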
Toussaint, Nora C.; Buffie, Charlie G.; Rätsch, Gunnar; Pamer, Eric G.; Sander, Chris; Xavier, João B.
2013-01-01
The intestinal microbiota is a microbial ecosystem of crucial importance to human health. Understanding how the microbiota confers resistance against enteric pathogens and how antibiotics disrupt that resistance is key to the prevention and cure of intestinal infections. We present a novel method to infer microbial community ecology directly from time-resolved metagenomics. This method extends generalized Lotka–Volterra dynamics to account for external perturbations. Data from recent experiments on antibiotic-mediated Clostridium difficile infection is analyzed to quantify microbial interactions, commensal-pathogen interactions, and the effect of the antibiotic on the community. Stability analysis reveals that the microbiota is intrinsically stable, explaining how antibiotic perturbations and C. difficile inoculation can produce catastrophic shifts that persist even after removal of the perturbations. Importantly, the analysis suggests a subnetwork of bacterial groups implicated in protection against C. difficile. Due to its generality, our method can be applied to any high-resolution ecological time-series data to infer community structure and response to external stimuli. PMID:24348232
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
A note on generalized Genome Scan Meta-Analysis statistics
Koziol, James A; Feng, Anne C
2005-01-01
Background Wise et al. introduced a rank-based statistical technique for meta-analysis of genome scans, the Genome Scan Meta-Analysis (GSMA) method. Levinson et al. recently described two generalizations of the GSMA statistic: (i) a weighted version of the GSMA statistic, so that different studies can be ascribed different weights in the analysis; and (ii) an order statistic approach, reflecting the fact that a GSMA statistic can be computed for each chromosomal region or bin width across the various genome scan studies. Results We provide an Edgeworth approximation to the null distribution of the weighted GSMA statistic, examine the limiting distribution of the GSMA statistics under the order statistic formulation, and quantify the relevance of the pairwise correlations of the GSMA statistics across different bins to this limiting distribution. We also remark on aggregate criteria and multiple testing for determining significance of GSMA results. Conclusion Theoretical considerations detailed herein can lead to clarification and simplification of testing criteria for generalizations of the GSMA statistic. PMID:15717930
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and could help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Analysis and prediction of leucine-rich nuclear export signals.
la Cour, Tanja; Kiemer, Lars; Mølgaard, Anne; Gupta, Ramneek; Skriver, Karen; Brunak, Søren
2004-06-01
We present a thorough analysis of nuclear export signals and a prediction server, which we have made publicly available. The machine learning prediction method is a significant improvement over the generally used consensus patterns. Nuclear export signals (NESs) are extremely important regulators of the subcellular location of proteins. This regulation has an impact on transcription and other nuclear processes, which are fundamental to the viability of the cell. NESs are studied in relation to cancer, the cell cycle, cell differentiation and other important aspects of molecular biology. Our conclusion from this analysis is that the most important properties of NESs are accessibility and flexibility, allowing relevant proteins to interact with the signal. Furthermore, we show that it is not only the known hydrophobic residues that are important in defining a nuclear export signal. We employ both neural networks and hidden Markov models in the prediction algorithm and verify the method on the most recently discovered NESs. The NES predictor (NetNES) is made available for general use at http://www.cbs.dtu.dk/.
Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun
2017-01-17
This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as a bomb or explosive assault, or as a generalized model of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLFs-induced vulnerability analysis method from three aspects: identification of critical locations; comparison of infrastructure vulnerability to random failures, topologically localized failures, and SLFs; and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
A formulation of rotor-airframe coupling for design analysis of vibrations of helicopter airframes
NASA Technical Reports Server (NTRS)
Kvaternik, R. G.; Walton, W. C., Jr.
1982-01-01
A linear formulation of rotor airframe coupling intended for vibration analysis in airframe structural design is presented. The airframe is represented by a finite element analysis model; the rotor is represented by a general set of linear differential equations with periodic coefficients; and the connections between the rotor and airframe are specified through general linear equations of constraint. Coupling equations are applied to the rotor and airframe equations to produce one set of linear differential equations governing vibrations of the combined rotor airframe system. These equations are solved by the harmonic balance method for the system steady state vibrations. A feature of the solution process is the representation of the airframe in terms of forced responses calculated at the rotor harmonics of interest. A method based on matrix partitioning is worked out for quick recalculations of vibrations in design studies when only relatively few airframe members are varied. All relations are presented in forms suitable for direct computer implementation.
Comparison of software packages for detecting differential expression in RNA-seq studies
Seyednasrollah, Fatemeh; Laiho, Asta
2015-01-01
RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. PMID:24300110
Comparison of software packages for detecting differential expression in RNA-seq studies.
Seyednasrollah, Fatemeh; Laiho, Asta; Elo, Laura L
2015-01-01
RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. © The Author 2013. Published by Oxford University Press.
Parenting stress among caregivers of children with chronic illness: a systematic review.
Cousino, Melissa K; Hazen, Rebecca A
2013-09-01
To critically review, analyze, and synthesize the literature on parenting stress among caregivers of children with asthma, cancer, cystic fibrosis, diabetes, epilepsy, juvenile rheumatoid arthritis, and/or sickle cell disease. Method PsychInfo, MEDLINE, and Cumulative Index to Nursing and Allied Health Literature were searched according to inclusion criteria. Meta-analysis of 13 studies and qualitative analysis of 96 studies were conducted. Results Caregivers of children with chronic illness reported significantly greater general parenting stress than caregivers of healthy children (d = .40; p ≤ .0001). Qualitative analysis revealed that greater general parenting stress was associated with greater parental responsibility for treatment management and was unrelated to illness duration and severity across illness populations. Greater parenting stress was associated with poorer psychological adjustment in caregivers and children with chronic illness. Conclusion Parenting stress is an important target for future intervention. General and illness-specific measures of parenting stress should be used in future studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick
We analyze component evolution in general random intersection graphs (RIGs) and give conditions for the existence and uniqueness of the giant component. Our techniques generalize existing methods for the analysis of component evolution in RIGs. That is, we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdos-Renyi graphs. The main challenge arises from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step of the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we perceive RIGs as an important random structure which has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.
NASA Technical Reports Server (NTRS)
Christidis, Z. D.; Spar, J.
1980-01-01
Spherical harmonic analysis was used to analyze the observed climatological (C) fields of temperature at 850 mb, geopotential height at 500 mb, and sea level pressure. The spherical harmonic method was also applied to the corresponding "model climatological" fields (M) generated by a general circulation model, the "GISS climate model." The climate model was initialized with observed data for 1 December 1976 at 0000 GMT and allowed to generate five years of meteorological history. Monthly means of the above fields for the five years were computed and subjected to spherical harmonic analysis. Comparison of the spectral components of both sets, M and C, showed that the climate model generated reasonable 500 mb geopotential heights. The model temperature field at 850 mb exhibited a generally correct structure; however, the meridional temperature gradient was overestimated and overheating of the continents was observed in summer.
Use of deferiprone for the treatment of hepatic iron storage disease in three hornbills.
Sandmeier, Peter; Clauss, Marcus; Donati, Olivio F; Chiers, Koen; Kienzle, Ellen; Hatt, Jean-Michel
2012-01-01
3 hornbills (2 Papua hornbills [Aceros plicatus] and 1 long-tailed hornbill [Tockus albocristatus]) were evaluated because of general listlessness and loss of feather glossiness. Because hepatic iron storage disease was suspected, liver biopsy was performed and formalin-fixed liver samples were submitted for histologic examination and quantitative image analysis (QIA). Additional frozen liver samples were submitted for chemical analysis. Birds also underwent magnetic resonance imaging (MRI) under general anesthesia for noninvasive measurement of liver iron content. Serum biochemical analysis and analysis of feed were also performed. Results of diagnostic testing indicated that all 3 hornbills were affected with hepatic iron storage disease. The iron chelator deferiprone was administered (75 mg/kg [34.1 mg/lb], PO, once daily for 90 days). During the treatment period, liver biopsy samples were obtained at regular intervals for QIA and chemical analysis of the liver iron content, and follow-up MRI was performed. In all 3 hornbills, a rapid and large decrease in liver iron content was observed. All 3 methods for quantifying the liver iron content were able to verify the decrease in liver iron content. Orally administered deferiprone was found to effectively reduce the liver iron content in these 3 hornbills with iron storage disease. All 3 methods used to monitor the liver iron content (QIA, chemical analysis of liver biopsy samples, and MRI) had similar results, indicating that all of these methods should be considered for the diagnosis of iron storage disease and monitoring of liver iron content during treatment.
40 CFR 457.31 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... provided below, the general definitions, abbreviations and methods of analysis set forth in 40 CFR part 401 shall apply to this subpart. (b) The term “product” shall mean products from plants which blend...
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Diers, Anne R.; Keszler, Agnes; Hogg, Neil
2015-01-01
BACKGROUND S-Nitrosothiols have been recognized as biologically-relevant products of nitric oxide that are involved in many of the diverse activities of this free radical. SCOPE OF REVIEW This review serves to discuss current methods for the detection and analysis of protein S-nitrosothiols. The major methods of S-nitrosothiol detection include chemiluminescence-based methods and switch-based methods, each of which comes in various flavors with advantages and caveats. MAJOR CONCLUSIONS The detection of S-nitrosothiols is challenging and prone to many artifacts. Accurate measurements require an understanding of the underlying chemistry of the methods involved and the use of appropriate controls. GENERAL SIGNIFICANCE Nothing is more important to a field of research than robust methodology that is generally trusted. The field of S-Nitrosation has developed such methods but, as S-nitrosothiols are easy to introduce as artifacts, it is vital that current users learn from the lessons of the past. PMID:23988402
Hrabovský, Miroslav
2014-01-01
The purpose of the study is to present an extension of a one-dimensional speckle correlation method, which is primarily intended for determination of one-dimensional object translation, to the detection of general in-plane object translation. In that view, a numerical simulation of the displacement of the speckle field as a consequence of a general in-plane object translation is presented. The translation components a_x and a_y, representing the projections of the object's displacement vector a onto the x- and y-axes in the object plane (x, y), are evaluated separately by means of the extended one-dimensional speckle correlation method. Moreover, one can perform a distinct optimization of the method by reduction of the intensity values representing the detected speckle patterns. The theoretical relations between the translation components a_x and a_y of the object and the displacement of the speckle pattern for the selected geometrical arrangement are presented and used to verify the correctness of the proposed method. PMID:24592180
Remarks on turbulent constitutive relations
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Lumley, John L.
1993-01-01
The paper demonstrates that the concept of turbulent constitutive relations can be used to construct general models for various turbulent correlations. Some of the Generalized Cayley-Hamilton formulas for relating tensor products of higher extension to tensor products of lower extension are introduced. The combination of dimensional analysis and invariant theory can lead to 'turbulent constitutive relations' (or general turbulence models) for, in principle, any turbulent correlations. As examples, the constitutive relations for Reynolds stresses and scalar fluxes are derived. The results are consistent with ones from Renormalization Group (RNG) theory and two-scale Direct-Interaction Approximation (DIA) method, but with a more general form.
Comprehensive Micromechanics-Analysis Code - Version 4.0
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Bednarcyk, B. A.
2005-01-01
Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.
Review of the Air-Coupled Impact-Echo Method for Non-Destructive Testing
NASA Astrophysics Data System (ADS)
Nowotarski, Piotr; Dubas, Sebastian; Milwicz, Roman
2017-10-01
The article presents the general idea of the Air-Coupled Impact-Echo (ACIE) method, which is one of the non-destructive testing (NDT) techniques used in the construction industry. One of the main advantages of the general Impact-Echo (IE) method is that access to only one side of the structure is sufficient, which greatly facilitates testing of road facilities or places that are difficult to access and diagnose. The main purpose of the article is to present the state of the art of the ACIE method based on the publications available in the Thomson Reuters Web of Science Core Collection database (WOS), with further analysis of the mentioned methods. A deeper analysis was also performed on the ACIE publications from the last 3 years to identify the main focus of researchers and scientists and to define areas where additional examination and work are necessary. One of the main conclusions of the analysis is that ACIE methods can be widely used for NDT of concrete structures and can be performed faster than the standard IE method thanks to the air-coupled sensors. Moreover, 92.3% of the recent research described in publications connected with ACIE was performed in laboratories, and only 23.1% in situ on real structures. This indicates that the method requires further research to prepare test stands ready to perform analysis on real objects outside laboratory conditions. In addition, the algorithms used for data processing and presentation in the ACIE method are still being developed, and there is no universal solution available for all kinds of existing and possible defects, which indicates a possible research area for further work. The authors are of the opinion that the emerging ACIE method could be a good option for non-destructive testing, especially of concrete structures. The development and refinement of test stands that allow in-situ tests could shorten the overall research time and, combined with the implementation of higher-accuracy algorithms for data analysis, better precision of defect localization can be achieved.
[Detection of UGT1A1*28 Polymorphism Using Fragment Analysis].
Huang, Ying; Su, Jian; Huang, Xiaosui; Lu, Danxia; Xie, Zhi; Yang, Suqing; Guo, Weibang; Lv, Zhiyi; Wu, Hongsui; Zhang, Xuchao
2017-12-20
The UGT1A1*28 polymorphism of uridine-diphosphoglucuronosyl transferase 1A1 (UGT1A1) can reduce UGT1A1 enzymatic activity, which may lead to severe toxicities in patients who receive irinotecan. This study aimed to establish a fragment analysis method to detect the UGT1A1*28 polymorphism. A total of 286 blood specimens from lung cancer patients who were hospitalized in Guangdong General Hospital between April 2014 and May 2015 were tested for the UGT1A1*28 polymorphism by the fragment analysis method. Compared with Sanger sequencing, the precision and accuracy of the fragment analysis method were both 100%. Of the 286 patients, 236 (82.5%) harbored the TA6/6 genotype, 48 (16.8%) the TA6/7 genotype, and 2 (0.7%) the TA7/7 genotype. Our data suggest that the fragment analysis method is robust for detecting the UGT1A1*28 polymorphism in clinical practice. It is simple, time-saving, and easy to carry out.
Monakhova, Yulia B; Mushtakova, Svetlana P
2017-05-01
A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results from spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors of about 10%. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence or inaccessibility of reference materials.
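A minimal, hedged sketch of the chemometric step is shown below: scikit-learn's FastICA separates overlapping component spectra from a set of mixture spectra. The Gaussian bands and random concentrations are synthetic stand-ins for the real UV-vis profiles, and the calibration step that maps scores to absolute concentrations is not shown.

```python
import numpy as np
from sklearn.decomposition import FastICA

wavelengths = np.linspace(250, 450, 400)

def band(center, width):
    """Synthetic Gaussian absorption band standing in for a pure-component spectrum."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

pure = np.vstack([band(300, 12), band(330, 15), band(380, 20)])     # 3 hypothetical analytes
rng = np.random.default_rng(1)
C = rng.uniform(0.1, 1.0, size=(20, 3))                             # concentrations of 20 mixtures
mixtures = C @ pure + 0.01 * rng.normal(size=(20, wavelengths.size))

ica = FastICA(n_components=3, random_state=0)
scores = ica.fit_transform(mixtures)     # concentration-like scores, shape (20, 3), up to scale and sign
resolved = ica.mixing_.T                 # resolved spectral profiles, shape (3, 400)
print(scores.shape, resolved.shape)
```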
Asymptotic modal analysis of a rectangular acoustic cavity excited by wall vibration
NASA Technical Reports Server (NTRS)
Peretti, Linda F.; Dowell, Earl H.
1992-01-01
Asymptotic modal analysis, a method that has recently been developed for structural dynamical systems, has been applied to a rectangular acoustic cavity. The cavity had a flexible vibrating portion on one wall, and the other five walls were rigid. Banded white noise was transmitted through the flexible portion (plate) only. Both the location along the wall and the size of the plate were varied. The mean square pressure levels of the cavity interior were computed as a ratio of the result obtained from classical modal analysis to that obtained from asymptotic modal analysis for the various plate configurations. In general, this ratio converged to 1.0 as the number of responding modes increased. Intensification effects were found due to both the excitation location and the response location. The asymptotic modal analysis method was both efficient and accurate in solving the given problem. The method has advantages over the traditional methods that are used for solving dynamics problems with a large number of responding modes.
A method based on IHS cylindrical transform model for quality assessment of image fusion
NASA Astrophysics Data System (ADS)
Zhu, Xiaokun; Jia, Yonghong
2005-10-01
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have also become a research issue both at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, existing assessment methods have two defects: on the one hand, most indexes lack the theoretical support needed to compare different fusion methods; on the other hand, there is no uniform preference among the quantitative assessment indexes when they are applied to estimate fusion effects. That is, spatial resolution and spectral features cannot be analyzed simultaneously by these indexes, and there is no general method that unifies spatial and spectral feature assessment. Therefore, in this paper, on the basis of an approximate general model of four traditional fusion methods, including Intensity-Hue-Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features on the basis of a uniform preference, but can also compare source and fused images and reveal differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and agrees better with subjective assessment.
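A hedged illustration of the correlation-coefficient assessment idea follows, using the band mean as a simplified intensity component rather than the full IHS cylindrical transform; the images are random stand-ins and the "fusion" is a toy blend, so this only sketches the bookkeeping, not the paper's model.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(2)
pan = rng.random((256, 256))                 # panchromatic image (spatial reference), synthetic
ms = rng.random((3, 256, 256))               # original multispectral bands, synthetic
fused = 0.7 * ms + 0.3 * pan                 # stand-in for a fusion result

spatial_score = correlation(fused.mean(axis=0), pan)          # intensity vs. spatial reference
spectral_scores = [correlation(fused[b], ms[b]) for b in range(3)]  # band-wise spectral fidelity
print(spatial_score, spectral_scores)
```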
The learner’s perspective in GP teaching practices with multi-level learners: a qualitative study
2014-01-01
Background Medical students, junior hospital doctors on rotation and general practice (GP) registrars are undertaking their training in clinical general practices in increasing numbers in Australia. Some practices have four levels of learner. This study aimed to explore how multi-level teaching (also called vertical integration of GP education and training) is occurring in clinical general practice and the impact of such teaching on the learner. Methods A qualitative research methodology was used with face-to-face, semi-structured interviews of medical students, junior hospital doctors, GP registrars and GP teachers in eight training practices in the region that taught all levels of learners. Interviews were audio-recorded and transcribed. Qualitative analysis was conducted using thematic analysis techniques aided by the use of the software package N-Vivo 9. Primary themes were identified and categorised by the co-investigators. Results 52 interviews were completed and analysed. Themes were identified relating to both the practice learning environment and teaching methods used. A practice environment where there is a strong teaching culture, enjoyment of learning, and flexible learning methods, as well as learning spaces and organised teaching arrangements, all contribute to positive learning from a learners’ perspective. Learners identified a number of innovative teaching methods and viewed them as positive. These included multi-level learner group tutorials in the practice, being taught by a team of teachers, including GP registrars and other health professionals, and access to a supernumerary GP supervisor (also termed “GP consultant teacher”). Other teaching methods that were viewed positively were parallel consulting, informal learning and rural hospital context integrated learning. Conclusions Vertical integration of GP education and training generally impacted positively on all levels of learner. This research has provided further evidence about the learning culture, structures and teaching processes that have a positive impact on learners in the clinical general practice setting where there are multiple levels of learners. It has also identified some innovative teaching methods that will need further examination. The findings reinforce the importance of the environment for learning and learner centred approaches and will be important for training organisations developing vertically integrated practices and in their training of GP teachers. PMID:24645670
ZHU, Zhipei; ZHANG, Li; JIANG, Jiangling; LI, Wei; CAO, Xinyi; ZHOU, Zhirui; ZHANG, Tiansong; LI, Chunbo
2014-01-01
Background There is ongoing debate about the efficacy of placebos in the treatment of mental disorders. In randomized control trials (RCTs) about the treatment of generalized anxiety disorder, the administration of a psychological placebo or placement on a waiting list are the two most common control conditions. But there has never been a systematic comparison of the clinical effect of these different strategies. Aim Compare the change in symptom severity among individuals treated with cognitive behavioral therapy, provided a psychological placebo, or placed on a waiting list using data from RCTs on generalized anxiety disorder. Methods The following databases were searched for RCTs on generalized anxiety disorder: PubMed, PsycInfo, EMBASE, The Cochrane Library, CNKI, Chongqing VIP, Wanfang, Chinese Biological Medical Literature Database, and Taiwan Electronic Periodical Services. Studies were selected based on pre-defined inclusion and exclusion criteria and the quality of each included study – based on the risk of bias and the level of evidence – was formally assessed. Meta-analysis was conducted using RevMan5.3 and network meta-analyses comparing the three groups were conducted using R. Results Twelve studies with a combined sample size of 531 were included in the analysis. Compared to either control method (placebo or waiting list), cognitive behavioral therapy was more effective for generalized anxiety disorder. Provision of a psychological placebo was associated with a significantly greater reduction of symptoms than placement on a waiting list. Eight of the studies were classified as ‘high risk of bias’, and the overall level of evidence was classified as ‘moderate’, indicating that further research could change the overall results of the meta-analysis. Conclusions RCTs about the treatment of generalized anxiety disorders are generally of moderate quality; they indicate the superiority of CBT but the results cannot, as yet, be considered robust. There is evidence of a non-negligible treatment effect of psychological placebos used as control conditions in research studies. This effect should be considered when designing and interpreting the results of randomized controlled trials about the effectiveness of psychotherapeutic interventions. PMID:25642106
Enhancing the quality and credibility of qualitative analysis.
Patton, M Q
1999-12-01
Varying philosophical and theoretical orientations to qualitative inquiry remind us that issues of quality and credibility intersect with audience and intended research purposes. This overview examines ways of enhancing the quality and credibility of qualitative analysis by dealing with three distinct but related inquiry concerns: rigorous techniques and methods for gathering and analyzing qualitative data, including attention to validity, reliability, and triangulation; the credibility, competence, and perceived trustworthiness of the qualitative researcher; and the philosophical beliefs of evaluation users about such paradigm-based preferences as objectivity versus subjectivity, truth versus perspective, and generalizations versus extrapolations. Although this overview examines some general approaches to issues of credibility and data quality in qualitative analysis, it is important to acknowledge that particular philosophical underpinnings, specific paradigms, and special purposes for qualitative inquiry will typically include additional or substitute criteria for assuring and judging quality, validity, and credibility. Moreover, the context for these considerations has evolved. In early literature on evaluation methods the debate between qualitative and quantitative methodologists was often strident. In recent years the debate has softened. A consensus has gradually emerged that the important challenge is to match appropriately the methods to empirical questions and issues, and not to universally advocate any single methodological approach for all problems.
[Analysis of variance of repeated data measured by water maze with SPSS].
Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang
2007-01-01
To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated-measures designs. The repeated-measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there were relations among the repeatedly measured data. If any (P
Element analysis: a wavelet-based method for analysing time-localized events in noisy time series
2017-01-01
A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized ‘events’. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event’s ‘region of influence’ within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry. PMID:28484325
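The building block named above can be sketched as follows: a generalized Morse wavelet defined in the frequency domain, proportional to omega^beta exp(-omega^gamma) for omega > 0, applied to a signal via the FFT at a single assumed scale. This is a minimal sketch, not the authors' full element-analysis code, and the normalization, scale sweep, and maxima detection are omitted.

```python
import numpy as np

def morse_wavelet_freq(omega, beta=2.0, gamma=3.0):
    """Generalized Morse wavelet in the frequency domain (unnormalized, analytic)."""
    psi = np.zeros_like(omega)
    pos = omega > 0
    psi[pos] = omega[pos] ** beta * np.exp(-omega[pos] ** gamma)
    return psi

n = 1024
t = np.arange(n)
# a localized oscillatory "event" standing in for a real signal
x = np.exp(-0.5 * ((t - 512) / 30.0) ** 2) * np.cos(2 * np.pi * 0.05 * t)

omega = 2 * np.pi * np.fft.fftfreq(n)
scale = 8.0                                           # analysis scale (assumed)
W = np.fft.ifft(np.fft.fft(x) * np.conj(morse_wavelet_freq(scale * omega)))
print(np.abs(W).argmax())                             # transform magnitude peaks near the event centre
```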
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
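A hedged sketch of one of these methods, minimal polynomial extrapolation (MPE), is given below for a linear fixed-point iteration x <- M x + b. The least-squares formulation follows the standard description (set the last coefficient to 1, then normalize to weights gamma_j); the contraction matrix and right-hand side are random test data.

```python
import numpy as np

def mpe(X):
    """X has columns x_0 ... x_{k+1}; return the MPE approximation to the limit."""
    U = np.diff(X, axis=1)                              # u_j = x_{j+1} - x_j, columns u_0 ... u_k
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)                               # c_k = 1 by convention
    gamma = c / c.sum()                                 # weights summing to one
    return X[:, : len(gamma)] @ gamma                   # s = sum_j gamma_j x_j

rng = np.random.default_rng(3)
n, k = 50, 6
A = rng.random((n, n))
M = 0.9 * A / np.linalg.norm(A, 2)                      # contraction, spectral norm 0.9
b = rng.random(n)
s_true = np.linalg.solve(np.eye(n) - M, b)              # exact fixed point

X = np.empty((n, k + 2))
X[:, 0] = 0.0
for j in range(k + 1):
    X[:, j + 1] = M @ X[:, j] + b                       # plain fixed-point iterates

print(np.linalg.norm(X[:, -1] - s_true), np.linalg.norm(mpe(X) - s_true))  # compare errors
```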
5′ to 3′ nucleic acid synthesis using 3′-photoremovable protecting group
Pirrung, M.C.; Shuey, S.W.; Bradley, J.C.
1999-06-01
The present invention relates, in general, to a method of synthesizing a nucleic acid, and, in particular, to a method of effecting 5′ to 3′ nucleic acid synthesis. The method can be used to prepare arrays of oligomers bound to a support via their 5′ end. The invention also relates to a method of effecting mutation analysis using such arrays. The invention further relates to compounds and compositions suitable for use in such methods.
Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao
2016-01-01
At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze the EEG signals of epilepsy, cognitive impairment and brain-computer interface (BCI) studies, and have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods have deficiencies in real-time performance, generalization ability and dependence on labeled samples in the analysis of EEG signals. This mini review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376
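A minimal, hedged sketch of the SRC idea follows: a test sample is coded sparsely over a dictionary whose columns are training samples (here via orthogonal matching pursuit rather than L1 minimization), and the predicted class is the one whose atoms yield the smallest reconstruction residual. Synthetic feature vectors stand in for real EEG features.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
d, per_class = 64, 20
prototypes = {0: rng.normal(size=d), 1: rng.normal(size=d)}          # two hypothetical classes
D, labels = [], []
for c, proto in prototypes.items():
    D.append(proto[:, None] + 0.3 * rng.normal(size=(d, per_class)))  # noisy training samples
    labels += [c] * per_class
D = np.hstack(D)                                   # dictionary: one column per training sample
labels = np.array(labels)

def src_predict(y, n_nonzero=5):
    """Sparse-code y over D, then pick the class with the smallest class-wise residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, y)
    coef = omp.coef_
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ coef[labels == c]) for c in prototypes}
    return min(residuals, key=residuals.get)

test = prototypes[1] + 0.3 * rng.normal(size=d)
print(src_predict(test))                           # expected: 1
```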
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a range reasonable for engineering prediction purposes, the variational methods achieve a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
Probabilistic Structural Analysis Theory Development
NASA Technical Reports Server (NTRS)
Burnside, O. H.
1985-01-01
The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and space shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive relative to the finite element approach.
The stress analysis method for three-dimensional composite materials
NASA Astrophysics Data System (ADS)
Nagai, Kanehiro; Yokoyama, Atsushi; Maekawa, Zen'ichiro; Hamada, Hiroyuki
1994-05-01
This study proposes a stress analysis method for three-dimensionally fiber-reinforced composite materials. In this method, the rule of mixtures for composites is successfully applied to 3-D space in which material properties vary three-dimensionally. The fundamental formulas for Young's modulus, shear modulus, and Poisson's ratio are derived. We also discuss a strength estimation and an optimum material design technique for 3-D composite materials. The analysis is carried out for a triaxial orthogonally woven fabric, and the results are compared to experimental data in order to verify the accuracy of the method. The present methodology can be understood with basic material mechanics and elementary mathematics, so a computer program implementing the theory can be written without difficulty. Furthermore, this method can be applied to various types of 3-D composites because of its general-purpose characteristics.
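The elementary rule-of-mixtures step that the method generalizes to three dimensions can be sketched as follows, with the longitudinal (Voigt, iso-strain) and transverse (Reuss, iso-stress) estimates of stiffness; the fibre and matrix moduli are illustrative values for a carbon/epoxy-like system, not the paper's data.

```python
def rule_of_mixtures(Ef, Em, Vf):
    """Return (E_longitudinal, E_transverse) for fibre volume fraction Vf."""
    E1 = Ef * Vf + Em * (1.0 - Vf)                 # Voigt (iso-strain) estimate
    E2 = 1.0 / (Vf / Ef + (1.0 - Vf) / Em)         # Reuss (iso-stress) estimate
    return E1, E2

E1, E2 = rule_of_mixtures(Ef=230e9, Em=3.5e9, Vf=0.6)   # Pa, assumed values
print(f"E1 = {E1 / 1e9:.1f} GPa, E2 = {E2 / 1e9:.1f} GPa")
```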
Gabler, Nicole B; Duan, Naihua; Raneses, Eli; Suttner, Leah; Ciarametaro, Michael; Cooney, Elizabeth; Dubois, Robert W; Halpern, Scott D; Kravitz, Richard L
2016-07-16
When subgroup analyses are not correctly analyzed and reported, incorrect conclusions may be drawn and inappropriate treatments provided. Despite the increased recognition of the importance of subgroup analysis, little information exists regarding the prevalence, appropriateness, and study characteristics that influence subgroup analysis. The objective of this study is to determine (1) whether the use of subgroup analyses and multivariable risk indices has increased, (2) whether statistical methodology has improved over time, and (3) which study characteristics predict subgroup analysis. We randomly selected randomized controlled trials (RCTs) from five high-impact general medical journals during three time periods. Data from these articles were abstracted in duplicate using standard forms and a standard protocol. Subgroup analysis was defined as reporting any subgroup effect. Appropriate methods for subgroup analysis included a formal test for heterogeneity or interaction across treatment-by-covariate groups. We used logistic regression to determine the variables significantly associated with any subgroup analysis or, among RCTs reporting subgroup analyses, with using appropriate methodology. The final sample of 416 articles reported 437 RCTs, of which 270 (62 %) reported subgroup analysis. Among these, 185 (69 %) used appropriate methods to conduct such analyses. Subgroup analysis was reported in 62, 55, and 67 % of the articles from 2007, 2010, and 2013, respectively. The percentage using appropriate methods decreased over the three time points, from 77 % in 2007 to 63 % in 2013 (p < 0.05). Significant predictors of reporting subgroup analysis included industry funding (OR 1.94 (95 % CI 1.17, 3.21)), sample size (OR 1.98 per quintile (1.64, 2.40)), and a significant primary outcome (OR 0.55 (0.33, 0.92)). The use of appropriate methods to conduct subgroup analysis decreased by year (OR 0.88 (0.76, 1.00)) and was less common with industry funding (OR 0.35 (0.18, 0.70)). Only 33 (18 %) of the RCTs examined subgroup effects using a multivariable risk index. While we found no significant increase in the reporting of subgroup analysis over time, our results show a significant decrease in the reporting of subgroup analyses using appropriate methods during recent years. Industry-sponsored trials may more commonly report subgroup analyses, but without utilizing appropriate methods. Suboptimal reporting of subgroup effects may impact optimal physician-patient decision-making.
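The "appropriate method" referred to above, a formal treatment-by-covariate interaction test, can be illustrated with a hedged sketch using the statsmodels formula API on synthetic trial data; the effect sizes, sample size, and variable names are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "subgroup": rng.integers(0, 2, n),       # e.g. a binary baseline covariate
})
# synthetic binary outcome with a treatment-by-subgroup interaction built in
logit = -0.5 + 0.4 * df.treatment - 0.6 * df.treatment * df.subgroup
df["outcome"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = smf.logit("outcome ~ treatment * subgroup", data=df).fit(disp=0)
print(model.pvalues["treatment:subgroup"])    # formal interaction test, rather than per-subgroup p-values
```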
Acceleration of convergence of vector sequences
NASA Technical Reports Server (NTRS)
Sidi, A.; Ford, W. F.; Smith, D. A.
1983-01-01
A general approach to the construction of convergence acceleration methods for vector sequence is proposed. Using this approach, one can generate some known methods, such as the minimal polynomial extrapolation, the reduced rank extrapolation, and the topological epsilon algorithm, and also some new ones. Some of the new methods are easier to implement than the known methods and are observed to have similar numerical properties. The convergence analysis of these new methods is carried out, and it is shown that they are especially suitable for accelerating the convergence of vector sequences that are obtained when one solves linear systems of equations iteratively. A stability analysis is also given, and numerical examples are provided. The convergence and stability properties of the topological epsilon algorithm are likewise given.
Espinosa, Nieves; Søndergaard, Roar R; Jørgensen, Mikkel; Krebs, Frederik C
2016-04-21
Silver nanowires (AgNWs) were prepared on a 5 g scale using either the well-known batch synthesis following the polyol method or a new flow synthesis method. The AgNWs were employed as semitransparent electrode materials in organic photovoltaics and compared to traditional printed silver electrodes based on micron sized silver flakes using life cycle analysis and environmental impact analysis methods. The life cycle analysis of AgNWs confirms that they provide an avenue to low-impact semitransparent electrodes. We find that the benefit of AgNWs in terms of embodied energy is less pronounced than generally assumed but that the toxicological and environmental benefits are significant. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A projection operator method for the analysis of magnetic neutron form factors
NASA Astrophysics Data System (ADS)
Kaprzyk, S.; Van Laar, B.; Maniawski, F.
1981-03-01
A set of projection operators in matrix form has been derived on the basis of decomposition of the spin density into a series of fully symmetrized cubic harmonics. This set of projection operators allows a formulation of the Fourier analysis of magnetic form factors in a convenient way. The presented method is capable of checking the validity of various theoretical models used for spin density analysis up to now. The general formalism is worked out in explicit form for the fcc and bcc structures and deals with that part of spin density which is contained within the sphere inscribed in the Wigner-Seitz cell. This projection operator method has been tested on the magnetic form factors of nickel and iron.
A generalized modal shock spectra method for spacecraft loads analysis
NASA Technical Reports Server (NTRS)
Trubert, M.; Salama, M.
1979-01-01
Unlike the traditional shock spectra approach, the generalization presented in this paper permits elastic interaction between the spacecraft and launch vehicle in order to obtain accurate bounds on the spacecraft response and structural loads. In addition, the modal response from a previous launch vehicle transient analysis - with or without a dummy spacecraft - is exploited in order to define a modal impulse as a simple idealization of the actual forcing function. The idealized modal forcing function is then used to derive explicit expressions for an estimate of the bound on the spacecraft structural response and forces.
An Integrated Solution for Performing Thermo-fluid Conjugate Analysis
NASA Technical Reports Server (NTRS)
Kornberg, Oren
2009-01-01
A method has been developed which integrates a fluid flow analyzer and a thermal analyzer to produce both steady-state and transient results for 1-D, 2-D, and 3-D analysis models. The Generalized Fluid System Simulation Program (GFSSP) is a one-dimensional, general-purpose fluid analysis code which computes pressures and flow distributions in complex fluid networks. The MSC Systems Improved Numerical Differencing Analyzer (MSC.SINDA) is a one-dimensional, general-purpose thermal analyzer that solves network representations of thermal systems. Both GFSSP and MSC.SINDA have graphical user interfaces which are used to build the respective models and prepare them for analysis. The SINDA/GFSSP Conjugate Integrator (SGCI) is a form-based graphical integration program used to set input parameters for the conjugate analyses and run the models. This paper describes SGCI and its thermo-fluid conjugate analysis techniques and capabilities by presenting results from several example models, including the cryogenic chilldown of a copper pipe, a bar between two walls in a fluid stream, and a solid plate creating a phase change in a flowing fluid.
Recent advances in capillary electrophoretic migration techniques for pharmaceutical analysis.
Deeb, Sami El; Wätzig, Hermann; El-Hady, Deia Abd; Albishri, Hassan M; de Griend, Cari Sänger-van; Scriba, Gerhard K E
2014-01-01
Since their introduction about 30 years ago, CE techniques have had a significant impact on pharmaceutical analysis. The present review covers recent advances and applications of CE for the analysis of pharmaceuticals. Both small molecules and biomolecules such as proteins are considered. The applications range from the determination of drug-related substances to the analysis of counterions and the determination of physicochemical parameters. Furthermore, general considerations for CE methods in pharmaceutical analysis are described. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nonlinear dynamics of laser systems with elements of a chaos: Advanced computational code
NASA Astrophysics Data System (ADS)
Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Kuznetsova, A. A.; Buyadzhi, A. A.; Prepelitsa, G. P.; Ternovsky, V. B.
2017-10-01
A general, uniform chaos-geometric computational approach to the analysis, modelling and prediction of the non-linear dynamics of quantum and laser systems (lasers, quantum generators, etc.) with elements of deterministic chaos is briefly presented. The approach is based on advanced generalized techniques such as wavelet analysis, the multi-fractal formalism, the mutual information approach, correlation integral analysis, the false nearest neighbour algorithm, Lyapunov exponent analysis, the surrogate data method, and prediction models. Numerical data are presented for the first time on the topological and dynamical invariants (in particular, the correlation, embedding and Kaplan-Yorke dimensions, the Lyapunov exponents, the Kolmogorov entropy and other parameters) of a laser system (a semiconductor GaAs/GaAlAs laser with retarded feedback) operating in chaotic and hyperchaotic regimes.
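As an illustration of one of the invariant-analysis techniques listed above, the following is a minimal, hedged sketch of a Grassberger-Procaccia correlation integral estimate; the time series, embedding dimension and delay are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def correlation_integral(x, m=3, tau=5, radii=None):
    """Estimate the correlation sum C(r) for a scalar series x using delay embedding."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])  # delay vectors
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)     # Chebyshev distances
    d = d[np.triu_indices(n, k=1)]
    if radii is None:
        radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 20)
    c = np.array([(d < r).mean() for r in radii])                       # correlation sums
    return radii, c

# The correlation dimension is the slope of log C(r) vs log r in the scaling region.
rng = np.random.default_rng(0)
x = np.sin(0.1 * np.arange(1200)) + 0.05 * rng.standard_normal(1200)
r, c = correlation_integral(x)
mask = c > 0
slope = np.polyfit(np.log(r[mask]), np.log(c[mask]), 1)[0]
print(f"estimated correlation dimension ~ {slope:.2f}")
```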
A methodology for commonality analysis, with applications to selected space station systems
NASA Technical Reports Server (NTRS)
Thomas, Lawrence Dale
1989-01-01
The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.
Analyzing Visibility Configurations.
Dachsbacher, C
2011-04-01
Many algorithms, such as level-of-detail rendering and occlusion culling methods, make decisions based on the degree of visibility of an object, but do not analyze the distribution, or structure, of the visible and occluded regions across surfaces. We present an efficient method to classify different visibility configurations and show how this can be used on top of existing methods based on visibility determination. We adapt co-occurrence matrices for visibility analysis and generalize them to operate on clusters of triangular surfaces instead of pixels. We employ machine learning techniques to reliably classify the extracted feature vectors. Our method enables perceptually motivated level-of-detail methods for real-time rendering applications by detecting configurations with expected visual masking. We exemplify the versatility of our method with an analysis of area light visibility configurations in ray tracing and an area-to-area visibility analysis suitable for hierarchical radiosity refinement. Initial results demonstrate the robustness, simplicity, and performance of our method in synthetic scenes, as well as real applications.
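As a rough illustration of the co-occurrence idea carried over from pixels to surface clusters, the following hedged sketch builds a normalized co-occurrence matrix from per-cluster visibility labels; the labels and the cluster adjacency are hypothetical inputs, not the paper's data structures.

```python
import numpy as np

def cooccurrence(labels, adjacency, n_labels=2):
    """labels[i] in {0..n_labels-1} (e.g. 0=occluded, 1=visible) per cluster;
    adjacency is a list of (i, j) neighbouring-cluster pairs."""
    m = np.zeros((n_labels, n_labels))
    for i, j in adjacency:
        m[labels[i], labels[j]] += 1
        m[labels[j], labels[i]] += 1   # symmetric
    return m / max(m.sum(), 1)         # normalize to a joint distribution

labels = np.array([1, 1, 0, 1, 0, 0])                  # visibility of 6 clusters
adjacency = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]   # neighbouring clusters
m = cooccurrence(labels, adjacency)
# Texture-style statistics of m (contrast, homogeneity, ...) would form the
# feature vector handed to a classifier.
print(m)
```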
NASA Astrophysics Data System (ADS)
Ohyanagi, S.; Dileonardo, C.
2013-12-01
As a natural phenomenon, earthquake occurrence is difficult to predict. Statistical analysis of earthquake data was performed using candlestick chart and Bollinger Band methods. These statistical methods, commonly used in the financial world to analyze market trends, were tested against earthquake data. Earthquakes above Mw 4.0 located offshore of Sanriku (37.75°N ~ 41.00°N, 143.00°E ~ 144.50°E) from February 1973 to May 2013 were selected for analysis. Two specific patterns in earthquake occurrence were recognized through the analysis. One is a spreading of the candlesticks prior to the occurrence of events greater than Mw 6.0. A second pattern shows convergence in the Bollinger Band, which implies a positive or negative change in the trend of earthquakes. Both patterns match general models for the buildup and release of strain through the earthquake cycle, and are consistent with the characteristics of the candlestick chart and Bollinger Band analyses. These results show there is a high correlation between patterns in earthquake occurrence and trend analysis by these two statistical methods. The results of this study support the appropriateness of applying these financial analysis methods to the analysis of earthquake occurrence.
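For readers unfamiliar with the financial tools, the following hedged sketch shows how Bollinger Bands could be computed over a monthly earthquake-count series; the window length, the two-sigma band width and the synthetic counts are illustrative assumptions, not choices taken from the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
counts = pd.Series(rng.poisson(lam=8, size=120))   # e.g. events per month

window = 20
mid = counts.rolling(window).mean()                # middle band (moving average)
std = counts.rolling(window).std()
upper, lower = mid + 2 * std, mid - 2 * std        # Bollinger Bands

# A "convergence" (squeeze) of the band is the pattern interpreted as a change
# in trend in the financial analogy; the band width is one simple way to see it.
bandwidth = (upper - lower) / mid
print(bandwidth.tail())
```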
Cowling, Thomas E; Harris, Matthew; Majeed, Azeem
2017-01-01
Background The UK government plans to extend the opening hours of general practices in England. The ‘extended hours access scheme’ pays practices for providing appointments outside core times (08:00 to 18:30, Monday to Friday) for at least 30 min per 1000 registered patients each week. Objective To determine the association between extended hours access scheme participation and patient experience. Methods Retrospective analysis of a national cross-sectional survey completed by questionnaire (General Practice Patient Survey 2013–2014); 903 357 survey respondents aged ≥18 years old and registered to 8005 general practices formed the study population. Outcome measures were satisfaction with opening hours, experience of making an appointment and overall experience (on five-level interval scales from 0 to 100). Mean differences between scheme participation groups were estimated using multilevel random-effects regression, propensity score matching and instrumental variable analysis. Results Most patients were very (37.2%) or fairly satisfied (42.7%) with the opening hours of their general practices; results were similar for experience of making an appointment and overall experience. Most general practices participated in the extended hours access scheme (73.9%). Mean differences in outcome measures between scheme participants and non-participants were positive but small across estimation methods (mean differences ≤1.79). For example, scheme participation was associated with a 1.25 (95% CI 0.96 to 1.55) increase in satisfaction with opening hours using multilevel regression; this association was slightly greater when patients could not take time off work to see a general practitioner (2.08, 95% CI 1.53 to 2.63). Conclusions Participation in the extended hours access scheme has a limited association with three patient experience measures. This calls into question the expected impact of current plans to extend opening hours on patient experience. PMID:27343274
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
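The flavor of the cycle being analysed can be conveyed by a small numerical sketch. The following is a hedged, illustrative two-grid cycle for the 1-D Poisson problem -u'' = f with a weighted-Jacobi smoother; the grid size, smoother and iteration counts are arbitrary choices, and the paper's Fourier/local-mode analysis is not reproduced here.

```python
import numpy as np

def smooth(u, f, h, sweeps=2):
    """Weighted-Jacobi relaxation for -u'' = f with zero boundary values."""
    w = 2.0 / 3.0
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = u_new
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def two_grid(u, f, h):
    u = smooth(u, f, h)                       # pre-smoothing
    r = residual(u, f, h)
    r2 = r[::2]                               # restriction (injection) to the coarse grid
    n2 = len(r2)
    A2 = (np.diag(2.0 * np.ones(n2 - 2))
          - np.diag(np.ones(n2 - 3), 1)
          - np.diag(np.ones(n2 - 3), -1)) / (2 * h) ** 2
    e2 = np.zeros(n2)
    e2[1:-1] = np.linalg.solve(A2, r2[1:-1])  # exact coarse-grid solve of -e'' = r
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e2)  # linear prolongation
    u = u + e                                 # coarse-grid correction
    return smooth(u, f, h)                    # post-smoothing

n = 65                                        # fine-grid points, including boundaries
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution is u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print("max error vs exact solution:", np.max(np.abs(u - np.sin(np.pi * x))))
```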
SEU System Analysis: Not Just the Sum of All Parts
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth
2014-01-01
Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component level partitioning and then either: the most dominant SEU cross-sections (SEUs) are used in system error rate calculations; or the partition SEUs are summed to eventually obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect the following: cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved with our current scheme of SEU analysis for complex systems; and to provide alternative methods for improvement.
Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth
NASA Astrophysics Data System (ADS)
Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana
2017-10-01
In this paper, Haar's generalized wavelet functions are applied to the problem of ecological monitoring by the method of remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. An important role in the solution of this problem is played by classes of functions that were introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas; these classes contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the problem of using orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth makes it possible to receive medium- and high-spatial-resolution information from spacecraft and to conduct hyperspectral measurements. Spacecraft have tens or hundreds of spectral channels. To process the images, the apparatus of discrete orthogonal transforms, namely wavelet transforms, was used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular, generalized Haar wavelet transforms. General research methods: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images. Scientific novelty: the processing of archival satellite images, in particular signal filtering, is investigated from the point of view of an ill-posed problem. The regularization parameters for the discrete orthogonal transformations are determined.
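As a point of reference for the transforms discussed, here is a minimal sketch of one level of the ordinary discrete Haar wavelet transform applied to an image row, with a simple threshold filter; it illustrates the orthogonal-transform machinery only and does not reproduce the paper's generalized Haar system or the Tikhonov-regularized summation.

```python
import numpy as np

def haar_step(x):
    """One analysis level: scaled averages (approximation) and differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

row = np.array([9.0, 7.0, 3.0, 5.0, 6.0, 10.0, 2.0, 6.0])
a, d = haar_step(row)
# Simple filtering/denoising: discard small detail coefficients, then reconstruct.
d_filtered = np.where(np.abs(d) > 2.0, d, 0.0)
print(haar_inverse(a, d_filtered))
```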
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
The local properties of ocean surface waves by the phase-time method
NASA Technical Reports Server (NTRS)
Huang, Norden E.; Long, Steven R.; Tung, Chi-Chao; Donelan, Mark A.; Yuan, Yeli; Lai, Ronald J.
1992-01-01
A new approach using phase information to view and study the properties of frequency modulation, wave group structures, and wave breaking is presented. The method is applied to ocean wave time series data and a new type of wave group (containing the large 'rogue' waves) is identified. The method also has the capability of broad applications in the analysis of time series data in general.
Johnston, K M; Gustafson, P; Levy, A R; Grootendorst, P
2008-04-30
A major, often unstated, concern of researchers carrying out epidemiological studies of medical therapy is the potential impact on validity if estimates of treatment effects are biased due to unmeasured confounders. One technique for obtaining consistent estimates of treatment effects in the presence of unmeasured confounders is instrumental variables analysis (IVA). This technique has been well developed in the econometrics literature and is being increasingly used in epidemiological studies. However, the approach to IVA that is most commonly used in such studies is based on linear models, while many epidemiological applications make use of non-linear models, specifically generalized linear models (GLMs) such as logistic or Poisson regression. Here we present a simple method for applying IVA within the class of GLMs using the generalized method of moments approach. We explore some of the theoretical properties of the method and illustrate its use within both a simulation example and an epidemiological study where unmeasured confounding is suspected to be present. We estimate the effects of beta-blocker therapy on one-year all-cause mortality after an incident hospitalization for heart failure, in the absence of data describing disease severity, which is believed to be a confounder. 2008 John Wiley & Sons, Ltd.
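A hedged sketch of the general idea follows: instrumental variables for a Poisson GLM via just-identified generalized-method-of-moments equations, using simulated data and the multiplicative-error form of the moment condition. The data-generating process, the choice of moment condition and the parameter values are illustrative assumptions, not the authors' code or the beta-blocker dataset.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(2)
n = 20000
u = rng.normal(size=n)                                      # unmeasured confounder
z = rng.binomial(1, 0.5, size=n)                            # instrument
x = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * z + u))))   # treatment, driven by z and u
y = rng.poisson(np.exp(0.2 - 0.5 * x + 0.7 * u))            # outcome; true effect is -0.5

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])                        # just-identified instrument set

def iv_moments(beta):
    # Multiplicative-error moment conditions: E[ Z' (y * exp(-X beta) - 1) ] = 0
    return Z.T @ (y * np.exp(-(X @ beta)) - 1.0) / n

def mle_score(beta):
    # Ordinary Poisson score equations, ignoring the unmeasured confounding
    return X.T @ (y - np.exp(X @ beta)) / n

beta_iv = root(iv_moments, x0=np.zeros(2)).x
beta_naive = root(mle_score, x0=np.zeros(2)).x
print("IV (GMM) treatment effect:", round(beta_iv[1], 3))     # close to -0.5
print("naive Poisson estimate:   ", round(beta_naive[1], 3))  # biased by u
```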
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1989-01-01
In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
Algebraic solution for the forward displacement analysis of the general 6-6 stewart mechanism
NASA Astrophysics Data System (ADS)
Wei, Feng; Wei, Shimin; Zhang, Ying; Liao, Qizheng
2016-01-01
The solution for the forward displacement analysis (FDA) of the general 6-6 Stewart mechanism (i.e., the connection points of the moving and fixed platforms are not restricted to lying in a plane) has been extensively studied, but the efficiency of the solution remains to be effectively addressed. To this end, an algebraic elimination method is proposed for the FDA of the general 6-6 Stewart mechanism. The kinematic constraint equations are built using conformal geometric algebra (CGA). The kinematic constraint equations are transformed by a substitution of variables into seven equations with seven unknown variables. According to the characteristics of anti-symmetric matrices, the aforementioned seven equations can be further transformed into seven equations with four unknown variables by a substitution of variables using the Gröbner basis. The elimination weight is increased by changing the degree of one variable, and sixteen equations with four unknown variables can be obtained using the Gröbner basis. A 40th-degree univariate polynomial equation is derived by constructing a relatively small-sized 9×9 Sylvester resultant matrix. Finally, two numerical examples are employed to verify the proposed method. The results indicate that the proposed method can effectively improve the efficiency of the solution and reduce the computational burden because of the small-sized resultant matrix.
NASA Astrophysics Data System (ADS)
Pai, Akshay; Samala, Ravi K.; Zhang, Jianying; Qian, Wei
2010-03-01
Mammography reading by radiologists and breast tissue image interpretation by pathologists often lead to high False Positive (FP) rates. Similarly, current Computer Aided Diagnosis (CADx) methods tend to concentrate more on sensitivity, thus increasing the FP rates. A novel method is introduced here which employs a similarity-based approach to decrease the FP rate in the diagnosis of microcalcifications. This method employs Principal Component Analysis (PCA) and similarity metrics in order to achieve the proposed goal. The training and testing sets are divided into generalized (Normal and Abnormal) and more specific (Abnormal, Normal, Benign) classes. The performance of this method as a standalone classification system is evaluated in both cases (general and specific). In another approach, the probability of each case belonging to a particular class is calculated. If the probabilities are too close to classify, the augmented CADx system can be instructed to perform a detailed analysis of such cases. For normal cases with high probability, no further processing is necessary, thus reducing the computation time. Hence, this novel method can be employed in cascade with CADx to reduce the FP rate and also avoid unnecessary computation time. Using this methodology, false positive rates of 8% and 11% are achieved for mammography and cellular images, respectively.
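A generic, hedged sketch of such a pipeline, PCA projection followed by a similarity-based class score with a "too close to call" referral rule, is given below; the feature data, component count and cosine-similarity choice are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(3)
# Toy feature vectors for two training classes (e.g. normal vs abnormal regions)
X_normal = rng.normal(0.0, 1.0, size=(40, 50))
X_abnormal = rng.normal(1.0, 1.0, size=(40, 50))
X_train = np.vstack([X_normal, X_abnormal])
y_train = np.array([0] * 40 + [1] * 40)

pca = PCA(n_components=10).fit(X_train)
T_train = pca.transform(X_train)

def classify(x, reject_margin=0.05):
    t = pca.transform(x.reshape(1, -1))
    sims = cosine_similarity(t, T_train)[0]
    # Mean similarity to each class acts as a surrogate class "probability"
    score = np.array([sims[y_train == c].mean() for c in (0, 1)])
    if abs(score[0] - score[1]) < reject_margin:
        return "refer for detailed CADx analysis"       # probabilities too close
    return "normal" if score.argmax() == 0 else "abnormal"

print(classify(rng.normal(1.0, 1.0, size=50)))
```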
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
A quantitative polymerase chain reaction (qPCR) method for the detection of entercocci fecal indicator bacteria has been shown to be generally applicable for the analysis of temperate fresh (Great Lakes) and marine coastal waters and for providing risk-based determinations of wat...
ERIC Educational Resources Information Center
Collins, Cyleste C.; Dressler, William W.
2008-01-01
This study uses mixed methods and theory from cognitive anthropology to examine the cultural models of domestic violence among domestic violence agency workers, welfare workers, nurses, and a general population comparison group. Data collection and analysis uses quantitative and qualitative techniques, and the findings are integrated for…
40 CFR 420.02 - General definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (NOB) to determine if the nitrification is occurring; and (2) Analysis of the nitrogen balance to... obtained by the method specified in 40 CFR 136.3. (c) The term ammonia-N (or ammonia-nitrogen) means the...
40 CFR 420.02 - General definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (NOB) to determine if the nitrification is occurring; and (2) Analysis of the nitrogen balance to... obtained by the method specified in 40 CFR 136.3. (c) The term ammonia-N (or ammonia-nitrogen) means the...
40 CFR 420.02 - General definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (NOB) to determine if the nitrification is occurring; and (2) Analysis of the nitrogen balance to... obtained by the method specified in 40 CFR 136.3. (c) The term ammonia-N (or ammonia-nitrogen) means the...
40 CFR 420.02 - General definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (NOB) to determine if the nitrification is occurring; and (2) Analysis of the nitrogen balance to... obtained by the method specified in 40 CFR 136.3. (c) The term ammonia-N (or ammonia-nitrogen) means the...
40 CFR 420.02 - General definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (NOB) to determine if the nitrification is occurring; and (2) Analysis of the nitrogen balance to... obtained by the method specified in 40 CFR 136.3. (c) The term ammonia-N (or ammonia-nitrogen) means the...
Judicial perspectives on child passenger protection legislation
DOT National Transportation Integrated Search
1980-08-01
This report provides an analysis of judicial perspectives of general sessions judges concerning the Tennessee child passenger protection law. Two methods were employed to gather information: questionnaires were mailed to 103 judges while 12 judges par...
Ninth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1980-01-01
The general application of finite element methodology and the specific application of NASTRAN to a wide variety of static and dynamic structural problems is addressed. Comparisons with other approaches and new methods of analysis with NASTRAN are included.
Improving Fraud and Abuse Detection in General Physician Claims: A Data Mining Study
Joudaki, Hossein; Rashidian, Arash; Minaei-Bidgoli, Behrouz; Mahmoodi, Mahmood; Geraili, Bijan; Nasiri, Mahdi; Arab, Mohammad
2016-01-01
Background: We aimed to identify the indicators of healthcare fraud and abuse in general physicians’ drug prescription claims, and to identify a subset of general physicians that were more likely to have committed fraud and abuse. Methods: We applied a data mining approach to a major health insurance organization dataset of private sector general physicians’ prescription claims. It involved 5 steps: clarifying the nature of the problem and objectives, data preparation, indicator identification and selection, cluster analysis to identify suspect physicians, and discriminant analysis to assess the validity of the clustering approach. Results: Thirteen indicators were developed in total. Over half of the general physicians (54%) were ‘suspects’ of conducting abusive behavior. The results also identified 2% of physicians as suspects of fraud. Discriminant analysis suggested that the indicators demonstrated adequate performance in the detection of physicians who were suspected of perpetrating fraud (98%) and abuse (85%) in a new sample of data. Conclusion: Our data mining approach will help health insurance organizations in low- and middle-income countries (LMICs) in streamlining auditing approaches towards the suspect groups rather than routine auditing of all physicians. PMID:26927587
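The two analysis steps named in the abstract, clustering on claim indicators followed by discriminant analysis to validate the clusters, could look roughly like the following hedged sketch; the indicator matrix here is simulated, whereas the study used 13 indicators derived from real prescription claims.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_physicians, n_indicators = 300, 13
indicators = rng.gamma(shape=2.0, scale=1.0, size=(n_physicians, n_indicators))

X = StandardScaler().fit_transform(indicators)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Validate the clustering: can a discriminant model re-predict the cluster labels?
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, clusters, cv=5).mean()
print("cluster sizes:", np.bincount(clusters))
print("LDA cross-validated accuracy:", round(accuracy, 2))
```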
Tan, Lavinia; Hackenberg, Timothy D
2015-11-01
Pigeons' demand and preference for specific and generalized tokens were examined in a token economy. Pigeons could produce and exchange different colored tokens for food, for water, or for food or water. Token production was measured across three phases, which examined: (1) across-session price increases (the typical demand curve method); (2) within-session price increases (a progressive-ratio, PR, schedule); and (3) concurrent pairwise choices between the token types. Exponential demand curves were fitted to the response data and accounted for over 90% of the total variance. Demand curve parameter values, Pmax, Omax, and α, showed that demand was ordered in the following way: food tokens, generalized tokens, water tokens, both in Phase 1 and in Phase 3. This suggests that the preferences were predictable on the basis of elasticity and response output from the demand analysis. Pmax and Omax values failed to consistently predict breakpoints and peak response rates in the PR schedules in Phase 2, however, suggesting limits on a unitary conception of reinforcer efficacy. The patterns of generalized token production and exchange in Phase 3 suggest that the generalized tokens served as substitutes for the specific food and water tokens. Taken together, the present findings demonstrate the utility of behavioral economic concepts in the analysis of generalized reinforcement. © Society for the Experimental Analysis of Behavior.
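For context, demand curves of this kind are commonly fitted with the Hursh-Silberberg exponential demand equation; the hedged sketch below fits Q0 and alpha to illustrative consumption data with the range constant k fixed, and none of the numbers are taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

prices = np.array([1, 2, 5, 10, 20, 40, 80], dtype=float)     # FR "price" per token
consumption = np.array([95, 90, 75, 55, 30, 12, 4], dtype=float)

k = 3.0  # range constant, often held fixed across conditions

def exponential_demand(c, q0, alpha):
    # Exponential demand equation: log10 Q = log10 Q0 + k * (exp(-alpha * Q0 * C) - 1)
    return np.log10(q0) + k * (np.exp(-alpha * q0 * c) - 1.0)

(q0, alpha), _ = curve_fit(exponential_demand, prices, np.log10(consumption),
                           p0=[95.0, 0.0005], maxfev=10000)
print(f"Q0 ~ {q0:.1f}, alpha ~ {alpha:.5f}  (larger alpha means more elastic demand)")
```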
2016-09-01
Refinement of Out of Circularity and Thickness Measurements of a Cylinder for Finite Element Analysis... significant effect on the collapse strength and must be accurately represented in finite element analysis to obtain accurate results. Often it is necessary... to interpolate measurements from a relatively coarse grid to a refined finite element model, and methods that have wide general acceptance are
Thematic Progression in a Cardiologist's Text: Context, Frames and Progression.
ERIC Educational Resources Information Center
Salter, Robert T.
Thematic progression (TP) is examined in the text of a communication between a cardiologist and a general practitioner concerning a patient, offering a clinical diagnosis of the patient's condition. Analysis of the discourse looks at the field, tenor, and mode of the communication as a context for TP. The methods of analysis are first described,…
Methods for improved preconcentrators
Manginell, Ronald P [Albuquerque, NM; Lewis, Patrick R [Albuquerque, NM; Okandan, Murat [Edgewood, NM
2010-06-01
The present invention relates generally to chemical analysis (e.g. by gas chromatography), and in particular to a compact chemical preconcentrator formed on a substrate with a heatable sorptive membrane that can be used to accumulate and concentrate one or more chemical species of interest over time and then rapidly release the concentrated chemical species upon demand for chemical analysis.
ERIC Educational Resources Information Center
Leventhal, Brian C.; Stone, Clement A.
2018-01-01
Interest in Bayesian analysis of item response theory (IRT) models has grown tremendously due to the appeal of the paradigm among psychometricians, advantages of these methods when analyzing complex models, and availability of general-purpose software. Possible models include models which reflect multidimensionality due to designed test structure,…
ERIC Educational Resources Information Center
Moallem, Mahnaz
A study was conducted to analyze current job announcements in the field of instructional design and technology and to produce descriptive information that portrays the required skills and areas of knowledge for instructional technology graduates. Content analysis, in its general terms, was used as the research method for this study. One hundred…
Content Analysis of Curriculum-Related Studies in Turkey between 2000 and 2014
ERIC Educational Resources Information Center
Aksan, Elif; Baki, Adnan
2017-01-01
This study aims to carry out a content analysis determining the general framework of studies related to curriculum. For this purpose, 154 curriculum-related studies carried out in Turkey between 2000 and 2014 were examined in terms of year, sample, method, data collection technique, purpose, and result. The most studies related to curriculum were…
Trejos, Tatiana; Montero, Shirly; Almirall, José R
2003-08-01
The discrimination potential of Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) is compared with previously reported solution ICP-MS methods using external calibration (EC) with internal standardization and a newly reported solution isotope dilution (ID) method for the analysis of two different glass populations. A total of 91 different glass samples were used for the comparison study; refractive index and elemental composition were measured by the techniques mentioned above. One set consisted of 45 headlamps taken from a variety of automobiles that represent a range of 20 years of manufacturing dates. A second set consisted of 46 automotive glasses (side windows, rear windows, and windshields) representing casework glass from different vehicle manufacturers over several years. The element menu for the LA-ICP-MS and EC-ICP-MS methods includes Mg, Al, Ca, Mn, Ce, Ti, Zr, Sb, Ga, Ba, Rb, Sm, Sr, Hf, La, and Pb. The ID method was limited to the analysis of two isotopes each of Mg, Sr, Zr, Sb, Ba, Sm, Hf, and Pb. Laser ablation analyses were performed with a Q-switched Nd:YAG laser (266 nm, 6 mJ output energy). The laser was used in depth profile mode while sampling with a 50 microm spot size for 50 sec at 10 Hz (500 shots). The typical bias for the analysis of NIST 612 by LA-ICP-MS was less than 5% in all cases and typically better than 5% for most isotopes. The precision for the vast majority of the element menu was generally less than 10% for all the methods when NIST 612 was measured (40 microg x g(-1)). Method detection limits (MDL) for the EC and LA-ICP-MS methods were similar and generally reported as less than 1 microg x g(-1) for the analysis of NIST 612. While the solution sample introduction methods using EC and ID provided excellent sensitivity and precision, these methods have the disadvantage of destroying the sample and involve complex sample preparation. The laser ablation method was simpler, faster, and produced discrimination comparable to the EC-ICP-MS and ID-ICP-MS methods. LA-ICP-MS can offer an excellent alternative to solution analysis of glass in forensic casework samples.
Measurement methods for human exposure analysis.
Lioy, P J
1995-01-01
The general methods used to complete measurements of human exposures are identified, and illustrations are provided for the cases of indirect and direct methods used for exposure analysis. The application of the techniques for external measurements of exposure, microenvironmental and personal monitors, is placed in the context of the need to test hypotheses concerning the biological effects of concern. The linkage of external measurements to measurements made in biological fluids is explored for a suite of contaminants. This information is placed in the context of the scientific framework used to conduct exposure assessment. Examples are taken from research on volatile organics and for a large scale problem: hazardous waste sites. PMID:7635110
Image encryption using random sequence generated from generalized information domain
NASA Astrophysics Data System (ADS)
Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu
2016-05-01
A novel image encryption method based on a random sequence generated from the generalized information domain and a permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image, while random sequences are treated as keystreams. A new factor, called the drift factor, is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method approximate a one-time pad. Experimental results show that the random sequences pass the NIST statistical test with a high ratio, and extensive analysis demonstrates that the new encryption scheme has superior security.
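A generic permutation-diffusion cipher of the kind described can be sketched as follows; the pseudorandom generator here is a plain seeded NumPy generator standing in for the paper's generalized-information sequence generator, so this illustrates the architecture only, not the proposed scheme or its security.

```python
import numpy as np

def encrypt(img, key):
    rng = np.random.default_rng(key)
    perm = rng.permutation(img.size)                         # key-driven P-box (permutation stage)
    shuffled = img.flatten()[perm]
    keystream = rng.integers(0, 256, size=img.size, dtype=np.uint8)
    return np.bitwise_xor(shuffled, keystream).reshape(img.shape)  # keystream XOR (diffusion stage)

def decrypt(cipher, key):
    rng = np.random.default_rng(key)
    perm = rng.permutation(cipher.size)                      # rebuild the same P-box from the key
    keystream = rng.integers(0, 256, size=cipher.size, dtype=np.uint8)
    shuffled = np.bitwise_xor(cipher.flatten(), keystream)
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled                                    # invert the permutation
    return flat.reshape(cipher.shape)

img = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
cipher = encrypt(img, key=1234)
assert np.array_equal(decrypt(cipher, key=1234), img)
print(cipher[:2])
```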
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
NASA Astrophysics Data System (ADS)
Miura, Yasunari; Sugiyama, Yuki
2017-12-01
We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, one of the dimensionality-reduction techniques, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of the traffic model called the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
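The coarse-graining step can be illustrated with a minimal, hedged diffusion-map computation on toy snapshot data; the kernel bandwidth and the synthetic data are placeholders, and the optimal velocity model itself is not simulated here.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """X: (n_samples, n_features) snapshots. Returns the leading diffusion coordinates."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                       # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)        # row-normalize to a Markov transition matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    idx = order[1:n_coords + 1]                 # skip the trivial constant eigenvector
    return evecs.real[:, idx] * evals.real[idx]

rng = np.random.default_rng(5)
# Toy snapshots: points on a noisy circle embedded in 10 dimensions
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)]) @ rng.normal(size=(2, 10))
X += 0.05 * rng.normal(size=X.shape)
coords = diffusion_map(X, eps=1.0)
print(coords.shape)   # (200, 2): a low-dimensional description of the collective state
```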
Developing patient reference groups within general practice: a mixed-methods study.
Smiddy, Jane; Reay, Joanne; Peckham, Stephen; Williams, Lorraine; Wilson, Patricia
2015-03-01
Clinical commissioning groups (CCGs) are required to demonstrate meaningful patient and public engagement and involvement (PPEI). Recent health service reforms have included financial incentives for general practices to develop patient reference groups (PRGs). To explore the impact of the patient participation direct enhanced service (DES) on the development of PRGs, the influence of PRGs on decision making within general practice, and their interface with CCGs. A mixed-methods approach within three case study sites in England. Three case study sites were tracked for 18 months as part of an evaluation of PPEI in commissioning. A sub-study focused on PRGs utilising documentary and web-based analysis; results were mapped against the findings of the main study. Evidence highlighted variations in the establishment of PRGs, with the number of active PRGs via practice websites ranging from 27% to 93%. Such groups were given a number of descriptions, such as patient reference groups, patient participation groups, and patient forums. Data analysis highlighted that the mode of operation varied between virtual and tangible groups and between GP-led and patient-led groups; this analysis enabled the construction of a typology of PRGs. The evidence reviewed suggested that groups functioned within the parameters of the DES, with activities limited to the practice level. Data analysis highlighted a lack of strategic vision in relation to such groups, particularly their role within an overall patient and PPEI framework. The findings identified diversity in the operationalisation of PRGs. Their development does not appear linked to a strategic vision or overall PPEI framework. Although local pragmatic issues are important to patients, GPs must ensure that PRGs develop strategic direction if health reforms are to be addressed. © British Journal of General Practice 2015.
Artificial intelligence in radiology.
Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L
2018-05-17
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure was introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
Solving Graph Laplacian Systems Through Recursive Bisections and Two-Grid Preconditioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponce, Colin; Vassilevski, Panayot S.
2016-02-18
We present a parallelizable direct method for computing the solution to graph Laplacian-based linear systems derived from graphs that can be hierarchically bipartitioned with small edge cuts. For a graph of size n with constant-size edge cuts, our method decomposes a graph Laplacian in time O(n log n), and then uses that decomposition to perform a linear solve in time O(n log n). We then use the developed technique to design a preconditioner for graph Laplacians that do not have this property. Finally, we augment this preconditioner with a two-grid method that accounts for much of the preconditioner's weaknesses. We present an analysis of this method, as well as a general theorem for the condition number of a general class of two-grid support graph-based preconditioners. Numerical experiments illustrate the performance of the studied methods.
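To illustrate where such a preconditioner would be used, the following hedged sketch solves a graph Laplacian system with preconditioned conjugate gradients; the simple Jacobi preconditioner is a stand-in for the paper's bisection-based and two-grid preconditioners, and the random graph is illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(6)
n = 200
# A path graph (guaranteeing connectivity) plus some random extra edges
rows = np.concatenate([np.arange(n - 1), rng.integers(0, n, 100)])
cols = np.concatenate([np.arange(1, n), rng.integers(0, n, 100)])
keep = rows != cols
rows, cols = rows[keep], cols[keep]
A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
A = ((A + A.T) > 0).astype(float)                       # symmetric 0/1 adjacency
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A     # graph Laplacian

b = rng.normal(size=n)
b -= b.mean()                                           # consistent RHS (orthogonal to the null space)

diag = L.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / diag)   # Jacobi preconditioner (simple stand-in)

x, info = cg(L.tocsr(), b, M=M, maxiter=5000)
print("converged:", info == 0, "residual norm:", np.linalg.norm(L @ x - b))
```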
Jahanimoghadam, Fatemeh; Horri, Azadeh; Hasheminejad, Naimeh; Hashemi Nejad, Naser; Baneshi, Mohammad Reza
2018-01-01
Statement of the Problem: In dentistry, incorrect working posture is the most important cause of musculoskeletal disorders. Purpose: The aim of this research was to evaluate the work postures of general dentists and specialists using the rapid entire body assessment (REBA) method. Materials and Method: In this cross-sectional study, work postures were assessed in 90 dentists by employing the REBA method. A stratified sampling method was used. Data were analyzed by analysis of variance (ANOVA), independent t-test and Pearson’s correlation test in SPSS 19. Results: The results showed that the work postures of 90% of dentists were at moderate- to high-risk levels. Among the specialists, periodontists, pedodontists and oral and maxillofacial surgeons had the worst body postures. Conclusion: In general, dentists’ working postures need improvement and, consequently, more comprehensive ergonomic training and promotion is required in dentistry curricula at universities. PMID:29854890
Weight estimation techniques for composite airplanes in general aviation industry
NASA Technical Reports Server (NTRS)
Paramasivam, T.; Horn, W. J.; Ritter, J.
1986-01-01
Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
Quasipolynomial generalization of Lotka-Volterra mappings
NASA Astrophysics Data System (ADS)
Hernández-Bermejo, Benito; Brenig, Léon
2002-07-01
In recent years, it has been shown that Lotka-Volterra mappings constitute a valuable tool from both the theoretical and the applied points of view, with developments in very diverse fields such as physics, population dynamics, chemistry and economy. The purpose of this work is to demonstrate that many of the most important ideas and algebraic methods that constitute the basis of the quasipolynomial formalism (originally conceived for the analysis of ordinary differential equations) can be extended into the mapping domain. The extension of the formalism into the discrete-time context is remarkable as far as the quasipolynomial methodology had never been shown to be applicable beyond the differential case. It will be demonstrated that Lotka-Volterra mappings play a central role in the quasipolynomial formalism for the discrete-time case. Moreover, the extension of the formalism into the discrete-time domain allows a significant generalization of Lotka-Volterra mappings as well as a whole transfer of algebraic methods into the discrete-time context. The result is a novel and more general conceptual framework for the understanding of Lotka-Volterra mappings as well as a new range of possibilities that become open not only for the theoretical analysis of Lotka-Volterra mappings and their generalizations, but also for the development of new applications.
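For concreteness, a common discrete-time (Ricker-type exponential) Lotka-Volterra mapping can be iterated as in the hedged sketch below; the specific map form and parameter values are illustrative choices, and the quasipolynomial formalism itself is not reproduced here.

```python
import numpy as np

def lv_map(x, r, A):
    # One common discrete Lotka-Volterra form: x_i(t+1) = x_i(t) * exp(r_i + sum_j A_ij x_j(t))
    return x * np.exp(r + A @ x)

r = np.array([0.5, -0.2])                   # intrinsic rates
A = np.array([[-0.6, -0.2],                 # interaction matrix
              [0.3, -0.4]])
x = np.array([0.5, 0.5])

trajectory = [x]
for _ in range(100):
    x = lv_map(x, r, A)
    trajectory.append(x)
trajectory = np.array(trajectory)
print("final state:", trajectory[-1])       # settles near the interior fixed point A x = -r
```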
40 CFR Table 6 of Subpart Bbbbbbb... - General Provisions
Code of Federal Regulations, 2010 CFR
2010-07-01
... under which performance tests must be conducted. § 63.7(e)(2)-(4) Conduct of Performance Tests and Data Reduction Yes. § 63.7(f)-(h) Use of Alternative Test Method; Data Analysis, Recordkeeping, and Reporting...
NASA Technical Reports Server (NTRS)
Keith, J. S.; Ferguson, D. R.; Heck, P. H.
1972-01-01
The computer program, Streamtube Curvature Analysis, is described for the engineering user and for the programmer. The user-oriented documentation includes a description of the mathematical governing equations, their use in the solution, and the method of solution. The general logical flow of the program is outlined and detailed instructions for program usage and operation are explained. General procedures for program use and the program capabilities and limitations are described. From the standpoint of the programmer, the overlay structure of the program is described. The various storage tables are defined and their uses explained. The input and output are discussed in detail. The program listing includes numerous comments so that the logical flow within the program is easily followed. A test case showing input data and output format is included, as well as an error printout description.
Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization
NASA Technical Reports Server (NTRS)
Witzberger, Kevin E.; Zeiler, Tom
2012-01-01
This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable for a general trajectory optimization problem. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF trajectory TLI optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.
Performance Analysis and Design Synthesis (PADS) computer program. Volume 3: User manual
NASA Technical Reports Server (NTRS)
1972-01-01
The two-fold purpose of the Performance Analysis and Design Synthesis (PADS) computer program is discussed. The program can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general purpose branched trajectory optimization program. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent. The second module uses the method of quasi-linearization, which requires a starting solution from the first trajectory module.
Application of abstract harmonic analysis to the high-speed recognition of images
NASA Technical Reports Server (NTRS)
Usikov, D. A.
1979-01-01
Methods are constructed for rapidly computing correlation functions using the theory of abstract harmonic analysis. The theory developed includes, as a particular case, the familiar Fourier transform method for a correlation function, which makes it possible to find images independently of their translation in the plane. Two examples of the application of the general theory described are the search for images independent of their rotation and scale, and the search for images independent of their translations and rotations in the plane.
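The particular case mentioned, the Fourier-transform route to a correlation function for translation-invariant matching, can be sketched as follows; the image, template and offsets are synthetic examples, not material from the report.

```python
import numpy as np

def cross_correlation(image, template):
    """Circular cross-correlation via FFT; the peak location gives the best shift."""
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template, s=image.shape)     # zero-pad template to image size
    return np.fft.ifft2(F_img * np.conj(F_tpl)).real

rng = np.random.default_rng(7)
template = rng.normal(size=(16, 16))
image = np.zeros((64, 64))
image[20:36, 30:46] = template                       # template placed at offset (20, 30)

corr = cross_correlation(image, template)
shift = np.unravel_index(np.argmax(corr), corr.shape)
print("recovered offset:", shift)                    # expected (20, 30)
```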
Isomorphisms between Petri nets and dataflow graphs
NASA Technical Reports Server (NTRS)
Kavi, Krishna M.; Buckles, Billy P.; Bhat, U. Narayan
1987-01-01
Dataflow graphs are a generalized model of computation. Uninterpreted dataflow graphs with nondeterminism resolved via probabilities are shown to be isomorphic to a class of Petri nets known as free choice nets. Petri net analysis methods are readily available in the literature and this result makes those methods accessible to dataflow research. Nevertheless, combinatorial explosion can render Petri net analysis inoperative. Using a previously known technique for decomposing free choice nets into smaller components, it is demonstrated that, in principle, it is possible to determine aspects of the overall behavior from the particular behavior of components.
A discourse on sensitivity analysis for discretely-modeled structures
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally, but not exclusively, aimed at finite element modeled structures. Topics included are: selection of finite difference step sizes; special considerations for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
Classification and Identification of Bacteria by Mass Spectrometry and Computational Analysis
Sauer, Sascha; Freiwald, Anja; Maier, Thomas; Kube, Michael; Reinhardt, Richard; Kostrzewa, Markus; Geider, Klaus
2008-01-01
Background In general, the definite determination of bacterial species is a tedious process and requires extensive manual labour. Novel technologies for bacterial detection and analysis can therefore help microbiologists in minimising their efforts in developing a number of microbiological applications. Methodology We present a robust, standardized procedure for automated bacterial analysis that is based on the detection of patterns of protein masses by MALDI mass spectrometry. We particularly applied the approach for classifying and identifying strains in species of the genus Erwinia. Many species of this genus are associated with disastrous plant diseases such as fire blight. Using our experimental procedure, we created a general bacterial mass spectra database that currently contains 2800 entries of bacteria of different genera. This database will be steadily expanded. To support users with a feasible analytical method, we developed and tested comprehensive software tools that are demonstrated herein. Furthermore, to gain additional analytical accuracy and reliability in the analysis we used genotyping of single nucleotide polymorphisms by mass spectrometry to unambiguously determine closely related strains that are difficult to distinguish by only relying on protein mass pattern detection. Conclusions With the method for bacterial analysis, we could identify fire blight pathogens from a variety of biological sources. The method can be used for a number of additional bacterial genera. Moreover, the mass spectrometry approach presented allows the integration of data from different biological levels such as the genome and the proteome. PMID:18665227
Knowles, Justin R.; Skutnik, Steven E.; Glasgow, David C.; ...
2016-06-23
Rapid non-destructive assay methods for trace fissile material analysis are needed in both the nuclear forensics and safeguards communities. To address these needs, research at the High Flux Isotope Reactor Neutron Activation Analysis laboratory has developed a generalized non-destructive assay method to characterize materials containing fissile isotopes. This method relies on gamma-ray emissions from short-lived fission products and capitalizes on differences in fission product yields to identify fissile compositions of trace material samples. Although prior work has explored the use of short-lived fission product gamma-ray measurements, the proposed method is the first to provide a holistic characterization of isotopic identification, mass ratios, and absolute mass determination. Successful single fissile isotope mass recoveries of less than 6% error have been conducted on standards of 235U and 239Pu as low as 12 nanograms in less than 10 minutes. Additionally, mixtures of fissile isotope standards containing 235U and 239Pu have been characterized as low as 229 nanograms of fissile mass with less than 12% error. The generalizability of this method is illustrated by evaluating different fissile isotopes, mixtures of fissile isotopes, and two different irradiation positions in the reactor. Furthermore, it is anticipated that this method will be expanded to characterize additional fissile nuclides, utilize various irradiation sources, and account for increasingly complex sample matrices.
NASA Astrophysics Data System (ADS)
Knowles, Justin; Skutnik, Steven; Glasgow, David; Kapsimalis, Roger
2016-10-01
Rapid nondestructive assay methods for trace fissile material analysis are needed in both nuclear forensics and safeguards communities. To address these needs, research at the Oak Ridge National Laboratory High Flux Isotope Reactor Neutron Activation Analysis facility has developed a generalized nondestructive assay method to characterize materials containing fissile isotopes. This method relies on gamma-ray emissions from short-lived fission products and makes use of differences in fission product yields to identify fissile compositions of trace material samples. Although prior work has explored the use of short-lived fission product gamma-ray measurements, the proposed method is the first to provide a complete characterization of isotopic identification, mass ratios, and absolute mass determination. Successful single fissile isotope mass recoveries of less than 6% recovery bias have been conducted on standards of 235U and 239Pu as low as 12 ng in less than 10 minutes. Additionally, mixtures of fissile isotope standards containing 235U and 239Pu have been characterized as low as 198 ng of fissile mass with less than 7% recovery bias. The generalizability of this method is illustrated by evaluating different fissile isotopes, mixtures of fissile isotopes, and two different irradiation positions in the reactor. It is anticipated that this method will be expanded to characterize additional fissile nuclides, utilize various irradiation facilities, and account for increasingly complex sample matrices.
NASA Astrophysics Data System (ADS)
Reynders, Edwin P. B.; Langley, Robin S.
2018-08-01
The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also those of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems, in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.
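As a rough illustration of the distinction between harmonic and band-averaged response statistics, the toy Monte Carlo below randomizes the natural frequencies of a simple modal receptance (a crude stand-in for a diffuse subsystem) and compares the ensemble variance of the single-frequency energy response with that of the band-averaged response. It is not the hybrid FE-SEA formulation of the paper; all modal parameters are invented.

```python
# Toy Monte Carlo illustration (not the authors' hybrid FE-SEA formulation):
# a "diffuse" component is mimicked by a modal receptance whose natural
# frequencies are randomized across an ensemble. Compare the ensemble
# variance of the harmonic (single-frequency) energy response with that of
# the band-averaged response.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 2000                            # ensemble realizations
n_modes = 40                                # modes near the band of interest
eta = 0.01                                  # damping loss factor
freqs = np.linspace(800.0, 1200.0, 201)     # analysis frequencies [Hz]
band_mask = (freqs >= 950.0) & (freqs <= 1050.0)
f0 = 1000.0                                 # single harmonic frequency [Hz]

harmonic = np.empty(n_samples)              # |H|^2 at one frequency
band_avg = np.empty(n_samples)              # |H|^2 averaged over the band

for s in range(n_samples):
    fn = rng.uniform(750.0, 1250.0, n_modes)    # randomized natural frequencies
    phi2 = rng.chisquare(1, n_modes)            # squared drive-point mode shapes
    def energy(f):
        w, wn = 2 * np.pi * f, 2 * np.pi * fn
        H = np.sum(phi2 / (wn**2 - w**2 + 1j * eta * wn**2))
        return np.abs(H) ** 2
    harmonic[s] = energy(f0)
    band_avg[s] = np.mean([energy(f) for f in freqs[band_mask]])

for name, x in [("harmonic", harmonic), ("band-averaged", band_avg)]:
    print(f"{name:14s}: relative variance = {x.var() / x.mean()**2:.3f}")
```

The band-averaged statistic typically shows a much smaller relative variance, since averaging over many resonances smooths out realization-to-realization fluctuations.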
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, Donald R.
ISO Technical Report (TR) 14187 provides an introduction to (and examples of) the information that can be obtained about nanostructured materials using surface-analysis tools. In addition, both general issues and challenges associated with characterising nanostructured materials and the specific opportunities and challenges associated with individual analytical methods are identified. As the size of objects or components of materials approaches a few nanometres, the distinctions among 'bulk', 'surface' and 'particle' analysis blur. This Technical Report focuses on issues specifically relevant to surface chemical analysis of nanostructured materials. The report considers a variety of analysis methods but focuses on techniques that are in the domain of ISO/TC 201, including Auger electron spectroscopy, X-ray photoelectron spectroscopy, secondary ion mass spectrometry, and scanning probe microscopy. Measurements of nanoparticle surface properties such as surface potential that are often made in a solution are not discussed.
Diagnostics of Tree Diseases Caused by Phytophthora austrocedri Species.
Mulholland, Vincent; Elliot, Matthew; Green, Sarah
2015-01-01
We present methods for the detection and quantification of four Phytophthora species that are pathogenic on trees: Phytophthora ramorum, Phytophthora kernoviae, Phytophthora lateralis, and Phytophthora austrocedri. Nucleic acid extraction methods are presented for phloem tissue from trees, soil, and pure cultures on agar plates. Real-time PCR methods are presented, including primer and probe sets for each species, together with general advice on real-time PCR setup and data analysis. A method for sequence-based identification, useful for pure cultures, is also included.
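Where quantification is needed, real-time PCR data are commonly reduced with a standard curve relating Ct to log10 of the starting quantity. The sketch below shows that generic calculation with made-up Ct values; it is not the specific assay or primer/probe chemistry described in this chapter.

```python
# Generic real-time PCR standard-curve quantification (illustrative only;
# Ct values and copy numbers below are hypothetical, not from the chapter).
import numpy as np

# Ten-fold dilution series of a quantified standard (copies per reaction)
std_copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
std_ct = np.array([17.1, 20.5, 23.9, 27.2, 30.8])       # measured Ct values

# Linear fit of Ct against log10(copies): Ct = slope * log10(N) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0                  # amplification efficiency

def copies_from_ct(ct):
    """Interpolate an unknown sample's copy number from its Ct."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency * 100:.0f}%")
print(f"unknown sample at Ct 25.4 -> {copies_from_ct(25.4):.2e} copies/reaction")
```

A slope near -3.3 corresponds to roughly 100% amplification efficiency, a common acceptance check before quantifying unknowns.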
Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.
2003-01-01
Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688
Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M
2003-10-01
Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
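For the simplest variant of such a scheme, n identical irreversible steps each with rate constant k, the all-or-none time course has a closed form: the regularized lower incomplete gamma function P(n, kt), which remains defined for non-integer n so that both n and the kinetic step size m = L/n can be floated in a fit. The sketch below fits synthetic data with this form under those assumptions; it is an illustration of the idea, not the full analysis (with initiation, dissociation, or Laplace-domain numerics) developed in the papers above.

```python
# Sketch of the simplest "n-step" sequential time course: n identical,
# irreversible steps of rate k give an all-or-none signal equal to the
# regularized lower incomplete gamma function P(n, k*t), which is defined
# for non-integer n, so n (and hence m = L/n) can be floated in the fit.
# Parameter values here are made up for illustration.
import numpy as np
from scipy.special import gammainc          # regularized lower incomplete gamma P(a, x)
from scipy.optimize import curve_fit

def f_ss(t, amplitude, n, k):
    """Fraction of fully unwound duplex vs time for n identical steps of rate k."""
    return amplitude * gammainc(n, k * t)

# Synthetic single-turnover data: L = 60 bp duplex, true step size m = 4 bp
rng = np.random.default_rng(2)
L_bp, m_true, k_true, A_true = 60, 4.0, 12.0, 0.9
n_true = L_bp / m_true
t = np.linspace(0.01, 5.0, 80)
data = f_ss(t, A_true, n_true, k_true) + rng.normal(0, 0.01, t.size)

# Nonlinear least squares with n treated as a continuous parameter
popt, pcov = curve_fit(f_ss, t, data, p0=(1.0, 10.0, 5.0),
                       bounds=([0.1, 0.5, 0.1], [2.0, 60.0, 100.0]))
A_fit, n_fit, k_fit = popt
print(f"fit: A = {A_fit:.2f}, n = {n_fit:.1f} steps, k = {k_fit:.1f} 1/s, "
      f"kinetic step size m = {L_bp / n_fit:.1f} bp")
```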
Applying Nyquist's method for stability determination to solar wind observations
NASA Astrophysics Data System (ADS)
Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.
2017-10-01
The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
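The core of a Nyquist-style count is the argument principle: the number of roots of the dispersion function with positive growth rate equals the winding number of the function about the origin as the complex frequency traverses a contour enclosing the upper half-plane. The sketch below applies that counting step to a toy polynomial dispersion function; evaluating the actual hot-plasma Vlasov dispersion relation, as done in the paper, is the substantially harder part and is not reproduced here.

```python
# Generic Nyquist-style stability count via the argument principle: the number
# of zeros of an analytic dispersion function D(omega) with Im(omega) > 0
# (growing modes for an exp(-i*omega*t) convention) equals the winding number
# of D around the origin as omega traverses a contour enclosing the upper
# half-plane. D below is a toy polynomial, not the Vlasov dispersion relation.
import numpy as np

def D(omega):
    # Toy dispersion function with one unstable root at 1 + 0.3j
    return (omega - (1.0 + 0.3j)) * (omega - (-2.0 - 0.1j)) * (omega - (0.5 - 0.4j))

def unstable_mode_count(D, radius=50.0, n_pts=20000):
    """Winding number of D(omega) about 0 along the real axis plus an
    upper-half-plane semicircle of the given radius."""
    real_axis = np.linspace(-radius, radius, n_pts)
    theta = np.linspace(0.0, np.pi, n_pts)
    contour = np.concatenate([real_axis, radius * np.exp(1j * theta)])
    phase = np.unwrap(np.angle(D(contour)))
    return int(np.rint((phase[-1] - phase[0]) / (2 * np.pi)))

print("number of unstable roots:", unstable_mode_count(D))   # expect 1
```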
Evaluation of generalized degrees of freedom for sparse estimation by replica method
NASA Astrophysics Data System (ADS)
Sakata, A.
2016-12-01
We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.
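The covariance definition of the GDF can also be checked by brute force for a specific estimator. The sketch below does this for the lasso with random Gaussian predictors: it estimates GDF = (1/sigma^2) * sum_i Cov(yhat_i, y_i) over repeated noise realizations and compares it with the average number of non-zero coefficients, consistent with the interpretation of the GDF as an effective count of non-zero components. This is a numerical sanity check, not the replica calculation of the paper, and the problem sizes are arbitrary.

```python
# Monte Carlo check of the generalized degrees of freedom (GDF) for the lasso:
# GDF = (1/sigma^2) * sum_i Cov(yhat_i, y_i), estimated over repeated noise
# realizations, versus the average number of non-zero coefficients.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p, sigma, alpha = 80, 200, 0.5, 0.1
X = rng.normal(size=(n, p))                       # random Gaussian predictors
beta = np.zeros(p)
beta[:10] = 1.0                                   # sparse true signal
mu = X @ beta                                     # noiseless mean response

n_rep = 400
y_all = np.empty((n_rep, n))
yhat_all = np.empty((n_rep, n))
nonzero = np.empty(n_rep)
for r in range(n_rep):
    y = mu + sigma * rng.normal(size=n)
    model = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)
    y_all[r], yhat_all[r] = y, X @ model.coef_
    nonzero[r] = np.count_nonzero(model.coef_)

# Per-observation covariance between fit and data across noise realizations
cov = np.mean((yhat_all - yhat_all.mean(0)) * (y_all - y_all.mean(0)), axis=0)
gdf = cov.sum() / sigma**2
print(f"Monte Carlo GDF  : {gdf:6.1f}")
print(f"mean non-zeros   : {nonzero.mean():6.1f}")
```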
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
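The accuracy point about displacement-based models can be seen even in a one-dimensional textbook case. The sketch below solves a uniform bar under constant distributed axial load with the stiffness method and linear elements: nodal displacements are excellent, but the recovered stress is piecewise constant, so its worst-case error only shrinks as the mesh is refined. It is a generic illustration, not a reproduction of the NASTRAN/ASKA/MHOST/GIFT comparisons in the paper.

```python
# Minimal displacement-based (stiffness-method) finite element model of a
# uniform bar, fixed at x = 0 and loaded by a constant distributed axial load.
# With linear elements the recovered stress is piecewise constant, so its
# maximum error only shrinks as the mesh is refined.
import numpy as np

E, A, L, q0 = 210e9, 1e-4, 2.0, 1e4       # modulus [Pa], area [m^2], length [m], load [N/m]

def max_stress_error(n_el):
    h = L / n_el
    n_nd = n_el + 1
    K = np.zeros((n_nd, n_nd))
    f = np.zeros(n_nd)
    ke = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += ke             # assemble element stiffness
        f[e:e + 2] += q0 * h / 2.0            # consistent load vector
    u = np.zeros(n_nd)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])  # impose u(0) = 0
    # Element stresses (constant per element) vs exact stress q0*(L - x)/A
    err = 0.0
    for e in range(n_el):
        sigma_fe = E * (u[e + 1] - u[e]) / h
        for x in (e * h, (e + 1) * h):         # sample at element ends
            err = max(err, abs(sigma_fe - q0 * (L - x) / A))
    return err

for n_el in (2, 4, 8, 16, 32):
    print(f"{n_el:3d} elements: max stress error = {max_stress_error(n_el):10.3e} Pa")
```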
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features over the limited range of wavelengths typically used in sensing applications. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each run. The local one-dimensional (LOD)-FDTD method has similar numerical properties, with update equations calculated in the same manner as for the ADI method. Generally, a smaller number of arithmetic operations, and hence a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative errors of less than 40% for analytical frequencies above 42.85 GHz, and comparable accuracy was obtained with both methods.
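For context on what the implicit variants buy, the sketch below is a conventional explicit 1D Yee FDTD loop in normalized units, whose time step is bounded by the Courant (CFL) condition; ADI- and LOD-FDTD remove that bound at the cost of solving implicit sub-steps. This is not an implementation of the ADI or LOD schemes compared in the paper, and the grid and source parameters are arbitrary.

```python
# Conventional explicit 1D Yee FDTD update (vacuum), included only as context:
# the time step is bounded by the CFL condition, which is the restriction the
# unconditionally stable ADI- and LOD-FDTD variants remove.
import numpy as np

c0 = 299792458.0                  # speed of light [m/s]
nx, dx = 400, 1e-4                # grid size and spacing [m]
courant = 0.99                    # Courant number S = c0*dt/dx (must be <= 1 in 1D)
dt = courant * dx / c0
n_steps = 600

ez = np.zeros(nx)                 # electric field at integer grid points
hy = np.zeros(nx - 1)             # magnetic field at half grid points

for n in range(n_steps):
    hy += courant * (ez[1:] - ez[:-1])          # normalized H update
    ez[1:-1] += courant * (hy[1:] - hy[:-1])    # normalized E update
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

print(f"dt = {dt:.3e} s (CFL-limited); peak |Ez| after {n_steps} steps: {np.abs(ez).max():.3f}")
```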