NASA Astrophysics Data System (ADS)
Kim, Tae-Jeong; Kim, Ki-Young; Shin, Dong-Hoon; Kwon, Hyun-Han
2015-04-01
It has been widely acknowledged that the appropriate simulation of natural streamflow at ungauged sites is one of the fundamental challenges facing the hydrology community. In particular, the key to reliable runoff simulation in ungauged basins is a reliable rainfall-runoff model and parameter estimation. In general, parameter estimation in rainfall-runoff models is a complex issue due to insufficient hydrologic data. This study aims to regionalize the parameters of a continuous rainfall-runoff model in conjunction with Bayesian statistical techniques to facilitate uncertainty analysis. First, this study uses a Bayesian Markov chain Monte Carlo scheme for the Sacramento rainfall-runoff model, which has been widely used around the world. The Sacramento model is calibrated against daily runoff observations; thirteen parameters of the model are optimized, and posterior distributions for each parameter are derived. Second, we applied a Bayesian generalized linear regression model to the set of parameters and basin characteristics (e.g., area and slope) to obtain a functional relationship between pairs of variables. The proposed model was validated in two gauged watersheds according to efficiency criteria such as the Nash-Sutcliffe efficiency, coefficient of efficiency, index of agreement, and coefficient of correlation. Future study will focus on uncertainty analysis to fully incorporate propagation of uncertainty into the regionalization framework. KEYWORDS: Ungauged, Parameter, Sacramento, Generalized linear model, Regionalization Acknowledgement: This research was supported by a grant (13SCIPA01) from the Smart Civil Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government and the Korea Agency for Infrastructure Technology Advancement (KAIA).
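The calibration step described above can be sketched with a toy example: a one-parameter linear-reservoir runoff model calibrated by random-walk Metropolis sampling. This is only an illustration of the MCMC pattern, not the Sacramento model (which has thirteen parameters); the model structure, data, and tuning constants below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_runoff(k, rain):
    """Toy linear-reservoir model: a fraction k of storage drains each step."""
    S, Q = 0.0, []
    for r in rain:
        S += r
        q = k * S
        S -= q
        Q.append(q)
    return np.array(Q)

rain = rng.exponential(2.0, size=200)
k_true = 0.3
q_obs = simulate_runoff(k_true, rain) + rng.normal(0, 0.05, size=200)

def log_post(k):
    if not (0.0 < k < 1.0):              # uniform prior on (0, 1)
        return -np.inf
    resid = q_obs - simulate_runoff(k, rain)
    return -0.5 * np.sum((resid / 0.05) ** 2)

# Random-walk Metropolis sampling of the single parameter k
samples, k = [], 0.5
lp = log_post(k)
for _ in range(5000):
    k_new = k + rng.normal(0, 0.02)
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:
        k, lp = k_new, lp_new
    samples.append(k)
post = np.array(samples[1000:])
print(post.mean())
```

The retained samples after burn-in approximate the posterior distribution of k, from which summaries (mean, credible intervals) can be read off.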
General linear chirplet transform
NASA Astrophysics Data System (ADS)
Yu, Gang; Zhou, Yiqi
2016-03-01
Time-frequency (TF) analysis (TFA) is an effective tool to characterize the time-varying features of a signal, and it has drawn much attention over a fairly long period. With the development of TFA, many advanced methods have been proposed that can provide more precise TF results. However, some restrictions are inevitably introduced. In this paper, we introduce a novel TFA method, termed the general linear chirplet transform (GLCT), which can overcome some limitations existing in current TFA methods. In numerical and experimental validations, comparison with current TFA methods demonstrates several advantages of GLCT: it characterizes well multi-component signals with distinct non-linear features, is independent of the mathematical model and the initial TFA method, allows for reconstruction of the component of interest, and is insensitive to noise.
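The core idea behind chirplet-type transforms, matching a chirp rate so the signal's energy concentrates in frequency, can be illustrated without the full GLCT machinery. This sketch (all signal parameters invented) demodulates a linear chirp with a bank of candidate rates and picks the rate giving the most concentrated Fourier spectrum:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
c = 80.0                                               # true chirp rate (Hz/s)
x = np.cos(2 * np.pi * (100 * t + 0.5 * c * t**2))     # chirp sweeping 100 -> 180 Hz

def spectrum_peak(sig):
    """Peak FFT magnitude: a crude measure of spectral energy concentration."""
    return np.abs(np.fft.fft(sig)).max()

# Demodulate with each candidate chirp rate before the Fourier transform;
# the matched rate turns the chirp into a pure tone and maximizes the peak.
rates = np.linspace(0, 160, 81)
peaks = [spectrum_peak(x * np.exp(-1j * np.pi * r * t**2)) for r in rates]
best = rates[int(np.argmax(peaks))]
print(best)
```

A full transform would do this windowed in time, yielding a TF representation whose ridge follows the instantaneous frequency.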
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2008-01-01
We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
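The comparison between linear covariance results and Monte Carlo ensemble statistics can be sketched for a single propagation step of a generic linear-Gaussian system (the matrices below are invented, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# One step of a linear-Gaussian system: x1 = F x0 + w,  w ~ N(0, Q)
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])
P0 = np.diag([0.5, 0.2])

# Linear covariance analysis: propagate the covariance analytically
P1 = F @ P0 @ F.T + Q

# Monte Carlo check: propagate an ensemble and take its sample covariance
x0 = rng.multivariate_normal([0, 0], P0, size=200_000)
w = rng.multivariate_normal([0, 0], Q, size=200_000)
x1 = x0 @ F.T + w
P1_mc = np.cov(x1.T)
print(np.max(np.abs(P1 - P1_mc)))
```

For linear dynamics the two agree to within sampling error, which is the consistency check the abstract describes via confidence intervals.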
Quantization of general linear electrodynamics
Rivera, Sergio; Schuller, Frederic P.
2011-03-15
General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.
NASA Astrophysics Data System (ADS)
Ripamonti, Francesco; Orsini, Lorenzo; Resta, Ferruccio
2015-04-01
Non-linear behavior is present in many operating conditions of mechanical systems. In these cases, a common engineering practice is to linearize the equation of motion around a particular operating point and to design a linear controller. The main disadvantage is that the stability properties and validity of the controller are only local. In order to improve controller performance, non-linear control techniques represent a very attractive solution for many smart structures. The aim of this paper is to compare non-linear model-based and non-model-based control techniques. In particular, the model-based sliding mode control (SMC) technique is considered because of its easy implementation and the strong robustness of the controller even under heavy model uncertainties. Among the non-model-based control techniques, fuzzy control (FC), which allows the controller to be designed according to if-then rules, has been considered. It defines the controller without a reference model of the system, offering advantages such as intrinsic robustness. These techniques have been tested on a nonlinear pendulum system.
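A minimal illustration of the model-based SMC idea on a pendulum (not the authors' setup; the dynamics, gains, and boundary layer below are invented): a sliding surface s = θ̇ + λθ is driven to zero by a saturated switching term, after which the state decays to the origin along the surface.

```python
import numpy as np

# Inverted pendulum: theta_dd = a*sin(theta) + u, driven to theta = 0.
a, dt, T = 9.81, 1e-3, 5.0
lam, k, phi = 2.0, 5.0, 0.05       # surface slope, switching gain, boundary layer

def sat(x):                        # saturation replaces sign() to limit chattering
    return np.clip(x, -1.0, 1.0)

theta, omega = 0.5, 0.0
for _ in range(int(T / dt)):
    s = omega + lam * theta        # sliding surface s = theta_dot + lam*theta
    u = -a * np.sin(theta) - lam * omega - k * sat(s / phi)
    omega += (a * np.sin(theta) + u) * dt
    theta += omega * dt
print(abs(theta))
```

With the model term canceling the gravity torque, the closed loop gives ds/dt = -k sat(s/phi), so s (and then theta) converges regardless of the initial condition; robustness to model error comes from the switching term.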
Linear Models Based on Noisy Data and the Frisch Scheme*
Ning, Lipeng; Georgiou, Tryphon T.; Tannenbaum, Allen; Boyd, Stephen P.
2016-01-01
We address the problem of identifying linear relations among variables based on noisy measurements. This is a central question in the search for structure in large data sets. Often a key assumption is that measurement errors in each variable are independent. This basic formulation has its roots in the work of Charles Spearman in 1904 and of Ragnar Frisch in the 1930s. Various topics such as errors-in-variables, factor analysis, and instrumental variables all refer to alternative viewpoints on this problem and on ways to account for the anticipated way that noise enters the data. In the present paper we begin by describing certain fundamental contributions by the founders of the field and provide alternative modern proofs of certain key results. We then go on to consider a modern viewpoint and novel numerical techniques for the problem. The central theme is expressed by the Frisch–Kalman dictum, which calls for identifying a noise contribution that allows a maximal number of simultaneous linear relations among the noise-free variables—a rank minimization problem. In the years since Frisch’s original formulation, there have been several insights, including trace minimization as a convenient heuristic to replace rank minimization. We discuss convex relaxations and theoretical bounds on the rank that, when met, provide guarantees for global optimality. A complementary point of view to this minimum-rank dictum is presented in which models are sought leading to a uniformly optimal quadratic estimation error for the error-free variables. Points of contact between these formalisms are discussed, and alternative regularization schemes are presented. PMID:27168672
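The rank-deficiency idea at the heart of the Frisch scheme can be seen in a toy example: noise-free variables obeying one linear relation have a singular covariance, and in the special case of equal noise variances the relation is recovered from the smallest eigenvector (a PCA/total-least-squares special case, not the general Frisch solution). All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Noise-free variables satisfying one exact linear relation: x3 = 2*x1 - x2
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([x1, x2, 2 * x1 - x2])

# The noise-free covariance is rank deficient (rank 2 for 3 variables):
rank_clean = np.linalg.matrix_rank(np.cov(X.T))

# Adding independent noise restores full rank; the Frisch problem asks which
# diagonal noise covariance, once subtracted, maximizes the rank deficiency.
Y = X + rng.normal(0, 0.1, size=X.shape)
S = np.cov(Y.T)
# With equal noise variances this reduces to PCA/TLS: the eigenvector of the
# smallest eigenvalue encodes the linear relation among the noise-free variables.
vals, vecs = np.linalg.eigh(S)
v = vecs[:, 0] / vecs[:, 0][2]      # normalize so the x3 coefficient is 1
print(rank_clean, v)
```

The recovered vector is proportional to (2, -1, -1), i.e. the relation 2*x1 - x2 - x3 = 0; the general problem allows unequal noise variances, which is what makes it a rank (or trace) minimization.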
Generalized Linear Models in Family Studies
ERIC Educational Resources Information Center
Wu, Zheng
2005-01-01
Generalized linear models (GLMs), as defined by J. A. Nelder and R. W. M. Wedderburn (1972), unify a class of regression models for categorical, discrete, and continuous response variables. As an extension of classical linear models, GLMs provide a common body of theory and methodology for some seemingly unrelated models and procedures, such as…
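The common estimation machinery that GLMs share is iteratively reweighted least squares (IRLS); a minimal fit of a Poisson GLM with log link (one slope, synthetic data) shows the pattern:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

# Iteratively reweighted least squares for a Poisson GLM with log link:
# at each step solve weighted normal equations with a "working response" z.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)              # mean under the log link
    z = X @ beta + (y - mu) / mu       # working response
    W = mu                             # IRLS weights (Var(y) = mu for Poisson)
    XtW = X.T * W
    beta = np.linalg.solve(XtW @ X, XtW @ z)
print(beta)
```

Swapping the link and variance function gives logistic, gamma, or other members of the family, which is exactly the unification the abstract refers to.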
Multiconlitron: a general piecewise linear classifier.
Yujian, Li; Bo, Liu; Xinwu, Yang; Yaozong, Fu; Houjun, Li
2011-02-01
Based on the "convexly separable" concept, we present a solid geometric theory and a new general framework to design piecewise linear classifiers for two arbitrarily complicated nonintersecting classes by using a "multiconlitron," which is a union of multiple conlitrons that comprise a set of hyperplanes or linear functions surrounding a convex region for separating two convexly separable datasets. We propose a new iterative algorithm called the cross distance minimization algorithm (CDMA) to compute hard margin non-kernel support vector machines (SVMs) via the nearest point pair between two convex polytopes. Using CDMA, we derive two new algorithms, i.e., the support conlitron algorithm (SCA) and the support multiconlitron algorithm (SMA) to construct support conlitrons and support multiconlitrons, respectively, which are unique and can separate two classes by a maximum margin as in an SVM. Comparative experiments show that SMA can outperform linear SVM on many of the selected databases and provide similar results to radial basis function SVM on some of them, while SCA performs better than linear SVM on three out of four applicable databases. Other experiments show that SMA and SCA may be further improved to draw more potential in the new research direction of piecewise linear learning. PMID:21138800
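The geometric core of the CDMA step, finding the nearest point pair between two convex polytopes, can be sketched with a simple Frank-Wolfe iteration over convex weights. This is a simplified stand-in for the paper's algorithm, not a reimplementation of it; the data and iteration counts are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(30, 2))               # class 1, centered at the origin
Y = rng.normal(size=(30, 2)) + 8.0         # class 2, well separated

# Minimize ||p - q||^2 over p in conv(X), q in conv(Y), parameterized by
# convex weights a and b (Frank-Wolfe: move toward the best vertex each step).
a = np.full(len(X), 1.0 / len(X))
b = np.full(len(Y), 1.0 / len(Y))
for k in range(2000):
    d = a @ X - b @ Y                      # current difference vector p - q
    ga, gb = 2 * X @ d, -2 * Y @ d         # gradients over the two simplices
    ea, eb = np.argmin(ga), np.argmin(gb)  # best vertices (linear minimizers)
    g = 2.0 / (k + 2)                      # standard Frank-Wolfe step size
    a *= 1 - g; a[ea] += g
    b *= 1 - g; b[eb] += g

p, q = a @ X, b @ Y                        # (approximate) nearest points
w, c = p - q, (p - q) @ (p + q) / 2        # maximum-margin separating plane
print(np.all(X @ w > c), np.all(Y @ w < c))
```

The hyperplane normal to the nearest-point segment, through its midpoint, is the hard-margin SVM separator for linearly separable data; a conlitron assembles several such hyperplanes around a convex region.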
Reduced-Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1999-01-01
This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.
Reduced Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1999-01-01
This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.
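For a purely linear system the first-order Volterra kernel is just the impulse response, and the reduced-order model is a convolution. This toy discrete example (system invented) shows the identify-then-predict pattern the abstract describes, minus the nonlinear kernels and CFD:

```python
import numpy as np

def full_system(u):
    """Stand-in 'full-order' linear system: y[n] = 0.8*y[n-1] + u[n]."""
    y = np.zeros(len(u))
    for n in range(len(u)):
        y[n] = 0.8 * y[n - 1] + u[n] if n else u[n]
    return y

# Identify the first-order Volterra kernel (impulse response) with a unit pulse
N = 200
impulse = np.zeros(N); impulse[0] = 1.0
h = full_system(impulse)

# Reduced-order model: predict the response to any input by convolution
rng = np.random.default_rng(4)
u = rng.normal(size=N)
y_rom = np.convolve(u, h)[:N]
y_full = full_system(u)
print(np.max(np.abs(y_rom - y_full)))
```

For an LTI system the convolution is exact; nonlinear behavior requires the higher-order Volterra kernels, identified analogously from multi-pulse responses.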
A General Framework for Multiphysics Modeling Based on Numerical Averaging
NASA Astrophysics Data System (ADS)
Lunati, I.; Tomin, P.
2014-12-01
In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting from a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm, and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of
Identification of general linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.; Longman, R. W.; Juang, J. N.
1983-01-01
Previous work in identification theory has been concerned with the general first-order time derivative form. Linear mechanical systems, a large and important class, naturally have a second-order form. This paper utilizes this additional structural information for the purpose of identification. A realization is obtained from input-output data, and then knowledge of the system input, output, and inertia matrices is used to determine a set of linear equations whereby we identify the remaining unknown system matrices. Necessary and sufficient conditions on the number, type, and placement of sensors and actuators are given which guarantee identifiability, and less stringent conditions are given which guarantee generic identifiability. Both a priori identifiability and a posteriori identifiability are considered, i.e., identifiability being ensured prior to obtaining data, and identifiability being assured with a given data set.
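The "linear equations for the remaining matrices" step can be sketched for a single-degree-of-freedom system: with the inertia known and input-output data in hand, damping and stiffness follow from least squares on m*qdd + c*qd + k*q = u. The system and data below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
m, c, k = 2.0, 0.3, 5.0                # mass known; damping/stiffness unknown
dt, N = 1e-3, 5000
u = rng.normal(size=N)

q = qd = 0.0
Q, Qd, Qdd = [], [], []
for n in range(N):
    qdd = (u[n] - c * qd - k * q) / m  # true dynamics: m*qdd + c*qd + k*q = u
    Q.append(q); Qd.append(qd); Qdd.append(qdd)
    qd += qdd * dt
    q += qd * dt

# With the inertia matrix known, the remaining matrices satisfy linear
# equations in the data: u - m*qdd = c*qd + k*q, solved by least squares.
A = np.column_stack([Qd, Q])
b = u - m * np.array(Qdd)
c_hat, k_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(c_hat, k_hat)
```

In the multi-degree-of-freedom case the same regression runs columnwise over the full C and K matrices, and the sensor/actuator placement conditions in the paper govern when this system of equations has a unique solution.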
On the order of general linear methods.
Constantinescu, E. M.; Mathematics and Computer Science
2009-09-01
General linear (GL) methods are numerical algorithms used to solve ODEs. The standard order conditions analysis involves the GL matrix itself and a starting procedure; however, a finishing method (F) is required to extract the actual ODE solution. The standard order analysis and stability are sufficient for the convergence of any GL method. Nonetheless, using a simple GL scheme, we show that the order definition may be too restrictive. Specifically, the order of GL methods with low-order intermediate components may be underestimated. In this note we explore the order conditions for GL schemes and propose a new definition for characterizing the order of GL methods, which is focused on the final result (the outcome of F) and can provide more effective algebraic order conditions.
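Observed order, the quantity at stake in the note, can be estimated numerically by step halving. As a sketch (not one of the paper's GL schemes), Heun's method, a simple two-stage scheme that fits the GL framework, applied to y' = y gives an error ratio near 4 when the step is halved, i.e. order 2:

```python
import numpy as np

def heun(f, y0, t_end, n):
    """Heun's (improved Euler) method, a second-order scheme."""
    h, y = t_end / n, y0
    for i in range(n):
        t = i * h
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y = y + 0.5 * h * (k1 + k2)
    return y

f = lambda t, y: y                      # y' = y, exact solution e^t
exact = np.e
e1 = abs(heun(f, 1.0, 1.0, 50) - exact)
e2 = abs(heun(f, 1.0, 1.0, 100) - exact)
order = np.log2(e1 / e2)                # observed convergence order
print(order)
```

The paper's point is that for some GL schemes this observed order at the final output exceeds what the classical order definition certifies.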
Development of a CFD-compatible transition model based on linear stability theory
NASA Astrophysics Data System (ADS)
Coder, James G.
A new laminar-turbulent transition model for low-turbulence external aerodynamic applications is presented that incorporates linear stability theory in a manner compatible with modern computational fluid dynamics solvers. The model uses a new transport equation that describes the growth of the maximum Tollmien-Schlichting instability amplitude in the presence of a boundary layer. To avoid the need for integration paths and non-local operations, a locally defined non-dimensional pressure-gradient parameter is used that serves as an estimator of the integral boundary-layer properties. The model has been implemented into the OVERFLOW 2.2f solver and interacts with the Spalart-Allmaras and Menter SST eddy-viscosity turbulence models. Comparisons of predictions using the new transition model with high-quality wind-tunnel measurements of airfoil section characteristics validate the predictive qualities of the model. Predictions for three-dimensional aircraft and wing geometries show the correct qualitative behavior even though limited experimental data are available. These cases also demonstrate that the model is well-behaved about general aeronautical configurations. These cases confirm that the new transition model is an improvement over the current state of the art in computational fluid dynamics transition modeling by providing more accurate solutions at approximately half the added computational expense.
On generalized Hamming weights for Galois ring linear codes
Ashikhmin, A.
1997-08-01
The definition of generalized Hamming weights (GHW) for linear codes over Galois rings is discussed. The properties of GHW for Galois ring linear codes are stated. Upper and existence bounds for GHW of Z_4-linear codes and a lower bound for GHW of the Kerdock code over Z_4 are derived. GHW of some Z_4-linear codes are determined.
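For intuition, the first generalized Hamming weight d_1 of a Z_4-linear code equals its minimum Hamming weight, which brute-force enumeration can compute for a tiny code (the generator matrix below is invented for illustration):

```python
import numpy as np
from itertools import product

# Small Z_4-linear code generated (mod 4) by the rows of G
G = np.array([[1, 1, 1, 1],
              [0, 2, 0, 2]])

# Enumerate all Z_4-linear combinations of the rows
codewords = {tuple((np.array(m) @ G) % 4) for m in product(range(4), repeat=2)}
# First generalized Hamming weight d_1 = minimum Hamming weight of the code
d1 = min(sum(x != 0 for x in c) for c in codewords if any(c))
print(d1)
```

Higher GHWs d_r generalize this to the minimum support size of rank-r subcodes, which is where the subtleties over rings (versus fields) arise.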
Generalized Multicarrier CDMA: Unification and Linear Equalization
NASA Astrophysics Data System (ADS)
Giannakis, Georgios B.; Anghel, Paul A.; Wang, Zhengdao
2005-12-01
Relying on block-symbol spreading and judicious design of user codes, this paper builds on the generalized multicarrier (GMC) quasisynchronous CDMA system that is capable of multiuser interference (MUI) elimination and intersymbol interference (ISI) suppression with guaranteed symbol recovery, regardless of the wireless frequency-selective channels. GMC-CDMA affords an all-digital unifying framework, which encompasses single-carrier and several multicarrier (MC) CDMA systems. Besides the unifying framework, it is shown that GMC-CDMA offers flexibility both in full load (maximum number of users allowed by the available bandwidth) and in reduced load settings. A novel blind channel estimation algorithm is also derived. Analytical evaluation and simulations illustrate the superior error performance and flexibility of uncoded GMC-CDMA over competing MC-CDMA alternatives especially in the presence of uplink multipath channels.
ERIC Educational Resources Information Center
Cheong, Yuk Fai; Kamata, Akihito
2013-01-01
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel. PMID:27250181
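The linear inversion the authors critique can be sketched in a few lines: with a fixed multi-frequency scattering kernel, abundance per size class follows from least squares. The paper's point is that errors in this kernel (animal shape, tilt) propagate into the estimates, motivating the nonlinear, scattering-model-based inversion. All numbers below are invented.

```python
import numpy as np

# Inversion kernel: backscattering cross sections sigma(frequency, size class).
# Rows = acoustic frequencies, columns = animal size classes (toy values).
A = np.array([[0.2, 1.0, 2.5],
              [0.6, 1.8, 3.0],
              [1.1, 2.2, 3.2],
              [1.5, 2.4, 3.3]])
N_true = np.array([40.0, 10.0, 2.0])   # abundance per size class

y = A @ N_true                          # multi-frequency backscatter data
# Linear inversion: least-squares solve assuming a fixed (known) kernel
N_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(N_hat)
```

Near-singular kernels (nearly collinear columns) amplify kernel uncertainty, which is why the paper examines singular values before trusting the linear solution.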
Linear stability of general magnetically insulated electron flow
NASA Astrophysics Data System (ADS)
Swegle, J. A.; Mendel, C. W., Jr.; Seidel, D. B.; Quintenz, J. P.
1984-03-01
A linear stability theory for magnetically insulated systems was formulated by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. It is found that, in the case of electron trajectories which are nearly laminar, with only small transverse motion, several suggestive simplifications occur in the eigenvalue equations.
Linear stability of general magnetically insulated electron flow
Swegle, J.A.; Mendel, C.W. Jr.; Seidel, D.B.; Quintenz, J.P.
1984-01-01
We have formulated a linear stability theory for magnetically insulated systems by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. In the physically interesting case of electron trajectories which are nearly laminar, with only small transverse motion, we have found that several suggestive simplifications occur in the eigenvalue equations.
A novel crowd flow model based on linear fractional stable motion
NASA Astrophysics Data System (ADS)
Wei, Juan; Zhang, Hong; Wu, Zhenya; He, Junlin; Guo, Yangyong
2016-03-01
For evacuation dynamics in indoor spaces, a novel crowd flow model is put forward based on linear fractional stable motion. Based on position attraction and queuing time, a calculation formula for movement probability is defined, and the queuing time is modeled according to linear fractional stable motion. Finally, an experiment and simulation platform is used for performance analysis, studying in depth the relations among system evacuation time, crowd density, and exit flow rate. It is concluded that the evacuation time and the exit flow rate are positively correlated with crowd density, and that once the exit width reaches a threshold value, further increasing it does not effectively decrease the evacuation time.
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. This work makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
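A log-linear (softmax) behavior selector of the kind described can be sketched as follows; each candidate behavior gets a score w·f(behavior), and selection probabilities are proportional to exp(score). The feature values and weights below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def select_behavior(features, weights):
    """Log-linear model: P(behavior) proportional to exp(w . f(behavior))."""
    scores = features @ weights
    p = np.exp(scores - scores.max())     # subtract max for numerical stability
    p /= p.sum()
    return p, rng.choice(len(p), p=p)

# Toy feature vectors for three candidate behaviors (e.g. prey, swarm, follow)
features = np.array([[1.0, 0.2],
                     [0.4, 0.9],
                     [0.1, 0.1]])
weights = np.array([2.0, 1.0])
p, choice = select_behavior(features, weights)
print(p, choice)
```

Stochastic selection keeps exploration alive while still favoring high-scoring behaviors; tuning the weights trades off exploration against exploitation.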
Linear equations in general purpose codes for stiff ODEs
Shampine, L. F.
1980-02-01
It is noted that it is possible to significantly improve the handling of linear problems in a general-purpose code with very little trouble to the user or change to the code. In such situations, analytical evaluation of the Jacobian is much cheaper than numerical differencing. A slight change in the point at which the Jacobian is evaluated results in a more accurate Jacobian for linear problems. (RWR)
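For a linear ODE y' = Ay the Jacobian of the right-hand side is exactly A, so a stiff solver can form its iteration matrix analytically instead of by differencing. One backward-Euler step illustrates this (matrix and step size invented):

```python
import numpy as np

# For a linear ODE y' = A y the Jacobian of f(y) = A y is exactly A:
# analytic evaluation is cheap and free of differencing error.
A = np.array([[-2.0, 1.0],
              [0.0, -0.5]])
y0 = np.array([1.0, 1.0])
h = 0.1

# One backward-Euler step: solve (I - h*A) y1 = y0 using the exact Jacobian
I = np.eye(2)
y1 = np.linalg.solve(I - h * A, y0)
print(y1)
```

A general-purpose stiff code would otherwise approximate A by finite differences of f, which for linear problems is pure overhead plus rounding error.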
Generalized coarse-grained model based on point multipole and Gay-Berne potentials
NASA Astrophysics Data System (ADS)
Golubkov, Pavel A.; Ren, Pengyu
2006-08-01
This paper presents a general coarse-grained molecular mechanics model based on electric point multipole expansion and Gay-Berne [J. Chem. Phys. 74, 3316 (1981)] potential. Coarse graining of van der Waals potential is achieved by treating molecules as soft uniaxial ellipsoids interacting via a generalized anisotropic Gay-Berne function. The charge distribution is represented by point multipole expansion, including point charge, dipole, and quadrupole moments placed at the center of mass. The Gay-Berne and point multipole potentials are combined in the local reference frame defined by the inertial frame of the all-atom counterpart. The coarse-grained model has been applied to rigid-body molecular dynamics simulations of molecular liquids including benzene and methanol. The computational efficiency is improved by several orders of magnitude, while the results are in reasonable agreement with all-atom models and experimental data. We also discuss the implications of using point multipole for polar molecules capable of hydrogen bonding and the applicability of this model to a broad range of molecular systems including highly charged biopolymers.
Generalized coarse-grained model based on point multipole and Gay-Berne potentials.
Golubkov, Pavel A; Ren, Pengyu
2006-08-14
This paper presents a general coarse-grained molecular mechanics model based on electric point multipole expansion and Gay-Berne [J. Chem. Phys. 74, 3316 (1981)] potential. Coarse graining of van der Waals potential is achieved by treating molecules as soft uniaxial ellipsoids interacting via a generalized anisotropic Gay-Berne function. The charge distribution is represented by point multipole expansion, including point charge, dipole, and quadrupole moments placed at the center of mass. The Gay-Berne and point multipole potentials are combined in the local reference frame defined by the inertial frame of the all-atom counterpart. The coarse-grained model has been applied to rigid-body molecular dynamics simulations of molecular liquids including benzene and methanol. The computational efficiency is improved by several orders of magnitude, while the results are in reasonable agreement with all-atom models and experimental data. We also discuss the implications of using point multipole for polar molecules capable of hydrogen bonding and the applicability of this model to a broad range of molecular systems including highly charged biopolymers. PMID:16942269
NASA Astrophysics Data System (ADS)
Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.
2016-07-01
To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating characteristic curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
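The AUROC validation statistic used above reduces to the probability that a randomly chosen disturbed site scores higher than a randomly chosen undisturbed one; a minimal implementation (scores invented):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """Area under the ROC curve: the probability that a randomly chosen
    positive (disturbed) site outscores a negative (undisturbed) one,
    counting ties as one half."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return np.mean(pos > neg) + 0.5 * np.mean(pos == neg)

# Model susceptibility scores at known disturbed/undisturbed validation sites
a = auroc([0.8, 0.3], [0.5, 0.1])
print(a)
```

A value of 0.5 means the susceptibility map is no better than chance; the study's 0.76 at the transfer site means roughly three of four disturbed/undisturbed pairs are ranked correctly.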
Beam envelope calculations in general linear coupled lattices
NASA Astrophysics Data System (ADS)
Chung, Moses; Qin, Hong; Groening, Lars; Davidson, Ronald C.; Xiao, Chen
2015-01-01
The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.
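As background for the generalized theory, the original uncoupled Courant-Snyder construction can be sketched numerically: the one-turn transfer matrix follows from the Twiss parameters and the phase advance, and a matched beam matrix is invariant under it. This is an illustrative sketch of the standard one-degree-of-freedom case only; the numerical values of beta, alpha, mu, and eps are generic placeholders, not taken from the paper.

```python
import numpy as np

def cs_transfer(beta, alpha, mu):
    """One-turn transfer matrix of a periodic uncoupled lattice,
    expressed in Courant-Snyder (Twiss) parameters and phase advance mu."""
    gamma = (1 + alpha**2) / beta
    c, s = np.cos(mu), np.sin(mu)
    return np.array([[c + alpha * s, beta * s],
                     [-gamma * s,    c - alpha * s]])

beta, alpha, mu, eps = 12.0, -1.5, 0.7, 2e-6   # illustrative values
gamma = (1 + alpha**2) / beta
M = cs_transfer(beta, alpha, mu)
Sigma = eps * np.array([[beta, -alpha], [-alpha, gamma]])  # matched beam matrix

assert abs(np.linalg.det(M) - 1.0) < 1e-12   # symplectic in one DOF
assert np.allclose(M @ Sigma @ M.T, Sigma)   # matched beam is invariant
```

The generalized theory of the abstract replaces the scalar beta with a matrix envelope; the invariance check above is the scalar analogue of matching in the coupled case.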
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
A General Linear Model Approach to Adjusting the Cumulative GPA.
ERIC Educational Resources Information Center
Young, John W.
A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…
Generalized in vitro-in vivo relationship (IVIVR) model based on artificial neural networks
Mendyk, Aleksander; Tuszyński, Paweł K; Polak, Sebastian; Jachowicz, Renata
2013-01-01
Background The aim of this study was to develop a generalized in vitro-in vivo relationship (IVIVR) model based on in vitro dissolution profiles together with quantitative and qualitative composition of dosage formulations as covariates. Such a model would be of substantial aid in the early stages of development of a pharmaceutical formulation, when no in vivo results are yet available and it is impossible to create a classical in vitro-in vivo correlation (IVIVC)/IVIVR. Methods Chemoinformatics software was used to compute the molecular descriptors of drug substances (i.e., active pharmaceutical ingredients) and excipients. The data were collected from the literature. Artificial neural networks were used as the modeling tool. The training process was carried out using the 10-fold cross-validation technique. Results The database contained 93 formulations with 307 inputs initially, later reduced to 28 in the course of a sensitivity analysis. The four best models were introduced into the artificial neural network ensemble. Complete in vivo profiles were predicted accurately for 37.6% of the formulations. Conclusion It has been shown that artificial neural networks can be an effective predictive tool for constructing IVIVR in an integrated generalized model for various formulations. Because IVIVC/IVIVR is classically conducted for 2–4 formulations and with a single active pharmaceutical ingredient, the approach described here is unique in that it incorporates various active pharmaceutical ingredients and dosage forms into a single model. Thus, preliminary IVIVC/IVIVR can be available without in vivo data, which is impossible using current IVIVC/IVIVR procedures. PMID:23569360
The generalized sidelobe canceller based on quaternion widely linear processing.
Tao, Jian-wu; Chang, Wen-xiu
2014-01-01
We investigate the problem of quaternion beamforming based on widely linear processing. First, a quaternion model of linear symmetric array with two-component electromagnetic (EM) vector sensors is presented. Based on array's quaternion model, we propose the general expression of a quaternion semiwidely linear (QSWL) beamformer. Unlike the complex widely linear beamformer, the QSWL beamformer is based on the simultaneous operation on the quaternion vector, which is composed of two jointly proper complex vectors, and its involution counterpart. Second, we propose a useful implementation of QSWL beamformer, that is, QSWL generalized sidelobe canceller (GSC), and derive the simple expressions of the weight vectors. The QSWL GSC consists of two-stage beamformers. By designing the weight vectors of two-stage beamformers, the interference is completely canceled in the output of QSWL GSC and the desired signal is not distorted. We derive the array's gain expression and analyze the performance of the QSWL GSC in the presence of one type of interference. The advantage of QSWL GSC is that the main beam can always point to the desired signal's direction and the robustness to DOA mismatch is improved. Finally, simulations are used to verify the performance of the proposed QSWL GSC. PMID:24955425
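The quaternion formulation builds on the classical complex-valued GSC, whose two-stage structure (a distortionless quiescent branch minus an adaptive branch behind a blocking matrix) can be sketched for a uniform linear array. This is the ordinary complex GSC, not the QSWL version; the array size, directions, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # ULA, half-wavelength spacing
steer = lambda th: np.exp(1j * np.pi * np.arange(N) * np.sin(th)) / np.sqrt(N)

a_d, a_i = steer(0.0), steer(0.5)       # desired and interferer directions
w_q = a_d / (a_d.conj() @ a_d)          # quiescent (distortionless) weight
# blocking matrix: orthonormal basis of the subspace orthogonal to a_d
B = np.linalg.svd(np.eye(N) - np.outer(a_d, a_d.conj()))[0][:, :N - 1]

# sample covariance of interference plus weak noise
x = np.outer(a_i, rng.standard_normal(2000)) + 0.01 * (
    rng.standard_normal((N, 2000)) + 1j * rng.standard_normal((N, 2000)))
R = x @ x.conj().T / 2000
w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
w = w_q - B @ w_a                       # GSC weight: quiescent minus adaptive

assert abs(w.conj() @ a_d - 1.0) < 1e-10   # desired signal undistorted
assert abs(w.conj() @ a_i) < 0.05          # interference suppressed
```

The distortionless property holds by construction because the blocking matrix annihilates the desired steering vector, which mirrors the abstract's claim that the desired signal is not distorted while the interference is canceled.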
Capsule deformation and orientation in general linear flows
NASA Astrophysics Data System (ADS)
Szatmary, Alex; Eggleton, Charles
2010-11-01
We considered the response of spherical and non-spherical capsules to general flows. (A capsule is an elastic membrane enclosing a fluid, immersed in fluid.) First, we established that nonspherical capsules align with the imposed irrotational linear flow; this means that initial orientation does not affect steady-state capsule deformation, so this steady-state deformation can be determined entirely by the capillary number and the type of flow. The type of flow is characterized by r: r=0 for axisymmetric flows, and r=1 for planar flows; intermediate values of r are combinations of planar and axisymmetric flow. By varying the capillary number and r, all irrotational linear Stokes flows can be generated. For the same capillary number, planar flows lead to more deformation than uniaxial or biaxial extensional flows. Deformation varies monotonically with r, so one can determine bounds on capsule deformation in general flow by only looking at uniaxial, biaxial, and planar flow. These results are applicable to spheres in all linear flows and to ellipsoids in irrotational linear flow.
Generalization of continuous-variable quantum cloning with linear optics
NASA Astrophysics Data System (ADS)
Zhai, Zehui; Guo, Juan; Gao, Jiangrui
2006-05-01
We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.
Credibility analysis of risk classes by generalized linear model
NASA Astrophysics Data System (ADS)
Erdemir, Ovgucan Karadag; Sucu, Meral
2016-06-01
In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in nonlife insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data of a Turkish insurance company, and the results for the credible risk classes are interpreted.
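The limited-fluctuation (full-credibility) standard invoked here has a compact closed form for Poisson claim counts. A sketch with the textbook choices p = 0.90 and k = 0.05 (the abstract does not state which standards the paper actually uses):

```python
import math
from statistics import NormalDist

def full_credibility_claims(p=0.90, k=0.05):
    """Expected claim count needed for full credibility: the observed
    frequency lies within +/- k of its mean with probability p."""
    z = NormalDist().inv_cdf((1 + p) / 2)
    return (z / k) ** 2

def credibility_factor(n_claims, p=0.90, k=0.05):
    """Square-root rule for partial credibility, capped at 1."""
    return min(1.0, math.sqrt(n_claims / full_credibility_claims(p, k)))

lam_full = full_credibility_claims()
assert round(lam_full) == 1082            # classic actuarial value
assert credibility_factor(2000) == 1.0    # fully credible class
assert 0 < credibility_factor(300) < 1    # partially credible class
```

The credibility-weighted premium for a class is then Z times the class's own GLM estimate plus (1 - Z) times the portfolio estimate, with Z the factor computed above.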
Clutter locus equation for more general linear array orientation
NASA Astrophysics Data System (ADS)
Bickel, Douglas L.
2011-06-01
The clutter locus is an important concept in space-time adaptive processing (STAP) for ground moving target indicator (GMTI) radar systems. The clutter locus defines the expected ground clutter location in the angle-Doppler domain. Typically in literature, the clutter locus is presented as a line, or even a set of ellipsoids, under certain assumptions about the geometry of the array. Most often, the array is assumed to be in the horizontal plane containing the velocity vector. This paper will give a more general 3-dimensional interpretation of the clutter locus for a general linear array orientation.
Genetic parameters for racing records in trotters using linear and generalized linear models.
Suontama, M; van der Werf, J H J; Juga, J; Ojala, M
2012-09-01
Heritability, repeatability, and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and a logarithmic scale was used for racing time and a fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, ranging from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except that correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success
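The heritability and repeatability figures quoted above are variance ratios; their standard quantitative-genetics definitions can be sketched as below. The variance components used are hypothetical round numbers for illustration, not estimates from the paper.

```python
def heritability(v_add, v_pe, v_res):
    """Narrow-sense heritability: additive genetic share of the
    total phenotypic variance."""
    return v_add / (v_add + v_pe + v_res)

def repeatability(v_add, v_pe, v_res):
    """Permanent (additive genetic + permanent environmental) share of
    phenotypic variance; an upper bound on heritability."""
    return (v_add + v_pe) / (v_add + v_pe + v_res)

# hypothetical variance components, e.g. on the log racing-time scale
h2 = heritability(0.33, 0.27, 0.40)
r = repeatability(0.33, 0.27, 0.40)
assert abs(h2 - 0.33) < 1e-9
assert abs(r - 0.60) < 1e-9
assert r >= h2   # repeatability always bounds heritability from above
```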
Linear spin-2 fields in most general backgrounds
NASA Astrophysics Data System (ADS)
Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael
2016-04-01
We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.
A general linear model for MEG beamformer imaging.
Brookes, Matthew J; Gibson, Andrew M; Hall, Stephen D; Furlong, Paul L; Barnes, Gareth R; Hillebrand, Arjan; Singh, Krish D; Holliday, Ian E; Francis, Sue T; Morris, Peter G
2004-11-01
A new general linear model (GLM) beamformer method is described for processing magnetoencephalography (MEG) data. A standard nonlinear beamformer is used to determine the time course of neuronal activation for each point in a predefined source space. A Hilbert transform gives the envelope of oscillatory activity at each location in any chosen frequency band (not necessary in the case of sustained (DC) fields), enabling the general linear model to be applied and a volumetric T statistic image to be determined. The new method is illustrated by a two-source simulation (sustained field and 20 Hz) and is shown to provide accurate localization. The method is also shown to locate accurately the increasing and decreasing gamma activities to the temporal and frontal lobes, respectively, in the case of a scintillating scotoma. The new method brings the advantages of the general linear model to the analysis of MEG data and should prove useful for the localization of changing patterns of activity across all frequency ranges including DC (sustained fields). PMID:15528094
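The core pipeline described (a source time course, its Hilbert-transform envelope, then a GLM T statistic against the experimental design) can be sketched for one simulated source location. The 20 Hz oscillation and boxcar design are illustrative assumptions; this is a sketch of the idea, not the authors' implementation. Assumes scipy is available.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
fs, T = 250, 40                          # sampling rate (Hz), duration (s)
t = np.arange(fs * T) / fs
box = ((t % 10) < 5).astype(float)       # 5 s on / 5 s off boxcar design

# simulated source time course: 20 Hz oscillation modulated by the design
y = (1 + box) * np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
env = np.abs(hilbert(y))                 # envelope of oscillatory activity

# GLM: env = X @ beta + noise; T statistic for the design regressor
X = np.column_stack([np.ones_like(t), box])
beta, res, *_ = np.linalg.lstsq(X, env, rcond=None)
sigma2 = res[0] / (t.size - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se

assert t_stat > 10   # envelope tracks the design strongly
```

Repeating this per source-space location yields the volumetric T image the abstract refers to.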
NASA Astrophysics Data System (ADS)
Nordtvedt, K.
2015-11-01
A local system of bodies in General Relativity whose exterior metric field asymptotically approaches the Minkowski metric effaces any effects of the matter distribution exterior to its Minkowski boundary condition. To enforce to all orders this property of gravity which appears to hold in nature, a method using linear algebraic scaling equations is developed which generates by an iterative process an N-body Lagrangian expansion for gravity's motion-independent potentials which fulfills exterior effacement along with needed metric potential expansions. Then additional properties of gravity - interior effacement and Lorentz time dilation and spatial contraction - produce additional iterative, linear algebraic equations for obtaining the full non-linear and motion-dependent N-body gravity Lagrangian potentials as well.
ERIC Educational Resources Information Center
Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka
2015-01-01
The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…
Comparative Study of Algorithms for Automated Generalization of Linear Objects
NASA Astrophysics Data System (ADS)
Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.
2014-11-01
Automated generalization, rooted in conventional cartography, has become an increasing concern in both the geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, as it is impossible for human beings to observe the Earth and the processes within it without decreasing its scale. To get optimal results, cartographers and map-making agencies develop sets of rules and constraints; however, these rules remain under consideration and have been the topic of much research up to the present day. Reducing map-generation time and ensuring objectivity are possible by developing automated map generalization algorithms (McMaster and Shea, 1988). Modifying the scale is traditionally a manual process that requires the knowledge of an expert cartographer and depends on the experience of the user, which makes the process very subjective, as every user may generate a different map from the same requirements. However, automating generalization based on cartographic rules and constraints can give consistent results. Moreover, developing an automated system for map generation is a demand of this rapidly changing world. The research we have conducted considers only generalization of roads, as they are one of the indispensable parts of a map. Dehradun city, Uttarakhand state, India, was selected as the study area. The study carried out a comparative analysis of the generalization software sets, operations, and algorithms currently available, and considers the advantages and drawbacks of existing software used worldwide. The research concludes with the development of a road network generalization tool and a final generalized road map of the study area, which explores the use of the open-source Python programming language and attempts to compare different road network generalization algorithms. Thus, the paper discusses alternative solutions for automated generalization of linear objects using GIS technologies. Research made on automated generalization of road network
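A standard building block of linear-object generalization is the Ramer-Douglas-Peucker simplification algorithm; a minimal sketch is below. The sample road coordinates are hypothetical, and the paper's tool may combine this with other operators (smoothing, typification, etc.).

```python
def douglas_peucker(points, tol):
    """Ramer-Douglas-Peucker line simplification: recursively keep a vertex
    only if it lies farther than `tol` from the chord between kept endpoints."""
    def perp_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], tol)
    return left[:-1] + douglas_peucker(points[idx:], tol)

# hypothetical digitized road: nearly straight runs around one sharp bend
road = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(road, tol=1.0)
assert simplified == [(0, 0), (2, -0.1), (3, 5), (7, 9)]  # bend preserved
```

The tolerance plays the role of the target-scale constraint: a larger `tol` removes more detail, emulating a smaller map scale.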
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others, which constitute a thermal bath. The linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique, attracts particular attention in this respect. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bilinearly coupled to the relevant degrees of freedom.
Hand, M. M.
1999-07-30
Variable-speed, horizontal axis wind turbines use blade-pitch control to meet specified objectives for three regions of operation. This paper focuses on controller design for the constant power production regime. A simple, rigid, non-linear turbine model was used to systematically perform trade-off studies between two performance metrics. Minimization of both the deviation of the rotor speed from the desired speed and the motion of the actuator is desired. The robust nature of the proportional-integral-derivative (PID) controller is illustrated, and optimal operating conditions are determined. Because numerous simulation runs may be completed in a short time, the relationship of the two opposing metrics is easily visualized. Traditional controller design generally consists of linearizing a model about an operating point. This step was taken for two different operating points, and the systematic design approach was used. A comparison of the optimal regions selected using the non-linear model and the two linear models shows similarities. The linearization point selection does, however, affect the turbine performance slightly. Exploitation of the simplicity of the model allows surfaces consisting of operation under a wide range of gain values to be created. This methodology provides a means of visually observing turbine performance based upon the two metrics chosen for this study. Design of a PID controller is simplified, and it is possible to ascertain the best possible combination of controller parameters. The wide, flat surfaces indicate that a PID controller is very robust in this variable-speed wind turbine application.
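The PID structure studied can be sketched on a toy first-order stand-in for the rotor dynamics. The plant, gains, and time step below are illustrative assumptions, not the paper's turbine model; the point is that integral action drives the rotor-speed error to zero.

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """PID pitch controller regulating rotor speed of a toy first-order
    rotor model ds/dt = u - s (illustrative plant only)."""
    speed, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - speed
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # pitch command
        prev_err = err
        speed += dt * (u - speed)                # forward-Euler plant update
    return speed

final = simulate_pid(kp=2.0, ki=1.0, kd=0.1)
assert abs(final - 1.0) < 0.01   # integral action removes steady-state error
```

Sweeping kp, ki, kd over grids and recording the two metrics (speed deviation and actuator motion) for each run is how the surfaces described in the abstract can be generated.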
Generalized space and linear momentum operators in quantum mechanics
Costa, Bruno G. da
2014-06-15
We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which implies the definition of a Hermitian deformed linear momentum operator p̂_q and its canonically conjugate deformed position operator x̂_q. A canonical transformation maps the Hamiltonian of a position-dependent-mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is presented as an example. The uncertainty and correspondence principles are analyzed.
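The q-exponential factor is presumably of the Tsallis form common in this deformed-operator literature (an assumption; the abstract does not display the paper's exact convention):

```latex
e_q(x) = \left[\, 1 + (1-q)\, x \,\right]_{+}^{\frac{1}{1-q}},
\qquad
\lim_{q \to 1} e_q(x) = e^{x},
```

where [·]₊ = max(·, 0); the ordinary exponential, and hence the ordinary momentum operator, is recovered in the limit q → 1.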
General mirror pairs for gauged linear sigma models
NASA Astrophysics Data System (ADS)
Aspinwall, Paul S.; Plesser, M. Ronen
2015-11-01
We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.
Marginally specified generalized linear mixed models: a robust approach.
Mills, J E; Field, C A; Dupuis, D J
2002-12-01
Longitudinal data modeling is complicated by the necessity to deal appropriately with the correlation between observations made on the same individual. Building on an earlier nonrobust version proposed by Heagerty (1999, Biometrics 55, 688-698), our robust marginally specified generalized linear mixed model (ROBMS-GLMM) provides an effective method for dealing with such data. This model is one of the first to allow both population-averaged and individual-specific inference. As well, it adopts the flexibility and interpretability of generalized linear mixed models for introducing dependence but builds a regression structure for the marginal mean, allowing valid application with time-dependent (exogenous) and time-independent covariates. These new estimators are obtained as solutions of a robustified likelihood equation involving Huber's least favorable distribution and a collection of weights. Huber's least favorable distribution produces estimates that are resistant to certain deviations from the random effects distributional assumptions. Innovative weighting strategies enable the ROBMS-GLMM to perform well when faced with outlying observations both in the response and covariates. We illustrate the methodology with an analysis of a prospective longitudinal study of laryngoscopic endotracheal intubation, a skill that numerous health-care professionals are expected to acquire. The principal goal of our research is to achieve robust inference in longitudinal analyses. PMID:12495126
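Huber's least favorable distribution corresponds to the familiar clipped-identity psi function and its induced observation weights, which is what bounds the influence of outlying responses in the ROBMS-GLMM. A minimal sketch with the conventional tuning constant c = 1.345:

```python
def huber_psi(r, c=1.345):
    """Huber's psi function: identity for small standardized residuals,
    clipped at +/- c beyond, so outlier influence is bounded."""
    return max(-c, min(c, r))

def huber_weight(r, c=1.345):
    """Weight psi(r)/r applied to an observation with standardized residual r."""
    return 1.0 if abs(r) <= c else c / abs(r)

assert huber_psi(0.5) == 0.5              # inliers untouched
assert huber_psi(10.0) == 1.345           # outlier influence capped
assert abs(huber_weight(10.0) - 0.1345) < 1e-12
```

In the robustified likelihood equations, these weights downweight outlying residuals (and, with additional leverage-based weights, outlying covariates) rather than deleting them.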
Optimization in generalized linear models: A case study
NASA Astrophysics Data System (ADS)
Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina
2016-06-01
The maximum likelihood method is usually chosen to estimate the regression parameters of generalized linear models (GLMs), and is also used for hypothesis testing and goodness-of-fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the parameter estimates with two alternative methods: a derivative-based optimization method, namely BFGS, one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global particle swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than Fisher scoring, and can be good alternatives for finding the parameter estimates of a GLM.
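The alternative proposed here, handing the GLM log-likelihood to a generic optimizer instead of running Fisher scoring, can be sketched for a Poisson GLM with a log link on simulated data (the data and true coefficients are invented for illustration; assumes scipy is available):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 5000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, -0.8])
y = rng.poisson(np.exp(X @ beta_true))

def nll(beta):
    """Mean negative Poisson log-likelihood, log link (constant dropped)."""
    eta = X @ beta
    return np.mean(np.exp(eta) - y * eta)

def grad(beta):
    """Analytic gradient, supplied so BFGS converges cleanly."""
    return X.T @ (np.exp(X @ beta) - y) / n

fit = minimize(nll, x0=np.zeros(2), jac=grad, method="BFGS")
assert np.allclose(fit.x, beta_true, atol=0.1)   # MLE near the truth
```

For canonical-link GLMs the log-likelihood is concave, so BFGS and Fisher scoring find the same optimum; the interest, as in the abstract, is in their relative robustness and cost.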
The left invariant metric in the general linear group
NASA Astrophysics Data System (ADS)
Andruchow, E.; Larotonda, G.; Recht, L.; Varela, A.
2014-12-01
Left invariant metrics induced by the p-norms of the trace in the matrix algebra are studied on the general linear group. By means of the Euler-Lagrange equations, existence and uniqueness of extremal paths for the length functional are established, and regularity properties of these extremal paths are obtained. Minimizing paths in the group are shown to have a velocity with constant singular values and multiplicity. In several special cases, these geodesic paths are computed explicitly. In particular the Riemannian geodesics, corresponding to the case p = 2, are characterized as the product of two one-parameter groups. It is also shown that geodesics are one-parameter groups if and only if the initial velocity is a normal matrix. These results are further extended to the context of compact operators with p-summable spectrum, where a differential equation for the spectral projections of the velocity vector of an extremal path is obtained.
Using parallel banded linear system solvers in generalized eigenvalue problems
NASA Technical Reports Server (NTRS)
Zhang, Hong; Moss, William F.
1993-01-01
Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
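The idea of pairing subspace iteration with a banded solver can be sketched on a symmetric tridiagonal standard eigenproblem (B = I and zero shift for simplicity; the paper's generalized, shifted, parallel setting is more involved). Assumes scipy is available.

```python
import numpy as np
from scipy.linalg import solveh_banded, qr, eigh

n, p = 200, 4
# 1-D Laplacian: symmetric tridiagonal, stored in upper banded form
ab = np.zeros((2, n))
ab[0, 1:] = -1.0                         # superdiagonal
ab[1, :] = 2.0                           # main diagonal
A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))  # dense copy for the Ritz step only

rng = np.random.default_rng(3)
X = rng.standard_normal((n, p))
for _ in range(30):                      # subspace (inverse) iteration
    Y = solveh_banded(ab, X)             # banded solve instead of forming A**-1
    X, _ = qr(Y, mode="economic")        # re-orthonormalize the block
ritz = np.sort(eigh(X.T @ A @ X, eigvals_only=True))  # Rayleigh-Ritz values

exact = 2 - 2 * np.cos(np.pi * np.arange(1, p + 1) / (n + 1))
assert np.allclose(ritz, exact, atol=1e-6)   # smallest 4 eigenvalues recovered
```

In the paper's setting the solve uses A - sigma*B instead of A, and the shift sigma both decouples the banded subsystems across processors and accelerates convergence toward the eigenvalues nearest sigma.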
General Linear Rf-Current Drive Calculation in Toroidal Plasma
NASA Astrophysics Data System (ADS)
Smirnov, A. P.; Harvey, R. W.; Prater, R.
2009-04-01
A new general linear calculation of RF current drive has been implemented in the GENRAY all-frequencies RF ray tracing code. This is referred to as the ADJ-QL package, and is based on the Karney, et al. [1] relativistic Green function calculator, ADJ, generalized to non-circular plasmas in toroidal geometry, and coupled with full, bounce-averaged momentum-space RF quasilinear flux [2] expressions calculated at each point along the RF ray trajectories. This approach includes momentum conservation, polarization effects and the influence of trapped electrons. It is assumed that the electron distribution function remains close to a relativistic Maxwellian function. Within the bounds of these assumptions, small banana width, toroidal geometry and low collisionality, the calculation is applicable for all-frequencies RF electron current drive including electron cyclotron, lower hybrid, fast waves and electron Bernstein waves. GENRAY ADJ-QL calculations of the relativistic momentum-conserving current drive have been applied in several cases: benchmarking of electron cyclotron current drive in ITER against other code results; and electron Bernstein and high harmonic fast wave current drive in NSTX. The impacts of momentum conservation on the current drive are also shown for these cases.
Generalized linear joint PP-PS inversion based on two constraints
NASA Astrophysics Data System (ADS)
Fang, Yuan; Zhang, Feng-Qi; Wang, Yan-Chun
2016-03-01
Conventional joint PP-PS inversion is based on approximations of the Zoeppritz equations and assumes constant VP/VS; therefore, the inversion precision and stability cannot satisfy current exploration requirements. We propose a joint PP-PS inversion method based on the exact Zoeppritz equations that combines Bayesian statistics and generalized linear inversion. A forward model based on the exact Zoeppritz equations is built to minimize the error of the approximations in the large-angle data; the prior distribution of the model parameters is added as a regularization term to reduce the ill-posedness of the inversion; low-frequency constraints are introduced to stabilize the low-frequency data and improve robustness; and a fast algorithm is used to solve the objective function while minimizing the computational load. The proposed method is robust to noise and reproduces real data well.
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and to the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), in which covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
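The core of the CMA, matching a sample residual covariance to a theoretical expectation that is linear in the unknown error variances, can be sketched as an ordinary least-squares problem. The basis matrices and variance values below are illustrative assumptions, not those of the MIT or GFDL GCM studies:

```python
import numpy as np

def covariance_matching(C_res, basis):
    """Least-squares estimate of error variances s_i such that
    sum_i s_i * A_i best matches the sample residual covariance C_res.
    The basis matrices A_i encode the assumed theoretical covariance
    structure of each error source."""
    M = np.column_stack([A.ravel() for A in basis])
    s, *_ = np.linalg.lstsq(M, C_res.ravel(), rcond=None)
    return s

# Illustrative 3x3 example with two known covariance components
A1 = np.eye(3)                               # e.g. measurement-noise structure
A2 = np.ones((3, 3))                         # e.g. a large-scale model-error mode
C_res = 2.0 * A1 + 3.0 * A2                  # "observed" residual covariance
print(covariance_matching(C_res, [A1, A2]))  # recovers the variances 2 and 3
```

In practice the residual covariance is estimated from model-data differences and the fit is usually weighted, but the matching step itself remains a linear least-squares problem as above.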
Spatial temporal disaggregation of daily rainfall from a generalized linear model
NASA Astrophysics Data System (ADS)
Segond, M.-L.; Onof, C.; Wheater, H. S.
2006-12-01
Summary: This paper describes a methodology for continuous simulation of spatially distributed hourly rainfall, based on observed data from a daily raingauge network. Generalized linear models (GLMs), which can represent the spatial and temporal non-stationarities of multi-site daily rainfall (Chandler, R.E., Wheater, H.S., 2002. Analysis of rainfall variability using generalised linear models: a case study from the west of Ireland. Water Resources Research, 38 (10), 1192. doi:10.1029/2001WR000906), are combined with a single-site disaggregation model based on Poisson cluster processes (Koutsoyiannis, D., Onof, C., 2001. Rainfall disaggregation using adjusting procedures on a Poisson cluster model. Journal of Hydrology 246, 109-122). The resulting sub-daily temporal profile is then applied linearly to all sites over the catchment to reproduce the spatially varying daily totals. The method is tested for the River Lee catchment, UK, a tributary of the Thames covering an area of 1400 km². Twenty simulations of 12 years of hourly rainfall are generated at 20 sites and compared with the historical series. The proposed model preserves most standard statistics but has some limitations in the representation of extreme rainfall and the correlation structure. The method can be extended to sites within the modelled region not used in the model calibration.
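The final step, applying one sub-daily profile linearly across all gauges so that each site's daily total is preserved, reduces to a simple rescaling. A minimal sketch, with invented site names and values:

```python
import numpy as np

def disaggregate_daily(daily_totals, hourly_profile):
    """Scale a single normalized 24-value hourly profile to each site's
    daily total, so the hourly values at every site sum back to that
    site's daily amount."""
    profile = np.asarray(hourly_profile, dtype=float)
    profile = profile / profile.sum()        # normalize to unit mass
    return {site: total * profile for site, total in daily_totals.items()}

profile = [0.0] * 8 + [1, 3, 6, 4, 2, 1] + [0.0] * 10  # one afternoon storm shape
hourly = disaggregate_daily({"siteA": 12.0, "siteB": 5.5}, profile)
print(hourly["siteA"].sum())                 # sums back to the daily total, ~12.0
```

The published method obtains the profile from a Poisson cluster model at a single site; the sketch only shows the linear scaling that transfers it to the other gauges.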
Enhancing Retrieval with Hyperlinks: A General Model Based on Propositional Argumentation Systems.
ERIC Educational Resources Information Center
Picard, Justin; Savoy, Jacques
2003-01-01
Discusses the use of hyperlinks for improving information retrieval on the World Wide Web and proposes a general model for using hyperlinks based on Probabilistic Argumentation Systems. Topics include propositional logic, knowledge, and uncertainty; assumptions; using hyperlinks to modify document score and rank; and estimating the popularity of a…
Ma, Rongfei
2015-01-01
In this paper, quantitative analysis of ammonia based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al-plate anodic gas-ionization sensor was used to obtain the current-voltage (I-V) data. The measurement data were processed with the non-linear bistable dynamic model. Results showed that the proposed method quantitatively determines ammonia concentrations. PMID:25975362
Accelerated Hazards Model based on Parametric Families Generalized with Bernstein Polynomials
Chen, Yuhui; Hanson, Timothy; Zhang, Jiajia
2015-01-01
Summary: A transformed Bernstein polynomial that is centered at standard parametric families, such as Weibull or log-logistic, is proposed for use in the accelerated hazards model. This class provides a convenient way to create a Bayesian non-parametric prior for smooth densities, blending the merits of parametric and non-parametric methods, that is amenable to standard estimation approaches. For example, optimization methods in SAS or R can yield the posterior mode and asymptotic covariance matrix. This novel nonparametric prior is employed in the accelerated hazards model, which is further generalized to time-dependent covariates. The proposed approach fares considerably better than previous approaches in simulations; data on the effectiveness of biodegradable carmustine polymers for recurrent malignant brain gliomas are investigated. PMID:24261450
Park, Chan-Gyung; Hwang, Jai-chan; Park, Jaehong; Noh, Hyerim
2010-03-15
We study a generalized version of the Chaplygin gas as a unified model of dark matter and dark energy. Using realistic theoretical models and the currently available observational data from the age of the universe, the expansion history based on type Ia supernovae, the matter power spectrum, the cosmic microwave background radiation anisotropy power spectra, and the perturbation growth factor, we put the unified model under observational test. As the model has only two free parameters in the flat Friedmann background [the ΛCDM (cold dark matter) model has only one free parameter], we show that the model is already tightly constrained by currently available observations. Only the parameter space extremely close to the ΛCDM model is allowed in this unified model.
NASA Astrophysics Data System (ADS)
Sakaris, Christos S.; Sakellariou, John S.; Fassois, Spilios D.
2015-07-01
A Generalized Functional Model Based Method for precise vibration-based damage localization on structures consisting of 1D, 2D, or 3D elements is introduced. The method generalizes previous versions applicable to structures consisting of 1D elements, thus allowing for 2D and 3D elements as well. It is based on scalar (single-sensor) or vector (multiple-sensor) Functional Models which, in the inspection phase, incorporate the mathematical form of the specific structural topology. Precise localization is then based on coordinate estimation within this model structure, and confidence bounds are also obtained. The effectiveness of the method is demonstrated through experiments on a 3D truss structure where damage corresponds to the loosening of a single bolt. Both the scalar and vector versions of the method are shown to be effective even within a very limited, low-frequency bandwidth of 3-59 Hz. The improvement achieved through the use of multiple sensors is also demonstrated.
Linear and generalized linear models for the detection of QTL effects on within-subject variability
Wittenburg, Dörte; Guiard, Volker; Liese, Friedrich; Reinsch, Norbert
2007-01-01
Summary: Quantitative trait loci (QTLs) may affect not only the mean of a trait but also its variability. A special aspect is the variability between multiple measured traits of genotyped animals, such as the within-litter variance of piglet birth weights. The sample variance of repeated measurements is assigned as an observation for every genotyped individual. It is shown that the conditional distribution of the non-normally distributed trait can be approximated by a gamma distribution. To detect QTL effects in the daughter design, a generalized linear model with the identity link function is applied. Suitable test statistics are constructed to test the null hypothesis H0 (no QTL with effect on the within-litter variance is segregating) versus the alternative HA (there is a QTL with effect on the variability of birth weight within litter). Furthermore, estimators of the QTL effect and the QTL position are introduced and discussed. The efficiency of the presented tests is compared with that of a test based on weighted regression. The type I error probability as well as the power of QTL detection are discussed and compared for the different tests. PMID:18208630
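For a Gamma GLM with identity link, the iteratively reweighted least squares (IRLS) weights reduce to 1/μ² and the working response is simply the observation itself, so the fit can be sketched with plain numpy. The data below are synthetic, not the piglet birth-weight study:

```python
import numpy as np

def gamma_glm_identity(X, y, n_iter=50, tol=1e-8):
    """Fit a Gamma GLM with identity link by IRLS: with variance
    function V(mu) = mu^2 and link g(mu) = mu, the working response
    equals y and the weights are 1/mu^2."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS starting values
    for _ in range(n_iter):
        mu = np.clip(X @ beta, 1e-8, None)         # keep fitted means positive
        w = 1.0 / mu**2                            # Gamma/identity IRLS weights
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.uniform(0.5, 2.0, n)])
mu_true = X @ np.array([1.0, 2.0])                 # mean is linear in the covariate
y = rng.gamma(20.0, mu_true / 20.0)                # Gamma observations with mean mu_true
print(gamma_glm_identity(X, y))                    # estimates close to [1.0, 2.0]
```

Statistical packages fit the same model directly (e.g. a Gamma family with identity link); the sketch only makes the IRLS mechanics behind such a fit explicit.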
Fermion unification model based on the intrinsic SU(8) symmetry of a generalized Dirac equation
NASA Astrophysics Data System (ADS)
Marsch, Eckart; Narita, Yasuhito
2015-10-01
A natural generalization of the original Dirac spinor into a multi-component spinor is achieved, which corresponds to the single lepton and the three quarks of the first family of the standard model of elementary particle physics. Different fermions result from similarity transformations of the Dirac equation, but apparently there can be no more fermions according to the maximal multiplicity revealed in this study. Rotations in the fermion state space are achieved by the unitary generators of the U(1) and SU(3) groups, corresponding to quantum electrodynamics (QED, based on electric charge) and chromodynamics (QCD, based on colour charge). In addition to hypercharge, the dual degree of freedom of hyperspin emerges, which occurs due to the duplicity implied by the two related (Weyl and Dirac) representations of the Dirac equation. This yields the SU(2) symmetry of the weak interaction, which can be married to U(1) to generate the unified electroweak interaction as in the standard model. Therefore, the symmetry group encompassing all three groups mentioned above is SU(8), which can accommodate and unify the observed eight basic stable fermions.
Connections between Generalizing and Justifying: Students' Reasoning with Linear Relationships
ERIC Educational Resources Information Center
Ellis, Amy B.
2007-01-01
Research investigating algebra students' abilities to generalize and justify suggests that they experience difficulty in creating and using appropriate generalizations and proofs. Although the field has documented students' errors, less is known about what students do understand to be general and convincing. This study examines the ways in which…
NASA Technical Reports Server (NTRS)
McCormick, S.; Ruge, John W.
1998-01-01
This work represents a part of a project to develop an atmospheric general circulation model based on the semi-Lagrangian advection of potential vorticity (PV) with divergence as the companion prognostic variable.
Wang, Shuo; Cao, Yang
2015-01-01
Random effects in cellular systems are an important topic in systems biology and are often simulated with Gillespie's stochastic simulation algorithm (SSA). Abridgment refers to model reduction that approximates a group of reactions by a smaller group with fewer species and reactions. This paper presents a theoretical analysis, based on comparison of the first exit time, of the abridgment of a linear chain reaction model motivated by systems with multiple phosphorylation sites. The analysis shows that if the relaxation time of the fast subsystem is much smaller than the mean firing time of the slow reactions, the abridgment can be applied with little error. This analysis is further verified with numerical experiments for models of bistable switches and oscillations in which the linear chain system plays a critical role. PMID:26263559
A General Linear Method for Equating with Small Samples
ERIC Educational Resources Information Center
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs
ERIC Educational Resources Information Center
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-01-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…
Guisan, A.; Edwards, T.C., Jr.; Hastie, T.
2002-01-01
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.
Burant, Aniela; Thompson, Christopher; Lowry, Gregory V; Karamalidis, Athanasios K
2016-05-17
Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch-reactor system with dual spectroscopic detectors: a near-infrared spectrometer for measuring the organic analyte in the CO2 phase and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly-parameter linear free-energy relationship and to develop five new linear free-energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models target a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use the vapor pressure and aqueous solubility of the organic compound at 25 °C and the CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound-class models provide better estimates of partitioning behavior for compounds in their class than does the model built for the entire data set. PMID:27081725
NASA Astrophysics Data System (ADS)
Hakkarainen, Elina; Tähtinen, Matti
2016-05-01
Demonstrations of direct steam generation (DSG) in linear Fresnel collectors (LFCs) have given promising results, with higher steam parameters than the current state-of-the-art parabolic trough collector (PTC) technology using oil as heat transfer fluid (HTF). However, DSG technology lacks a feasible solution for long-term thermal energy storage (TES), which is important for CSP technology in order to offer dispatchable power. Recently, molten salts have been proposed for use as HTF and directly as storage medium in both line-focusing solar fields, offering a storage capacity of several hours. This direct molten salt (DMS) storage concept has already gained operational experience in a solar tower power plant, and it is in the demonstration phase for both LFC and PTC systems. Dynamic simulation programs are valuable tools for the design and optimization of solar power plants. In this work, the APROS dynamic simulation program is used to model a DMS linear Fresnel solar field with a two-tank TES system, and example simulation results are presented to verify the functionality of the model and the capability of APROS for CSP modelling and simulation.
Generalizing a categorization of students' interpretations of linear kinematics graphs
NASA Astrophysics Data System (ADS)
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-06-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.
Ureba, A.; Salguero, F. J.; Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A.; Miras, H.; Linares, R.; Perucha, M.
2014-08-15
Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through aMATLAB interface, which is based on the sequencing of a novel map, called “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
NASA Technical Reports Server (NTRS)
Stevens, P. K.
1981-01-01
This paper presents a generalization of the Nyquist stability criterion to include general multivariable linear stationary systems subject to linear static and dynamic feedback. At the same time, a unifying proof is given for all known versions of the Nyquist criterion for finite dimensional systems.
Arakawa, Akio; Konor, C.S.
1997-12-31
There are great conceptual advantages in the use of an isentropic vertical coordinate in atmospheric models. Design of such a model, however, requires overcoming computational problems due to the intersection of coordinate surfaces with the earth's surface. Under this project, the authors have completed the development of a model based on a generalized vertical coordinate, ζ = F(Θ, p, p_s), in which an isentropic coordinate can be combined with a terrain-following σ-coordinate with a smooth transition between the two. One of the key issues in developing such a model is to satisfy the consistency between the predictions of pressure and potential temperature. In the model, the consistency is satisfied by the use of an equation that determines the vertical mass flux. A procedure to properly choose ζ = F(Θ, p, p_s) is also developed, which guarantees that ζ is a monotonic function of height even when unstable stratification occurs. There are two versions of the model constructed in parallel: one is the middle-latitude β-plane version and the other is the global version. Both versions include moisture prediction, relaxed large-scale condensation and relaxed moist-convective adjustment schemes. A well-mixed planetary boundary layer (PBL) is also added.
Computer analysis of general linear networks using digraphs.
NASA Technical Reports Server (NTRS)
Mcclenahan, J. O.; Chan, S.-P.
1972-01-01
Investigation of the application of digraphs in analyzing general electronic networks, and development of a computer program based on a particular digraph method developed by Chen. The Chen digraph method is a topological method for the solution of networks and serves as a shortcut when hand calculations are required. The advantage offered by this method of analysis is that the results are in symbolic form. It is limited, however, by the size of network that may be handled. Usually hand calculations become too tedious for networks larger than about five nodes, depending on how many elements the network contains. Direct determinant expansion for a five-node network is also a very tedious process.
NASA Astrophysics Data System (ADS)
Prieto Sierra, C.; García Alonso, E.; Mínguez Solana, R.; Medina Santamaría, R.
2013-07-01
This paper explores a new approach to lumped hydrological modelling based on general laws of growth, in particular the classic logistic equation proposed by Verhulst. By identifying homologies between the growth of a generic system and the evolution of the flow at the outlet of a river basin, and adopting some complementary hypotheses, a compact model with 3 parameters, extensible to 4 or 5, is obtained. The model assumes that a hydrological system, under persistent conditions of precipitation, potential evapotranspiration and land uses, tends to reach an equilibrium discharge that can be expressed as a function of a dynamic aridity index, including a free parameter reflecting the basin properties. The rate at which the system approaches such equilibrium discharge, which is constantly changing and generally not attainable, is another parameter of the model; finally, a time lag is introduced to reflect a characteristic delay between the input (precipitation) and output (discharge) in the system behaviour. To test the suitability of the proposed model, 5 previously studied river basins in the UK, with different characteristics, have been analysed at a daily scale, and the results compared with those of the IHACRES model (Identification of unit Hydrographs and Component flows from Rainfall, Evaporation and Streamflow data). It is found that the logistic equilibrium model with 3 parameters properly reproduces the hydrological behaviour of such basins, improving on IHACRES in four of them; moreover, the model parameters are relatively stable over different periods of calibration and evaluation. Adding more parameters to the basic structure, the fits improve only slightly in some of the analysed series, while potentially increasing equifinality effects. The results obtained indicate that growth equations, with possible variations, can be useful and parsimonious tools for hydrological modelling, at least in certain types of watersheds.
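The core idea, discharge relaxing toward an ever-changing equilibrium via a logistic law, can be sketched with an explicit-Euler step. The equilibrium-discharge form, parameter values, and forcing series below are illustrative assumptions, not the calibrated model of the paper:

```python
import math

def logistic_discharge(precip, pet, r=0.3, a=0.8, lag=1, q0=0.1):
    """Explicit-Euler integration of a Verhulst-type logistic step,
    q_{t+1} = q_t + r * q_t * (1 - q_t / q_eq(t)),
    where q_eq depends on a dynamic aridity index (assumed form)."""
    q, out = q0, []
    for t in range(len(precip)):
        p = precip[max(t - lag, 0)]               # lagged precipitation input
        e = pet[max(t - lag, 0)]                  # lagged potential evapotranspiration
        aridity = e / (p + 1e-9)                  # dynamic aridity index
        q_eq = max(p * math.exp(-a * aridity), 1e-9)  # assumed equilibrium discharge
        q = max(q + r * q * (1.0 - q / q_eq), 1e-9)   # logistic relaxation step
        out.append(q)
    return out

# Under persistent forcing the simulated discharge approaches q_eq
series = logistic_discharge([10.0] * 200, [5.0] * 200)
print(round(series[-1], 3))                       # converges to 10*exp(-0.4) ≈ 6.703
```

Here r plays the role of the relaxation-rate parameter, a the basin-property parameter in the aridity function, and lag the input-output delay, mirroring the 3-parameter structure described above.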
Use of generalized linear models and digital data in a forest inventory of Northern Utah
Moisen, G.G.; Edwards, T.C., Jr.
1999-01-01
Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
NASA Astrophysics Data System (ADS)
Scafetta, Nicola
2013-11-01
Power spectra of global surface temperature (GST) records (available since 1850) reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. Equivalent oscillations are found in numerous multisecular paleoclimatic records. The Coupled Model Intercomparison Project 5 (CMIP5) general circulation models (GCMs), to be used in the IPCC Fifth Assessment Report (AR5, 2013), are analyzed and found unable to reconstruct this variability. In particular, from 2000 to 2013.5 a GST plateau is observed while the GCMs predicted a warming rate of about 2 °C/century. In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. For example, a quasi 60-year natural oscillation simultaneously explains the 1850-1880, 1910-1940 and 1970-2000 warming periods, the 1880-1910 and 1940-1970 cooling periods and the post-2000 GST plateau. This hypothesis implies that about 50% of the ~ 0.5 °C global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5 °C range (as claimed by the IPCC, 2007) to 1.0-2.3 °C with a likely median of ~ 1.5 °C instead of ~ 3.0 °C. Also, modern paleoclimatic temperature reconstructions showing a larger preindustrial variability than the hockey-stick shaped temperature reconstructions developed in the early 2000s imply a weaker anthropogenic effect and a stronger solar contribution to climatic changes. The observed natural oscillations could be driven by astronomical forcings. The ~ 9.1 year oscillation appears to be a combination of long soli-lunar tidal oscillations, while quasi 10-11, 20 and 60 year oscillations are typically found among major solar and heliospheric oscillations driven mostly by Jupiter and Saturn movements. Solar models based
NASA Astrophysics Data System (ADS)
Irmak, Suat; Mutiibwa, Denis
2010-08-01
The 1-D and single layer combination-based energy balance Penman-Monteith (PM) model has limitations in practical application due to the lack of canopy resistance (rc) data for different vegetation surfaces. rc could be estimated by inversion of the PM model if the actual evapotranspiration (ETa) rate is known, but this approach has its own set of issues. Instead, an empirical method of estimating rc is suggested in this study. We investigated the relationships between primary micrometeorological parameters and rc and developed seven models to estimate rc for a nonstressed maize canopy on an hourly time step using a generalized-linear modeling approach. The most complex rc model uses net radiation (Rn), air temperature (Ta), vapor pressure deficit (VPD), relative humidity (RH), wind speed at 3 m (u3), aerodynamic resistance (ra), leaf area index (LAI), and solar zenith angle (Θ). The simplest model requires Rn, Ta, and RH. We present the practical implementation of all models via experimental validation using scaled up rc data obtained from the dynamic diffusion porometer-measured leaf stomatal resistance through an extensive field campaign in 2006. For further validation, we estimated ETa by solving the PM model using the modeled rc from all seven models and compared the PM ETa estimates with the Bowen ratio energy balance system (BREBS)-measured ETa for an independent data set in 2005. The relationships between hourly rc versus Ta, RH, VPD, Rn, incoming shortwave radiation (Rs), u3, wind direction, LAI, Θ, and ra were presented and discussed. We demonstrated the negative impact of exclusion of LAI when modeling rc, whereas exclusion of ra and Θ did not impact the performance of the rc models. Compared to the calibration results, the validation root mean square difference between observed and modeled rc increased by 5 s m-1 for all rc models developed, ranging from 9.9 s m-1 for the most complex model to 22.8 s m-1 for the simplest model, as compared with the
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem that is equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
NASA Technical Reports Server (NTRS)
Rankin, C. C.
1988-01-01
A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
ERIC Educational Resources Information Center
Henson, Robin K.
In General Linear Model (GLM) analyses, it is important to interpret structure coefficients, along with standardized weights, when evaluating variable contribution to observed effects. Although often used in canonical correlation analysis, structure coefficients are less frequently used in multiple regression and several other multivariate…
Implementing general quantum measurements on linear optical and solid-state qubits
NASA Astrophysics Data System (ADS)
Ota, Yukihiro; Ashhab, Sahel; Nori, Franco
2013-03-01
We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.
Zhou, Shaohua Kevin; Aggarwal, Gaurav; Chellappa, Rama; Jacobs, David W
2007-02-01
Traditional photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearance of only one object. In this paper, we generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by making use of the linear Lambertian property. A linear Lambertian object is one which is linearly spanned by a set of basis objects and has a Lambertian surface. The linear property leads to a rank constraint and, consequently, a factorization of an observation matrix that consists of exemplar images of different objects (e.g., faces of different subjects) under different, unknown illuminations. Integrability and symmetry constraints are used to fully recover the subspace bases using a novel linearized algorithm that takes the varying albedo field into account. The effectiveness of the linear Lambertian property is further investigated by using it for the problem of illumination-invariant face recognition using just one image. Attached shadows are incorporated in the model by a careful treatment of the inherent nonlinearity in Lambert's law. This enables us to extend our algorithm to perform face recognition in the presence of multiple illumination sources. Experimental results using standard data sets are presented. PMID:17170477
Shen, Mouquan; Park, Ju H
2016-07-01
This paper addresses the H∞ filtering of continuous Markov jump linear systems with general transition probabilities and output quantization. S-procedure is employed to handle the adverse influence of the quantization and a new approach is developed to conquer the nonlinearity induced by uncertain and unknown transition probabilities. Then, sufficient conditions are presented to ensure the filtering error system to be stochastically stable with the prescribed performance requirement. Without specified structure imposed on introduced slack variables, a flexible filter design method is established in terms of linear matrix inequalities. The effectiveness of the proposed method is validated by a numerical example. PMID:27129765
Non-linear regime of the Generalized Minimal Massive Gravity in critical points
NASA Astrophysics Data System (ADS)
Setare, M. R.; Adami, H.
2016-03-01
The Generalized Minimal Massive Gravity (GMMG) theory is realized by adding the CS deformation term, the higher derivative deformation term, and an extra term to pure Einstein gravity with a negative cosmological constant. In the present paper we obtain exact solutions to the GMMG field equations in the non-linear regime of the model. GMMG model about AdS_3 space is conjectured to be dual to a 2-dimensional CFT. We study the theory in critical points corresponding to the central charges c_-=0 or c_+=0, in the non-linear regime. We show that AdS_3 wave solutions are present, and have logarithmic form in critical points. Then we study the AdS_3 non-linear deformation solution. Furthermore we obtain logarithmic deformation of extremal BTZ black hole. After that using Abbott-Deser-Tekin method we calculate the energy and angular momentum of these types of black hole solutions.
Linear and nonlinear associations between general intelligence and personality in Project TALENT.
Major, Jason T; Johnson, Wendy; Deary, Ian J
2014-04-01
Research on the relations of personality traits to intelligence has primarily been concerned with linear associations. Yet, there are no a priori reasons why linear relations should be expected over nonlinear ones, which represent a much larger set of all possible associations. Using 2 techniques, quadratic and generalized additive models, we tested for linear and nonlinear associations of general intelligence (g) with 10 personality scales from Project TALENT (PT), a nationally representative sample of approximately 400,000 American high school students from 1960, divided into 4 grade samples (Flanagan et al., 1962). We departed from previous studies, including one with PT (Reeve, Meyer, & Bonaccio, 2006), by modeling latent quadratic effects directly, controlling the influence of the common factor in the personality scales, and assuming a direction of effect from g to personality. On the basis of the literature, we made 17 directional hypotheses for the linear and quadratic associations. Of these, 53% were supported in all 4 male grades and 58% in all 4 female grades. Quadratic associations explained substantive variance above and beyond linear effects (mean R² between 1.8% and 3.6%) for Sociability, Maturity, Vigor, and Leadership in males and Sociability, Maturity, and Tidiness in females; linear associations were predominant for other traits. We discuss how suited current theories of the personality-intelligence interface are to explain these associations, and how research on intellectually gifted samples may provide a unique way of understanding them. We conclude that nonlinear models can provide incremental detail regarding personality and intelligence associations. PMID:24660993
Semiparametric Analysis of Heterogeneous Data Using Varying-Scale Generalized Linear Models
Xie, Minge; Simpson, Douglas G.; Carroll, Raymond J.
2009-01-01
This article describes a class of heteroscedastic generalized linear regression models in which a subset of the regression parameters are rescaled nonparametrically, and develops efficient semiparametric inferences for the parametric components of the models. Such models provide a means to adapt for heterogeneity in the data due to varying exposures, varying levels of aggregation, and so on. The class of models considered includes generalized partially linear models and nonparametrically scaled link function models as special cases. We present an algorithm to estimate the scale function nonparametrically, and obtain asymptotic distribution theory for regression parameter estimates. In particular, we establish that the asymptotic covariance of the semiparametric estimator for the parametric part of the model achieves the semiparametric lower bound. We also describe a bootstrap-based goodness-of-scale test. We illustrate the methodology with simulations, published data, and data from collaborative research on ultrasound safety. PMID:19444331
Linear relations in microbial reaction systems: a general overview of their origin, form, and use.
Noorman, H J; Heijnen, J J; Ch A M Luyben, K
1991-09-01
In microbial reaction systems, there are a number of linear relations among net conversion rates. These can be very useful in the analysis of experimental data. This article provides a general approach for the formation and application of the linear relations. Two types of system description, one considering the biomass as a black box and the other based on metabolic pathways, are encountered. These are defined in a linear vector and matrix algebra framework. A correct a priori description can be obtained by three useful tests: the independency, consistency, and observability tests. The linear relations provided by the two descriptions are different. The black box approach provides only conservation relations. They are derived from element, electrical charge, energy, and Gibbs energy balances. The metabolic approach provides, in addition to the conservation relations, metabolic and reaction relations. These result from component, energy, and Gibbs energy balances. Thus it is more attractive to use the metabolic description than the black box approach. A number of different types of linear relations given in the literature are reviewed. They are classified according to the different categories that result from the black box or the metabolic system description. Validation of hypotheses related to metabolic pathways can be supported by experimental validation of the linear metabolic relations. However, definite proof from biochemical evidence remains indispensable. PMID:18604879
Zhao, Ningning; Basarab, Adrian; Kouame, Denis; Tourneret, Jean-Yves
2016-08-01
This paper proposes a joint segmentation and deconvolution Bayesian method for medical ultrasound (US) images. Contrary to piecewise homogeneous images, US images exhibit heavy characteristic speckle patterns correlated with the tissue structures. The generalized Gaussian distribution (GGD) has been shown to be one of the most relevant distributions for characterizing the speckle in US images. Thus, we propose a GGD-Potts model defined by a label map coupling US image segmentation and deconvolution. The Bayesian estimators of the unknown model parameters, including the US image, the label map, and all the hyperparameters are difficult to be expressed in a closed form. Thus, we investigate a Gibbs sampler to generate samples distributed according to the posterior of interest. These generated samples are finally used to compute the Bayesian estimators of the unknown parameters. The performance of the proposed Bayesian model is compared with the existing approaches via several experiments conducted on realistic synthetic data and in vivo US images. PMID:27187959
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. PMID:25385093
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie
2010-10-10
The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model is validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space. PMID:20935713
Unified Einstein-Virasoro Master Equation in the General Non-Linear Sigma Model
Boer, J. de; Halpern, M.B.
1996-06-05
The Virasoro master equation (VME) describes the general affine-Virasoro construction $T=L^{ab}J_aJ_b+iD^a\partial J_a$ in the operator algebra of the WZW model, where $L^{ab}$ is the inverse inertia tensor and $D^a$ is the improvement vector. In this paper, we generalize this construction to find the general (one-loop) Virasoro construction in the operator algebra of the general non-linear sigma model. The result is a unified Einstein-Virasoro master equation which couples the spacetime spin-two field $L^{ab}$ to the background fields of the sigma model. For a particular solution $L_G^{ab}$, the unified system reduces to the canonical stress tensors and conventional Einstein equations of the sigma model, and the system reduces to the general affine-Virasoro construction and the VME when the sigma model is taken to be the WZW action. More generally, the unified system describes a space of conformal field theories which is presumably much larger than the sum of the general affine-Virasoro construction and the sigma model with its canonical stress tensors. We also discuss a number of algebraic and geometrical properties of the system, including its relation to an unsolved problem in the theory of $G$-structures on manifolds with torsion.
Numerical study of fourth-order linearized compact schemes for generalized NLS equations
NASA Astrophysics Data System (ADS)
Liao, Hong-lin; Shi, Han-sheng; Zhao, Ying
2014-08-01
The fourth-order compact approximation for the spatial second-derivative and several linearized approaches, including the time-lagging method of Zhang et al. (1995), the local-extrapolation technique of Chang et al. (1999) and the recent scheme of Dahlby et al. (2009), are considered in constructing fourth-order linearized compact difference (FLCD) schemes for generalized NLS equations. By applying a new time-lagging linearized approach, we propose a symmetric fourth-order linearized compact difference (SFLCD) scheme, which is shown to be more robust in long-time simulations of plane wave, breather, periodic traveling-wave and solitary wave solutions. Numerical experiments suggest that the SFLCD scheme is a little more accurate than some other FLCD schemes and the split-step compact difference scheme of Dehghan and Taleei (2010). Compared with the time-splitting pseudospectral method of Bao et al. (2003), our SFLCD method is more suitable for oscillating solutions or the problems with a rapidly varying potential.
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, M.; Hatfield, J.S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
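The point of the abstract above, that the normality assumption concerns the model residuals rather than the raw response, is easy to demonstrate by simulation. A minimal sketch (not code from the article; `numpy` and `scipy` assumed): a two-group design with a large mean difference produces a clearly bimodal, non-normal response, yet perfectly well-behaved residuals.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two-group "ANOVA" design: large group-mean difference, normal errors.
n = 500
group = np.repeat([0, 1], n)
means = np.array([0.0, 8.0])
y = means[group] + rng.normal(0.0, 1.0, size=2 * n)

# The raw response is bimodal, hence clearly non-normal...
_, p_raw = stats.shapiro(y)

# ...but the residuals (y minus the fitted group means) are normal,
# which is what t and F tests actually require.
fitted = np.array([y[group == g].mean() for g in (0, 1)])[group]
residuals = y - fitted
_, p_resid = stats.shapiro(residuals)

print(f"Shapiro-Wilk p, raw response: {p_raw:.2e}")  # essentially zero
print(f"Shapiro-Wilk p, residuals:    {p_resid:.2f}")  # typically large
```

Testing normality on `y` before the analysis would wrongly push an analyst toward nonparametric alternatives or transformations, exactly the error the article describes.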
A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories
NASA Astrophysics Data System (ADS)
Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.; Noller, Johannes
2016-08-01
We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and ``Beyond Horndeski'' theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.
Generalized stochastic resonance in a linear fractional system with a random delay
NASA Astrophysics Data System (ADS)
Gao, Shi-Long
2012-12-01
The generalized stochastic resonance (GSR) phenomena in a linear fractional random-delayed system driven by a weak periodic signal and an additive noise are considered in this paper. A random delay is considered for a linear fractional Langevin equation to describe the intercellular signal transmission and material exchange processes in ion channels. By virtue of the small delay approximation and Laplace transformation, the analytical expression for the amplitude of the first-order steady state moment is obtained. The simulation results show that the amplitude curves as functions of different system parameters behave non-monotonically and exhibit typical characteristics of GSR phenomena. Furthermore, a physical explanation for all the GSR phenomena is given and the cooperative effects of random delay and the fractional memory are also discussed.
The generalized Dirichlet-Neumann map for linear elliptic PDEs and its numerical implementation
NASA Astrophysics Data System (ADS)
Sifalakis, A. G.; Fokas, A. S.; Fulton, S. R.; Saridakis, Y. G.
2008-09-01
A new approach for analyzing boundary value problems for linear and for integrable nonlinear PDEs was introduced in Fokas [A unified transform method for solving linear and certain nonlinear PDEs, Proc. Roy. Soc. London Ser. A 453 (1997) 1411-1443]. For linear elliptic PDEs, an important aspect of this approach is the characterization of a generalized Dirichlet to Neumann map: given the derivative of the solution along a direction of an arbitrary angle to the boundary, the derivative of the solution perpendicularly to this direction is computed without solving on the interior of the domain. This is based on the analysis of the so-called global relation, an equation which couples known and unknown components of the derivative on the boundary and which is valid for all values of a complex parameter k. A collocation-type numerical method for solving the global relation for the Laplace equation in an arbitrary bounded convex polygon was introduced in Fulton et al. [An analytical method for linear elliptic PDEs and its numerical implementation, J. Comput. Appl. Math. 167 (2004) 465-483]. Here, by choosing a different set of the "collocation points" (values for k), we present a significant improvement of the results in Fulton et al. [An analytical method for linear elliptic PDEs and its numerical implementation, J. Comput. Appl. Math. 167 (2004) 465-483]. The new collocation points lead to well-conditioned collocation methods. Their combination with sine basis functions leads to a collocation matrix whose diagonal blocks are point diagonal matrices yielding efficient implementation of iterative methods; numerical experimentation suggests quadratic convergence. The choice of Chebyshev basis functions leads to higher order convergence, which for regular polygons appears to be exponential.
Random generalized linear model: a highly accurate and interpretable ensemble predictor
2013-01-01
Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM-based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM-based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
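The bagging-plus-random-subspace idea behind RGLM can be sketched in a few lines. The following is an illustrative reimplementation in Python with a plain least-squares linear model as the base GLM, not the randomGLM R package itself, and it omits the forward selection and interaction terms of the real method; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 10 features, only the first 3 informative.
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0.0, 0.1, size=n)

def bagged_glm(X, y, n_bags=200, subspace=5, rng=rng):
    """Fit one linear model per bootstrap sample on a random feature subset."""
    n, p = X.shape
    coefs = np.zeros((n_bags, p + 1))  # intercept + one slot per feature
    for b in range(n_bags):
        rows = rng.integers(0, n, size=n)                    # bootstrap sample
        feats = rng.choice(p, size=subspace, replace=False)  # random subspace
        Xb = np.column_stack([np.ones(n), X[rows][:, feats]])
        beta, *_ = np.linalg.lstsq(Xb, y[rows], rcond=None)
        coefs[b, 0] = beta[0]
        coefs[b, 1 + feats] = beta[1:]  # unselected features stay at zero
    return coefs

coefs = bagged_glm(X, y)

# Each base learner is linear, so averaging predictions equals
# predicting with the averaged coefficients.
beta_bar = coefs.mean(axis=0)
y_hat = beta_bar[0] + X @ beta_bar[1:]

# Variable importance: mean absolute coefficient across bags.
importance = np.abs(coefs[:, 1:]).mean(axis=0)
print(np.argsort(importance)[::-1][:3])  # informative features should rank first
```

Thinning, as described in the abstract, would correspond to refitting the ensemble using only the features with the highest importance scores.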
NASA Astrophysics Data System (ADS)
Sakaris, C. S.; Sakellariou, J. S.; Fassois, S. D.
2016-06-01
This study focuses on the problem of vibration-based damage precise localization via data-based, time series type, methods for structures consisting of 1D, 2D, or 3D elements. A Generalized Functional Model Based method is postulated based on an expanded Vector-dependent Functionally Pooled ARX (VFP-ARX) model form, capable of accounting for an arbitrary structural topology. The FP model's operating parameter vector elements are properly constrained to reflect any given topology. Damage localization is based on operating parameter vector estimation within the specified topology, so that the location estimate and its uncertainty bounds are statistically optimal. The method's effectiveness is experimentally demonstrated through damage precise localization on a laboratory spatial truss structure using various damage scenarios and a single pair of random excitation - vibration response signals in a low and limited frequency bandwidth.
Standard errors for EM estimates in generalized linear models with random effects.
Friedl, H; Kauermann, G
2000-09-01
A procedure is derived for computing standard errors of EM estimates in generalized linear models with random effects. Quadrature formulas are used to approximate the integrals in the EM algorithm, where two different approaches are pursued, i.e., Gauss-Hermite quadrature in the case of Gaussian random effects and nonparametric maximum likelihood estimation for an unspecified random effect distribution. An approximation of the expected Fisher information matrix is derived from an expansion of the EM estimating equations. This allows for inferential arguments based on EM estimates, as demonstrated by an example and simulations. PMID:10985213
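For the Gaussian-random-effects case mentioned above, the Gauss-Hermite step can be sketched for a single cluster of a random-intercept logistic model. This is an illustrative fragment under assumed notation, not the authors' procedure; `numpy` only.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_loglik(y, x, beta, sigma, n_nodes=20):
    """Gauss-Hermite approximation to one cluster's marginal log-likelihood
    in a random-intercept logistic model:
        log ∫ Π_j Bernoulli(y_j | expit(beta*x_j + b)) N(b; 0, sigma^2) db
    """
    # hermgauss gives nodes/weights for  ∫ e^{-t^2} g(t) dt ≈ Σ_k w_k g(t_k)
    nodes, weights = hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes          # change of variable b = √2·σ·t
    eta = beta * x[:, None] + b[None, :]      # linear predictor, obs × node
    prob = 1.0 / (1.0 + np.exp(-eta))
    lik_given_b = np.prod(np.where(y[:, None] == 1, prob, 1.0 - prob), axis=0)
    return float(np.log(np.sum(weights * lik_given_b) / np.sqrt(np.pi)))

# One cluster with 4 binary observations and a single covariate.
y = np.array([1, 0, 1, 1])
x = np.array([0.5, -1.0, 1.2, 0.3])
ll = marginal_loglik(y, x, beta=0.8, sigma=1.0)
```

A quick sanity check on the quadrature: as sigma goes to zero the random effect vanishes, and the marginal log-likelihood must reduce to the ordinary fixed-effects logistic log-likelihood.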
Robust root clustering for linear uncertain systems using generalized Lyapunov theory
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1993-01-01
Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman & Jury (1981) using the generalized Lyapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.
Flexible analysis of digital PCR experiments using generalized linear mixed models.
Vynck, Matthijs; Vandesompele, Jo; Nijs, Nele; Menten, Björn; De Ganck, Ariane; Thas, Olivier
2016-09-01
The use of digital PCR for quantification of nucleic acids is rapidly growing. A major drawback remains the lack of flexible data analysis tools. Published analysis approaches are either tailored to specific problem settings or fail to take into account sources of variability. We propose the generalized linear mixed models framework as a flexible tool for analyzing a wide range of experiments. We also introduce a method for estimating reference gene stability to improve accuracy and precision of copy number and relative expression estimates. We demonstrate the usefulness of the methodology on a complex experimental setup. PMID:27551671
Location-scale cumulative odds models for ordinal data: a generalized non-linear model approach.
Cox, C
1995-06-15
Proportional odds regression models for multinomial probabilities based on ordered categories have been generalized in two somewhat different directions. Models having scale as well as location parameters for adjustment of boundaries (on an unobservable, underlying continuum) between categories have been employed in the context of ROC analysis. Partial proportional odds models, having different regression adjustments for different multinomial categories, have also been proposed. This paper considers a synthesis and further generalization of these two families. With use of a number of examples, I discuss and illustrate properties of this extended family of models. Emphasis is on the computation of maximum likelihood estimates of parameters, asymptotic standard deviations, and goodness-of-fit statistics with use of non-linear regression programs in standard statistical software such as SAS. PMID:7667560
Digit Span is (mostly) related linearly to general intelligence: Every extra bit of span counts.
Gignac, Gilles E; Weiss, Lawrence G
2015-12-01
Historically, Digit Span has been regarded as a relatively poor indicator of general intellectual functioning (g). In fact, Wechsler (1958) contended that beyond an average level of Digit Span performance, there was little benefit to possessing a greater memory span. Although Wechsler's position does not appear to have ever been tested empirically, it does appear to have become clinical lore. Consequently, the purpose of this investigation was to test Wechsler's contention on the Wechsler Adult Intelligence Scale-Fourth Edition normative sample (N = 1,800; ages: 16 - 69). Based on linear and nonlinear contrast analyses of means, as well as linear and nonlinear bifactor model analyses, all 3 Digit Span indicators (LDSF, LDSB, and LDSS) were found to exhibit primarily linear associations with FSIQ/g. Thus, the commonly held position that Digit Span performance beyond an average level is not indicative of greater intellectual functioning was not supported. The results are discussed in light of the increasing evidence across multiple domains that memory span plays an important role in intellectual functioning. PMID:25774642
Thermodynamic bounds and general properties of optimal efficiency and power in linear responses.
Jiang, Jian-Hua
2014-10-01
We study the optimal exergy efficiency and power for thermodynamic systems with an Onsager-type "current-force" relationship describing the linear response to external influences. We derive, in analytic forms, the maximum efficiency and optimal efficiency for maximum power for a thermodynamic machine described by an N×N symmetric Onsager matrix with arbitrary integer N. The figure of merit is expressed in terms of the largest eigenvalue of the "coupling matrix," which is solely determined by the Onsager matrix. Some simple but general relationships between the power and efficiency at the conditions for (i) maximum efficiency and (ii) optimal efficiency for maximum power are obtained. We show how the second law of thermodynamics bounds the optimal efficiency and the Onsager matrix and relate those bounds together. The maximum power theorem (Jacobi's law) is generalized to all thermodynamic machines with a symmetric Onsager matrix in the linear-response regime. We also discuss systems with an asymmetric Onsager matrix (such as systems under magnetic field) for a particular situation and show that the reversible limit of efficiency can be reached at finite output power. Cooperative effects are found to improve the figure of merit significantly in systems with multiply cross-correlated responses. Application to example systems demonstrates that the theory is helpful in guiding the search for high-performance materials and structures in energy research. PMID:25375457
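The second-law constraint and the eigenvalue construction mentioned above can be sketched numerically. The Onsager matrix below and the diagonal normalization used to form a dimensionless coupling matrix are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

# Hypothetical 2x2 symmetric Onsager matrix L relating fluxes to forces, J = L F.
# The second law requires L to be positive semidefinite; the "coupling matrix"
# is built from L, and its largest eigenvalue sets the figure of merit.
L = np.array([[2.0, 0.8],
              [0.8, 1.0]])

eigvals = np.linalg.eigvalsh(L)
assert np.all(eigvals >= 0), "second law violated: L not positive semidefinite"

# One common normalization (an assumption here): scale out the diagonal to
# obtain a coupling matrix with unit diagonal.
D = np.diag(1.0 / np.sqrt(np.diag(L)))
C = D @ L @ D                      # off-diagonal entries measure coupling strength
lam_max = np.linalg.eigvalsh(C).max()
print(f"largest coupling eigenvalue: {lam_max:.3f}")
```

Stronger cross-coupling pushes the largest eigenvalue toward its upper bound and, in the linear-response picture, improves the attainable figure of merit.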
Gao, Xieping; Li, Bodong; Xiao, Fen
2013-12-01
Multidimensional linear phase perfect reconstruction filter bank (MDLPPRFB) can be designed and implemented via lattice structure. The lattice structure for the MDLPPRFB with filter support N(MΞ) has been published by Muramatsu, where M is the decimation matrix, Ξ is a positive integer diagonal matrix, and N(N) denotes the set of integer vectors in the fundamental parallelepiped of the matrix N. Obviously, if Ξ is chosen to be other positive diagonal matrices instead of only positive integer ones, the corresponding lattice structure would provide more choices of filter banks, offering a better trade-off between filter support and filter performance. We call the resulting filter bank a generalized-support MDLPPRFB (GSMDLPPRFB). The lattice structure for the GSMDLPPRFB, however, cannot be designed by simply generalizing the process that Muramatsu employed; furthermore, the related theories needed to assist the design also differ from those used by Muramatsu. Such issues are addressed in this paper. To guide the design of the GSMDLPPRFB, necessary and sufficient conditions are established for a generalized-support multidimensional filter bank to be linear-phase. To determine when a GSMDLPPRFB exists, necessary conditions for its existence are derived in terms of the filter support and the symmetry polarity (i.e., the number of symmetric filters ns and antisymmetric filters na). Based on a process (different from the one Muramatsu used) that combines several polyphase matrices to construct the starting block, one of the core building blocks of the lattice structure, the lattice structure for the GSMDLPPRFB is developed and shown to be minimal. Additionally, the result in this paper includes Muramatsu's as a special case. PMID:23974625
Model based manipulator control
NASA Technical Reports Server (NTRS)
Petrosky, Lyman J.; Oppenheim, Irving J.
1989-01-01
The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.
The heritability of general cognitive ability increases linearly from childhood to young adulthood.
Haworth, C M A; Wright, M J; Luciano, M; Martin, N G; de Geus, E J C; van Beijsterveldt, C E M; Bartels, M; Posthuma, D; Boomsma, D I; Davis, O S P; Kovas, Y; Corley, R P; Defries, J C; Hewitt, J K; Olson, R K; Rhea, S-A; Wadsworth, S J; Iacono, W G; McGue, M; Thompson, L A; Hart, S A; Petrill, S A; Lubinski, D; Plomin, R
2010-11-01
Although common sense suggests that environmental influences increasingly account for individual differences in behavior as experiences accumulate during the course of life, this hypothesis has not previously been tested, in part because of the large sample sizes needed for an adequately powered analysis. Here we show for general cognitive ability that, to the contrary, genetic influence increases with age. The heritability of general cognitive ability increases significantly and linearly from 41% in childhood (9 years) to 55% in adolescence (12 years) and to 66% in young adulthood (17 years) in a sample of 11 000 pairs of twins from four countries, a larger sample than all previous studies combined. In addition to its far-reaching implications for neuroscience and molecular genetics, this finding suggests new ways of thinking about the interface between nature and nurture during the school years. Why, despite life's 'slings and arrows of outrageous fortune', do genetically driven differences increasingly account for differences in general cognitive ability? We suggest that the answer lies with genotype-environment correlation: as children grow up, they increasingly select, modify and even create their own experiences in part based on their genetic propensities. PMID:19488046
Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models
Elliott, Michael R.
2012-01-01
In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create “data driven” weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical. PMID:23275683
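The mechanics of inverse-probability weighting and weight trimming described above can be sketched on synthetic survey data. All quantities below (sample size, inclusion probabilities, the 95th-percentile cutoff) are illustrative assumptions, not choices from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: inclusion probabilities vary widely across units,
# and the outcome is correlated with inclusion (nonignorable-style setting).
n = 1000
p_incl = rng.uniform(0.02, 0.5, size=n)      # probability of inclusion
y = 10 + 5 * p_incl + rng.normal(0, 1, n)    # outcome correlated with inclusion

w = 1.0 / p_incl                             # inverse-probability weights

def trimmed_mean(y, w, cutoff):
    """Weight trimming: cap weights at `cutoff`, then take the weighted mean."""
    wt = np.minimum(w, cutoff)
    return np.sum(wt * y) / np.sum(wt)

full = trimmed_mean(y, w, np.inf)                 # fully weighted: unbiased, high variance
trim = trimmed_mean(y, w, np.quantile(w, 0.95))   # trimmed: lower variance, some bias
print(full, trim)
```

The Bayesian model-averaging estimator of the article sits between these two extremes, letting the data decide how much trimming the bias-variance trade-off warrants.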
A General Linear Relaxometry Model of R1 Using Imaging Data
Callaghan, Martina F; Helms, Gunther; Lutti, Antoine; Mohammadi, Siawoosh; Weiskopf, Nikolaus
2015-01-01
Purpose The longitudinal relaxation rate (R1) measured in vivo depends on the local microstructural properties of the tissue, such as macromolecular, iron, and water content. Here, we use whole brain multiparametric in vivo data and a general linear relaxometry model to describe the dependence of R1 on these components. We explore a) the validity of having a single fixed set of model coefficients for the whole brain and b) the stability of the model coefficients in a large cohort. Methods Maps of magnetization transfer (MT) and effective transverse relaxation rate (R2*) were used as surrogates for macromolecular and iron content, respectively. Spatial variations in these parameters reflected variations in underlying tissue microstructure. A linear model was applied to the whole brain, including gray/white matter and deep brain structures, to determine the global model coefficients. Synthetic R1 values were then calculated using these coefficients and compared with the measured R1 maps. Results The model's validity was demonstrated by correspondence between the synthetic and measured R1 values and by high stability of the model coefficients across a large cohort. Conclusion A single set of global coefficients can be used to relate R1, MT, and R2* across the whole brain. Our population study demonstrates the robustness and stability of the model. Magn Reson Med 73:1309–1314, 2015. © 2014 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. PMID:24700606
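The workflow above, fitting one global coefficient set and then generating synthetic R1 from MT and R2* maps, can be sketched on synthetic voxel data. The coefficient values and map ranges below are hypothetical, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic voxel data (hypothetical units): MT and R2* as surrogates for
# macromolecular and iron content; R1 generated from an assumed linear model
# R1 = a + b*MT + c*R2* plus noise.
n_vox = 5000
MT  = rng.uniform(0.5, 2.0, n_vox)
R2s = rng.uniform(10.0, 40.0, n_vox)
a, b, c = 0.3, 0.35, 0.006            # illustrative "global" coefficients
R1 = a + b * MT + c * R2s + rng.normal(0, 0.01, n_vox)

# Fit one global coefficient set for the whole "brain" by least squares.
X = np.column_stack([np.ones(n_vox), MT, R2s])
coef, *_ = np.linalg.lstsq(X, R1, rcond=None)

# Synthetic R1 from the fitted global coefficients vs. "measured" R1.
R1_synth = X @ coef
r = np.corrcoef(R1_synth, R1)[0, 1]
print(coef, r)
```

Close correspondence between synthetic and measured R1 (high correlation, stable coefficients across subjects) is exactly the validity check the study performs.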
Linear stability of a generalized multi-anticipative car following model with time delays
NASA Astrophysics Data System (ADS)
Ngoduy, D.
2015-05-01
In traffic flow, multi-anticipative driving behavior describes the reaction of a vehicle to the driving behavior of many vehicles in front, whereas the time delay is defined as a physiological parameter reflecting the period of time between perceiving a stimulus of leading vehicles and performing a relevant action such as acceleration or deceleration. A lot of effort has been undertaken to understand the effects of either multi-anticipative driving behavior or time delays on traffic flow dynamics. This paper is a first attempt to analytically investigate the dynamics of a generalized class of car-following models with multi-anticipative driving behavior and different time delays associated with such multi-anticipations. To this end, the paper derives the (long-wavelength) linear stability condition of such a car-following model and studies how the combination of different choices of multi-anticipations and time delays affects the instability of traffic flow with respect to a small perturbation. It is found that the effect of delays and multi-anticipations is model-dependent; that is, the destabilization effect of delays is suppressed by the stabilization effect of multi-anticipations. Moreover, the weight factor reflecting the distribution of the driver's sensing of the relative gaps of leading vehicles affects the linear stability condition of traffic flow less than the weight factor for the relative speed of those leading vehicles.
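A minimal special case of such a long-wavelength stability analysis can be sketched with the classical optimal-velocity (OV) model, a single leader and no delay, where the condition reduces to the textbook criterion a > 2 V'(h*). This is an assumed special case for illustration, not the paper's generalized multi-anticipative model:

```python
import numpy as np

# Special case sketch (an assumption, not the paper's general model): the
# optimal-velocity car-following model dv/dt = a * (V(h) - v), single leader,
# no delay.  Its long-wavelength linear stability condition is a > 2 V'(h*).

def V(h, v_max=30.0, h_c=25.0, w=10.0):
    """Hypothetical optimal-velocity function (tanh form)."""
    return 0.5 * v_max * (np.tanh((h - h_c) / w) + 1.0)

def is_linearly_stable(a, h_star, eps=1e-6):
    dV = (V(h_star + eps) - V(h_star - eps)) / (2 * eps)  # numerical V'(h*)
    return a > 2.0 * dV

# At h* = h_c the OV slope is steepest (v_max / (2 w) = 1.5 s^-1 here),
# so stability requires sensitivity a > 3.0.
print(is_linearly_stable(a=2.0, h_star=25.0))  # False: unstable regime
print(is_linearly_stable(a=3.5, h_star=25.0))  # True: stable regime
```

Multi-anticipation and delays, as the paper shows, shift this boundary in model-dependent ways rather than simply rescaling it.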
Generalized Linear Models for Identifying Predictors of the Evolutionary Diffusion of Viruses
Beard, Rachel; Magee, Daniel; Suchard, Marc A.; Lemey, Philippe; Scotch, Matthew
2014-01-01
Bioinformatics and phylogeography models use viral sequence data to analyze spread of epidemics and pandemics. However, few of these models have included analytical methods for testing whether certain predictors such as population density, rates of disease migration, and climate are drivers of spatial spread. Understanding the specific factors that drive spatial diffusion of viruses is critical for targeting public health interventions and curbing spread. In this paper we describe the application and evaluation of a model that integrates demographic and environmental predictors with molecular sequence data. The approach parameterizes evolutionary spread of RNA viruses as a generalized linear model (GLM) within a Bayesian inference framework using Markov chain Monte Carlo (MCMC). We evaluate this approach by reconstructing the spread of H5N1 in Egypt while assessing the impact of individual predictors on evolutionary diffusion of the virus. PMID:25717395
Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem
NASA Technical Reports Server (NTRS)
Lu, Huei-Iin; Robertson, Franklin R.
1999-01-01
A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can alternatively be solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions that best fit the observed wind data.
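The distinction between the over- and under-determined regimes can be sketched with the pseudo-inverse on small random systems (illustrative matrices, not the spectral LBE itself): the OD case leaves a residual, while the UD case is solved exactly but non-uniquely, with the pseudo-inverse selecting the minimum-norm solution:

```python
import numpy as np

rng = np.random.default_rng(2)

# Over-determined system (more equations than unknowns): pinv gives the
# least-squares solution; a residual generally remains.
A_od = rng.normal(size=(8, 3))
b_od = rng.normal(size=8)
x_od = np.linalg.pinv(A_od) @ b_od
residual = np.linalg.norm(A_od @ x_od - b_od)

# Under-determined system (fewer equations than unknowns): pinv picks the
# minimum-norm solution out of infinitely many exact ones.
A_ud = rng.normal(size=(3, 8))
b_ud = rng.normal(size=3)
x_ud = np.linalg.pinv(A_ud) @ b_ud
exact_residual = np.linalg.norm(A_ud @ x_ud - b_ud)

print(residual, exact_residual)
```

The non-uniqueness of the UD case is why the abstract reports differences between the pseudo-inverse and iterative solutions there and nowhere else.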
Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model
Taylor, Douglas J.; Muller, Keith E.
2013-01-01
The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting uncertainty associated with such point estimates. Previous authors studied an asymptotically unbiased method of obtaining confidence intervals for noncentrality and power of the general linear univariate model in this setting. We provide exact confidence intervals for noncentrality, power, and sample size. Such confidence intervals, particularly one-sided intervals, help in planning a future study and in evaluating existing studies. PMID:24039272
A Bayesian approach for inducing sparsity in generalized linear models with multi-category response
2015-01-01
Background The dimension and complexity of high-throughput gene expression data create many challenges for downstream analysis. Several approaches exist to reduce the number of variables with respect to small sample sizes. In this study, we utilized the Generalized Double Pareto (GDP) prior to induce sparsity in a Bayesian Generalized Linear Model (GLM) setting. The approach was evaluated using a publicly available microarray dataset containing 99 samples corresponding to four different prostate cancer subtypes. Results A hierarchical Sparse Bayesian GLM using GDP prior (SBGG) was developed to take into account the progressive nature of the response variable. We obtained an average overall classification accuracy between 82.5% and 94%, which was higher than Support Vector Machine, Random Forest or a Sparse Bayesian GLM using double exponential priors. Additionally, SBGG outperforms the other 3 methods in correctly identifying pre-metastatic stages of cancer progression, which can prove extremely valuable for therapeutic and diagnostic purposes. Importantly, using Geneset Cohesion Analysis Tool, we found that the top 100 genes produced by SBGG had an average functional cohesion p-value of 2.0E-4 compared to 0.007 to 0.131 produced by the other methods. Conclusions Using GDP in a Bayesian GLM model applied to cancer progression data results in better subclass prediction. In particular, the method identifies pre-metastatic stages of prostate cancer with substantially better accuracy and produces more functionally relevant gene sets. PMID:26423345
SPARSE GENERALIZED FUNCTIONAL LINEAR MODEL FOR PREDICTING REMISSION STATUS OF DEPRESSION PATIENTS
Liu, Yashu; Nie, Zhi; Zhou, Jiayu; Farnum, Michael; Narayan, Vaibhav A; Wittenberg, Gayle; Ye, Jieping
2014-01-01
Complex diseases such as major depression affect people over time in complicated patterns. Longitudinal data analysis is thus crucial for understanding and prognosis of such diseases and has received considerable attention in the biomedical research community. Traditional classification and regression methods have been commonly applied in a simple (controlled) clinical setting with a small number of time points. However, these methods cannot be easily extended to the more general setting for longitudinal analysis, as they are not inherently built for time-dependent data. Functional regression, in contrast, is capable of identifying the relationship between features and outcomes along with time information by assuming features and/or outcomes as random functions over time rather than independent random variables. In this paper, we propose a novel sparse generalized functional linear model for the prediction of treatment remission status of the depression participants with longitudinal features. Compared to traditional functional regression models, our model enables high-dimensional learning, smoothness of functional coefficients, longitudinal feature selection and interpretable estimation of functional coefficients. Extensive experiments have been conducted on the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) data set and the results show that the proposed sparse functional regression method achieves significantly higher prediction power than existing approaches. PMID:24297562
Generalized linear transport theory in dilute neutral gases and dispersion relation of sound waves.
Bendib, A; Bendib-Kalache, K; Gombert, M M; Imadouchene, N
2006-10-01
The transport processes in dilute neutral gases are studied by using the kinetic equation with a collision relaxation model that meets all conservation requirements. The kinetic equation is solved keeping the whole anisotropic part of the distribution function with the use of continued fractions. The conservation laws of the collision operator are taken into account with projection operator techniques. The generalized heat flux and stress tensor are calculated in the linear approximation, as functions of the lower moments, i.e., the density, the flow velocity and the temperature. The results obtained are valid for arbitrary collision frequency ν with respect to kv_t and the characteristic frequency ω, where k⁻¹ is the characteristic length scale of the system and v_t is the thermal velocity. The transport coefficients constitute accurate closure relations for the generalized hydrodynamic equations. An application to the dispersion and the attenuation of sound waves in the whole collisionality regime is presented. The results obtained are in very good agreement with the experimental data. PMID:17155048
Extracting H I cosmological signal with generalized needlet internal linear combination
NASA Astrophysics Data System (ADS)
Olivari, L. C.; Remazeilles, M.; Dickinson, C.
2016-03-01
H I intensity mapping is a new observational technique to map fluctuations in the large-scale structure of matter using the 21 cm emission line of atomic hydrogen (H I). Sensitive H I intensity mapping experiments have the potential to detect Baryon Acoustic Oscillations at low redshifts (z ≲ 1) in order to constrain the properties of dark energy. Observations of the H I signal will be contaminated by instrumental noise and, more significantly, by astrophysical foregrounds, such as Galactic synchrotron emission, which is at least four orders of magnitude brighter than the H I signal. Foreground cleaning is recognized as one of the key challenges for future radio astronomy surveys. We study the ability of the Generalized Needlet Internal Linear Combination (GNILC) method to subtract radio foregrounds and to recover the cosmological H I signal for a general H I intensity mapping experiment. The GNILC method is a new technique that uses both frequency and spatial information to separate the components of the observed data. Our results show that the method is robust to the complexity of the foregrounds. For simulated radio observations including H I emission, Galactic synchrotron, Galactic free-free, radio sources, and 0.05 mK thermal noise, we find that the GNILC method can reconstruct the H I power spectrum for multipoles 30 < ℓ < 150 with 6 per cent accuracy on 50 per cent of the sky for a redshift z ˜ 0.25.
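The core of any ILC-type method, which GNILC refines with needlet (frequency plus spatial) localization, is a variance-minimizing weighted combination of frequency channels with unit response to the signal's spectrum. A toy flat ILC on synthetic maps (hypothetical channel count, spectra, and noise levels, far simpler than the paper's simulations) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy internal linear combination (ILC).  The H I signal is assumed to have a
# known frequency spectrum `a`; a smooth, bright foreground and thermal noise
# contaminate the observed channel maps.
n_freq, n_pix = 6, 4000
a = np.ones(n_freq)                            # H I response per channel (toy choice)
signal = rng.normal(0, 1.0, n_pix)             # cosmological signal per pixel
fg_spectrum = np.linspace(5.0, 2.0, n_freq)    # smooth foreground spectrum
fg = np.outer(fg_spectrum, rng.normal(0, 10.0, n_pix))
noise = rng.normal(0, 0.1, (n_freq, n_pix))
data = np.outer(a, signal) + fg + noise        # observed maps, one row per channel

# ILC weights minimize the output variance subject to w @ a = 1 (unit
# response to the signal): w = C^{-1} a / (a^T C^{-1} a).
C = np.cov(data)
Cinv_a = np.linalg.solve(C, a)
w = Cinv_a / (a @ Cinv_a)
recovered = w @ data

corr = np.corrcoef(recovered, signal)[0, 1]
print(corr)
```

Because the foreground spectrum differs from the signal spectrum, the weights suppress it strongly even though it is orders of magnitude brighter, which is the same leverage GNILC exploits with localized, data-estimated covariances.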
Unification of the general non-linear sigma model and the Virasoro master equation
Boer, J. de; Halpern, M.B.
1997-06-01
The Virasoro master equation describes a large set of conformal field theories known as the affine-Virasoro constructions, in the operator algebra (affine Lie algebra) of the WZW model, while the Einstein equations of the general non-linear sigma model describe another large set of conformal field theories. This talk summarizes recent work which unifies these two sets of conformal field theories, together with a presumably large class of new conformal field theories. The basic idea is to consider spin-two operators of the form L_ij ∂x^i ∂x^j in the background of a general sigma model. The requirement that these operators satisfy the Virasoro algebra leads to a set of equations called the unified Einstein-Virasoro master equation, in which the spin-two spacetime field L_ij couples to the usual spacetime fields of the sigma model. The one-loop form of this unified system is presented, and some of its algebraic and geometric properties are discussed.
NASA Astrophysics Data System (ADS)
Jellison, Gerald E., Jr.; Griffiths, C. Owen; Holcomb, David E.; Rouleau, Christopher M.
2002-09-01
The two-modulator generalized ellipsometer (2-MGE) is a spectroscopic polarization-sensitive optical instrument that is sensitive to both standard ellipsometric parameters from isotropic samples and cross-polarization terms arising from anisotropic samples. In reflection mode, the 2-MGE has been used to measure the complex dielectric functions of several uniaxial crystals, including TiO2, ZnO, and BiI3. The 2-MGE can also be used in transmission mode, in which the complete Mueller matrix of a sample can be determined (using 4-zone measurements). If the sample is a linear diattenuator and retarder, then only a single zone is required to determine the sample retardation, diattenuation, the principal axis direction, and the depolarization. These measurements have been performed in two different modes: 1) spectroscopic, where the current wavelength limits are 260 to 850 nm, and 2) spatially resolved (current resolution ~30-50 microns) at a single wavelength. The latter mode results in retardation, linear diattenuation, and principal axis direction "maps" of the sample. Two examples are examined in this paper. First, a simple Polaroid film polarizer is measured, where it is seen that the device behaves nearly ideally in its design wavelength range (visible), but acts more as a retarder in the infrared. Second, congruently grown LiNbO3 is examined under bias. These results show that there are significant variations in the electric field-Pockels coefficient product within the material. Spectroscopic measurements are used to determine the dispersion of the r22 Pockels coefficient.
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
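The numerical determination of a linear systems model from nonlinear equations of motion can be sketched with central finite differences about a trim point. The damped pendulum below is a stand-in for the user-supplied model, not anything from LINEAR itself:

```python
import numpy as np

# Numerical linearization: given nonlinear equations of motion xdot = f(x, u),
# build state-space matrices A = df/dx and B = df/du by central differences
# about a trim point (x0, u0).

def f(x, u, g=9.81, L=1.0, k=0.1):
    """Toy nonlinear model: a damped pendulum with torque input."""
    theta, omega = x
    return np.array([omega, -(g / L) * np.sin(theta) - k * omega + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):                       # perturb each state variable
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                       # perturb each control variable
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.array([0.0, 0.0]), u0=np.array([0.0]))
print(A)   # analytic answer: [[0, 1], [-9.81, -0.1]]
print(B)   # analytic answer: [[0], [1]]
```

An observation matrix C is obtained the same way by differencing the measurement equations, which is how LINEAR assembles the full state and observation model.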
MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems
Young, D.M.; Chen, J.Y.
1994-12-31
The authors are concerned with the solution of the linear system (1): Au = b, where A is a real square nonsingular matrix which is large, sparse and nonsymmetric. They consider the use of Krylov subspace methods. They first choose an initial approximation u^(0) to the solution ū = A⁻¹b of (1). They also choose an auxiliary nonsingular matrix Z. For n = 1, 2, … they determine u^(n) such that u^(n) − u^(0) ∈ K_n(r^(0), A), where K_n(r^(0), A) is the (Krylov) subspace spanned by the Krylov vectors r^(0), Ar^(0), …, A^(n−1)r^(0) and where r^(0) = b − Au^(0). If ZA is SPD they also require that (u^(n) − ū, ZA(u^(n) − ū)) be minimized. If, on the other hand, ZA is not SPD, then they require that the Galerkin condition, (Zr^(n), v) = 0, be satisfied for all v ∈ K_n(r^(0), A), where r^(n) = b − Au^(n). In this paper the authors consider a generalization of GMRES. This generalized method, which they refer to as MGMRES, is very similar to GMRES except that they let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.
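For reference, the plain GMRES iteration that MGMRES generalizes can be sketched in a few lines: build the Krylov basis by the Arnoldi process, then minimize the residual over that subspace via a small Hessenberg least-squares problem. This is a textbook sketch of standard GMRES (with u^(0) = 0), not an implementation of MGMRES:

```python
import numpy as np

def gmres(A, b, m=None, tol=1e-10):
    """Minimal unrestarted GMRES: minimize ||b - A u|| over K_m(r0, A), u0 = 0."""
    n = len(b)
    m = m or n
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):                           # Arnoldi process
        v = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        e1 = np.zeros(j + 2); e1[0] = beta       # least-squares on the Hessenberg
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if H[j + 1, j] < tol:                    # happy breakdown: exact solve
            return Q[:, :j + 1] @ y
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :m] @ y

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 30)) + 5 * np.eye(30)   # nonsymmetric, well conditioned
b = rng.normal(size=30)
u = gmres(A, b)
res = np.linalg.norm(A @ u - b)
print(res)
```

MGMRES replaces the plain residual minimization with a Z-weighted condition, Z = A^T Y, which reduces to ordinary GMRES when Y is the identity.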
Shin, Yongyun; Raudenbush, Stephen W
2013-01-01
This article extends single-level missing data methods to efficient estimation of a Q-level nested hierarchical general linear model given ignorable missing data with a general missing pattern at any of the Q levels. The key idea is to reexpress a desired hierarchical model as the joint distribution of all variables including the outcome that are subject to missingness, conditional on all of the covariates that are completely observed and to estimate the joint model under normal theory. The unconstrained joint model, however, identifies extraneous parameters that are not of interest in subsequent analysis of the hierarchical model and that rapidly multiply as the number of levels, the number of variables subject to missingness, and the number of random coefficients grow. Therefore, the joint model may be extremely high dimensional and difficult to estimate well unless constraints are imposed to avoid the proliferation of extraneous covariance components at each level. Furthermore, the over-identified hierarchical model may produce considerably biased inferences. The challenge is to represent the constraints within the framework of the Q-level model in a way that is uniform without regard to Q; in a way that facilitates efficient computation for any number of Q levels; and also in a way that produces unbiased and efficient analysis of the hierarchical model. Our approach yields Q-step recursive estimation and imputation procedures whose qth-step computation involves only level-q data given higher-level computation components. We illustrate the approach with a study of the growth in body mass index analyzing a national sample of elementary school children. PMID:24077621
Generalized Jeans' Escape of Pick-Up Ions in Quasi-Linear Relaxation
NASA Technical Reports Server (NTRS)
Moore, T. E.; Khazanov, G. V.
2011-01-01
Jeans escape is a well-validated formulation of upper atmospheric escape that we have generalized to estimate plasma escape from ionospheres. It involves the computation of the parts of particle velocity space that are unbound by the gravitational potential at the exobase, followed by a calculation of the flux carried by such unbound particles as they escape from the potential well. To generalize this approach for ions, we superposed an electrostatic ambipolar potential and a centrifugal potential, for motions across and along a divergent magnetic field. We then considered how the presence of superthermal electrons, produced by precipitating auroral primary electrons, controls the ambipolar potential. We also showed that the centrifugal potential plays a small role in controlling the mass escape flux from the terrestrial ionosphere. We then applied the transverse ion velocity distribution produced when ions, picked up by supersonic (i.e., auroral) ionospheric convection, relax via quasi-linear diffusion, as estimated for cometary comas [1]. The results provide a theoretical basis for observed ion escape response to electromagnetic and kinetic energy sources. They also suggest that super-sonic but sub-Alfvenic flow, with ion pick-up, is a unique and important regime of ion-neutral coupling, in which plasma wave-particle interactions are driven by ion-neutral collisions at densities for which the collision frequency falls near or below the gyro-frequency. As another possible illustration of this process, the heliopause ribbon discovered by the IBEX mission involves interactions between the solar wind ions and the interstellar neutral gas, in a regime that may be analogous [2].
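The classical neutral Jeans flux that this work generalizes has a closed form: the part of a Maxwellian above the escape speed carries a flux Φ = (n v_t / 2√π)(1 + λ)e^(−λ), where λ = v_esc²/v_t² is the Jeans parameter. A quick numerical sketch (the exobase values below are illustrative assumptions, not the paper's):

```python
import math

# Classical Jeans escape flux from an exobase, the formulation the abstract
# generalizes to ions by adding ambipolar and centrifugal potentials.
# Phi = (n * v_t / (2*sqrt(pi))) * (1 + lam) * exp(-lam),
# with thermal speed v_t = sqrt(2 k T / m) and Jeans parameter
# lam = v_esc^2 / v_t^2 (depth of the potential well in thermal units).

k_B = 1.380649e-23     # Boltzmann constant, J/K

def jeans_flux(n, T, m, v_esc):
    v_t = math.sqrt(2 * k_B * T / m)
    lam = (v_esc / v_t) ** 2
    return n * v_t / (2 * math.sqrt(math.pi)) * (1 + lam) * math.exp(-lam)

# Atomic hydrogen at a hypothetical terrestrial exobase: T ~ 1000 K,
# n ~ 1e11 m^-3, v_esc ~ 10.8 km/s near 500 km altitude.
m_H = 1.674e-27        # kg
flux = jeans_flux(n=1e11, T=1000.0, m=m_H, v_esc=10.8e3)
print(f"{flux:.3e} atoms m^-2 s^-1")
```

For ions, the generalization amounts to replacing the gravitational well in λ by the sum of gravitational, ambipolar, and centrifugal potentials, so a superthermal-electron-enhanced ambipolar field lowers λ and raises the escape flux.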
NASA Astrophysics Data System (ADS)
Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.
2015-04-01
Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.
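The gamma-family GLM with a log link at the heart of this approach can be sketched by hand with iteratively reweighted least squares (IRLS) on toy data. The data below are synthetic stand-ins, not the PHAT or SDSS catalogues, and this is a from-scratch sketch rather than the authors' published package:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy gamma GLM with log link: E[y] = exp(X beta), Var[y] ~ mu^2.
n = 2000
X = np.column_stack([np.ones(n), rng.normal(0, 1, n), rng.normal(0, 1, n)])
beta_true = np.array([0.5, 0.3, -0.2])
mu = np.exp(X @ beta_true)
shape = 10.0
y = rng.gamma(shape, mu / shape)           # gamma noise with mean mu

# IRLS for the gamma/log-link GLM (the working weights are constant here).
beta = np.zeros(3)
for _ in range(25):
    eta = X @ beta
    m = np.exp(eta)
    z = eta + (y - m) / m                  # working response
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)

print(beta)        # should be close to beta_true
```

A strictly positive, continuous response like redshift is exactly where the gamma family is natural, which is the paper's argument for preferring it over Gaussian regression.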
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
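The substitution at the core of the method can be sketched for a Poisson GLM with exponential nonlinearity: the costly data sum over exp(x_i·θ) is replaced by n times its expectation over the covariate distribution, which is closed-form for Gaussian covariates. The data and parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Poisson GLM log-likelihood (up to a theta-independent constant):
#   exact:    sum_i [ y_i x_i.theta - exp(x_i.theta) ]
# expected:   sum_i y_i x_i.theta - n * E[exp(x.theta)]
# For x ~ N(0, S), E[exp(x.theta)] = exp(theta' S theta / 2) in closed form.
n, d = 100000, 5
S = np.eye(d)
X = rng.normal(size=(n, d))                  # Gaussian covariates, cov = S
theta = rng.normal(0, 0.2, d)
y = rng.poisson(np.exp(X @ theta))

def exact_ll(theta):
    eta = X @ theta
    return y @ eta - np.exp(eta).sum()       # O(n) exp evaluations

def expected_ll(theta):
    eta = X @ theta
    return y @ eta - n * np.exp(theta @ S @ theta / 2.0)   # one exp evaluation

e, a = exact_ll(theta), expected_ll(theta)
rel = abs(e - a) / abs(e)
print(e, a, rel)
```

The expensive nonlinear sum collapses to a single closed-form term, which is where the orders-of-magnitude savings in maximization, marginal-likelihood, and MCMC computations come from.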
Master equation solutions in the linear regime of characteristic formulation of general relativity
NASA Astrophysics Data System (ADS)
Cedeño M., C. E.; de Araujo, J. C. N.
2015-12-01
From the field equations in the linear regime of the characteristic formulation of general relativity, Bishop, for a Schwarzschild background, and Mädler, for a Minkowski background, were able to show that it is possible to derive a fourth-order ordinary differential equation, called the master equation, for the J metric variable of the Bondi-Sachs metric. Once β, another Bondi-Sachs potential, is obtained from the field equations, and J is obtained from the master equation, the other metric variables are solved by directly integrating the rest of the field equations. In the past, the master equation was solved for the first multipolar terms, for both the Minkowski and Schwarzschild backgrounds. Mädler also recently reported a generalisation of the exact solutions to the linearised field equations when a Minkowski background is considered, expressing the master equation's family of vacuum solutions in terms of Bessel functions of the first and second kind. Here, we report new solutions to the master equation for any multipolar moment l, with and without matter sources, in terms of first-kind Bessel functions alone for the Minkowski background, and in terms of confluent Heun functions (generalised hypergeometric) for the radiative (nonradiative) case in the Schwarzschild background. We particularize our families of solutions to the known cases for l = 2 reported previously in the literature and find complete agreement, showing the robustness of our results.
The overlooked potential of Generalized Linear Models in astronomy, I: Binomial regression
NASA Astrophysics Data System (ADS)
de Souza, R. S.; Cameron, E.; Killedar, M.; Hilbe, J.; Vilalta, R.; Maio, U.; Biffi, V.; Ciardi, B.; Riggs, J. D.
2015-09-01
Revealing hidden patterns in astronomical data is often the path to fundamental scientific breakthroughs; meanwhile the complexity of scientific enquiry increases as more subtle relationships are sought. Contemporary data analysis problems often elude the capabilities of classical statistical techniques, suggesting the use of cutting edge statistical methods. In this light, astronomers have overlooked a whole family of statistical techniques for exploratory data analysis and robust regression, the so-called Generalized Linear Models (GLMs). In this paper-the first in a series aimed at illustrating the power of these methods in astronomical applications-we elucidate the potential of a particular class of GLMs for handling binary/binomial data, the so-called logit and probit regression techniques, from both a maximum likelihood and a Bayesian perspective. As a case in point, we present the use of these GLMs to explore the conditions of star formation activity and metal enrichment in primordial minihaloes from cosmological hydro-simulations including detailed chemistry, gas physics, and stellar feedback. We predict that for a dark mini-halo with metallicity ≈ 1.3 × 10-4Z⨀, an increase of 1.2 × 10-2 in the gas molecular fraction, increases the probability of star formation occurrence by a factor of 75%. Finally, we highlight the use of receiver operating characteristic curves as a diagnostic for binary classifiers, and ultimately we use these to demonstrate the competitive predictive performance of GLMs against the popular technique of artificial neural networks.
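A minimal maximum-likelihood logit fit of the kind discussed above can be written directly with Newton/IRLS. The predictor below is an invented stand-in (e.g. a scaled gas molecular fraction), not the simulation data used in the paper:

```python
import numpy as np

def logit_irls(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton/IRLS."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))  # fitted probabilities
        W = mu * (1.0 - mu)                     # IRLS weights
        z = X @ beta + (y - mu) / W             # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Invented predictor; the binary outcome stands in for
# "star formation occurred" in a given halo.
rng = np.random.default_rng(2)
x = rng.standard_normal(3000)
p = 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * x)))
y = rng.binomial(1, p)

beta_hat = logit_irls(x.reshape(-1, 1), y)
print(beta_hat)  # near the generating values (-1.0, 2.0)
```

Swapping the logistic inverse link for the normal CDF would give the probit variant the abstract also discusses; the IRLS skeleton is unchanged apart from the link, its derivative, and the weights.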
NASA Astrophysics Data System (ADS)
García-Díaz, J. Carlos
2009-11-01
Fault detection and diagnosis is an important problem in process engineering. Process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is an important problem in continuous hot-dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures, and the bath level. The entire dataset, consisting of 48 galvanized steel coils, was divided into two sets: the first, a training set of 25 conforming coils, and the second, a set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that the logistic generalized linear models do provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
A general parallel sparse-blocked matrix multiply for linear scaling SCF theory
NASA Astrophysics Data System (ADS)
Challacombe, Matt
2000-06-01
A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with “separation”. Load balance is achieved by solving the bin packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.
Sensitivity Analysis of Linear Elastic Cracked Structures Using Generalized Finite Element Method
NASA Astrophysics Data System (ADS)
Pal, Mahendra Kumar; Rajagopal, Amirtham
2014-09-01
In this work, a sensitivity analysis of linear elastic cracked structures using two-scale Generalized Finite Element Method (GFEM) is presented. The method is based on computation of material derivatives, mutual potential energies, and direct differentiation. In a computational setting, the discrete form of the mutual potential energy release rate is simple and easy to calculate, as it only requires the multiplication of the displacement vectors and stiffness sensitivity matrices. By judiciously choosing the velocity field, the method only requires displacement response in a sub-domain close to the crack tip, thus making the method computationally efficient. The method thus requires an exact computation of displacement response in a sub-domain close to the crack tip. To this end, in this study we have used a two-scale GFEM for sensitivity analysis. GFEM is based on the enrichment of the classical finite element approximation. These enrichment functions incorporate the discontinuity response in the domain. Three numerical examples which comprise mode-I and mixed mode deformations are presented to evaluate the accuracy of the fracture parameters calculated by the proposed method.
Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G
2016-09-01
A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as a dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance of modelled results against the measured data was achieved for the model restricted to air temperatures above 25°C, compared with the model considering all ranges of air temperature and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and the methodology could be adopted for other cities to predict PM10 concentrations when these data are not available from measurements at air quality monitoring stations or by other acquisition means. PMID:26839052
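A Poisson GLM with a log link of the kind this methodology describes can be sketched with IRLS; unlike the gamma/log case, the weights here equal the fitted means. The pollutant and temperature data below are synthetic stand-ins for the Portuguese monitoring-network measurements:

```python
import numpy as np

def poisson_glm_log(X, y, n_iter=25):
    """Poisson GLM with log link via IRLS (weights W = mu)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1e-9)           # start at the null model
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu            # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

# Synthetic stand-ins for one gaseous precursor and air temperature;
# the counts play the role of PM10-like concentrations.
rng = np.random.default_rng(3)
no2 = rng.uniform(0.0, 1.0, 4000)
temp = rng.uniform(5.0, 35.0, 4000)
y = rng.poisson(np.exp(2.0 + 0.8 * no2 + 0.02 * temp))

beta_hat = poisson_glm_log(np.column_stack([no2, temp]), y)
print(beta_hat)  # near the generating values (2.0, 0.8, 0.02)
```

The temperature-split models in the abstract would correspond to fitting this same specification separately to the subsets with temp above and below 25°C.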
Maximal freedom at minimum cost: linear large-scale structure in general modifications of gravity
Bellini, Emilio; Sawicki, Ignacy E-mail: ignacy.sawicki@outlook.com
2014-07-01
We present a turnkey solution, ready for implementation in numerical codes, for the study of linear structure formation in general scalar-tensor models involving a single universally coupled scalar field. We show that the totality of cosmological information on the gravitational sector can be compressed, without any redundancy, into five independent and arbitrary functions of time and one constant. These describe physical properties of the universe: the observable background expansion history, the fractional matter density today, and four functions of time describing the properties of the dark energy. We show that two of those dark-energy property functions control the existence of anisotropic stress, and the other two control dark-energy clustering; both effects can be scale-dependent. All these properties can in principle be measured, but no information on the underlying theory of acceleration beyond this can be obtained. We present a translation between popular models of late-time acceleration (e.g. perfect fluids, f(R), kinetic gravity braiding, galileons), as well as the effective field theory framework, and our formulation. In this way, implementing this formulation numerically would give a single tool which could consistently test the majority of models of late-time acceleration proposed heretofore.
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.
2014-05-15
Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
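The contrast between a constant-rate (linear) model and a power/log-linear fit of a shrinking volume can be illustrated on one hypothetical trajectory. All numbers below are invented, and the log-linear fit is only a stand-in for the paper's power-fit general linear model; the functional model is not reproduced:

```python
import numpy as np

# One hypothetical tumor shrinking smoothly over a 30-fraction course (cc),
# observed with small multiplicative "contouring" noise.
rng = np.random.default_rng(4)
t = np.arange(30, dtype=float)
v = 40.0 * np.exp(-0.03 * t) * rng.lognormal(0.0, 0.02, size=30)

# Model 1: constant-rate (linear) volume change.
v_lin = np.polyval(np.polyfit(t, v, 1), t)

# Model 2: a log-linear fit, i.e. a power/exponential relation between the
# daily and initial volumes (a stand-in for the paper's power-fit model).
v_pow = np.exp(np.polyval(np.polyfit(t, np.log(v), 1), t))

rmse_lin = float(np.sqrt(np.mean((v - v_lin) ** 2)))
rmse_pow = float(np.sqrt(np.mean((v - v_pow) ** 2)))
print(rmse_lin, rmse_pow)  # the log-linear model tracks the curvature better
```

The linear model misses the curvature of the decaying trajectory, which is the same qualitative gap the paper's comparison against the linear reference model exposes.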
An assessment of estimation methods for generalized linear mixed models with binary outcomes
Capanu, Marinela; Gönen, Mithat; Begg, Colin B.
2013-01-01
Two main classes of methodology have been developed for addressing the analytical intractability of generalized linear mixed models (GLMMs): likelihood-based methods and Bayesian methods. Likelihood-based methods such as the penalized quasi-likelihood approach have been shown to produce biased estimates, especially for binary clustered data with small cluster sizes. More recent methods using adaptive Gaussian quadrature perform well but can be overwhelmed by problems with large numbers of random effects, and efficient algorithms to better handle these situations have not yet been integrated in standard statistical packages. Bayesian methods, though they have good frequentist properties when the model is correct, are known to be computationally intensive and also require specialized code, limiting their use in practice. In this article we introduce a modification of the hybrid approach of Capanu and Begg [1] as a bridge between the likelihood-based and Bayesian approaches by employing Bayesian estimation for the variance components followed by Laplacian estimation for the regression coefficients. We investigate its performance as well as that of several likelihood-based methods in the setting of GLMMs with binary outcomes. We apply the methods to three datasets and conduct simulations to illustrate their properties. Simulation results indicate that for moderate to large numbers of observations per random effect, adaptive Gaussian quadrature and the Laplacian approximation are very accurate, with adaptive Gaussian quadrature preferable as the number of observations per random effect increases. The hybrid approach is overall similar to the Laplace method, and it can be superior for data with very sparse random effects. PMID:23839712
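The Gaussian quadrature mentioned above integrates the random effect out of each cluster's likelihood. A non-adaptive Gauss-Hermite sketch for a single cluster of a random-intercept logistic GLMM (toy data; real adaptive implementations additionally center and scale the nodes per cluster):

```python
import numpy as np

def cluster_marginal_loglik(y, eta, sigma, n_nodes=20):
    """Marginal log-likelihood of one cluster in a random-intercept
    logistic GLMM: b ~ N(0, sigma^2), logit P(y=1) = eta + b, with b
    integrated out by Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes              # change of variables
    p = 1.0 / (1.0 + np.exp(-(eta[:, None] + b[None, :])))
    lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return float(np.log(np.sum(weights * lik) / np.sqrt(np.pi)))

# One small, invented cluster.
y = np.array([1, 0, 1, 1])
eta = np.array([0.2, -0.1, 0.4, 0.0])             # fixed-effect linear predictors
ll_quad = cluster_marginal_loglik(y, eta, sigma=1.0)

# Brute-force Monte Carlo check of the same integral.
rng = np.random.default_rng(5)
b_mc = rng.normal(0.0, 1.0, size=200_000)
p_mc = 1.0 / (1.0 + np.exp(-(eta[:, None] + b_mc[None, :])))
ll_mc = float(np.log(np.mean(np.prod(np.where(y[:, None] == 1, p_mc, 1.0 - p_mc), axis=0))))

print(ll_quad, ll_mc)  # the two estimates agree closely
```

Summing these cluster contributions over all clusters gives the full marginal log-likelihood that the quadrature-based estimators in the abstract maximize.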
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
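The paper's central point, that shape-preserving limiters make an advection scheme nonlinear, can be checked directly by testing superposition on one step. A one-dimensional periodic sketch (the schemes below are generic textbook choices, not the GEOS-5 implementations):

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step_upwind(u, c):
    """One first-order upwind step (linear in u); speed > 0, periodic grid."""
    return u - c * (u - np.roll(u, 1))

def step_limited(u, c):
    """One MUSCL step with a minmod limiter (shape-preserving, hence nonlinear)."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    flux = u + 0.5 * (1.0 - c) * slope       # reconstructed value at the right face
    return u - c * (flux - np.roll(flux, 1))

def nonlinearity(step, u, du, c=0.5):
    """Max deviation from superposition; ~0 only for a linear scheme."""
    return float(np.max(np.abs(step(u + du, c) - step(u, c) - step(du, c))))

rng = np.random.default_rng(6)
u = np.zeros(64)
u[20:30] = 1.0                                # a square wave: sharp "shocks"
du = 1e-3 * rng.standard_normal(64)           # a small tangent-linear perturbation

err_upwind = nonlinearity(step_upwind, u, du)
err_minmod = nonlinearity(step_limited, u, du)
print(err_upwind, err_minmod)  # upwind: rounding noise; minmod: far larger
```

The limited scheme fails superposition precisely in the cells adjacent to the discontinuity, which mirrors the abstract's finding that limiter nonlinearity concentrates near shocks and corrupts tangent-linear perturbations there.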
NASA Astrophysics Data System (ADS)
Begley, Matthew R.; Creton, Costantino; McMeeking, Robert M.
2015-11-01
A general asymptotic plane strain crack tip stress field is constructed for linear versions of neo-Hookean materials, which spans a wide variety of special cases including incompressible Mooney elastomers, the compressible Blatz-Ko elastomer, several cases of the Ogden constitutive law and a new result for a compressible linear neo-Hookean material. The nominal stress field has dominant terms that have a square root singularity with respect to the distance of material points from the crack tip in the undeformed reference configuration. At second order, there is a uniform tension parallel to the crack. The associated displacement field in plane strain at leading order has dependence proportional to the square root of the same coordinate. The relationship between the amplitude of the crack tip singularity (a stress intensity factor) and the plane strain energy release rate is outlined for the general linear material, with simplified relationships presented for notable special cases.
General methods for determining the linear stability of coronal magnetic fields
NASA Technical Reports Server (NTRS)
Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.
1988-01-01
A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.
Meta-analysis of Complex Diseases at Gene Level with Generalized Functional Linear Models.
Fan, Ruzong; Wang, Yifan; Chiu, Chi-Yang; Chen, Wei; Ren, Haobo; Li, Yun; Boehnke, Michael; Amos, Christopher I; Moore, Jason H; Xiong, Momiao
2016-02-01
We developed generalized functional linear models (GFLMs) to perform a meta-analysis of multiple case-control studies to evaluate the relationship of genetic data to dichotomous traits adjusting for covariates. Unlike the previously developed meta-analysis for sequence kernel association tests (MetaSKATs), which are based on mixed-effect models to make the contributions of major gene loci random, GFLMs are fixed models; i.e., genetic effects of multiple genetic variants are fixed. Based on GFLMs, we developed chi-squared-distributed Rao's efficient score test and likelihood-ratio test (LRT) statistics to test for an association between a complex dichotomous trait and multiple genetic variants. We then performed extensive simulations to evaluate the empirical type I error rates and power performance of the proposed tests. The Rao's efficient score test statistics of GFLMs are very conservative and have higher power than MetaSKATs when some causal variants are rare and some are common. When the causal variants are all rare [i.e., minor allele frequencies (MAF) < 0.03], the Rao's efficient score test statistics have similar or slightly lower power than MetaSKATs. The LRT statistics generate accurate type I error rates for homogeneous genetic-effect models and may inflate type I error rates for heterogeneous genetic-effect models owing to the large numbers of degrees of freedom and have similar or slightly higher power than the Rao's efficient score test statistics. GFLMs were applied to analyze genetic data of 22 gene regions of type 2 diabetes data from a meta-analysis of eight European studies and detected significant association for 18 genes (P < 3.10 × 10^-6), tentative association for 2 genes (HHEX and HMGA2; P ≈ 10^-5), and no association for 2 genes, while MetaSKATs detected none. In addition, the traditional additive-effect model detects association at gene HHEX. GFLMs and related tests can analyze rare or common variants or a combination of the two and
Chen, Gang; Adleman, Nancy E; Saad, Ziad S; Leibenluft, Ellen; Cox, Robert W
2014-10-01
All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance-covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse-Geisser and Huynh-Feldt) with MVT-WS. To validate the MVM methodology, we performed simulations to assess the controllability for false positives and power achievement. A real FMRI dataset was analyzed to demonstrate the capability of the MVM approach. The methodology has been implemented into an open source program 3dMVM in AFNI, and all the statistical tests can be performed through symbolic coding with variable names instead of the tedious process of dummy coding. Our data indicates that the severity of sphericity violation varies substantially across brain regions. The differences among various modeling methodologies were addressed through direct comparisons between the MVM approach and some of the GLM implementations in
Shirokov, M. E.
2013-11-15
The method of complementary channels for analysing the reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular to Bosonic Gaussian channels. The obtained reversibility conditions for Bosonic linear channels have a clear physical interpretation, and their sufficiency is also shown by explicit construction of reversing channels. The complementary-channel method makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrödinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.
NASA Technical Reports Server (NTRS)
Chahine, M. T.
1977-01-01
A mapping transformation is derived for the inverse solution of nonlinear and linear integral equations of the types encountered in remote sounding studies. The method is applied to the solution of specific problems for the determination of the thermal and composition structure of planetary atmospheres from a knowledge of their upwelling radiance.
ERIC Educational Resources Information Center
Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer
2013-01-01
Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…
Linear and Nonlinear Optical Properties in Spherical Quantum Dots: Generalized Hulthén Potential
NASA Astrophysics Data System (ADS)
Onyeaju, M. C.; Idiodi, J. O. A.; Ikot, A. N.; Solaimani, M.; Hassanabadi, H.
2016-05-01
In this work, we studied the optical properties of spherical quantum dots confined by the Hulthén potential with the appropriate centrifugal term included. The approximate bound-state solutions and wave functions were obtained from the Schrödinger wave equation by applying the factorization method. We have also used the density matrix formalism to investigate the linear and third-order nonlinear absorption coefficients and refractive index changes.
NASA Astrophysics Data System (ADS)
Rukolaine, Sergey A.
2016-05-01
In classical kinetic models a particle free path distribution is exponential, but this is more likely to be an exception than a rule. In this paper we derive a generalized linear Boltzmann equation (GLBE) for a general free path distribution in the framework of Alt's model. In the case that the free path distribution has at least first and second finite moments we construct an asymptotic solution to the initial value problem for the GLBE for small mean free paths. In the special case of the one-speed transport problem the asymptotic solution results in a diffusion approximation to the GLBE.
Use of a generalized linear model to evaluate range forage production estimates
NASA Astrophysics Data System (ADS)
Mitchell, John E.; Joyce, Linda A.
1986-05-01
Interdisciplinary teams have been used in federal land planning and in the private sector to reach consensus on the environmental impact of management. When a large data base is constructed, verifiability of the accuracy of the coded estimates and the underlying assumptions becomes a problem. A mechanism is provided by the use of a linear statistical model to evaluate production coefficients in terms of errors in coding and underlying assumptions. The technique can be used to evaluate other intuitive models depicting natural resource production in relation to prescribed variables, such as site factors or secondary succession.
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and at another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method not infrequently diverges if the starting point is far from the root; the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, instead requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
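The fallback behaviour described above, plain steepest descent on the squared residual with an adaptive step, can be sketched as follows. The eigenvector-based acceleration of the paper is not reproduced, and the example system is invented:

```python
import numpy as np

def solve_descent(F, J, x, n_iter=500, tol=1e-10):
    """Solve F(x) = 0 by steepest descent on f = 0.5 * ||F(x)||^2 with a
    backtracking (adaptive) step size. This is only the slow, globally
    robust fallback path; the paper's quadratic-convergence acceleration
    near a root is omitted."""
    for _ in range(n_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        g = J(x).T @ r                        # gradient of f
        f0, t = 0.5 * (r @ r), 1.0
        while 0.5 * np.sum(F(x - t * g) ** 2) >= f0 and t > 1e-12:
            t *= 0.5                          # backtrack until f decreases
        x = x - t * g
    return x

# Invented example: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[1] - x[0] ** 2])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                        [-2.0 * x[0], 1.0]])

root = solve_descent(F, J, np.array([2.0, 2.0]))
print(root, np.linalg.norm(F(root)))  # residual driven toward zero
```

Note that, unlike Levenberg-Marquardt, no linear system is ever solved: each iteration needs only function and gradient evaluations, which is exactly the time-for-space trade-off the abstract describes.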
A substructure coupling procedure applicable to general linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Howsman, T. G.; Craig, R. R., Jr.
1984-01-01
A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the non-self-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.; Schultz, Marc R.
2012-01-01
A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.
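The homogeneous part of such a fourth-order constant-coefficient equation is governed by its characteristic polynomial. A sketch for the classical axisymmetric cylinder-bending form w'''' + 4β⁴w = p/D (this specific equation is the textbook specially-orthotropic special case, and the β value is illustrative, not from the paper):

```python
import numpy as np

# Classical axisymmetric cylinder-bending equation: w'''' + 4*beta**4 * w = p/D.
# Its homogeneous basis functions come from the roots of the characteristic
# polynomial r**4 + 4*beta**4 = 0.
beta = 1.3                                   # illustrative decay parameter
r = np.roots([1.0, 0.0, 0.0, 0.0, 4.0 * beta**4])
# The roots are beta*(+/-1 +/- i): exponentially decaying sinusoids, i.e. the
# familiar bending boundary layers near the cylinder edges.
```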
Shevenell, L.A.; Beauchamp, J.J.
1994-11-01
Several waste disposal sites are located on or adjacent to the karstic Maynardville Limestone (Cmn) and the Copper Ridge Dolomite (Ccr) at the Oak Ridge Y-12 Plant. These formations receive contaminants in groundwaters from nearby disposal sites, which can be transported quite rapidly due to the karst flow system. In order to evaluate transport processes through the karst aquifer, the solutional aspects of the formations must be characterized. As one component of this characterization effort, statistical analyses were conducted on the data related to cavities in order to determine whether a suitable model could be identified that is capable of predicting the probability of cavity size or distribution in locations for which drilling data are not available. Existing data on the locations (East, North coordinates), depths (and elevations), and sizes of known conduits and other water zones were used in the analyses. Two different models were constructed in the attempt to predict the distribution of cavities in the vicinity of the Y-12 Plant: General Linear Models (GLM) and Logistic Regression Models (LOG). Each of the models attempted was very sensitive to the data set used. Models based on subsets of the full data set were found to do an inadequate job of predicting the behavior of the full data set. The fact that the Ccr and Cmn data sets differ significantly is not surprising considering that the hydrogeology of the two formations differs. Flow in the Cmn is generally at elevations between 600 and 950 ft and is dominantly strike-parallel through submerged, partially mud-filled cavities with sizes up to 40 ft, but more typically less than 5 ft. Recognized flow in the Ccr is generally above 950 ft elevation, with flow both parallel and perpendicular to geologic strike through conduits, which tend to be larger than those in the Cmn and are often not fully saturated at the shallower depths.
Enhanced multi-level block ILU preconditioning strategies for general sparse linear systems
NASA Astrophysics Data System (ADS)
Saad, Yousef; Zhang, Jun
2001-05-01
This paper introduces several strategies to deal with pivot blocks in multi-level block incomplete LU factorization (BILUM) preconditioning techniques. These techniques are aimed at increasing the robustness and controlling the amount of fill-ins of BILUM for solving large sparse linear systems when large-size blocks are used to form block-independent sets. Techniques proposed in this paper include double-dropping strategies, approximate singular-value decomposition, variable-size blocks and use of an arrowhead block submatrix. We point out the advantages and disadvantages of these strategies and discuss their efficient implementations. Numerical experiments are conducted to show the usefulness of the new techniques in dealing with hard-to-solve problems arising from computational fluid dynamics. In addition, we discuss the relation between multi-level ILU preconditioning methods and algebraic multi-level methods.
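BILUM itself is not publicly packaged, but the role an incomplete-LU preconditioner plays in accelerating a Krylov solver can be illustrated with SciPy's single-level ILU; the test matrix and drop parameters below are illustrative, not from the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson test matrix on a 30 x 30 grid (a standard sparse test system).
n = 30
T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n))
S = sp.diags([-1.0, -1.0], [-1, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(S, sp.identity(n))).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization applied as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)
x, info = spla.gmres(A, b, M=M)
```

With an accurate ILU factorization GMRES typically converges in a handful of iterations, whereas the unpreconditioned iteration stagnates as the grid grows.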
Iterative solution of general sparse linear systems on clusters of workstations
Lo, Gen-Ching; Saad, Y.
1996-12-31
Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques that have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels, such as inner products, could erode any gains from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.
Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob
2007-01-01
For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
2005-01-01
A three-dimensional numerical solver based on finite-difference solution of three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data such as radial and tangential stresses over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve lumped parameter or distributed parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems of geophysics where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations along with the appropriate boundary conditions have been cast in generalized curvilinear coordinates in the present study.
Generalized linear stability of non-inertial rimming flow in a rotating horizontal cylinder.
Aggarwal, Himanshu; Tiwari, Naveen
2015-10-01
The stability of a thin film of viscous liquid inside a horizontally rotating cylinder is studied using modal and non-modal analysis. The equation governing the film thickness is derived within the lubrication approximation and up to first order in the aspect ratio (average film thickness to radius of the cylinder). The effects of gravity, viscous stress, and capillary pressure are considered in the model. Steady base profiles, uniform in the axial direction, are computed in the parameter space of interest. A linear stability analysis is performed on these base profiles to study their stability to axial perturbations. The destabilizing influence of the aspect ratio and surface tension is demonstrated, which is attributed to capillary instability. The transient growth that gives the maximum amplification of any initial disturbance and the pseudospectra of the stability operator are computed. These computations reveal a weak effect of non-normality of the operator, and the results of the eigenvalue analysis are recovered after a brief transient period. Results from nonlinear simulations are also presented, which confirm the validity of the modal analysis for the flow considered in this study. PMID:26496740
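The contrast between modal and non-modal predictions can be seen on a toy operator. The 2×2 matrix below is illustrative and unrelated to the film equation; it shows the strongly non-normal behavior that the study finds to be *weak* for the rimming-flow operator:

```python
import numpy as np
from scipy.linalg import expm

# Toy non-normal stability operator: both eigenvalues are negative, so
# modal analysis predicts monotone decay, yet the transient energy gain
# G(t) = ||exp(t A)||_2**2 amplifies disturbances at early times.
A = np.array([[-0.1, 5.0],
              [0.0, -1.0]])
times = np.linspace(0.0, 10.0, 201)
G = [np.linalg.norm(expm(A * t), 2) ** 2 for t in times]
# For a weakly non-normal operator, as in the rimming-flow study, this
# amplification is small and the eigenvalue picture is recovered after
# a brief transient.
```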
NASA Astrophysics Data System (ADS)
Sokolov, I. M.
2006-06-01
The work by Barbi, Bologna, and Grigolini [Phys. Rev. Lett. 95, 220601 (2005)] discusses a response to alternating external field of a non-Markovian two-state system, where the waiting time between the two attempted changes of state follows a power law. It introduced a new instrument for description of such situations based on a stochastic master equation with reset. In the present Brief Report we provide an alternative description of the situation within the framework of a generalized master equation. The results of our analytical approach are corroborated by direct numerical simulations of the system.
NASA Technical Reports Server (NTRS)
Noah, S. T.; Kim, Y. B.
1991-01-01
A general approach is developed for determining the periodic solutions and their stability of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, thereby enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur, leading to the identification of parameter ranges in which chaotic response occurs.
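The oscillator with a stop of finite stiffness can be sketched in the time domain. This is direct numerical integration for illustration only, not the paper's harmonic balance/Fourier procedure, and all parameter values are assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def oscillator_with_stop(t, y, zeta=0.05, gap=1.0, k_stop=10.0, F=1.0, w=1.0):
    """Harmonically forced oscillator contacting a stop of finite stiffness:
    an extra restoring force k_stop*(x - gap) acts only while x > gap,
    making the system piecewise-smooth."""
    x, v = y
    f_stop = k_stop * (x - gap) if x > gap else 0.0
    return [v, F * np.cos(w * t) - 2.0 * zeta * v - x - f_stop]

sol = solve_ivp(oscillator_with_stop, (0.0, 200.0), [0.0, 0.0],
                max_step=0.05, rtol=1e-8, atol=1e-10)
x_late = sol.y[0][sol.t > 100.0]       # steady-state portion after transients
```

Sweeping the forcing amplitude or frequency in such a simulation is how the flip/fold bifurcations found by the harmonic-balance analysis would be cross-checked.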
NASA Astrophysics Data System (ADS)
Wang, Lu; Wu, Li-Wei; Wei, Le; Gao, Juan; Sun, Cui-Li; Chai, Pei; Li, Dao-Wu
2014-02-01
The accuracy of attenuation correction in positron emission tomography scanners depends mainly on deriving a reliable 511-keV linear attenuation coefficient distribution in the scanned objects. In the PET/CT system, the linear attenuation distribution is usually obtained from the intensities of the CT image. However, the intensities of the CT image relate to the attenuation of photons in an energy range of 40-140 keV. Before implementing PET attenuation correction, the intensities of CT images must be transformed into the PET 511-keV linear attenuation coefficients. However, the CT scan parameters can affect the effective energy of CT X-ray photons and thus the intensities of the CT image. Therefore, for PET/CT attenuation correction, it is crucial to determine the conversion curve for a given set of CT scan parameters and convert the CT image into a PET linear attenuation coefficient distribution. A generalized method is proposed for converting a CT image into a PET linear attenuation coefficient distribution. Instead of parameter-dependent phantom calibration experiments, the conversion curve is calculated directly by employing the consistency conditions to yield the most consistent attenuation map with the measured PET data. The method is evaluated with phantom experiments and small animal experiments. In phantom studies, the estimated conversion curve fits the true attenuation coefficients accurately, and accurate PET attenuation maps are obtained by the estimated conversion curves and provide nearly the same correction results as the true attenuation map. In small animal studies, a more complicated attenuation distribution of the mouse is obtained successfully to remove the attenuation artifact and improve the PET image contrast efficiently.
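For contrast with the proposed consistency-based estimation, the conventional parameter-dependent conversion is a piecewise-linear ("bilinear") curve; a sketch with illustrative breakpoint values:

```python
import numpy as np

def hu_to_mu511(hu, mu_water=0.096, mu_bone=0.172, hu_bone=1000.0):
    """Piecewise-linear ("bilinear") conversion from CT numbers (HU) to
    511-keV linear attenuation coefficients in cm^-1. The breakpoint
    values here are illustrative; in practice they depend on the CT scan
    parameters, which is the dependence the paper's method sidesteps."""
    hu = np.asarray(hu, dtype=float)
    mu = np.where(hu <= 0.0,
                  mu_water * (1.0 + hu / 1000.0),                  # air..water
                  mu_water + (mu_bone - mu_water) * hu / hu_bone)  # water..bone
    return np.clip(mu, 0.0, None)

mu = hu_to_mu511([-1000.0, 0.0, 1000.0])   # air, water, dense bone
```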
MacNab, Ying C
2016-09-20
We present a general coregionalization framework for developing coregionalized multivariate Gaussian conditional autoregressive (cMCAR) models for Bayesian analysis of multivariate lattice data in general and multivariate disease mapping data in particular. This framework is inclusive of cMCARs that facilitate flexible modelling of spatially structured symmetric or asymmetric cross-variable local interactions, allowing a wide range of separable or non-separable covariance structures, and symmetric or asymmetric cross-covariances, to be modelled. We present a brief overview of established univariate Gaussian conditional autoregressive (CAR) models for univariate lattice data and develop coregionalized multivariate extensions. Classes of cMCARs are presented by formulating precision structures. The resulting conditional properties of the multivariate spatial models are established, which cast new light on cMCARs with richly structured covariances and cross-covariances of different spatial ranges. The related methods are illustrated via an in-depth Bayesian analysis of a Minnesota county-level cancer data set. We also bring a new dimension to the traditional enterprise of Bayesian disease mapping: estimating and mapping covariances and cross-covariances of the underlying disease risks. Maps of covariances and cross-covariances bring to light spatial characterizations of the cMCARs and inform on spatial risk associations between areas and diseases. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27091685
NASA Astrophysics Data System (ADS)
Cedeño M, C. E.; de Araujo, J. C. N.
2016-05-01
A study of binary systems composed of two point particles with different masses in the linear regime of the characteristic formulation of general relativity with a Minkowski background is provided. The present paper generalizes a previous study by Bishop et al. The boundary conditions at the world tubes generated by the particles' orbits are explored, where the metric variables are decomposed in spin-weighted spherical harmonics. The power lost by the emission of gravitational waves is computed using the Bondi News function. The power found is the well-known result obtained by Peters and Mathews using a different approach. This agreement validates the approach considered here. Several multipole term contributions to the gravitational radiation field are also shown.
Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum
Wilson, Emma D.; Assaf, Tareq; Pearson, Martin J.; Rossiter, Jonathan M.; Dean, Paul; Anderson, Sean R.; Porrill, John
2015-01-01
The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks. PMID:26257638
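The core of the cerebellar adaptive filter is an LMS-style decorrelation learning rule. A minimal sketch follows; the filter length, learning rate, and signals are illustrative, and the paper's MRAC augmentation for higher-order plants is omitted:

```python
import numpy as np

rng = np.random.default_rng(5)
n, taps = 5000, 8
x = rng.normal(size=n)                    # input ("mossy fibre") signal
w_true = rng.normal(size=taps)            # unknown reference filter
d = np.convolve(x, w_true)[:n]            # desired (teaching) signal

# LMS-style decorrelation rule of the adaptive filter model: each weight
# is nudged along the input, scaled by the teaching error.
w = np.zeros(taps)
mu = 0.01                                 # learning rate (illustrative)
for i in range(taps, n):
    u = x[i - taps + 1:i + 1][::-1]       # x[i], x[i-1], ..., x[i-taps+1]
    e = d[i] - w @ u                      # error relative to the teacher
    w += mu * e * u
```

With the teaching signal available, the weights converge to the reference filter; the instability the paper addresses arises when such a filter is wrapped around a plant whose displacement response adds extra integrations.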
A generalized linear mixed model for longitudinal binary data with a marginal logit link function
Parzen, Michael; Ghosh, Souparno; Lipsitz, Stuart; Sinha, Debajyoti; Fitzmaurice, Garrett M.; Mallick, Bani K.; Ibrahim, Joseph G.
2010-01-01
Summary: Longitudinal studies of a binary outcome are common in the health, social, and behavioral sciences. In general, a feature of random effects logistic regression models for longitudinal binary data is that the marginal functional form, when integrated over the distribution of the random effects, is no longer of logistic form. Recently, Wang and Louis (2003) proposed a random intercept model in the clustered binary data setting where the marginal model has a logistic form. An acknowledged limitation of their model is that it allows only a single random effect that varies from cluster to cluster. In this paper, we propose a modification of their model to handle longitudinal data, allowing separate, but correlated, random intercepts at each measurement occasion. The proposed model allows for a flexible correlation structure among the random intercepts, where the correlations can be interpreted in terms of Kendall’s τ. For example, the marginal correlations among the repeated binary outcomes can decline with increasing time separation, while the model retains the property of having matching conditional and marginal logit link functions. Finally, the proposed method is used to analyze data from a longitudinal study designed to monitor cardiac abnormalities in children born to HIV-infected women. PMID:21532998
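The premise motivating the Wang-Louis construction, that integrating a logistic model over normal random intercepts attenuates the marginal logit and destroys the logistic form, can be checked numerically (all parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, sigma_b = 1.0, 2.0                 # conditional slope, random-intercept SD
x = np.array([0.0, 1.0])                  # a binary covariate

# Average the conditional logistic probabilities over b ~ N(0, sigma_b^2)
# to obtain the marginal probabilities (Monte Carlo integration).
b = rng.normal(0.0, sigma_b, size=200000)[:, None]
p_cond = 1.0 / (1.0 + np.exp(-(b + beta1 * x)))
p_marg = p_cond.mean(axis=0)

# The marginal log-odds difference is attenuated below beta1 = 1: the
# ordinary normal-random-intercept model does not preserve the logit link.
logit = lambda p: np.log(p / (1.0 - p))
marg_slope = logit(p_marg[1]) - logit(p_marg[0])
```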
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
Iwasaki, Yuichi; Brinkman, Stephen F
2015-04-01
Increased concerns about the toxicity of chemical mixtures have led to greater emphasis on analyzing the interactions among the mixture components based on observed effects. The authors applied a generalized linear mixed model (GLMM) to analyze survival of brown trout (Salmo trutta) acutely exposed to metal mixtures that contained copper and zinc. Compared with dominant conventional approaches based on an assumption of concentration addition and the concentration of a chemical that causes x% effect (ECx), the GLMM approach has 2 major advantages. First, binary response variables such as survival can be modeled without any transformations, and thus sample size can be taken into consideration. Second, the importance of the chemical interaction can be tested in a simple statistical manner. Through this application, the authors investigated whether the estimated concentration of the 2 metals binding to humic acid, which is assumed to be a proxy of nonspecific biotic ligand sites, provided a better prediction of survival effects than dissolved and free-ion concentrations of metals. The results suggest that the estimated concentration of metals binding to humic acid is a better predictor of survival effects, and thus the metal competition at the ligands could be an important mechanism responsible for effects of metal mixtures. Application of the GLMM (and the generalized linear model) presents an alternative or complementary approach to analyzing mixture toxicity. PMID:25524054
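The interaction test at the heart of this approach can be sketched with a plain fixed-effects logistic likelihood-ratio test. The data here are simulated, not the brown trout dataset, and the random (tank/replicate) effects that make the full model a GLMM are omitted:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_loglik(beta, X, y):
    # Numerically stable Bernoulli negative log-likelihood with logit link.
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta)) - y @ eta

def max_loglik(X, y):
    res = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X, y), method="BFGS")
    return -res.fun

rng = np.random.default_rng(1)
n = 500
cu = rng.uniform(0, 2, n)                    # illustrative "copper" dose
zn = rng.uniform(0, 2, n)                    # illustrative "zinc" dose
eta = 2.0 - 1.0 * cu - 0.8 * zn              # simulated truth: no interaction
y = rng.binomial(1, 1 / (1 + np.exp(-eta))).astype(float)

ones = np.ones(n)
X_main = np.column_stack([ones, cu, zn])
X_int = np.column_stack([ones, cu, zn, cu * zn])

# Likelihood-ratio test for the Cu x Zn interaction (1 degree of freedom).
lr = max(2 * (max_loglik(X_int, y) - max_loglik(X_main, y)), 0.0)
p_value = chi2.sf(lr, df=1)
```

A small p-value would indicate that the interaction term improves the fit, which is the "simple statistical test of the chemical interaction" the abstract highlights as an advantage over ECx-based approaches.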
Benedetti, Andrea; Platt, Robert; Atherton, Juli
2014-01-01
Background: Over time, adaptive Gaussian Hermite quadrature (QUAD) has become the preferred method for estimating generalized linear mixed models with binary outcomes. However, penalized quasi-likelihood (PQL) is still used frequently. In this work, we systematically evaluated whether matching results from PQL and QUAD indicate less bias in estimated regression coefficients and variance parameters via simulation. Methods: We performed a simulation study in which we varied the size of the data set, probability of the outcome, variance of the random effect, number of clusters and number of subjects per cluster, etc. We estimated bias in the regression coefficients, odds ratios and variance parameters as estimated via PQL and QUAD. We ascertained if similarity of estimated regression coefficients, odds ratios and variance parameters predicted less bias. Results: Overall, we found that the absolute percent bias of the odds ratio estimated via PQL or QUAD increased as the PQL- and QUAD-estimated odds ratios became more discrepant, though results varied markedly depending on the characteristics of the dataset. Conclusions: Given how markedly results varied depending on data set characteristics, specifying a rule above which indicated biased results proved impossible. This work suggests that comparing results from generalized linear mixed models estimated via PQL and QUAD is a worthwhile exercise for regression coefficients and variance components obtained via QUAD, in situations where PQL is known to give reasonable results. PMID:24416249
Kitaev models based on unitary quantum groupoids
Chang, Liang
2014-04-15
We establish a generalization of Kitaev models based on unitary quantum groupoids. In particular, when inputting a Kitaev-Kong quantum groupoid H_C, we show that the ground state manifold of the generalized model is canonically isomorphic to that of the Levin-Wen model based on a unitary fusion category C. Therefore, the generalized Kitaev models provide realizations of the target space of the Turaev-Viro topological quantum field theory based on C.
Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy
2016-06-01
Crash data can often be characterized by over-dispersion, heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers a better performance than the NB model once data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail, but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large amount of zeros. In addition to a greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion. PMID:26945472
Wang, Tao; He, Peng; Ahn, Kwang Woo; Wang, Xujing; Ghosh, Soumitra; Laud, Purushottam
2015-01-01
The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to be specified using standard statistical software. In this study, we propose a Cholesky decomposition based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently via “proc nlmixed” and “proc glimmix” in SAS, or OpenBUGS via R package BRugs. Performances of these procedures in fitting the re-formulated GLMM are examined through simulation studies. We also apply this re-formulated GLMM to analyze a real data set from Type 1 Diabetes Genetics Consortium (T1DGC). PMID:25873936
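The Cholesky trick, re-expressing correlated family effects through independent ones, can be verified numerically (the kinship-style matrix below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Kinship-style correlation among four family members (values illustrative).
K = np.array([[1.00, 0.50, 0.50, 0.25],
              [0.50, 1.00, 0.50, 0.25],
              [0.50, 0.50, 1.00, 0.25],
              [0.25, 0.25, 0.25, 1.00]])
sigma2 = 2.0

# Cholesky re-formulation: b = L @ u with independent u ~ N(0, I) reproduces
# cov(b) = L @ L.T = sigma2 * K, so software that only supports independent
# random effects (e.g. standard mixed-model procedures) can still express
# the family correlation structure.
L = np.linalg.cholesky(sigma2 * K)
u = rng.standard_normal((4, 100000))
b = L @ u
emp_cov = np.cov(b)
```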
Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew
2015-01-01
Summary: Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012) and Lefebvre et al. (2014), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to non-collapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100 to 150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within thirty days of diagnosis. PMID:25899155
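The Bayesian bootstrap step, Dirichlet weights integrating over the empirical confounder distribution, can be sketched as follows. This is a simplified linear-outcome illustration: the full method also refits the outcome model within each draw and averages over confounder/interaction selection, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)                          # a confounder
t = rng.binomial(1, 1 / (1 + np.exp(-x)))       # exposure depends on x
y = 1.0 * t + 2.0 * x + rng.normal(size=n)      # simulated causal effect = 1

# Outcome model with a confounder-exposure interaction (fit once here
# for brevity; the full method refits per bootstrap draw).
X = np.column_stack([np.ones(n), t, x, t * x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

def mean_outcome(t_val, w):
    """Weighted population-average prediction under exposure t_val."""
    tt = np.full(n, t_val)
    Xc = np.column_stack([np.ones(n), tt, x, tt * x])
    return w @ (Xc @ beta)

# Bayesian bootstrap: Dirichlet(1,...,1) weights over the observations
# integrate over the empirical confounder distribution.
draws = [mean_outcome(1.0, w) - mean_outcome(0.0, w)
         for w in rng.dirichlet(np.ones(n), size=500)]
effect = float(np.mean(draws))
```

Because of non-collapsibility, this population-averaged contrast, not the exposure coefficient itself, is the estimand the paper targets.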
NASA Astrophysics Data System (ADS)
Lebovka, Nikolai I.; Tarasevich, Yuri Yu.; Dubinin, Dmitri O.; Laptev, Valeri V.; Vygornitskii, Nikolai V.
2015-12-01
The jamming and percolation for two generalized models of random sequential adsorption (RSA) of linear k-mers (particles occupying k adjacent sites) on a square lattice are studied by means of Monte Carlo simulation. The classical RSA model assumes the absence of overlapping of the new incoming particle with the previously deposited ones. The first model is a generalized variant of the RSA model for both k-mers and a lattice with defects. Some of the occupying k adjacent sites are considered as insulating and some of the lattice sites are occupied by defects (impurities). For this model even a small concentration of defects can inhibit percolation for relatively long k-mers. The second model is the cooperative sequential adsorption one where, for each new k-mer, only a restricted number of lateral contacts z with previously deposited k-mers is allowed. Deposition occurs in the case when z ≤ (1 − d)z_m, where z_m = 2(k + 1) is the maximum number of contacts of a k-mer and d is the fraction of forbidden contacts. Percolation is observed only in some interval k_min ≤ k ≤ k_max, where the values k_min and k_max depend upon the fraction of forbidden contacts d. The value k_max decreases as d increases. A logarithmic dependence of the type log10(k_max) = a + b·d, where a = 4.04 ± 0.22 and b = −4.93 ± 0.57, is obtained.
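The classical (defect-free, non-cooperative) RSA baseline that both generalized models extend can be sketched as follows; the lattice size and attempt count are illustrative:

```python
import numpy as np

def rsa_kmers(L, k, attempts, seed=0):
    """Classical RSA of linear k-mers on an L x L lattice: each attempt
    drops a horizontal or vertical k-mer at a random position and keeps
    it only if all k sites are empty (no overlap allowed)."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros((L, L), dtype=bool)
    deposited = 0
    for _ in range(attempts):
        if rng.random() < 0.5:                 # horizontal placement
            r = rng.integers(L)
            c = rng.integers(L - k + 1)
            seg = lattice[r, c:c + k]
        else:                                  # vertical placement
            r = rng.integers(L - k + 1)
            c = rng.integers(L)
            seg = lattice[r:r + k, c]
        if not seg.any():
            seg[:] = True                      # deposit through the view
            deposited += 1
    return deposited * k / L**2                # coverage fraction

theta = rsa_kmers(L=64, k=4, attempts=200000)
```

The generalized models would add a defect mask on the lattice (first model) or a count of lateral contacts before accepting a deposition (second model).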
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Mean (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels and processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I-k_4) as well as macro parameters, such as volume of distribution (V_d) and binding potential (BP_I and BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but improves noise in the parametric images. These findings indicate that it is desirable for pre-segmentation with traditional FCM clustering to generate voxel-wise parametric images with GLLS from dynamic SPECT data.
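Logan graphical analysis, one of the estimators evaluated above, reduces to a late-time linear fit whose slope is the distribution volume. A sketch on synthetic one-tissue-compartment data (all kinetic values and the input function are illustrative):

```python
import numpy as np

# Synthetic one-tissue compartment model: dCt/dt = K1*Cp - k2*Ct,
# so the distribution volume is Vd = K1/k2 = 4.0 (values illustrative).
K1, k2 = 1.2, 0.3
t = np.linspace(0.0, 60.0, 601)
dt = t[1] - t[0]
Cp = t * np.exp(-t / 4.0)                 # illustrative plasma input function
Ct = np.zeros_like(t)
for i in range(1, len(t)):                # forward-Euler integration
    Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])

# Logan graphical analysis: the plot of int(Ct)/Ct against int(Cp)/Ct
# is linear with slope Vd (exactly so for a one-tissue model).
icp = np.cumsum(Cp) * dt
ict = np.cumsum(Ct) * dt
late = t > 30.0
slope = np.polyfit(icp[late] / Ct[late], ict[late] / Ct[late], 1)[0]
```

GLLS instead estimates the individual micro parameters (K_1 through k_4) by linearizing the full compartmental equations, which is why it fails more readily at SPECT noise levels than this integrated, slope-only estimator.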
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-01-01
This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the resolution of a continuity equation for the time evolution of the ozone mixing ratio; the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the southern hemisphere over longer periods in spring time. It is concluded that for the study of climatic scenarios or the assimilation of ozone data, the present
Lebovka, Nikolai I; Tarasevich, Yuri Yu; Dubinin, Dmitri O; Laptev, Valeri V; Vygornitskii, Nikolai V
2015-12-01
The jamming and percolation for two generalized models of random sequential adsorption (RSA) of linear k-mers (particles occupying k adjacent sites) on a square lattice are studied by means of Monte Carlo simulation. The classical RSA model assumes no overlap of the newly incoming particle with the previously deposited ones. The first model is a generalized variant of the RSA model for both k-mers and a lattice with defects: some of the sites occupied by a k-mer are considered insulating, and some of the lattice sites are occupied by defects (impurities). For this model even a small concentration of defects can inhibit percolation for relatively long k-mers. The second model is a cooperative sequential adsorption model in which, for each new k-mer, only a restricted number of lateral contacts z with previously deposited k-mers is allowed. Deposition occurs when z ≤ (1 − d)z_max, where z_max = 2(k + 1) is the maximum number of contacts of a k-mer and d is the fraction of forbidden contacts. Percolation is observed only in some interval k_min ≤ k ≤ k_max, where the values of k_min and k_max depend on the fraction of forbidden contacts d. The value of k_max decreases as d increases. A logarithmic dependence of the type log10(k_max) = a + bd, with a = 4.04 ± 0.22 and b = −4.93 ± 0.57, is obtained. PMID:26764641
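The defect variant of the RSA model described above is easy to simulate; the following is a hypothetical minimal sketch (lattice size, attempt budget, and the way defects block deposition are my assumptions, not the authors' code):

```python
import random

def rsa_kmers(L, k, defect_frac, attempts=20000, seed=1):
    """Random sequential adsorption of horizontal/vertical k-mers on an
    L x L square lattice with point defects. Returns the fraction of
    lattice sites covered by deposited k-mers (illustrative sketch)."""
    random.seed(seed)
    EMPTY, DEFECT, OCCUPIED = 0, 1, 2
    lattice = [[EMPTY] * L for _ in range(L)]
    # Scatter the defects (impurities) before any deposition attempt.
    all_sites = [(r, c) for r in range(L) for c in range(L)]
    for (r, c) in random.sample(all_sites, int(defect_frac * L * L)):
        lattice[r][c] = DEFECT
    covered = 0
    for _ in range(attempts):
        r, c = random.randrange(L), random.randrange(L)
        horizontal = random.random() < 0.5
        cells = ([(r, c + i) for i in range(k)] if horizontal
                 else [(r + i, c) for i in range(k)])
        # Classical RSA rule: deposit only if every target site is empty.
        if all(0 <= rr < L and 0 <= cc < L and lattice[rr][cc] == EMPTY
               for rr, cc in cells):
            for rr, cc in cells:
                lattice[rr][cc] = OCCUPIED
            covered += k
    return covered / (L * L)
```

Running `rsa_kmers(32, 4, 0.3)` versus `rsa_kmers(32, 4, 0.0)` shows how even a modest defect concentration depresses the jamming coverage, consistent with the abstract's observation that defects can suppress percolation of long k-mers.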
Gonçalves, Nuno R; Whelan, Robert; Foxe, John J; Lalor, Edmund C
2014-08-15
Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp recorded neuroelectric or neuromagnetic signals. While this approach is and has been extremely useful, it suffers from two drawbacks: a lack of naturalness in terms of the stimulus and a lack of precision in terms of the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method by example through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latency, their topographic distribution and the estimated location of their sources. We discuss our approach by comparing it with fMRI/MRI informed source analysis methods and, in doing so, we provide novel information on the timing of coherent motion processing in human V6. Generalizing such an approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans. PMID:24736185
Tian, Fenghua; Liu, Hanli
2014-01-15
One of the main challenges in functional diffuse optical tomography (DOT) is to accurately recover the depth of brain activation, which is even more essential when differentiating true brain signals from task-evoked artifacts in the scalp. Recently, we developed a depth-compensated algorithm (DCA) to minimize the depth localization error in DOT. However, the semi-infinite model that was used in DCA deviated significantly from the realistic human head anatomy. In the present work, we incorporated depth-compensated DOT (DC-DOT) with a standard anatomical atlas of human head. Computer simulations and human measurements of sensorimotor activation were conducted to examine and prove the depth specificity and quantification accuracy of brain atlas-based DC-DOT. In addition, node-wise statistical analysis based on the general linear model (GLM) was also implemented and performed in this study, showing the robustness of DC-DOT that can accurately identify brain activation at the correct depth for functional brain imaging, even when co-existing with superficial artifacts. PMID:23859922
Pernet, Cyril R.
2014-01-01
This tutorial presents several misconceptions related to the use of the General Linear Model (GLM) in functional Magnetic Resonance Imaging (fMRI). The goal is not to present mathematical proofs but to educate using examples and computer code (in Matlab). In particular, I address issues related to (1) model parameterization (modeling baseline or null events) and scaling of the design matrix; (2) hemodynamic modeling using basis functions; and (3) computing percentage signal change. Using a simple controlled block design and an alternating block design, I first show why “baseline” should not be modeled (model over-parameterization), and how this affects effect sizes. I also show that, depending on what is tested, over-parameterization does not necessarily impact statistical results. Next, using a simple periodic vs. random event-related design, I show how the hemodynamic model (hemodynamic response function only, or with derivatives) can affect parameter estimates, as well as detail the role of orthogonalization. I then relate the above results to the computation of percentage signal change. Finally, I discuss how these issues affect group analyses and give some recommendations. PMID:24478622
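The over-parameterization point (modeling "baseline" alongside the task and constant regressors) can be seen directly from the rank of the design matrix. A small pure-Python illustration, with an invented 8-scan block design rather than the tutorial's Matlab code:

```python
def matrix_rank(M, tol=1e-9):
    """Rank via Gauss-Jordan elimination (plain Python, no NumPy)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    rank, pivot_row = 0, 0
    for col in range(cols):
        pivot = max(range(pivot_row, rows),
                    key=lambda r: abs(M[r][col]), default=None)
        if pivot is None or abs(M[pivot][col]) < tol:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        for r in range(rows):
            if r != pivot_row and abs(M[r][col]) > tol:
                f = M[r][col] / M[pivot_row][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        rank += 1
    return rank

# Hypothetical block design: condition A on/off over 8 scans, plus constant.
A = [1, 1, 0, 0, 1, 1, 0, 0]
const = [1] * 8
baseline = [1 - a for a in A]               # explicit "rest" regressor
X_ok = [[a, c] for a, c in zip(A, const)]   # task + constant: full rank
X_bad = [[a, b, c] for a, b, c in zip(A, baseline, const)]
```

Because `A + baseline = const` column-wise, `X_bad` has three columns but rank two: the over-parameterized model is rank-deficient, so the individual betas are not uniquely estimable even though some contrasts still are.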
NASA Astrophysics Data System (ADS)
Asong, Zilefac E.; Khaliq, M. N.; Wheater, H. S.
2016-02-01
Based on the Generalized Linear Model (GLM) framework, a multisite stochastic modelling approach is developed using daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. Temperature is modeled using a two-stage normal-heteroscedastic model by fitting mean and variance components separately. Likewise, precipitation occurrence and conditional precipitation intensity processes are modeled separately. The relationship between precipitation and temperature is accounted for by using transformations of precipitation as covariates to predict temperature fields. Large scale atmospheric covariates from the National Center for Environmental Prediction Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate these models for the 1971-2000 period. Validation of the developed models is performed on both pre- and post-calibration period data. Results of the study indicate that the developed models are able to capture spatiotemporal characteristics of observed precipitation and temperature fields, such as inter-site and inter-variable correlation structure, and systematic regional variations present in observed sequences. A number of simulated weather statistics ranging from seasonal means to characteristics of temperature and precipitation extremes and some of the commonly used climate indices are also found to be in close agreement with those derived from observed data. This GLM-based modelling approach will be developed further for multisite statistical downscaling of Global Climate Model outputs to explore climate variability and change in this region of Canada.
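The separate occurrence/intensity treatment of precipitation described above can be sketched as a two-stage GLM-style generator. Everything below (covariate, coefficients, gamma shape) is invented for illustration; it is not the calibrated Prairie model:

```python
import math
import random

def simulate_precip(days, covariates, beta_occ, beta_int, shape=0.7, seed=42):
    """Two-stage daily precipitation generator: a logistic (logit-link) model
    for wet/dry occurrence, then a gamma (log-link) model for wet-day
    intensity. Illustrative sketch only."""
    random.seed(seed)
    series = []
    for x in covariates[:days]:
        eta_occ = beta_occ[0] + beta_occ[1] * x
        p_wet = 1.0 / (1.0 + math.exp(-eta_occ))            # occurrence stage
        if random.random() < p_wet:
            mean_int = math.exp(beta_int[0] + beta_int[1] * x)  # intensity stage
            series.append(random.gammavariate(shape, mean_int / shape))
        else:
            series.append(0.0)
    return series

# Hypothetical seasonal covariate standing in for large-scale predictors.
season = [math.sin(2 * math.pi * d / 365) for d in range(365)]
rain = simulate_precip(365, season, beta_occ=(-0.5, 1.2), beta_int=(1.0, 0.8))
```

The same skeleton extends to multisite simulation by sharing covariates across sites and correlating the random draws, which is the role the reanalysis predictors and teleconnection indices play in the paper.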
NASA Astrophysics Data System (ADS)
Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.
2016-08-01
In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. Then the calibrated models are used to generate daily sequences of precipitation and temperature for the 1962-2005 historical period (conditioned on NCEP predictors) and for the future period (2006-2100) using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase-5) Earth System Models under the Representative Concentration Pathway (RCP) scenarios RCP2.6, RCP4.5, and RCP8.5. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter, while minimum temperature is expected to warm faster than maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.
NASA Astrophysics Data System (ADS)
de Souza, R. S.; Hilbe, J. M.; Buelens, B.; Riggs, J. D.; Cameron, E.; Ishida, E. E. O.; Chies-Santos, A. L.; Killedar, M.
2015-10-01
In this paper, the third in a series illustrating the power of generalized linear models (GLMs) for the astronomical community, we elucidate the potential of the class of GLMs which handles count data. The size of a galaxy's globular cluster (GC) population (NGC) is a long-standing puzzle in the astronomical literature. It falls in the category of count data analysis, yet it is usually modelled as if it were a continuous response variable. We have developed a Bayesian negative binomial regression model to study the connection between NGC and the following galaxy properties: central black hole mass, dynamical bulge mass, bulge velocity dispersion and absolute visual magnitude. The methodology introduced herein naturally accounts for heteroscedasticity, intrinsic scatter, errors in measurements of both axes (either discrete or continuous) and allows modelling of the population of GCs on its natural scale as a non-negative integer variable. Prediction intervals of 99 per cent around the trend for expected NGC comfortably envelope the data, notably including the Milky Way, which has hitherto been considered a problematic outlier. Finally, we demonstrate how random intercept models can incorporate information on each particular galaxy morphological type. Bayesian variable selection methodology allows for automatically identifying galaxy types with different GC production, suggesting that on average S0 galaxies have a GC population 35 per cent smaller than other types of similar brightness.
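The core of negative binomial regression is a count likelihood with a log link and an extra dispersion parameter. The sketch below is a maximum-likelihood flavoured illustration with invented data, not the paper's Bayesian sampler; the covariate is a stand-in for something like scaled log bulge mass:

```python
import math
import random

def nb_sample(mu, alpha, rng):
    """Draw from NB2 via the gamma-Poisson mixture (mean mu, dispersion alpha),
    using Knuth's Poisson sampler. Illustrative only."""
    lam = rng.gammavariate(1.0 / alpha, mu * alpha)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def nb_loglik(y, x, b0, b1, alpha):
    """NB2 log-likelihood with log link mu = exp(b0 + b1*x)."""
    r, ll = 1.0 / alpha, 0.0
    for yi, xi in zip(y, x):
        mu = math.exp(b0 + b1 * xi)
        p = r / (r + mu)
        ll += (math.lgamma(yi + r) - math.lgamma(r) - math.lgamma(yi + 1)
               + r * math.log(p) + yi * math.log(1.0 - p))
    return ll

rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(300)]        # invented covariate
ys = [nb_sample(math.exp(2.0 + 1.0 * xi), 0.5, rng) for xi in xs]
```

With the data generated at (b0, b1) = (2, 1), the log-likelihood is visibly higher at the true parameters than at perturbed ones, which is the heteroscedasticity-aware count analogue of fitting a straight line to log counts.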
Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C
2015-06-01
This communication describes the general characteristics of the venom of the Brazilian scorpion Tityus fasciolatus, an endemic species found in central Brazil (states of Goiás and Minas Gerais) that is responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with an LD50 of 2.984 mg/kg (subcutaneously). SDS-PAGE of the soluble venom resolved 10 fractions ranging in size from 6 to 10-80 kDa. Sheep were employed for anti-T. fasciolatus venom serum production. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membranes (spot-synthesis technique). The epitopes were located on the 3D structures and some residues important for structure/function were identified. PMID:25817000
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582
NASA Astrophysics Data System (ADS)
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main tasks in this paper: the parameter estimation procedure, a simulation study, and application of the model to real data. For the parameter estimation procedure, the concepts of thresholds, nested random effects, and the computational algorithm are described. The simulation data are built for 3 conditions to assess the effect of different parameter values of the random effect distributions. The last task is the application of the model to data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. For the results of the simulation, ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) are used. They show that the province parameters have the highest bias, but the most stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the application of the model to the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-05-01
This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. It is concluded that for the study of climate scenarios or the assimilation of
Improving model-based diagnosis through algebraic analysis: The Petri net challenge
Portinale, L.
1996-12-31
The present paper describes the empirical evaluation of a linear algebra approach to model-based diagnosis in the case where the behavioral model of the device under examination is described by a Petri net. In particular, we show that algebraic analysis based on P-invariants of the net model can significantly improve the performance of a model-based diagnostic system, while keeping the integrity of a general framework defined from a formal logical theory. A system called INVADS is described, and experimental results, performed on a car fault domain and involving the comparison of different implementations of P-invariant based diagnosis, are then discussed.
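The algebraic property the paper exploits is that a P-invariant, a vector x with x^T C = 0 for incidence matrix C, makes the weighted token count x^T M invariant under any firing sequence. A toy 3-place, 2-transition net illustrates this (invented net, not the INVADS car-fault model):

```python
import random

# Incidence matrix C[p][t]: net token change at place p when transition t fires.
C = [[-1,  1],
     [ 1, -1],
     [ 1, -1]]
x = [2, 1, 1]   # candidate P-invariant: x^T C = 0

def is_invariant(x, C):
    """Check x^T C = 0 column by column."""
    return all(sum(x[p] * C[p][t] for p in range(len(C))) == 0
               for t in range(len(C[0])))

def fire(M, C, t):
    """Fire transition t if enabled (enough tokens in its input places);
    return the new marking, or None if t is not enabled."""
    if all(M[p] >= -C[p][t] for p in range(len(M)) if C[p][t] < 0):
        return [M[p] + C[p][t] for p in range(len(M))]
    return None

rng = random.Random(3)
M = [2, 0, 0]                                   # initial marking
weighted = sum(xi * mi for xi, mi in zip(x, M))  # conserved quantity
for _ in range(50):                              # random firing sequence
    M2 = fire(M, C, rng.randrange(2))
    if M2 is not None:
        M = M2
```

A diagnostic engine can use such conserved sums as cheap consistency checks: an observed marking that violates a P-invariant is immediately inconsistent with the model, without any state-space search.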
NASA Astrophysics Data System (ADS)
Nickel, Stefan; Hertel, Anne; Pesch, Roland; Schröder, Winfried; Steinnes, Eiliv; Uggerud, Hilde Thelle
2014-12-01
Objective. This study explores the statistical relations between the accumulation of heavy metals in moss and natural surface soil and potential influencing factors such as atmospheric deposition, using multivariate regression-kriging and generalized linear models. Based on data collected in 1995, 2000, 2005 and 2010 throughout Norway, the statistical correlations of a set of potential predictors (elevation, precipitation, density of different land uses, population density, physical properties of soil) with concentrations of cadmium (Cd), mercury and lead in moss and natural surface soil (response variables) were evaluated. Spatio-temporal trends were estimated by applying generalized linear models and geostatistics to spatial data covering Norway. The resulting maps were used to investigate to what extent the heavy metal concentrations in moss and natural surface soil are correlated. Results. From a set of ten potential predictor variables, the modelled atmospheric deposition showed the highest correlation with heavy metal concentrations in moss and natural surface soil. The density of various land uses within a 5 km radius reveals significant correlations with lead and cadmium concentrations in moss and mercury concentration in natural surface soil. Elevation also appeared to be a relevant factor for the accumulation of lead and mercury in moss and of cadmium in natural surface soil. Precipitation was found to be a significant factor for cadmium in moss and mercury in natural surface soil. The integrated use of multivariate generalized linear models and kriging interpolation enabled the creation of heavy metal maps at a high level of spatial resolution. The spatial patterns of cadmium and lead concentrations in moss and natural surface soil in 1995 and 2005 are similar. The heavy metal concentrations in moss and natural surface soil are correlated significantly, with high coefficients for lead, medium for cadmium and moderate for mercury. From 1995 up to 2010 the
NASA Technical Reports Server (NTRS)
Namburu, R. R.; Tamma, K. K.
1991-01-01
The applicability and evaluation of a generalized γ(T) family of flux-based representations are examined for two different thermal analysis formulations for structures and materials which exhibit no phase change effects. The so-called H-θ and θ forms are demonstrated for numerous test models and for linear and higher-order elements. The results show that the θ form with flux-based representations is generally superior to traditional approaches.
ERIC Educational Resources Information Center
Bashaw, W. L., Ed.; Findley, Warren G., Ed.
This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali
2015-01-01
This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…
Model-Based Prognostics of Hybrid Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal
2015-01-01
Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.
NASA Technical Reports Server (NTRS)
Frisch, Harold P.
2007-01-01
Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time, and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data. They interrelate all of the data to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. The consistency and integrity of the model are assured; therefore the consistency and integrity of the various specification documents are ensured. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed, and how those needs are being addressed by international standards writing teams.
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
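A stripped-down sketch of the idea: an augmented Lagrangian outer loop whose subproblems are minimized by a derivative-free compass (generating set) search, with the step-length control parameter doubling as the derivative-free stopping criterion mentioned in the abstract. The quadratic test problem and all parameter choices are invented, and for simplicity the single linear constraint is folded into the augmented Lagrangian rather than kept explicit in the subproblem as in the actual method:

```python
def compass_search(f, x, step=0.5, tol=1e-6):
    """Derivative-free generating set (compass) search along +/- coordinate
    directions; terminates when the step length falls below tol."""
    x = list(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step /= 2.0          # step-length control = stopping criterion
    return x

def augmented_lagrangian(f, c, x0, mu=1.0, lam=0.0, iters=10):
    """Minimize f subject to c(x) = 0 via successive augmented Lagrangians,
    each solved derivative-free by compass search (illustrative sketch)."""
    x = list(x0)
    for _ in range(iters):
        L = lambda z: f(z) + lam * c(z) + 0.5 * mu * c(z) ** 2
        x = compass_search(L, x)
        lam += mu * c(x)         # first-order multiplier update
        mu *= 2.0                # tighten the penalty
    return x

# Example: min (x0-1)^2 + (x1-2)^2  s.t.  x0 + x1 = 1; minimizer is (0, 1).
sol = augmented_lagrangian(lambda z: (z[0] - 1) ** 2 + (z[1] - 2) ** 2,
                           lambda z: z[0] + z[1] - 1,
                           [0.0, 0.0])
```

Note that no derivative of f or c is ever evaluated; stationarity is inferred from the failure of all generating set directions at the current step length, which is exactly the substitute stopping test the abstract describes.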
Cadmium-hazard mapping using a general linear regression model (Irr-Cad) for rapid risk assessment.
Simmons, Robert W; Noble, Andrew D; Pongsakul, P; Sukreeyapongse, O; Chinabut, N
2009-02-01
Research undertaken over the last 40 years has identified the irrefutable relationship between the long-term consumption of cadmium (Cd)-contaminated rice and human Cd disease. In order to protect public health and livelihood security, the ability to accurately and rapidly determine spatial Cd contamination is of high priority. During 2001-2004, a general linear regression model, Irr-Cad, was developed to predict the spatial distribution of soil Cd in a Cd/Zn co-contaminated cascading irrigated rice-based system in Mae Sot District, Tak Province, Thailand (longitude E 98° 59'-E 98° 63' and latitude N 16° 67'-N 16° 66'). The results indicate that Irr-Cad accounted for 98% of the variance in mean Field Order total soil Cd. Preliminary validation indicated that the Irr-Cad 'predicted' mean Field Order total soil Cd was significantly (p < 0.001) correlated (R² = 0.92) with 'observed' mean Field Order total soil Cd values. Field Order is determined by a given field's proximity to primary outlets from in-field irrigation channels and subsequent inter-field irrigation flows. This in turn determines Field Order in Irrigation Sequence (Field Order(IS)). Mean Field Order total soil Cd represents the mean total soil Cd (aqua regia-digested) for a given Field Order(IS). In 2004-2005, Irr-Cad was utilized to evaluate the spatial distribution of total soil Cd in a 'high-risk' area of Mae Sot District. Secondary validation on six randomly selected field groups verified that Irr-Cad predicted mean Field Order total soil Cd was significantly (p < 0.001) correlated with the observed mean Field Order total soil Cd, with R² values ranging from 0.89 to 0.97. The practical applicability of Irr-Cad lies in its minimal input requirements, namely the classification of fields in terms of Field Order(IS), strategic sampling of all primary fields, laboratory-based determination of total soil Cd (T-Cd(P)) and the use of a weighted coefficient for Cd (Coeff
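At its core this is a regression of mean Field Order total soil Cd against position in the irrigation sequence. A generic least-squares sketch of that core (the numbers below are invented for illustration and are not Mae Sot data; Irr-Cad's actual predictors and coefficients are in the paper):

```python
def ols(xs, ys):
    """Simple least-squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def r_squared(xs, ys, a, b):
    """Coefficient of determination for the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Invented example: mean total soil Cd (mg/kg) declining with Field Order
# in irrigation sequence, mimicking dilution down the cascade.
order = [1, 2, 3, 4, 5, 6]
cd = [72.0, 55.0, 40.0, 31.0, 24.0, 18.0]
a, b = ols(order, cd)
r2 = r_squared(order, cd, a, b)
```

The negative slope captures the physical picture in the abstract: fields closest to the primary irrigation outlets receive the most contaminated sediment, so Cd declines systematically with Field Order(IS).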
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general-purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimal input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors, in the least-squares sense, at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions at assembly time.
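The displacement method the program implements reduces, in its simplest one-dimensional case, to assembling element stiffness matrices into K and solving K u = f. A textbook-style sketch for an axial bar with linear elements (not the documented program; dimensions and loads are invented):

```python
def solve(A, b):
    """Naive Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def bar_fem(n_el, L, EA, P):
    """Axial bar fixed at x=0 with tip load P: assemble linear-element
    stiffness matrices and solve for the nodal displacements."""
    k = EA * n_el / L                       # stiffness of one element
    n = n_el + 1
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_el):                   # assemble 2x2 element matrices
        for (i, j, v) in ((e, e, k), (e, e + 1, -k),
                          (e + 1, e, -k), (e + 1, e + 1, k)):
            K[i][j] += v
    f = [0.0] * n
    f[-1] = P
    Kr = [row[1:] for row in K[1:]]         # impose u0 = 0 by reduction
    ur = solve(Kr, f[1:])
    return [0.0] + ur

u = bar_fem(4, 1.0, 100.0, 10.0)   # exact answer here is u(x) = P*x/(EA)
```

For this constant-force case the piecewise linear interpolation is exact, so the tip displacement equals PL/EA regardless of mesh size; for distributed loads the same assembly gives the monotonic stiff-side convergence the abstract describes.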
NASA Astrophysics Data System (ADS)
Sharma, S.; Narayan, A.
2001-06-01
The non-linear oscillation of an inter-connected satellites system about its equilibrium position in the neighbourhood of the main resonance ?? = 1, under the combined effects of solar radiation pressure and dissipative forces of a general nature, is discussed. It is found that the oscillation of the system is disturbed when the frequency of the natural oscillation approaches the resonance frequency.
NASA Technical Reports Server (NTRS)
Rowe, Sidney E.
2010-01-01
In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage, and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS), based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD uses partially dimensioned drawings as auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release, allowing the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments, and initial application to the Upper Stage design completion. Some of the high-value examples are reviewed.
Kim, Hyunwoo J.; Adluru, Nagesh; Collins, Maxwell D.; Chung, Moo K.; Bendlin, Barbara B.; Johnson, Sterling C.; Davidson, Richard J.; Singh, Vikas
2014-01-01
Linear regression is a parametric model which is ubiquitous in scientific analysis. The classical setup where the observations and responses, i.e., (xi, yi) pairs, are Euclidean is well studied. The setting where yi is manifold valued is a topic of much interest, motivated by applications in shape analysis, topic modeling, and medical imaging. Recent work gives strategies for max-margin classifiers, principal components analysis, and dictionary learning on certain types of manifolds. For parametric regression specifically, results within the last year provide mechanisms to regress one real-valued parameter, xi ∈ R, against a manifold-valued variable, yi ∈ M. We seek to substantially extend the operating range of such methods by deriving schemes for multivariate multiple linear regression: a manifold-valued dependent variable against multiple independent variables, i.e., f : R^n → M. Our variational algorithm efficiently solves for multiple geodesic bases on the manifold concurrently via gradient updates. This allows us to answer questions such as: what is the relationship of the measurement at voxel y to disease when conditioned on age and gender? We show applications to statistical analysis of diffusion weighted images, which give rise to regression tasks on the manifold GL(n)/O(n) for diffusion tensor images (DTI) and the Hilbert unit sphere for orientation distribution functions (ODF) from high angular resolution acquisition. The companion open-source code is available on nitrc.org/projects/riem_mglm. PMID:25580070
Wu, Jian; Wen, Qiuting
2008-01-01
An optical positioning system is an important part of a computer-aided surgery system. Building on previous research on a three-linear-CCD positioning system prototype, this paper proposes a new way to implement three-dimensional coordinate reconstruction of a marker in a digital signal processor (DSP) rather than in a computer as before. Experiments were designed to calculate the markers' three-dimensional coordinates in the DSP chip and in the computer respectively; the reconstruction results showed that the calculation precision in the DSP chip and in the computer had no difference within a 0.01 mm error limit. Furthermore, implementing the three-dimensional coordinate reconstruction in the DSP chip can improve the stability of the optical positioning system and make the calculation largely independent of external hardware, no longer depending on computer processing as before. PMID:19163161
NASA Astrophysics Data System (ADS)
Tsuboi, Zengo
2013-05-01
In [1] (Z. Tsuboi, Nucl. Phys. B 826 (2010) 399, arXiv:0906.2039), we proposed Wronskian-like solutions of the T-system for the [M, N]-hook of the general linear superalgebra gl(M|N). We have generalized these Wronskian-like solutions to the ones for the general T-hook, which is a union of an [M1, N1]-hook and an [M2, N2]-hook (M = M1 + M2, N = N1 + N2). These solutions are related to Weyl-type supercharacter formulas of infinite dimensional unitarizable modules of gl(M|N). Our solutions also include a Wronskian-like solution discussed in [2] (N. Gromov, V. Kazakov, S. Leurent, Z. Tsuboi, JHEP 1101 (2011) 155, arXiv:1010.2720) in relation to the AdS5/CFT4 spectral problem.
Principles of models based engineering
Dolin, R.M.; Hefele, J.
1996-11-01
This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.
NASA Technical Reports Server (NTRS)
Ustino, Eugene A.
2006-01-01
This slide presentation reviews observable radiances as functions of atmospheric and surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs); and the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed: one of the forward RT problem and one of the adjoint RT problem, from which all WFs and PDs of interest can be computed. In this presentation we discuss applications of both the linearization and adjoint approaches.
NASA Astrophysics Data System (ADS)
Ramadan, Omar
2015-09-01
In this paper, systematic wave-equation finite difference time domain (WE-FDTD) formulations are presented for modeling electromagnetic wave-propagation in linear and nonlinear dispersive materials. In the proposed formulations, the complex conjugate pole residue (CCPR) pairs model is adopted in deriving a unified dispersive WE-FDTD algorithm that allows modeling different dispersive materials, such as Debye, Drude and Lorentz, in the same manner with the minimal additional auxiliary variables. Moreover, the proposed formulations are incorporated with the wave-equation perfectly matched layer (WE-PML) to construct a material independent mesh truncating technique that can be used for modeling general frequency-dependent open region problems. Several numerical examples involving linear and nonlinear dispersive materials are included to show the validity of the proposed formulations.
Beresten, S F; Rubikaite, B I; Kisselev, L L
1988-10-26
A method is proposed which permits the localization of antigenic determinants of a linear type on the polypeptide chain of a protein molecule of unknown primary structure. An antigen modified with maleic anhydride at the amino-terminal groups and at the epsilon-NH2 groups of lysine residues was subjected to partial enzymic digestion, so that the antigenic protein had, on average, less than one cleavage site per polypeptide chain. The resultant ends were labeled with 125I-labeled Bolton and Hunter reagent and the maleic group removed. The detection of the two larger labeled fragments (a longer one which still could bind to a monoclonal antibody and a shorter one which was incapable of binding) made it possible to determine the distance from the antigenic determinant to the C-terminus of the polypeptide chain. The position of the antigenic determinant could be established in more detail using partial chemical degradation of the original antigen using information about the maximal length of a fragment which has lost its ability to interact with the monoclonal antibody. The method has been applied to bovine tryptophanyl-tRNA synthetase (EC 6.1.1.2). PMID:2459255
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1987-01-01
During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N×N matrix (e.g., exp(A), sin(A), A^(-1)). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly; usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). The problem is reduced to approximating f(z) by a polynomial in z, where z belongs to a domain D in the complex plane which includes all the eigenvalues of A. This approximation problem is approached by interpolating the function f(z) at a certain set of points which is known to have some maximal properties. The approximation thus achieved is almost best. Implementation of the algorithm for some practical problems is described. Since the solution to a linear system Ax = b is x = A^(-1) b, an iterative solution to it can be regarded as a polynomial approximation to f(A) = A^(-1). Implementing the algorithm in this case is also described.
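The idea of applying f(A) to v through a polynomial, using only matrix-vector products, can be sketched as follows. This illustrative version interpolates f at Chebyshev points on the real spectral interval of a symmetric A and evaluates the polynomial by Horner's rule; the paper's interpolation sets and complex-domain treatment are more general.

```python
import numpy as np

def cheb_nodes(a, b, k):
    # Chebyshev interpolation points mapped to [a, b]; a near-optimal node
    # set for polynomial interpolation on a real interval.
    j = np.arange(k + 1)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * j + 1) * np.pi / (2 * (k + 1)))

def poly_fA_times_v(f, A, v, degree=10):
    # Approximate f(A) v without ever forming f(A): interpolate f on an
    # interval containing the spectrum of symmetric A, then evaluate the
    # interpolating polynomial at A acting on v via Horner's rule, so only
    # matrix-vector products with A are needed.
    evals = np.linalg.eigvalsh(A)
    a, b = evals.min(), evals.max()
    x = cheb_nodes(a, b, degree)
    c = np.polyfit(x, f(x), degree)     # coefficients, highest power first
    y = c[0] * v
    for ck in c[1:]:
        y = A @ y + ck * v
    return y

# Check against the exact f(A) v from an eigendecomposition.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = 0.1 * (B + B.T)                     # small symmetric test matrix
v = rng.standard_normal(6)
w, Q = np.linalg.eigh(A)
exact = Q @ (np.exp(w) * (Q.T @ v))
approx = poly_fA_times_v(np.exp, A, v, degree=10)
```

With f = 1/z on an interval excluding the origin, the same routine yields a polynomial iterative solver for Ax = b, which is the connection the abstract draws.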
Wang, Ching-Yun; Dieu Tapsoba, Jean De; Duggan, Catherine; Campbell, Kristin L; McTiernan, Anne
2016-05-10
In many biomedical studies, covariates of interest may be measured with errors. Frequently, however, the quantiles of the exposure variable are used as the covariates in a regression analysis. Because of measurement errors in the continuous exposure variable, there can be misclassification in the quantiles of the exposure variable, which can lead to biased estimation of the association between the exposure variable and the outcome variable. Adjustment for misclassification is challenging when gold standard variables are not available. In this paper, we develop two regression calibration estimators to reduce bias in effect estimation. The first estimator is normal likelihood-based. The second estimator is linearization-based, and it provides a simple and practical correction. Finite sample performance is examined via a simulation study. We apply the methods to a four-arm randomized clinical trial that tested exercise and weight loss interventions in women aged 50-75 years. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26593772
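The attenuation that motivates regression calibration, and the linear correction itself, can be shown in a small simulation. This sketch uses the classical additive-error model with known error variance; the paper's estimators address the harder quantile-misclassification setting.

```python
import numpy as np

# Classical measurement error: W = X + U. Regressing Y on W attenuates the
# slope by the reliability ratio lambda = var(X) / var(W); regression
# calibration replaces W by E[X | W], undoing the attenuation.
rng = np.random.default_rng(2)
n, beta, su2 = 20000, 1.5, 0.5
X = rng.normal(0.0, 1.0, n)                    # true exposure (unobserved)
W = X + rng.normal(0.0, np.sqrt(su2), n)       # error-prone measurement
Y = beta * X + rng.normal(0.0, 0.2, n)

def slope(x, y):
    return np.cov(x, y, ddof=0)[0, 1] / np.var(x)

naive = slope(W, Y)                            # attenuated toward zero
lam = (np.var(W) - su2) / np.var(W)            # reliability (su2 assumed known)
X_hat = W.mean() + lam * (W - W.mean())        # calibrated E[X | W]
calibrated = slope(X_hat, Y)
```

With var(X) = 1 and var(U) = 0.5, the naive slope shrinks toward beta·2/3 = 1.0, while the calibrated slope recovers roughly the true beta = 1.5.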
Lipparini, Filippo; Scalmani, Giovanni; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Frisch, Michael J; Mennucci, Benedetta
2014-11-14
We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute. PMID:25399133
NASA Astrophysics Data System (ADS)
Harko, T.; Mak, M. K.
2016-09-01
Obtaining exact solutions of the spherically symmetric general relativistic gravitational field equations describing the interior structure of an isotropic fluid sphere is a long-standing problem in theoretical and mathematical physics. The usual approach to this problem consists mainly in the numerical investigation of the Tolman-Oppenheimer-Volkoff and of the mass continuity equations, which describe the hydrostatic stability of dense stars. In the present paper we introduce an alternative approach for the study of the relativistic fluid sphere, based on the relativistic mass equation, obtained by eliminating the energy density in the Tolman-Oppenheimer-Volkoff equation. Despite its apparent complexity, the relativistic mass equation can be solved exactly by using a power series representation for the mass, and the Cauchy convolution for infinite power series. We obtain exact series solutions for general relativistic dense astrophysical objects described by the linear barotropic and the polytropic equations of state, respectively. For the polytropic case we obtain the exact power series solution corresponding to arbitrary values of the polytropic index n. The explicit form of the solution is presented for the polytropic index n=1, and for the indices n=1/2 and n=1/5, respectively. The case of n=3 is also considered. In each case the exact power series solution is compared with the exact numerical solutions, which are reproduced by the power series solutions truncated to seven terms only. The power series representations of the geometric and physical properties of the linear barotropic and polytropic stars are also obtained.
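The Cauchy convolution of power series that the solution method relies on is straightforward to implement. A minimal illustrative sketch (deliberately unrelated to the specific stellar-structure equations), verified on a series identity:

```python
import numpy as np

def cauchy_product(a, b):
    # Cauchy convolution of two power-series coefficient arrays:
    # c_n = sum_{k=0}^{n} a_k * b_{n-k}, truncated to the shared length.
    n = min(len(a), len(b))
    return np.array([sum(a[k] * b[i - k] for k in range(i + 1))
                     for i in range(n)])

# Check: squaring the series of exp(x) must give the series of exp(2x),
# whose coefficients are 2**n / n!.
N = 12
fact = np.cumprod([1.0] + list(range(1, N)))   # 0!, 1!, ..., 11!
e = 1.0 / fact                                 # coefficients of exp(x)
e2 = cauchy_product(e, e)
target = 2.0 ** np.arange(N) / fact
```

In the paper's setting the same convolution recurrence lets one multiply the unknown mass series by itself and by the equation-of-state series term by term, turning the nonlinear ODE into an algebraic recurrence for the coefficients.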
Model based control of polymer composite manufacturing processes
NASA Astrophysics Data System (ADS)
Potaraju, Sairam
2000-10-01
The objective of this research is to develop tools that help process engineers design, analyze and control polymeric composite manufacturing processes to achieve higher productivity and cost reduction. Current techniques for process design and control of composite manufacturing suffer from the paucity of good process models that can accurately represent these non-linear systems. Existing models developed by researchers in the past are designed to be process and operation specific, so generating new simulation models is time-consuming and requires significant effort. To address this issue, an Object Oriented Design (OOD) approach is used to develop a component-based model building framework. Process models for two commonly used industrial processes (Injected Pultrusion and Autoclave Curing) are developed using this framework to demonstrate its flexibility. Steady state and dynamic validation of this simulator is performed using a bench scale injected pultrusion process. This simulator could not be implemented online for control due to computational constraints. Models that are fast enough for online implementation, with nearly the same degree of accuracy, are developed using a two-tier scheme. First, lower dimensional models that capture the essential resin flow, heat transfer and cure kinetics important from a process monitoring and control standpoint are formulated. The second step is to reduce these low dimensional models to Reduced Order Models (ROM) suited for online model based estimation, control and optimization. Model reduction is carried out using the Proper Orthogonal Decomposition (POD) technique in conjunction with a Galerkin formulation procedure. Subsequently, a nonlinear model-based estimation and inferential control scheme based on the ROM is implemented. In particular, this research work contributes in the following general areas: (1) Design and implementation of versatile frameworks for modeling and simulation of manufacturing processes using object
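The POD step of the model-reduction scheme works, in its simplest form, like this: assemble solution snapshots as columns of a matrix, take an SVD, and keep the leading left singular vectors as a reduced basis onto which the full model is Galerkin-projected. A minimal sketch on synthetic snapshot data (illustrative, not the dissertation's pultrusion model):

```python
import numpy as np

# Proper Orthogonal Decomposition: the leading left singular vectors of the
# snapshot matrix are the energy-optimal basis for the observed dynamics.
rng = np.random.default_rng(3)
nx, nt, r = 100, 60, 5
x = np.linspace(0, 1, nx)
# Snapshots: a few smooth spatial modes with random time amplitudes.
modes = np.array([np.sin((k + 1) * np.pi * x) for k in range(r)]).T  # (nx, r)
amps = rng.standard_normal((r, nt))
S = modes @ amps                               # snapshot matrix (nx, nt)

U, s, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :r]                                 # truncated POD basis
S_rom = Phi @ (Phi.T @ S)                      # rank-r reconstruction
rel_err = np.linalg.norm(S - S_rom) / np.linalg.norm(S)
```

In a Galerkin ROM the same basis Phi projects the governing operators, so the online model evolves only r amplitudes instead of nx grid values, which is what makes online estimation and control feasible.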
Argumentation in Science Education: A Model-Based Framework
ERIC Educational Resources Information Center
Bottcher, Florian; Meisert, Anke
2011-01-01
The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons…
Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo
2015-01-01
Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency in treating medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and eligible methods. © 2015 Wiley Periodicals, Inc. PMID:26345131
Efficient Model-Based Diagnosis Engine
NASA Technical Reports Server (NTRS)
Fijany, Amir; Vatan, Farrokh; Barrett, Anthony; James, Mark; Mackey, Ryan; Williams, Colin
2009-01-01
An efficient diagnosis engine - a combination of mathematical models and algorithms - has been developed for identifying faulty components in a possibly complex engineering system. This model-based diagnosis engine embodies a twofold approach to reducing, relative to prior model-based diagnosis engines, the amount of computation needed to perform a thorough, accurate diagnosis. The first part of the approach involves a reconstruction of the general diagnostic engine to reduce the complexity of the mathematical-model calculations and of the software needed to perform them. The second part of the approach involves algorithms for computing a minimal diagnosis (the term "minimal diagnosis" is defined below). A somewhat lengthy background discussion is prerequisite to a meaningful summary of the innovative aspects of the present efficient model-based diagnosis engine. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD (see figure). Diagnosis - the task of finding faulty components - is reduced to finding those components, the abnormalities of which could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. A minimal diagnosis stands in contradistinction to the trivial solution, in which all components are deemed to be faulty, and which, therefore, always explains all inconsistencies.
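The minimal-diagnosis notion described above can be made concrete with a brute-force minimal hitting-set search over conflict sets. This is a toy sketch for illustration only; practical diagnosis engines, including the one described here, use far more efficient algorithms.

```python
from itertools import combinations

def minimal_diagnoses(conflicts, components):
    # Reiter-style diagnosis: each conflict is a set of components that
    # cannot all be healthy given the observations. A diagnosis is a hitting
    # set of all conflicts; enumerating candidates smallest-first and
    # discarding supersets of already-found diagnoses yields exactly the
    # minimal diagnoses.
    diagnoses = []
    for size in range(len(components) + 1):
        for cand in combinations(sorted(components), size):
            cset = set(cand)
            if all(cset & c for c in conflicts):
                if not any(d <= cset for d in diagnoses):
                    diagnoses.append(cset)
    return diagnoses

# Hypothetical example: two conflicts derived from observed inconsistencies.
conflicts = [{"A1", "A2", "X1"}, {"A1", "X2"}]
components = {"A1", "A2", "X1", "X2"}
result = minimal_diagnoses(conflicts, components)
```

Here {A1} alone explains both inconsistencies, so it is a minimal diagnosis; {A2, X2} and {X1, X2} are the other minimal candidates, while the trivial "everything is faulty" solution is excluded by minimality.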
NASA Astrophysics Data System (ADS)
Harko, Tiberiu; Liang, Shi-Dong
2016-06-01
We investigate the connection between the linear harmonic oscillator equation and some classes of second order nonlinear ordinary differential equations of Liénard and generalized Liénard type, which physically describe important oscillator systems. By using a method inspired by quantum mechanics, which consists in the deformation of the phase space coordinates of the harmonic oscillator, we generalize the equation of motion of the classical linear harmonic oscillator to several classes of strongly nonlinear differential equations. The first integrals, and a number of exact solutions of the corresponding equations, are explicitly obtained. The devised method can be further generalized to derive explicit general solutions of nonlinear second order differential equations unrelated to the harmonic oscillator. Applications of the obtained results to the study of the travelling wave solutions of reaction-convection-diffusion equations, and of the large amplitude free vibrations of a uniform cantilever beam, are also presented.
NASA Astrophysics Data System (ADS)
Pulquério, Mário; Garrett, Pedro; Santos, Filipe Duarte; Cruz, Maria João
2015-04-01
Portugal is in a climate change hot spot region, where precipitation is expected to decrease, with important impacts on future water availability. As one of the European countries most affected by droughts in recent decades, it is important to assess how future precipitation regimes will change in order to study the impacts on water resources. Due to the coarse scale of global circulation models, it is often necessary to downscale climate variables to the regional or local scale using statistical and/or dynamical techniques. In this study, we tested the use of a generalized linear model, as implemented in the program GLIMCLIM, to downscale precipitation for central Portugal, where the Tagus basin is located. An analysis of the method's performance is presented, as well as an evaluation of future precipitation trends and extremes for the twenty-first century. Additionally, we perform the first analysis of the evolution of droughts under climate change scenarios using the Standardized Precipitation Index in the study area. Results show that GLIMCLIM is able to capture the precipitation's interannual variation and seasonality correctly. However, summer precipitation is considerably overestimated. Additionally, precipitation extremes are in general well recovered, but high daily rainfall may be overestimated, and dry spell lengths are not correctly recovered by the model. Downscaled projections show a reduction in precipitation of between 19% and 28% at the end of the century. Results indicate that precipitation extremes will decrease and the magnitude of droughts can increase up to three times relative to the 1961-1990 period, which can have strong ecological, social, and economic impacts.
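The Standardized Precipitation Index used for the drought analysis maps accumulated precipitation through a fitted distribution into standard-normal units. The sketch below uses a rank-based empirical transform as an illustrative shortcut; the operational SPI fits a gamma distribution to the accumulation series instead.

```python
import numpy as np
from statistics import NormalDist

def spi_empirical(precip):
    # Simplified SPI: push the empirical CDF of the precipitation series
    # through the inverse standard normal. Negative values indicate drier
    # than median conditions, positive values wetter than median.
    p = np.asarray(precip, dtype=float)
    n = len(p)
    ranks = p.argsort().argsort() + 1          # 1..n (assumes no ties)
    cdf = (ranks - 0.5) / n                    # plotting-position estimate
    nd = NormalDist()
    return np.array([nd.inv_cdf(c) for c in cdf])

# Hypothetical 30-year monthly precipitation series (gamma-like totals).
rng = np.random.default_rng(4)
monthly = rng.gamma(shape=2.0, scale=30.0, size=360)
spi = spi_empirical(monthly)
```

The transform is monotone, so the driest month receives the most negative SPI; thresholds such as SPI < -1 then delimit drought episodes whose magnitude can be compared across scenario periods.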
Bacheler, N.M.; Hightower, J.E.; Burdick, S.M.; Paramore, L.M.; Buckel, J.A.; Pollock, K.H.
2010-01-01
Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated. © 2009 Elsevier B.V.
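Generalized linear models of this kind are typically fit by iteratively reweighted least squares (IRLS). The sketch below fits a dome-shaped selectivity curve, quadratic in fish length on the logit scale, to simulated capture data; all parameters and the data are illustrative, not the red drum analysis.

```python
import numpy as np

def irls_logistic(X, y, iters=25):
    # Fit a binomial GLM with logit link by iteratively reweighted least
    # squares, the standard fitting algorithm behind glm()-style software.
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -30, 30)
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu) + 1e-10
        z = eta + (y - mu) / w                       # working response
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
    return beta

# Dome-shaped selectivity: capture probability quadratic in length on the
# logit scale, peaking near length 50 (hypothetical gear).
rng = np.random.default_rng(5)
length = rng.uniform(20, 80, 4000)
eta_true = -12.0 + 0.5 * length - 0.005 * length ** 2
p = 1.0 / (1.0 + np.exp(-eta_true))
y = (rng.uniform(size=len(p)) < p).astype(float)

L = (length - 50.0) / 10.0                           # center/scale the covariate
X = np.column_stack([np.ones_like(L), L, L ** 2])
beta_hat = irls_logistic(X, y)
peak = 50.0 + 10.0 * (-beta_hat[1] / (2.0 * beta_hat[2]))
```

A negative fitted quadratic coefficient confirms the dome shape, and the vertex of the fitted parabola estimates the length of peak selectivity; AIC comparisons among candidate designs would proceed from the fitted log-likelihoods.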
Model-based tomographic reconstruction
Chambers, David H.; Lehman, Sean K.; Goodman, Dennis M.
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
Yue, Yu Ryan; Wang, Xiao-Feng
2016-05-10
This paper is motivated by a retrospective study of the impact of vitamin D deficiency on the clinical outcomes of critically ill patients in multi-center critical care units. The primary predictors of interest, vitamin D2 and D3 levels, are censored at a known detection limit. Within the context of generalized linear mixed models, we investigate statistical methods to handle multiple censored predictors in the presence of auxiliary variables. A Bayesian joint modeling approach is proposed to fit the complex heterogeneous multi-center data, in which the data information is fully used to estimate parameters of interest. Efficient Markov chain Monte Carlo algorithms are specifically developed depending on the nature of the response. Simulation studies demonstrate that the proposed Bayesian approach outperforms other existing methods. An application to the data set from the vitamin D deficiency study is presented. Possible extensions of the method regarding the absence of auxiliary variables, semiparametric models, as well as the type of censoring are also discussed. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26643287
Bishop, Christopher M.
2013-01-01
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612
NASA Technical Reports Server (NTRS)
Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.
2006-01-01
System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.
Qualitative model-based diagnosis using possibility theory
NASA Technical Reports Server (NTRS)
Joslyn, Cliff
1994-01-01
The potential for the use of possibility in the qualitative model-based diagnosis of spacecraft systems is described. The first sections of the paper briefly introduce the Model-Based Diagnostic (MBD) approach to spacecraft fault diagnosis; Qualitative Modeling (QM) methodologies; and the concepts of possibilistic modeling in the context of Generalized Information Theory (GIT). Then the necessary conditions for the applicability of possibilistic methods to qualitative MBD, and a number of potential directions for such an application, are described.
Model-based reconfiguration: Diagnosis and recovery
NASA Technical Reports Server (NTRS)
Crow, Judy; Rushby, John
1994-01-01
We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.
Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M. Pilar
2016-01-01
Socio-economic factors are of key importance during all phases of wildfire management, including prevention, suppression and restoration. However, modeling these factors at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain, a good example of how human activities play a key role in wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine-learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and in the 2000s to identify changes between the two periods in the socio-economic drivers affecting wildfire occurrence. GLM base their estimation on wildfire presence-absence observations, whereas Maxent uses wildfire presence-only data. According to indicators such as sensitivity and commission error, Maxent outperformed GLM in both periods: it achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s, whereas GLM obtained 23.33%, 64.97%, 9.41% and 18.34%, respectively. However, GLM performed more consistently than Maxent in terms of overall fit. Both models explained wildfires from predictors such as population density and Wildland-Urban Interface (WUI), but differed in their relative contribution. As a result of urban sprawl and the abandonment of rural areas, predictors such as WUI and distance to roads increased their contribution to both models in the 2000s, whereas the influence of the Forest-Grassland Interface (FGI) decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension to integrate it into wildfire risk assessment. PMID:27557113
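The GLM side of this comparison is, at its core, a logistic (logit-link) regression on presence/absence labels. The sketch below fits one on synthetic data by plain gradient ascent; the predictor, coefficients, and sample size are invented for illustration, not the study's variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Design matrix: intercept plus one standardized predictor
# (stand-in for something like population density or WUI).
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_w = np.array([-0.5, 1.5])                       # invented coefficients
p_true = 1.0 / (1.0 + np.exp(-X @ true_w))
y = (rng.random(n) < p_true).astype(float)           # presence/absence labels

# Maximum-likelihood fit via gradient ascent on the Bernoulli log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n

# w lands near the generating coefficients (-0.5, 1.5), up to sampling error.
```

Maxent, by contrast, is fitted from presence-only records against background samples, which is why the two methods need different data and can rank predictors differently.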
Chakraborty, Hrishikesh; Helms, Ronald W; Sen, Pranab K; Cohen, Myron S
2003-05-15
Estimating the correlation coefficient between two outcome variables is one of the most important aspects of epidemiological and clinical research. A simple Pearson's correlation coefficient is usually employed when there are complete, independent data points for both outcome variables. However, researchers often deal with correlated observations in a longitudinal setting with missing values, where a simple Pearson's correlation coefficient cannot be used. General linear mixed model (GLMM) techniques were used to estimate correlation coefficients in a longitudinal data set with missing values. A random regression mixed model with an unstructured covariance matrix was employed to estimate correlation coefficients between concentrations of HIV-1 RNA in blood and seminal plasma. The effects of CD4 count and antiretroviral therapy were also examined. We used data sets from three different centres (650 samples from 238 patients) where blood and seminal plasma HIV-1 RNA concentrations were collected from patients; 137 samples from 90 patients without antiviral therapy and 513 samples from 148 patients receiving therapy were considered for analysis. We found no significant correlation between blood and semen HIV-1 RNA concentrations in the absence of antiviral therapy. However, a moderate correlation between blood and semen HIV-1 RNA was observed among subjects with lower CD4 counts receiving therapy. Our findings confirm and extend the idea that the concentration of HIV-1 in semen often differs from the HIV-1 concentration in blood. Antiretroviral therapy administered to subjects with low CD4 counts results in sufficient concomitant reduction of HIV-1 in blood and semen so as to improve the correlation between these compartments. These results have important implications for studies related to the sexual transmission of HIV and the development of HIV prevention strategies. PMID:12704609
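The missing-data problem the GLMM addresses can be made concrete with a much simpler stand-in: a Pearson correlation computed over pairwise-complete observations. The sketch below (with invented viral-load numbers) handles the gaps but, unlike the mixed model, ignores the within-patient correlation that motivates the paper's approach.

```python
import numpy as np

def pairwise_pearson(x, y):
    """Pearson correlation using only pairs where both measurements exist.
    A simplified stand-in for the GLMM: it tolerates missing values but
    does not model repeated measures on the same patient."""
    mask = ~(np.isnan(x) | np.isnan(y))
    return float(np.corrcoef(x[mask], y[mask])[0, 1])

# Hypothetical paired log10 viral-load measurements with gaps:
blood = np.array([4.1, 3.8, np.nan, 5.0, 4.4, 3.2])
semen = np.array([3.5, np.nan, 2.9, 4.6, 3.9, 2.8])
r = pairwise_pearson(blood, semen)   # computed on the 4 complete pairs
```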
Model-based reasoning: Troubleshooting
NASA Astrophysics Data System (ADS)
Davis, Randall; Hamscher, Walter C.
1988-07-01
To determine why something has stopped working, it's useful to know how it was supposed to work in the first place. That simple observation underlies some of the considerable interest generated in recent years in the topic of model-based reasoning, particularly its application to diagnosis and troubleshooting. This paper surveys the current state of the art, reviewing areas that are well understood and exploring areas that present challenging research topics. It views the fundamental paradigm as the interaction of prediction and observation, and explores it by examining three fundamental subproblems: generating hypotheses by reasoning from a symptom to a collection of components whose misbehavior may plausibly have caused that symptom; testing each hypothesis to see whether it can account for all available observations of device behavior; then discriminating among the ones that survive testing. We analyze each of these independently at the knowledge level, i.e., attempting to understand what reasoning capabilities arise from the different varieties of knowledge available to the program. We find that while a wide range of apparently diverse model-based systems have been built for diagnosis and troubleshooting, they are for the most part variations on the central theme outlined here. Their diversity lies primarily in the varying amounts and kinds of knowledge they bring to bear at each stage of the process; the underlying paradigm is fundamentally the same.
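The generate-and-test half of that predict/observe loop can be sketched in a few lines. The device below is hypothetical and deliberately tiny (two components in series, both with observable outputs); a single-fault candidate survives if, with that component's behavior unconstrained, every other component's prediction still matches observation.

```python
def generate_and_test(inputs, observed):
    """Single-fault hypothesis testing for a toy two-component device:
    component A computes f = x + y, component B computes out = f * 2.
    A suspect survives if all *other* predictions match the observations."""
    x, y = inputs
    model = {
        "A": ("f", x + y),            # A's predicted value at test point f
        "B": ("out", (x + y) * 2),    # B's predicted final output
    }
    survivors = []
    for suspect in model:
        ok = all(observed[point] == value
                 for comp, (point, value) in model.items() if comp != suspect)
        if ok:
            survivors.append(suspect)
    return survivors

# With x=2, y=3 we predict f=5, out=10. Observing f=5 but out=12
# exonerates A (its prediction held) and leaves B as the suspect:
suspects = generate_and_test((2, 3), {"f": 5, "out": 12})   # -> ['B']
```

Real systems differ mainly, as the survey notes, in how much knowledge they bring to each of the three stages; the skeleton is the same.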
Leite-Martins, Liliana R; Mahú, Maria I M; Costa, Ana L; Mendes, Angelo; Lopes, Elisabete; Mendonça, Denisa M V; Niza-Ribeiro, João J R; de Matos, Augusto J F; da Costa, Paulo Martins
2014-11-01
Antimicrobial resistance (AMR) is a growing global public health problem, which is caused by the use of antimicrobials in both human and animal medical practice. The objectives of the present cross-sectional study were as follows: (1) to determine the prevalence of resistance in Escherichia coli isolated from the feces of pets from the Porto region of Portugal against 19 antimicrobial agents and (2) to assess the individual, clinical and environmental characteristics associated with each pet as risk markers for the AMR of the E. coli isolates. From September 2009 to May 2012, rectal swabs were collected from pets selected using a systematic random procedure from the ordinary population of animals attending the Veterinary Hospital of Porto University. A total of 78 dogs and 22 cats were sampled with the objective of isolating E. coli. The animals' owners, who allowed the collection of fecal samples from their pets, answered a questionnaire to collect information about the markers that could influence the AMR of the enteric E. coli. Chromocult tryptone bile X-glucuronide agar was used for E. coli isolation, and the disk diffusion method was used to determine the antimicrobial susceptibility. The data were analyzed using a multilevel, univariable and multivariable generalized linear mixed model (GLMM). Nearly half (49.7%) of the 396 isolates obtained in this study were multidrug-resistant. The E. coli isolates exhibited resistance to the antimicrobial agents ampicillin (51.3%), cephalothin (46.7%), tetracycline (45.2%) and streptomycin (43.4%). Previous quinolone treatment was the main risk marker for the presence of AMR for 12 (ampicillin, cephalothin, ceftazidime, cefotaxime, nalidixic acid, ciprofloxacin, gentamicin, tetracycline, streptomycin, chloramphenicol, trimethoprim-sulfamethoxazole and aztreonam) of the 15 antimicrobials assessed. Coprophagic habits were also positively associated with an increased risk of AMR for six drugs, ampicillin, amoxicillin
Model Based Reconstruction of UT Array Data
NASA Astrophysics Data System (ADS)
Calmon, P.; Iakovleva, E.; Fidahoussen, A.; Ribay, G.; Chatillon, S.
2008-02-01
Beyond the detection of defects, their characterization (identification, positioning, sizing) is a goal of great importance often assigned to the analysis of NDT data. In the case of ultrasonic testing, the first step of such analysis amounts to imaging the detected echoes within the part. This operation is generally achieved by considering times of flight and applying simplified algorithms that are often valid only in canonical situations. In this communication we present an overview of different imaging techniques studied at CEA LIST that are based on the exploitation of direct models, which make it possible to address complex configurations and are available in the CIVA software platform. We discuss in particular ray-model-based algorithms, algorithms derived from classical synthetic focusing, and processing of the full inter-element matrix (MUSIC algorithm).
Sequential Bayesian Detection: A Model-Based Approach
Sullivan, E J; Candy, J V
2007-08-13
Sequential detection theory has a long history: it evolved from Wald's work in the late 1940s, was followed by Middleton's classic exposition in the 1960s, and was coupled with the concurrent enabling technology of digital computer systems and the development of sequential processors. Its development, when coupled to modern sequential model-based processors, offers a reasonable way to attack physics-based problems. In this chapter, the fundamentals of sequential detection are reviewed from the Neyman-Pearson theoretical perspective and formulated for both linear and nonlinear (approximate) Gauss-Markov, state-space representations. We review the development of modern sequential detectors and incorporate the sequential model-based processors as an integral part of their solution. Motivated by a wealth of physics-based detection problems, we show how both linear and nonlinear processors can be seamlessly embedded into the sequential detection framework to provide a powerful approach to solving non-stationary detection problems.
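The Wald detector at the root of this chapter is short enough to sketch directly. The version below is the textbook sequential probability ratio test for a Gaussian mean shift (the state-space, model-based processors described above generalize exactly this recursion); thresholds follow the standard Wald approximations, and all numbers are illustrative.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's SPRT for H1: mean mu1 vs H0: mean mu0, Gaussian noise.
    Accumulate the log-likelihood ratio sample by sample and stop at
    the first threshold crossing."""
    upper = math.log((1 - beta) / alpha)    # cross upward  -> accept H1
    lower = math.log(beta / (1 - alpha))    # cross downward -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Per-sample log-likelihood ratio for equal-variance Gaussians:
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Constant on-mean data drifts the LLR by +0.5 per sample, so it crosses
# the upper threshold ln(19) ~ 2.944 at the sixth sample:
decision, n_used = sprt([1.0] * 20)   # -> ("H1", 6)
```

The attraction over a fixed-sample test is visible even here: the decision arrives as soon as the evidence suffices, not after a preset block length.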
Model based control of dynamic atomic force microscope.
Lee, Chibum; Salapaka, Srinivasa M
2015-04-01
A model-based robust control approach is proposed that significantly improves imaging bandwidth for dynamic-mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H(∞) control theory. This design yields a significant improvement when compared to conventional proportional-integral designs and is verified by experiments. PMID:25933864
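The advantage of exploiting a plant model can be seen even in a toy setting. The sketch below is emphatically not the paper's H(∞) design: it simulates an invented first-order linearized amplitude dynamics a' = (-a + g·u)/τ and compares a plain PI loop with a controller that inverts the known DC gain; every constant is made up for illustration.

```python
# Toy first-order amplitude dynamics, Euler-discretized.
tau, g, dt = 1e-3, 2.0, 1e-5            # invented time constant, gain, step

def step(a, u):
    return a + dt * (-a + g * u) / tau

def run(controller, n=2000, ref=1.0):
    """Simulate 20 ms of closed-loop response from rest; return final amplitude."""
    a, integ = 0.0, 0.0
    for _ in range(n):
        err = ref - a
        integ += err * dt
        a = step(a, controller(err, integ))
    return a

# Plain PI loop with invented gains.
pi_ctrl = lambda err, integ: 0.2 * err + 50.0 * integ
# "Model-based" loop: feedforward through the inverted DC gain (u = ref/g
# for ref = 1) plus a small proportional correction.
mb_ctrl = lambda err, integ: 1.0 / g + 0.2 * err

a_pi = run(pi_ctrl)   # still converging after 20 ms
a_mb = run(mb_ctrl)   # essentially settled at the setpoint
```

Because the model-based loop cancels the plant's DC gain outright, its closed-loop time constant is set by the correction term alone, while the PI loop must integrate its way to the same operating point.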
NASA Astrophysics Data System (ADS)
Hibbard, Bill
2012-05-01
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
Applying knowledge compilation techniques to model-based reasoning
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.
Sidorin, Anatoly
2010-01-05
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.
Model-based ocean acoustic passive localization. Revision 1
Candy, J.V.; Sullivan, E.J.
1994-06-01
A model-based approach is developed (theoretically) to solve the passive localization problem. Here the authors investigate the design of a model-based identifier for a shallow-water ocean acoustic problem characterized by a normal-mode model. They show how the processor can be structured to estimate the vertical wave numbers directly from measured pressure-field and sound-speed data, thereby eliminating the need for synthetic aperture processing or even a propagation model solution. Finally, they investigate various special cases of the source localization problem, designing a model-based localizer for each and evaluating the underlying structure with the expectation of gaining progressively more insight into the general problem.
ERIC Educational Resources Information Center
Walkiewicz, T. A.; Newby, N. D., Jr.
1972-01-01
A discussion of linear collisions between two or three objects is related to a junior-level course in analytical mechanics. The theoretical discussion uses a geometrical approach that treats elastic and inelastic collisions from a unified point of view. Experiments with a linear air track are described. (Author/TS)
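The unified elastic/inelastic treatment rests on the standard conservation-law result for one-dimensional collisions, which the air-track experiments verify. A short sketch of the elastic case:

```python
def elastic_1d(m1, v1, m2, v2):
    """Final velocities for a 1-D elastic collision, from conservation of
    momentum and kinetic energy:
        v1' = ((m1 - m2) v1 + 2 m2 v2) / (m1 + m2)
        v2' = ((m2 - m1) v2 + 2 m1 v1) / (m1 + m2)"""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# The classic air-track demonstration: equal masses exchange velocities.
v1f, v2f = elastic_1d(1.0, 2.0, 1.0, 0.0)   # -> (0.0, 2.0)
```

The perfectly inelastic limit replaces both formulas with the common velocity (m1·v1 + m2·v2)/(m1 + m2), which is what makes a single geometrical treatment of both cases possible.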
Model-based phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A model-based phase-shifting interferometer (MPI) is developed in which a novel calculation technique is proposed in place of the traditional complicated system structure, to achieve versatile, high-precision, quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace-error correction of the non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical surface tests are presented to validate the flexibility and accuracy. The test results are compared with those of a ZYGO interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI holds large potential in modern optical shop testing.
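The final step of expressing figure error in Zernike terms is a linear least-squares fit. The sketch below is hypothetical (it is not the MPI's ROR code and uses only four low-order terms on synthetic data), but it shows the mechanics: build the Zernike basis on the sampled pupil and solve for the coefficients.

```python
import numpy as np

def fit_zernike_low_order(rho, theta, w):
    """Fit piston, x/y tilt, and defocus to wavefront samples (rho, theta in
    polar pupil coordinates, w the measured wavefront) by least squares."""
    basis = np.column_stack([
        np.ones_like(rho),        # piston
        rho * np.cos(theta),      # tilt x
        rho * np.sin(theta),      # tilt y
        2 * rho**2 - 1,           # defocus
    ])
    coeffs, *_ = np.linalg.lstsq(basis, w, rcond=None)
    return coeffs

# Synthetic wavefront: 0.3 waves of defocus plus 0.1 waves of x-tilt,
# sampled at 400 random pupil points.
rng = np.random.default_rng(0)
rho = np.sqrt(rng.random(400))            # area-uniform radial sampling
theta = 2 * np.pi * rng.random(400)
w = 0.3 * (2 * rho**2 - 1) + 0.1 * rho * np.cos(theta)
c = fit_zernike_low_order(rho, theta, w)  # recovers ~[0, 0.1, 0, 0.3]
```

In practice the fit runs on the retrace-corrected wavefront and with many more terms, but the least-squares structure is the same.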