Science.gov

Sample records for generalized linear model-based

  1. A general linear model-based approach for inferring selection to climate

    PubMed Central

    2013-01-01

Background Many efforts have been made to detect signatures of positive selection in the human genome, especially those associated with expansion from Africa and subsequent colonization of all other continents. However, most approaches have not directly probed the relationship between the environment and patterns of variation among humans. We have designed a method to identify regions of the genome under selection based on Mantel tests conducted within a general linear model framework, which we call MAntel-GLM to Infer Clinal Selection (MAGICS). MAGICS explicitly incorporates population-specific and genome-wide patterns of background variation as well as information from environmental values to provide an improved picture of selection and its underlying causes in human populations. Results Our results significantly overlap with those obtained by other published methodologies, but MAGICS has several advantages. These include improvements that: limit false positives by reducing the number of independent tests conducted and by correcting for geographic distance, which we found to be a major contributor to selection signals; yield absolute rather than relative estimates of significance; identify specific geographic regions linked most strongly to particular signals of selection; and detect recent balancing as well as directional selection. Conclusions We find evidence of selection associated with climate (P < 10⁻⁵) in 354 genes, and among these observe a highly significant enrichment for directional positive selection. Two of our strongest 'hits', however, ADRA2A and ADRA2C, implicated in vasoconstriction in response to cold and pain stimuli, show evidence of balancing selection. Our results clearly demonstrate evidence of climate-related signals of directional and balancing selection. PMID:24053227
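
This record describes MAGICS only at a high level; its core building block, a Mantel permutation test between two distance matrices, can be sketched in Python as follows. The function name and inputs are illustrative, not taken from the paper, and the GLM layer MAGICS adds on top is omitted:

```python
import numpy as np

def mantel_test(dist_a, dist_b, n_perm=999, seed=0):
    """Correlation between two distance matrices, with significance
    assessed by jointly permuting rows and columns of one matrix."""
    rng = np.random.default_rng(seed)
    n = dist_a.shape[0]
    iu = np.triu_indices(n, k=1)               # use upper triangle only
    a, b = dist_a[iu], dist_b[iu]
    r_obs = np.corrcoef(a, b)[0, 1]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                 # permute rows/cols together
        r = np.corrcoef(dist_a[p][:, p][iu], b)[0, 1]
        if r >= r_obs:
            exceed += 1
    return r_obs, (exceed + 1) / (n_perm + 1)  # permutation p-value
```

In MAGICS, tests of this kind relate genetic distance to environmental (climate) distance, with the GLM framework correcting for geography and background variation.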

  2. Kalman estimator- and general linear model-based on-line brain activation mapping by near-infrared spectroscopy

    PubMed Central

    2010-01-01

Background Near-infrared spectroscopy (NIRS) is a non-invasive neuroimaging technique that recently has been developed to measure the changes of cerebral blood oxygenation associated with brain activities. To date, for functional brain mapping applications, there is no standard on-line method for analysing NIRS data. Methods In this paper, a novel on-line NIRS data analysis framework taking advantage of both the general linear model (GLM) and the Kalman estimator is devised. The Kalman estimator is used to update the GLM coefficients recursively, and one critical coefficient regarding brain activities is then passed to a t-statistical test. The t-statistical test result is used to update a topographic brain activation map. Meanwhile, a set of high-pass filters is incorporated into the GLM to remove very low-frequency noise, and an autoregressive (AR) model is used to account for the temporal correlation caused by physiological noise in NIRS time series. A set of data recorded in finger tapping experiments is studied using the proposed framework. Results The obtained results suggest that the method can effectively track the task-related brain activation areas and suppress noise distortion in the estimation while the experiment is running, thereby demonstrating the potential of the proposed method for real-time NIRS-based brain imaging. Conclusions This paper presents a novel on-line approach for analysing NIRS data for functional brain mapping applications. This approach demonstrates the potential of a real-time-updating topographic brain activation map. PMID:21138595
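
The recursive idea (GLM coefficients treated as the state of a Kalman filter with a random-walk model) can be sketched as follows; this is a generic recursive update, not the authors' NIRS implementation, and all names are illustrative:

```python
import numpy as np

def kalman_glm_update(beta, P, x, y, r=1.0, q=1e-6):
    """One Kalman update of GLM coefficients beta with covariance P,
    given a new regressor row x and measurement y.  r is the assumed
    measurement-noise variance, q a small random-walk process noise."""
    P = P + q * np.eye(len(beta))      # predict step (random-walk state)
    s = x @ P @ x + r                  # innovation variance
    k = P @ x / s                      # Kalman gain
    beta = beta + k * (y - x @ beta)   # correct with the new sample
    P = P - np.outer(k, x @ P)         # posterior covariance
    return beta, P
```

A t-statistic for an activation coefficient can then be formed at every sample as `beta[i] / sqrt(P[i, i])`, in the spirit of the t-test the record describes.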

  3. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2008-01-01

We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  4. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  5. A General Accelerated Degradation Model Based on the Wiener Process

    PubMed Central

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-01-01

Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models mainly address linear or linearized degradation paths; however, they are not applicable in situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107
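
The flavor of such a model can be sketched with a time-scale-transformed Wiener process X(t) = d·Λ(t) + σ·B(Λ(t)). The power-law time scale Λ(t) = t^b and the single-path estimators below are illustrative assumptions, not the paper's full ADT formulation (which also covers acceleration variables and unit-to-unit variation):

```python
import numpy as np

def simulate_path(d, sigma, b, t, rng):
    """Nonlinear Wiener degradation path X(t) = d*L(t) + sigma*B(L(t)),
    with time scale L(t) = t**b (b = 1 recovers the linear model)."""
    lam = t ** b
    dlam = np.diff(np.concatenate(([0.0], lam)))
    inc = d * dlam + sigma * np.sqrt(dlam) * rng.normal(size=len(t))
    return np.cumsum(inc)

def estimate_path(x, t, b):
    """Drift and diffusion MLEs from one observed path, given exponent b."""
    lam = t ** b
    dlam = np.diff(np.concatenate(([0.0], lam)))
    inc = np.diff(np.concatenate(([0.0], x)))
    d_hat = inc.sum() / dlam.sum()                    # drift MLE
    s2 = np.mean((inc - d_hat * dlam) ** 2 / dlam)    # diffusion MLE
    return d_hat, np.sqrt(s2)
```

With b = 1 this reduces to the classical linear Wiener degradation model, which is the baseline the paper compares against.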

  6. Quantization of general linear electrodynamics

    SciTech Connect

    Rivera, Sergio; Schuller, Frederic P.

    2011-03-15

    General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.

  7. Non-linear control logics for vibrations suppression: a comparison between model-based and non-model-based techniques

    NASA Astrophysics Data System (ADS)

    Ripamonti, Francesco; Orsini, Lorenzo; Resta, Ferruccio

    2015-04-01

Non-linear behavior is present in many operating conditions of mechanical systems. In these cases, a common engineering practice is to linearize the equation of motion around a particular operating point and to design a linear controller. The main disadvantage is that the stability properties and validity of the controller are only local. In order to improve controller performance, non-linear control techniques represent a very attractive solution for many smart structures. The aim of this paper is to compare non-linear model-based and non-model-based control techniques. In particular, the model-based sliding-mode-control (SMC) technique is considered because of its easy implementation and the strong robustness of the controller even under heavy model uncertainties. Among the non-model-based control techniques, fuzzy control (FC), which allows the controller to be designed according to if-then rules, has been considered. It defines the controller without a reference model of the system, offering advantages such as intrinsic robustness. These techniques have been tested on a nonlinear pendulum system.

  8. Linear Models Based on Noisy Data and the Frisch Scheme*

    PubMed Central

    Ning, Lipeng; Georgiou, Tryphon T.; Tannenbaum, Allen; Boyd, Stephen P.

    2016-01-01

We address the problem of identifying linear relations among variables based on noisy measurements. This is a central question in the search for structure in large data sets. Often a key assumption is that measurement errors in each variable are independent. This basic formulation has its roots in the work of Charles Spearman in 1904 and of Ragnar Frisch in the 1930s. Various topics such as errors-in-variables, factor analysis, and instrumental variables all refer to alternative viewpoints on this problem and on ways to account for the anticipated way that noise enters the data. In the present paper we begin by describing certain fundamental contributions by the founders of the field and provide alternative modern proofs to certain key results. We then go on to consider a modern viewpoint and novel numerical techniques for this problem. The central theme is expressed by the Frisch–Kalman dictum, which calls for identifying a noise contribution that allows a maximal number of simultaneous linear relations among the noise-free variables—a rank minimization problem. In the years since Frisch’s original formulation, there have been several insights, including trace minimization as a convenient heuristic to replace rank minimization. We discuss convex relaxations and theoretical bounds on the rank that, when met, provide guarantees for global optimality. A complementary point of view to this minimum-rank dictum is presented in which models are sought leading to a uniformly optimal quadratic estimation error for the error-free variables. Points of contact between these formalisms are discussed, and alternative regularization schemes are presented. PMID:27168672
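
A minimal errors-in-variables computation in this spirit is total least squares, which treats every variable as noisy and reads the linear relation off the smallest singular vector of the centered data matrix. This is a simpler special case than the Frisch scheme, which allows unknown per-variable noise levels:

```python
import numpy as np

def tls_relation(X):
    """Linear relation a . x ~= 0 among noisy zero-mean variables:
    the right singular vector for the smallest singular value of the
    centered data matrix (total least squares)."""
    Xc = X - X.mean(axis=0)                        # remove column means
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    a = vt[-1]                                     # least-variance direction
    return a / np.linalg.norm(a)
```

For data in which x2 is roughly 3·x1, the returned direction is proportional to (3, -1), up to sign.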

  9. Semi-Parametric Generalized Linear Models.

    DTIC Science & Technology

    1985-08-01

is nonsingular, upper triangular, and of full rank r. It is known (Dongarra et al., 1979) that G⁻¹Fᵀ is the Moore-Penrose inverse of L. Mathematics Research Center, University of Wisconsin-Madison, August 1985.

  10. Extended Generalized Linear Latent and Mixed Model

    ERIC Educational Resources Information Center

    Segawa, Eisuke; Emery, Sherry; Curry, Susan J.

    2008-01-01

    The generalized linear latent and mixed modeling (GLLAMM framework) includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…

  11. Linear and nonlinear generalized Fourier transforms.

    PubMed

    Pelloni, Beatrice

    2006-12-15

    This article presents an overview of a transform method for solving linear and integrable nonlinear partial differential equations. This new transform method, proposed by Fokas, yields a generalization and unification of various fundamental mathematical techniques and, in particular, it yields an extension of the Fourier transform method.
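
The classical transform method that the Fokas method generalizes can be illustrated on the periodic heat equation u_t = u_xx: transform, evolve each mode by exp(-k²t), and transform back (a standard textbook computation, not from the article itself):

```python
import numpy as np

def heat_solve_fourier(u0, dx, t):
    """Evolve u_t = u_xx on a periodic grid by the Fourier method:
    each mode u_hat(k) decays as exp(-k**2 * t)."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(u0), d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(np.fft.fft(u0) * np.exp(-k**2 * t)))
```

For u0 = sin(x) on [0, 2π) the exact solution is e⁻ᵗ·sin(x), which the spectral step reproduces to machine precision.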

  12. Alternative approach to general coupled linear optics

    SciTech Connect

    Wolski, Andrzej

    2005-11-29

    The Twiss parameters provide a convenient description of beam optics in uncoupled linear beamlines. For coupled beamlines, a variety of approaches are possible for describing the linear optics; here, we propose an approach and notation that naturally generalizes the familiar Twiss parameters to the coupled case in three degrees of freedom. Our approach is based on an eigensystem analysis of the matrix of second-order beam moments, or alternatively (in the case of a storage ring) on an eigensystem analysis of the linear single-turn map. The lattice functions that emerge from this approach have an interpretation that is conceptually very simple: in particular, the lattice functions directly relate the beam distribution in phase space to the invariant emittances. To emphasize the physical significance of the coupled lattice functions, we develop the theory from first principles, using only the assumption of linear symplectic transport. We also give some examples of the application of this approach, demonstrating its advantages of conceptual and notational simplicity.

  13. [General practice--linear thinking and complexity].

    PubMed

    Stalder, H

    2006-09-27

As physicians, we apply and teach linear thinking. This approach makes it possible to dissect the patient's problem down to the molecular level and has contributed enormously to the knowledge and progress of medicine. The linear approach is particularly useful in medical education and quantitative research, and helps to resolve simple problems. However, it risks being rigid. Living beings (such as patients and physicians!) have to be considered as complex systems. A complex system cannot be dissected into its parts without losing its identity. It is dependent on its past, and its interactions with the outside are often followed by unpredictable reactions. The patient-centred approach in medicine permits the physician, himself a complex system, to integrate the patient's system and to adapt to the patient's reality. It is particularly useful in general medicine.

  14. Estimating parameters and hidden variables in non-linear state-space models based on ODEs for biological networks inference.

    PubMed

    Quach, Minh; Brunel, Nicolas; d'Alché-Buc, Florence

    2007-12-01

Statistical inference of biological networks such as gene regulatory networks, signaling pathways and metabolic networks can contribute to building a picture of the complex interactions that take place in the cell. However, biological systems, considered as dynamical, non-linear and generally partially observed processes, may be difficult to estimate even if the structure of interactions is given. Using the same approach as Sitz et al. proposed in another context, we derive non-linear state-space models from ODEs describing biological networks. In this framework, we apply Unscented Kalman Filtering (UKF) to the estimation of both parameters and hidden variables of non-linear state-space models. We instantiate the method on a transcriptional regulatory model based on Hill kinetics and a signaling pathway model based on mass action kinetics. We successfully use synthetic data and experimental data to test our approach. This approach covers a large set of biological network models and gives rise to simple and fast estimation algorithms. Moreover, the Bayesian tool used here directly provides uncertainty estimates on parameters and hidden states. Let us also emphasize that it can be coupled with structure inference methods used in Graphical Probabilistic Models. Matlab code available on demand.

  15. Reduced Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.
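
For the linear (first-kernel) special case, the idea of an impulse-response reduced-order model is simply "identify once, then predict by convolution." A discrete-time sketch with illustrative names follows; the paper's aerodynamic identification via CFL3D is of course far more involved:

```python
import numpy as np

def impulse_response(step, n):
    """Recover a discrete impulse response by differencing a recorded
    step response (one excitation of the full-order system)."""
    return np.diff(np.concatenate(([0.0], step[:n])))

def rom_predict(h, u):
    """Reduced-order prediction: convolve the stored impulse response
    with an arbitrary input history."""
    return np.convolve(u, h)[:len(u)]
```

Once h is stored, responses to new inputs cost only a convolution, which is the source of the computational efficiency the abstract reports.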

  16. Cognitive performance modeling based on general systems performance theory.

    PubMed

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  17. Gravitational Wave in Linear General Relativity

    NASA Astrophysics Data System (ADS)

    Cubillos, D. J.

    2017-07-01

General relativity is the best theory currently available to describe the gravitational interaction. Within Albert Einstein's field equations this interaction is described by means of the spacetime curvature generated by the matter-energy content of the universe. Weyl studied perturbations of the curvature of space-time that propagate at the speed of light, known as gravitational waves, obtained to first approximation by linearizing Einstein's field equations. Weyl's solution consists of taking the vacuum field equations and perturbing the metric, using the Minkowski metric slightly perturbed by a factor ɛ greater than zero but much smaller than one. If the feedback effect of the field is neglected, this can be considered a weak-field solution. After introducing the perturbed metric and ignoring terms of order higher than one in ɛ, we can find the linearized field equations in terms of the perturbation, which can then be expressed as the d'Alembertian operator applied to the perturbation set equal to zero. This is analogous to the linear wave equation in classical mechanics and can be interpreted as saying that gravitational effects propagate as waves at the speed of light. In addition, studying the motion of a particle affected by this perturbation through the geodesic equation shows the transverse character of the gravitational wave and its two possible states of polarization. It can be shown that the energy carried by the wave is of the order of 1/c⁵, where c is the speed of light, which explains why its effects on matter are very small and very difficult to detect.
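
The linearization sketched above can be written compactly in standard textbook notation (not taken from the record itself):

```latex
g_{\mu\nu} = \eta_{\mu\nu} + \epsilon\, h_{\mu\nu}, \qquad 0 < \epsilon \ll 1,
\qquad \bar{h}_{\mu\nu} \equiv h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\, h .
```

In the Lorenz gauge \(\partial^\mu \bar{h}_{\mu\nu} = 0\), the vacuum field equations reduce to the wave equation \(\Box \bar{h}_{\mu\nu} = 0\), whose solutions propagate at the speed of light.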

  18. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter-scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  19. Blended General Linear Methods based on Generalized BDF

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Magherini, Cecilia

    2008-09-01

General Linear Methods were introduced in order to encompass a large family of numerical methods for the solution of ODE-IVPs, ranging from LMF to RK formulae. In so doing, it is possible to obtain methods able to overcome typical drawbacks of the previous classes of methods, for example the stability limitations of LMF and the order reduction of RK methods. Nevertheless, these goals are usually achieved at the price of a higher computational cost. Consequently, many efforts have been made in order to derive GLMs with particular features, to be exploited for their efficient implementation. In recent years, the derivation of GLMs from particular Boundary Value Methods (BVMs), namely the family of Generalized BDF (GBDF), has been proposed for the numerical solution of stiff ODE-IVPs. Here, this approach is further developed in order to derive GLMs combining good stability and accuracy properties with the possibility of efficiently solving the generated discrete problems via the blended implementation of the methods.

  20. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired by using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments were conducted to test the proposed method.

  1. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.
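
Efron's double Poisson density underlying the DP GLM has a normalizing constant with no closed form; a crude but transparent fallback is truncated summation, sketched below (the paper proposes a more accurate approximation; the function names here are illustrative):

```python
import math

def dp_term(y, mu, theta):
    """Unnormalized double Poisson term (Efron's parameterization)."""
    if y == 0:
        base = 1.0
    else:
        # work in logs for numerical stability
        log_base = (-y + y * math.log(y) - math.lgamma(y + 1)
                    + theta * y * (1.0 + math.log(mu) - math.log(y)))
        base = math.exp(log_base)
    return math.sqrt(theta) * math.exp(-theta * mu) * base

def dp_pmf(y, mu, theta, y_max=200):
    """Double Poisson pmf, normalizing constant by truncated summation."""
    c = sum(dp_term(k, mu, theta) for k in range(y_max + 1))
    return dp_term(y, mu, theta) / c
```

Setting theta = 1 recovers the ordinary Poisson; theta < 1 gives over-dispersion and theta > 1 under-dispersion, which is what makes the DP attractive for crash counts.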

  2. Non-Linear Analysis Indicates Chaotic Dynamics and Reduced Resilience in Model-Based Daphnia Populations Exposed to Environmental Stress

    PubMed Central

    Ottermanns, Richard; Szonn, Kerstin; Preuß, Thomas G.; Roß-Nickoll, Martina

    2014-01-01

In this study we present evidence that anthropogenic stressors can reduce the resilience of age-structured populations. Enhancement of disturbance in a model-based Daphnia population led to a suppression of chaotic population dynamics while at the same time increasing the degree of synchrony between the population's age classes. Based on the theory of chaos-mediated survival, an increased risk of extinction was revealed for this population when exposed to high concentrations of a chemical stressor. The Lyapunov coefficient appears to be a useful indicator for detecting disturbance thresholds that lead to alterations in population dynamics. One possible explanation could be a discrete change in attractor orientation due to external disturbance. The statistical analysis of the Lyapunov coefficient distribution is proposed as a methodology to test for significant non-linear effects of general disturbance on populations. Although many new questions arose, this study forms a theoretical basis for a dynamical definition of population recovery. PMID:24809537
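
The use of a Lyapunov exponent as a chaos indicator can be sketched on a textbook system, the logistic map, rather than the Daphnia model itself; a positive exponent indicates chaotic dynamics, a negative one stable dynamics:

```python
import math

def lyapunov_logistic(r, n=100000, burn=1000, x0=0.2):
    """Largest Lyapunov exponent of x -> r*x*(1-x), estimated as the
    orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n
```

At r = 4 the exponent converges to ln 2 (fully chaotic regime); at r = 2.5 the orbit settles on a fixed point and the exponent is negative.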

  3. A linearization approach for the model-based analysis of combined aggregate and individual patient data.

    PubMed

    Ravva, Patanjali; Karlsson, Mats O; French, Jonathan L

    2014-04-30

The application of model-based meta-analysis in drug development has gained prominence recently, particularly for characterizing dose-response relationships and quantifying treatment effect sizes of competitor drugs. The models are typically nonlinear in nature and involve covariates to explain the heterogeneity in summary-level literature (or aggregate data (AD)). Inferring individual patient-level relationships from these nonlinear meta-analysis models leads to aggregation bias. Individual patient-level data (IPD) are indeed required to characterize patient-level relationships, but too often this information is limited. Combined analyses of AD and IPD can take advantage of the information the two sources share, but this requires the models developed for AD to be derived from IPD models; for linear models the solution has a closed form, while for nonlinear models closed-form solutions do not exist. Here, we propose a linearization method based on a second order Taylor series approximation for fitting models to AD alone or combined AD and IPD. The application of this method is illustrated by an analysis of a continuous landmark endpoint, i.e., change from baseline in HbA1c at week 12, from 18 clinical trials evaluating the effects of DPP-4 inhibitors on hyperglycemia in diabetic patients. The performance of this method is demonstrated by a simulation study where the effects of varying the degree of nonlinearity and of heterogeneity in covariates (as assessed by the ratio of between-trial to within-trial variability) were studied. A dose-response relationship using an Emax model with linear and nonlinear effects of covariates on the emax parameter was used to simulate data. The simulation results showed that when an IPD model is simply used for modeling AD, the bias in the emax parameter estimate increased noticeably with an increasing degree of nonlinearity in the model, with respect to covariates. When using an appropriately derived AD model, the linearization
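
The second-order Taylor idea can be sketched for a hypothetical Emax-type curve evaluated over a covariate distribution: the aggregate-level mean response is approximated by f(mu) + 0.5·f''(mu)·var. All names below are illustrative, not the paper's model:

```python
import numpy as np

def emax_curve(w, e0, emax, ed50):
    """Hypothetical Emax-type response as a function of a covariate w."""
    return e0 + emax * w / (ed50 + w)

def second_order_mean(f, mu, var, h=1e-3):
    """Approximate E[f(W)] for W with mean mu and variance var via a
    second-order Taylor expansion; f'' by central differences."""
    f2 = (f(mu + h) - 2.0 * f(mu) + f(mu - h)) / h ** 2
    return f(mu) + 0.5 * f2 * var
```

Because f is concave here, the naive plug-in f(mu) overstates the mean response; the curvature correction is exactly the aggregation-bias term the abstract describes.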

  4. A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.

    PubMed

    Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin

    2017-02-01

    The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed.
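
The standard simulation route for mixed MNL probabilities, averaging the softmax over draws of the random coefficients, can be sketched as follows (a generic sketch, not the authors' nonlinear-predictor model):

```python
import numpy as np

def mixed_mnl_probs(x, beta_mean, beta_sd, n_draws=2000, seed=0):
    """Simulated choice probabilities for a mixed multinomial logit:
    softmax probabilities averaged over normal draws of the random
    coefficients.  x is an (alternatives x features) design matrix."""
    rng = np.random.default_rng(seed)
    probs = np.zeros(x.shape[0])
    for _ in range(n_draws):
        beta = rng.normal(beta_mean, beta_sd)   # one coefficient draw
        v = x @ beta                            # systematic utilities
        e = np.exp(v - v.max())                 # numerically stable softmax
        probs += e / e.sum()
    return probs / n_draws
```

With all standard deviations set to zero this collapses to the standard (fixed-coefficient) MNL, the baseline model the paper compares against.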

  5. Topology modification for surgical simulation using precomputed finite element models based on linear elasticity.

    PubMed

    Lee, Bryan; Popescu, Dan C; Ourselin, Sébastien

    2010-12-01

    Surgical simulators provide an additional tool for training and practising surgical procedures, which are otherwise usually restricted to the use of cadavers. Our surgical simulator utilises Finite Element (FE) models based on linear elasticity. It is driven by displacements, as opposed to forces, allowing for realistic simulation of both deformation and haptic response at real-time rates. To meet the demanding computational requirements, the stiffness matrix K, which encompasses the geometrical and physical properties of the object, is precomputed, along with K⁻¹. Common to many surgical procedures is the requirement of cutting tissue. Introducing topology modifications, such as cutting, into these precomputed schemes is however a challenge, as the precomputed data needs to be modified to reflect the new topology. In particular, recomputing K⁻¹ is too costly to be performed during the simulation. Our topology modification method is based upon updating K⁻¹ rather than entirely recomputing the matrix. By integrating condensation, we improve efficiency to allow for interaction with larger models. We further enhance this by redistributing computational load to improve the system's real-time response. We exemplify our techniques with results from our surgical simulation system. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
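
    The abstract's central point, updating K⁻¹ after a local topology change rather than recomputing it, is the idea behind Sherman-Morrison-Woodbury-style updates. The sketch below shows that general mechanism on a random SPD "stiffness" matrix; the paper's actual update scheme and condensation step may differ.

```python
import numpy as np

def woodbury_update(K_inv, U, V):
    """Inverse of K + U @ V.T from the precomputed K_inv, via the
    Sherman-Morrison-Woodbury identity; only an r x r system is solved."""
    r = U.shape[1]
    S = np.eye(r) + V.T @ K_inv @ U
    return K_inv - K_inv @ U @ np.linalg.solve(S, V.T @ K_inv)

rng = np.random.default_rng(1)
n, r = 50, 2
K = rng.standard_normal((n, n))
K = K @ K.T + n * np.eye(n)            # SPD stand-in for a stiffness matrix
K_inv = np.linalg.inv(K)               # "precomputed" inverse
U = 0.1 * rng.standard_normal((n, r))  # low-rank change from a local "cut"
V = 0.1 * rng.standard_normal((n, r))
K_new_inv = woodbury_update(K_inv, U, r_V := V)
```

    The cost is dominated by a few matrix-vector products and an r x r solve, instead of the O(n³) full re-inversion that is too slow for interactive rates.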

  6. Development of a CFD-compatible transition model based on linear stability theory

    NASA Astrophysics Data System (ADS)

    Coder, James G.

    A new laminar-turbulent transition model for low-turbulence external aerodynamic applications is presented that incorporates linear stability theory in a manner compatible with modern computational fluid dynamics solvers. The model uses a new transport equation that describes the growth of the maximum Tollmien-Schlichting instability amplitude in the presence of a boundary layer. To avoid the need for integration paths and non-local operations, a locally defined non-dimensional pressure-gradient parameter is used that serves as an estimator of the integral boundary-layer properties. The model has been implemented into the OVERFLOW 2.2f solver and interacts with the Spalart-Allmaras and Menter SST eddy-viscosity turbulence models. Comparisons of predictions using the new transition model with high-quality wind-tunnel measurements of airfoil section characteristics validate the predictive qualities of the model. Predictions for three-dimensional aircraft and wing geometries show the correct qualitative behavior even though limited experimental data are available. These cases also demonstrate that the model is well-behaved about general aeronautical configurations. These cases confirm that the new transition model is an improvement over the current state of the art in computational fluid dynamics transition modeling by providing more accurate solutions at approximately half the added computational expense.

  7. Centering, Scale Indeterminacy, and Differential Item Functioning Detection in Hierarchical Generalized Linear and Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Cheong, Yuk Fai; Kamata, Akihito

    2013-01-01

    In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…

  9. Linear stability of general magnetically insulated electron flow

    NASA Astrophysics Data System (ADS)

    Swegle, J. A.; Mendel, C. W., Jr.; Seidel, D. B.; Quintenz, J. P.

    1984-03-01

    A linear stability theory for magnetically insulated systems was formulated by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. It is found that, in the case of electron trajectories which are nearly laminar, with only small transverse motion, several suggestive simplifications occur in the eigenvalue equations.

  10. The General Linear Model and Direct Standardization: A Comparison.

    ERIC Educational Resources Information Center

    Little, Roderick J. A.; Pullum, Thomas W.

    1979-01-01

    Two methods of analyzing nonorthogonal (uneven cell sizes) cross-classified data sets are compared. The methods are direct standardization and the general linear model. The authors illustrate when direct standardization may be a desirable method of analysis. (JKS)
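
    The contrast between the two methods is easy to demonstrate numerically: with uneven cell sizes, crude (unstandardized) group rates can even reverse the ordering that direct standardization reveals. The rates and population weights below are hypothetical.

```python
import numpy as np

# hypothetical 2 x 3 cross-classification: group x age band
rates = np.array([[0.10, 0.20, 0.40],   # cell rates, group A
                  [0.12, 0.25, 0.50]])  # cell rates, group B (higher everywhere)
n = np.array([[100, 300, 600],          # uneven cell sizes, group A
              [600, 300, 100]])         # uneven cell sizes, group B

# crude rates are confounded by the uneven cell sizes...
crude = (rates * n).sum(axis=1) / n.sum(axis=1)

# ...while direct standardization weights each group's cell rates by a
# common standard population, removing that imbalance
weights = np.array([0.5, 0.3, 0.2])     # standard population shares
standardized = rates @ weights
```

    Here group B has the higher rate in every cell, yet its crude rate is lower; standardization restores the cell-level ordering.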

  11. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    PubMed Central

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-01-01

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
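
    The permutation-entropy feature mentioned above is straightforward to compute; a minimal single-scale version (the paper applies a multi-scale variant to the VMD modes) might look like this:

```python
import numpy as np
from math import log, factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal: entropy of the
    distribution of ordinal patterns in sliding windows, scaled to [0, 1]."""
    counts = {}
    for i in range(len(x) - (order - 1) * delay):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(order))    # normalize by maximum entropy

rng = np.random.default_rng(3)
noise = rng.standard_normal(2000)       # irregular signal: entropy near 1
trend = np.arange(2000, dtype=float)    # monotone signal: entropy 0
```

    A regular (e.g. healthy-bearing) signal concentrates on few ordinal patterns and scores low, while an irregular one scores near 1, which is what makes the feature useful for separating fault states.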

  12. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model first appeared (not rigorously defined) in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. For analysing feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in ad hoc applications. Formally, this is the first paper to present the U-model-oriented control system design in a rigorous way and to study the associated properties and theorems; previous publications have mainly been algorithm-based studies and simulation demonstrations. In this sense, the paper can be treated as a landmark in the development of U-model-based research from an intuitive/heuristic stage to rigorous, formal, and comprehensive studies.

  13. From linear to generalized linear mixed models: A case study in repeated measures

    USDA-ARS?s Scientific Manuscript database

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  14. Linear Programming Solutions of Generalized Linear Impulsive Correction for Geostationary Stationkeeping

    NASA Astrophysics Data System (ADS)

    Park, Jae Woo

    1996-06-01

    The generalized linear impulsive correction problem is cast as a linear programming problem for optimizing the trajectory of an orbiting spacecraft. A numerical application to the stationkeeping maneuver problem of a geostationary satellite shows that this formulation can efficiently find the optimal stationkeeping parameters, such as the velocity changes and the points of impulse, using the revised simplex method.
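
    The standard device for casting a minimum-total-impulse correction as a linear program is to split each velocity change into nonnegative positive and negative parts. The toy 2-impulse problem below enumerates the basic feasible solutions directly; a revised simplex implementation would walk the same vertices more efficiently. The matrices are illustrative, not an actual stationkeeping linearization.

```python
import numpy as np
from itertools import combinations

# linearized correction constraint: A @ dv = b; minimize sum(|dv_i|)
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
b = np.array([0.3, 0.4])

# split dv = p - m with p, m >= 0 to obtain an LP in x = [p, m]
A_eq = np.hstack([A, -A])
c = np.ones(4)                          # cost = sum(p) + sum(m) = sum(|dv|)

# an LP optimum lies at a basic feasible solution; with 2 constraints we
# can simply enumerate all bases (simplex methods walk these vertices)
best_cols, best_xb, best_cost = None, None, np.inf
for cols in combinations(range(4), 2):
    B = A_eq[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue
    xb = np.linalg.solve(B, b)
    if (xb >= -1e-12).all() and c[list(cols)] @ xb < best_cost:
        best_cols, best_xb, best_cost = cols, xb, c[list(cols)] @ xb

x = np.zeros(4)
x[list(best_cols)] = best_xb
dv = x[:2] - x[2:]                      # recovered impulse vector
```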

  15. Generalized perceptual linear prediction features for animal vocalization analysis.

    PubMed

    Clemins, Patrick J; Johnson, Michael T

    2006-07-01

    A new feature extraction model, generalized perceptual linear prediction (gPLP), is developed to calculate a set of perceptually relevant features for digital signal analysis of animal vocalizations. The gPLP model is a generalized adaptation of the perceptual linear prediction model, popular in human speech processing, which incorporates perceptual information such as frequency warping and equal loudness normalization into the feature extraction process. Since such perceptual information is available for a number of animal species, this new approach integrates that information into a generalized model to extract perceptually relevant features for a particular species. To illustrate, qualitative and quantitative comparisons are made between the species-specific model, generalized perceptual linear prediction (gPLP), and the original PLP model using a set of vocalizations collected from captive African elephants (Loxodonta africana) and wild beluga whales (Delphinapterus leucas). The models that incorporate perceptional information outperform the original human-based models in both visualization and classification tasks.

  16. A novel crowd flow model based on linear fractional stable motion

    NASA Astrophysics Data System (ADS)

    Wei, Juan; Zhang, Hong; Wu, Zhenya; He, Junlin; Guo, Yangyong

    2016-03-01

    For evacuation dynamics in indoor space, a novel crowd flow model is put forward based on linear fractional stable motion. The movement probability is defined in terms of position attraction and queuing time, with the queuing time described by linear fractional stable motion. Finally, an experiment and simulation platform is used for performance analysis, studying in depth the relations among system evacuation time, crowd density and exit flow rate. It is concluded that the evacuation time and the exit flow rate are positively correlated with the crowd density, and that once the exit width reaches a threshold value, further increasing it does not effectively decrease the evacuation time.

  17. A general non-linear multilevel structural equation mixture model

    PubMed Central

    Kelava, Augustin; Brandt, Holger

    2014-01-01

    In the past 2 decades latent variable modeling has become a standard tool in the social sciences. In the same time period, traditional linear structural equation models have been extended to include non-linear interaction and quadratic effects (e.g., Klein and Moosbrugger, 2000), and multilevel modeling (Rabe-Hesketh et al., 2004). We present a general non-linear multilevel structural equation mixture model (GNM-SEMM) that combines recent semiparametric non-linear structural equation models (Kelava and Nagengast, 2012; Kelava et al., 2014) with multilevel structural equation mixture models (Muthén and Asparouhov, 2009) for clustered and non-normally distributed data. The proposed approach allows for semiparametric relationships at the within and at the between levels. We present examples from the educational science to illustrate different submodels from the general framework. PMID:25101022

  18. Generalized poroviscoelastic model based on effective Biot theory and its application to borehole guided wave analysis

    NASA Astrophysics Data System (ADS)

    Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Heinson, Graham

    2016-12-01

    A method using a modified attenuation factor function is suggested to determine the parameters of the generalized Zener model approximating the attenuation factor function. This method is applied to construct a poroviscoelastic model based on the effective Biot theory, which accounts for the attenuative solid frame of the reservoir. In the poroviscoelastic model, the frequency-dependent bulk and shear moduli of the solid frame are represented by generalized Zener models. As an application, the borehole logging dispersion equations from Biot theory are extended to include effects of intrinsic body attenuation in the formation media over the full frequency range. The velocity dispersions of borehole guided waves are calculated to investigate the influence of attenuative bore fluid, an attenuative solid frame of the formation, and an impermeable bore wall.

  19. Linear equations in general purpose codes for stiff ODEs

    SciTech Connect

    Shampine, L. F.

    1980-02-01

    It is noted that it is possible to improve significantly the handling of linear problems in a general-purpose code with very little trouble to the user or change to the code. In such situations analytical evaluation of the Jacobian is a lot cheaper than numerical differencing. A slight change in the point at which the Jacobian is evaluated results in a more accurate Jacobian in linear problems. (RWR)
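
    The note's observation is easy to verify: for a linear ODE y' = A y, the Jacobian of the right-hand side is the constant matrix A, so an analytical Jacobian is free and exact, while numerical differencing merely reproduces A with roundoff error. A small illustration:

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
f = lambda t, y: A @ y                  # linear stiff ODE right-hand side

def numerical_jacobian(f, t, y, eps=1e-6):
    """Forward-difference Jacobian, the generic fallback in ODE codes."""
    n = y.size
    J = np.empty((n, n))
    f0 = f(t, y)
    for j in range(n):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (f(t, yp) - f0) / eps
    return J

J_num = numerical_jacobian(f, 0.0, np.array([1.0, 2.0]))
```

    For a linear problem the difference quotient is exact up to roundoff, so `J_num` matches A; supplying A analytically skips the n extra function evaluations entirely.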

  20. Generalized Linear Multi-Frequency Imaging in VLBI

    NASA Astrophysics Data System (ADS)

    Likhachev, S.; Ladygin, V.; Guirin, I.

    2004-07-01

    In VLBI, generalized Linear Multi-Frequency Imaging (MFI) consists of multi-frequency synthesis (MFS) and multi-frequency analysis (MFA) of the VLBI data obtained from observations on various frequencies. A set of linear deconvolution MFI algorithms is described. The algorithms make it possible to obtain high quality images interpolated on any given frequency inside any given bandwidth, and to derive reliable estimates of spectral indexes for radio sources with continuum spectrum.

  1. Nonrigid, Resistive Linear Plasma Response Models Based on Perturbed Equilibria for Axisymmetric Tokamak Control Design

    NASA Astrophysics Data System (ADS)

    Humphreys, D. A.; Ferron, J. R.; Leuer, J. A.; Walker, M. L.; Welander, A. S.

    2003-10-01

    Linear, perturbed equilibrium plasma response models can accurately represent the experimental response of tokamak plasmas to applied fields [A. Coutlis, et al., Nucl. Fusion 39, 663 (1999)]. However, agreement between experiment and model is much better when average flux over the plasma, rather than at each fluid element, is conserved [P. Vyas, et al., Nucl. Fusion 38, 1043 (1998)]. The close experimental agreement of average flux-conserving models is consistent with approximating field penetration effects produced by finite plasma resistivity, particularly in the edge region. We report on the development of nonrigid linear plasma response models which include finite local plasma resistivity in order to more accurately represent the dynamic response due to this field penetration. Such response models are expected to be important for designing profile control algorithms in advanced tokamaks. Accounting for finite plasma resistivity is also important in designing multivariable integrated controllers which must simultaneously regulate plasma shape and plasma current. Consequences of including resistivity will be illustrated and comparisons with DIII-D experimental plasma responses will be made.

  2. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    PubMed Central

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve global optimization capability. The experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
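
    The log-linear selection mechanism at the heart of the method amounts to a softmax over linearly scored behaviors. The features and weights below are hypothetical; this sketches the mechanism, not the paper's exact parameterization.

```python
import math
import random

def select_behavior(features, weights):
    """Log-linear behavior selection: each candidate behavior gets a score
    linear in its features, and selection probability is the softmax of the
    scores. Returns the sampled behavior index and the probabilities."""
    logits = [sum(w * f for w, f in zip(weights, feat)) for feat in features]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return len(probs) - 1, probs

# hypothetical features (food attraction, crowding) for prey/swarm/follow
features = [[1.0, 0.2], [2.0, 0.4], [0.5, 1.0]]
weights = [1.0, -0.5]
random.seed(0)
idx, probs = select_behavior(features, weights)
```

    Sampling from the softmax (rather than always taking the best-scoring behavior) is what preserves exploration while still biasing toward promising behaviors.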

  4. A Matrix Approach for General Higher Order Linear Recurrences

    DTIC Science & Technology

    2011-01-01

    properties of linear recurrences (such as the well-known Fibonacci and Pell sequences). In [2], Er defined k linear recurring sequences of order at... the nth term of the ith generalized order-k Fibonacci sequence. In [6], the author gave the generalized order-k Fibonacci and Pell (F-P) sequence as follows: for m ≥ 0, n > 0 and 1 ≤ i ≤ k, u^i_n = 2^m u^i_{n-1} + u^i_{n-2}
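
    The matrix approach referred to in this excerpt represents the recurrence by its companion matrix, so the nth term comes from a fast matrix power. A sketch for the u_n = 2^m u_{n-1} + u_{n-2} recurrence, with assumed initial values u_0 = 0, u_1 = 1 (m = 0 gives Fibonacci, m = 1 gives Pell):

```python
def mat_mult(A, B):
    """2x2 integer matrix product (exact, arbitrary precision)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, p):
    """Exponentiation by squaring: O(log p) matrix products."""
    R = [[1, 0], [0, 1]]
    while p:
        if p & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        p >>= 1
    return R

def u(m, n):
    """n-th term of u_n = 2^m * u_{n-1} + u_{n-2}, u_0 = 0, u_1 = 1."""
    C = [[2 ** m, 1], [1, 0]]            # companion matrix of the recurrence
    return mat_pow(C, n)[1][0]           # [u_{n+1}, u_n]^T = C^n [u_1, u_0]^T
```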

  5. Optimal explicit strong-stability-preserving general linear methods.

    SciTech Connect

    Constantinescu, E.; Sandu, A.

    2010-07-01

    This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.
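
    For orientation, the classical third-order SSP Runge-Kutta scheme of Shu and Osher, a standard member of the family these general linear methods extend (not one of the paper's optimized schemes), is written as convex combinations of forward Euler steps:

```python
import math

def ssprk3_step(f, u, h):
    """One step of the third-order strong-stability-preserving Runge-Kutta
    scheme (Shu-Osher form): convex combinations of Euler steps, so any
    stability property of forward Euler is inherited under a CFL limit."""
    u1 = u + h * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + h * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + h * f(u2))

def integrate(n_steps):
    """Integrate u' = -u from u(0) = 1 to t = 1 with n_steps SSPRK3 steps."""
    f = lambda u: -u
    u, h = 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        u = ssprk3_step(f, u, h)
    return u

err_coarse = abs(integrate(20) - math.exp(-1.0))
err_fine = abs(integrate(40) - math.exp(-1.0))
order = math.log2(err_coarse / err_fine)   # observed convergence order
```

    Halving the step size cuts the error by about 2³, confirming third-order accuracy; the general linear methods in the paper add higher stage order on top of this SSP structure.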

  6. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim) with the area under the receiver operating curves (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally with AUROC's of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
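
    A minimal version of the GLM half of this workflow, a logistic regression fit to disturbed/undisturbed points with AUROC validation, can be sketched with synthetic terrain attributes (the covariates and data below are invented; the paper's pipeline also uses GAMs and GIS-derived predictors):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Logistic-regression GLM fit by gradient ascent on the log-likelihood
    (a sketch; statistical packages would use IRLS instead)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def auroc(y, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

rng = np.random.default_rng(2)
# synthetic terrain attributes: say, slope and wetness index
X = rng.standard_normal((400, 2))
p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = (rng.random(400) < p_true).astype(float)
w = fit_logistic(X, y)
```

    AUROC on held-out data is the transferability measure quoted in the abstract; values near 0.5 mean no discrimination and values near 1 mean perfect ranking of disturbed over undisturbed sites.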

  7. Solution of generalized shifted linear systems with complex symmetric matrices

    NASA Astrophysics Data System (ADS)

    Sogabe, Tomohiro; Hoshi, Takeo; Zhang, Shao-Liang; Fujiwara, Takeo

    2012-07-01

    We develop the shifted COCG method [R. Takayama, T. Hoshi, T. Sogabe, S.-L. Zhang, T. Fujiwara, Linear algebraic calculation of Green's function for large-scale electronic structure theory, Phys. Rev. B 73 (165108) (2006) 1-9] and the shifted WQMR method [T. Sogabe, T. Hoshi, S.-L. Zhang, T. Fujiwara, On a weighted quasi-residual minimization strategy of the QMR method for solving complex symmetric shifted linear systems, Electron. Trans. Numer. Anal. 31 (2008) 126-140] for solving generalized shifted linear systems with complex symmetric matrices that arise from the electronic structure theory. The complex symmetric Lanczos process with a suitable bilinear form plays an important role in the development of the methods. The numerical examples indicate that the methods are highly attractive when the inner linear systems can efficiently be solved.

  8. Beam envelope calculations in general linear coupled lattices

    SciTech Connect

    Chung, Moses; Qin, Hong; Groening, Lars; Xiao, Chen; Davidson, Ronald C.

    2015-01-15

    The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.

  9. A parallel domain decomposition algorithm for coastal ocean circulation models based on integer linear programming

    NASA Astrophysics Data System (ADS)

    Jordi, Antoni; Georgas, Nickitas; Blumberg, Alan

    2017-05-01

    This paper presents a new parallel domain decomposition algorithm based on integer linear programming (ILP), a mathematical optimization method. To minimize the computation time of coastal ocean circulation models, the ILP decomposition algorithm divides the global domain into local domains with balanced work load according to the number of processors, and avoids computations over as many land grid cells as possible. In addition, it maintains the use of logically rectangular local domains and achieves exactly the same results as traditional domain decomposition algorithms (such as Cartesian decomposition). However, the ILP decomposition algorithm may not converge to an exact solution for relatively large domains. To overcome this problem, we developed two ILP decomposition formulations. The first one (complete formulation) has no additional restriction, although it is impractical for large global domains. The second one (feasible) imposes local domains with the same dimensions and looks for the feasibility of such a decomposition, which allows much larger global domains. Parallel performance of both ILP formulations is compared to a base Cartesian decomposition by simulating two cases with the newly created parallel version of the Stevens Institute of Technology's Estuarine and Coastal Ocean Model (sECOM). Simulations with the ILP formulations always run faster than the ones with the base decomposition, and the complete formulation is better than the feasible one when it is applicable. In addition, parallel efficiency with the ILP decomposition may be greater than one.
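
    The load-balance objective the ILP optimizes, minimizing the maximum per-processor work while skipping land cells, can be illustrated with a brute-force search over column cut points on a toy land/ocean mask (the mask is invented, and exhaustive search stands in for the ILP solver):

```python
import numpy as np
from itertools import combinations

# toy mask: 1 = ocean (costs work), 0 = land (skipped by the solver)
mask = np.array([[1, 1, 0, 0, 1, 1],
                 [1, 1, 0, 0, 1, 1],
                 [1, 1, 1, 0, 1, 1],
                 [1, 1, 1, 1, 1, 1]])
col_work = mask.sum(axis=0)             # ocean cells per grid column

def best_split(col_work, nproc):
    """Choose column cut points minimizing the maximum per-processor work,
    the balance criterion an ILP formulation would encode as constraints."""
    ncols = len(col_work)
    best, best_load = None, np.inf
    for cuts in combinations(range(1, ncols), nproc - 1):
        bounds = (0,) + cuts + (ncols,)
        loads = [col_work[a:b].sum() for a, b in zip(bounds, bounds[1:])]
        if max(loads) < best_load:
            best_load, best = max(loads), bounds
    return best, best_load

bounds, load = best_split(col_work, 3)
```

    Because land columns contribute no work, the optimal cuts give land-heavy processors wider strips, which is exactly why such decompositions can beat a uniform Cartesian split.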

  10. An Ionospheric Index Model based on Linear Regression and Neural Network Approaches

    NASA Astrophysics Data System (ADS)

    Tshisaphungo, Mpho; McKinnell, Lee-Anne; Bosco Habarulema, John

    2017-04-01

    The ionosphere is well known to reflect radio wave signals in the high frequency (HF) band due to the presence of electrons and ions within the region. To optimise the use of long-distance HF communications, it is important to understand the drivers of ionospheric storms and accurately predict the propagation conditions, especially during disturbed days. This paper presents the development of an ionospheric storm-time index over the South African region for the application of HF communication users. The model will provide a valuable tool to measure the complex ionospheric behaviour in an operational space weather monitoring and forecasting environment. The development of the ionospheric storm-time index is based on data from a single ionosonde station at Grahamstown (33.3°S, 26.5°E), South Africa. Critical frequency of the F2 layer (foF2) measurements for the period 1996-2014 were considered for this study. The model was developed based on linear regression and neural network approaches. In this talk, validation results for low, medium and high solar activity periods will be discussed to demonstrate the model's performance.
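
    The linear-regression half of such a hybrid approach reduces to an ordinary least-squares fit of foF2 against a solar activity driver. The F10.7/foF2 values below are synthetic, for illustration only; the neural-network half is omitted.

```python
import numpy as np

# hypothetical daily F10.7 solar flux (sfu) vs. noon foF2 (MHz)
f107 = np.array([70, 90, 110, 130, 150, 170, 190], dtype=float)
foF2 = np.array([5.1, 6.0, 6.8, 7.9, 8.7, 9.8, 10.5])

# least-squares fit: foF2 ~ a + b * F10.7
X = np.column_stack([np.ones_like(f107), f107])
coef, *_ = np.linalg.lstsq(X, foF2, rcond=None)
pred = X @ coef
r2 = 1 - ((foF2 - pred) ** 2).sum() / ((foF2 - foF2.mean()) ** 2).sum()
```

    The positive slope reflects the well-known rise of foF2 with solar activity; a storm-time index would then model the residual departures from this quiet-time baseline.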

  11. Hierarchical Generalized Linear Models for the Analysis of Judge Ratings

    ERIC Educational Resources Information Center

    Muckle, Timothy J.; Karabatsos, George

    2009-01-01

    It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…

  12. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  14. Model-based denoising in diffusion-weighted imaging using generalized spherical deconvolution.

    PubMed

    Sperl, Jonathan I; Sprenger, Tim; Tan, Ek T; Menzel, Marion I; Hardy, Christopher J; Marinelli, Luca

    2017-02-28

    Diffusion MRI often suffers from low signal-to-noise ratio, especially for high b-values. This work proposes a model-based denoising technique to address this limitation. A generalization of the multi-shell spherical deconvolution model using a Richardson-Lucy algorithm is applied to noisy data. The reconstructed coefficients are then used in the forward model to compute denoised diffusion-weighted images (DWIs). The proposed method operates in the diffusion space and thus is complementary to image-based denoising methods. We demonstrate improved image quality on the DWIs themselves, maps of neurite orientation dispersion and density imaging, and diffusional kurtosis imaging (DKI), as well as reduced spurious peaks in deterministic tractography. For DKI in particular, we observe up to 50% error reduction and demonstrate high image quality using just 30 DWIs. This corresponds to greater than fourfold reduction in scan time if compared to the widely used 140-DWI acquisitions. We also confirm consistent performance in pathological data sets, namely in white matter lesions of a multiple sclerosis patient. The proposed denoising technique termed generalized spherical deconvolution has the potential of significantly improving image quality in diffusion MRI. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
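
    The Richardson-Lucy iteration named above is a multiplicative fixed-point scheme for nonnegative linear inverse problems. A 1-D sketch of the mechanism (the paper applies it to multi-shell spherical-deconvolution coefficients, not to this toy blur):

```python
import numpy as np

def richardson_lucy(y, A, iters=300):
    """Richardson-Lucy iteration for y = A @ x with nonnegative x.
    Multiplicative updates preserve nonnegativity at every step."""
    x = np.ones(A.shape[1])
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / A.sum(axis=0)
    return x

# toy 1-D blur mixing neighbouring coefficients (columns sum to 1)
A = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
x_true = np.array([1.0, 0.0, 2.0])
y = A @ x_true                       # noise-free "observed" data
x_hat = richardson_lucy(y, A)
denoised = A @ x_hat                 # model-based reconstruction of the data
```

    The final `A @ x_hat` step mirrors the paper's idea: project the recovered coefficients back through the forward model to obtain denoised data.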

  15. Generalized linear mixed models for meta-analysis.

    PubMed

    Platt, R W; Leroux, B G; Breslow, N

    1999-03-30

    We examine two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation. Penalized quasi-likelihood (PQL), an approximate inference technique for generalized linear mixed models, and a linear model fitted by weighted least squares to the observed log-odds ratios are used to estimate regression coefficients and dispersion parameters. Simulation results demonstrate that both methods perform adequate approximate inference under many conditions, but that neither method works well in the presence of highly sparse data. Under certain conditions with small cell frequencies the PQL method provides better inference.
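The weighted-least-squares baseline the authors compare against can be sketched as fixed-effect inverse-variance pooling of observed log-odds ratios (a minimal illustration, not the authors' code; the fixed-effect version omits the between-study random effects, and the 0.5 continuity correction is one common convention):

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log-odds ratio of one 2x2 table (events/non-events in two arms),
    with a 0.5 continuity correction, and its approximate variance."""
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    lor = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return lor, var

def pool_fixed_effect(tables):
    """Inverse-variance weighted mean of study-level log-odds ratios."""
    stats = [log_odds_ratio(*t) for t in tables]
    weights = [1 / v for _, v in stats]
    pooled = sum(w * lor for w, (lor, _) in zip(weights, stats)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

# Hypothetical (events, non-events) per arm for three studies.
tables = [(15, 85, 10, 90), (30, 170, 21, 179), (8, 42, 5, 45)]
pooled, se = pool_fixed_effect(tables)
print(round(pooled, 3), round(se, 3))
```

Sparse tables (very small cell counts) are exactly where this normal approximation breaks down, which is the regime the abstract flags for both methods.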

  16. A general theory of linear cosmological perturbations: bimetric theories

    NASA Astrophysics Data System (ADS)

    Lagos, Macarena; Ferreira, Pedro G.

    2017-01-01

    We implement the method developed in [1] to construct the most general parametrised action for linear cosmological perturbations of bimetric theories of gravity. Specifically, we consider perturbations around a homogeneous and isotropic background, and identify the complete form of the action invariant under diffeomorphism transformations, as well as the number of free parameters characterising this cosmological class of theories. We discuss, in detail, the case without derivative interactions, and compare our results with those found in massive bigravity.

  17. Electromagnetic axial anomaly in a generalized linear sigma model

    NASA Astrophysics Data System (ADS)

    Fariborz, Amir H.; Jora, Renata

    2017-06-01

    We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.

  18. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for credible risk classes are interpreted.
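The limited-fluctuation full credibility standard invoked here has a simple closed form: the expected claim count must satisfy n ≥ (z/k)², where z is the standard normal quantile for the chosen coverage probability and k the tolerated relative error. A minimal sketch (the 90%/5% values are the textbook convention, assumed here for illustration, not taken from the paper):

```python
import math

def full_credibility_standard(z, k):
    """Expected number of claims needed for full credibility of the
    claim frequency, assuming Poisson frequency: n >= (z / k)**2."""
    return (z / k) ** 2

# z = 1.645 is the two-sided 90% normal quantile; k = 5% tolerance.
n_full = full_credibility_standard(1.645, 0.05)
print(math.ceil(n_full))  # the classical standard of 1083 expected claims
```

Classes below the standard receive partial credibility, conventionally Z = sqrt(n / n_full), which is where the GLM estimates enter as the complement-of-credibility component.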

  19. Residuals analysis of the generalized linear models for longitudinal data.

    PubMed

    Chang, Y C

    2000-05-30

    The generalized estimating equation (GEE) method, one of the generalized linear models for longitudinal data, has been used widely in medical research. However, the related sensitivity analysis problem has not been explored intensively. One possible reason for this is the correlated structure within the same subject. We showed that the conventional residual plots for model diagnosis in longitudinal data could mislead a researcher into trusting the fitted model. A non-parametric method, the Wald-Wolfowitz run test, was proposed to check the residual plots both quantitatively and graphically. The rationale proposed in this paper is well illustrated with two real clinical studies in Taiwan.
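The Wald-Wolfowitz run test on residual signs can be sketched as follows (a generic implementation of the classical test, not the authors' code): under randomness, the number of runs R among n₁ positive and n₂ negative residuals has mean 2n₁n₂/(n₁+n₂) + 1, and too few runs signal clustered, i.e. correlated, residuals.

```python
import math

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of residuals.
    Returns (number_of_runs, z_statistic); a large negative z means
    fewer runs than expected under randomness, i.e. clustering."""
    signs = [r > 0 for r in residuals if r != 0]
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return runs, (runs - mu) / math.sqrt(var)

# Perfectly alternating residuals give the maximum possible run count.
r, z = runs_test([1, -1, 1, -1, 1, -1, 1, -1])
print(r)  # 8 runs
```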

  20. Linear spin-2 fields in most general backgrounds

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael

    2016-04-01

    We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.

  1. Generalized in vitro-in vivo relationship (IVIVR) model based on artificial neural networks

    PubMed Central

    Mendyk, Aleksander; Tuszyński, Paweł K; Polak, Sebastian; Jachowicz, Renata

    2013-01-01

    Background The aim of this study was to develop a generalized in vitro-in vivo relationship (IVIVR) model based on in vitro dissolution profiles together with quantitative and qualitative composition of dosage formulations as covariates. Such a model would be of substantial aid in the early stages of development of a pharmaceutical formulation, when no in vivo results are yet available and it is impossible to create a classical in vitro-in vivo correlation (IVIVC)/IVIVR. Methods Chemoinformatics software was used to compute the molecular descriptors of drug substances (ie, active pharmaceutical ingredients) and excipients. The data were collected from the literature. Artificial neural networks were used as the modeling tool. The training process was carried out using the 10-fold cross-validation technique. Results The database contained 93 formulations with 307 inputs initially, later limited to 28 in the course of a sensitivity analysis. The four best models were introduced into the artificial neural network ensemble. Complete in vivo profiles were predicted accurately for 37.6% of the formulations. Conclusion It has been shown that artificial neural networks can be an effective predictive tool for constructing IVIVR in an integrated generalized model for various formulations. Because IVIVC/IVIVR is classically conducted for 2–4 formulations and with a single active pharmaceutical ingredient, the approach described here is unique in that it incorporates various active pharmaceutical ingredients and dosage forms into a single model. Thus, preliminary IVIVC/IVIVR can be available without in vivo data, which is impossible using current IVIVC/IVIVR procedures. PMID:23569360

  2. Comparative Study of Algorithms for Automated Generalization of Linear Objects

    NASA Astrophysics Data System (ADS)

    Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.

    2014-11-01

    Automated generalization, rooted in conventional cartography, has become an increasing concern in both geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, as it is impossible for human beings to observe the Earth and its processes without decreasing their scale. To obtain optimal results, cartographers and map-making agencies develop sets of rules and constraints; however, these rules remain under consideration and are the topic of ongoing research. Reducing map-generation time and adding objectivity is possible by developing automated map generalization algorithms (McMaster and Shea, 1988). Modifying the scale has traditionally been a manual process that requires the knowledge of an expert cartographer and depends on the experience of the user, which makes the process very subjective, as different users may generate different maps from the same requirements. Automating generalization based on cartographic rules and constraints, by contrast, can give consistent results; developing an automated system for map generation is also a demand of this rapidly changing world. The research we have conducted considers only generalization of roads, as the road network is one of the indispensable parts of a map. Dehradun city, in the Uttarakhand state of India, was selected as the study area. The study carried out a comparative survey of the generalization software, operations and algorithms currently available, and also considers the advantages and drawbacks of the existing software used worldwide. The research concludes with the development of a road network generalization tool and a final generalized road map of the study area, which explores the use of the open-source Python programming language and attempts to compare different road network generalization algorithms. Thus, the paper discusses alternative solutions for automated generalization of linear objects using GIS technologies.
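As a concrete illustration of the kind of line-generalization algorithm such tools compare, here is the classical Douglas-Peucker simplification (a textbook sketch, not the tool developed in the paper): vertices closer than a tolerance to the chord between the segment's endpoints are dropped recursively.

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    seg = math.hypot(dx, dy)
    if seg == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / seg

def douglas_peucker(points, tol):
    """Recursively keep only vertices farther than tol from the chord."""
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], tol)
    return left[:-1] + douglas_peucker(points[i:], tol)

# A hypothetical digitized road centerline with one sharp bend.
road = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(road, 1.0))
```

The single `tol` parameter is the scale knob: larger tolerances yield coarser, smaller-scale representations of the same road.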

  3. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver

    2015-06-01

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others, which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
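For orientation, the linear generalized Langevin equation referred to above has the standard form (generic notation assumed here, not taken from the paper):

```latex
m\,\dot{v}(t) = -\int_0^t K(t-\tau)\,v(\tau)\,\mathrm{d}\tau + R(t),
\qquad
\langle R(t)\,R(0)\rangle = k_B T\,K(t),
```

so the entire system-bath interaction is encoded in the memory kernel K(t), and a frequency-domain parametrization amounts to fitting its Fourier transform rather than K(t) itself.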

  4. Extracting Embedded Generalized Networks from Linear Programming Problems.

    DTIC Science & Technology

    1984-09-01

    Extracting Embedded Generalized Networks from Linear Programming Problems, by Gerald G. Brown, Richard D. McBride, and R. Kevin Wood. Naval Postgraduate School, Monterey, California 93943; University of Southern California, Los Angeles.

  5. Generalization of continuous-variable quantum cloning with linear optics

    SciTech Connect

    Zhai Zehui; Guo Juan; Gao Jiangrui

    2006-05-15

    We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.

  6. Generalized space and linear momentum operators in quantum mechanics

    SciTech Connect

    Costa, Bruno G. da

    2014-06-15

    We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which implies the definition of a Hermitian deformed linear momentum operator p̂_q and its canonically conjugate deformed position operator x̂_q. A canonical transformation leads the Hamiltonian of a position-dependent-mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is shown as an instance. Uncertainty and correspondence principles are analyzed.

  7. Variable-Speed Wind Turbine Controller Systematic Design Methodology: A Comparison of Non-Linear and Linear Model-Based Designs

    SciTech Connect

    Hand, M. M.

    1999-07-30

    Variable-speed, horizontal axis wind turbines use blade-pitch control to meet specified objectives for three regions of operation. This paper focuses on controller design for the constant power production regime. A simple, rigid, non-linear turbine model was used to systematically perform trade-off studies between two performance metrics. Minimization of both the deviation of the rotor speed from the desired speed and the motion of the actuator is desired. The robust nature of the proportional-integral-derivative (PID) controller is illustrated, and optimal operating conditions are determined. Because numerous simulation runs may be completed in a short time, the relationship of the two opposing metrics is easily visualized. Traditional controller design generally consists of linearizing a model about an operating point. This step was taken for two different operating points, and the systematic design approach was used. A comparison of the optimal regions selected using the non-linear model and the two linear models shows similarities. The linearization point selection does, however, affect the turbine performance slightly. Exploitation of the simplicity of the model allows surfaces consisting of operation under a wide range of gain values to be created. This methodology provides a means of visually observing turbine performance based upon the two metrics chosen for this study. Design of a PID controller is simplified, and it is possible to ascertain the best possible combination of controller parameters. The wide, flat surfaces indicate that a PID controller is very robust in this variable-speed wind turbine application.

  8. General quantum constraints on detector noise in continuous linear measurements

    NASA Astrophysics Data System (ADS)

    Miao, Haixing

    2017-01-01

    In quantum sensing and metrology, an important class of measurement is the continuous linear measurement, in which the detector is coupled to the system of interest linearly and continuously in time. One key aspect involved is the quantum noise of the detector, arising from quantum fluctuations in the detector input and output. It determines how fast we acquire information about the system and also influences the system evolution in terms of measurement backaction. We therefore often categorize it as the so-called imprecision noise and quantum backaction noise. There is a general Heisenberg-like uncertainty relation that constrains the magnitude of and the correlation between these two types of quantum noise. The main result of this paper is to show that, when the detector becomes ideal, i.e., at the quantum limit with minimum uncertainty, not only does the uncertainty relation take the equal sign as expected, but there are also two new equalities. This general result is illustrated by using the typical cavity QED setup with the system being either a qubit or a mechanical oscillator. Particularly, the dispersive readout of a qubit state, and the measurement of mechanical motional sideband asymmetry are considered.
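The Heisenberg-like constraint mentioned here is conventionally written in terms of symmetrized noise spectral densities: imprecision noise S_ZZ, backaction force noise S_FF, and their cross-correlation S_ZF (standard notation from the continuous-measurement literature, assumed for illustration):

```latex
\bar{S}_{ZZ}(\omega)\,\bar{S}_{FF}(\omega)
- \bigl|\bar{S}_{ZF}(\omega)\bigr|^{2}
\;\geq\; \frac{\hbar^{2}}{4}.
```

The paper's result concerns the ideal-detector case, in which this bound is saturated.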

  9. Generalized linear mixed model for segregation distortion analysis.

    PubMed

    Zhan, Haimao; Xu, Shizhong

    2011-11-11

    Segregation distortion is a phenomenon in which the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F(2) mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Not only can the method be used to detect segregation distortion loci, but it can also be used for mapping quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals.

  10. Generalized linear mixed model for segregation distortion analysis

    PubMed Central

    2011-01-01

    Background Segregation distortion is a phenomenon in which the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. Results We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F2 mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Conclusions Not only can the method be used to detect segregation distortion loci, but it can also be used for mapping quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals. PMID:22078575
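Before any locus-level modelling, segregation distortion at a single marker is conventionally screened with a chi-square test of observed genotype counts against the expected Mendelian ratio; a minimal sketch for an F2 cross with its 1:2:1 expectation (a generic screen, independent of the authors' GLMM):

```python
def chi_square_segregation(observed, ratio=(1, 2, 1)):
    """Chi-square statistic for genotype counts vs a Mendelian ratio."""
    n = sum(observed)
    total = sum(ratio)
    expected = [n * r / total for r in ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical AA/Aa/aa counts in an F2 family; with df = 2 the 5%
# critical value is 5.991.
stat = chi_square_segregation([30, 50, 20])
print(round(stat, 3))  # 2.0 -> no significant distortion at this marker
```

The GLMM approach in the abstract goes beyond such marker-by-marker tests by jointly modelling all viability selection loci.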

  11. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    ERIC Educational Resources Information Center

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  12. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
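The "explosion" step described above can be sketched as follows (a generic illustration with hypothetical names; %PCFrailty itself is a SAS macro): each subject's follow-up time is split at the piece boundaries of the baseline hazard, and each resulting row carries the event indicator for its own interval plus a log-exposure offset for the Poisson likelihood.

```python
import math

def explode(time, event, cuts):
    """Split one subject's (time, event) into piecewise-exponential rows.
    cuts are the left endpoints of the baseline-hazard pieces, e.g.
    [0, 1, 2]; returns tuples (piece, exposure, event_in_piece, offset)."""
    rows = []
    for j, start in enumerate(cuts):
        end = cuts[j + 1] if j + 1 < len(cuts) else math.inf
        if time <= start:
            break
        exposure = min(time, end) - start
        died_here = int(event and time <= end)
        rows.append((j, exposure, died_here, math.log(exposure)))
    return rows

# A subject who dies at t = 2.5 with three hazard pieces [0,1), [1,2), [2,inf).
for row in explode(time=2.5, event=1, cuts=[0, 1, 2]):
    print(row)
```

A Poisson GLMM is then fitted to `event_in_piece` with piece-specific intercepts, the offset, and a subject- or cluster-level random effect playing the role of the log-normal frailty.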

  13. A new family of gauges in linearized general relativity

    NASA Astrophysics Data System (ADS)

    Esposito, Giampiero; Stornaiolo, Cosimo

    2000-05-01

    For vacuum Maxwell theory in four dimensions, a supplementary condition exists (due to Eastwood and Singer) which is invariant under conformal rescalings of the metric, in agreement with the conformal symmetry of the Maxwell equations. Thus, starting from the de Donder gauge, which is not conformally invariant but is the gravitational counterpart of the Lorenz gauge, one can consider, led by formal analogy, a new family of gauges in general relativity, which involve fifth-order covariant derivatives of metric perturbations. The admissibility of such gauges in the classical theory is first proven in the cases of linearized theory about flat Euclidean space or flat Minkowski spacetime. In the former, the general solution of the equation for the fulfillment of the gauge condition after infinitesimal diffeomorphisms involves a 3-harmonic 1-form and an inverse Fourier transform. In the latter, one needs instead the kernel of powers of the wave operator, and a contour integral. The analysis is also used to put restrictions on the dimensionless parameter occurring in the DeWitt supermetric, while the proof of admissibility is generalized to a suitable class of curved Riemannian backgrounds. Eventually, a non-local construction of the tensor field is obtained which makes it possible to achieve conformal invariance of the above gauges.

  14. On homogeneous second order linear general quantum difference equations.

    PubMed

    Faried, Nashat; Shehata, Enas M; El Zafarani, Rasha M

    2017-01-01

    In this paper, we prove the existence and uniqueness of solutions of the β-Cauchy problem of second order β-difference equations [Formula: see text] [Formula: see text], in a neighborhood of the unique fixed point [Formula: see text] of the strictly increasing continuous function β, defined on an interval [Formula: see text]. These equations are based on the general quantum difference operator [Formula: see text], which is defined by [Formula: see text], [Formula: see text]. We also construct a fundamental set of solutions for the second order linear homogeneous β-difference equations when the coefficients are constants and study the different cases of the roots of their characteristic equations. Finally, we derive the Euler-Cauchy β-difference equation.

  15. Optimization in generalized linear models: A case study

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina

    2016-06-01

    The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness-of-fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the parameter estimates with two alternative methods: a derivative-based optimization method, namely the BFGS method, one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method, and can be good alternatives for finding the parameter estimates of a GLM.
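For a canonical-link GLM, Fisher scoring coincides with Newton's method on the log-likelihood. A minimal intercept-only logistic example (a pure illustration, not the case-study code) makes the update rule concrete: β ← β + Σ(yᵢ − pᵢ) / Σ pᵢ(1 − pᵢ).

```python
import math

def fisher_scoring_logit(y, iters=25):
    """Intercept-only logistic regression fitted by Fisher scoring.
    Converges to logit(mean(y))."""
    beta = 0.0
    for _ in range(iters):
        p = 1 / (1 + math.exp(-beta))
        score = sum(yi - p for yi in y)   # gradient of the log-likelihood
        info = len(y) * p * (1 - p)       # Fisher information
        beta += score / info              # scoring update
    return beta

y = [1, 1, 1, 0]                          # sample mean 0.75
print(round(fisher_scoring_logit(y), 4))  # ~ log(3) = logit(0.75)
```

BFGS would replace the exact Fisher information with a quasi-Newton approximation built from successive gradients; PSwarm needs no derivatives at all.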

  16. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
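The role of the shift σ can be stated compactly: the shifted generalized eigenproblem has the same eigenvectors as the original (standard notation, assumed here rather than quoted from the paper):

```latex
K x = \lambda M x
\;\Longleftrightarrow\;
(K - \sigma M)\,x = (\lambda - \sigma)\,M x,
```

so each subspace-iteration step solves the banded system (K − σM) X_{k+1} = M X_k, and the eigenvalues nearest σ converge fastest, while the decay of the shifted matrix's inverse governs how independently the subsystems can be treated on separate processors.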

  17. Modeling local item dependence with the hierarchical generalized linear model.

    PubMed

    Jiao, Hong; Wang, Shudong; Kamata, Akihito

    2005-01-01

    Local item dependence (LID) can emerge when the test items are nested within common stimuli or item groups. This study proposes a three-level hierarchical generalized linear model (HGLM) to model LID when LID is due to such contextual effects. The proposed three-level HGLM was examined by analyzing simulated data sets and was compared with the Rasch-equivalent two-level HGLM that ignores such a nested structure of test items. The results demonstrated that the proposed model could capture LID and estimate its magnitude. Also, the two-level HGLM resulted in larger mean absolute differences between the true and the estimated item difficulties than those from the proposed three-level HGLM. Furthermore, it was demonstrated that the proposed three-level HGLM estimated the ability distribution variance unaffected by the LID magnitude, while the two-level HGLM with no LID consideration increasingly underestimated the ability variance as the LID magnitude increased.

  18. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large…

  20. Linear and non-linear heart rate metrics for the assessment of anaesthetists' workload during general anaesthesia.

    PubMed

    Martin, J; Schneider, F; Kowalewskij, A; Jordan, D; Hapfelmeier, A; Kochs, E F; Wagner, K J; Schulz, C M

    2016-12-01

    Excessive workload may impact the anaesthetists' ability to adequately process information during clinical practice in the operation room and may result in inaccurate situational awareness and performance. This exploratory study investigated heart rate (HR), linear and non-linear heart rate variability (HRV) metrics and subjective ratings scales for the assessment of workload associated with the anaesthesia stages induction, maintenance and emergence. HR and HRV metrics were calculated based on five min segments from each of the three anaesthesia stages. The area under the receiver operating characteristics curve (AUC) of the investigated metrics was calculated to assess their ability to discriminate between the stages of anaesthesia. Additionally, a multiparametric approach based on logistic regression models was performed to further evaluate whether linear or non-linear heart rate metrics are suitable for the assessment of workload. Mean HR and several linear and non-linear HRV metrics including subjective workload ratings differed significantly between stages of anaesthesia. Permutation Entropy (PeEn, AUC=0.828) and mean HR (AUC=0.826) discriminated best between the anaesthesia stages induction and maintenance. In the multiparametric approach using logistic regression models, the model based on non-linear heart rate metrics provided a higher AUC compared with the models based on linear metrics. In this exploratory study based on short ECG segment analysis, PeEn and HR seem to be promising to separate workload levels between different stages of anaesthesia. The multiparametric analysis of the regression models favours non-linear heart rate metrics over linear metrics. © The Author 2016. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
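Permutation Entropy (PeEn), the best-discriminating metric reported here, is the Shannon entropy of ordinal patterns of length m in the heart-rate series (the Bandt-Pompe method). A minimal sketch with embedding order m = 3 (a generic implementation, not the study's code; ties are ignored for simplicity):

```python
import math
from collections import Counter

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of series x with embedding order m:
    0 for a strictly monotonic series, 1 when all m! ordinal patterns
    occur equally often."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k]))
        for i in range(len(x) - m + 1)
    )
    total = sum(patterns.values())
    # Shannon entropy written as sum p * log(1/p) to keep the zero positive.
    h = sum(c / total * math.log(total / c) for c in patterns.values())
    return h / math.log(math.factorial(m))

print(permutation_entropy([1, 2, 3, 4, 5, 6]))  # 0.0: one ordinal pattern
```

Applied to five-minute RR-interval segments, a single number per segment is obtained, which is what the AUC comparison between anaesthesia stages operates on.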

  1. Process Setting through General Linear Model and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Senjuntichai, Angsumalin

    2010-10-01

    The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry by reducing defectives. At the 95% confidence level, regression analysis identifies the sealing temperature and the temperatures of the upper and lower crimpers as significant factors for the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. The general linear model (GLM) suggests settings for the sealing temperature and the upper and lower crimper temperatures of 185, 85 and 85 °C, respectively, while the response surface method (RSM) gives optimal process conditions of 186, 89 and 88 °C. Because the two methods make different assumptions about the relationship between the percentage of defectives and the three temperature parameters, their suggested conditions differ slightly. The estimated percentage of defectives, 5.51% under the GLM condition and 4.62% under the RSM condition, are not significantly different. At the 95% confidence level, however, the percentage of defectives under the RSM condition can be lower than under the GLM condition by approximately 2.16 percentage points, albeit with wider variation. Finally, the conditions suggested by GLM and RSM reduce the percentage of defectives by 55.81% and 62.95%, respectively.
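The RSM step described above amounts to fitting a second-order response surface and locating its stationary point. A minimal one-factor sketch; the temperatures and defect percentages below are hypothetical illustrations, not the study's data:

```python
import numpy as np

# Hypothetical defect percentages observed at three sealing temperatures (°C);
# the study's three-factor, three-level design is reduced to one factor here.
temps = np.array([180.0, 185.0, 190.0])
defects = np.array([7.2, 5.5, 6.4])   # illustrative values only

# Second-order response surface in one factor: defects ≈ a*T^2 + b*T + c
a, b, c = np.polyfit(temps, defects, deg=2)
t_opt = -b / (2.0 * a)               # stationary point of the fitted parabola
d_opt = np.polyval([a, b, c], t_opt)
```

With a convex fit (a > 0) the stationary point is the predicted defect-minimizing temperature, the one-dimensional analogue of the 186/89/88 °C optimum reported above.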

  2. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
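For a known non-spherical error covariance V, the ML estimator of the GLM coefficients reduces to generalized least squares; the iterative VB/VML/ReML machinery the paper derives estimates V's parameters as well. A minimal sketch with simulated AR(1) noise (the AR(1) structure and all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 200, 0.5

# Design matrix (intercept + one regressor) and true effects.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])

# AR(1) error covariance V, a common non-spherical structure in fMRI noise.
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])
L = np.linalg.cholesky(V)
y = X @ beta_true + L @ rng.normal(size=n)

# GLS / ML estimator for known V: beta = (X' V^-1 X)^-1 X' V^-1 y
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
```

Ignoring V (ordinary least squares) would still be unbiased here but less efficient, which is why first-level fMRI analyses model the serial correlation.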

  3. Generalized linear model for estimation of missing daily rainfall data

    NASA Astrophysics Data System (ADS)

    Rahman, Nurul Aishah; Deni, Sayang Mohd; Ramli, Norazan Mohamed

    2017-04-01

    The analysis of rainfall data without missing values is vital in many applications, including climatological, hydrological and meteorological studies. Missing data are a serious concern, since they can introduce bias and lead to misleading conclusions. Five imputation methods (simple arithmetic average, the normal ratio method, inverse distance weighting, correlation coefficient weighting and the geographical coordinate method) are commonly used to estimate missing data; however, these methods ignore the seasonality of the rainfall record, which could otherwise yield more reliable estimates. This study therefore aims to estimate missing daily rainfall values using a generalized linear model based on the gamma distribution, with a Fourier series as the smoothing technique for seasonality. Forty years of daily rainfall data (1975-2014) from seven stations in the Kelantan region were selected for the analysis. The findings indicate that the imputation method provides more accurate estimates when seasonality in the dataset is considered, as judged by the smallest mean absolute error, root mean squared error and coefficient-of-variation root mean squared error.
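Of the baseline methods listed, inverse distance weighting is the easiest to sketch: a missing station value is estimated as a distance-weighted average of the neighbouring stations. A toy example with hypothetical coordinates and rainfall amounts (not the Kelantan data):

```python
import numpy as np

# Hypothetical station coordinates (km) and one day's rainfall (mm);
# station 0 has a missing value to be imputed from its neighbours.
coords = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0], [5.0, 0.0]])
rain   = np.array([np.nan,      12.0,      20.0,       8.0])

target = coords[0]
known = ~np.isnan(rain)
d = np.linalg.norm(coords[known] - target, axis=1)
w = 1.0 / d ** 2                      # inverse squared-distance weights
estimate = np.sum(w * rain[known]) / np.sum(w)
```

The estimate always lies between the minimum and maximum neighbouring values; the seasonality-aware gamma GLM proposed in the abstract is designed to improve on exactly this kind of baseline.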

  4. A general protocol to afford enantioenriched linear homoprenylic amines.

    PubMed

    Bosque, Irene; Foubelo, Francisco; Gonzalez-Gomez, Jose C

    2013-11-21

    The reaction of a readily obtained chiral branched homoprenylammonium salt with a range of aldehydes, including aliphatic substrates, affords the corresponding linear isomers in good yields and enantioselectivities.

  5. A general approach to mixed effects modeling of residual variances in generalized linear mixed models

    PubMed Central

    Kizilkaya, Kadir; Tempelman, Robert J

    2005-01-01

    We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data was generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567

  6. Generalizing on Multiple Grounds: Performance Learning in Model-Based Troubleshooting

    DTIC Science & Technology

    1989-02-01

    (OCR fragment of the report's table of contents: Chapter 5, "Similar = Same Fault Hypothesis", with sections 5.1 Fault Hypotheses, 5.2 Generalizing Fault Envisionments, 5.2.1 Utility of EBG on Envisionments, and 5.3 Lifting Fault Hypotheses; Figure 5.1 shows the Polybox circuit.)

  7. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model

    PubMed Central

    Ma, Rongfei

    2015-01-01

    In this paper, ammonia quantitative analysis based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model was proposed. Al plate anodic gas-ionization sensor was used to obtain the current-voltage (I-V) data. Measurement data was processed by non-linear bistable dynamics model. Results showed that the proposed method quantitatively determined ammonia concentrations. PMID:25975362

  9. Enhancing Retrieval with Hyperlinks: A General Model Based on Propositional Argumentation Systems.

    ERIC Educational Resources Information Center

    Picard, Justin; Savoy, Jacques

    2003-01-01

    Discusses the use of hyperlinks for improving information retrieval on the World Wide Web and proposes a general model for using hyperlinks based on Probabilistic Argumentation Systems. Topics include propositional logic, knowledge, and uncertainty; assumptions; using hyperlinks to modify document score and rank; and estimating the popularity of a…

  10. Generalization of the Aoki-Yoshikawa sectoral productivity model based on extreme physical information principle

    NASA Astrophysics Data System (ADS)

    Bednarek, Ilona; Makowski, Marcin; Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek

    2015-06-01

    This paper presents a continuous variable generalization of the Aoki-Yoshikawa sectoral productivity model. Information theoretical methods from the Frieden-Soffer extreme physical information statistical estimation methodology were used to construct exact solutions. Both approaches coincide in first order approximation. The approach proposed here can be successfully applied in other fields of research.

  11. Connections between Generalizing and Justifying: Students' Reasoning with Linear Relationships

    ERIC Educational Resources Information Center

    Ellis, Amy B.

    2007-01-01

    Research investigating algebra students' abilities to generalize and justify suggests that they experience difficulty in creating and using appropriate generalizations and proofs. Although the field has documented students' errors, less is known about what students do understand to be general and convincing. This study examines the ways in which…

  12. Understanding General and Specific Connections between Psychopathology and Marital Distress: A Model Based Approach

    PubMed Central

    South, Susan C.; Krueger, Robert F.; Iacono, William G.

    2011-01-01

    Marital distress is linked to many types of mental disorders; however, no study to date has examined this link in the context of empirically-based hierarchical models of psychopathology. There may be general associations between low levels of marital quality and broad groups of comorbid psychiatric disorders as well as links between marital adjustment and specific types of mental disorders. The authors examined this issue in a sample (N=929 couples) of currently married couples from the Minnesota Twin Family Study who completed self-report measures of relationship adjustment and were also assessed for common mental disorders. Structural equation modeling indicated that a) higher standing on latent factors of internalizing (INT) and externalizing (EXT) psychopathology was associated with lower standing on latent factors of general marital adjustment for both husbands and wives, b) the magnitude of these effects was similar across husbands and wives, and c) there were no residual associations between any specific mental disorder and overall relationship adjustment after controlling for the INT and EXT factors. These findings point to the utility of hierarchical models in understanding psychopathology and its correlates. Much of the link between mental disorder and marital distress operated at the level of broad spectrums of psychopathological variation (i.e., higher levels of marital distress were associated with disorder comorbidity), suggesting that the temperamental core of these spectrums contributes not only to symptoms of mental illness but to the behaviors that lead to impaired marital quality in adulthood. PMID:21942335

  13. A General Linear Method for Equating with Small Samples

    ERIC Educational Resources Information Center

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
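Linear observed-score equating of the kind this article builds on matches the first two moments of the two forms (the mean-sigma method). A minimal sketch with simulated scores; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical observed scores on two small-sample test forms.
form_x = rng.normal(50, 10, size=40)
form_y = rng.normal(53, 12, size=40)

# Mean-sigma linear equating: choose a, b so that a*X + b matches
# the mean and standard deviation of Y.
a = form_y.std(ddof=1) / form_x.std(ddof=1)
b = form_y.mean() - a * form_x.mean()
equate = lambda x: a * x + b

x_equated = equate(form_x)
```

With small samples the slope and intercept are themselves noisy estimates, which is the motivation for methods that fix one or both by assumption.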

  14. On the Feasibility of a Generalized Linear Program

    DTIC Science & Technology

    1989-03-01

    …generalized linear program by applying the same algorithm to a "phase-one" problem, without requiring that the initial basic feasible solution to the latter be non-degenerate.

  15. Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs

    ERIC Educational Resources Information Center

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-01-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…

  18. A Heuristic Ceiling Point Algorithm for General Integer Linear Programming

    DTIC Science & Technology

    1988-11-01

    …narrowly satisfies the ith constraint: taking a unit step from x toward the ith constraining hyperplane in a direction parallel to some coordinate… Business, Stanford University, Stanford, Calif., December 1964. Hillier, F., "Efficient Heuristic Procedures for Integer Linear Programming with an Interior…"

  19. a Unified Gravity-Electroweak Model Based on a Generalized Yang-Mills Framework

    NASA Astrophysics Data System (ADS)

    Hsu, Jong-Ping

    Gravitational and electroweak interactions can be unified in analogy with the unification in the Weinberg-Salam theory. The Yang-Mills framework is generalized to include spacetime translational group T(4), whose generators Tμ ( = ∂/∂xμ) do not have constant matrix representations. By gauging T(4) × SU(2) × U(1) in flat spacetime, we have a new tensor field ϕμν which universally couples to all particles and anti-particles with the same constant g, which has the dimension of length. In this unified model, the T(4) gauge symmetry dictates that all wave equations of fermions, massive bosons and the photon in flat spacetime reduce to a Hamilton-Jacobi equation with the same "effective Riemann metric tensor" in the geometric-optics limit. Consequently, the results are consistent with experiments. We demonstrated that the T(4) gravitational gauge field can be quantized in inertial frames.

  20. Hydraulic fracturing model based on the discrete fracture model and the generalized J integral

    NASA Astrophysics Data System (ADS)

    Liu, Z. Q.; Liu, Z. F.; Wang, X. H.; Zeng, B.

    2016-08-01

    The hydraulic fracturing technique is an effective stimulation for low permeability reservoirs. In fracturing models, one key point is to accurately calculate the flux across the fracture surface and the stress intensity factor. To achieve high precision, the discrete fracture model is recommended to calculate the flux. Using the generalized J integral, the present work obtains an accurate simulation of the stress intensity factor. Based on the above factors, an alternative hydraulic fracturing model is presented. Examples are included to demonstrate the reliability of the proposed model and its ability to model the fracture propagation. Subsequently, the model is used to describe the relationship between the geometry of the fracture and the fracturing equipment parameters. The numerical results indicate that the working pressure and the pump power will significantly influence the fracturing process.

  1. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by variations in ground topography, yet most previous studies modeling the microwave backscattering signatures of forested areas have been carried out over relatively flat terrain. Therefore, a new algorithm is proposed for forest height estimation over mountain forest areas using a general model-based decomposition (GMBD) of PolInSAR images. This algorithm enables the retrieval not only of the forest parameters but also of the magnitude associated with each scattering mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which previous model-based decompositions have not achieved. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and with ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  2. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, A.; Edwards, T.C.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling.
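Ridge regression, mentioned above as an alternative to stepwise predictor selection, has a simple closed form. A minimal sketch on simulated data; the penalty λ = 1 and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

lam = 1.0
# Ridge: beta = (X'X + lam*I)^-1 X'y, shrinking coefficients toward zero,
# which stabilizes estimates when predictors are many or collinear.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

The ridge solution always has a smaller norm than the OLS solution; the bias it introduces is traded for reduced variance, the same trade-off that motivates its use over stepwise selection in ecological GLMs.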

  3. An individual-specific gait pattern prediction model based on generalized regression neural networks.

    PubMed

    Luu, Trieu Phat; Low, K H; Qu, Xingda; Lim, H B; Hoon, K H

    2014-01-01

    Robotics is gaining its popularity in gait rehabilitation. Gait pattern planning is important to ensure that the gait patterns induced by robotic systems are tailored to each individual and varying walking speed. Most research groups planned gait patterns for their robotics systems based on Clinical Gait Analysis (CGA) data. The major problem with the method using the CGA data is that it cannot accommodate inter-subject differences. In addition, CGA data is limited to only one walking speed as per the published data. The objective of this work was to develop an individual-specific gait pattern prediction model for gait pattern planning in the robotic gait rehabilitation systems. The waveforms of lower limb joint angles in the sagittal plane during walking were obtained with a motion capture system. Each waveform was represented and reconstructed by a Fourier coefficient vector which consisted of eleven elements. Generalized regression neural networks (GRNNs) were designed to predict Fourier coefficient vectors from given gait parameters and lower limb anthropometric data. The generated waveforms from the predicted Fourier coefficient vectors were compared to the actual waveforms and CGA waveforms by using the assessment parameters of correlation coefficients, mean absolute deviation (MAD) and threshold absolute deviation (TAD). The results showed that lower limb joint angle waveforms generated by the gait pattern prediction model were closer to the actual waveforms compared to the CGA waveforms.
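The eleven-element waveform representation described above (a mean term plus cosine/sine pairs for five harmonics) can be sketched directly. The joint-angle waveform below is synthetic, and the GRNN regression step that predicts such vectors from gait parameters is omitted:

```python
import numpy as np

# A hypothetical joint-angle waveform over one gait cycle (0..100%).
t = np.linspace(0.0, 1.0, 200, endpoint=False)
angle = 20 * np.sin(2 * np.pi * t) + 5 * np.cos(4 * np.pi * t) + 10

def fourier_encode(y, n_harm=5):
    """11-element vector: mean plus cosine/sine pairs for 5 harmonics."""
    c = np.fft.rfft(y) / len(y)
    vec = [c[0].real]
    for k in range(1, n_harm + 1):
        vec += [2 * c[k].real, -2 * c[k].imag]  # cos and sin amplitudes
    return np.array(vec)

def fourier_decode(vec, t, n_harm=5):
    y = np.full_like(t, vec[0])
    for k in range(1, n_harm + 1):
        a, b = vec[2 * k - 1], vec[2 * k]
        y += a * np.cos(2 * np.pi * k * t) + b * np.sin(2 * np.pi * k * t)
    return y

vec = fourier_encode(angle)
recon = fourier_decode(vec, t)
err = np.max(np.abs(recon - angle))
```

A band-limited waveform is reconstructed exactly from its coefficient vector, which is what makes the compact 11-element encoding a convenient regression target.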

  4. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    PubMed

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communication of digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse spatial resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine spatial resolution downscaled maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and the fine spatial resolution base maps is smaller than the difference between the coarse spatial resolution base maps and the fine spatial resolution base maps. An appropriate and economical strategy for promoting the DSM technique in smallholder farms is to develop relatively coarse spatial resolution soil prediction maps, or to utilize available coarse spatial resolution soil maps at the regional scale, and to disaggregate these to fine spatial resolution downscaled soil maps at the farm scale.

  5. Analysis and Regulation of Nonlinear and Generalized Linear Systems.

    DTIC Science & Technology

    1985-09-06

    But this intuition is based on a linearized analysis, and may well be too conservative, or even totally inappropriate, for a particular (global...in the field of stochastic estimation. Given a time series, it is often possible to compute sufficient statistics of the associated process...and dynamically updating sufficient statistics with finite resources had received almost no attention in the literature, and turns out to be

  6. Generalizing a categorization of students' interpretations of linear kinematics graphs

    NASA Astrophysics Data System (ADS)

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-06-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.

  7. Spikelet structure and development in Cyperoideae (Cyperaceae): a monopodial general model based on ontogenetic evidence

    PubMed Central

    Vrijdaghs, Alexander; Reynders, Marc; Larridon, Isabel; Muasya, A. Muthama; Smets, Erik; Goetghebeur, Paul

    2010-01-01

    Background and Aims In Cyperoideae, one of the two subfamilies in Cyperaceae, unresolved homology questions about spikelets remained. This was particularly the case in taxa with distichously organized spikelets and in Cariceae, a tribe with complex compound inflorescences comprising male (co)florescences and deciduous female single-flowered lateral spikelets. Using ontogenetic techniques, a wide range of taxa were investigated, including some controversial ones, in order to find morphological arguments to understand the nature of the spikelet in Cyperoideae. This paper presents a review of both new ontogenetic data and current knowledge, discussing a cyperoid, general, monopodial spikelet model. Methods Scanning electron microscopy and light microscopy were used to examine spikelets of 106 species from 33 cyperoid genera. Results Ontogenetic data presented allow a consistent cyperoid spikelet model to be defined. Scanning and light microscopic images in controversial taxa such as Schoenus nigricans, Cariceae and Cypereae are interpreted accordingly. Conclusions Spikelets in all species studied consist of an indeterminate rachilla, and one to many spirally to distichously arranged glumes, each subtending a flower or empty. Lateral spikelets are subtended by a bract and have a spikelet prophyll. In distichously organized spikelets, combined concaulescence of the flowers and epicaulescence (a newly defined metatopic displacement) of the glumes has caused interpretational controversy in the past. In Cariceae, the male (co)florescences are terminal spikelets. Female single-flowered spikelets are positioned proximally on the rachis. To explain both this and the secondary spikelets in some Cypereae, the existence of an ontogenetic switch determining the development of a primordium into flower, or lateral axis is postulated. PMID:20197291

  8. Generalized linear IgA dermatosis with palmar involvement.

    PubMed

    Norris, Ivy N; Haeberle, M Tye; Callen, Jeffrey P; Malone, Janine C

    2015-09-17

    Linear IgA bullous dermatosis (LABD) is a sub-epidermal blistering disorder characterized by deposition of IgA along the basement membrane zone (BMZ), as detected by immunofluorescence microscopy. The diagnosis is made by clinicopathologic correlation with immunofluorescence confirmation. Differentiation from other bullous dermatoses is important because therapeutic measures differ, and prompt initiation of the appropriate therapy can have a major impact on outcomes. We present three cases with prominent palmar involvement to alert clinicians to this potential physical examination finding and to the need to consider LABD in the right context.

  9. Commensurate Priors for Incorporating Historical Information in Clinical Trials Using General and Generalized Linear Models.

    PubMed

    Hobbs, Brian P; Sargent, Daniel J; Carlin, Bradley P

    2012-08-28

    Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs than those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model.

  11. A Nonlinear Multigrid Solver for an Atmospheric General Circulation Model Based on Semi-Implicit Semi-Lagrangian Advection of Potential Vorticity

    NASA Technical Reports Server (NTRS)

    McCormick, S.; Ruge, John W.

    1998-01-01

    This work represents part of a project to develop an atmospheric general circulation model based on the semi-Lagrangian advection of potential vorticity (PV), with divergence as the companion prognostic variable.

  12. A study of the linear free energy model for DNA structures using the generalized Hamiltonian formalism

    SciTech Connect

    Yavari, M.

    2016-06-15

    We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.

  13. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
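The COM-Poisson pmf underlying such a GLM can be written down directly; the sketch below (helper name and the truncation bound `max_y` are ours, not from the article) evaluates the log-pmf with a truncated normalizing sum, which is the main computational ingredient of the MLE fit the article studies. In the GLM, the rate parameter is linked to covariates, e.g. lam = exp(x·beta).

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def com_poisson_logpmf(y, lam, nu, max_y=200):
    """log P(Y=y) for the COM-Poisson: lam^y / (y!)^nu / Z(lam, nu).

    nu < 1 gives overdispersion and nu > 1 underdispersion relative to the
    Poisson (nu = 1). Z(lam, nu) is approximated by a truncated sum.
    """
    ys = np.arange(max_y)
    logz = np.logaddexp.reduce(ys * np.log(lam) - nu * gammaln(ys + 1))
    return y * np.log(lam) - nu * gammaln(y + 1) - logz

# Sanity check: nu = 1 recovers the ordinary Poisson pmf
print(com_poisson_logpmf(3, 2.0, 1.0))
print(poisson.logpmf(3, 2.0))
```

The truncated sum is adequate here because the pmf terms decay factorially; for large lam or very small nu, `max_y` would need to grow.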

  14. Using Parallel Banded Linear System Solvers in Generalized Eigenvalue Problems

    DTIC Science & Technology

    1993-09-01

systems. The PPT algorithm is similar to an algorithm introduced by Lawrie and Sameh in [18]. The PDD algorithm is a variant of PPT which uses the fa-t… AND L. JOHNSSON, Solving banded systems on a parallel processor, Parallel Comput., 5 (1987), pp. 219-246. [10] J. J. DONGARRA AND A. SAMEH, On some… symmetric generalized matrix eigenvalue problem, SIAM J. Matrix Anal. Appl., 14 (1993). [18] D. H. LAWRIE AND A. H. SAMEH, The computation and

  15. Linear Transformations, Projection Operators and Generalized Inverses; A Geometric Approach

    DTIC Science & Technology

    1988-03-01

all direct complements of a and k respectively. … operators with closed range on Hilbert spaces. REFERENCES: 1. Langenhop, C. E. (1967). On generalized inverse of matrices. SIAM J. Appl. Math.

  16. New Linear Partitioning Models Based on Experimental Water: Supercritical CO2 Partitioning Data of Selected Organic Compounds.

    PubMed

    Burant, Aniela; Thompson, Christopher; Lowry, Gregory V; Karamalidis, Athanasios K

    2016-05-17

Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch-reactor system with dual spectroscopic detectors: a near-infrared spectrometer for measuring the organic analyte in the CO2 phase and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly-parameter linear free-energy relationship and to develop five new linear free-energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use vapor pressure and aqueous solubility of the organic compound at 25 °C and CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound class models provide better estimates of partitioning behavior for compounds in that class than does the model built for the entire data set.
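A linear free-energy relationship of the kind described, with vapor pressure, aqueous solubility, and CO2 density as predictors, reduces to ordinary least squares. The sketch below uses an assumed functional form and entirely synthetic coefficients and data, purely to illustrate the fitting step; it is not the article's fitted model.

```python
import numpy as np

# Assumed (hypothetical) LFER form:
#   log K = a + b*log10(Pvap) + c*log10(Sw) + d*rho_CO2
rng = np.random.default_rng(42)
n = 60
log_pvap = rng.uniform(-2, 2, n)        # log10 vapor pressure (synthetic)
log_sw = rng.uniform(-4, 0, n)          # log10 aqueous solubility (synthetic)
rho = rng.uniform(0.2, 0.9, n)          # CO2 density, g/cm^3 (synthetic)
true = np.array([0.5, 0.8, -0.6, 1.2])  # made-up coefficients for the demo

X = np.column_stack([np.ones(n), log_pvap, log_sw, rho])
log_k = X @ true + rng.normal(0, 0.05, n)   # "measured" partition coefficients

coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)   # least-squares LFER fit
```

The compound-class models in the abstract correspond to running this fit separately on subsets of the rows belonging to one chemical class.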

  17. Dynamic modelling and simulation of linear Fresnel solar field model based on molten salt heat transfer fluid

    NASA Astrophysics Data System (ADS)

    Hakkarainen, Elina; Tähtinen, Matti

    2016-05-01

Demonstrations of direct steam generation (DSG) in linear Fresnel collectors (LFC) have given promising results, with higher steam parameters than the current state-of-the-art parabolic trough collector (PTC) technology using oil as heat transfer fluid (HTF). However, DSG technology lacks a feasible solution for long-term thermal energy storage (TES), which is important for CSP technology in order to offer dispatchable power. Recently, molten salts have been proposed as both HTF and storage medium in line-focusing solar fields, offering storage capacities of several hours. This direct molten salt (DMS) storage concept has already gained operational experience in solar tower power plants and is in the demonstration phase for both LFC and PTC systems. Dynamic simulation programs are a valuable tool for the design and optimization of solar power plants. In this work, the APROS dynamic simulation program is used to model a DMS linear Fresnel solar field with a two-tank TES system, and example simulation results are presented to verify the functionality of the model and the capability of APROS for CSP modelling and simulation.

  18. New Linear Partitioning Models Based on Experimental Water: Supercritical CO 2 Partitioning Data of Selected Organic Compounds

    SciTech Connect

    Burant, Aniela; Thompson, Christopher; Lowry, Gregory V.; Karamalidis, Athanasios K.

    2016-05-17

    Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch reactor system with dual spectroscopic detectors: a near infrared spectrometer for measuring the organic analyte in the CO2 phase, and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly-parameter linear free energy relationship and to develop five new linear free energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use vapor pressure and aqueous solubility of the organic compound at 25 °C and CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound class models provide better estimates of partitioning behavior for compounds in that class than the model built for the entire dataset.

  19. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…


  1. Computer analysis of general linear networks using digraphs.

    NASA Technical Reports Server (NTRS)

    Mcclenahan, J. O.; Chan, S.-P.

    1972-01-01

    Investigation of the application of digraphs in analyzing general electronic networks, and development of a computer program based on a particular digraph method developed by Chen. The Chen digraph method is a topological method for solution of networks and serves as a shortcut when hand calculations are required. The advantage offered by this method of analysis is that the results are in symbolic form. It is limited, however, by the size of network that may be handled. Usually hand calculations become too tedious for networks larger than about five nodes, depending on how many elements the network contains. Direct determinant expansion for a five-node network is a very tedious process also.


  3. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.

    PubMed

    Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa

    2017-01-01

TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is increasingly popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to construct priors for the 2014 model.
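The informative-prior idea (earlier survey years informing the current-year model) can be sketched in miniature with a MAP estimate for logistic regression, where the prior is centered at coefficients fitted to "historical" data. This is a toy illustration of the principle on synthetic data, not the paper's actual model, data, or software.

```python
import numpy as np
from scipy.optimize import minimize

def logistic_map(X, y, prior_mean, prior_sd):
    """MAP estimate for logistic regression with an independent Gaussian prior."""
    def neg_log_post(beta):
        eta = X @ beta
        ll = y @ eta - np.logaddexp(0, eta).sum()            # Bernoulli log-likelihood
        lp = -0.5 * np.sum(((beta - prior_mean) / prior_sd) ** 2)
        return -(ll + lp)
    return minimize(neg_log_post, x0=np.asarray(prior_mean, float), method="BFGS").x

rng = np.random.default_rng(1)
beta_true = np.array([-1.0, 0.8])

# "Historical" data (stands in for the earlier survey rounds)
Xh = np.column_stack([np.ones(2000), rng.normal(size=2000)])
yh = rng.binomial(1, 1.0 / (1.0 + np.exp(-(Xh @ beta_true))))
beta_hist = logistic_map(Xh, yh, np.zeros(2), np.full(2, 10.0))  # near-flat prior

# Small "current" data set, informative prior centered at the historical fit
Xc = np.column_stack([np.ones(40), rng.normal(size=40)])
yc = rng.binomial(1, 1.0 / (1.0 + np.exp(-(Xc @ beta_true))))
beta_inf = logistic_map(Xc, yc, beta_hist, np.full(2, 0.5))
```

With only 40 current observations, the informative prior stabilizes the estimates by shrinking them toward the historical fit; a full Bayesian treatment would sample the posterior rather than report its mode.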

  4. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors

    PubMed Central

    Woldegerima, Woldegebriel Assefa

    2017-01-01

TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is increasingly popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to construct priors for the 2014 model. PMID:28257437

  5. Evaluating the environmental fate of pharmaceuticals using a level III model based on poly-parameter linear free energy relationships.

    PubMed

    Zukowska, Barbara; Breivik, Knut; Wania, Frank

    2006-04-15

We recently proposed how to expand the applicability of multimedia models towards polar organic chemicals by expressing environmental phase partitioning with the help of poly-parameter linear free energy relationships (PP-LFERs). Here we elaborate on this approach by applying it to three pharmaceutical substances. A PP-LFER-based version of a Level III fugacity model calculates overall persistence, concentrations and intermedia fluxes of polar and non-polar organic chemicals between air, water, soil and sediments at steady state. Illustrative modeling results for the pharmaceuticals within a defined coastal region are presented and discussed. The model results are highly sensitive to the degradation rate in water and the equilibrium partitioning between organic carbon and water, suggesting that an accurate description of this particular partitioning equilibrium is essential in order to obtain reliable predictions of environmental fate. The PP-LFER-based modeling approach furthermore illustrates that the greatest mobility in aqueous phases may be experienced by pharmaceuticals that combine a small molecular size with strong H-acceptor properties.

  6. A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models.

    PubMed

    Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio

    2013-08-01

    The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
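Practical-identifiability checks of the sort described often start from the Fisher information matrix of a candidate design: a well-chosen sampling schedule yields a well-conditioned information matrix, while a poor one does not. The sketch below does this for a hypothetical one-compartment PK model; the model, dose, parameter values, and sampling times are all illustrative, not taken from the article's case studies.

```python
import numpy as np

def conc(t, V, k, dose=100.0):
    """One-compartment IV-bolus model: C(t) = (dose / V) * exp(-k * t)."""
    return (dose / V) * np.exp(-k * t)

def fim(times, V, k, sigma=0.1, eps=1e-6):
    """Fisher information for (V, k) from finite-difference sensitivities,
    assuming i.i.d. Gaussian measurement error with standard deviation sigma."""
    theta = np.array([V, k])
    S = np.empty((len(times), 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps * theta[j]
        S[:, j] = (conc(times, *(theta + d)) - conc(times, *(theta - d))) / (2 * d[j])
    return (S.T @ S) / sigma**2

good = fim(np.array([0.25, 1.0, 4.0, 8.0]), V=10.0, k=0.3)   # spread-out sampling
bad = fim(np.array([1.0, 1.0, 1.0, 1.0]), V=10.0, k=0.3)     # one time, replicated
print(np.linalg.cond(good), np.linalg.cond(bad))
```

Replicating a single time point makes the sensitivity rows identical, so the information matrix is rank-deficient and the parameters are not practically identifiable from that design; model-based design of experiments searches over sampling schedules to optimize a criterion (e.g. the determinant) of this matrix.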

  7. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    SciTech Connect

    Ureba, A.; Salguero, F. J.; Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A.; Miras, H.; Linares, R.; Perucha, M.

    2014-08-15

Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in times efficient enough for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet doses in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast

  8. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    PubMed

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in times efficient enough for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet doses in order to combine them with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved

  9. Development of an atmospheric model based on a generalized vertical coordinate. Final report, September 12, 1991--August 31, 1997

    SciTech Connect

    Arakawa, Akio; Konor, C.S.

    1997-12-31

There are great conceptual advantages in the use of an isentropic vertical coordinate in atmospheric models. Designing such a model, however, requires overcoming computational problems due to the intersection of coordinate surfaces with the earth's surface. Under this project, the authors have completed the development of a model based on a generalized vertical coordinate, ζ = F(θ, p, p_s), in which an isentropic coordinate can be combined with a terrain-following σ-coordinate with a smooth transition between the two. One of the key issues in developing such a model is satisfying the consistency between the predictions of pressure and potential temperature. In the model, this consistency is satisfied by the use of an equation that determines the vertical mass flux. A procedure to properly choose ζ = F(θ, p, p_s) is also developed, which guarantees that ζ is a monotonic function of height even when unstable stratification occurs. Two versions of the model were constructed in parallel: a middle-latitude β-plane version and a global version. Both versions include moisture prediction, relaxed large-scale condensation, and relaxed moist-convective adjustment schemes. A well-mixed planetary boundary layer (PBL) is also added.

  10. New approach to assess bioequivalence parameters using generalized gamma mixed-effect model (model-based asymptotic bioequivalence test).

    PubMed

    Chen, Yuh-Ing; Huang, Chi-Shen

    2014-02-28

In the pharmacokinetic (PK) study under a 2×2 crossover design that involves both the test and reference drugs, we propose a mixed-effects model for the drug concentration-time profiles obtained from subjects who receive different drugs at different periods. In the proposed model, the drug concentrations repeatedly measured from the same subject at different time points are distributed according to a multivariate generalized gamma distribution, and the drug concentration-time profiles are described by a compartmental PK model with between-subject and within-subject variations. We then suggest a bioequivalence test based on the estimated bioavailability parameters in the proposed mixed-effects model. The results of a Monte Carlo study further show that the proposed model-based bioequivalence test is not only better at maintaining its level but also more powerful for detecting bioequivalence of the two drugs than the conventional bioequivalence test based on a non-compartmental analysis or one based on a mixed-effects model with a normal error variable. The application of the proposed model and test is illustrated using data sets from two PK studies.

  11. Quasi-periodic solutions for quasi-linear generalized KdV equations

    NASA Astrophysics Data System (ADS)

    Giuliani, Filippo

    2017-05-01

    We prove the existence of Cantor families of small amplitude, linearly stable, quasi-periodic solutions of quasi-linear autonomous Hamiltonian generalized KdV equations. We consider the most general quasi-linear quadratic nonlinearity. The proof is based on an iterative Nash-Moser algorithm. To initialize this scheme, we need to perform a bifurcation analysis taking into account the strongly perturbative effects of the nonlinearity near the origin. In particular, we implement a weak version of the Birkhoff normal form method. The inversion of the linearized operators at each step of the iteration is achieved by pseudo-differential techniques, linear Birkhoff normal form algorithms and a linear KAM reducibility scheme.

  12. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    USGS Publications Warehouse

    Moisen, Gretchen G.; Edwards, Thomas C.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.

  13. The Generalized Logit-Linear Item Response Model for Binary-Designed Items

    ERIC Educational Resources Information Center

    Revuelta, Javier

    2008-01-01

    This paper introduces the generalized logit-linear item response model (GLLIRM), which represents the item-solving process as a series of dichotomous operations or steps. The GLLIRM assumes that the probability function of the item response is a logistic function of a linear composite of basic parameters which describe the operations, and the…

  14. Generalized linear porokeratosis: a rare entity with excellent response to acitretin.

    PubMed

    Garg, Taru; Ramchander; Varghese, Bincy; Barara, Meenu; Nangia, Anita

    2011-05-15

    Linear porokeratosis is a rare disorder of keratinization that usually presents at birth. We report a 17-year-old male with generalized linear porokeratosis, a very rare variant of porokeratosis, with extensive involvement of the trunk and extremities along with nail and genital involvement. The patient was treated with oral acitretin with excellent clinical response.

  15. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-04-03

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study.
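The doubly robust property can be seen in the simplest setting, estimating a mean under missing-at-random dropout with an augmented inverse-probability-weighted (AIPW) estimator. This is a textbook illustration of the principle on synthetic data, not the authors' aggregate estimating-function method: the estimate stays consistent when either the outcome model or the dropout model is correct.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(7)
n = 20000
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)           # true mean of Y is 2.0
pi = expit(1.0 + 0.5 * x)                  # true probability of being observed
r = rng.binomial(1, pi)                    # r = 1 if Y is observed (MAR given x)

def aipw(m_hat, pi_hat):
    # AIPW estimator: mean( m(x) + r * (y - m(x)) / pi(x) )
    return np.mean(m_hat + r * (y - m_hat) / pi_hat)

est_good_pi = aipw(m_hat=np.zeros(n), pi_hat=pi)           # outcome model wrong
est_good_m = aipw(m_hat=2.0 + x, pi_hat=np.full(n, 0.5))   # dropout model wrong
```

Both estimates land near the true mean of 2.0 even though each uses one deliberately misspecified model; only when both models are wrong does the estimator break down.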

  16. Generalized linear mixed models can detect unimodal species-environment relationships.

    PubMed

    Jamil, Tahira; Ter Braak, Cajo J F

    2013-01-01

    Niche theory predicts that species occurrence and abundance show non-linear, unimodal relationships with respect to environmental gradients. Unimodal models, such as the Gaussian (logistic) model, are however more difficult to fit to data than linear ones, particularly in a multi-species context in ordination, with trait modulated response and when species phylogeny and species traits must be taken into account. Adding squared terms to a linear model is a possibility but gives uninterpretable parameters. This paper explains why and when generalized linear mixed models, even without squared terms, can effectively analyse unimodal data and also presents a graphical tool and statistical test to test for unimodal response while fitting just the generalized linear mixed model. The R-code for this is supplied in Supplemental Information 1.
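The point that a quadratic term in the linear predictor captures a unimodal (Gaussian) response can be seen directly. The sketch below fits logit p = b0 + b1·x + b2·x² by maximum likelihood on synthetic occurrence data and recovers the species optimum; the paper's GLMM achieves a similar effect through random effects, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 1000
x = rng.uniform(-1.0, 3.0, n)                      # environmental gradient
# Gaussian (unimodal) response: logit p = 2 - 2*(x - 1)^2, optimum at x = 1
eta_true = 2.0 - 2.0 * (x - 1.0) ** 2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta_true)))   # presence/absence

def neg_ll(b):
    # Bernoulli negative log-likelihood with a quadratic logit
    eta = b[0] + b[1] * x + b[2] * x ** 2
    return -(y @ eta - np.logaddexp(0, eta).sum())

b = minimize(neg_ll, x0=np.zeros(3), method="BFGS").x
optimum = -b[1] / (2.0 * b[2])     # vertex of the fitted quadratic logit
```

A fitted b2 < 0 indicates a unimodal response with optimum at -b1/(2·b2); the paper's complaint that squared terms give "uninterpretable parameters" refers to reading b1 and b2 in isolation rather than through this vertex form.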

  17. A Hierarchical Generalized Linear Model in Combination with Dispersion Modeling to Improve Sib-Pair Linkage Analysis.

    PubMed

    Lee, Woojoo; Kim, Jeonghwan; Lee, Youngjo; Park, Taesung; Suh, Young Ju

    2015-01-01

    We explored a hierarchical generalized linear model (HGLM) in combination with dispersion modeling to improve the sib-pair linkage analysis based on the revised Haseman-Elston regression model for a quantitative trait. A dispersion modeling technique was investigated for sib-pair linkage analysis using simulation studies and real data applications. We considered 4 heterogeneous dispersion settings according to a signal-to-noise ratio (SNR) in the various statistical models based on the Haseman-Elston regression model. Our numerical studies demonstrated that susceptibility loci could be detected well by modeling the dispersion parameter appropriately. In particular, the HGLM had better performance than the linear regression model and the ordinary linear mixed model when the SNR is low, i.e., when substantial noise was present in the data. The study shows that the HGLM in combination with dispersion modeling can be utilized to identify multiple markers showing linkage to familial complex traits accurately. Appropriate dispersion modeling might be more powerful to identify markers closest to the major genes which determine a quantitative trait. © 2015 S. Karger AG, Basel.

  18. On the Bohl and general exponents of the discrete time-varying linear system

    NASA Astrophysics Data System (ADS)

    Niezabitowski, Michał

    2014-12-01

Many properties of dynamical systems may be characterized by certain numbers called characteristic exponents, the most important being the Lyapunov, Bohl and general exponents. In this paper we investigate relations between certain subtypes of the general exponents of discrete time-varying linear systems, namely the senior lower and the junior upper ones. The main contribution of the paper is the construction of an example of a system whose senior lower general exponent is strictly smaller than its junior upper general exponent.
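For orientation, the Lyapunov and Bohl exponents mentioned above are commonly defined as follows for the discrete system x(n+1) = A(n)x(n) with transition matrix Φ(n,m) = A(n-1)⋯A(m); these are the standard textbook definitions, not taken from the paper itself:

```latex
% Upper Lyapunov exponent (growth measured from a fixed initial time):
\lambda = \limsup_{n \to \infty} \frac{1}{n} \ln \left\| \Phi(n,0) \right\|
% Upper Bohl exponent (uniform growth over all time windows):
\beta = \limsup_{n - m \to \infty} \frac{1}{n-m} \ln \left\| \Phi(n,m) \right\|
```

The Bohl exponent bounds growth uniformly over all starting times, which is why it, rather than the Lyapunov exponent, characterizes uniform exponential stability of time-varying systems.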

  19. Flexible Approaches to Computing Mediated Effects in Generalized Linear Models: Generalized Estimating Equations and Bootstrapping

    ERIC Educational Resources Information Center

    Schluchter, Mark D.

    2008-01-01

    In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…

  1. A comparison of model-based imputation methods for handling missing predictor values in a linear regression model: A simulation study

    NASA Astrophysics Data System (ADS)

    Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah

    2017-08-01

    In regression analysis, missing covariate data are a common problem. Many researchers use ad hoc methods to overcome it because they are easy to implement. However, these methods require assumptions about the data that rarely hold in practice. Model-based methods such as Maximum Likelihood (ML) using the expectation-maximization (EM) algorithm and Multiple Imputation (MI) are more promising when dealing with difficulties caused by missing data. Even so, inappropriate methods of missing-value imputation can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of missing-data concepts that can help researchers select appropriate imputation methods. A simulation study was performed to assess the effects of different missing-data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution and the dependent variable was generated as a combination of the explanatory variables. Missing values in the covariates were simulated under a missing at random (MAR) mechanism, and four levels of missingness (10%, 20%, 30% and 40%) were imposed. ML and MI techniques available within SAS software were investigated. A linear regression model was fitted and the performance measures, MSE and R-squared, were obtained. Results showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Neither method handled levels of missingness above 30% well.
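
    The MAR mechanism and the cost of an ad hoc fix can be illustrated without SAS. A minimal numpy sketch with hypothetical data and a simple covariate-dependent missingness rule (not the study's actual simulation design): complete-case analysis is consistent here because missingness depends only on the fully observed x1, while mean imputation distorts the coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical data: two correlated covariates and y = 1 + 2*x1 + 3*x2 + noise
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# MAR mechanism: the chance that x2 is missing depends only on the observed x1
p_miss = np.where(x1 > 0, 0.3, 0.1)
missing = rng.random(n) < p_miss
obs = ~missing

def ols(Xmat, yvec):
    A = np.column_stack([np.ones(len(yvec)), Xmat])
    return np.linalg.lstsq(A, yvec, rcond=None)[0]

# Complete-case analysis (consistent here: missingness depends only on x1)
b_cc = ols(np.column_stack([x1[obs], x2[obs]]), y[obs])

# Ad hoc mean imputation: fills the observed mean in for every missing x2;
# the lost x2 signal tends to be absorbed by the correlated x1
x2_imp = np.where(missing, x2[obs].mean(), x2)
b_mean = ols(np.column_stack([x1, x2_imp]), y)

print(np.round(b_cc, 2), np.round(b_mean, 2))
```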

  2. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are obtained simultaneously by solving a series of linear relaxation programming problems. Global convergence is proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  3. On the dynamics of canopy resistance: Generalized linear estimation and relationships with primary micrometeorological variables

    NASA Astrophysics Data System (ADS)

    Irmak, Suat; Mutiibwa, Denis

    2010-08-01

    The 1-D and single layer combination-based energy balance Penman-Monteith (PM) model has limitations in practical application due to the lack of canopy resistance (rc) data for different vegetation surfaces. rc could be estimated by inversion of the PM model if the actual evapotranspiration (ETa) rate is known, but this approach has its own set of issues. Instead, an empirical method of estimating rc is suggested in this study. We investigated the relationships between primary micrometeorological parameters and rc and developed seven models to estimate rc for a nonstressed maize canopy on an hourly time step using a generalized-linear modeling approach. The most complex rc model uses net radiation (Rn), air temperature (Ta), vapor pressure deficit (VPD), relative humidity (RH), wind speed at 3 m (u3), aerodynamic resistance (ra), leaf area index (LAI), and solar zenith angle (Θ). The simplest model requires Rn, Ta, and RH. We present the practical implementation of all models via experimental validation using scaled up rc data obtained from the dynamic diffusion porometer-measured leaf stomatal resistance through an extensive field campaign in 2006. For further validation, we estimated ETa by solving the PM model using the modeled rc from all seven models and compared the PM ETa estimates with the Bowen ratio energy balance system (BREBS)-measured ETa for an independent data set in 2005. The relationships between hourly rc versus Ta, RH, VPD, Rn, incoming shortwave radiation (Rs), u3, wind direction, LAI, Θ, and ra were presented and discussed. We demonstrated the negative impact of exclusion of LAI when modeling rc, whereas exclusion of ra and Θ did not impact the performance of the rc models. Compared to the calibration results, the validation root mean square difference between observed and modeled rc increased by 5 s m-1 for all rc models developed, ranging from 9.9 s m-1 for the most complex model to 22.8 s m-1 for the simplest model, as compared with the

  4. Optimal explicit strong-stability-preserving general linear methods: complete results.

    SciTech Connect

    Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.

    2009-03-03

    This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.
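
    A classic special case of the strong-stability-preserving family discussed here is the three-stage Shu-Osher SSP Runge-Kutta scheme, written as convex combinations of forward Euler steps. A minimal sketch applying it to linear advection with first-order upwind differences (an illustrative setup, not one of the optimized schemes from the paper):

```python
import numpy as np

# Domain: periodic unit interval, first-order upwind for u_t + u_x = 0
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx                        # CFL number 0.5, inside the SSP limit
u = np.exp(-100.0 * (x - 0.5) ** 2)  # smooth initial bump centered at 0.5

def rhs(v):
    # upwind approximation of -u_x for unit positive advection speed
    return -(v - np.roll(v, 1)) / dx

# Three-stage Shu-Osher SSP-RK3: convex combinations of forward Euler steps
for _ in range(round(0.25 / dt)):
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# The bump has advected right by ~0.25 with no new extrema created
print(round(float(x[np.argmax(u)]), 3))
```

    Because each stage is a convex combination of monotone forward Euler steps, the solution stays within the initial bounds, which is the "strong stability" being preserved.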

  5. A generalized concordance correlation coefficient based on the variance components generalized linear mixed models for overdispersed count data.

    PubMed

    Carrasco, Josep L

    2010-09-01

    The classical concordance correlation coefficient (CCC) to measure agreement among a set of observers assumes data to be distributed as normal and a linear relationship between the mean and the subject and observer effects. Here, the CCC is generalized to afford any distribution from the exponential family by means of the generalized linear mixed models (GLMMs) theory and applied to the case of overdispersed count data. An example of CD34+ cell count data is provided to show the applicability of the procedure. In the latter case, different CCCs are defined and applied to the data by changing the GLMM that fits the data. A simulation study is carried out to explore the behavior of the procedure with a small and moderate sample size. © 2009, The International Biometric Society.
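
    For reference, the classical CCC that the paper generalizes can be computed directly from Lin's formula. A minimal sketch on toy data (not the CD34+ cell count example):

```python
import numpy as np

def ccc(x, y):
    """Lin's classical concordance correlation coefficient."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()
    return 2.0 * sxy / (x.var() + y.var() + (mx - my) ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(ccc(x, x))        # perfect agreement -> 1.0
print(ccc(x, x + 1.0))  # same shape but shifted, so agreement drops below 1
```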

  6. Discussion on climate oscillations: CMIP5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles

    NASA Astrophysics Data System (ADS)

    Scafetta, Nicola

    2013-11-01

    Power spectra of global surface temperature (GST) records (available since 1850) reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. Equivalent oscillations are found in numerous multisecular paleoclimatic records. The Coupled Model Intercomparison Project 5 (CMIP5) general circulation models (GCMs), to be used in the IPCC Fifth Assessment Report (AR5, 2013), are analyzed and found unable to reconstruct this variability. In particular, from 2000 to 2013.5 a GST plateau is observed while the GCMs predicted a warming rate of about 2 °C/century. In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. For example, a quasi 60-year natural oscillation simultaneously explains the 1850-1880, 1910-1940 and 1970-2000 warming periods, the 1880-1910 and 1940-1970 cooling periods and the post 2000 GST plateau. This hypothesis implies that about 50% of the ~ 0.5 °C global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5 °C range (as claimed by the IPCC, 2007) to 1.0-2.3 °C with a likely median of ~ 1.5 °C instead of ~ 3.0 °C. Also modern paleoclimatic temperature reconstructions showing a larger preindustrial variability than the hockey-stick shaped temperature reconstructions developed in early 2000 imply a weaker anthropogenic effect and a stronger solar contribution to climatic changes. The observed natural oscillations could be driven by astronomical forcings. The ~ 9.1 year oscillation appears to be a combination of long soli-lunar tidal oscillations, while quasi 10-11, 20 and 60 year oscillations are typically found among major solar and heliospheric oscillations driven mostly by Jupiter and Saturn movements. Solar models based

  7. Linear and nonlinear associations between general intelligence and personality in Project TALENT.

    PubMed

    Major, Jason T; Johnson, Wendy; Deary, Ian J

    2014-04-01

    Research on the relations of personality traits to intelligence has primarily been concerned with linear associations. Yet, there are no a priori reasons why linear relations should be expected over nonlinear ones, which represent a much larger set of all possible associations. Using 2 techniques, quadratic and generalized additive models, we tested for linear and nonlinear associations of general intelligence (g) with 10 personality scales from Project TALENT (PT), a nationally representative sample of approximately 400,000 American high school students from 1960, divided into 4 grade samples (Flanagan et al., 1962). We departed from previous studies, including one with PT (Reeve, Meyer, & Bonaccio, 2006), by modeling latent quadratic effects directly, controlling the influence of the common factor in the personality scales, and assuming a direction of effect from g to personality. On the basis of the literature, we made 17 directional hypotheses for the linear and quadratic associations. Of these, 53% were supported in all 4 male grades and 58% in all 4 female grades. Quadratic associations explained substantive variance above and beyond linear effects (mean R² between 1.8% and 3.6%) for Sociability, Maturity, Vigor, and Leadership in males and Sociability, Maturity, and Tidiness in females; linear associations were predominant for other traits. We discuss how suited current theories of the personality-intelligence interface are to explain these associations, and how research on intellectually gifted samples may provide a unique way of understanding them. We conclude that nonlinear models can provide incremental detail regarding personality and intelligence associations.
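
    The basic contrast between linear and quadratic fits can be sketched on simulated data. This observed-variable version omits the paper's latent-variable modeling, and the inverted-U effect size below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
g = rng.normal(size=n)  # stand-in for general intelligence scores
# Hypothetical inverted-U association between g and a personality scale
trait = 0.3 * g - 0.2 * g**2 + rng.normal(size=n)

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_lin = r_squared(g, trait)
r2_quad = r_squared(np.column_stack([g, g**2]), trait)
# Incremental variance explained by the quadratic term
print(round(r2_quad - r2_lin, 3))
```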

  8. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.

  9. Prediction of formability for non-linear deformation history using generalized forming limit concept (GFLC)

    NASA Astrophysics Data System (ADS)

    Volk, Wolfram; Suh, Joungsik

    2013-12-01

    The prediction of formability is one of the most important tasks in sheet metal process simulation. The common criterion in industrial applications is the Forming Limit Curve (FLC). The big advantage of FLCs is the easy interpretation of simulation or measurement data in combination with an ISO standard for the experimental determination. However, the conventional FLCs are limited to almost linear and unbroken strain paths, i.e. deformation histories with non-linear strain increments often lead to big differences in comparison to the prediction of the FLC. In this paper a phenomenological approach, the so-called Generalized Forming Limit Concept (GFLC), is introduced to predict the localized necking on arbitrary deformation history with unlimited number of non-linear strain increments. The GFLC consists of the conventional FLC and an acceptable number of experiments with bi-linear deformation history. With the idea of the new defined "Principle of Equivalent Pre-Forming" every deformation state built up of two linear strain increments can be transformed to a pure linear strain path with the same used formability of the material. In advance this procedure can be repeated as often as necessary. Therefore, it allows a robust and cost effective analysis of beginning instability in Finite Element Analysis (FEA) for arbitrary deformation histories. In addition, the GFLC is fully downwards compatible to the established FLC for pure linear strain paths.

  10. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  11. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  12. Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero

    2011-01-01

    This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…

  14. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  15. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  17. Implementing general quantum measurements on linear optical and solid-state qubits

    NASA Astrophysics Data System (ADS)

    Ota, Yukihiro; Ashhab, Sahel; Nori, Franco

    2013-03-01

    We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.

  18. The linear stability of plane stagnation-point flow against general disturbances

    NASA Astrophysics Data System (ADS)

    Brattkus, K.; Davis, S. H.

    1991-02-01

    The linear-stability theory of plane stagnation-point flow against an infinite flat plate is re-examined. Disturbances are generalized from those of Goertler type to include other types of variations along the plate. It is shown that Hiemenz flow is linearly stable and that the Goertler-type modes are those that decay slowest. This work then rationalizes the use of such self-similar disturbances on Hiemenz flow and shows how questions of disturbance structure can be approached on other self-similar flows.

  19. The linear stability of plane stagnation-point flow against general disturbances

    NASA Technical Reports Server (NTRS)

    Brattkus, K.; Davis, S. H.

    1991-01-01

    The linear-stability theory of plane stagnation-point flow against an infinite flat plate is re-examined. Disturbances are generalized from those of Goertler type to include other types of variations along the plate. It is shown that Hiemenz flow is linearly stable and that the Goertler-type modes are those that decay slowest. This work then rationalizes the use of such self-similar disturbances on Hiemenz flow and shows how questions of disturbance structure can be approached on other self-similar flows.

  1. Estimate of influenza cases using generalized linear, additive and mixed models.

    PubMed

    Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M

    2015-01-01

    We investigated reported cases of influenza in Catalonia (Spain) in relation to several covariates. Covariates analyzed were: population, age, date of report of influenza, and health region during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can estimate data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.

  2. Estimate of influenza cases using generalized linear, additive and mixed models

    PubMed Central

    Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M

    2014-01-01

    We investigated reported cases of influenza in Catalonia (Spain) in relation to several covariates. Covariates analyzed were: population, age, date of report of influenza, and health region during 2010–2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can estimate data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated per 100 000 people. The mean rate was 13.75 (range 0–27.5) in the winter months (December, January, February) and 3.38 (range 0–12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models. PMID:25483550

  3. Hierarchical Shrinkage Priors and Model Fitting for High-dimensional Generalized Linear Models

    PubMed Central

    Yi, Nengjun; Ma, Shuangge

    2013-01-01

    Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:23192052
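
    The backbone of the fitting procedure, iteratively reweighted least squares with a shrinkage penalty, can be sketched for logistic regression. Here a fixed ridge penalty (a normal prior with known variance) stands in for the paper's hierarchical priors with EM-updated tuning parameters; the data and penalty value are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -1.0, 0.0, 0.0, 0.5])
y = rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))  # Bernoulli outcomes

lam = 1.0            # fixed ridge penalty; the paper tunes shrinkage by EM
beta = np.zeros(p)
for _ in range(25):  # iteratively reweighted least squares
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))
    W = mu * (1.0 - mu)            # working weights
    z = eta + (y - mu) / W         # working response
    # Each IRLS step is a penalized weighted least-squares (ridge) solve,
    # i.e. posterior mode under a normal prior on the coefficients
    A = X.T @ (W[:, None] * X) + lam * np.eye(p)
    beta = np.linalg.solve(A, X.T @ (W * z))

print(np.round(beta, 2))  # shrunken estimates of beta_true
```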

  4. Conditional Akaike information under generalized linear and proportional hazards mixed models

    PubMed Central

    Donohue, M. C.; Overholser, R.; Xu, R.; Vaida, F.

    2011-01-01

    We study model selection for clustered data, when the focus is on cluster specific inference. Such data are often modelled using random effects, and conditional Akaike information was proposed in Vaida & Blanchard (2005) and used to derive an information criterion under linear mixed models. Here we extend the approach to generalized linear and proportional hazards mixed models. Outside the normal linear mixed models, exact calculations are not available and we resort to asymptotic approximations. In the presence of nuisance parameters, a profile conditional Akaike information is proposed. Bootstrap methods are considered for their potential advantage in finite samples. Simulations show that the performance of the bootstrap and the analytic criteria are comparable, with bootstrap demonstrating some advantages for larger cluster sizes. The proposed criteria are applied to two cancer datasets to select models when the cluster-specific inference is of interest. PMID:22822261

  5. Semiparametric Analysis of Heterogeneous Data Using Varying-Scale Generalized Linear Models.

    PubMed

    Xie, Minge; Simpson, Douglas G; Carroll, Raymond J

    2008-01-01

    This article describes a class of heteroscedastic generalized linear regression models in which a subset of the regression parameters are rescaled nonparametrically, and develops efficient semiparametric inferences for the parametric components of the models. Such models provide a means to adapt for heterogeneity in the data due to varying exposures, varying levels of aggregation, and so on. The class of models considered includes generalized partially linear models and nonparametrically scaled link function models as special cases. We present an algorithm to estimate the scale function nonparametrically, and obtain asymptotic distribution theory for regression parameter estimates. In particular, we establish that the asymptotic covariance of the semiparametric estimator for the parametric part of the model achieves the semiparametric lower bound. We also describe a bootstrap-based goodness-of-scale test. We illustrate the methodology with simulations, published data, and data from collaborative research on ultrasound safety.

  6. A review of linear response theory for general differentiable dynamical systems

    NASA Astrophysics Data System (ADS)

    Ruelle, David

    2009-04-01

    The classical theory of linear response applies to statistical mechanics close to equilibrium. Away from equilibrium, one may describe the microscopic time evolution by a general differentiable dynamical system, identify nonequilibrium steady states (NESS) and study how these vary under perturbations of the dynamics. Remarkably, it turns out that for uniformly hyperbolic dynamical systems (those satisfying the 'chaotic hypothesis'), the linear response away from equilibrium is very similar to the linear response close to equilibrium: the Kramers-Kronig dispersion relations hold, and the fluctuation-dispersion theorem survives in a modified form (which takes into account the oscillations around the 'attractor' corresponding to the NESS). If the chaotic hypothesis does not hold, two new phenomena may arise. The first is a violation of linear response in the sense that the NESS does not depend differentiably on parameters (but this nondifferentiability may be hard to see experimentally). The second phenomenon is a violation of the dispersion relations: the susceptibility has singularities in the upper half complex plane. These 'acausal' singularities are actually due to 'energy nonconservation': for a small periodic perturbation of the system, the amplitude of the linear response is arbitrarily large. This means that the NESS of the dynamical system under study is not 'inert' but can give energy to the outside world. An 'active' NESS of this sort is very different from an equilibrium state, and it would be interesting to see what happens for active states to the Gallavotti-Cohen fluctuation theorem.

  7. Invariance of the generalized oscillator under a linear transformation of the related system of orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Borzov, V. V.; Damaskinsky, E. V.

    2017-02-01

    We consider the families of polynomials P = {P_n(x)}_{n=0}^∞ and Q = {Q_n(x)}_{n=0}^∞ orthogonal on the real line with respect to the respective probability measures μ and ν. We assume that {Q_n(x)}_{n=0}^∞ and {P_n(x)}_{n=0}^∞ are connected by linear relations. In the case k = 2, we describe all pairs (P, Q) for which the algebras A_P and A_Q of generalized oscillators generated by {Q_n(x)}_{n=0}^∞ and {P_n(x)}_{n=0}^∞ coincide. We construct generalized oscillators corresponding to pairs (P, Q) for arbitrary k ≥ 1.

  8. Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.

    PubMed

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
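
    The CCA machinery itself (though not the paper's directional contrast statistic) reduces to an SVD of whitened blocks. A minimal sketch with simulated regressors and signals, where the block names and effect sizes are hypothetical:

```python
import numpy as np

def first_canonical_corr(X, Y):
    """First canonical correlation between the column blocks X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)   # orthonormal basis for each centered block
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return min(float(s[0]), 1.0)  # clip tiny numerical overshoot above 1

rng = np.random.default_rng(5)
n = 300
X = rng.normal(size=(n, 3))  # e.g. a block of temporal regressors
Y = X @ rng.normal(size=(3, 4)) + 0.5 * rng.normal(size=(n, 4))  # noisy signals
print(round(first_canonical_corr(X, Y), 2))
```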

  10. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
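
    The eigenvector machinery above can be sketched with a small SVD example. The matrix, data, and rank-deficiency below are invented: one parameter direction is made deliberately unresolvable, and the resolution matrix shows which combinations the data constrain.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((8, 5))   # 8 equations, 5 unknowns
G[:, 4] = G[:, 3]                 # make one parameter direction unresolvable
m_true = np.array([1.0, -2.0, 0.5, 3.0, 3.0])
d = G @ m_true

U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))  # number of resolvable linear combinations
m_est = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

# Resolution matrix R = V_k V_k^T: rows show how each parameter is
# smeared over the others; R = I would mean perfect resolution.
R = Vt[:k].T @ Vt[:k]
print(k)
```

    Here k comes out one short of the number of parameters, and the truncated solution equals the true model filtered through R, exactly the "k linear combinations" the abstract describes.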

  11. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.
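
    The over/underdispersion distinction driving the model choice above can be illustrated with the variance-to-mean ratio (dispersion index): about 1 for Poisson counts, above 1 for overdispersed data like the Toronto intersections, below 1 for underdispersed data like the Korean crossings. The count vectors below are invented.

```python
def dispersion_index(counts):
    """Sample variance divided by sample mean of a list of counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

overdispersed = [0, 0, 1, 0, 9, 0, 2, 0, 11, 0]   # clumped, Toronto-like
underdispersed = [2, 3, 2, 3, 2, 3, 2, 3, 2, 3]   # very regular, Korea-like
print(dispersion_index(overdispersed), dispersion_index(underdispersed))
```

    A hyper-Poisson GLM lets this dispersion vary with covariates rather than forcing one regime on the whole data set.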

  12. Can conclusions drawn from phantom-based image noise assessments be generalized to in vivo studies for the nonlinear model-based iterative reconstruction method?

    PubMed Central

    Gomez-Cardona, Daniel; Li, Ke; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2016-01-01

    Purpose: Phantom-based objective image quality assessment methods are widely used in the medical physics community. For a filtered backprojection (FBP) reconstruction-based linear or quasilinear imaging system, the use of this methodology is well justified. Many key image quality metrics acquired with phantom studies can be directly applied to in vivo human subject studies. Recently, a variety of image quality metrics have been investigated for model-based iterative image reconstruction (MBIR) methods and several novel characteristics have been discovered in phantom studies. However, the following question remains unanswered: can certain results obtained from phantom studies be generalized to in vivo animal studies and human subject studies? The purpose of this paper is to address this question. Methods: One of the most striking results obtained from phantom studies is a novel power-law relationship between noise variance of MBIR (σ²) and tube current-rotation time product (mAs): σ² ∝ (mAs)^(−0.4) [K. Li et al., “Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance,” Med. Phys. 41, 041906 (15pp.) (2014)]. To examine whether the same power-law works for in vivo cases, experimental data from two types of in vivo studies were analyzed in this paper. All scans were performed with a 64-slice diagnostic CT scanner (Discovery CT750 HD, GE Healthcare) and reconstructed with both FBP and a MBIR method (Veo, GE Healthcare). An Institutional Animal Care and Use Committee-approved in vivo animal study was performed with an adult swine at six mAs levels (10–290). Additionally, human subject data (a total of 110 subjects) acquired from an IRB-approved clinical trial were analyzed. In this clinical trial, a reduced-mAs scan was performed immediately following the standard mAs scan; the specific mAs used for the two scans varied across human subjects and were determined based on patient size and
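
    A power-law exponent like the one above is typically estimated by least squares on log-log axes. The sketch below generates synthetic points exactly on a σ² ∝ (mAs)^(−0.4) curve (the prefactor and mAs levels are invented) and recovers the exponent as the log-log slope; real noise measurements would of course scatter around the line.

```python
import math

mas = [10, 30, 75, 145, 290]                 # illustrative mAs levels
sigma2 = [5.0 * m ** -0.4 for m in mas]      # synthetic, exactly on the curve

x = [math.log(m) for m in mas]
y = [math.log(s) for s in sigma2]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
# least-squares slope on log-log axes = power-law exponent
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
print(slope)
```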

  13. Unified Einstein-Virasoro Master Equation in the General Non-Linear Sigma Model

    SciTech Connect

    Boer, J. de; Halpern, M.B.

    1996-06-05

    The Virasoro master equation (VME) describes the general affine-Virasoro construction $T = L^{ab}J_aJ_b + iD^a\partial J_a$ in the operator algebra of the WZW model, where $L^{ab}$ is the inverse inertia tensor and $D^a$ is the improvement vector. In this paper, we generalize this construction to find the general (one-loop) Virasoro construction in the operator algebra of the general non-linear sigma model. The result is a unified Einstein-Virasoro master equation which couples the spacetime spin-two field $L^{ab}$ to the background fields of the sigma model. For a particular solution $L_G^{ab}$, the unified system reduces to the canonical stress tensors and conventional Einstein equations of the sigma model, and the system reduces to the general affine-Virasoro construction and the VME when the sigma model is taken to be the WZW action. More generally, the unified system describes a space of conformal field theories which is presumably much larger than the sum of the general affine-Virasoro construction and the sigma model with its canonical stress tensors. We also discuss a number of algebraic and geometrical properties of the system, including its relation to an unsolved problem in the theory of $G$-structures on manifolds with torsion.

  14. Fitting host-parasitoid models with CV² > 1 using hierarchical generalized linear models.

    PubMed Central

    Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K

    2000-01-01

    The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907

  15. Bayesian Variable Selection and Computation for Generalized Linear Models with Conjugate Priors.

    PubMed

    Chen, Ming-Hui; Huang, Lan; Ibrahim, Joseph G; Kim, Sungduk

    2008-07-01

    In this paper, we consider theoretical and computational connections between six popular methods for variable subset selection in generalized linear models (GLMs). Under the conjugate priors developed by Chen and Ibrahim (2003) for the generalized linear model, we obtain closed form analytic relationships between the Bayes factor (posterior model probability), the Conditional Predictive Ordinate (CPO), the L measure, the Deviance Information Criterion (DIC), the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC) in the case of the linear model. Moreover, we examine computational relationships in the model space for these Bayesian methods for an arbitrary GLM under conjugate priors as well as examine the performance of the conjugate priors of Chen and Ibrahim (2003) in Bayesian variable selection. Specifically, we show that once Markov chain Monte Carlo (MCMC) samples are obtained from the full model, the four Bayesian criteria can be simultaneously computed for all possible subset models in the model space. We illustrate our new methodology with a simulation study and a real dataset.
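
    The frequentist end of the comparison above, AIC and BIC for Gaussian linear models, reduces to simple formulas in the residual sum of squares. The sketch below compares two hypothetical subset models (RSS values, predictor counts, and sample size are all invented); both criteria here prefer the smaller model because the larger one barely improves the fit.

```python
import math

def aic(rss, n, k):
    """AIC for a Gaussian linear model with k parameters (up to a constant)."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    """BIC for the same model; the penalty grows with log(n)."""
    return n * math.log(rss / n) + k * math.log(n)

n = 100
rss_a, k_a = 52.0, 3 + 1   # model A: 3 predictors + intercept
rss_b, k_b = 50.0, 8 + 1   # model B: 8 predictors, only slightly better fit

best_aic = min(("A", aic(rss_a, n, k_a)), ("B", aic(rss_b, n, k_b)),
               key=lambda t: t[1])[0]
best_bic = min(("A", bic(rss_a, n, k_a)), ("B", bic(rss_b, n, k_b)),
               key=lambda t: t[1])[0]
print(best_aic, best_bic)
```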

  16. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
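
    The myth is easy to demonstrate numerically. In the invented two-group example below the raw response is strongly bimodal (and would fail any normality test), yet the residuals from the fitted one-way model are simply the Gaussian noise; group means and noise scale are made up for the demo.

```python
import random

random.seed(42)
group_a = [0.0 + random.gauss(0, 1) for _ in range(500)]
group_b = [10.0 + random.gauss(0, 1) for _ in range(500)]
y = group_a + group_b                      # bimodal raw response

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Residuals from the one-way ANOVA fit: subtract each group's own mean.
resid = [x - sum(group_a) / 500 for x in group_a] + \
        [x - sum(group_b) / 500 for x in group_b]

print(sd(y), sd(resid))   # raw spread is dominated by the group effect
```

    Testing normality on y would wrongly push the analyst toward nonparametric methods, even though the residuals satisfy the actual model assumption.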

  17. Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.

    PubMed

    Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L

    2012-03-01

    Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.

  18. The general linear model and fMRI: does love last forever?

    PubMed

    Poline, Jean-Baptiste; Brett, Matthew

    2012-08-15

    In this review, we first set out the general linear model (GLM) for the non-technical reader, as a tool able to do both linear regression and ANOVA within the same flexible framework. We present a short history of its development in the fMRI community, and describe some interesting examples of its early use. We offer a few warnings, as the GLM relies on assumptions that may not hold in all situations. We conclude with a few wishes for the future of fMRI analyses, with or without the GLM. The appendix develops some aspects of use of contrasts for testing for the more technical reader. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Use of generalized linear mixed models for network meta-analysis.

    PubMed

    Tu, Yu-Kang

    2014-10-01

    In the past decade, a new statistical method, network meta-analysis, has been developed to address limitations in traditional pairwise meta-analysis. Network meta-analysis incorporates all available evidence into a general statistical framework for comparisons of multiple treatments. Bayesian network meta-analysis, as proposed by Lu and Ades, also known as "mixed treatments comparisons," provides a flexible modeling framework to take into account complexity in the data structure. This article shows how to implement the Lu and Ades model in the frequentist generalized linear mixed model. Two examples are provided to demonstrate how centering the covariates for random effects estimation within each trial can yield correct estimation of random effects. Moreover, under the correct specification for random effects estimation, the dummy coding and contrast basic parameter coding schemes will yield the same results. It is straightforward to incorporate covariates, such as moderators and confounders, into the generalized linear mixed model to conduct meta-regression for multiple treatment comparisons. Finally, this approach may be extended easily to other types of outcome variables, such as continuous, counts, and multinomial. © The Author(s) 2014.
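
    A minimal, hand-computable building block of what the GLMM formalizes is the indirect comparison of treatments A and C through a common comparator B (the Bucher-style adjusted indirect comparison). The effect estimates and standard errors below, on a log odds ratio scale, are invented.

```python
d_ab, se_ab = -0.50, 0.15    # A vs B, pooled from some trials (invented)
d_bc, se_bc = -0.30, 0.20    # B vs C, pooled from other trials (invented)

# Indirect A vs C estimate: effects add along the path A -> B -> C,
# and variances add because the trial sets are independent.
d_ac = d_ab + d_bc
se_ac = (se_ab ** 2 + se_bc ** 2) ** 0.5
print(d_ac, se_ac)
```

    A full network meta-analysis combines such direct and indirect evidence for every treatment pair simultaneously, which is where the mixed-model machinery earns its keep.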

  20. Assessing correlation of clustered mixed outcomes from a multivariate generalized linear mixed model.

    PubMed

    Chen, Hsiang-Chun; Wehrly, Thomas E

    2015-02-20

    The classic concordance correlation coefficient measures the agreement between two variables. In recent studies, concordance correlation coefficients have been generalized to deal with responses from a distribution from the exponential family using the univariate generalized linear mixed model. Multivariate data arise when responses on the same unit are measured repeatedly by several methods. The relationship among these responses is often of interest. In clustered mixed data, the correlation could be present between repeated measurements either within the same observer or between different methods on the same subjects. Indices for measuring such association are needed. This study proposes a series of indices, namely, intra-correlation, inter-correlation, and total correlation coefficients to measure the correlation under various circumstances in a multivariate generalized linear model, especially for joint modeling of clustered count and continuous outcomes. The proposed indices are natural extensions of the concordance correlation coefficient. We demonstrate the methodology with simulation studies. A case example of osteoarthritis study is provided to illustrate the use of these proposed indices. Copyright © 2014 John Wiley & Sons, Ltd.
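
    The classic concordance correlation coefficient that the proposed indices extend has a closed form: CCC = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²). The sketch below implements it on invented paired measurements from two hypothetical methods.

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

method_1 = [10.1, 12.0, 9.8, 14.2, 11.5]   # invented readings, method 1
method_2 = [10.3, 11.8, 10.1, 14.0, 11.9]  # invented readings, method 2
print(ccc(method_1, method_2))
```

    Unlike the Pearson correlation, CCC penalizes both scale and location shifts between methods, which is why it measures agreement rather than mere association.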

  1. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

    Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal, several articles have explored GLM-based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have found little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM-based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760

  2. Robust root clustering for linear uncertain systems using generalized Lyapunov theory

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1993-01-01

    Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman & Jury (1981) using the generalized Lyapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.

  4. Capelli bitableaux and Z-forms of general linear Lie superalgebras.

    PubMed Central

    Brini, A; Teolis, A G

    1990-01-01

    The combinatorics of the enveloping algebra UQ(pl(L)) of the general linear Lie superalgebra of a finite-dimensional Z2-graded Q-vector space is studied. Three non-equivalent Z-forms of UQ(pl(L)) are introduced: one of these Z-forms is a version of the Kostant Z-form and the others are Lie algebra analogs of Rota and Stein's straightening formulae for the supersymmetric algebra Super[L P] and for its dual Super[L* P*]. The method is based on an extension of Capelli's technique of variabili ausiliarie (auxiliary variables) to algebras containing positively and negatively signed elements. PMID:11607048

  5. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model is validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  6. Linear and nonlinear quantification of respiratory sinus arrhythmia during propofol general anesthesia.

    PubMed

    Chen, Zhe; Purdon, Patrick L; Pierce, Eric T; Harrell, Grace; Walsh, John; Salazar, Andres F; Tavares, Casie L; Brown, Emery N; Barbieri, Riccardo

    2009-01-01

    Quantitative evaluation of respiratory sinus arrhythmia (RSA) may provide important information in clinical practice of anesthesia and postoperative care. In this paper, we apply a point process method to assess dynamic RSA during propofol general anesthesia. Specifically, an inverse Gaussian probability distribution is used to model the heartbeat interval, whereas the instantaneous mean is identified by a linear or bilinear bivariate regression on the previous R-R intervals and respiratory measures. The estimated second-order bilinear interaction allows us to evaluate the nonlinear component of the RSA. The instantaneous RSA gain and phase can be estimated with an adaptive point process filter. The algorithm's ability to track non-stationary dynamics is demonstrated using one clinical recording. Our proposed statistical indices provide a valuable quantitative assessment of instantaneous cardiorespiratory control and heart rate variability (HRV) during general anesthesia.

  7. Generalized linear sampling method for elastic-wave sensing of heterogeneous fractures

    NASA Astrophysics Data System (ADS)

    Pourahmadian, Fatemeh; Guzina, Bojan B.; Haddar, Houssem

    2017-05-01

    A theoretical foundation is developed for the active seismic reconstruction of fractures endowed with spatially varying interfacial conditions (e.g. partially closed fractures, hydraulic fractures). The proposed indicator functional carries a superior localization property with no significant sensitivity to the fracture’s contact condition, measurement errors, or illumination frequency. This is accomplished through the paradigm of the F♯-factorization technique and the recently developed generalized linear sampling method (GLSM) applied to elastodynamics. The direct scattering problem is formulated in the frequency domain where the fracture surface is illuminated by a set of incident plane waves, while monitoring the induced scattered field in the form of (elastic) far-field patterns. The analysis of the well-posedness of the forward problem leads to an admissibility condition on the fracture’s (linearized) contact parameters. This in turn contributes to the establishment of the applicability of the F♯-factorization method, and consequently aids the formulation of a convex GLSM cost functional whose minimizer can be computed without iterations. Such a minimizer is then used to construct a robust fracture indicator function, whose performance is illustrated through a set of numerical experiments. For completeness, the results of the GLSM reconstruction are compared to those obtained by the classical linear sampling method (LSM).

  8. Neutron source strength measurements for Varian, Siemens, Elekta, and General Electric linear accelerators.

    PubMed

    Followill, David S; Stovall, Marilyn S; Kry, Stephen F; Ibbott, Geoffrey S

    2003-01-01

    The shielding calculations for high energy (>10 MV) linear accelerators must include the photoneutron production within the head of the accelerator. Procedures have been described to calculate the treatment room door shielding based on the neutron source strength (Q value) for a specific accelerator and energy combination. Unfortunately, there is currently little data in the literature stating the neutron source strengths for the most widely used linear accelerators. In this study, the neutron fluence for 36 linear accelerators, including models from Varian, Siemens, Elekta/Philips, and General Electric, was measured using gold-foil activation. Several of the models and energy combinations had multiple measurements. The neutron fluence measured in the patient plane was independent of the surface area of the room, suggesting that neutron fluence is more dependent on the direct neutron fluence from the head of the accelerator than from room scatter. Neutron source strength, Q, was determined from the measured neutron fluences. As expected, Q increased with increasing photon energy. The Q values ranged from 0.02 for a 10 MV beam to 1.44 (×10¹²) neutrons per photon Gy for a 25 MV beam. The most comprehensive set of neutron source strength values, Q, for the current accelerators in clinical use are presented for use in calculating room shielding.

  9. Wave packet dynamics in one-dimensional linear and nonlinear generalized Fibonacci lattices.

    PubMed

    Zhang, Zhenjun; Tong, Peiqing; Gong, Jiangbin; Li, Baowen

    2011-05-01

    The spreading of an initially localized wave packet in one-dimensional linear and nonlinear generalized Fibonacci (GF) lattices is studied numerically. The GF lattices can be classified into two classes depending on whether or not the lattice possesses the Pisot-Vijayaraghavan property. For linear GF lattices of the first class, both the second moment and the participation number grow with time. For linear GF lattices of the second class, in the regime of a weak on-site potential, wave packet spreading is close to ballistic diffusion, whereas in the regime of a strong on-site potential, it displays stairlike growth in both the second moment and the participation number. Nonlinear GF lattices are then investigated in parallel. For the first class of nonlinear GF lattices, the second moment of the wave packet still grows with time, but the corresponding participation number does not grow simultaneously. For the second class of nonlinear GF lattices, an analogous phenomenon is observed for the weak on-site potential only. For a strong on-site potential that leads to an enhanced nonlinear self-trapping effect, neither the second moment nor the participation number grows with time. The results can be useful in guiding experiments on the expansion of noninteracting or interacting cold atoms in quasiperiodic optical lattices.
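
    The two spreading diagnostics used above have simple definitions: the second moment measures the width of the packet about its mean position, and the participation number P = 1/Σ|ψ_n|⁴ counts the sites the packet effectively occupies. The toy profiles below (a fully localized packet vs. one spread uniformly over ten sites) are invented for illustration.

```python
def second_moment(psi2):
    """Second moment of |psi_n|^2 about the mean position (psi2 normalized)."""
    n_mean = sum(n * p for n, p in enumerate(psi2))
    return sum((n - n_mean) ** 2 * p for n, p in enumerate(psi2))

def participation_number(psi2):
    """P = 1 / sum |psi_n|^4: effective number of occupied sites."""
    return 1.0 / sum(p ** 2 for p in psi2)

localized = [0.0] * 4 + [1.0] + [0.0] * 5   # all probability on one site
uniform = [0.1] * 10                        # spread evenly over 10 sites
print(participation_number(localized), participation_number(uniform))
```

    Tracking both quantities in time distinguishes genuine spreading from cases, noted in the abstract, where the second moment grows while the participation number does not.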

  10. Thermodynamic bounds and general properties of optimal efficiency and power in linear responses.

    PubMed

    Jiang, Jian-Hua

    2014-10-01

    We study the optimal exergy efficiency and power for thermodynamic systems with an Onsager-type "current-force" relationship describing the linear response to external influences. We derive, in analytic forms, the maximum efficiency and optimal efficiency for maximum power for a thermodynamic machine described by a N×N symmetric Onsager matrix with arbitrary integer N. The figure of merit is expressed in terms of the largest eigenvalue of the "coupling matrix" which is solely determined by the Onsager matrix. Some simple but general relationships between the power and efficiency at the conditions for (i) maximum efficiency and (ii) optimal efficiency for maximum power are obtained. We show how the second law of thermodynamics bounds the optimal efficiency and the Onsager matrix and relate those bounds together. The maximum power theorem (Jacobi's Law) is generalized to all thermodynamic machines with a symmetric Onsager matrix in the linear-response regime. We also discuss systems with an asymmetric Onsager matrix (such as systems under magnetic field) for a particular situation and we show that the reversible limit of efficiency can be reached at finite output power. Cooperative effects are found to improve the figure of merit significantly in systems with multiply cross-correlated responses. Application to example systems demonstrates that the theory is helpful in guiding the search for high performance materials and structures in energy research.
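
    The standard two-variable (N = 2) special case of these bounds can be written down directly: with coupling parameter q = L₁₂/√(L₁₁L₂₂), the maximum efficiency relative to the reversible limit is q²/(1 + √(1 − q²))². The sketch below uses invented Onsager matrix entries; the second law requires the symmetric matrix to be positive semidefinite, hence |q| ≤ 1.

```python
import math

def max_efficiency_ratio(L11, L12, L22):
    """Max efficiency / reversible limit for a 2x2 symmetric Onsager matrix."""
    q = L12 / math.sqrt(L11 * L22)
    assert abs(q) <= 1.0, "second law bounds the Onsager matrix (|q| <= 1)"
    return q ** 2 / (1 + math.sqrt(1 - q ** 2)) ** 2

# Illustrative entries: stronger cross-coupling pushes the ratio toward 1.
print(max_efficiency_ratio(1.0, 0.8, 1.0))
```

    The tight-coupling limit q → 1 recovers reversible efficiency, the case the abstract extends to arbitrary N via the largest eigenvalue of the coupling matrix.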

  12. The heritability of general cognitive ability increases linearly from childhood to young adulthood.

    PubMed

    Haworth, C M A; Wright, M J; Luciano, M; Martin, N G; de Geus, E J C; van Beijsterveldt, C E M; Bartels, M; Posthuma, D; Boomsma, D I; Davis, O S P; Kovas, Y; Corley, R P; Defries, J C; Hewitt, J K; Olson, R K; Rhea, S-A; Wadsworth, S J; Iacono, W G; McGue, M; Thompson, L A; Hart, S A; Petrill, S A; Lubinski, D; Plomin, R

    2010-11-01

    Although common sense suggests that environmental influences increasingly account for individual differences in behavior as experiences accumulate during the course of life, this hypothesis has not previously been tested, in part because of the large sample sizes needed for an adequately powered analysis. Here we show for general cognitive ability that, to the contrary, genetic influence increases with age. The heritability of general cognitive ability increases significantly and linearly from 41% in childhood (9 years) to 55% in adolescence (12 years) and to 66% in young adulthood (17 years) in a sample of 11 000 pairs of twins from four countries, a larger sample than all previous studies combined. In addition to its far-reaching implications for neuroscience and molecular genetics, this finding suggests new ways of thinking about the interface between nature and nurture during the school years. Why, despite life's 'slings and arrows of outrageous fortune', do genetically driven differences increasingly account for differences in general cognitive ability? We suggest that the answer lies with genotype-environment correlation: as children grow up, they increasingly select, modify and even create their own experiences in part based on their genetic propensities.

  13. On relating the generalized equivalent uniform dose formalism to the linear-quadratic model.

    PubMed

    Djajaputra, David; Wu, Qiuwen

    2006-12-01

    Two main approaches are commonly used in the literature for computing the equivalent uniform dose (EUD) in radiotherapy. The first approach is based on the cell-survival curve as defined in the linear-quadratic model. The second approach assumes that EUD can be computed as the generalized mean of the dose distribution with an appropriate fitting parameter. We have analyzed the connection between these two formalisms by deriving explicit formulas for the EUD which are applicable to normal distributions. From these formulas we have established an explicit connection between the two formalisms. We found that the EUD parameter depends strongly on the parameters that characterize the distribution, namely the mean dose and the standard deviation around the mean. By computing the corresponding parameters for clinical dose distributions, which in general do not follow the normal distribution, we have shown that our results are also applicable to actual dose distributions. Our analysis suggests that caution should be exercised when using the generalized EUD approach for reporting and analyzing dose distributions.

  14. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
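    The BLUP-ridge equivalence stated above can be checked numerically. The sketch below uses illustrative simulated data (not the Baependi study): for a linear mixed model with random SNP effects β ~ N(0, τ²I) and errors ~ N(0, σ²I), the BLUP of β coincides with the ridge estimator whose penalty is λ = σ²/τ².

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 8                      # samples, SNPs (illustrative sizes)
X = rng.standard_normal((n, p))   # stand-in SNP design matrix
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

sigma2, tau2 = 1.0, 0.5           # error and random-effect variances (assumed known)
lam = sigma2 / tau2               # implied ridge penalty

# Ridge estimator: (X'X + lam*I)^{-1} X'y
ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# BLUP of beta ~ N(0, tau2*I): tau2 * X' (tau2*XX' + sigma2*I)^{-1} y
blup = tau2 * X.T @ np.linalg.solve(tau2 * (X @ X.T) + sigma2 * np.eye(n), y)

assert np.allclose(ridge, blup)   # the two estimators coincide
```

The identity follows from the Woodbury matrix lemma; it is what lets the ridge penalty be read off from the mixed-model variance components instead of cross-validation.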

  15. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    PubMed

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on the solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.

  16. Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models.

    PubMed

    Elliott, Michael R

    2009-03-01

    In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create "data driven" weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical.
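    The trimming step itself is simple to illustrate. The sketch below (lognormal weights and a 95th-percentile cap are illustrative choices, not part of the paper's Bayesian model-averaging estimator) caps large weights and rescales so the total weight is preserved, reducing variance at the cost of some bias:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
w = rng.lognormal(mean=0.0, sigma=1.5, size=n)   # highly variable survey weights
y = rng.normal(loc=5.0, scale=2.0, size=n)       # survey outcome

def weighted_mean(y, w):
    return float(np.sum(w * y) / np.sum(w))

def trim_weights(w, cap):
    """Reduce weights above `cap` to `cap`, then rescale to preserve the total."""
    wt = np.minimum(w, cap)
    return wt * (w.sum() / wt.sum())

cap = np.quantile(w, 0.95)    # trimming level: a tuning choice, not data-driven
wt = trim_weights(w, cap)

print(weighted_mean(y, w), weighted_mean(y, wt))
assert np.isclose(wt.sum(), w.sum())   # total weight preserved
assert wt.std() < w.std()              # weight variability reduced
```

The paper's contribution is to replace the ad hoc `cap` choice with model averaging over candidate trimming points.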

  17. Two-stage method of estimation for general linear growth curve models.

    PubMed

    Stukel, T A; Demidenko, E

    1997-06-01

    We extend the linear random-effects growth curve model (REGCM) (Laird and Ware, 1982, Biometrics 38, 963-974) to study the effects of population covariates on one or more characteristics of the growth curve when the characteristics are expressed as linear combinations of the growth curve parameters. This definition includes the actual growth curve parameters (the usual model) or any subset of these parameters. Such an analysis would be cumbersome using standard growth curve methods because it would require reparameterization of the original growth curve. We implement a two-stage method of estimation based on the two-stage growth curve model used to describe the response. The resulting generalized least squares (GLS) estimator for the population parameters is consistent, asymptotically efficient, and multivariate normal when the number of individuals is large. It is also robust to model misspecification in terms of bias and efficiency of the parameter estimates compared to maximum likelihood with the usual REGCM. We apply the method to a study of factors affecting the growth rate of salmonellae in a cubic growth model, a characteristic that cannot be analyzed easily using standard techniques.
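    The second stage reduces to a weighted regression of first-stage estimates on population covariates. A minimal sketch (simulated values, not the salmonellae data; the per-individual variances are taken as known here, whereas the paper estimates them in stage one):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 40                                    # number of individuals
X = np.column_stack([np.ones(m), rng.standard_normal(m)])   # population covariates
beta_true = np.array([2.0, -1.0])

# Stage 1 (taken as given here): per-individual estimates of a growth-curve
# characteristic, with heterogeneous, known estimation variances
var_i = rng.uniform(0.5, 2.0, size=m)
theta_hat = X @ beta_true + rng.standard_normal(m) * np.sqrt(var_i)

# Stage 2: generalized least squares with V = diag(var_i)
Vinv = np.diag(1.0 / var_i)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ theta_hat)
cov_gls = np.linalg.inv(X.T @ Vinv @ X)   # large-sample covariance of beta_gls

print(beta_gls)
assert np.all(np.abs(beta_gls - beta_true) < 1.0)   # recovers the truth here
```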

  18. Towards downscaling precipitation for Senegal - An approach based on generalized linear models and weather types

    NASA Astrophysics Data System (ADS)

    Rust, H. W.; Vrac, M.; Lengaigne, M.; Sultan, B.

    2012-04-01

    Changes in precipitation patterns with potentially less precipitation and an increasing risk for droughts pose a threat to water resources and agricultural yields in Senegal. Precipitation in this region is dominated by the West-African Monsoon, active from May to October, a seasonal pattern with inter-annual to decadal variability in the 20th century which is likely to be affected by climate change. We built a generalized linear model for a full spatial description of rainfall in Senegal. The model uses season, location, and a discrete set of weather types as predictors and yields a spatially continuous description of precipitation occurrences and intensities. Weather types have been defined on the NCEP/NCAR reanalysis using zonal and meridional winds, as well as relative humidity. This model is suitable for downscaling precipitation, particularly precipitation occurrences relevant for drought risk mapping.

  19. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
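    The basic checksum idea behind such codes can be sketched as follows. The unit-weight checksum used here is the simplest member of the general class of linear codes; the paper's point is that other weight vectors in this class reduce numerical error. A column-checksum copy of A times a row-checksum copy of B yields a full-checksum product whose inconsistent row and column checksums locate a single fault:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Column-checksum A and row-checksum B (unit-weight linear checksum code)
Ac = np.vstack([A, A.sum(axis=0)])                  # extra row = column sums of A
Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # extra column = row sums of B

C = Ac @ Br    # full-checksum product: checksums of A @ B come for free

# Checksum property: last row/column equal the sums of the data part
assert np.allclose(C[-1, :-1], C[:-1, :-1].sum(axis=0))
assert np.allclose(C[:-1, -1], C[:-1, :-1].sum(axis=1))

# Inject a single fault; the violated checksums identify its row and column
C[1, 2] += 5.0
bad_col = np.flatnonzero(~np.isclose(C[-1, :-1], C[:-1, :-1].sum(axis=0)))
bad_row = np.flatnonzero(~np.isclose(C[:-1, -1], C[:-1, :-1].sum(axis=1)))
assert bad_row[0] == 1 and bad_col[0] == 2
```

With floating-point data the `isclose` tolerance is exactly where roundoff can masquerade as a fault, which is the limitation the paper's code selection addresses.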

  20. Analysis of linear two-dimensional general rate model for chromatographic columns of cylindrical geometry.

    PubMed

    Qamar, Shamsul; Uche, David U; Khan, Farman U; Seidel-Morgenstern, Andreas

    2017-05-05

    This work is concerned with the analytical solutions and moment analysis of a linear two-dimensional general rate model (2D-GRM) describing the transport of a solute through a chromatographic column of cylindrical geometry. Analytical solutions are derived through successive implementation of finite Hankel and Laplace transformations for two different sets of boundary conditions. The process is further analyzed by deriving analytical temporal moments from the Laplace domain solutions. Radial gradients are typically neglected in liquid chromatography studies which are particularly important in the case of non-perfect injections. Several test problems of single-solute transport are considered. The derived analytical results are validated against the numerical solutions of a high resolution finite volume scheme. The derived analytical results can play an important role in further development of liquid chromatography. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Generalization of the ordinary state-based peridynamic model for isotropic linear viscoelasticity

    NASA Astrophysics Data System (ADS)

    Delorme, Rolland; Tabiai, Ilyass; Laberge Lebel, Louis; Lévesque, Martin

    2017-02-01

    This paper presents a generalization of the original ordinary state-based peridynamic model for isotropic linear viscoelasticity. The viscoelastic material response is represented using the thermodynamically acceptable Prony series approach. It can feature as many Prony terms as required and accounts for viscoelastic spherical and deviatoric components. The model was derived from an equivalence between peridynamic viscoelastic parameters and those appearing in classical continuum mechanics, by equating the free energy densities expressed in both frameworks. The model was simplified to a uni-dimensional expression and implemented to simulate a creep-recovery test. This implementation was then validated by comparing peridynamic predictions to those of classical continuum mechanics. An exact correspondence between peridynamics and the classical continuum approach was shown when the peridynamic horizon becomes small, meaning that peridynamics tends toward classical continuum mechanics. This work provides researchers dealing with viscoelastic phenomena a clear and direct means of tackling their problems within the peridynamic framework.
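    The Prony series representation mentioned above writes the relaxation modulus as a constant plus a sum of decaying exponentials, E(t) = E_inf + Σᵢ Eᵢ·exp(-t/τᵢ). A minimal sketch with illustrative moduli and relaxation times (not values from the paper):

```python
import numpy as np

# Prony-series relaxation modulus E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
# (moduli and relaxation times below are illustrative placeholders)
E_inf = 1.0
E_i   = np.array([2.0, 0.5])     # Prony moduli
tau_i = np.array([0.1, 10.0])    # relaxation times

def relaxation_modulus(t):
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return E_inf + (E_i * np.exp(-t[:, None] / tau_i)).sum(axis=1)

t = np.logspace(-3, 3, 200)
E = relaxation_modulus(t)

assert np.isclose(relaxation_modulus(0.0)[0], E_inf + E_i.sum())  # glassy limit
assert np.all(np.diff(E) <= 0)          # stress relaxes monotonically
assert abs(E[-1] - E_inf) < 1e-6        # long-time (rubbery) limit
```

Adding Prony terms just extends `E_i` and `tau_i`, which is the "as many terms as required" flexibility the model inherits.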

  2. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    PubMed

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

    Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association with more highly populated districts than GPs. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters representing physicians' preferences in over- and undersupplied regions.

  3. Constraining the general linear model for sensible hemodynamic response function waveforms.

    PubMed

    Ciftçi, Koray; Sankur, Bülent; Kahya, Yasemin P; Akin, Ata

    2008-08-01

    We propose a method for constrained parameter estimation and inference from neuroimaging data using the general linear model (GLM). The constrained approach prevents unrealistic hemodynamic response function (HRF) estimates from appearing in the outcome of the GLM analysis. The permissible ranges of waveform parameters were determined from the study of a repertoire of plausible waveforms. These parameter intervals played the role of prior distributions in the subsequent Bayesian analysis of the GLM, and Gibbs sampling was used to derive posterior distributions. The method was applied to artificial null data and near-infrared spectroscopy (NIRS) data. The results show that constraining the GLM eliminates unrealistic HRF waveforms and decreases false activations, without affecting the inference for "realistic" activations, which satisfy the constraints.

  4. Compact tunable silicon photonic differential-equation solver for general linear time-invariant systems.

    PubMed

    Wu, Jiayang; Cao, Pan; Hu, Xiaofeng; Jiang, Xinhong; Pan, Ting; Yang, Yuxing; Qiu, Ciyuan; Tremblay, Christine; Su, Yikai

    2014-10-20

    We propose and experimentally demonstrate an all-optical temporal differential-equation solver that can be used to solve ordinary differential equations (ODEs) characterizing general linear time-invariant (LTI) systems. The photonic device implemented by an add-drop microring resonator (MRR) with two tunable interferometric couplers is monolithically integrated on a silicon-on-insulator (SOI) wafer with a compact footprint of ~60 μm × 120 μm. By thermally tuning the phase shifts along the bus arms of the two interferometric couplers, the proposed device is capable of solving first-order ODEs with two variable coefficients. The operation principle is theoretically analyzed, and system testing of solving ODE with tunable coefficients is carried out for 10-Gb/s optical Gaussian-like pulses. The experimental results verify the effectiveness of the fabricated device as a tunable photonic ODE solver.
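    The class of equations the device targets is the first-order LTI ODE dy/dt + a·y(t) = x(t), with the coefficient `a` set optically via the phase shifts. A numerical sketch of the same equation (generic forward-Euler integration, nothing device-specific), checked against the analytic step response y(t) = (1/a)(1 − e^(−at)):

```python
import numpy as np

a = 2.0                      # tunable coefficient of the ODE
dt = 1e-4
t = np.arange(0.0, 5.0, dt)
x = np.ones_like(t)          # unit step input, y(0) = 0

y = np.zeros_like(t)
for k in range(len(t) - 1):  # forward Euler: y' = x - a*y
    y[k + 1] = y[k] + dt * (x[k] - a * y[k])

y_exact = (1.0 / a) * (1.0 - np.exp(-a * t))
assert np.max(np.abs(y - y_exact)) < 1e-3   # matches the analytic solution
```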

  5. Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem

    NASA Technical Reports Server (NTRS)

    Lu, Huei-Iin; Robertson, Franklin R.

    1999-01-01

    A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can alternatively be solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.
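    The under-/over-determined distinction can be illustrated with a generic pseudo-inverse solve (a plain numpy sketch, not the spectral LBE itself): a UD system has infinitely many exact solutions and the pseudo-inverse picks the minimum-norm one, while an OD system generally admits no exact solution and the pseudo-inverse returns the least-squares fit with a residual.

```python
import numpy as np

rng = np.random.default_rng(4)

# Under-determined (UD): fewer equations than unknowns; exact, minimum-norm fit
A_ud = rng.standard_normal((3, 5))
b_ud = rng.standard_normal(3)
x_ud = np.linalg.pinv(A_ud) @ b_ud
assert np.allclose(A_ud @ x_ud, b_ud)         # solved exactly

# Over-determined (OD): more equations than unknowns; least-squares solution
A_od = rng.standard_normal((5, 3))
b_od = rng.standard_normal(5)
x_od = np.linalg.pinv(A_od) @ b_od
x_ls, *_ = np.linalg.lstsq(A_od, b_od, rcond=None)
assert np.allclose(x_od, x_ls)                # pinv agrees with lstsq
print("OD residual:", np.linalg.norm(A_od @ x_od - b_od))   # generally nonzero
```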

  7. A Bayesian approach for inducing sparsity in generalized linear models with multi-category response

    PubMed Central

    2015-01-01

    Background The dimension and complexity of high-throughput gene expression data create many challenges for downstream analysis. Several approaches exist to reduce the number of variables with respect to small sample sizes. In this study, we utilized the Generalized Double Pareto (GDP) prior to induce sparsity in a Bayesian Generalized Linear Model (GLM) setting. The approach was evaluated using a publicly available microarray dataset containing 99 samples corresponding to four different prostate cancer subtypes. Results A hierarchical Sparse Bayesian GLM using GDP prior (SBGG) was developed to take into account the progressive nature of the response variable. We obtained an average overall classification accuracy between 82.5% and 94%, which was higher than Support Vector Machine, Random Forest or a Sparse Bayesian GLM using double exponential priors. Additionally, SBGG outperforms the other three methods in correctly identifying pre-metastatic stages of cancer progression, which can prove extremely valuable for therapeutic and diagnostic purposes. Importantly, using Geneset Cohesion Analysis Tool, we found that the top 100 genes produced by SBGG had an average functional cohesion p-value of 2.0E-4 compared to 0.007 to 0.131 produced by the other methods. Conclusions Using GDP in a Bayesian GLM model applied to cancer progression data results in better subclass prediction. In particular, the method identifies pre-metastatic stages of prostate cancer with substantially better accuracy and produces more functionally relevant gene sets. PMID:26423345

  8. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J².
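    The grouping idea shared by these summary statistics is easy to sketch. Below is a minimal Hosmer-Lemeshow computation on simulated logistic data (a generic textbook form of HL, not the paper's TG or J² variants): sort on fitted probabilities, form g groups, and compare observed with expected event counts.

```python
import numpy as np

def hosmer_lemeshow(y, p, g=10):
    """HL summary GOF statistic: group observations on fitted probabilities and
    compare observed vs expected event counts within each group."""
    order = np.argsort(p)
    chi2 = 0.0
    for idx in np.array_split(order, g):
        obs, exp, n_k = y[idx].sum(), p[idx].sum(), len(idx)
        pbar = exp / n_k
        chi2 += (obs - exp) ** 2 / (n_k * pbar * (1.0 - pbar))
    return chi2       # referred to a chi-square with g - 2 degrees of freedom

rng = np.random.default_rng(5)
n = 2000
x = rng.standard_normal(n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))   # logistic (canonical link) model
y = (rng.uniform(size=n) < p_true).astype(float)

chi2 = hosmer_lemeshow(y, p_true)
print(chi2)
assert chi2 < 30.0   # loose sanity bound: the model is correctly specified here
```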

  9. General linear response formula for non-integrable systems obeying the Vlasov equation

    NASA Astrophysics Data System (ADS)

    Patelli, Aurelio; Ruffo, Stefano

    2014-11-01

    Long-range interacting N-particle systems get trapped into long-living out-of-equilibrium stationary states called quasi-stationary states (QSS). We study here the response to a small external perturbation when such systems are settled into a QSS. In the N → ∞ limit the system is described by the Vlasov equation and QSS are mapped into stable stationary solutions of such equation. We consider this problem in the context of a model that has recently attracted considerable attention, the Hamiltonian mean field (HMF) model. For such a model, stationary inhomogeneous and homogeneous states determine an integrable dynamics in the mean-field effective potential and an action-angle transformation allows one to derive an exact linear response formula. However, such a result would be of limited interest if restricted to the integrable case. In this paper, we show how to derive a general linear response formula which does not use integrability as a requirement. The presence of conservation laws (mass, energy, momentum, etc.) and of further Casimir invariants can be imposed a posteriori. We perform an analysis of the infinite time asymptotics of the response formula for a specific observable, the magnetization in the HMF model, as a result of the application of an external magnetic field, for two stationary stable distributions: the Boltzmann-Gibbs equilibrium distribution and the Fermi-Dirac one. When compared with numerical simulations the predictions of the theory are very good away from the transition energy from inhomogeneous to homogeneous states. Contribution to the Topical Issue "Theory and Applications of the Vlasov Equation", edited by Francesco Pegoraro, Francesco Califano, Giovanni Manfredi and Philip J. Morrison.

  10. A simulation study of confounding in generalized linear models for air pollution epidemiology.

    PubMed Central

    Chen, C; Chock, D P; Winkler, S L

    1999-01-01

    Confounding between the model covariates and causal variables (which may or may not be included as model covariates) is a well-known problem in regression models used in air pollution epidemiology. This problem is usually acknowledged but hardly ever investigated, especially in the context of generalized linear models. Using synthetic data sets, the present study shows how model overfit, underfit, and misfit in the presence of correlated causal variables in a Poisson regression model affect the estimated coefficients of the covariates and their confidence levels. The study also shows how this effect changes with the ranges of the covariates and the sample size. There is qualitative agreement between these study results and the corresponding expressions in the large-sample limit for the ordinary linear models. Confounding of covariates in an overfitted model (with covariates encompassing more than just the causal variables) does not bias the estimated coefficients but reduces their significance. The effect of model underfit (with some causal variables excluded as covariates) or misfit (with covariates encompassing only noncausal variables), on the other hand, leads to not only erroneous estimated coefficients, but a misguided confidence, represented by large t-values, that the estimated coefficients are significant. The results of this study indicate that models which use only one or two air quality variables, such as particulate matter ≤10 μm and sulfur dioxide, are probably unreliable, and that models containing several correlated and toxic or potentially toxic air quality variables should also be investigated in order to minimize the situation of model underfit or misfit. PMID:10064552
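    The overfit case described above can be reproduced with a small simulation (synthetic data and a plain IRLS fitter, not the study's own code): including a noncausal covariate correlated with the causal one leaves the Poisson coefficients essentially unbiased.

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit a Poisson log-linear model by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

rng = np.random.default_rng(6)
n = 5000
x1 = rng.standard_normal(n)                                   # causal variable
x2 = 0.7 * x1 + np.sqrt(1 - 0.7**2) * rng.standard_normal(n)  # correlated, noncausal
y = rng.poisson(np.exp(0.2 + 0.3 * x1))                       # only x1 is causal

# Overfitted model: includes the noncausal x2 alongside the causal x1
beta = poisson_irls(np.column_stack([np.ones(n), x1, x2]), y)
print(beta)
# Coefficients remain near (0.2, 0.3, 0.0); the cost of overfitting is wider
# confidence intervals for x1, not bias
assert np.all(np.abs(beta - np.array([0.2, 0.3, 0.0])) < 0.1)
```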

  11. A Generalized Linear Model for Estimating Spectrotemporal Receptive Fields from Responses to Natural Sounds

    PubMed Central

    Calabrese, Ana; Schumacher, Joseph W.; Schneider, David M.; Paninski, Liam; Woolley, Sarah M. N.

    2011-01-01

    In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: 1) a stimulus filter (STRF); and 2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons. PMID:21264310
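    The generative structure of such a spiking GLM, a linear drive from a stimulus filter plus a post-spike history filter, passed through an exponential nonlinearity to give a conditional intensity, can be sketched in simulation. All filter values below are toy placeholders (a 1-D stimulus stands in for a spectrogram), not fitted STRFs from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

T, dt = 2000, 0.001            # 2 s of 1 ms bins
stim = rng.standard_normal(T)  # 1-D stand-in for the sound spectrogram

k = np.array([0.8, 0.4, 0.2])      # stimulus filter (toy stand-in for an STRF)
h = np.array([-5.0, -2.0, -0.5])   # post-spike filter: transient self-suppression
b = np.log(20.0)                   # baseline log-rate (20 Hz)

spikes = np.zeros(T)
for t in range(T):
    drive = b
    for lag in range(len(k)):
        if t - lag - 1 >= 0:
            drive += k[lag] * stim[t - lag - 1]      # stimulus dependence
    for lag in range(len(h)):
        if t - lag - 1 >= 0:
            drive += h[lag] * spikes[t - lag - 1]    # spiking-history dependence
    p = min(1.0, np.exp(drive) * dt)                 # Bernoulli approx. of Poisson
    spikes[t] = float(rng.uniform() < p)

print("spike count:", int(spikes.sum()))
assert 0 < spikes.sum() < 200
```

Fitting reverses this direction: the filters `k`, `h`, and `b` are estimated from recorded spike trains by maximizing the (penalized) likelihood of this model.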

  12. Assessing erectile neurogenic dysfunction from heart rate variability through a Generalized Linear Mixed Model framework.

    PubMed

    Fernández, Elmer Andrés; Souza Neto, E P; Abry, P; Macchiavelli, R; Balzarini, M; Cuzin, B; Baude, C; Frutoso, J; Gharib, C

    2010-07-01

    The low (LF) vs. high (HF) frequency energy ratio, computed from the spectral decomposition of heart beat intervals, has become a major tool in cardiac autonomic system control and sympatho-vagal balance studies. The (statistical) distributions of response variables designed from ratios of two quantities, such as the LF/HF ratio, are likely to be non-normal, hence preventing, e.g., a relevant use of the t-test. Even using a non-parametric formulation, the solution may not be appropriate, as the test statistics do not account for correlation and heteroskedasticity, such as those that can be observed when several measures are taken from the same patient. The analysis of such data requires statistical models which do not assume a priori independence. In this spirit, the present contribution proposes the use of the Generalized Linear Mixed Model (GLMM) framework to assess differences between groups of measures performed over classes of patients. Statistical linear mixed models allow the inclusion of at least one random effect, besides the error term, which induces correlation between observations from the same subject. Moreover, by using GLMMs, practitioners can assume any probability distribution within the exponential family for the data, and naturally model heteroskedasticity. Here, the sympatho-vagal balance, expressed as the LF/HF ratio of patients suffering neurogenic erectile dysfunction under three different body positions, was analyzed in a case-control protocol by means of a GLMM under gamma and Gaussian distributed response assumptions. The gamma GLMM was compared with the normal linear mixed model (LMM) approach conducted using raw and log-transformed data. Both the gamma GLMM on raw data and the LMM on log-transformed data allow better inference for factor effects, including correlations between observations from the same patient under different body positions, compared to the LMM on raw data. The gamma GLMM provides a more natural distribution assumption for this type of data.

  13. Model based manipulator control

    NASA Technical Reports Server (NTRS)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1989-01-01

    The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.

  14. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.

  15. MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems

    SciTech Connect

    Young, D.M.; Chen, J.Y.

    1994-12-31

    The authors are concerned with the solution of the linear system (1): Au = b, where A is a real square nonsingular matrix which is large, sparse, and nonsymmetric. They consider the use of Krylov subspace methods. They first choose an initial approximation u^(0) to the solution ū = A^(-1)b of (1). They also choose an auxiliary nonsingular matrix Z. For n = 1, 2, ... they determine u^(n) such that u^(n) - u^(0) ∈ K_n(r^(0), A), where K_n(r^(0), A) is the (Krylov) subspace spanned by the Krylov vectors r^(0), Ar^(0), ..., A^(n-1)r^(0) and where r^(0) = b - Au^(0). If ZA is SPD they also require that (u^(n) - ū, ZA(u^(n) - ū)) be minimized. If, on the other hand, ZA is not SPD, then they require that the Galerkin condition (Zr^(n), v) = 0 be satisfied for all v ∈ K_n(r^(0), A), where r^(n) = b - Au^(n). In this paper the authors consider a generalization of GMRES. This generalized method, which they refer to as MGMRES, is very similar to GMRES except that they let Z = A^T Y, where Y is a nonsingular symmetric matrix that is not necessarily SPD.

  16. Efficient analysis of Q-level nested hierarchical general linear models given ignorable missing data.

    PubMed

    Shin, Yongyun; Raudenbush, Stephen W

    2013-09-28

    This article extends single-level missing data methods to efficient estimation of a Q-level nested hierarchical general linear model given ignorable missing data with a general missing pattern at any of the Q levels. The key idea is to reexpress a desired hierarchical model as the joint distribution of all variables including the outcome that are subject to missingness, conditional on all of the covariates that are completely observed and to estimate the joint model under normal theory. The unconstrained joint model, however, identifies extraneous parameters that are not of interest in subsequent analysis of the hierarchical model and that rapidly multiply as the number of levels, the number of variables subject to missingness, and the number of random coefficients grow. Therefore, the joint model may be extremely high dimensional and difficult to estimate well unless constraints are imposed to avoid the proliferation of extraneous covariance components at each level. Furthermore, the over-identified hierarchical model may produce considerably biased inferences. The challenge is to represent the constraints within the framework of the Q-level model in a way that is uniform without regard to Q; in a way that facilitates efficient computation for any number of Q levels; and also in a way that produces unbiased and efficient analysis of the hierarchical model. Our approach yields Q-step recursive estimation and imputation procedures whose qth-step computation involves only level-q data given higher-level computation components. We illustrate the approach with a study of the growth in body mass index analyzing a national sample of elementary school children.

  17. Predicting estuarine use patterns of juvenile fish with Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    Vasconcelos, R. P.; Le Pape, O.; Costa, M. J.; Cabral, H. N.

    2013-03-01

    Statistical models are key for estimating fish distributions based on environmental variables, and validation is generally advocated as indispensable but seldom applied. Generalized Linear Models were applied to distributions of juvenile Solea solea, Solea senegalensis, Platichthys flesus and Dicentrarchus labrax in response to environmental variables throughout Portuguese estuaries. Species-specific Delta models with two sub-models were used: Binomial (presence/absence); Gamma (density when present). Models were fitted and tested on separate data sets to estimate the accuracy and robustness of predictions. Temperature, salinity and mud content in sediment were included in most models for presence/absence; salinity and depth in most models for density (when present). In Binomial models (presence/absence), goodness-of-fit, accuracy and robustness varied concurrently among species, and fair to high accuracy and robustness were attained for all species, in models with poor to high goodness-of-fit. But in Gamma models (density when present), goodness-of-fit was not indicative of accuracy and robustness. Only for Platichthys flesus were Gamma and also coupled Delta models (density) accurate and robust, despite some moderate bias and inconsistency in predicted density. The accuracy and robustness of final density estimations were defined by the accuracy and robustness of the estimations of presence/absence and density (when present) provided by the sub-models. The mismatches between goodness-of-fit, accuracy and robustness of positive density models, as well as the difference in performance of presence/absence and density models demonstrated the importance of validation procedures in the evaluation of the value of habitat suitability models as predictive tools.

  18. The overlooked potential of Generalized Linear Models in astronomy-II: Gamma regression and photometric redshifts

    NASA Astrophysics Data System (ADS)

    Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.

    2015-04-01

    Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.
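The gamma family with a log link used above has a particularly simple fitting loop: for that link the IRLS working weights are identically one, so each iteration is an ordinary least-squares solve on a working response. The sketch below is a minimal numpy version on simulated data; the single "magnitude" covariate and the coefficients are invented for illustration and are not the paper's catalogue or pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical setup: one photometric covariate ("mag"), gamma-distributed
# redshift-like response with mean exp(b0 + b1 * mag); values are illustrative
n, b0, b1, shape = 5000, -3.0, 0.25, 50.0
mag = rng.uniform(16.0, 24.0, n)
mu = np.exp(b0 + b1 * mag)
z = rng.gamma(shape, mu / shape)  # E[z] = mu, Var[z] = mu**2 / shape

X = np.column_stack([np.ones(n), mag])
beta = np.linalg.lstsq(X, np.log(z), rcond=None)[0]  # log-response start values
for _ in range(20):
    # IRLS step; for the gamma family with log link the working weights are all 1
    eta = X @ beta
    m = np.exp(eta)
    work = eta + (z - m) / m  # working response
    beta = np.linalg.lstsq(X, work, rcond=None)[0]

print(beta)  # close to the assumed (b0, b1)
```

In practice one would use a GLM library (e.g. statsmodels or R's glm) rather than hand-rolled IRLS, but the loop shows why gamma regression with a log link is fast enough for the "seconds on a desktop" claim.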

  19. Generalized Jeans' Escape of Pick-Up Ions in Quasi-Linear Relaxation

    NASA Technical Reports Server (NTRS)

    Moore, T. E.; Khazanov, G. V.

    2011-01-01

    Jeans escape is a well-validated formulation of upper atmospheric escape that we have generalized to estimate plasma escape from ionospheres. It involves the computation of the parts of particle velocity space that are unbound by the gravitational potential at the exobase, followed by a calculation of the flux carried by such unbound particles as they escape from the potential well. To generalize this approach for ions, we superposed an electrostatic ambipolar potential and a centrifugal potential, for motions across and along a divergent magnetic field. We then considered how the presence of superthermal electrons, produced by precipitating auroral primary electrons, controls the ambipolar potential. We also showed that the centrifugal potential plays a small role in controlling the mass escape flux from the terrestrial ionosphere. We then applied the transverse ion velocity distribution produced when ions, picked up by supersonic (i.e., auroral) ionospheric convection, relax via quasi-linear diffusion, as estimated for cometary comas [1]. The results provide a theoretical basis for observed ion escape response to electromagnetic and kinetic energy sources. They also suggest that super-sonic but sub-Alfvenic flow, with ion pick-up, is a unique and important regime of ion-neutral coupling, in which plasma wave-particle interactions are driven by ion-neutral collisions at densities for which the collision frequency falls near or below the gyro-frequency. As another possible illustration of this process, the heliopause ribbon discovered by the IBEX mission involves interactions between the solar wind ions and the interstellar neutral gas, in a regime that may be analogous [2].
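The classical neutral-gas starting point of this generalization can be sketched numerically: compute the Jeans escape flux per unit density from the standard formula, then check it by Monte Carlo integration of the upward flux carried by the unbound part of a Maxwellian velocity distribution. The temperature and escape speed below are illustrative round numbers, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
kB, mH = 1.380649e-23, 1.6735575e-27  # Boltzmann constant (J/K), H-atom mass (kg)
T, v_esc = 1000.0, 10.8e3             # illustrative exobase temperature and escape speed
c = np.sqrt(2 * kB * T / mH)          # most probable speed of the Maxwellian
lam = (v_esc / c) ** 2                # Jeans parameter (escape energy / thermal energy)

# classical Jeans escape flux per unit exobase density (units: m/s)
phi = c / (2 * np.sqrt(np.pi)) * (1 + lam) * np.exp(-lam)

# Monte Carlo check: mean upward velocity carried by unbound particles
v = rng.normal(0.0, c / np.sqrt(2), size=(1_000_000, 3))
unbound = (np.linalg.norm(v, axis=1) > v_esc) & (v[:, 2] > 0)
phi_mc = np.mean(v[:, 2] * unbound)
print(phi, phi_mc)
```

The paper's generalization replaces the purely gravitational potential in the unbound-region computation with gravity plus ambipolar and centrifugal terms, but the flux-over-unbound-velocity-space structure is the same.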

  20. Efficient Analysis of Q-Level Nested Hierarchical General Linear Models Given Ignorable Missing Data

    PubMed Central

    Shin, Yongyun; Raudenbush, Stephen W.

    2014-01-01

    This paper extends single-level missing data methods to efficient estimation of a Q-level nested hierarchical general linear model given ignorable missing data with a general missing pattern at any of the Q levels. The key idea is to reexpress a desired hierarchical model as the joint distribution of all variables including the outcome that are subject to missingness, conditional on all of the covariates that are completely observed; and to estimate the joint model under normal theory. The unconstrained joint model, however, identifies extraneous parameters that are not of interest in subsequent analysis of the hierarchical model, and that rapidly multiply as the number of levels, the number of variables subject to missingness, and the number of random coefficients grow. Therefore, the joint model may be extremely high dimensional and difficult to estimate well unless constraints are imposed to avoid the proliferation of extraneous covariance components at each level. Furthermore, the over-identified hierarchical model may produce considerably biased inferences. The challenge is to represent the constraints within the framework of the Q-level model in a way that is uniform without regard to Q; in a way that facilitates efficient computation for any number of Q levels; and also in a way that produces unbiased and efficient analysis of the hierarchical model. Our approach yields Q-step recursive estimation and imputation procedures whose qth step computation involves only level-q data given higher-level computation components. We illustrate the approach with a study of the growth in body mass index analyzing a national sample of elementary school children. PMID:24077621

  1. Three-photon circular dichroism: towards a generalization of chiroptical non-linear light absorption.

    PubMed

    Friese, Daniel H; Ruud, Kenneth

    2016-02-07

    We present the theory of three-photon circular dichroism (3PCD), a novel non-linear chiroptical property not yet described in the literature. We derive the observable absorption cross section, including the orientational average of the necessary seventh-rank tensors, and provide origin-independent expressions for 3PCD using either a velocity-gauge treatment of the electric dipole operator or a length-gauge formulation using London atomic orbitals. We present the first numerical results for hydrogen peroxide, 3-methylcyclopentanone (MCP) and 4-helicene, including also a study of the origin dependence and basis set convergence of 3PCD. We find that for the 3PCD-brightest low-lying Rydberg state of hydrogen peroxide, the dichroism is extremely basis set dependent, with basis set convergence not being reached before a sextuple-zeta basis is used, whereas for the MCP and 4-helicene molecules the basis set dependence is more moderate and at the triple-zeta level the 3PCD contributions are more or less converged, irrespective of whether the considered states are Rydberg states or not. The 3PCD-brightest states in MCP exhibit a fairly large charge-transfer character from the carbonyl group to the ring system. In general, the quadrupole contributions to 3PCD are found to be very small.

  2. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
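The core trick can be seen numerically for a canonical Poisson GLM: the computationally expensive term in the exact log-likelihood is a sum of exp(x_i . theta) over all observations, and when the covariates are (say) standard normal that sum can be replaced by a closed-form expectation. The dimensions and theta below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Poisson GLM with canonical exp link: the exact log-likelihood contains the term
# (1/n) * sum_i exp(x_i . theta); for covariates x ~ N(0, I) this sum can be
# replaced by its closed-form expectation exp(theta . theta / 2)
n, theta = 100_000, np.array([0.2, -0.1, 0.3])
X = rng.standard_normal((n, len(theta)))

exact = np.exp(X @ theta).mean()        # data version of the term, O(n) per evaluation
expected = np.exp(0.5 * theta @ theta)  # expected-log-likelihood replacement, O(1)
print(exact, expected)
```

The O(1) replacement is what makes each likelihood evaluation, and hence fitting, marginal-likelihood computation, and MCMC, so much cheaper when the covariate distribution is known or controlled by the experimenter.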

  3. Assessment of cross-frequency coupling with confidence using generalized linear models

    PubMed Central

    Kramer, M. A.; Eden, U. T.

    2013-01-01

    Background Brain voltage activity displays distinct neuronal rhythms spanning a wide frequency range. How rhythms of different frequency interact – and the function of these interactions – remains an active area of research. Many methods have been proposed to assess the interactions between different frequency rhythms, in particular measures that characterize the relationship between the phase of a low frequency rhythm and the amplitude envelope of a high frequency rhythm. However, an optimal analysis method to assess this cross-frequency coupling (CFC) does not yet exist. New Method Here we describe a new procedure to assess CFC that utilizes the generalized linear modeling (GLM) framework. Results We illustrate the utility of this procedure in three synthetic examples. The proposed GLM-CFC procedure allows a rapid and principled assessment of CFC with confidence bounds, scales with the intensity of the CFC, and accurately detects biphasic coupling. Comparison with Existing Methods Compared to existing methods, the proposed GLM-CFC procedure is easily interpretable, possesses confidence intervals that are easy and efficient to compute, and accurately detects biphasic coupling. Conclusions The GLM-CFC statistic provides a method for accurate and statistically rigorous assessment of CFC. PMID:24012829
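The published GLM-CFC procedure uses a flexible basis and delivers formal confidence bounds; the sketch below strips the idea to its simplest form, regressing high-frequency amplitude on the sine and cosine of the low-frequency phase, with simulated data whose coupling strength is invented for illustration. As a shortcut it fits the log-link model by least squares on log-amplitude rather than by full GLM iteration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
phase = rng.uniform(-np.pi, np.pi, n)  # low-frequency phase at each sample

# hypothetical coupling: high-frequency amplitude peaks at phase 0
amp = rng.gamma(20, np.exp(0.5 + 0.4 * np.cos(phase)) / 20)

# design matrix: intercept plus cos/sin of the low-frequency phase
X = np.column_stack([np.ones(n), np.cos(phase), np.sin(phase)])
beta = np.linalg.lstsq(X, np.log(amp), rcond=None)[0]

# modulation depth of amplitude by phase; near 0 means no coupling
strength = np.hypot(beta[1], beta[2])
print(beta, strength)
```

A permutation or bootstrap distribution of `strength` under shuffled phases would give the kind of confidence statement the GLM-CFC framework provides analytically.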

  4. Projecting nuisance flooding in a warming climate using generalized linear models and Gaussian processes

    NASA Astrophysics Data System (ADS)

    Vandenberg-Rodes, Alexander; Moftakhari, Hamed R.; AghaKouchak, Amir; Shahbaba, Babak; Sanders, Brett F.; Matthew, Richard A.

    2016-11-01

    Nuisance flooding corresponds to minor and frequent flood events that have significant socioeconomic and public health impacts on coastal communities. Yearly averaged local mean sea level can be used as a proxy to statistically predict the impacts of sea level rise (SLR) on the frequency of nuisance floods (NFs). In this study, we combine generalized linear models (GLMs) and Gaussian process (GP) models to (i) estimate the frequency of NF associated with the change in mean sea level, and (ii) quantify the associated uncertainties via a novel and statistically robust approach. We calibrate our models to the water level data from 18 tide gauges along the coasts of the United States and, after validation, estimate the frequency of NF associated with the SLR projections for the year 2030 (under RCPs 2.6 and 8.5), along with their 90% bands, at each gauge. The historical NF-SLR data are very noisy and show large changes in variability (heteroscedasticity) with SLR. Prior models in the literature do not properly account for the observed heteroscedasticity, and thus their projected uncertainties are highly suspect. Among the models used in this study, the negative binomial GLM with a GP best characterizes the uncertainties associated with NF estimates; on validation data ≈93% of the points fall within the 90% credible limit, showing our approach to be a robust model for uncertainty quantification.

  5. The overlooked potential of Generalized Linear Models in astronomy, I: Binomial regression

    NASA Astrophysics Data System (ADS)

    de Souza, R. S.; Cameron, E.; Killedar, M.; Hilbe, J.; Vilalta, R.; Maio, U.; Biffi, V.; Ciardi, B.; Riggs, J. D.

    2015-09-01

    Revealing hidden patterns in astronomical data is often the path to fundamental scientific breakthroughs; meanwhile the complexity of scientific enquiry increases as more subtle relationships are sought. Contemporary data analysis problems often elude the capabilities of classical statistical techniques, suggesting the use of cutting edge statistical methods. In this light, astronomers have overlooked a whole family of statistical techniques for exploratory data analysis and robust regression, the so-called Generalized Linear Models (GLMs). In this paper-the first in a series aimed at illustrating the power of these methods in astronomical applications-we elucidate the potential of a particular class of GLMs for handling binary/binomial data, the so-called logit and probit regression techniques, from both a maximum likelihood and a Bayesian perspective. As a case in point, we present the use of these GLMs to explore the conditions of star formation activity and metal enrichment in primordial minihaloes from cosmological hydro-simulations including detailed chemistry, gas physics, and stellar feedback. We predict that for a dark mini-halo with metallicity ≈ 1.3 × 10-4Z⨀, an increase of 1.2 × 10-2 in the gas molecular fraction, increases the probability of star formation occurrence by a factor of 75%. Finally, we highlight the use of receiver operating characteristic curves as a diagnostic for binary classifiers, and ultimately we use these to demonstrate the competitive predictive performance of GLMs against the popular technique of artificial neural networks.
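The logit regression highlighted above reduces, from the maximum-likelihood side, to a short Newton-Raphson loop. The sketch below fits a one-covariate logit model to simulated data; the covariate (a "gas molecular fraction") and the coefficients are invented for illustration and are not taken from the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000

# hypothetical minihalo covariate: gas molecular fraction; the binary outcome
# is whether star formation occurs (coefficients are illustrative)
fmol = rng.uniform(0.0, 0.1, n)
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 60.0 * fmol)))
y = rng.binomial(1, p_true)

X = np.column_stack([np.ones(n), fmol])
beta = np.zeros(2)
for _ in range(25):  # Newton-Raphson on the logit log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1 - p)                      # Fisher-information weights
    grad = X.T @ (y - p)
    H = X.T @ (W[:, None] * X)
    beta += np.linalg.solve(H, grad)

print(beta)  # roughly recovers the assumed (-2.0, 60.0)
```

Swapping the logistic function for the normal CDF in the same loop gives probit regression, the other binomial link the paper discusses.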

  6. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    PubMed

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology builds on the relationship between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was used with a Poisson probability distribution. Particular attention was given to cases with air temperatures below and above 25°C. The best performance of modelled results against measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model covering all air temperatures and with the model restricted to temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when measurements from air quality monitoring stations or other acquisition means are not available.
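A Poisson GLM with a log link, as used above, can be fitted with a few lines of iteratively reweighted least squares. The sketch below does this on simulated data; the covariates (NO2 and temperature), their ranges, and the coefficients are invented for illustration, not estimates from the Barreiro data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000

# synthetic covariates with assumed effects (illustrative values only)
no2 = rng.uniform(10.0, 60.0, n)   # ug/m3
temp = rng.uniform(5.0, 35.0, n)   # deg C
mu = np.exp(2.0 + 0.02 * no2 - 0.01 * temp)
y = rng.poisson(mu)                # PM10 treated as a Poisson count

X = np.column_stack([np.ones(n), no2, temp])
beta = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]  # crude start values
for _ in range(20):                # IRLS for the Poisson family with log link
    eta = X @ beta
    m = np.exp(eta)
    W = m                          # Poisson working weights
    zwork = eta + (y - m) / m      # working response
    XtW = X.T * W
    beta = np.linalg.solve(XtW @ X, XtW @ zwork)

print(beta)
```

Because the link is logarithmic, each coefficient is interpretable multiplicatively: a one-unit rise in NO2 here scales the expected PM10 level by exp(beta[1]).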

  7. Establishment of a new initial dose plan for vancomycin using the generalized linear mixed model.

    PubMed

    Kourogi, Yasuyuki; Ogata, Kenji; Takamura, Norito; Tokunaga, Jin; Setoguchi, Nao; Kai, Mitsuhiro; Tanaka, Emi; Chiyotanda, Susumu

    2017-04-08

    When administering vancomycin hydrochloride (VCM), the initial dose is adjusted to ensure that the steady-state trough value (Css-trough) remains within the effective concentration range. However, the Css-trough calculated using the population mean method (PMM), the population mean method predicted value (PMMPV), often deviates from the effective concentration range. In this study, we used the generalized linear mixed model (GLMM) for initial dose planning to create a model that accurately predicts the Css-trough, and subsequently assessed its prediction accuracy. The study included 46 subjects whose trough values were measured after receiving VCM. We calculated the Css-trough (Bayesian estimate predicted value [BEPV]) from the Bayesian estimates of the trough values. Using the patients' medical data, we created models that predict the BEPV and selected the model with the minimum information criterion (the GLMM best model). We then calculated the Css-trough (GLMMPV) from the GLMM best model and compared the correlation of the BEPV with the GLMMPV and with the PMMPV. The GLMM best model was {[0.977 + (males: 0.029 or females: -0.081)] × PMMPV + 0.101 × BUN/adjusted SCr - 12.899 × SCr adjusted amount}. The coefficients of determination for BEPV/GLMMPV and BEPV/PMMPV were 0.623 and 0.513, respectively. We demonstrated that the GLMM best model was more accurate in predicting the Css-trough than the PMM.
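The reported best model can be written directly as a small function. The coefficients come from the abstract; the variable names, the interpretation of the "SCr adjusted amount" term, and the units are our reading of the abstract, so treat this as an illustrative sketch and not clinical software.

```python
def predict_css_trough(pmmpv, bun, adj_scr, scr_adjustment, male):
    """GLMM best model reported in the abstract (illustrative transcription).

    pmmpv          -- trough predicted by the population mean method
    bun, adj_scr   -- blood urea nitrogen and adjusted serum creatinine;
                      the model uses their ratio BUN / adjusted SCr
    scr_adjustment -- the 'SCr adjusted amount' term from the model
    male           -- True applies the male coefficient, False the female one
    """
    sex_term = 0.029 if male else -0.081
    return (0.977 + sex_term) * pmmpv + 0.101 * bun / adj_scr \
        - 12.899 * scr_adjustment

# illustrative call with made-up inputs
print(predict_css_trough(10.0, 20.0, 1.0, 0.0, male=True))
```

The structure, essentially a sex-specific rescaling of the PMM prediction plus renal-function corrections, is what lifts the coefficient of determination from 0.513 to 0.623 in the study.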

  8. Statistical Methods for Quality Control of Steel Coils Manufacturing Process using Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    García-Díaz, J. Carlos

    2009-11-01

    Fault detection and diagnosis is an important problem in process engineering: process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis matters in continuous hot-dip galvanizing, and the increasingly stringent quality requirements in the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures, and the bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: a training set of 25 conforming coils and a second set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical, most often binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.

  9. A generalized linear model for peak calling in ChIP-Seq data.

    PubMed

    Xu, Jialin; Zhang, Yu

    2012-06-01

    Chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) has become routine for detecting genome-wide protein-DNA interactions. The success of ChIP-Seq data analysis depends heavily on the quality of peak calling (i.e., detecting peaks of tag counts at a genomic location and evaluating whether the peak corresponds to a real protein-DNA interaction event). The challenges in peak calling include (1) how to combine the forward- and reverse-strand tag data to improve the power of peak calling and (2) how to account for the variation of tag data observed across different genomic locations. We introduce a new peak calling method based on the generalized linear model (GLMNB) that uses the negative binomial distribution to model the tag count data and to account for the variation of background tags that may randomly bind to the DNA sequence at varying levels due to local genomic structures and sequence contents. We allow local shifting of peaks observed on the forward and reverse strands, such that at each potential binding site a binding profile representing the pattern of a real peak signal is fitted to best explain the observed tag data by maximum likelihood. Our method can also detect multiple peaks within a local region if there are multiple binding sites in the region.
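Negative binomial regression of the kind underlying GLMNB can be sketched with IRLS once the dispersion is treated as known. The covariate below (local GC content as a stand-in for background tag rate) and all coefficients are invented for illustration; the actual method additionally models strand shifting and binding profiles.

```python
import numpy as np

rng = np.random.default_rng(6)
n, alpha = 5000, 0.5  # alpha: overdispersion, so Var(y) = mu + alpha * mu**2

# hypothetical covariate: local GC content as a proxy for background tag rate
gc = rng.uniform(0.3, 0.7, n)
mu = np.exp(1.0 + 3.0 * gc)
lam = rng.gamma(1.0 / alpha, alpha * mu)  # gamma-Poisson mixture = negative binomial
y = rng.poisson(lam)                      # overdispersed tag counts

X = np.column_stack([np.ones(n), gc])
beta = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]  # crude start values
for _ in range(30):  # IRLS with negative binomial variance, alpha treated as known
    eta = X @ beta
    m = np.exp(eta)
    W = m / (1.0 + alpha * m)             # NB working weights
    zwork = eta + (y - m) / m             # working response
    XtW = X.T * W
    beta = np.linalg.solve(XtW @ X, XtW @ zwork)

print(beta)  # roughly recovers the assumed (1.0, 3.0)
```

The extra alpha * mu**2 variance term is why a Poisson model would understate background variability here and call too many spurious peaks.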

  10. Population Decoding of Motor Cortical Activity using a Generalized Linear Model with Hidden States

    PubMed Central

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas G.; Paninski, Liam

    2010-01-01

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (lowering the Mean Square Error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. PMID:20359500

  11. Generalized linear discriminant analysis: a unified framework and efficient model selection.

    PubMed

    Ji, Shuiwang; Ye, Jieping

    2008-10-01

    High-dimensional data are common in many domains, and dimensionality reduction is the key to cope with the curse-of-dimensionality. Linear discriminant analysis (LDA) is a well-known method for supervised dimensionality reduction. When dealing with high-dimensional and low sample size data, classical LDA suffers from the singularity problem. Over the years, many algorithms have been developed to overcome this problem, and they have been applied successfully in various applications. However, there is a lack of a systematic study of the commonalities and differences of these algorithms, as well as their intrinsic relationships. In this paper, a unified framework for generalized LDA is proposed, which elucidates the properties of various algorithms and their relationships. Based on the proposed framework, we show that the matrix computations involved in LDA-based algorithms can be simplified so that the cross-validation procedure for model selection can be performed efficiently. We conduct extensive experiments using a collection of high-dimensional data sets, including text documents, face images, gene expression data, and gene expression pattern images, to evaluate the proposed theories and algorithms.
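The singularity problem described above, and one family of fixes the unified framework covers, can be shown in a few lines: with more features than samples the within-class scatter matrix cannot be inverted, but a ridge-regularized solve still yields a usable discriminant direction. The data below are synthetic and the regularization constant is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(7)

# high-dimensional, low-sample-size data: classical LDA fails because the
# within-class scatter matrix is singular (d > n)
n_per, d = 20, 100
X0 = rng.standard_normal((n_per, d))           # class 0
X1 = rng.standard_normal((n_per, d)) + 0.5     # class 1, shifted mean (synthetic)

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)  # within-class scatter

# regularized LDA: the ridge term makes the solve well-posed; this is one of
# the many remedies the generalized-LDA framework puts under a single roof
w = np.linalg.solve(Sw + 0.1 * np.eye(d), m1 - m0)

# classify by thresholding the projection midway between projected class means
thresh = 0.5 * (m0 @ w + m1 @ w)
acc = 0.5 * ((X0 @ w < thresh).mean() + (X1 @ w > thresh).mean())
print(f"training accuracy: {acc:.2f}")
```

The cross-validated choice of the regularization constant is exactly the model-selection step the paper shows how to compute efficiently via simplified matrix computations.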

  12. Profile local linear estimation of generalized semiparametric regression model for longitudinal data

    PubMed Central

    Sun, Liuquan; Zhou, Jie

    2013-01-01

    This paper studies the generalized semiparametric regression model for longitudinal data, where the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation, and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as on possibly time-dependent covariates, without specifically modelling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performance of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example. PMID:23471814

  13. Fast inference in generalized linear models via expected log-likelihoods.

    PubMed

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.

  14. Population decoding of motor cortical activity using a generalized linear model with hidden states.

    PubMed

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam

    2010-06-15

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications.
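    The generative side of such a model can be sketched as follows: the log-rate of the spike count is linear in the hand state, a truncated spike history, and a latent state, here taken as a one-dimensional AR(1) process. The paper fits this by Expectation-Maximization; the sketch below only simulates from the observation model, with all weights hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500                                        # number of time bins

# Hypothetical 1-D hand kinematics: position, velocity, acceleration
pos = np.cumsum(rng.normal(0.0, 0.1, T))
vel = np.gradient(pos)
acc = np.gradient(vel)

# One-dimensional hidden state evolving as an AR(1) process
z = np.zeros(T)
for t in range(1, T):
    z[t] = 0.95 * z[t - 1] + rng.normal(0.0, 0.1)

# GLM observation model: log-rate linear in hand state, spike history, hidden state
w = np.array([0.5, 1.0, 0.2])                  # kinematic weights (hypothetical)
h = -1.0                                       # spike-history (refractory) weight
spikes = np.zeros(T, dtype=int)
for t in range(1, T):
    log_rate = 0.2 + np.dot(w, [pos[t], vel[t], acc[t]]) + h * spikes[t - 1] + z[t]
    spikes[t] = rng.poisson(np.exp(log_rate))
```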

  15. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  16. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    SciTech Connect

    Yock, Adam D. Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-05-15

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
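    The power-fit idea behind the second model, regressing the log of the daily volume on the log of the initial volume, can be sketched with hypothetical volumes (none of the numbers below are from the study):

```python
import numpy as np

# Hypothetical initial and mid-treatment tumor volumes (cm^3)
v_init = np.array([10.0, 25.0, 40.0, 15.0, 30.0])
v_day = np.array([8.1, 19.5, 30.2, 12.3, 23.0])

# Power-fit relationship v_day = a * v_init**b is linear in log-log space
b, log_a = np.polyfit(np.log(v_init), np.log(v_day), 1)
a = np.exp(log_a)
pred = a * v_init ** b                 # predicted daily volumes for each tumor
```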

  17. Generalized Functional Linear Models for Gene-based Case-Control Association Studies

    PubMed Central

    Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao

    2014-01-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  18. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses.

  19. Protein structure validation by generalized linear model root-mean-square deviation prediction.

    PubMed

    Bagaria, Anurag; Jaravine, Victor; Huang, Yuanpeng J; Montelione, Gaetano T; Güntert, Peter

    2012-02-01

    Large-scale initiatives for obtaining spatial protein structures by experimental or computational means have accentuated the need for the critical assessment of protein structure determination and prediction methods. These include blind test projects such as the critical assessment of protein structure prediction (CASP) and the critical assessment of protein structure determination by nuclear magnetic resonance (CASD-NMR). An important aim is to establish structure validation criteria that can reliably assess the accuracy of a new protein structure. Various quality measures derived from the coordinates have been proposed. A universal structural quality assessment method should combine multiple individual scores in a meaningful way, which is challenging because of their different measurement units. Here, we present a method based on a generalized linear model (GLM) that combines diverse protein structure quality scores into a single quantity with intuitive meaning, namely the predicted coordinate root-mean-square deviation (RMSD) value between the present structure and the (unavailable) "true" structure (GLM-RMSD). For two sets of structural models from the CASD-NMR and CASP projects, this GLM-RMSD value was compared with the actual accuracy given by the RMSD value to the corresponding, experimentally determined reference structure from the Protein Data Bank (PDB). The correlation coefficients between actual (model vs. reference from PDB) and predicted (model vs. "true") heavy-atom RMSDs were 0.69 and 0.76, for the two datasets from CASD-NMR and CASP, respectively, which is considerably higher than those for the individual scores (-0.24 to 0.68). The GLM-RMSD can thus predict the accuracy of protein structures more reliably than individual coordinate-based quality scores.
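    With an identity link, combining several quality scores into a single predicted RMSD reduces to an ordinary least-squares fit. A toy sketch with synthetic scores and a made-up true weighting (not the actual GLM-RMSD scores or coefficients):

```python
import numpy as np

rng = np.random.default_rng(3)
n_models = 60
# Hypothetical per-model quality scores (columns) and known RMSD to the reference
scores = rng.normal(size=(n_models, 3))
true_w = np.array([1.0, -0.5, 0.3])            # made-up weighting of the scores
rmsd = scores @ true_w + 2.0 + rng.normal(0.0, 0.1, n_models)

# GLM with identity link: ordinary least squares on the combined scores
A = np.column_stack([np.ones(n_models), scores])
coef, *_ = np.linalg.lstsq(A, rmsd, rcond=None)
pred = A @ coef
r = np.corrcoef(pred, rmsd)[0, 1]              # correlation of predicted vs actual
```

    The combined predictor correlates with the target more strongly than any single noisy score would, which mirrors the paper's comparison of GLM-RMSD against individual quality measures.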

  20. Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis.

    PubMed

    Yu-Kang, Tu

    2016-12-01

    Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
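    The node-splitting idea reduces to simple arithmetic at its core: the indirect estimate for a comparison is formed from the consistency equations of the network, and the inconsistency parameter is the gap between the direct and indirect estimates. A sketch with hypothetical log-odds-ratio estimates:

```python
# Toy log-odds-ratio estimates (hypothetical): two direct comparisons A-B and
# B-C, plus a direct estimate for A-C that can also be formed indirectly.
d_AB, d_BC = 0.30, 0.25
d_AC_direct = 0.70

d_AC_indirect = d_AB + d_BC            # consistency equation of the network
omega = d_AC_direct - d_AC_indirect    # node-split inconsistency parameter
print(round(omega, 2))                 # 0.15: direct and indirect evidence disagree
```

    The parameterization question discussed in the abstract concerns which treatment in the split contrast this omega is attached to (or whether it is split symmetrically between the two).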

  1. Modeling psychophysical data at the population-level: the generalized linear mixed model.

    PubMed

    Moscatelli, Alessandro; Mezzetti, Maura; Lacquaniti, Francesco

    2012-10-25

    In psychophysics, researchers usually apply a two-level model for the analysis of the behavior of the single subject and the population. This classical model has two main disadvantages. First, the second level of the analysis discards information on trial repetitions and subject-specific variability. Second, the model does not easily allow assessing the goodness of fit. As an alternative to this classical approach, here we propose the Generalized Linear Mixed Model (GLMM). The GLMM separately estimates the variability of fixed and random effects, it has a higher statistical power, and it allows an easier assessment of the goodness of fit compared with the classical two-level model. GLMMs have been frequently used in many disciplines since the 1990s; however, they have been rarely applied in psychophysics. Furthermore, to our knowledge, the issue of estimating the point-of-subjective-equivalence (PSE) within the GLMM framework has never been addressed. Therefore the article has two purposes: It provides a brief introduction to the usage of the GLMM in psychophysics, and it evaluates two different methods to estimate the PSE and its variability within the GLMM framework. We compare the performance of the GLMM and the classical two-level model on published experimental data and simulated data. We report that the estimated values of the parameters were similar between the two models and Type I errors were below the confidence level in both models. However, the GLMM has a higher statistical power than the two-level model. Moreover, one can easily compare the fit of different GLMMs according to different criteria. In conclusion, we argue that the GLMM can be a useful method in psychophysics.
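    Within a fitted logistic model, the PSE is the stimulus level at which the predicted probability equals 0.5, i.e., −β₀/β₁. A single-subject (fixed-effects-only) sketch using a hand-rolled Newton–Raphson fit on simulated data, deliberately omitting the random-effects machinery of a full GLMM:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated psychophysical data: 9 stimulus levels, 200 binary responses each
x = np.repeat(np.linspace(-2.0, 2.0, 9), 200)
true_pse, true_slope = 0.5, 2.0
p = 1.0 / (1.0 + np.exp(-true_slope * (x - true_pse)))
y = (rng.random(x.size) < p).astype(float)

X = np.column_stack([np.ones_like(x), x])      # design matrix [1, x]
beta = np.zeros(2)
for _ in range(25):                            # Newton-Raphson for logistic GLM
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1.0 - mu))[:, None])
    beta = beta + np.linalg.solve(hess, grad)

pse_hat = -beta[0] / beta[1]                   # stimulus where P(response) = 0.5
```

    The estimate recovers the simulated PSE; a GLMM would additionally place subject-specific random effects on the intercept and slope before computing this ratio.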

  2. Predicting stem borer density in maize using RapidEye data and generalized linear models

    NASA Astrophysics Data System (ADS)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha-1, compared with the global average of 6.06 t ha-1, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated between 12% and 21% of the total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields in two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) test site on the 9th of December 2014 and on 27th of January 2015, and for Machakos (eastern Kenya) a RapidEye image was acquired on the 3rd of January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio prediction to deviation (RPD) statistics were used to assess the models' performance using a leave-one-out cross-validation approach. The Zero-inflated NB ('ZINB') models outperformed the 'NB' models and stem borer larva densities could only be predicted during the mid growing season in December and early January in both study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly when all the 30 SVIs (non-nested) and only the significant (nested) SVIs were used. The models developed could improve decision making regarding controlling maize stem borers within integrated pest management (IPM) interventions.
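    A Poisson GLM of larva counts on a single spectral vegetation index can be sketched with Newton/Fisher-scoring iterations on the log link. The data and coefficients below are hypothetical, not from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-field spectral vegetation index and stem borer larva counts
svi = rng.uniform(0.2, 0.9, 400)
lam = np.exp(0.5 + 1.5 * svi)                  # made-up true log-linear relation
counts = rng.poisson(lam)

X = np.column_stack([np.ones_like(svi), svi])
# Initialize near a log-linear least-squares fit, then Newton/Fisher scoring
beta, *_ = np.linalg.lstsq(X, np.log(counts + 1.0), rcond=None)
for _ in range(25):
    mu = np.exp(X @ beta)                      # fitted Poisson means
    beta = beta + np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (counts - mu))

rmse = np.sqrt(np.mean((counts - np.exp(X @ beta)) ** 2))
```

    The negative binomial and zero-inflated variants used in the study replace the Poisson variance assumption but keep the same log-linear mean structure.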

  3. A generalized harmonic balance method for forced non-linear oscillations: the subharmonic cases

    NASA Astrophysics Data System (ADS)

    Wu, J. J.

    1992-12-01

    This paper summarizes and extends results in two previous papers, published in conference proceedings, on a variant of the generalized harmonic balance method (GHB) and its application to obtain subharmonic solutions of forced non-linear oscillation problems. This method was introduced as an alternative to the method of multiple scales, and it essentially consists of two parts. First, the part of the multiple scales method used to reduce the problem to a set of differential equations is used to express the solution as a sum of terms of various harmonics with unknown, time dependent coefficients. Second, the form of solution so obtained is substituted into the original equation and the coefficients of each harmonic are set to zero. Key equations of approximations for a subharmonic case are derived for the cases of both "small" damping and excitations, and "large" damping and excitations, which are shown to be identical, in the intended order of approximation, to those obtained by Nayfeh using the method of multiple scales. Detailed numerical formulations, including the derivation of the initial conditions, are presented, as well as some numerical results for the frequency-response relations and the time evolution of various harmonic components. Excellent agreement is demonstrated between results by GHB and by integrating the original differential equation directly. The improved efficiency in obtaining numerical solutions using GHB as compared with integrating the original differential equation is also demonstrated. For the case of large damping and excitations and for non-trivial solutions, it is noted that there exists a threshold value of the force beyond which no subharmonic excitations are possible.

  4. Determinants of hospital closure in South Korea: use of a hierarchical generalized linear model.

    PubMed

    Noh, Maengseok; Lee, Youngjo; Yun, Sung-Cheol; Lee, Sang-Il; Lee, Moo-Song; Khang, Young-Ho

    2006-11-01

    Understanding causes of hospital closure is important if hospitals are to survive and continue to fulfill their missions as the center for health care in their neighborhoods. Knowing which hospitals are most susceptible to closure can be of great use for hospital administrators and others interested in hospital performance. Although prior studies have identified a range of factors associated with increased risk of hospital closure, most are US-based and do not directly relate to health care systems in other countries. We examined determinants of hospital closure in a nationally representative sample: 805 hospitals established in South Korea before 1996 were examined-hospitals established in 1996 or after were excluded. Major organizational changes (survival vs. closure) were followed for all South Korean hospitals from 1996 through 2002. With the use of a hierarchical generalized linear model, a frailty model was used to control correlation among repeated measurements for risk factors for hospital closure. Results showed that ownership and hospital size were significantly associated with hospital closure. Urban hospitals were less likely to close than rural hospitals. However, the urban location of a hospital was not associated with hospital closure after adjustment for the proportion of elderly. Two measures for hospital competition (competitive beds and 1 − Herfindahl–Hirschman index) were positively associated with risk of hospital closure before and after adjustment for confounders. In addition, annual 10% change in competitive beds was significantly predictive of hospital closure. In conclusion, yearly trends in hospital competition as well as the level of hospital competition each year affected hospital survival. Future studies need to examine the contribution of internal factors such as management strategies and financial status to hospital closure in South Korea.
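    The Herfindahl–Hirschman index (HHI) is the sum of squared market shares, so 1 − HHI grows as a market becomes more competitive. A toy computation with hypothetical bed counts:

```python
# Hypothetical bed counts of hospitals competing in one market
beds = [120, 300, 80, 500]
total = sum(beds)
shares = [b / total for b in beds]

hhi = sum(s ** 2 for s in shares)              # Herfindahl-Hirschman index
competition = 1.0 - hhi                        # the study's competition measure
print(round(hhi, 3), round(competition, 3))    # 0.361 0.639
```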

  5. The linear co-variance between joint muscle torques is not a generalized principle.

    PubMed

    Sande de Souza, Luciane Aparecida Pascucci; Dionísio, Valdeci Carlos; Lerena, Mario Adrian Misailidis; Marconi, Nadia Fernanda; Almeida, Gil Lúcio

    2009-06-01

    In 1996, Gottlieb et al. [Gottlieb GL, Song Q, Hong D, Almeida GL, Corcos DM. Coordinating movement at two joints: A principle of linear covariance. J Neurophysiol 1996;75(4):1760-4] identified a linear co-variance between the joint muscle torques generated at two connected joints. The joint muscle torques changed directions and magnitudes in a synchronized and linear fashion and called it the principle of linear co-variance. Here we showed that this principle cannot hold for some class of movements. Neurologically normal subjects performed multijoint movements involving elbow and shoulder with reversal towards three targets in the sagittal plane without any constraints. The movement kinematics was calculated using the X and Y coordinates of the markers positioned over the joints. Inverse dynamics was used to calculate the joint muscle, interaction and net torques. We found that for the class of voluntary movements analyzed, the joint muscle torques of the elbow and the shoulder were not linearly correlated. The same was observed for the interaction torques. But, the net torques at both joints, i.e., the sum of the interaction and the joint muscle torques were linearly correlated. We showed that by decoupling the joint muscle torques, but keeping the net torques linearly correlated, the CNS was able to generate fast and accurate movements with straight fingertip paths. The movement paths were typical of the ones in which the joint muscle torques were linearly correlated.

  6. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.

  8. On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers

    DTIC Science & Technology

    2012-08-01

    This paper shows that global linear convergence can be guaranteed under assumptions of strong convexity and a Lipschitz gradient on one of the two functions, along with certain ... Despite the extensive literature on the ADM and its applications, there are very few results on its rate of convergence until the very recent past. Work [13] shows ...

  9. Meta-analysis of Complex Diseases at Gene Level with Generalized Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Chiu, Chi-Yang; Chen, Wei; Ren, Haobo; Li, Yun; Boehnke, Michael; Amos, Christopher I; Moore, Jason H; Xiong, Momiao

    2016-02-01

    We developed generalized functional linear models (GFLMs) to perform a meta-analysis of multiple case-control studies to evaluate the relationship of genetic data to dichotomous traits adjusting for covariates. Unlike the previously developed meta-analysis for sequence kernel association tests (MetaSKATs), which are based on mixed-effect models to make the contributions of major gene loci random, GFLMs are fixed models; i.e., genetic effects of multiple genetic variants are fixed. Based on GFLMs, we developed chi-squared-distributed Rao's efficient score test and likelihood-ratio test (LRT) statistics to test for an association between a complex dichotomous trait and multiple genetic variants. We then performed extensive simulations to evaluate the empirical type I error rates and power performance of the proposed tests. The Rao's efficient score test statistics of GFLMs are very conservative and have higher power than MetaSKATs when some causal variants are rare and some are common. When the causal variants are all rare [i.e., minor allele frequencies (MAF) < 0.03], the Rao's efficient score test statistics have similar or slightly lower power than MetaSKATs. The LRT statistics generate accurate type I error rates for homogeneous genetic-effect models and may inflate type I error rates for heterogeneous genetic-effect models owing to the large numbers of degrees of freedom and have similar or slightly higher power than the Rao's efficient score test statistics. GFLMs were applied to analyze genetic data of 22 gene regions of type 2 diabetes data from a meta-analysis of eight European studies and detected significant association for 18 genes (P < 3.10 × 10^(-6)), tentative association for 2 genes (HHEX and HMGA2; P ≈ 10^(-5)), and no association for 2 genes, while MetaSKATs detected none. In addition, the traditional additive-effect model detects association at gene HHEX. GFLMs and related tests can analyze rare or common variants or a combination of the two and

  10. A methodology for evaluation of parent-mutant competition using a generalized non-linear ecosystem model

    Treesearch

    Raymond L. Czaplewski

    1973-01-01

    A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...

  11. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    EPA Science Inventory

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  12. Reversibility of a quantum channel: General conditions and their applications to Bosonic linear channels

    SciTech Connect

    Shirokov, M. E.

    2013-11-15

    The method of the complementary channel for analyzing reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular to Bosonic Gaussian channels. The obtained reversibility conditions for Bosonic linear channels have a clear physical interpretation, and their sufficiency is also shown by explicit construction of reversing channels. The complementary-channel method makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrödinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.

  13. GENERAL: A Study on Stochastic Resonance in Biased Subdiffusive Smoluchowski Systems within Linear Response Range

    NASA Astrophysics Data System (ADS)

    Li, Yi-Juan; Kang, Yan-Mei

    2010-08-01

    The method of matrix continued fraction is used to investigate stochastic resonance (SR) in the biased subdiffusive Smoluchowski system within linear response range. Numerical results of linear dynamic susceptibility and spectral amplification factor are presented and discussed in two-well potential and mono-well potential with different subdiffusion exponents. Following our observation, the introduction of a bias in the potential weakens the SR effect in the subdiffusive system just as in the normal diffusive case. Our observation also discloses that the subdiffusion inhibits the low-frequency SR, but it enhances the high-frequency SR in the biased Smoluchowski system, which should reflect a "flattening" influence of the subdiffusion on the linear susceptibility.

  14. Reversibility of a quantum channel: General conditions and their applications to Bosonic linear channels

    NASA Astrophysics Data System (ADS)

    Shirokov, M. E.

    2013-11-01

    The method of the complementary channel for analyzing the reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular to Bosonic Gaussian channels. The reversibility conditions obtained for Bosonic linear channels have a clear physical interpretation, and their sufficiency is also shown by explicit construction of reversing channels. The method of the complementary channel makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrödinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.

  15. Evidence for the conjecture that sampling generalized cat states with linear optics is hard

    NASA Astrophysics Data System (ADS)

    Rohde, Peter P.; Motes, Keith R.; Knott, Paul A.; Fitzsimons, Joseph; Munro, William J.; Dowling, Jonathan P.

    2015-01-01

    Boson sampling has been presented as a simplified model for linear optical quantum computing. In the boson-sampling model, Fock states are passed through a linear optics network and sampled via number-resolved photodetection. It has been shown that this sampling problem likely cannot be efficiently classically simulated. This raises the question as to whether there are other quantum states of light for which the equivalent sampling problem is also computationally hard. We present evidence, without using a full complexity proof, that a very broad class of quantum states of light—arbitrary superpositions of two or more coherent states—when evolved via passive linear optics and sampled with number-resolved photodetection, likely implements a classically hard sampling problem.

  16. A generalized hybrid transfinite element computational approach for nonlinear/linear unified thermal/structural analysis

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1987-01-01

    The present paper describes the development of a new hybrid computational approach for nonlinear/linear thermal structural analysis. The proposed transfinite element approach is a hybrid scheme that combines the modeling versatility of contemporary finite elements with transform methods and classical Bubnov-Galerkin schemes. The applicability of the proposed formulations to nonlinear analysis is also developed. Several test cases are presented, including nonlinear/linear unified thermal-stress and thermal-stress wave propagation. Comparative results validate the fundamental capabilities of the proposed hybrid transfinite element methodology.

  17. Non-linear generalization of the relativistic Schrödinger equations.

    NASA Astrophysics Data System (ADS)

    Ochs, U.; Sorg, M.

    1996-09-01

    The theory of the relativistic Schrödinger equations is further developed and extended to non-linear field equations. The technical advantage of the relativistic Schrödinger approach is demonstrated explicitly by solving the coupled Einstein-Klein-Gordon equations, including a non-linear Higgs potential, in the case of a Robertson-Walker universe. The numerical results yield the effect of dynamical self-diagonalization of the Hamiltonian, which corresponds to a kind of quantum de-coherence enabled by the inflation of the universe.

  18. Linear ion trap with a deterministic voltage of the general form

    NASA Astrophysics Data System (ADS)

    Rozhdestvenskii, Yu. V.; Rudyi, S. S.

    2017-04-01

    An analysis of the stability zones of a linear ion trap with a voltage of the general form applied to the electrodes is presented. The possibility of localizing ions with specific types of periodic (but not harmonic) signals is investigated. It is shown that changing the temporal form of the applied voltage makes it possible to control both the trapping and the dynamics of ions in a linear radiofrequency (RF) trap while preserving its design. These developments open new possibilities for implementing devices based on single ions, e.g., quantum frequency standards and quantum processors.

  19. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy

    PubMed Central

    Huppert, Theodore J.

    2016-01-01

    Abstract. Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts. PMID:26989756
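    The prewhitening idea behind generalizing the linear model for colored noise can be illustrated with a minimal sketch. This is a hypothetical single-regressor model with an assumed AR(1) noise coefficient, not the fNIRS pipeline discussed in the paper: filtering both the regressor and the response with the same AR(1) filter whitens the residuals, after which ordinary least squares applies.

```python
# Minimal AR(1) prewhitening sketch for a single-regressor linear model.
# Model: y_t = beta * x_t + e_t, with colored noise e_t = rho * e_{t-1} + w_t.
# Applying the filter z_t -> z_t - rho * z_{t-1} to both series whitens the
# noise, so ordinary least squares on the filtered data is appropriate.

def ols_slope(x, y):
    """Least-squares slope for a no-intercept model y = beta * x."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / sxx

def prewhiten(series, rho):
    """Apply the AR(1) whitening filter z_t - rho * z_{t-1}."""
    return [series[t] - rho * series[t - 1] for t in range(1, len(series))]

def gls_ar1(x, y, rho):
    """Fit beta after whitening both the regressor and the response."""
    return ols_slope(prewhiten(x, rho), prewhiten(y, rho))
```

    With `rho = 0` the transform is the identity and the estimate reduces to plain OLS; in practice `rho` would itself be estimated from the residuals.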

  20. Hierarchical Generalized Linear Models for Multiple Groups of Rare and Common Variants: Jointly Estimating Group and Individual-Variant Effects

    PubMed Central

    Yi, Nengjun; Liu, Nianjun; Zhi, Degui; Li, Jun

    2011-01-01

    Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:22144906
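    The genetic-score construction described above can be sketched in a few lines. The genotypes, weights, and group effects below are hypothetical illustrations, and the hierarchical fitting machinery of the BhGLM package is not reproduced; the sketch only shows how group scores enter the linear predictor of a logistic GLM.

```python
import math

# Sketch of a group genetic score entering a logistic GLM (hypothetical data).
# Each group contributes effect_k * score_k, where score_k is a linear
# combination of that group's per-variant genotypes.

def genetic_score(genotypes, weights):
    """Linear combination of variant genotypes (minor-allele counts 0/1/2)."""
    return sum(g * w for g, w in zip(genotypes, weights))

def disease_probability(groups, group_effects, intercept=0.0):
    """Logistic model: logit(p) = intercept + sum_k effect_k * score_k.
    `groups` is a list of (genotypes, weights) pairs, one per variant group."""
    eta = intercept + sum(
        effect * genetic_score(g, w)
        for (g, w), effect in zip(groups, group_effects)
    )
    return 1.0 / (1.0 + math.exp(-eta))
```

    In the hierarchical model both the group effects and the score weights are estimated jointly; here they are fixed inputs for illustration.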

  1. Recent advances toward a general purpose linear-scaling quantum force field.

    PubMed

    Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M

    2014-09-16

    Conspectus There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. 
QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to

  2. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    ERIC Educational Resources Information Center

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  4. Some Numerical Methods for Exponential Analysis with Connection to a General Identification Scheme for Linear Processes

    DTIC Science & Technology

    1980-11-01

    generalized model described by Eykhoff [1, 2], Astrom and Eykhoff [3], and on pages 209-220 of Eykhoff [4]. The origin of the generalized model can be...aspects of process-parameter estimation," IEEE Trans. Auto. Control, October 1963, pp. 347-357. 3. K. J. Astrom and P. Eykhoff, "System

  5. Large deformation image classification using generalized locality-constrained linear coding.

    PubMed

    Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of the initial momentum for image classification, focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.

  6. Expected Estimating Equation using Calibration Data for Generalized Linear Models with a Mixture of Berkson and Classical Errors in Covariates

    PubMed Central

    de Dieu Tapsoba, Jean; Lee, Shen-Ming; Wang, Ching-Yun

    2013-01-01

    Data collected in many epidemiological or clinical research studies are often contaminated with measurement errors that may be of classical or Berkson error type. The measurement error may also be a combination of both classical and Berkson errors and failure to account for both errors could lead to unreliable inference in many situations. We consider regression analysis in generalized linear models when some covariates are prone to a mixture of Berkson and classical errors and calibration data are available only for some subjects in a subsample. We propose an expected estimating equation approach to accommodate both errors in generalized linear regression analyses. The proposed method can consistently estimate the classical and Berkson error variances based on the available data, without knowing the mixture percentage. Its finite-sample performance is investigated numerically. Our method is illustrated by an application to real data from an HIV vaccine study. PMID:24009099

  7. CABARET scheme for the numerical solution of aeroacoustics problems: Generalization to linearized one-dimensional Euler equations

    NASA Astrophysics Data System (ADS)

    Goloviznin, V. M.; Karabasov, S. A.; Kozubskaya, T. K.; Maksimov, N. V.

    2009-12-01

    A generalization of the CABARET finite difference scheme is proposed for linearized one-dimensional Euler equations based on the characteristic decomposition into local Riemann invariants. The new method is compared with several central finite difference schemes that are widely used in computational aeroacoustics. Numerical results for the propagation of an acoustic wave in a homogeneous field and the refraction of this wave through a contact discontinuity obtained on a strongly nonuniform grid are presented.
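    The characteristic decomposition underlying the scheme can be illustrated with a toy transport step. The sketch below performs plain first-order upwind advection of the Riemann invariants of the linearized 1-D Euler equations; it is not the CABARET scheme itself, and all values are made up.

```python
# Characteristic decomposition sketch for the linearized 1-D Euler equations.
# The local Riemann invariants w± = u ± p/(rho0*c) each satisfy a scalar
# advection equation at speed ±c; a first-order upwind step for the
# right-running invariant is shown (far simpler than CABARET itself).

def to_invariants(u, p, rho0, c):
    """Decompose velocity and pressure perturbations into Riemann invariants."""
    wp = [ui + pi / (rho0 * c) for ui, pi in zip(u, p)]
    wm = [ui - pi / (rho0 * c) for ui, pi in zip(u, p)]
    return wp, wm

def upwind_step(w, c, dx, dt):
    """One explicit upwind update for w_t + c * w_x = 0 (c > 0, periodic grid)."""
    nu = c * dt / dx  # CFL number; must satisfy nu <= 1 for stability
    return [w[i] - nu * (w[i] - w[i - 1]) for i in range(len(w))]
```

    At `nu = 1` the upwind update reduces to an exact shift of the profile by one cell, a convenient sanity check for any characteristic-based transport scheme.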

  8. Solution of a General Linear Complementarity Problem Using Smooth Optimization and Its Application to Bilinear Programming and LCP

    SciTech Connect

    Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.

    2001-07-01

    This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.

  9. On the Energy Release Rate for Dynamic Transient Anti-Plane Shear Crack Propagation in a General Linear Viscoelastic Body

    DTIC Science & Technology

    1988-09-01

    properties. Moreover, it is found that whether or not a failure zone is incorporated into the model significantly influences both quantitatively and...Hopf technique, Willis constructed the dynamic stress intensity factor (SIF) for a standard linear solid material model and general crack face

  10. Principal components and generalized linear modeling in the correlation between hospital admissions and air pollution

    PubMed Central

    de Souza, Juliana Bottoni; Reisen, Valdério Anselmo; Santos, Jane Méri; Franco, Glaura Conceição

    2014-01-01

    OBJECTIVE To analyze the association between concentrations of air pollutants and admissions for respiratory causes in children. METHODS Ecological time series study. Daily figures for hospital admissions of children aged < 6, and daily concentrations of air pollutants (PM10, SO2, NO2, O3 and CO) were analyzed in the Região da Grande Vitória, ES, Southeastern Brazil, from January 2005 to December 2010. For statistical analysis, two techniques were combined: Poisson regression with generalized additive models and principal component analysis. These analysis techniques complemented each other and provided more significant estimates of relative risk. The models were adjusted for temporal trend, seasonality, day of the week, meteorological factors and autocorrelation. In the final adjustment of the model, it was necessary to include autoregressive moving average (p, q) models in the residuals in order to eliminate the autocorrelation structures present in the components. RESULTS For every 10.49 μg/m3 increase (interquartile range) in levels of the pollutant PM10, there was a 3.0% increase in the relative risk estimated using the generalized additive model with principal component analysis and seasonal autoregressive terms, while in the usual generalized additive model the estimate was 2.0%. CONCLUSIONS Compared with the usual generalized additive model, the proposed generalized additive model with principal component analysis showed, in general, better results in estimating relative risk and quality of fit. PMID:25119940
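    The relative-risk arithmetic reported here follows directly from the Poisson regression coefficient: for an interquartile-range increase in a pollutant, RR = exp(beta * IQR). A minimal sketch, with a hypothetical coefficient chosen so that a 10.49 μg/m3 rise reproduces a 3.0% increase:

```python
import math

# Relative risk for a given increase in a Poisson-regression covariate.
# RR = exp(beta * increase); the coefficient used in the test is hypothetical,
# back-solved to match the ~3% increase quoted in the abstract.

def relative_risk(beta, increase):
    """Multiplicative change in admission rate for `increase` units of exposure."""
    return math.exp(beta * increase)

def percent_increase(beta, increase):
    """Same quantity expressed as a percentage increase over baseline."""
    return 100.0 * (relative_risk(beta, increase) - 1.0)
```

    Note that a zero coefficient yields RR = 1 (no effect), which is the usual null against which these estimates are compared.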

  11. General solution of the diffusion equation with a nonlocal diffusive term and a linear force term.

    PubMed

    Malacarne, L C; Mendes, R S; Lenzi, E K; Lenzi, M K

    2006-10-01

    We obtain a formal solution for a large class of diffusion equations with a spatial kernel dependence in the diffusive term. The presence of this kernel represents a nonlocal dependence of the diffusive process and, by a suitable choice, it has the spatial fractional diffusion equations as a particular case. We also consider the presence of a linear external force and source terms. In addition, we show that a rich class of anomalous diffusion, e.g., the Lévy superdiffusion, can be obtained by an appropriated choice of kernel.

  12. Identification of general linear relationships between activation energies and enthalpy changes for dissociation reactions at surfaces.

    PubMed

    Michaelides, Angelos; Liu, Z-P; Zhang, C J; Alavi, Ali; King, David A; Hu, P

    2003-04-02

    The activation energy to reaction is a key quantity that controls catalytic activity. Having used ab initio calculations to determine an extensive and broad ranging set of activation energies and enthalpy changes for surface-catalyzed reactions, we show that linear relationships exist between dissociation activation energies and enthalpy changes. Known in the literature as empirical Brønsted-Evans-Polanyi (BEP) relationships, we identify and discuss the physical origin of their presence in heterogeneous catalysis. The key implication is that merely from knowledge of adsorption energies the barriers to catalytic elementary reaction steps can be estimated.
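    The linear BEP relationship lends itself to a small sketch: fit Ea ≈ alpha * dH + beta by least squares to (enthalpy change, activation energy) pairs, then estimate a new barrier from an enthalpy change alone. The numbers in the test are made-up illustrations, not the paper's ab initio data.

```python
# Brønsted-Evans-Polanyi sketch: a linear fit Ea = alpha * dH + beta, then
# barrier estimation from an enthalpy change alone (illustrative values only).

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def bep_barrier(dh, alpha, beta):
    """Estimate an activation energy from an enthalpy change via the BEP line."""
    return alpha * dh + beta
```

    This is exactly the practical use the abstract points to: once alpha and beta are fitted for a reaction family, barriers follow from adsorption energetics without computing each transition state.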

  13. Use of a generalized linear model to evaluate range forage production estimates

    NASA Astrophysics Data System (ADS)

    Mitchell, John E.; Joyce, Linda A.

    1986-05-01

    Interdisciplinary teams have been used in federal land planning and in the private sector to reach consensus on the environmental impact of management. When a large data base is constructed, verifiability of the accuracy of the coded estimates and the underlying assumptions becomes a problem. A mechanism is provided by the use of a linear statistical model to evaluate production coefficients in terms of errors in coding and underlying assumptions. The technique can be used to evaluate other intuitive models depicting natural resource production in relation to prescribed variables, such as site factors or secondary succession.

  14. A general algorithm for control problems with variable parameters and quasi-linear models

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2015-12-01

    This paper presents an algorithm that is able to solve optimal control problems in which the modelling of the system contains variable parameters, with the added complication that, in certain cases, these parameters can lead to control problems governed by quasi-linear equations. Combining the techniques of Pontryagin's Maximum Principle and the shooting method, an algorithm has been developed that is not affected by the values of the parameters, being able to solve conventional problems as well as cases in which the optimal solution is shown to be bang-bang with singular arcs.

  15. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method which does not require the solution of linear equations requires more time for additional function and gradient evaluations. The classic trade off of time for space separates the two methods.
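    The fallback behaviour described here, plain steepest descent on the squared residual with an adaptive step size, can be sketched as follows. The eigenvector-based acceleration of the paper is not reproduced; the gradient is approximated numerically, so, as in the paper's method, no linear systems are solved.

```python
# Steepest descent on F(x) = sum(f_i(x)^2) with a backtracking (adaptive) step.
# This sketches only the far-from-root fallback behaviour described in the
# abstract, not the eigenvector-based acceleration near a root.

def residual_norm2(f, x):
    """Squared residual F(x) = ||f(x)||^2."""
    return sum(fi * fi for fi in f(x))

def num_grad(f, x, h=1e-6):
    """Forward-difference gradient of F; avoids any linear-equation solves."""
    f0 = residual_norm2(f, x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((residual_norm2(f, xp) - f0) / h)
    return g

def steepest_descent_root(f, x, step=1.0, tol=1e-12, iters=500):
    """Drive ||f(x)||^2 toward zero along the negative gradient."""
    for _ in range(iters):
        f0 = residual_norm2(f, x)
        if f0 < tol:
            break
        g = num_grad(f, x)
        while True:  # backtrack until the residual actually decreases
            trial = [xi - step * gi for xi, gi in zip(x, g)]
            if residual_norm2(f, trial) < f0 or step < 1e-12:
                break
            step *= 0.5
        x = trial
        step *= 2.0  # let the step grow again once progress resumes
    return x
```

    The backtracking loop is the "adaptive step size" of the abstract in its simplest form; the trade-off noted there (extra function evaluations instead of storage for linear systems) is visible in `num_grad`.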

  16. A substructure coupling procedure applicable to general linear time-invariant dynamic systems

    NASA Technical Reports Server (NTRS)

    Howsman, T. G.; Craig, R. R., Jr.

    1984-01-01

    A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the nonself-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.

  17. The Exact Solution for Linear Thermoelastic Axisymmetric Deformations of Generally Laminated Circular Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.; Schultz, Marc R.

    2012-01-01

    A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction and that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.

  18. General theory of spherically symmetric boundary-value problems of the linear transport theory.

    NASA Technical Reports Server (NTRS)

    Kanal, M.

    1972-01-01

    A general theory of spherically symmetric boundary-value problems of the one-speed neutron transport theory is presented. The formulation is also applicable to the 'gray' problems of radiative transfer. The Green's function for the purely absorbing medium is utilized in obtaining the normal mode expansion of the angular densities for both interior and exterior problems. As the integral equations for unknown coefficients are regular, a general class of reduction operators is introduced to reduce such regular integral equations to singular ones with a Cauchy-type kernel. Such operators then permit one to solve the singular integral equations by the standard techniques due to Muskhelishvili. We discuss several spherically symmetric problems. However, the treatment is kept sufficiently general to deal with problems lacking azimuthal symmetry. In particular the procedure seems to work for regions whose boundary coincides with one of the coordinate surfaces for which the Helmholtz equation is separable.

  19. Optimization of biochemical systems by linear programming and general mass action model representations.

    PubMed

    Marín-Sanguino, Alberto; Torres, Néstor V

    2003-08-01

    A new method is proposed for the optimization of biochemical systems. The method, based on the separation of the stoichiometric and kinetic aspects of the system, follows the general approach used in the previously presented indirect optimization method (IOM) developed within biochemical systems theory. It is called GMA-IOM because it makes use of the generalized mass action (GMA) as the model system representation form. The GMA representation avoids flux aggregation and thus prevents possible stoichiometric errors. The optimization of a system is used to illustrate and compare the features, advantages and shortcomings of both versions of the IOM method as a general strategy for designing improved microbial strains of biotechnological interest. Special attention has been paid to practical problems for the actual implementation of the new proposed strategy, such as the total protein content of the engineered strain or the deviation from the original steady state and its influence on cell viability.

  1. Linear stability of plane Poiseuille flow over a generalized Stokes layer

    NASA Astrophysics Data System (ADS)

    Quadrio, Maurizio; Martinelli, Fulvio; Schmid, Peter J.

    2011-12-01

    Linear stability of plane Poiseuille flow subject to spanwise velocity forcing applied at the wall is studied. The forcing is stationary and sinusoidally distributed along the streamwise direction. The long-term aim of the study is to explore a possible relationship between the modification induced by the wall forcing to the stability characteristics of the unforced Poiseuille flow and the significant capabilities demonstrated by the same forcing in reducing turbulent friction drag. We present in this paper the statement of the mathematical problem, which is considerably more complex than the classic Orr-Sommerfeld-Squire approach, owing to the streamwise-varying boundary condition. We also report some preliminary results which, although not yet conclusive, describe the effects of the wall forcing on modal and non-modal characteristics of the flow stability.

  2. A generalized analog implementation of piecewise linear neuron models using CCII building blocks.

    PubMed

    Soleimani, Hamid; Ahmadi, Arash; Bavandpour, Mohammad; Sharifipoor, Ozra

    2014-03-01

    This paper presents a set of reconfigurable analog implementations of piecewise linear spiking neuron models using second generation current conveyor (CCII) building blocks. With the same topology and circuit elements, and without W/L modification, which is impossible after circuit fabrication, these circuits can produce different behaviors, similar to those of biological neurons, both for a single neuron and for a network of neurons, simply by tuning reference current and voltage sources. The models are investigated in terms of analog implementation feasibility and cost, targeting large-scale hardware implementations. Results show that these models can be traded off against one another to obtain the best balance of performance, area and accuracy. Simulation results are presented for different neuron behaviors with CMOS 350 nm technology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Robust conic generalized partial linear models using RCMARS method - A robustification of CGPLM

    NASA Astrophysics Data System (ADS)

    Özmen, Ayşe; Weber, Gerhard Wilhelm

    2012-11-01

    GPLM is a combination of two different regression models, each of which is applied to a different part of the data set. It is also well suited to high-dimensional, non-normal and nonlinear data sets, having the flexibility to reflect all anomalies effectively. In our previous study, Conic GPLM (CGPLM) was introduced using CMARS and logistic regression; in a comparison with CMARS, CGPLM gave better results. In this study, we incorporate the uncertainty of future scenarios into the CMARS and linear/logit regression parts of CGPLM and robustify it with robust optimization, which deals with data uncertainty. Moreover, we apply RCGPLM to a small data set from the financial sector as a numerical experiment.

  4. Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob

    2007-01-01

    For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.
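The core idea above, factoring a nonlinear vector field exactly as x' = A(ρ)x with a state-dependent scheduling parameter ρ so that the model is valid away from trim points, can be illustrated on a toy system. The sketch below uses the Van der Pol oscillator (my example, not the paper's FASER model):

```python
# Toy quasi-LPV sketch (not the paper's aircraft model): the Van der Pol
# dynamics x1' = x2, x2' = mu (1 - x1^2) x2 - x1 are rewritten exactly as
# x' = A(rho) x with scheduling parameter rho = x1^2 measured from the state.

MU = 1.0

def f_nonlinear(x):
    x1, x2 = x
    return (x2, MU * (1.0 - x1 ** 2) * x2 - x1)

def a_lpv(rho):
    # "Frozen" system matrix for a given scheduling value rho.
    return ((0.0, 1.0), (-1.0, MU * (1.0 - rho)))

def f_lpv(x):
    rho = x[0] ** 2
    A = a_lpv(rho)
    return tuple(A[i][0] * x[0] + A[i][1] * x[1] for i in range(2))

# The factorization is exact, so the representations agree at every state,
# not just near an equilibrium point.
err = max(abs(a - b)
          for x in [(0.3, -1.2), (2.0, 0.5), (-1.5, 3.0)]
          for a, b in zip(f_nonlinear(x), f_lpv(x)))
```

Because the factorization is exact rather than a Jacobian linearization, the LPV model reproduces the nonlinear dynamics over a non-trim region, which is the property the paper's construction preserves for the aircraft model.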

  5. Iterative solution of general sparse linear systems on clusters of workstations

    SciTech Connect

    Lo, Gen-Ching; Saad, Y.

    1996-12-31

    Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious challenge, is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels such as inner products could ruin any gains gained from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.
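As a minimal illustration of the iterative-solver setting (not PSPARSLIB itself), the sketch below runs Jacobi iteration, one of the simplest parallel-friendly methods since each component update is independent, on a sparse, diagonally dominant tridiagonal system:

```python
# Hedged sketch: Jacobi iteration on a sparse system stored row-wise as
# dicts (column -> value). Diagonal dominance guarantees convergence here;
# the library discussed above uses far more sophisticated preconditioners.

def jacobi(rows, b, iters=200):
    """Solve A x = b, with row i stored as a dict mapping column j -> A[i][j]."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Each entry of the new iterate depends only on the old iterate,
        # which is why Jacobi parallelizes trivially across rows.
        x = [(b[i] - sum(a * x[j] for j, a in rows[i].items() if j != i)) / rows[i][i]
             for i in range(n)]
    return x

# 1-D Poisson-like tridiagonal test system with known solution of all ones.
n = 20
rows = [{i: 4.0,
         **({i - 1: -1.0} if i > 0 else {}),
         **({i + 1: -1.0} if i < n - 1 else {})}
        for i in range(n)]
x_true = [1.0] * n
b = [sum(a * x_true[j] for j, a in rows[i].items()) for i in range(n)]
x = jacobi(rows, b)
err = max(abs(xi - 1.0) for xi in x)
```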

  6. Evaluation of cavity occurrence in the Maynardville Limestone and the Copper Ridge Dolomite at the Y-12 Plant using logistic and general linear models

    SciTech Connect

    Shevenell, L.A.; Beauchamp, J.J.

    1994-11-01

Several waste disposal sites are located on or adjacent to the karstic Maynardville Limestone (Cmn) and the Copper Ridge Dolomite (Ccr) at the Oak Ridge Y-12 Plant. These formations receive contaminants in groundwaters from nearby disposal sites, which can be transported quite rapidly due to the karst flow system. In order to evaluate transport processes through the karst aquifer, the solutional aspects of the formations must be characterized. As one component of this characterization effort, statistical analyses were conducted on the cavity data to determine whether a suitable model could be identified that is capable of predicting the probability of cavity size or distribution in locations for which drilling data are not available. Existing data on the locations (East, North coordinates), depths (and elevations), and sizes of known conduits and other water zones were used in the analyses. Two different models were constructed in the attempt to predict the distribution of cavities in the vicinity of the Y-12 Plant: General Linear Models (GLM) and Logistic Regression Models (LOG). Each of the models attempted was very sensitive to the data set used. Models based on subsets of the full data set were found to do an inadequate job of predicting the behavior of the full data set. The fact that the Ccr and Cmn data sets differ significantly is not surprising, considering that the hydrogeology of the two formations differs. Flow in the Cmn is generally at elevations between 600 and 950 ft and is dominantly strike-parallel through submerged, partially mud-filled cavities with sizes up to 40 ft, but more typically less than 5 ft. Recognized flow in the Ccr is generally above 950 ft elevation, with flow both parallel and perpendicular to geologic strike through conduits, which tend to be larger than those in the Cmn and are often not fully saturated at the shallower depths.

  7. Development of the complex general linear model in the Fourier domain: application to fMRI multiple input-output evoked responses for single subjects.

    PubMed

    Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.
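The Fourier-domain estimation idea, recovering a nonparametric transfer function as the ratio of output to input spectra rather than fitting a parametric HRF, can be sketched on noise-free toy data (the signal lengths and kernel values below are invented for illustration):

```python
# Sketch of the Fourier-domain idea: for an LTI system y = h (*) x (circular
# convolution), the transfer function is recovered nonparametrically as
# H(f) = Y(f) / X(f), with no parametric model of the response function.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

n = 16
h = [0.0] * n
h[0], h[1], h[2], h[3] = 0.1, 0.5, 0.3, 0.1   # HRF-like toy kernel
x = [0.0] * n
x[0], x[3] = 2.0, 1.0                          # |X(f)| >= 1 at every bin
y = [sum(h[k] * x[(t - k) % n] for k in range(n)) for t in range(n)]

H = [yf / xf for yf, xf in zip(dft(y), dft(x))]  # nonparametric transfer fn
h_est = [c.real for c in idft(H)]                # back to the time domain
err = max(abs(a - b) for a, b in zip(h_est, h))
```

With real noisy data the ratio would be replaced by cross-spectral estimates and hypothesis tests on H(f), but the division step above is the nonparametric core of the Fourier-domain approach.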

  8. FIDDLE: A Computer Code for Finite Difference Development of Linear Elasticity in Generalized Curvilinear Coordinates

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2005-01-01

A three-dimensional numerical solver based on finite-difference solution of the three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data, such as radial and tangential stresses, over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve a lumped-parameter or distributed-parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems in geophysics, where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations, along with the appropriate boundary conditions, have been cast in generalized curvilinear coordinates in the present study.
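A hedged 1-D Cartesian sketch of the velocity-stress formulation (FIDDLE itself is three-dimensional and curvilinear): velocity and stress live on a staggered grid and are updated in leapfrog fashion.

```python
# Minimal 1-D velocity-stress finite-difference sketch (illustrative only):
# dv/dt = (1/rho) dsigma/dx,  dsigma/dt = E dv/dx, staggered grid, fixed ends.
import math

nx, dx, dt, rho, E = 200, 1.0, 0.5, 1.0, 1.0   # wave speed c = sqrt(E/rho) = 1
# CFL number c*dt/dx = 0.5 <= 1, so the explicit scheme is stable.
v = [math.exp(-((i - nx / 2) ** 2) / 25.0) for i in range(nx)]  # velocity pulse
sigma = [0.0] * (nx - 1)                        # stress lives between nodes

for _ in range(120):
    for i in range(nx - 1):                     # stress update from dv/dx
        sigma[i] += dt * E * (v[i + 1] - v[i]) / dx
    for i in range(1, nx - 1):                  # interior velocity update
        v[i] += dt / rho * (sigma[i] - sigma[i - 1]) / dx

# The initial pulse splits into two half-amplitude waves leaving the center.
peak_center = abs(v[nx // 2])
peak_max = max(abs(u) for u in v)
```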

  9. A generalized Lyapunov theory for robust root clustering of linear state space models with real parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1992-01-01

    The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.

  10. A novel synchronization scheme with a simple linear control and guaranteed convergence time for generalized Lorenz chaotic systems.

    PubMed

    Chuang, Chun-Fu; Sun, Yeong-Jeu; Wang, Wen-June

    2012-12-01

    In this study, exponential finite-time synchronization for generalized Lorenz chaotic systems is investigated. The significant contribution of this paper is that master-slave synchronization is achieved within a pre-specified convergence time and with a simple linear control. The designed linear control consists of two parts: one achieves exponential synchronization, and the other realizes finite-time synchronization within a guaranteed convergence time. Furthermore, the control gain depends on the parameters of the exponential convergence rate, the finite-time convergence rate, the bound of the initial states of the master system, and the system parameter. In addition, the proposed approach can be directly and efficiently applied to secure communication. Finally, four numerical examples are provided to demonstrate the feasibility and correctness of the obtained results.
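A rough numerical sketch of master-slave synchronization with a simple linear control; the gain, step size, and initial states below are illustrative choices, not the paper's guaranteed-convergence-time design:

```python
# Hedged sketch: master-slave Lorenz synchronization with the linear
# feedback u = -k (slave - master) on every state, forward-Euler integration.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

k, dt = 80.0, 0.001                 # illustrative gain and step size
master = (1.0, 1.0, 1.0)
slave = (8.0, -5.0, 12.0)
err0 = max(abs(s - m) for s, m in zip(slave, master))

for _ in range(10000):              # integrate to t = 10
    fm, fs = lorenz(master), lorenz(slave)
    new_master = tuple(m + dt * d for m, d in zip(master, fm))
    new_slave = tuple(s + dt * (d - k * (s - m))
                      for s, m, d in zip(slave, master, fs))
    master, slave = new_master, new_slave

err = max(abs(s - m) for s, m in zip(slave, master))
```

A sufficiently large gain dominates the local expansion of the Lorenz flow, so the synchronization error contracts; the paper's contribution is choosing the gain analytically so the convergence time is guaranteed in advance rather than observed numerically.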

  11. Classical and Generalized Solutions of Time-Dependent Linear Differential Algebraic Equations

    DTIC Science & Technology

    1993-10-15

matrix pencils [G59]. The book [GrM86] also contains a treatment of the general system (1.1) utilizing a condition of "transferability" which ... C(t) and N(t) are analytic functions of t and N(t) is nilpotent upper (or lower) triangular for all t ∈ J. From the structure of N(t), it follows that ... the operator N(t) d/dt is nilpotent, so that (1.2b) has the unique solution z = Σ_{k=1} (−1)^k (N(t) d/dt)^k g, and (1.2a) is then an explicit ODE. But no

  12. Wavelet-generalized least squares: a new BLU estimator of linear regression models with 1/f errors.

    PubMed

    Fadili, M J; Bullmore, E T

    2002-01-01

Long-memory noise is common to many areas of signal processing and can seriously confound estimation of linear regression model parameters and their standard errors. Classical autoregressive moving average (ARMA) methods can adequately address the problem of linear time invariant, short-memory errors but may be inefficient and/or insufficient to secure type 1 error control in the context of fractal or scale invariant noise with a more slowly decaying autocorrelation function. Here we introduce a novel method, called wavelet-generalized least squares (WLS), which is (to a good approximation) the best linear unbiased (BLU) estimator of regression model parameters in the context of long-memory errors. The method also provides maximum likelihood (ML) estimates of the Hurst exponent (which can be readily translated to the fractal dimension or spectral exponent) characterizing the correlational structure of the errors, and the error variance. The algorithm exploits the whitening or Karhunen-Loève-type property of the discrete wavelet transform to diagonalize the covariance matrix of the errors generated by an iterative fitting procedure after both data and design matrix have been transformed to the wavelet domain. Properties of this estimator, including its Cramér-Rao bounds, are derived theoretically and compared to its empirical performance on a range of simulated data. Compared to ordinary least squares and ARMA-based estimators, WLS is shown to be more efficient and to give excellent type 1 error control. The method is also applied to some real (neurophysiological) data acquired by functional magnetic resonance imaging (fMRI) of the human brain. We conclude that wavelet-generalized least squares may be a generally useful estimator of regression models in data complicated by long-memory or fractal noise.

  13. Methodological Quality and Reporting of Generalized Linear Mixed Models in Clinical Medicine (2000–2012): A Systematic Review

    PubMed Central

    Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L.

    2014-01-01

Background Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. Methods A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics “generalized linear mixed models”, “hierarchical generalized linear models”, and “multilevel generalized linear model”, and as a research domain we refined by science technology. Papers reporting methodological considerations without application, and those not involved in clinical medicine or not written in English, were excluded. Results A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel design, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Much of the useful information about GLMMs was not reported. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. Conclusions During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of

  14. A regularized point process generalized linear model for assessing the functional connectivity in the cat motor cortex.

    PubMed

    Chen, Zhe; Putrino, David F; Ba, Demba E; Ghosh, Soumya; Barbieri, Riccardo; Brown, Emery N

    2009-01-01

Identifying dependencies among multiple simultaneously recorded neural spike trains is an important task in understanding functional connectivity and temporal causality in neural systems. An assessment of the functional connectivity in a group of ensemble cells was performed using a regularized point process generalized linear model (GLM) that incorporates temporal smoothness or contiguity of the solution. An efficient convex optimization algorithm was then developed for the regularized solution. The point process model was applied to an ensemble of neurons recorded from the cat motor cortex during a skilled reaching task. The implications of this analysis for the coding of skilled movement in primary motor cortex are discussed.
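The following sketch fits a penalized Poisson GLM by gradient ascent on synthetic data. It substitutes a plain ridge penalty and a naive optimizer for the paper's temporal-smoothness regularizer and convex solver, so it only illustrates the general shape of the approach:

```python
# Illustrative only: ridge-penalized Poisson GLM fit by gradient ascent on
# the penalized log-likelihood. Covariates, true coefficients, and the
# penalty are invented; the paper's model and solver are richer.
import math, random

random.seed(0)

def rpois(lam):
    # Knuth's Poisson sampler, adequate for small rates.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

beta_true = [0.5, -0.3, 0.2]
n = 400
X = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(n)]
y = [rpois(math.exp(sum(b * v for b, v in zip(beta_true, row)))) for row in X]

lam_pen = 1.0   # ridge weight (stand-in for a temporal-smoothness penalty)

def objective(beta):
    ll = 0.0
    for row, yi in zip(X, y):
        eta = sum(b * v for b, v in zip(beta, row))
        ll += yi * eta - math.exp(eta)        # Poisson log-likelihood terms
    return ll - lam_pen * sum(b * b for b in beta)

beta = [0.0, 0.0, 0.0]
obj0 = objective(beta)
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for row, yi in zip(X, y):
        mu = math.exp(sum(b * v for b, v in zip(beta, row)))
        for j in range(3):
            grad[j] += (yi - mu) * row[j]     # score of the log-likelihood
    beta = [b + 0.002 * (g - 2.0 * lam_pen * b) for b, g in zip(beta, grad)]
obj1 = objective(beta)
```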

  15. Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum

    PubMed Central

    Wilson, Emma D.; Assaf, Tareq; Pearson, Martin J.; Rossiter, Jonathan M.; Dean, Paul; Anderson, Sean R.; Porrill, John

    2015-01-01

    The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks. PMID:26257638

  16. Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum.

    PubMed

    Wilson, Emma D; Assaf, Tareq; Pearson, Martin J; Rossiter, Jonathan M; Dean, Paul; Anderson, Sean R; Porrill, John

    2015-01-01

    The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks.

  17. A generalized linear mixed model for longitudinal binary data with a marginal logit link function

    PubMed Central

    Parzen, Michael; Ghosh, Souparno; Lipsitz, Stuart; Sinha, Debajyoti; Fitzmaurice, Garrett M.; Mallick, Bani K.; Ibrahim, Joseph G.

    2010-01-01

    Summary Longitudinal studies of a binary outcome are common in the health, social, and behavioral sciences. In general, a feature of random effects logistic regression models for longitudinal binary data is that the marginal functional form, when integrated over the distribution of the random effects, is no longer of logistic form. Recently, Wang and Louis (2003) proposed a random intercept model in the clustered binary data setting where the marginal model has a logistic form. An acknowledged limitation of their model is that it allows only a single random effect that varies from cluster to cluster. In this paper, we propose a modification of their model to handle longitudinal data, allowing separate, but correlated, random intercepts at each measurement occasion. The proposed model allows for a flexible correlation structure among the random intercepts, where the correlations can be interpreted in terms of Kendall’s τ. For example, the marginal correlations among the repeated binary outcomes can decline with increasing time separation, while the model retains the property of having matching conditional and marginal logit link functions. Finally, the proposed method is used to analyze data from a longitudinal study designed to monitor cardiac abnormalities in children born to HIV-infected women. PMID:21532998

  18. Point particle binary system with components of different masses in the linear regime of the characteristic formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Cedeño M, C. E.; de Araujo, J. C. N.

    2016-05-01

A study of binary systems composed of two point particles with different masses in the linear regime of the characteristic formulation of general relativity with a Minkowski background is provided. The present paper generalizes a previous study by Bishop et al. The boundary conditions at the world tubes generated by the particles' orbits are explored, where the metric variables are decomposed in spin-weighted spherical harmonics. The power lost by the emission of gravitational waves is computed using the Bondi News function. The power found is the well-known result obtained by Peters and Mathews using a different approach. This agreement validates the approach considered here. Several multipole term contributions to the gravitational radiation field are also shown.

  19. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    PubMed

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show a different convergence rate for each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work even in the independent data case.

  20. Airfoil profiles for minimum pressure drag at supersonic velocities -- general analysis with application to linearized supersonic flow

    NASA Technical Reports Server (NTRS)

    Chapman, Dean R

    1952-01-01

    A theoretical investigation is made of the airfoil profile for minimum pressure drag at zero lift in supersonic flow. In the first part of the report a general method is developed for calculating the profile having the least pressure drag for a given auxiliary condition, such as a given structural requirement or a given thickness ratio. The various structural requirements considered include bending strength, bending stiffness, torsional strength, and torsional stiffness. No assumption is made regarding the trailing-edge thickness; the optimum value is determined in the calculations as a function of the base pressure. To illustrate the general method, the optimum airfoil, defined as the airfoil having minimum pressure drag for a given auxiliary condition, is calculated in a second part of the report using the equations of linearized supersonic flow.

  1. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX*

    PubMed Central

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    2015-01-01

We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show a different convergence rate for each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work even in the independent data case. PMID:26283801

  2. An algorithm for the construction of substitution box for block ciphers based on projective general linear group

    NASA Astrophysics Data System (ADS)

    Altaleb, Anas; Saeed, Muhammad Sarwar; Hussain, Iqtadar; Aslam, Muhammad

    2017-03-01

The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we have applied the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we have used the permutations of the symmetric group S256 to construct a new kind of S-box. To explain the proposed extension scheme, we have given an example and constructed one new S-box. The strength of the extended S-box is computed, and insight is given into calculating its confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by the bit independence criterion, linear approximation probability test, nonlinearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
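A minimal sketch of the underlying algebra, with map coefficients of my own choosing rather than the paper's: a Möbius (fractional linear) map with nonzero determinant acts bijectively on the projective line over GF(2^8), and folding the point at infinity back into the field yields a bijective 8×8 S-box.

```python
# Illustrative sketch (coefficients are assumptions, not the paper's): the
# map f(x) = (a x + b) / (c x + d) over GF(2^8), ad - bc != 0, is a bijection
# of the projective line; reassigning the pole to the image of infinity gives
# a bijection of GF(2^8) itself. AES reduction polynomial 0x11B is used.

def gf_mul(a, b, poly=0x11B):
    # Carry-less "Russian peasant" multiplication with modular reduction.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    # a^254 = a^{-1} in GF(2^8)* via square-and-multiply (never called on 0).
    r, base, e = 1, a, 254
    while e:
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r

a, b, c, d = 0x02, 0x05, 0x03, 0x01
assert gf_mul(a, d) ^ gf_mul(b, c) != 0      # determinant (char 2: "-" is XOR)
pole = gf_mul(d, gf_inv(c))                  # the x with c*x + d = 0
sbox = []
for x in range(256):
    if x == pole:
        sbox.append(gf_mul(a, gf_inv(c)))    # image of the point at infinity
    else:
        num = gf_mul(a, x) ^ b
        den = gf_mul(c, x) ^ d
        sbox.append(gf_mul(num, gf_inv(den)))
```

The paper composes such group actions with S256 permutations to expand the design space; the bijectivity argument above is what makes each building block a valid S-box.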

  3. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    SciTech Connect

    Fowler, Michael James

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy

  4. The generalized cross-validation method applied to geophysical linear traveltime tomography

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Oliveira, N. P.

    2009-12-01

The oil industry is the major user of Applied Geophysics methods for subsurface imaging. Among the different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced in exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, a kinematic approach, and those that use the wave amplitude itself, a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the involved matrix is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. There is a crucial problem in regularization, which is the selection of the regularization parameter lambda. We use generalized cross-validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is used for an application in traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured values of the traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, like a fault and a reservoir. The results using GCV are very good, including those contaminated with noise, and also using different regularization orders, attesting to the feasibility of this technique.
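The GCV criterion itself is easy to state: with influence matrix A(λ), GCV(λ) = n·‖y − A(λ)y‖² / tr(I − A(λ))², minimized over a grid of λ. The sketch below applies it to a tiny two-parameter ridge problem as a stand-in for the paper's derivative-matrix tomography setting:

```python
# Hedged sketch: GCV selection of a ridge parameter on a toy 2-parameter
# regression (the tomography problem above uses derivative-matrix smoothing,
# but the GCV function has the same form).
import random

def ridge_stats(X, y, lam):
    """Return (RSS, trace of A(lam)) for ridge with p = 2 (2x2 inverse)."""
    s = [[sum(r[a] * r[b] for r in X) for b in range(2)] for a in range(2)]
    s[0][0] += lam
    s[1][1] += lam
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    xty = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(2)]
    beta = [sum(inv[a][b] * xty[b] for b in range(2)) for a in range(2)]
    rss = sum((yi - r[0] * beta[0] - r[1] * beta[1]) ** 2 for r, yi in zip(X, y))
    # tr A(lam) = sum_i x_i^T (X^T X + lam I)^{-1} x_i  (effective dof)
    tr = sum(r[a] * inv[a][b] * r[b] for r in X for a in range(2) for b in range(2))
    return rss, tr

random.seed(1)
n = 60
X = [[1.0, random.uniform(-1.0, 1.0)] for _ in range(n)]
y = [2.0 + 1.5 * r[1] + random.gauss(0.0, 0.5) for r in X]

grid = [10.0 ** k for k in range(-4, 4)]
gcv = {}
for lam in grid:
    rss, tr = ridge_stats(X, y, lam)
    gcv[lam] = n * rss / (n - tr) ** 2
best = min(gcv, key=gcv.get)                 # GCV-selected lambda
tr_small = ridge_stats(X, y, 1e-4)[1]        # effective dof, light smoothing
tr_big = ridge_stats(X, y, 1e3)[1]           # effective dof, heavy smoothing
```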

  5. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

The vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China. Optimal irrigation water allocation solutions from the GFCCFP model can thus be obtained. Moreover, factorial analysis on the two parameters (i.e. λ and γ) indicates that the weight coefficient is a main factor compared with credibility level for system efficiency. These results can effectively support reasonable irrigation water resources management and agricultural production.

  6. Generalized Confidence Intervals for Intra- and Inter-subject Coefficients of Variation in Linear Mixed-effects Models.

    PubMed

    Forkman, Johannes

    2017-06-15

    Linear mixed-effects models are linear models with several variance components. Models with a single random-effects factor have two variance components: the random-effects variance, i.e., the inter-subject variance, and the residual error variance, i.e., the intra-subject variance. In many applications, it is common practice to report variance components as coefficients of variation. The intra- and inter-subject coefficients of variation are the square roots of the corresponding variances divided by the mean. This article proposes methods for computing confidence intervals for intra- and inter-subject coefficients of variation using generalized pivotal quantities. The methods are illustrated through two examples. In the first example, precision is assessed within and between runs in a bioanalytical method validation. In the second example, variation is estimated within and between main plots in an agricultural split-plot experiment. Coverage of the generalized confidence intervals is investigated through simulation and shown to be close to the nominal value.
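The generalized-pivotal-quantity idea can be illustrated in a simplified setting: a single normal sample rather than the full mixed model of the article. Pivots for sigma^2 and mu are combined into a pivot for the coefficient of variation, whose quantiles give the generalized confidence interval (sample size, seed, and parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

y = rng.normal(loc=100.0, scale=10.0, size=30)     # true CV = 0.10
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

B = 20000
chi2 = rng.chisquare(n - 1, size=B)
z = rng.standard_normal(B)

sigma2_piv = (n - 1) * s2 / chi2                   # pivot for sigma^2
mu_piv = ybar - z * np.sqrt(sigma2_piv / n)        # pivot for mu
cv_piv = np.sqrt(sigma2_piv) / mu_piv              # pivot for CV = sigma/mu

lo, hi = np.quantile(cv_piv, [0.025, 0.975])       # 95% generalized CI
```

In the mixed-model setting of the article, the sigma^2 pivot is replaced by pivots built from the between- and within-subject sums of squares, but the recipe — substitute pivots into the parameter of interest, then take quantiles — is the same.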

  7. Application of a generalized linear mixed model to analyze mixture toxicity: survival of brown trout affected by copper and zinc.

    PubMed

    Iwasaki, Yuichi; Brinkman, Stephen F

    2015-04-01

    Increased concerns about the toxicity of chemical mixtures have led to greater emphasis on analyzing the interactions among the mixture components based on observed effects. The authors applied a generalized linear mixed model (GLMM) to analyze survival of brown trout (Salmo trutta) acutely exposed to metal mixtures that contained copper and zinc. Compared with dominant conventional approaches based on an assumption of concentration addition and the concentration of a chemical that causes x% effect (ECx), the GLMM approach has 2 major advantages. First, binary response variables such as survival can be modeled without any transformations, and thus sample size can be taken into consideration. Second, the importance of the chemical interaction can be tested in a simple statistical manner. Through this application, the authors investigated whether the estimated concentration of the 2 metals binding to humic acid, which is assumed to be a proxy of nonspecific biotic ligand sites, provided a better prediction of survival effects than dissolved and free-ion concentrations of metals. The results suggest that the estimated concentration of metals binding to humic acid is a better predictor of survival effects, and thus the metal competition at the ligands could be an important mechanism responsible for effects of metal mixtures. Application of the GLMM (and the generalized linear model) presents an alternative or complementary approach to analyzing mixture toxicity. © 2015 SETAC.
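A GLMM with a random cluster effect needs specialized software, but the fixed-effects core of the approach — modeling binary survival directly with a logit link and testing the mixture interaction term — can be sketched with a plain logistic GLM fit by IRLS (simulated covariates standing in for copper and zinc, not the brown-trout data):

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_logistic(X, y, iters=25):
    """Logistic regression via iteratively reweighted least squares (Newton)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

# Simulated binary survival with two exposures and their interaction.
n = 2000
cu, zn = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([np.ones(n), cu, zn, cu * zn])
beta_true = np.array([0.5, -1.0, -0.8, 0.0])       # no true interaction
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

beta_hat = fit_logistic(X, y)

# Wald test of the interaction term from the inverse Fisher information.
p_hat = 1.0 / (1.0 + np.exp(-X @ beta_hat))
cov = np.linalg.inv(X.T @ ((p_hat * (1.0 - p_hat))[:, None] * X))
z_int = beta_hat[3] / np.sqrt(cov[3, 3])
```

In a full GLMM a random intercept per tank or trial would be added; testing the interaction coefficient (Wald or likelihood-ratio) is the "simple statistical manner" the abstract refers to.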

  8. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model.

    PubMed

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J

    2014-12-10

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk.
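Of the three correction approaches compared, SIMEX is the easiest to sketch. The toy below uses a continuous outcome and a single mismeasured covariate (the paper's setting is more general): noise of known variance is re-added at increasing multiples lambda, the attenuated slope is recorded, and a quadratic fit is extrapolated back to lambda = -1:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: w is an error-prone measurement of x (all values illustrative).
n, beta, sig_u = 5000, 1.0, 0.8
x = rng.standard_normal(n)
y = beta * x + 0.5 * rng.standard_normal(n)
w = x + sig_u * rng.standard_normal(n)

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

naive = slope(w, y)                                # attenuated toward zero

# Re-add measurement noise at levels lam and record the average slope.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([slope(w + np.sqrt(lam) * sig_u * rng.standard_normal(n), y)
                   for _ in range(50)])
          for lam in lams]

# Quadratic extrapolation back to lam = -1 (the error-free case).
simex = np.polyval(np.polyfit(lams, slopes, 2), -1.0)
```

The quadratic extrapolant does not remove all the attenuation bias, but it moves the estimate substantially back toward the true slope, which is the trade-off SIMEX accepts for its simplicity.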

  9. Instability and change detection in exponential families and generalized linear models, with a study of Atlantic tropical storms

    NASA Astrophysics Data System (ADS)

    Lu, Y.; Chatterjee, S.

    2014-11-01

    Exponential family statistical distributions, including the well-known normal, binomial, Poisson, and exponential distributions, are overwhelmingly used in data analysis. In the presence of covariates, an exponential family distributional assumption for the response random variables results in a generalized linear model. However, it is rarely ensured that the parameters of the assumed distributions are stable through the entire duration of the data collection process. A failure of stability leads to nonsmoothness and nonlinearity in the physical processes that result in the data. In this paper, we propose testing for stability of parameters of exponential family distributions and generalized linear models. A rejection of the hypothesis of stable parameters leads to change detection. We derive the related likelihood ratio test statistic. We compare the performance of this test statistic to the popular cumulative sum (Gaussian CUSUM) statistic, which depends on a normal distributional assumption, in change detection problems. We study Atlantic tropical storms using the techniques developed here, so as to understand whether the nature of these tropical storms has remained stable over the last few decades.
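A minimal version of the likelihood-ratio construction, for a single exponential-family member (a Poisson mean shift on simulated data, not the storm series), scans candidate change points and keeps the one maximizing the split-sample log-likelihood gain:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated counts with a mean shift from 3 to 5 at tau_true (illustrative).
n, tau_true = 200, 120
x = np.concatenate([rng.poisson(3.0, tau_true), rng.poisson(5.0, n - tau_true)])

def pois_loglik(seg):
    """Poisson log-likelihood at the MLE (constant x! terms dropped)."""
    lam = seg.mean()
    return float(np.sum(seg * np.log(lam) - lam)) if lam > 0 else 0.0

full = pois_loglik(x)
# Split-sample log-likelihood gain for each candidate change point.
stats = np.array([pois_loglik(x[:t]) + pois_loglik(x[t:]) - full
                  for t in range(10, n - 10)])
tau_hat = 10 + int(np.argmax(stats))               # estimated change point
lrt = 2.0 * stats.max()                            # likelihood-ratio statistic
```

Comparing lrt against a threshold gives the detection rule; a Gaussian CUSUM would instead accumulate standardized deviations from the pre-change mean, which is only well calibrated under normality.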

  10. Generalized Linear Mixed Models for Binary Data: Are Matching Results from Penalized Quasi-Likelihood and Numerical Integration Less Biased?

    PubMed Central

    Benedetti, Andrea; Platt, Robert; Atherton, Juli

    2014-01-01

    Background Over time, adaptive Gaussian Hermite quadrature (QUAD) has become the preferred method for estimating generalized linear mixed models with binary outcomes. However, penalized quasi-likelihood (PQL) is still used frequently. In this work, we systematically evaluated whether matching results from PQL and QUAD indicate less bias in estimated regression coefficients and variance parameters via simulation. Methods We performed a simulation study in which we varied the size of the data set, probability of the outcome, variance of the random effect, number of clusters and number of subjects per cluster, etc. We estimated bias in the regression coefficients, odds ratios and variance parameters as estimated via PQL and QUAD. We ascertained if similarity of estimated regression coefficients, odds ratios and variance parameters predicted less bias. Results Overall, we found that the absolute percent bias of the odds ratio estimated via PQL or QUAD increased as the PQL- and QUAD-estimated odds ratios became more discrepant, though results varied markedly depending on the characteristics of the dataset. Conclusions Given how markedly results varied depending on data set characteristics, specifying a degree of discrepancy above which results should be considered biased proved impossible. This work suggests that comparing results from generalized linear mixed models estimated via PQL and QUAD is a worthwhile exercise for regression coefficients and variance components obtained via QUAD, in situations where PQL is known to give reasonable results. PMID:24416249

  11. General expressions for R1ρ relaxation for N-site chemical exchange and the special case of linear chains

    NASA Astrophysics Data System (ADS)

    Koss, Hans; Rance, Mark; Palmer, Arthur G.

    2017-01-01

    Exploration of dynamic processes in proteins and nucleic acids by spin-locking NMR experiments has been facilitated by the development of theoretical expressions for the R1ρ relaxation rate constant covering a variety of kinetic situations. Herein, we present a generalized approximation to the chemical exchange, Rex, component of R1ρ for arbitrary kinetic schemes, assuming the presence of a dominant major site population, derived from the negative reciprocal trace of the inverse Bloch-McConnell evolution matrix. This approximation is equivalent to first-order truncation of the characteristic polynomial derived from the Bloch-McConnell evolution matrix. For three- and four-site chemical exchange, the first-order approximations are sufficient to distinguish different kinetic schemes. We also introduce an approach to calculate R1ρ for linear N-site schemes, using the matrix determinant lemma to reduce the corresponding 3N × 3N Bloch-McConnell evolution matrix to a 3 × 3 matrix. The first- and second-order expansions of the determinant of this 3 × 3 matrix are closely related to previously derived equations for two-site exchange. The second-order approximations for linear N-site schemes can be used to obtain more accurate approximations for non-linear N-site schemes, such as triangular three-site or star four-site topologies. The expressions presented herein provide powerful means for the estimation of Rex contributions for both low (CEST-limit) and high (R1ρ-limit) radiofrequency field strengths, provided that the population of one state is dominant. The general nature of the new expressions allows for consideration of complex kinetic situations in the analysis of NMR spin relaxation data.

  12. A semiparametric negative binomial generalized linear model for modeling over-dispersed count data with a heavy tail: Characteristics and applications to crash data.

    PubMed

    Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy

    2016-06-01

    Crash data can often be characterized by over-dispersion, heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers a better performance than the NB model when data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail, but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large proportion of zeros. In addition to a greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion.

  13. Solutions for Determining the Significance Region Using the Johnson-Neyman Type Procedure in Generalized Linear (Mixed) Models.

    PubMed

    Lazar, Ann A; Zerbe, Gary O

    2011-12-01

    Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA), the Johnson-Neyman procedure can be used to determine the significance region; for the hierarchical linear model (HLM), the Miyazaki and Maier (M-M) procedure has been suggested. However, neither procedure can accommodate nonnormally distributed data. Furthermore, the M-M procedure produces biased (downward) results because it uses the Wald test, does not control the inflated Type I error rate due to multiple testing, and requires implementing multiple software packages to determine the significance region. In this article, we address these limitations by proposing solutions for determining the significance region suitable for generalized linear (mixed) models (GLMs or GLMMs). These proposed solutions incorporate test statistics that resolve the biased results, control the Type I error rate using Scheffé's method, and use a single statistical software package to determine the significance region.

  14. AN ALMOST LINEAR TIME ALGORITHM FOR A GENERAL HAPLOTYPE SOLUTION ON TREE PEDIGREES WITH NO RECOMBINATION AND ITS EXTENSIONS

    PubMed Central

    Li, Xin

    2010-01-01

    We study the haplotype inference problem from pedigree data under the zero recombination assumption, which is well supported by real data for tightly linked markers (i.e. single nucleotide polymorphisms (SNPs)) over a relatively large chromosome segment. We solve the problem in a rigorous mathematical manner by formulating genotype constraints as a linear system of inheritance variables. We then utilize disjoint-set structures to encode connectivity information among individuals, to detect constraints from genotypes, and to check consistency of constraints. On a tree pedigree without missing data, our algorithm can output a general solution as well as the number of total specific solutions in a nearly linear time O(mn · α(n)), where m is the number of loci, n is the number of individuals and α is the inverse Ackermann function, which is a further improvement over existing ones. We also extend the idea to looped pedigrees and pedigrees with missing data by considering existing (partial) constraints on inheritance variables. The algorithm has been implemented in C++ and will be incorporated into our PedPhase package. Experimental results show that it can correctly identify all 0-recombinant solutions with great efficiency. Comparisons with two other popular algorithms show that the proposed algorithm achieves 10- to 10^5-fold improvements over a variety of parameter settings. The experimental study also provides empirical evidence for the complexity bounds suggested by theoretical analysis. PMID:19507288
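The disjoint-set structure at the heart of the algorithm is standard union-find with path compression and union by rank, which is what yields the inverse-Ackermann factor in the O(mn · α(n)) bound. A generic sketch (the constraint pairs below are illustrative, not from a real pedigree):

```python
class DisjointSet:
    """Union-find with path compression (halving) and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                                   # already linked
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

# Merge individuals linked by zero-recombination constraints (toy pairs).
ds = DisjointSet(6)
for a, b in [(0, 1), (1, 2), (4, 5)]:
    ds.union(a, b)
```

In the paper's setting the sets carry an extra parity bit per element so that each merge also checks consistency of the linear inheritance constraints; the plain structure above shows only the connectivity bookkeeping.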

  15. A generalized electrostatic micro-mirror (GEM) model for a two-axis convex piecewise linear shaped MEMS mirror

    NASA Astrophysics Data System (ADS)

    Edwards, C. L.; Edwards, M. L.

    2009-05-01

    MEMS micro-mirror technology offers the opportunity to replace larger optical actuators with smaller, faster ones for lidar, network switching, and other beam steering applications. Recent developments in modeling and simulation of MEMS two-axis (tip-tilt) mirrors have resulted in closed-form solutions that are expressed in terms of physical, electrical and environmental parameters related to the MEMS device. The closed-form analytical expressions enable dynamic time-domain simulations without excessive computational overhead and are referred to as the Micro-mirror Pointing Model (MPM). Additionally, these first-principle models have been experimentally validated with in-situ static, dynamic, and stochastic measurements illustrating their reliability. These models have assumed that the mirror has a rectangular shape. Because the corners can limit the dynamic operation of a rectangular mirror, it is desirable to shape the mirror, e.g., mitering the corners. Presented in this paper is the formulation of a generalized electrostatic micromirror (GEM) model with an arbitrary convex piecewise linear shape that is readily implemented in MATLAB and SIMULINK for steady-state and dynamic simulations. Additionally, such a model permits an arbitrary shaped mirror to be approximated as a series of linearly tapered segments. Previously, "effective area" arguments were used to model a non-rectangular shaped mirror with an equivalent rectangular one. The GEM model shows the limitations of this approach and provides a pre-fabrication tool for designing mirror shapes.

  16. Correlated-imaging-based chosen plaintext attack on general cryptosystems composed of linear canonical transforms and phase encodings

    NASA Astrophysics Data System (ADS)

    Wu, Jingjing; Liu, Wei; Liu, Zhengjun; Liu, Shutian

    2015-03-01

    We introduce a chosen-plaintext attack scheme on general optical cryptosystems that use linear canonical transform and phase encoding, based on correlated imaging. The plaintexts are chosen as Gaussian random real number matrixes, and the corresponding ciphertexts are regarded as prior knowledge of the proposed attack method. To reconstruct the secret plaintext, correlated imaging is employed using these known resources. Differing from the reported attack methods, there is no need to decipher the distribution of the decryption key. The original secret image can be directly recovered by the attack in the absence of the decryption key. In addition, the improved cryptosystems combined with pixel scrambling operations are also vulnerable to the proposed attack method. Necessary mathematical derivations and numerical simulations are carried out to demonstrate the validity of the proposed attack scheme.

  17. Parametric Variable Selection in Generalized Partially Linear Models with an Application to Assess Condom Use by HIV-infected Patients

    PubMed Central

    Leng, Chenlei; Liang, Hua; Martinson, Neil

    2011-01-01

    To study significant predictors of condom use in HIV-infected adults, we propose the use of generalized partially linear models and develop a variable selection procedure incorporating a least squares approximation. Local polynomial regression and spline smoothing techniques are used to estimate the baseline nonparametric function. The asymptotic normality of the resulting estimate is established. We further demonstrate that, with the proper choice of the penalty functions and the regularization parameter, the resulting estimate performs as well as an oracle procedure. Finite sample performance of the proposed inference procedure is assessed by Monte Carlo simulation studies. An application to assess condom use by HIV-infected patients yields some interesting results that cannot be obtained when an ordinary logistic model is used. PMID:21465515

  18. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research.

    PubMed

    Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2012-08-01

    An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above.

  19. Variable selection in Bayesian generalized linear-mixed models: an illustration using candidate gene case-control association studies.

    PubMed

    Tsai, Miao-Yu

    2015-03-01

    The problem of variable selection in the generalized linear-mixed models (GLMMs) is pervasive in statistical practice. For the purpose of variable selection, many methodologies for determining the best subset of explanatory variables currently exist according to the model complexity and differences between applications. In this paper, we develop a "higher posterior probability model with bootstrap" (HPMB) approach to select explanatory variables without fitting all possible GLMMs involving a small or moderate number of explanatory variables. Furthermore, to save computational load, we propose an efficient approximation approach with Laplace's method and Taylor's expansion to approximate intractable integrals in GLMMs. Simulation studies and an application of HapMap data provide evidence that this selection approach is computationally feasible and reliable for exploring true candidate genes and gene-gene associations, after adjusting for complex structures among clusters. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Event-Triggered Schemes on Leader-Following Consensus of General Linear Multiagent Systems Under Different Topologies.

    PubMed

    Xu, Wenying; Ho, Daniel W C; Li, Lulu; Cao, Jinde

    2017-01-01

    This paper investigates the leader-following consensus for multiagent systems with general linear dynamics by means of event-triggered schemes (ETSs). We propose three types of schemes, namely, distributed ETS (distributed-ETS), centralized ETS (centralized-ETS), and clustered ETS (clustered-ETS), for different network topologies. All these schemes guarantee that all followers can track the leader eventually. It should be emphasized that all event-triggered protocols in this paper depend on local information and their executions are distributed. Moreover, it is shown that such an event-triggered mechanism can significantly reduce the frequency of control updates. Further, positive inter-event time intervals are assured for the cases of distributed-ETS, centralized-ETS, and clustered-ETS. In addition, two methods are proposed to avoid continuous communication between agents for event detection. Finally, numerical examples are provided to illustrate the effectiveness of the ETSs.

  1. Metrics of separation performance in chromatography: Part 3: General separation performance of linear solvent strength gradient liquid chromatography.

    PubMed

    Blumberg, Leonid M; Desmet, Gert

    2015-09-25

    The separation performance metrics defined in Part 1 of this series are applied to the evaluation of general separation performance of linear solvent strength (LSS) gradient LC. Among the evaluated metrics was the peak capacity of an arbitrary segment of a chromatogram. Also evaluated were the peak width, the separability of two solutes, the utilization of separability, and the speed of analysis-all at an arbitrary point of a chromatogram. The means are provided to express all these metrics as functions of an arbitrary time during LC analysis, as functions of an arbitrary outlet solvent strength changing during the analysis, as functions of parameters of the solutes eluting during the analysis, and as functions of several other factors. The separation performance of gradient LC is compared with the separation performance of temperature-programmed GC evaluated in Part 2.

  2. Development of generalized space time autoregressive integrated with ARCH error (GSTARI - ARCH) model based on consumer price index phenomenon at several cities in North Sumatera province

    NASA Astrophysics Data System (ADS)

    Bonar, Hot; Ruchjana, Budi Nurani; Darmawan, Gumgum

    2017-03-01

    Inflation is defined as a situation in which the general price of goods increases continuously. To measure inflation, Statistics Indonesia (BPS) uses the Consumer Price Index (CPI). Inflation in North Sumatera Province is monitored through CPI changes in several major cities: Medan, Pematang Siantar, Sibolga, and Padangsidimpuan. The CPI values in these cities are affected by their values at previous times and are correlated with one another. Data correlated in both time and space are called space-time data. One method that can be used to analyze space-time data is the Generalized Space Time Autoregressive (GSTAR) model, introduced by Ruchjana (2002) under the assumption of constant error variance. However, time series data such as inflation often exhibit high volatility, which implies a non-constant error variance. Nainggolan (2011) introduced the GSTAR model with an Autoregressive Conditional Heteroscedastic (ARCH) error, called the GSTAR-ARCH model, in which the mean equation is modeled by the GSTAR model and the variance equation by the ARCH model. For non-stationary data, we apply the GSTAR-Integrated with ARCH error (GSTARI-ARCH) model, and the parameters are estimated using the Generalized Least Squares (GLS) method as introduced by Nainggolan (2011).

  3. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis on the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
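The linear Patlak model underlying the reconstruction assumes that, for an irreversible tracer, the tissue curve obeys C_T(t) = Ki * (integral of C_p up to t) + V * C_p(t), so plotting C_T/C_p against (integral of C_p)/C_p is linear with slope Ki and intercept V. A noise-free sketch with an assumed mono-exponential plasma input (all constants illustrative):

```python
import numpy as np

# Time grid in minutes and a simple mono-exponential plasma input function.
t = np.linspace(0.1, 60.0, 300)
Cp = 10.0 * np.exp(-t / 20.0)

# Running integral of Cp by the trapezoidal rule.
int_Cp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))])

# Irreversible-uptake tissue curve: Ct = Ki * int(Cp) + V * Cp.
Ki_true, V = 0.05, 0.3
Ct = Ki_true * int_Cp + V * Cp

# Patlak plot: y = Ct/Cp vs x = int(Cp)/Cp is linear with slope Ki.
x = int_Cp / Cp
y = Ct / Cp
Ki_hat, V_hat = np.polyfit(x, y, 1)
```

The generalized (non-linear) Patlak model in the paper adds an efflux term so that the plot is no longer exactly linear for reversible uptake; this sketch shows only the standard linear case that the 4D system matrix embeds.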

  4. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  5. Mixture and non-mixture cure fraction models based on the generalized modified Weibull distribution with an application to gastric cancer data.

    PubMed

    Martinez, Edson Z; Achcar, Jorge A; Jácome, Alexandre A A; Santos, José S

    2013-12-01

    Cure fraction models are usually used to model lifetime data with long-term survivors. In the present article, we introduce a Bayesian analysis of the four-parameter generalized modified Weibull (GMW) distribution in the presence of cure fraction, censored data and covariates. In order to include the proportion of "cured" patients, mixture and non-mixture formulation models are considered. To demonstrate the ability of using this model in the analysis of real data, we consider an application to data from patients with gastric adenocarcinoma. Inferences are obtained by using MCMC (Markov Chain Monte Carlo) methods.

  6. Accounting for uncertainty in confounder and effect modifier selection when estimating average causal effects in generalized linear models.

    PubMed

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-09-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.
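The Bayesian bootstrap step can be sketched for a scalar statistic: draw Dirichlet(1,...,1) weights (via normalized exponentials) and recompute the weighted statistic under each draw. The toy below targets a simple mean on simulated data, not the causal-effect functional of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated outcomes (illustrative; the paper integrates this step over
# confounder-selection uncertainty as well).
y = rng.normal(2.0, 1.0, size=150)

B = 4000
g = rng.exponential(1.0, size=(B, len(y)))
w = g / g.sum(axis=1, keepdims=True)      # Dirichlet(1,...,1) weight draws
draws = w @ y                              # weighted mean under each draw

lo, hi = np.quantile(draws, [0.025, 0.975])   # 95% credible interval
```

Replacing the weighted mean with a weighted average of model-based counterfactual predictions gives the population causal-effect draws; subsetting the weights to a covariate profile gives the subpopulation effects mentioned in the abstract.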

  7. Effect of Smoothing in Generalized Linear Mixed Models on the Estimation of Covariance Parameters for Longitudinal Data.

    PubMed

    Mullah, Muhammad Abu Shadeque; Benedetti, Andrea

    2016-11-01

    Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing via restricting changes in the fit at the knots in regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data. Through simulations, we compare the performance of SPMMs to other simpler methods for estimating the nonlinear association such as fractional polynomials, and using a parametric nonlinear function. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for highly serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV infected men enrolled in the Multicenter AIDS Cohort Study.

  8. Quasideterminant solutions of the generalized Heisenberg magnet model

    NASA Astrophysics Data System (ADS)

    Saleem, U.; Hassan, M.

    2010-01-01

    In this paper we present the Darboux transformation for the generalized Heisenberg magnet (GHM) model based on the general linear Lie group GL(n) and construct multi-soliton solutions in terms of quasideterminants. Further we relate the quasideterminant multi-soliton solutions obtained by means of Darboux transformation with those obtained by the dressing method. We also discuss the model based on the Lie group SU(n) and obtain explicit soliton solutions of the model based on SU(2).

  9. The statistical performance of an MCF-7 cell culture assay evaluated using generalized linear mixed models and a score test.

    PubMed

    Rey deCastro, B; Neuberg, Donna

    2007-05-30

    Biological assays often utilize experimental designs where observations are replicated at multiple levels, and where each level represents a separate component of the assay's overall variance. Statistical analysis of such data usually ignores these design effects, whereas more sophisticated methods would improve the statistical power of assays. This report evaluates the statistical performance of an in vitro MCF-7 cell proliferation assay (E-SCREEN) by identifying the optimal generalized linear mixed model (GLMM) that accurately represents the assay's experimental design and variance components. Our statistical assessment found that 17beta-oestradiol cell culture assay data were best modelled with a GLMM configured with a reciprocal link function, a gamma error distribution, and three sources of design variation: plate-to-plate, well-to-well, and the interaction between plate-to-plate variation and dose. The gamma-distributed random error of the assay was estimated to have a coefficient of variation (COV) = 3.2 per cent, and a variance component score test described by X. Lin found that each of the three variance components was statistically significant. The optimal GLMM also confirmed the estrogenicity of five weakly oestrogenic polychlorinated biphenyls (PCBs 17, 49, 66, 74, and 128). Based on information criteria, the optimal gamma GLMM consistently outperformed equivalent naive normal and log-normal linear models, both with and without random effects terms. Because the gamma GLMM was by far the best model on conceptual and empirical grounds, and requires only trivially more effort to use, we encourage its use and suggest that naive models be avoided when possible. Copyright 2006 John Wiley & Sons, Ltd.
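    As a quick plausibility check on the reported error structure: for a gamma distribution the coefficient of variation is 1/sqrt(shape), so a COV of 3.2 per cent corresponds to a shape parameter near 977. A short simulation (illustrative values only, not the assay data) confirms the relation:

```python
import math
import random

# For a gamma distribution, COV = 1 / sqrt(shape); a 3.2% COV therefore
# implies a shape parameter of about 977 (illustrative check only).
target_cov = 0.032
shape = 1 / target_cov ** 2

rng = random.Random(0)
draws = [rng.gammavariate(shape, 1.0 / shape) for _ in range(20000)]  # mean 1

mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / (len(draws) - 1)
cov = math.sqrt(var) / mean   # close to the target 0.032
```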

  10. The fuzzy oil drop model, based on hydrophobicity density distribution, generalizes the influence of water environment on protein structure and function.

    PubMed

    Banach, Mateusz; Konieczny, Leszek; Roterman, Irena

    2014-10-21

    In this paper we show that the fuzzy oil drop model represents a general framework for describing the generation of hydrophobic cores in proteins and thus provides insight into the influence of the water environment upon protein structure and stability. The model has been successfully applied in the study of a wide range of proteins; however, this paper focuses specifically on domains representing immunoglobulin-like folds. Here we provide evidence that immunoglobulin-like domains, despite being structurally similar, differ with respect to their participation in the generation of the hydrophobic core. It is shown that β-structural fragments in β-barrels participate in hydrophobic core formation in a highly differentiated manner. Quantitatively measured participation in core formation helps explain the variable stability of proteins and is shown to be related to their biological properties. This also includes the known tendency of immunoglobulin domains to form amyloids, as shown using transthyretin to reveal the clear relation between amyloidogenic properties and structural characteristics based on the fuzzy oil drop model.

  11. Kitaev models based on unitary quantum groupoids

    SciTech Connect

    Chang, Liang

    2014-04-15

    We establish a generalization of Kitaev models based on unitary quantum groupoids. In particular, when inputting a Kitaev-Kong quantum groupoid H{sub C}, we show that the ground state manifold of the generalized model is canonically isomorphic to that of the Levin-Wen model based on a unitary fusion category C. Therefore, the generalized Kitaev models provide realizations of the target space of the Turaev-Viro topological quantum field theory based on C.

  13. Validation of components of the water cycle in the ECHAM4 general circulation model based on the Newtonian relaxation technique: a case study of an intense winter cyclone

    NASA Astrophysics Data System (ADS)

    Bauer, Hans-Stefan; Wulfmeyer, Volker

    2009-07-01

    The representation of a simulated synoptic-scale weather system is compared with observations. To force the model to the observed state, the so-called Newtonian relaxation technique (nudging) is applied to relax vorticity, divergence, temperature, and the logarithm of surface pressure to the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis fields. The development of an extraordinarily strong cyclone along the East Coast of the USA during 12-14 March 1993 was chosen as the case study. The synoptic-scale features were well represented in the model simulation. However, systematic differences from observations of the International Satellite Cloud Climatology Project (ISCCP) occurred. The model underestimated clouds in lower and middle levels of the troposphere. Low-level clouds were mainly underestimated behind the cold front of the developing cyclone, while the underestimation of mid-level clouds seems to be a more general feature. The reason for the latter is that the relative humidity has to exceed a critical threshold before clouds can develop. In contrast, thin upper-level cirrus clouds in pre-frontal regions were systematically overestimated by the model. Therefore, we investigated the effects of changed physical parameterizations with two sensitivity studies. In the PCI experiment, the standard cloud scheme used in ECHAM4 was replaced by a more sophisticated one which defines separate prognostic equations for cloud liquid water and cloud ice. The second experiment, RHCRIT, changed the profile of the critical relative humidity threshold for the development of clouds in the standard scheme. Both experiments showed positive changes in the representation of clouds during the development of the cyclone as compared to the ISCCP. PCI clearly reduced the upper-level cloud amounts by intensifying the precipitation flux in the middle troposphere. The changed condensation threshold in the RHCRIT experiment led to a sharper represented cold

  14. Use of reflectance spectrophotometry and colorimetry in a general linear model for the determination of the age of bruises.

    PubMed

    Hughes, Vanessa K; Langlois, Neil E I

    2010-12-01

    Bruises can have medicolegal significance such that the age of a bruise may be an important issue. This study sought to determine if colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isobestic point method and a software application converted the scan data into colorimetry data. In addition, data on factors that might be associated with the determination of the age of a bruise (subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise) were recorded. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The age of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis method was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the bruise age. By incorporation of the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation was dependent on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the bruise age estimate. When additional factors (subject sex, bruise depth and oxygenation of hemoglobin) were included in the General Linear Model, this increased to 31%, implying that 69% of the variation was dependent on other factors. This

  15. A Community Needs Index for Adolescent Pregnancy Prevention Program Planning: Application of Spatial Generalized Linear Mixed Models.

    PubMed

    Johnson, Glen D; Mesler, Kristine; Kacica, Marilyn A

    2017-02-06

    Objective: To estimate community needs with respect to risky adolescent sexual behavior in a way that is risk-adjusted for multiple community factors. Methods: Generalized linear mixed modeling was applied to estimate teen pregnancy and sexually transmitted disease (STD) incidence by postal ZIP code in New York State, in a way that adjusts for other community covariables and residual spatial autocorrelation. A community needs index was then obtained by summing the risk-adjusted estimates of pregnancy and STD cases. Results: Poisson regression with a spatial random effect was chosen among competing modeling approaches. Both the risk-adjusted caseloads and rates were computed for ZIP codes, which allowed risk-based prioritization to help guide funding decisions for a comprehensive adolescent pregnancy prevention program. Conclusions: This approach provides quantitative evidence of community needs with respect to risky adolescent sexual behavior, while adjusting for other community-level variables and stabilizing estimates in areas with small populations. Therefore, it was well accepted by the affected groups and proved valuable for program planning. This methodology may also prove valuable for follow-up program evaluation. Current research is directed towards further improving the statistical modeling approach and applying it to different health and behavioral outcomes, along with different predictor variables.

  16. General characterization of Tityus fasciolatus scorpion venom. Molecular identification of toxins and localization of linear B-cell epitopes.

    PubMed

    Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C

    2015-06-01

    This communication describes the general characteristics of the venom from the Brazilian scorpion Tityus fasciolatus, which is an endemic species found in central Brazil (States of Goiás and Minas Gerais) and is responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with an LD50 of 2.984 mg/kg (subcutaneous). SDS-PAGE of the soluble venom revealed 10 fractions ranging in size from 6-10 to 80 kDa. Sheep were employed for anti-T. fasciolatus venom serum production. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membrane (spot-synthesis technique). The epitopes were located on the 3D structures and some residues important for structure/function were identified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Depth-compensated diffuse optical tomography enhanced by general linear model analysis and an anatomical atlas of human head.

    PubMed

    Tian, Fenghua; Liu, Hanli

    2014-01-15

    One of the main challenges in functional diffuse optical tomography (DOT) is to accurately recover the depth of brain activation, which is even more essential when differentiating true brain signals from task-evoked artifacts in the scalp. Recently, we developed a depth-compensated algorithm (DCA) to minimize the depth localization error in DOT. However, the semi-infinite model that was used in DCA deviated significantly from the realistic human head anatomy. In the present work, we incorporated depth-compensated DOT (DC-DOT) with a standard anatomical atlas of human head. Computer simulations and human measurements of sensorimotor activation were conducted to examine and prove the depth specificity and quantification accuracy of brain atlas-based DC-DOT. In addition, node-wise statistical analysis based on the general linear model (GLM) was also implemented and performed in this study, showing the robustness of DC-DOT that can accurately identify brain activation at the correct depth for functional brain imaging, even when co-existing with superficial artifacts.

  18. Assessing intervention efficacy on high-risk drinkers using generalized linear mixed models with a new class of link functions.

    PubMed

    Prates, Marcos O; Aseltine, Robert H; Dey, Dipak K; Yan, Jun

    2013-11-01

    Unhealthy alcohol use is one of the leading causes of morbidity and mortality in the United States. Brief interventions with high-risk drinkers during an emergency department (ED) visit are of great interest due to their possible efficacy and low cost. In a collaborative study with patients recruited at 14 academic EDs across the United States, we examined the self-reported number of drinks per week by each patient following exposure to a brief intervention. Count data with overdispersion have been mostly analyzed with generalized linear mixed models (GLMMs), for which only a limited number of link functions are available. Different choices of link function provide different fit and predictive power for a particular dataset. We propose a class of link functions from an alternative way to incorporate random effects in a GLMM, which encompasses many existing link functions as special cases. The methodology is naturally implemented in a Bayesian framework, with competing links selected with Bayesian model selection criteria such as the conditional predictive ordinate (CPO). In application to the ED intervention study, all models suggest that the intervention was effective in reducing the number of drinks, but some new models are found to significantly outperform the traditional model as measured by CPO. The validity of CPO in link selection is confirmed in a simulation study that shared the same characteristics as the count data from high-risk drinkers. The dataset and the source code for the best fitting model are available in Supporting Information.

  19. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy.

  20. SNP_NLMM: A SAS Macro to Implement a Flexible Random Effects Density for Generalized Linear and Nonlinear Mixed Models

    PubMed Central

    Vock, David M.; Davidian, Marie; Tsiatis, Anastasios A.

    2014-01-01

    Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing if the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on an NLMM of intravenous drug concentration over time. PMID:24688453

  1. Acute toxicity of ammonia (NH3-N) in sewage effluent to Chironomus riparius: II. Using a generalized linear model

    USGS Publications Warehouse

    Monda, D.P.; Galat, D.L.; Finger, S.E.; Kaiser, M.S.

    1995-01-01

    Toxicity of un-ionized ammonia (NH3-N) to the midge Chironomus riparius was compared using laboratory culture (well) water and sewage effluent (≈0.4 mg/L NH3-N) in two 96-h, static-renewal toxicity experiments. A generalized linear model was used for data analysis. For the first and second experiments, respectively, LC50 values were 9.4 mg/L (Test 1A) and 6.6 mg/L (Test 2A) for ammonia in well water, and 7.8 mg/L (Test 1B) and 4.1 mg/L (Test 2B) for ammonia in sewage effluent. Slopes of dose-response curves for Tests 1A and 2A were equal, but mortality occurred at lower NH3-N concentrations in Test 2A (unequal intercepts). Response of C. riparius to NH3 in effluent was not consistent; dose-response curves for tests 1B and 2B differed in slope and intercept. Nevertheless, C. riparius was more sensitive to ammonia in effluent than in well water in both experiments, indicating a synergistic effect of ammonia in sewage effluent. These results demonstrate the advantages of analyzing the organism's entire range of response, as opposed to generating LC50 values, which represent only one point on the dose-response curve.
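    For a logit-link dose-response GLM fitted on log10 dose, the LC50 is simply the dose at which the linear predictor crosses zero, which underlines the authors' point that it summarizes only one point on the fitted curve. A sketch with hypothetical coefficients, chosen here only so the LC50 lands near the 9.4 mg/L reported for Test 1A:

```python
import math

# Hypothetical logit-link dose-response fit: logit(mortality) = b0 + b1 * log10(dose).
# b0 and b1 are invented so the LC50 lands near the 9.4 mg/L reported for Test 1A.
b0, b1 = -2.92, 3.0

# The LC50 is the dose at which the linear predictor is zero (mortality = 0.5).
lc50 = 10 ** (-b0 / b1)

p_at_lc50 = 1 / (1 + math.exp(-(b0 + b1 * math.log10(lc50))))
# Predicted mortality at the LC50 is 0.5, confirming the closed form.
```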

  2. A Sequence Kernel Association Test for Dichotomous Traits in Family Samples under a Generalized Linear Mixed Model.

    PubMed

    Yan, Qi; Tiwari, Hemant K; Yi, Nengjun; Gao, Guimin; Zhang, Kui; Lin, Wan-Yu; Lou, Xiang-Yang; Cui, Xiangqin; Liu, Nianjun

    2015-01-01

    The existing methods for identifying multiple rare variants underlying complex diseases in family samples are underpowered. Therefore, we aim to develop a new set-based method for an association study of dichotomous traits in family samples. We introduce a framework for testing the association of genetic variants with diseases in family samples based on a generalized linear mixed model. Our proposed method is based on a kernel machine regression and can be viewed as an extension of the sequence kernel association test (SKAT and famSKAT) for application to family data with dichotomous traits (F-SKAT). Our simulation studies show that the original SKAT has inflated type I error rates when applied directly to family data. By contrast, our proposed F-SKAT has the correct type I error rate. Furthermore, in all of the considered scenarios, F-SKAT, which uses all family data, has higher power than both SKAT, which uses only unrelated individuals from the family data, and another method, which uses all family data. We propose a set-based association test that can be used to analyze family data with dichotomous phenotypes while handling genetic variants with the same or opposite directions of effects as well as any types of family relationships. © 2015 S. Karger AG, Basel.

  3. Projected changes in precipitation and temperature over the Canadian Prairie Provinces using the Generalized Linear Model statistical downscaling approach

    NASA Astrophysics Data System (ADS)

    Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.

    2016-08-01

    In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. Then the calibrated models are used to generate daily sequences of precipitation and temperature for the 1962-2005 historical (conditioned on NCEP predictors), and future period (2006-2100) using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase-5) Earth System Models corresponding to Representative Concentration Pathway (RCP): RCP2.6, RCP4.5, and RCP8.5 scenarios. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter while minimum temperature is expected to warm faster than the maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.

  4. Complex-number representation of informed basis functions in general linear modeling of Functional Magnetic Resonance Imaging.

    PubMed

    Wang, Pengwei; Wang, Zhishun; He, Lianghua

    2012-03-30

    Functional Magnetic Resonance Imaging (fMRI), measuring Blood Oxygen Level-Dependent (BOLD), is a widely used tool to reveal spatiotemporal pattern of neural activity in human brain. Standard analysis of fMRI data relies on a general linear model and the model is constructed by convolving the task stimuli with a hypothesized hemodynamic response function (HRF). To capture possible phase shifts in the observed BOLD response, the informed basis functions including canonical HRF and its temporal derivative, have been proposed to extend the hypothesized hemodynamic response in order to obtain a good fitting model. Different t contrasts are constructed from the estimated model parameters for detecting the neural activity between different task conditions. However, the estimated model parameters corresponding to the orthogonal basis functions have different physical meanings. It remains unclear how to combine the neural features detected by the two basis functions and construct t contrasts for further analyses. In this paper, we have proposed a novel method for representing multiple basis functions in complex domain to model the task-driven fMRI data. Using this method, we can treat each pair of model parameters, corresponding respectively to canonical HRF and its temporal derivative, as one complex number for each task condition. Using the specific rule we have defined, we can conveniently perform arithmetical operations on the estimated model parameters and generate different t contrasts. We validate this method using the fMRI data acquired from twenty-two healthy participants who underwent an auditory stimulation task.
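    The central idea, pairing the canonical-HRF estimate with the temporal-derivative estimate as the real and imaginary parts of one complex number, can be sketched as follows (the beta values are invented for illustration):

```python
import cmath

# Hypothetical (canonical HRF, temporal derivative) parameter estimates per condition.
beta_cond_a = complex(2.0, 0.5)
beta_cond_b = complex(1.2, -0.3)

# The magnitude summarizes response amplitude; the angle reflects the phase
# shift captured by the temporal derivative.
amp_a, phase_a = abs(beta_cond_a), cmath.phase(beta_cond_a)

# Ordinary complex arithmetic then yields between-condition contrasts directly.
contrast_ab = beta_cond_a - beta_cond_b
```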

  5. Towards obtaining spatiotemporally precise responses to continuous sensory stimuli in humans: a general linear modeling approach to EEG.

    PubMed

    Gonçalves, Nuno R; Whelan, Robert; Foxe, John J; Lalor, Edmund C

    2014-08-15

    Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp recorded neuroelectric or neuromagnetic signals. While this approach is and has been extremely useful, it suffers from two drawbacks: a lack of naturalness in terms of the stimulus and a lack of precision in terms of the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method by example through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latency, their topographic distribution and the estimated location of their sources. We discuss our approach by comparing it with fMRI/MRI informed source analysis methods and, in doing so, we provide novel information on the timing of coherent motion processing in human V6. Generalizing such an approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans.

  6. Multisite multivariate modeling of daily precipitation and temperature in the Canadian Prairie Provinces using generalized linear models

    NASA Astrophysics Data System (ADS)

    Asong, Zilefac E.; Khaliq, M. N.; Wheater, H. S.

    2016-11-01

    Based on the Generalized Linear Model (GLM) framework, a multisite stochastic modelling approach is developed using daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. Temperature is modeled using a two-stage normal-heteroscedastic model by fitting mean and variance components separately. Likewise, precipitation occurrence and conditional precipitation intensity processes are modeled separately. The relationship between precipitation and temperature is accounted for by using transformations of precipitation as covariates to predict temperature fields. Large scale atmospheric covariates from the National Center for Environmental Prediction Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate these models for the 1971-2000 period. Validation of the developed models is performed on both pre- and post-calibration period data. Results of the study indicate that the developed models are able to capture spatiotemporal characteristics of observed precipitation and temperature fields, such as inter-site and inter-variable correlation structure, and systematic regional variations present in observed sequences. A number of simulated weather statistics ranging from seasonal means to characteristics of temperature and precipitation extremes and some of the commonly used climate indices are also found to be in close agreement with those derived from observed data. This GLM-based modelling approach will be developed further for multisite statistical downscaling of Global Climate Model outputs to explore climate variability and change in this region of Canada.
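    A minimal sketch of the two-part structure described above, a logistic occurrence model plus a gamma conditional-intensity model with a log link; the coefficients are invented placeholders, not the fitted GLM values:

```python
import math
import random

def simulate_day(covariate, rng):
    """One day of precipitation from a two-part GLM-style generator."""
    # Occurrence: logistic regression on a large-scale covariate (coefficients invented).
    p_wet = 1 / (1 + math.exp(-(-1.0 + 0.8 * covariate)))
    if rng.random() >= p_wet:
        return 0.0
    # Conditional intensity: gamma distribution with a log-link mean model.
    mean_intensity = math.exp(0.5 + 0.3 * covariate)
    shape = 0.7
    return rng.gammavariate(shape, mean_intensity / shape)

rng = random.Random(42)
days = [simulate_day(0.2, rng) for _ in range(5000)]
wet_fraction = sum(d > 0 for d in days) / len(days)
# The expected wet-day probability at covariate value 0.2 is about 0.30.
```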

  7. Optimizing the general linear model for functional near-infrared spectroscopy: an adaptive hemodynamic response function approach

    PubMed Central

    Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju

    2014-01-01

    An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
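    The peak-delay sweep at the heart of the adaptive HRF method can be sketched with a toy gamma-variate HRF and noise-free synthetic data (a simplification for illustration, not the authors' exact procedure):

```python
import math

def hrf(t, peak):
    """Toy gamma-variate HRF with unit amplitude peaking at t = peak seconds."""
    if t <= 0:
        return 0.0
    return (t / peak) ** 4 * math.exp(4 * (1 - t / peak))

times = [0.1 * i for i in range(300)]          # 30 s at 10 Hz, fNIRS-like sampling
true_peak = 7.2
signal = [hrf(t, true_peak) for t in times]    # synthetic, noise-free response

# Sweep candidate peak delays (4-10 s) and keep the best least-squares fit.
candidates = [4.0 + 0.2 * k for k in range(31)]
sse, best_peak = min(
    (sum((hrf(t, p) - s) ** 2 for t, s in zip(times, signal)), p)
    for p in candidates
)
# On noise-free data the sweep recovers the true 7.2 s peak delay.
```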

  8. Generalized linear solvation energy model applied to solute partition coefficients in ionic liquid-supercritical carbon dioxide systems.

    PubMed

    Planeta, Josef; Karásek, Pavel; Hohnová, Barbora; Sťavíková, Lenka; Roth, Michal

    2012-08-10

    Biphasic solvent systems composed of an ionic liquid (IL) and supercritical carbon dioxide (scCO(2)) have become frequently used in synthesis, extractions and electrochemistry. In the design of related applications, information on interphase partitioning of the target organics is essential, and the infinite-dilution partition coefficients of the organic solutes in IL-scCO(2) systems can conveniently be obtained by supercritical fluid chromatography. The database of experimental partition coefficients obtained previously in this laboratory has been employed to test a generalized predictive model for the solute partition coefficients. The model is an amended version of that described before by Hiraga et al. (J. Supercrit. Fluids, in press). Because of the difficulty of the problem to be modeled, the model involves several different concepts - linear solvation energy relationships, density-dependent solvent power of scCO(2), regular solution theory, and the Flory-Huggins theory of athermal solutions. The model shows moderate success in correlating the infinite-dilution solute partition coefficients (K-factors) in individual IL-scCO(2) systems at varying temperature and pressure. However, larger K-factor data sets involving multiple IL-scCO(2) systems appear to be beyond reach of the model, especially when the ILs involved pertain to different cation classes.

  9. Misconceptions in the use of the General Linear Model applied to functional MRI: a tutorial for junior neuro-imagers

    PubMed Central

    Pernet, Cyril R.

    2014-01-01

    This tutorial presents several misconceptions related to the use of the General Linear Model (GLM) in functional Magnetic Resonance Imaging (fMRI). The goal is not to present mathematical proofs but to educate using examples and computer code (in Matlab). In particular, I address issues related to (1) model parameterization (modeling baseline or null events) and scaling of the design matrix; (2) hemodynamic modeling using basis functions; and (3) computing percentage signal change. Using a simple controlled block design and an alternating block design, I first show why “baseline” should not be modeled (model over-parameterization), and how this affects effect sizes. I also show that, depending on what is tested, over-parameterization does not necessarily impact statistical results. Next, using a simple periodic vs. random event-related design, I show how the hemodynamic model (hemodynamic function only or with derivatives) can affect parameter estimates, and detail the role of orthogonalization. I then relate the above results to the computation of percentage signal change. Finally, I discuss how these issues affect group analyses and give some recommendations. PMID:24478622
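The over-parameterization point, that an explicit baseline regressor plus a constant makes the design matrix rank-deficient, can be seen directly in a toy block design (invented data; the tutorial itself uses Matlab code):

```python
# A 12-scan block design: 1 during task, 0 during rest.
task = [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0]
baseline = [1 - t for t in task]     # explicit "baseline" regressor
constant = [1] * len(task)

# task + baseline reproduces the constant column exactly, so the
# three-column design matrix is rank-deficient (over-parameterized).
dependent = all(t + b == c for t, b, c in zip(task, baseline, constant))

def fitted(b0, b_task, b_base):
    # Fitted values for coefficients (constant, task, baseline).
    return [b0 * c + b_task * t + b_base * b
            for c, t, b in zip(constant, task, baseline)]

# Two different coefficient vectors give identical fitted values:
# the individual betas are no longer identifiable.
same = fitted(0.0, 2.0, 1.0) == fitted(1.0, 1.0, 0.0)
```

This is why contrasts of the task effect against the implicit baseline can still be tested even though the individual parameters are not unique.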

  10. Age- and region-specific hepatitis B prevalence in Turkey estimated using generalized linear mixed models: a systematic review

    PubMed Central

    2011-01-01

    Background To provide a clear picture of the current hepatitis B situation, the authors performed a systematic review to estimate the age- and region-specific prevalence of chronic hepatitis B (CHB) in Turkey. Methods A total of 339 studies with original data on the prevalence of hepatitis B surface antigen (HBsAg) in Turkey and published between 1999 and 2009 were identified through a search of electronic databases, by reviewing citations, and by writing to authors. After a critical assessment, the authors included 129 studies, divided into categories: 'age-specific'; 'region-specific'; and 'specific population group'. To account for the differences among the studies, a generalized linear mixed model was used to estimate the overall prevalence across all age groups and regions. For specific population groups, the authors calculated the weighted mean prevalence. Results The estimated overall population prevalence was 4.57% (95% confidence interval (CI): 3.58, 5.76), and the estimated total number of CHB cases was about 3.3 million. The outcomes of the age-specific groups varied from 2.84% (95% CI: 2.60, 3.10) for the 0-14-year-olds to 6.36% (95% CI: 5.83, 6.90) in the 25-34-year-old group. Conclusion There are large age-group and regional differences in CHB prevalence in Turkey, where CHB remains a serious health problem. PMID:22151620

  11. Sample size calculation based on generalized linear models for differential expression analysis in RNA-seq data.

    PubMed

    Li, Chung-I; Shyr, Yu

    2016-12-01

    As RNA-seq rapidly develops and costs continually decrease, the quantity and frequency of samples being sequenced will grow exponentially. With proteomic investigations becoming more multivariate and quantitative, determining a study's optimal sample size is now a vital step in experimental design. Current methods for calculating a study's required sample size are mostly based on the hypothesis testing framework, which assumes each gene count can be modeled through Poisson or negative binomial distributions; however, these methods are limited when it comes to accommodating covariates. To address this limitation, we propose an estimating procedure based on the generalized linear model. This easy-to-use method constructs a representative exemplary dataset and estimates the conditional power, all without requiring complicated mathematical approximations or formulas. Even more attractive, the downstream analysis can be performed with current R/Bioconductor packages. To demonstrate the practicability and efficiency of this method, we apply it to three real-world studies, and introduce our on-line calculator developed to determine the optimal sample size for an RNA-seq study.
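The simulation-based idea, build a representative dataset and estimate power empirically rather than through closed-form approximations, can be sketched in miniature. For brevity this sketch uses a plain Poisson two-group rate-ratio Wald test rather than the paper's GLM-with-covariates procedure, and the group size, mean and fold change are made-up parameters:

```python
import math, random

def poisson(lam, rng):
    # Knuth's algorithm; fine for the moderate means used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def power_two_group(n, mu, fold, n_sim=400, alpha_z=1.96, seed=1):
    # Simulation-based power for a Poisson rate-ratio Wald test:
    # z = log(total2/total1) / sqrt(1/total1 + 1/total2).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        s1 = sum(poisson(mu, rng) for _ in range(n))
        s2 = sum(poisson(mu * fold, rng) for _ in range(n))
        if s1 == 0 or s2 == 0:
            continue  # empty group: counted as a non-rejection
        z = (math.log(s2) - math.log(s1)) / math.sqrt(1 / s1 + 1 / s2)
        if abs(z) > alpha_z:
            hits += 1
    return hits / n_sim

est_power = power_two_group(n=5, mu=10.0, fold=2.0)
```

Increasing `n` until `est_power` clears a target (say 0.8) gives the required sample size for these hypothetical settings.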

  12. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    PubMed

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iteratively reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).
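The alternating estimation idea can be illustrated with a stripped-down IRWLS loop that jointly fits a mean model and a group-wise dispersion model; this is a toy sketch with invented data, not the authors' DHGLM with genetic random effects or their ASReml implementation:

```python
def irwls_mean_dispersion(y, x, groups, n_iter=20):
    # Alternate (a) weighted least squares for the mean model
    # y ~ b0 + b1 * x with (b) per-group residual-variance estimates
    # that define the weights for the next pass.
    n = len(y)
    w = [1.0] * n
    for _ in range(n_iter):
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * swxx - swx * swx
        b0 = (swxx * swy - swx * swxy) / det
        b1 = (sw * swxy - swx * swy) / det
        resid2 = [(yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y)]
        var = {g: sum(r for r, gi in zip(resid2, groups) if gi == g)
                  / groups.count(g) for g in set(groups)}
        w = [1.0 / max(var[g], 1e-12) for g in groups]
    return b0, b1, var

# Invented data: both groups share the mean line y = 1 + 2x,
# but group "B" is ten times noisier than group "A".
x = [float(i) for i in range(10)] * 2
dev = [0.1, -0.1] * 5 + [1.0, -1.0] * 5
groups = ["A"] * 10 + ["B"] * 10
y = [1.0 + 2.0 * xi + d for xi, d in zip(x, dev)]
b0, b1, var = irwls_mean_dispersion(y, x, groups)
```

The recovered slope is close to 2 and the estimated dispersion for group "B" is far larger than for "A", which is exactly the mean/variance separation the DHGLM machinery formalizes.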

  13. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets

    PubMed Central

    Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.

    2016-01-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  14. The overlooked potential of generalized linear models in astronomy - III. Bayesian negative binomial regression and globular cluster populations

    NASA Astrophysics Data System (ADS)

    de Souza, R. S.; Hilbe, J. M.; Buelens, B.; Riggs, J. D.; Cameron, E.; Ishida, E. E. O.; Chies-Santos, A. L.; Killedar, M.

    2015-10-01

    In this paper, the third in a series illustrating the power of generalized linear models (GLMs) for the astronomical community, we elucidate the potential of the class of GLMs which handles count data. The size of a galaxy's globular cluster (GC) population (N_GC) is a long-standing puzzle in the astronomical literature. It falls in the category of count data analysis, yet it is usually modelled as if it were a continuous response variable. We have developed a Bayesian negative binomial regression model to study the connection between N_GC and the following galaxy properties: central black hole mass, dynamical bulge mass, bulge velocity dispersion and absolute visual magnitude. The methodology introduced herein naturally accounts for heteroscedasticity, intrinsic scatter, measurement errors in both axes (either discrete or continuous), and allows modelling the population of GCs on its natural scale as a non-negative integer variable. Prediction intervals of 99 per cent around the trend for expected N_GC comfortably envelope the data, notably including the Milky Way, which has hitherto been considered a problematic outlier. Finally, we demonstrate how random intercept models can incorporate information on each particular galaxy morphological type. Bayesian variable selection methodology allows for automatically identifying galaxy types with different productions of GCs, suggesting that on average S0 galaxies have a GC population 35 per cent smaller than other types with similar brightness.
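The negative binomial likelihood underlying such a regression handles the overdispersion that a Poisson model cannot. A minimal non-Bayesian sketch below evaluates that likelihood and fits mean and dispersion by grid search on invented counts; the paper's actual model is a full Bayesian regression with covariates and measurement errors:

```python
import math

def nb_loglik(y, mu, alpha):
    # Negative binomial log-likelihood with mean mu and dispersion
    # alpha (variance mu + alpha * mu**2); alpha -> 0 recovers Poisson.
    r = 1.0 / alpha
    return sum(math.lgamma(yi + r) - math.lgamma(r) - math.lgamma(yi + 1)
               + r * math.log(r / (r + mu)) + yi * math.log(mu / (r + mu))
               for yi in y)

counts = [3, 0, 5, 9, 1, 4, 12, 2]   # invented overdispersed counts

# Crude grid-search MLE for (mu, alpha).
grid_mu = [3.0 + 0.1 * i for i in range(26)]      # 3.0 .. 5.5
grid_alpha = [0.05 * i for i in range(1, 21)]     # 0.05 .. 1.0
mu_hat, a_hat = max(((m, a) for m in grid_mu for a in grid_alpha),
                    key=lambda p: nb_loglik(counts, p[0], p[1]))
```

For these counts the sample variance far exceeds the mean, so the fitted dispersion lands well above the near-Poisson end of the grid.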

  15. Hierarchical multivariate mixture generalized linear models for the analysis of spatial data: An application to disease mapping.

    PubMed

    Torabi, Mahmoud

    2016-09-01

    Disease mapping of a single disease has been widely studied in the public health setting. Simultaneous modeling of related diseases can also be a valuable tool, both from the epidemiological and from the statistical point of view. In particular, when we have several measurements recorded at each spatial location, we need to consider multivariate models in order to handle the dependence among the multivariate components as well as the spatial dependence between locations. It is then customary to use multivariate spatial models that assume the same distribution across the entire population density. However, in many circumstances, assuming the same distribution for all areas is a very strong assumption. To overcome this issue, we propose a hierarchical multivariate mixture generalized linear model to simultaneously analyze spatial Normal and non-Normal outcomes. As an application of our proposed approach, esophageal and lung cancer deaths in Minnesota are used to show that assuming different distributions for different counties of Minnesota outperforms assuming a single distribution for the population density. Performance of the proposed approach is also evaluated through a simulation study. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Estimation of genetic variance for macro- and micro-environmental sensitivity using double hierarchical generalized linear models.

    PubMed

    Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling

    2013-07-04

    Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic

  17. Estimation of genetic variance for macro- and micro-environmental sensitivity using double hierarchical generalized linear models

    PubMed Central

    2013-01-01

    Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike

  18. Weil Representation of a Generalized Linear Group over a Ring of Truncated Polynomials over a Finite Field Endowed with a Second Class Involution

    NASA Astrophysics Data System (ADS)

    Gutiérrez Frez, Luis; Pantoja, José

    2015-09-01

    We construct a complex linear Weil representation ρ of the generalized special linear group G = SL_*^1(2, A_n), where A_n = K[x]/⟨x^n⟩, K is the quadratic extension of the finite field k of q elements (q odd), and A_n is endowed with a second class involution. After the construction of specific data, the representation is defined on the generators of a Bruhat presentation of G, via linear operators satisfying the relations of the presentation. The structure of a unitary group U associated with G is described. Using this group we obtain a first decomposition of ρ.

  19. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main parts in this paper, i.e. the parameter estimation procedure, a simulation, and implementation of the model for real data. In the parameter estimation procedure, the concepts of threshold, nested random effect, and the computational algorithm are described. The simulation data are built for 3 conditions to assess the effect of different parameter values of the random effect distributions. The last part is the implementation of the model for data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. The simulation results are evaluated using ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error). They show that the province parameters have the highest bias, but the most stable RRMSE across all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the model implementation for the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).

  20. Statistical Downscaling of Seasonal Forecasts and Climate Change Scenarios using Generalized Linear Modeling Approach for Stochastic Weather Generators

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Katz, R. W.; Rajagopalan, B.; Podesta, G. P.

    2009-12-01

    Climate forecasts and climate change scenarios are typically provided in the form of monthly or seasonally aggregated totals or means. But time series of daily weather (e.g., precipitation amount, minimum and maximum temperature) are commonly required for use in agricultural decision-making. Stochastic weather generators constitute one technique to temporally downscale such climate information. The recently introduced approach for stochastic weather generators, based on generalized linear modeling (GLM), is convenient for this purpose, especially with covariates to account for seasonality and teleconnections (e.g., with the El Niño phenomenon). Yet one important limitation of stochastic weather generators is a marked tendency to underestimate the observed interannual variance of seasonally aggregated variables. To reduce this “overdispersion” phenomenon, we incorporate time series of seasonal total precipitation and seasonal mean minimum and maximum temperature in the GLM weather generator as covariates. These seasonal time series are smoothed using locally weighted scatterplot smoothing (LOESS) to avoid introducing underdispersion. Because the aggregate variables appear explicitly in the weather generator, downscaling to daily sequences can be readily implemented. The proposed method is applied to time series of daily weather at Pergamino and Pilar in the Argentine Pampas. Seasonal precipitation and temperature forecasts produced by the International Research Institute for Climate and Society (IRI) are used as prototypes. In conjunction with the GLM weather generator, a resampling scheme is used to translate the uncertainty in the seasonal forecasts (the IRI format only specifies probabilities for three categories: below normal, near normal, and above normal) into the corresponding uncertainty for the daily weather statistics. The method is able to generate potentially useful shifts in the probability distributions of seasonally aggregated precipitation and
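The GLM weather-generator component for precipitation occurrence is typically a logistic regression with seasonal covariates. The sketch below fits such a model to synthetic daily wet/dry data by plain gradient ascent; the single harmonic covariate and all data-generating values are invented, and the actual approach adds LOESS-smoothed seasonal aggregates as further covariates:

```python
import math, random

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    # Plain gradient ascent on the Bernoulli log-likelihood.
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            eta = sum(b * v for b, v in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))
            for j in range(p):
                grad[j] += (yi - mu) * xi[j]
        beta = [b + lr * g / len(y) for b, g in zip(beta, grad)]
    return beta

# One year of daily wet/dry occurrence driven by a seasonal harmonic
# (hypothetical data-generating parameters: intercept -0.5, slope 1.5).
rng = random.Random(0)
X = [[1.0, math.sin(2 * math.pi * d / 365)] for d in range(365)]
y = [1 if rng.random() < 1 / (1 + math.exp(-(-0.5 + 1.5 * x[1]))) else 0
     for x in X]
beta = fit_logistic(X, y)
```

The fitted coefficients recover the seasonal signal: a negative intercept (dry days dominate) and a clearly positive harmonic slope.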

  1. Does a physical activity program in the nursing home impact on depressive symptoms? A generalized linear mixed-model approach.

    PubMed

    Diegelmann, Mona; Jansen, Carl-Philipp; Wahl, Hans-Werner; Schilling, Oliver K; Schnabel, Eva-Luisa; Hauer, Klaus

    2017-04-18

    Physical activity (PA) may counteract depressive symptoms in nursing home (NH) residents via biological, psychological, and person-environment transactional pathways. Empirical results, however, have remained inconsistent. Addressing potential shortcomings of previous research, we examined the effect of a whole-ecology PA intervention program on NH residents' depressive symptoms using generalized linear mixed models (GLMMs). We used longitudinal data from residents of two German NHs who were included without any pre-selection regarding physical and mental functioning (n = 163, mean age = 83.1, range 53-100 years; 72% female) and assessed on four occasions, each three months apart. Residents willing to participate received a 12-week PA training program. Afterwards, the training was implemented in weekly activity schedules by NH staff. We ran GLMMs with a gamma distribution to account for the highly skewed depressive symptoms outcome measure (12-item Geriatric Depression Scale-Residential). Exercising (n = 78) and non-exercising residents (n = 85) showed comparable levels of depressive symptoms at pretest. For exercising residents, depressive symptoms remained stable from pretest through posttest and follow-up, whereas for non-exercising residents symptoms increased across these occasions. Implementing an innovative PA intervention appears to be a promising approach to prevent an increase in NH residents' depressive symptoms. At the data-analytical level, GLMMs seem to be a promising tool for intervention research at large, because all longitudinally available data points and the non-normality of outcome data can be taken into account.
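The gamma choice for a skewed, strictly positive score can be illustrated with a minimal fixed-effects gamma GLM (log link, one two-level factor). This is a sketch with invented scores, not the study's GLMM with random effects: with a saturated factor, the maximum-likelihood fitted means are simply the group means, and the dispersion can be estimated from squared Pearson residuals.

```python
import math

def gamma_glm_group_fit(y, group):
    # Gamma GLM, log link, saturated two-level factor: the MLE fitted
    # means are the group sample means; dispersion from Pearson residuals.
    means = {}
    for g in (0, 1):
        vals = [yi for yi, gi in zip(y, group) if gi == g]
        means[g] = sum(vals) / len(vals)
    b0 = math.log(means[0])          # intercept: log mean of group 0
    b1 = math.log(means[1]) - b0     # log mean ratio, group 1 vs 0
    n = len(y)
    disp = sum(((yi - means[gi]) / means[gi]) ** 2
               for yi, gi in zip(y, group)) / (n - 2)
    return b0, b1, disp

scores = [1.0, 2.0, 3.0, 2.0, 4.0, 8.0, 6.0, 6.0]  # invented skewed scores
group = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1, disp = gamma_glm_group_fit(scores, group)
```

With group means 2 and 6, the log-link coefficients are log 2 and log 3; a GLMM adds random effects on top of exactly this likelihood.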

  2. An Application of Interactive Computer Graphics to the Study of Inferential Statistics and the General Linear Model

    DTIC Science & Technology

    1991-09-01

    Front matter and table of contents only; listed sections include estimation of the population mean of a normally distributed random variable and simple linear regression to estimate E(Y | X).

  3. Permutation-based variance component test in generalized linear mixed model with application to multilocus genetic association study.

    PubMed

    Zeng, Ping; Zhao, Yang; Li, Hongliang; Wang, Ting; Chen, Feng

    2015-04-22

    In many medical studies the likelihood ratio test (LRT) has been widely applied to examine whether the random-effects variance component is zero within the mixed effects models framework, whereas little work on likelihood-ratio-based variance component testing has been done in generalized linear mixed models (GLMM), where the response is discrete and the log-likelihood cannot be computed exactly. Before applying the LRT for a variance component in a GLMM, several difficulties need to be overcome, including the computation of the log-likelihood, the parameter estimation and the derivation of the null distribution of the LRT statistic. To overcome these problems, in this paper we make use of the penalized quasi-likelihood algorithm and calculate the LRT statistic based on the resulting working response and the quasi-likelihood. A permutation procedure is used to obtain the null distribution of the LRT statistic. We evaluate the permutation-based LRT via simulations and compare it with the score-based variance component test and tests based on mixtures of chi-square distributions. Finally, we apply the permutation-based LRT to multilocus association analysis in a case-control study, where the problem can be investigated under the framework of a logistic mixed effects model. The simulations show that the permutation-based LRT can effectively control the type I error rate, while the score test is sometimes slightly conservative and the tests based on mixtures cannot maintain the type I error rate. Our studies also show that the permutation-based LRT has higher power than these existing tests and maintains a reasonably high power even when the random effects do not follow a normal distribution. The application to GAW17 data also demonstrates that the proposed LRT has a higher probability of identifying association signals than the score test and the tests based on mixtures. In the present paper the permutation-based LRT was developed for variance
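The permutation machinery here is generic: permute the grouping, recompute the statistic, and take the exceedance proportion as the p-value. The sketch below swaps the paper's LRT statistic for a simple absolute mean difference and uses invented data; the permutation logic is the same.

```python
import random

def permutation_pvalue(stat, y, labels, n_perm=999, seed=7):
    # Generic permutation p-value: permute labels, recompute the
    # statistic, count permuted values at least as extreme.
    # The observed arrangement is included in the count (+1 / +1).
    rng = random.Random(seed)
    observed = stat(y, labels)
    count = 1
    labs = list(labels)
    for _ in range(n_perm):
        rng.shuffle(labs)
        if stat(y, labs) >= observed:
            count += 1
    return count / (n_perm + 1)

def mean_diff(y, labels):
    # Stand-in test statistic: absolute difference in group means.
    a = [v for v, l in zip(y, labels) if l == 1]
    b = [v for v, l in zip(y, labels) if l == 0]
    return abs(sum(a) / len(a) - sum(b) / len(b))

y = [0.1, 0.2, 0.0, 0.3, 2.1, 2.4, 1.9, 2.2]   # invented data
labels = [0, 0, 0, 0, 1, 1, 1, 1]
p = permutation_pvalue(mean_diff, y, labels)
```

In the paper the statistic would be the quasi-likelihood-based LRT value and the permuted quantity the genotype grouping, but the null distribution is obtained in exactly this way.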

  4. Direct 4D parametric imaging for linearized models of reversibly binding PET tracers using generalized AB-EM reconstruction

    PubMed Central

    Rahmim, Arman; Zhou, Yun; Tang, Jing; Lu, Lijun; Sossi, Vesna; Wong, Dean F.

    2012-01-01

    Due to high noise levels in the voxel kinetics, development of reliable parametric imaging algorithms remains one of the most active areas in dynamic brain PET imaging, which in the vast majority of cases involves receptor/transporter studies with reversibly binding tracers. As such, the focus of this work has been to develop a novel direct 4D parametric image reconstruction scheme for such tracers. Based on a relative equilibrium (RE) graphical analysis formulation (Zhou et al., 2009b), we developed a closed-form 4D EM algorithm to directly reconstruct distribution volume (DV) parametric images within a plasma input model, as well as DV ratio (DVR) images within a reference tissue model scheme (wherein an initial reconstruction was used to estimate the reference tissue time-activity curves). A particular challenge with the direct 4D EM formulation is that the intercept parameters in graphical (linearized) analysis of reversible tracers (e.g. Logan or RE analysis) are commonly negative (unlike for irreversible tracers, e.g. using Patlak analysis). Subsequently, we focused our attention on the AB-EM algorithm, derived by Byrne (1998) to allow inclusion of prior information about the lower (A) and upper (B) bounds for image values. We then generalized this algorithm to the 4D EM framework, thus allowing negative intercept parameters. Furthermore, our 4D AB-EM algorithm incorporated, and emphasized the use of, spatially varying lower bounds to achieve enhanced performance. As validation, the means of parameters estimated from 55 human 11C-raclopride dynamic PET studies were used for extensive simulations using a mathematical brain phantom. Images were reconstructed using conventional indirect as well as the proposed direct parametric imaging methods. Noise vs. bias quantitative measurements were performed in various regions of the brain. Direct 4D EM reconstruction resulted in notable qualitative and quantitative accuracy improvements (over 35% noise reduction, with matched

  5. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  6. Symposium on General Linear Model Approach to the Analysis of Experimental Data in Educational Research (Athens, Georgia, June 29-July 1, 1967). Final Report.

    ERIC Educational Resources Information Center

    Bashaw, W. L., Ed.; Findley, Warren G., Ed.

    This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…

  8. A Comparison between Linear IRT Observed-Score Equating and Levine Observed-Score Equating under the Generalized Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen

    2012-01-01

    In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…

  9. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

    SciTech Connect

    Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

    2006-08-01

    We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
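The derivative-free stopping idea, using the step-length control parameter in place of gradient information, can be seen in a bare-bones compass (generating set) search. This is a generic unconstrained sketch, not the paper's augmented Lagrangian algorithm or its handling of linear constraints:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    # Compass search over the 2n coordinate directions: accept any
    # improving poll point; halve the step when no direction improves.
    # The final step length serves as the derivative-free stopping
    # criterion, standing in for a gradient-based test.
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

# Minimize a smooth toy objective without any derivative information.
x_min, f_min = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                              [0.0, 0.0])
```

In the augmented Lagrangian setting, each linearly constrained subproblem would be solved by such a search, with the step-length threshold playing the role of the inner stopping tolerance.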

  10. Cadmium-hazard mapping using a general linear regression model (Irr-Cad) for rapid risk assessment.

    PubMed

    Simmons, Robert W; Noble, Andrew D; Pongsakul, P; Sukreeyapongse, O; Chinabut, N

    2009-02-01

    Research undertaken over the last 40 years has identified the irrefutable relationship between the long-term consumption of cadmium (Cd)-contaminated rice and human Cd disease. In order to protect public health and livelihood security, the ability to accurately and rapidly determine spatial Cd contamination is of high priority. During 2001-2004, a general linear regression model, Irr-Cad, was developed to predict the spatial distribution of soil Cd in a Cd/Zn co-contaminated cascading irrigated rice-based system in Mae Sot District, Tak Province, Thailand (Longitude E 98 degrees 59'-E 98 degrees 63' and Latitude N 16 degrees 67'-16 degrees 66'). The results indicate that Irr-Cad accounted for 98% of the variance in mean Field Order total soil Cd. Preliminary validation indicated that the Irr-Cad 'predicted' mean Field Order total soil Cd was significantly (p < 0.001) correlated (R² = 0.92) with the 'observed' mean Field Order total soil Cd values. Field Order is determined by a given field's proximity to primary outlets from in-field irrigation channels and subsequent inter-field irrigation flows. This in turn determines Field Order in Irrigation Sequence (Field Order(IS)). Mean Field Order total soil Cd represents the mean total soil Cd (aqua regia-digested) for a given Field Order(IS). In 2004-2005, Irr-Cad was utilized to evaluate the spatial distribution of total soil Cd in a 'high-risk' area of Mae Sot District. Secondary validation on six randomly selected field groups verified that Irr-Cad-predicted mean Field Order total soil Cd was significantly (p < 0.001) correlated with the observed mean Field Order total soil Cd, with R² values ranging from 0.89 to 0.97. The practical applicability of Irr-Cad lies in its minimal input requirements, namely the classification of fields in terms of Field Order(IS), strategic sampling of all primary fields, laboratory-based determination of total soil Cd (T-Cd(P)) and the use of a weighted coefficient for Cd (Coeff

  11. Model-Based Improvement

    DTIC Science & Technology

    2006-10-01

    Report documentation page. Performing organization: Carnegie Mellon University, Software Engineering Institute (SEI), Pittsburgh, PA 15213.

  12. ELAS: A general-purpose computer program for the equilibrium problems of linear structures. Volume 2: Documentation of the program. [subroutines and flow charts

    NASA Technical Reports Server (NTRS)

    Utku, S.

    1969-01-01

    A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by least-squares best-fit strain tensors at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.

  13. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model.
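
    The pairwise generalized Born solvation energy, whose derivatives give the solvation forces discussed above, can be sketched with the Still-type f_GB interpolation. The charges, coordinates, and effective Born radii below are invented; a real OBC calculation computes the Born radii from the molecular geometry.

```python
import numpy as np

# Minimal sketch of the generalized Born solvation energy with the Still-type
# effective distance f_GB. All inputs are invented stand-ins.
def gb_energy(q, xyz, born_r, eps_in=1.0, eps_out=78.5):
    """Sum over all pairs (including i == j self terms) of
    -0.5 * (1/eps_in - 1/eps_out) * q_i * q_j / f_GB(r_ij)."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    e = 0.0
    n = len(q)
    for i in range(n):
        for j in range(n):
            r2 = np.sum((xyz[i] - xyz[j]) ** 2)
            rirj = born_r[i] * born_r[j]
            f_gb = np.sqrt(r2 + rirj * np.exp(-r2 / (4.0 * rirj)))
            e += pref * q[i] * q[j] / f_gb
    return e

q = np.array([-1.0, 0.5, 0.5])            # invented charges (e)
xyz = np.array([[0.0, 0.0, 0.0],
                [3.0, 0.0, 0.0],
                [0.0, 3.0, 0.0]])         # invented coordinates (Angstrom)
born_r = np.array([1.5, 1.2, 1.2])        # invented effective Born radii
e_solv = gb_energy(q, xyz, born_r)
```

    For i == j the effective distance reduces to the Born radius, recovering the Born self energy; the OBC reparametrization discussed above enters through how the radii themselves are computed.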

  14. An Exploratory Study of the Application of Generalized Inverse to ILS Estimation of Overidentified Equations in Linear Models

    DTIC Science & Technology

    1975-04-15

    paper is a compromise in the same nature as the 2SLS. We use the Moore-Penrose (MP) generalized inverse to... Moore-Penrose generalized inverse; Indirect Least Squares; Two Stage Least Squares; Instrumental Variables; Limited Information Maximum Likelihood... Abstract: In this paper, we propose a procedure based on the use of the Moore-Penrose inverse of matrices for deriving unique Indirect Least Squares
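
    The core idea can be sketched as follows: when an overidentified system of reduced-form equations has more equations than structural parameters, the Moore-Penrose inverse selects the unique least-squares solution, mirroring the unique-ILS procedure described in the record. The matrices below are invented for illustration.

```python
import numpy as np

# Overidentified linear system P @ gamma = pi: 5 reduced-form restrictions,
# 2 structural parameters. The Moore-Penrose pseudoinverse gives the unique
# minimum-norm least-squares solution.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 2))          # invented reduced-form coefficient matrix
gamma_true = np.array([1.5, -0.7])   # invented structural parameters
pi = P @ gamma_true                  # consistent overidentified system

gamma_hat = np.linalg.pinv(P) @ pi   # unique ILS-style estimate
```

    Because the system is exactly consistent here, the pseudoinverse recovers the structural parameters; with sampling noise it returns the least-squares compromise instead.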

  15. Model Based Filtered Backprojection Algorithm: A Tutorial

    PubMed Central

    2014-01-01

    Purpose People have been wondering for a long time whether a filtered backprojection (FBP) algorithm is able to incorporate measurement noise in image reconstruction. The purpose of this tutorial is to develop such an FBP algorithm that is able to minimize an objective function with an embedded noise model. Methods An objective function is first set up to model measurement noise and to enforce some constraints so that the resultant image has some pre-specified properties. An iterative algorithm is used to minimize the objective function, and then the result of the iterative algorithm is converted into the Fourier domain, which in turn leads to an FBP algorithm. The model based FBP algorithm is almost the same as the conventional FBP algorithm, except for the filtering step. Results The model based FBP algorithm has been applied to low-dose x-ray CT, nuclear medicine, and real-time MRI applications. Compared with the conventional FBP algorithm, the model based FBP algorithm is more effective in reducing noise. Even though an iterative algorithm can achieve the same noise-reducing performance, the model based FBP algorithm is much more computationally efficient. Conclusions The model based FBP algorithm is an efficient and effective image reconstruction tool. In many applications, it can replace the state-of-the-art iterative algorithms, which usually have a heavy computational cost. The model based FBP algorithm is linear and it has advantages over a nonlinear iterative algorithm in parametric image reconstruction and noise analysis. PMID:25574421
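
    The tutorial's key point, that the model based algorithm differs from conventional FBP only in the filtering step, can be sketched in one dimension. The regularized filter shape |w| / (1 + beta * w^2) and the value of beta are illustrative assumptions, not the tutorial's derived filter, which depends on the specific noise model.

```python
import numpy as np

# Filter a 1D projection with (a) the conventional ramp filter |w| and
# (b) a noise-damping regularized ramp. Only the filter changes.
def filter_projection(p, beta=0.0):
    n = len(p)
    w = np.fft.fftfreq(n)                    # digital frequencies
    ramp = np.abs(w) / (1.0 + beta * w**2)   # beta = 0 -> conventional ramp
    return np.real(np.fft.ifft(np.fft.fft(p) * ramp))

t = np.linspace(0, 2 * np.pi, 64)
proj = np.sin(t) + 0.1 * np.cos(17 * t)      # invented projection + "noise"
conventional = filter_projection(proj, beta=0.0)
model_based = filter_projection(proj, beta=50.0)
```

    The regularized filter is strictly below the ramp at every nonzero frequency, so the filtered projection has less high-frequency energy, which is the mechanism behind the noise reduction claimed above.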

  16. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.

  17. Multivariate General Linear Models (MGLM) on Riemannian Manifolds with Applications to Statistical Analysis of Diffusion Weighted Images

    PubMed Central

    Kim, Hyunwoo J.; Adluru, Nagesh; Collins, Maxwell D.; Chung, Moo K.; Bendlin, Barbara B.; Johnson, Sterling C.; Davidson, Richard J.; Singh, Vikas

    2014-01-01

    Linear regression is a parametric model which is ubiquitous in scientific analysis. The classical setup where the observations and responses, i.e., (xi, yi) pairs, are Euclidean is well studied. The setting where yi is manifold valued is a topic of much interest, motivated by applications in shape analysis, topic modeling, and medical imaging. Recent work gives strategies for max-margin classifiers, principal components analysis, and dictionary learning on certain types of manifolds. For parametric regression specifically, results within the last year provide mechanisms to regress one real-valued parameter, xi ∈ R, against a manifold-valued variable, yi ∈ ℳ. We seek to substantially extend the operating range of such methods by deriving schemes for multivariate multiple linear regression: a manifold-valued dependent variable against multiple independent variables, i.e., f : Rn → ℳ. Our variational algorithm efficiently solves for multiple geodesic bases on the manifold concurrently via gradient updates. This allows us to answer questions such as: what is the relationship of the measurement at voxel y to disease when conditioned on age and gender? We show applications to statistical analysis of diffusion weighted images, which give rise to regression tasks on the manifold GL(n)/O(n) for diffusion tensor images (DTI) and the Hilbert unit sphere for orientation distribution functions (ODF) from high angular resolution acquisition. The companion open-source code is available on nitrc.org/projects/riem_mglm. PMID:25580070
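
    Manifold regression schemes of this kind are built from two primitives, sketched here for the unit sphere (the ODF case above): the exponential map, which moves along a geodesic, and the log map, which returns the tangent vector pointing from one point toward another. The points are invented; the paper's algorithm additionally optimizes several geodesic bases jointly.

```python
import numpy as np

# Exponential and logarithm maps on the unit sphere S^2 embedded in R^3.
def exp_map(p, v):
    """Walk from p along tangent vector v for geodesic distance |v|."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def log_map(p, q):
    """Tangent vector at p pointing toward q, with length d(p, q)."""
    w = q - np.dot(p, q) * p              # project q onto tangent space at p
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return np.zeros_like(p)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return theta * w / nw

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
v = log_map(p, q)                          # tangent at p aiming at q
back = exp_map(p, v)                       # walking along it recovers q
```

    Gradient-style updates for geodesic regression repeatedly compose these two maps: residuals live in tangent spaces via log, and parameter updates return to the manifold via exp.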

  18. Multiple Linear Regressions by Maximizing the Likelihood under Assumption of Generalized Gauss-Laplace Distribution of the Error.

    PubMed

    Jäntschi, Lorentz; Bálint, Donatella; Bolboacă, Sorana D

    2016-01-01

    Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients on linear models with two predictors without any constrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution was used to relax the constrictive assumption of the normal distribution of the error. Therefore, the Gauss-Laplace distribution of the error could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected.
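
    The estimation idea can be sketched by fitting the power p of a generalized Gauss-Laplace (generalized Gaussian) error density f(x) = p / (2*a*Gamma(1/p)) * exp(-(|x|/a)^p) by maximum likelihood instead of fixing p = 2. The residuals below are simulated Laplace errors (true p = 1); the paper fits p jointly with regression coefficients on real data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Simulated residuals with heavy tails: Laplace errors correspond to p = 1.
rng = np.random.default_rng(1)
resid = rng.laplace(scale=1.0, size=4000)

def nll(params):
    """Negative log-likelihood of the generalized Gaussian, parametrized on
    the log scale to keep scale a and power p positive."""
    log_a, log_p = params
    a, p = np.exp(log_a), np.exp(log_p)
    return -(len(resid) * (np.log(p) - np.log(2 * a) - gammaln(1.0 / p))
             - np.sum((np.abs(resid) / a) ** p))

res = minimize(nll, x0=[0.0, np.log(2.0)], method="Nelder-Mead")
p_hat = np.exp(res.x[1])      # should land near 1, not the Gaussian p = 2
```

    Starting from the Gaussian value p = 2, the likelihood pulls the power toward 1, which is the kind of departure from two that the study reports.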

  19. Multiple Linear Regressions by Maximizing the Likelihood under Assumption of Generalized Gauss-Laplace Distribution of the Error

    PubMed Central

    Jäntschi, Lorentz

    2016-01-01

    Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients on linear models with two predictors without any constrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution was used to relax the constrictive assumption of the normal distribution of the error. Therefore, the Gauss-Laplace distribution of the error could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected. PMID:28090215

  20. Generalized two-dimensional (2D) linear system analysis metrics (GMTF, GDQE) for digital radiography systems including the effect of focal spot, magnification, scatter, and detector characteristics

    PubMed Central

    Kuhls-Gilcrist, Andrew T.; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen

    2010-01-01

    The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks. PMID:21243038
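
    The cascaded-blur idea behind a generalized MTF can be sketched in one dimension. The Gaussian MTF shapes, the frequency-scaling convention (detector blur evaluated at f/m, focal-spot blur at f*(m-1)/m for magnification m), the additive scatter term, and all numeric widths are illustrative assumptions, not the measured 2D quantities of the study.

```python
import numpy as np

# Illustrative generalized MTF combining detector, focal spot, and scatter.
def gaussian_mtf(f, sigma):
    return np.exp(-2 * (np.pi * sigma * f) ** 2)

def gmtf(f, m, sigma_det, sigma_focal, scatter_fraction, sigma_scatter):
    det = gaussian_mtf(f / m, sigma_det)                 # detector blur
    focal = gaussian_mtf(f * (m - 1) / m, sigma_focal)   # focal spot blur
    scatter = (1 - scatter_fraction) + scatter_fraction * gaussian_mtf(f, sigma_scatter)
    return det * focal * scatter

f = np.linspace(0.0, 3.0, 50)      # cycles/mm in the object plane (invented)
g = gmtf(f, m=1.2, sigma_det=0.1, sigma_focal=0.3,
         scatter_fraction=0.4, sigma_scatter=5.0)
```

    At zero frequency the generalized MTF is one, and every additional blur source pulls the curve down faster, which is why total-system metrics fall below the intrinsic detector MTF.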

  1. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time, and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data. They interrelate all of the data to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. The consistency and integrity of the model are assured; therefore, the consistency and integrity of the various specification documents are ensured. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed, and how needs are being addressed by international standards writing teams.

  2. a General Algorithm for the Generation of Quasilattices by Means of the Cut and Projection Method Using the SIMPLEX Method of Linear Programming and the Moore-Penrose Generalized Inverse

    NASA Astrophysics Data System (ADS)

    Aragón, J. L.; Vázquez Polo, G.; Gómez, A.

    A computational algorithm for the generation of quasiperiodic tiles based on the cut and projection method is presented. The algorithm is capable of projecting any type of lattice embedded in any Euclidean space onto any subspace, making it possible to generate quasiperiodic tiles with any desired symmetry. The simplex method of linear programming and the Moore-Penrose generalized inverse are used to construct the cut (strip) in the higher dimensional space which is to be projected.
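
    The simplest instance of the cut-and-projection method is 2D to 1D: keep the points of Z^2 lying inside a strip around a line of golden-ratio slope and project them onto that line, yielding the Fibonacci quasilattice. This sketch hard-codes the 2D geometry; the general algorithm in the record constructs the strip in arbitrary dimensions via the simplex method and the Moore-Penrose inverse.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
e_par = np.array([phi, 1.0]) / np.sqrt(phi**2 + 1)    # physical direction
e_perp = np.array([-1.0, phi]) / np.sqrt(phi**2 + 1)  # internal direction

# All integer points in a box, filtered by the strip (acceptance window =
# projection of the unit square onto the internal direction).
pts = np.array([(i, j) for i in range(-15, 16) for j in range(-15, 16)])
window = 0.5 * (abs(e_perp[0]) + abs(e_perp[1]))
selected = pts[np.abs(pts @ e_perp) <= window]

# Project the accepted points onto the physical line and look at spacings.
coords = np.sort(selected @ e_par)
spacings = np.round(np.diff(coords), 6)
lengths = sorted(set(spacings))    # exactly two tile lengths, ratio = phi
```

    The two resulting tile lengths are the projections of the steps (1,0) and (0,1), and their ratio is the golden mean, the hallmark of the Fibonacci chain.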

  3. Wronskian solutions of the T-, Q- and Y-systems related to infinite dimensional unitarizable modules of the general linear superalgebra gl (M | N)

    NASA Astrophysics Data System (ADS)

    Tsuboi, Zengo

    2013-05-01

    In [1] (Z. Tsuboi, Nucl. Phys. B 826 (2010) 399, arXiv:0906.2039), we proposed Wronskian-like solutions of the T-system for the [M,N]-hook of the general linear superalgebra gl(M|N). We have generalized these Wronskian-like solutions to the ones for the general T-hook, which is a union of an [M1,N1]-hook and an [M2,N2]-hook (M = M1 + M2, N = N1 + N2). These solutions are related to Weyl-type supercharacter formulas of infinite dimensional unitarizable modules of gl(M|N). Our solutions also include a Wronskian-like solution discussed in [2] (N. Gromov, V. Kazakov, S. Leurent, Z. Tsuboi, JHEP 1101 (2011) 155, arXiv:1010.2720) in relation to the AdS5/CFT4 spectral problem.

  4. Observer-based distributed adaptive fault-tolerant containment control of multi-agent systems with general linear dynamics.

    PubMed

    Ye, Dan; Chen, Mengmeng; Li, Kui

    2017-06-22

    In this paper, we consider the distributed containment control problem of multi-agent systems with actuator bias faults based on an observer method. The objective is to drive the followers into the convex hull spanned by the dynamic leaders, where the input is unknown but bounded. By constructing an observer to estimate the states and bias faults, an effective distributed adaptive fault-tolerant controller is developed. Different from the traditional method, an auxiliary controller gain is designed to deal with the unknown inputs and bias faults together. Moreover, the coupling gain can be adjusted online through the adaptive mechanism without using the global information. Furthermore, the proposed control protocol can guarantee that all the signals of the closed-loop systems are bounded and all the followers converge to the convex hull with bounded residual errors formed by the dynamic leaders. Finally, a decoupled linearized longitudinal motion model of the F-18 aircraft is used to demonstrate the effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
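
    The observer idea underlying such protocols can be sketched with a plain Luenberger observer, xhat' = A xhat + B u + L (y - C xhat), whose estimation error decays whenever A - L C is Hurwitz. The system matrices and gain below are invented; the paper's observer additionally estimates the actuator bias faults.

```python
import numpy as np

# Second-order linear system with scalar input and scalar measurement.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])        # chosen so A - L @ C is Hurwitz

dt, steps = 0.01, 2000
x = np.array([1.0, -1.0])           # true state
xhat = np.zeros(2)                  # observer state, wrong initial guess
for k in range(steps):
    u = np.array([np.sin(k * dt)])  # arbitrary bounded input
    y = C @ x                       # measurement
    x = x + dt * (A @ x + (B @ u).ravel())
    xhat = xhat + dt * (A @ xhat + (B @ u).ravel() + (L @ (y - C @ xhat)).ravel())

err = np.linalg.norm(x - xhat)      # estimation error after 20 s
```

    Because the error dynamics e' = (A - L C) e are autonomous, the estimate converges regardless of the input, which is what lets the controller act on xhat instead of x.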

  5. Distributed Time-Varying Formation Robust Tracking for General Linear Multiagent Systems With Parameter Uncertainties and External Disturbances.

    PubMed

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-05-18

    This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.

  6. Polynomial approximation of functions of matrices and its application to the solution of a general system of linear equations

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1987-01-01

    During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N×N matrix (e.g., exp(A), sin(A), A^-1). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly. Usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). It is reduced to a problem of approximating f(z) by a polynomial in z while z belongs to the domain D in the complex plane which includes all the eigenvalues of A. This problem of approximation is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties. The approximation thus achieved is almost best. Implementing the algorithm for some practical problems is described. Since a solution to a linear system Ax = b is x = A^-1 b, an iterative solution to it can be regarded as a polynomial approximation to f(A) = A^-1. Implementing the algorithm in this case is also described.
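
    The closing remark, that an iterative solve of Ax = b is a polynomial approximation of f(A) = A^-1 applied to b, can be sketched with the truncated Neumann series A^-1 = sum_k (I - A)^k, valid when the eigenvalues of A lie close enough to 1. This is a simpler polynomial than the near-best interpolation polynomial of the record; the matrix is invented.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 0.1 * rng.uniform(-1.0, 1.0, size=(6, 6))  # ||N||_inf <= 0.6 < 1
A = np.eye(6) - N                              # eigenvalues clustered near 1
b = rng.normal(size=6)

# x = (sum_{k<60} N^k) b: a degree-59 polynomial in A applied to b,
# built with matrix-vector products only, never forming A^-1.
x_poly = np.zeros(6)
term = b.copy()
for _ in range(60):
    x_poly += term
    term = N @ term

x_exact = np.linalg.solve(A, b)
err = np.linalg.norm(x_poly - x_exact)
```

    Each iteration costs one matrix-vector product, which is the whole point: f(A)v is approximated without ever constructing f(A) explicitly.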

  7. VISCEL: A general-purpose computer program for analysis of linear viscoelastic structures (user's manual), volume 1

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Akyuz, F. A.; Heer, E.

    1972-01-01

    This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN 2 for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps in the logarithmic scale, thereby reducing computational efforts resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in FORTRAN 5 language for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11700 words decimal of core storage.

  8. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustino, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric parameters and of surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) is presented; and the equation of the forward radiative transfer (RT) problem is presented. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but we need only two numerical solutions, one of the forward RT problem and one of the adjoint RT problem, to compute all WFs and PDs we can think of. In this presentation we discuss applications of both the linearization and adjoint approaches.
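
    Why one adjoint solve yields all sensitivities at once can be sketched on a generic linear problem: for an observable J = c.T @ x with A @ x = b, the full gradient dJ/db equals the adjoint solution of A.T @ lam = c. The matrix and vectors are invented; in the RT setting A stands for the transport operator and c for the measurement functional.

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.eye(5) + 0.1 * rng.normal(size=(5, 5))   # invented well-conditioned operator
b = rng.normal(size=5)                          # invented source
c = rng.normal(size=5)                          # invented measurement functional

# Adjoint approach: ONE extra solve gives every component of dJ/db.
lam = np.linalg.solve(A.T, c)
grad_adjoint = lam

# Direct (finite-difference) approach: one forward solve PER component.
x0 = np.linalg.solve(A, b)
J0 = c @ x0
eps = 1e-6
grad_fd = np.empty(5)
for i in range(5):
    bp = b.copy()
    bp[i] += eps
    grad_fd[i] = (c @ np.linalg.solve(A, bp) - J0) / eps
```

    The cost contrast (two solves versus one solve per parameter) is exactly the synergy argument made above for computing all WFs and PDs from one forward and one adjoint RT solution.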

  10. An a-posteriori error estimator for linear elastic fracture mechanics using the stable generalized/extended finite element method

    NASA Astrophysics Data System (ADS)

    Lins, R. M.; Ferreira, M. D. C.; Proença, S. P. B.; Duarte, C. A.

    2015-12-01

    In this study, a recovery-based a-posteriori error estimator originally proposed for the Corrected XFEM is investigated in the framework of the stable generalized FEM (SGFEM). Both Heaviside and branch functions are adopted to enrich the approximations in the SGFEM. Some necessary adjustments to adapt the expressions defining the enhanced stresses in the original error estimator are discussed in the SGFEM framework. Relevant aspects such as effectivity indexes, error distribution, convergence rates and accuracy of the recovered stresses are used in order to highlight the main findings and the effectiveness of the error estimator. Two benchmark problems of the 2-D fracture mechanics are selected to assess the robustness of the error estimator hereby investigated. The main findings of this investigation are: the SGFEM shows higher accuracy than G/XFEM and a reduced sensitivity to blending element issues. The error estimator can accurately capture these features of both methods.

  11. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    SciTech Connect

    Lipparini, Filippo; Scalmani, Giovanni; Frisch, Michael J.; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Mennucci, Benedetta

    2014-11-14

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.

  12. Modeling Learning in Doubly Multilevel Binary Longitudinal Data Using Generalized Linear Mixed Models: An Application to Measuring and Explaining Word Learning.

    PubMed

    Cho, Sun-Joo; Goodwin, Amanda P

    2016-04-01

    When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.

  13. Model Based Definition

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  14. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-11-01

    Using the locally-enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioning approach when only singular functions are used in the blending elements. This paper combines numeric enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioning approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering convergence rate in strain energy, growth rate of condition number, and computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  15. Generalized linear dynamics of a plant-parasitic nematode population and the economic evaluation of crop rotations.

    PubMed

    Van Den Berg, W; Rossing, W A H

    2005-03-01

    In 1-year experiments, the final population density of nematodes is usually modeled as a function of initial density. Often, estimation of the parameters is precarious because nematode measurements, although laborious and expensive, are imprecise and the range in initial densities may be small. The estimation procedure can be improved by using orthogonal regression with a parameter for initial density on each experimental unit. In multi-year experiments parameters of a dynamic model can be estimated with optimization techniques like simulated annealing or Bayesian methods such as Markov chain Monte Carlo (MCMC). With these algorithms information from different experiments can be combined. In multi-year dynamic models, the stability of the steady states is an important issue. With chaotic dynamics, prediction of densities and associated economic loss will be possible only on a short timescale. In this study, a generic model was developed that describes population dynamics in crop rotations. Mathematical analysis showed stable steady states do exist for this dynamic model. Using the Metropolis algorithm, the model was fitted to data from a multi-year experiment on Pratylenchus penetrans dynamics with treatments that varied between years. For three crops, parameters for a yield loss assessment model were available and gross margin of the six possible rotations comprising these three crops and a fallow year were compared at the steady state of nematode density. Sensitivity of mean gross margin to changes in the parameter estimates was investigated. We discuss the general applicability of the dynamic rotation model and the opportunities arising from combination of the model with Bayesian calibration techniques for more efficient utilization and collection of data relevant for economic evaluation of crop rotations.
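
    The Metropolis step used to calibrate such dynamic models can be sketched by sampling the posterior of a growth parameter in a simple density-dependent map Pf = a*Pi / (1 + Pi/K). The data, carrying capacity, and noise level are invented; the study fits a richer multi-year rotation model with several parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
K, a_true, sigma = 100.0, 3.0, 5.0
Pi = np.array([10.0, 30.0, 60.0, 120.0, 200.0])       # initial densities
Pf_obs = a_true * Pi / (1 + Pi / K) + rng.normal(0, sigma, size=Pi.size)

def log_post(a):
    """Log posterior with a flat prior on a > 0 and Gaussian noise."""
    if a <= 0:
        return -np.inf
    resid = Pf_obs - a * Pi / (1 + Pi / K)
    return -0.5 * np.sum(resid**2) / sigma**2

samples, a = [], 1.0                                  # deliberately bad start
lp = log_post(a)
for _ in range(20000):
    prop = a + rng.normal(0, 0.2)                     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
        a, lp = prop, lp_prop
    samples.append(a)

a_mean = np.mean(samples[5000:])                      # posterior mean after burn-in
```

    The same accept/reject kernel scales to the multi-parameter rotation model, and the posterior samples feed directly into the economic evaluation at the steady state.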

  16. Exact power series solutions of the structure equations of the general relativistic isotropic fluid stars with linear barotropic and polytropic equations of state

    NASA Astrophysics Data System (ADS)

    Harko, T.; Mak, M. K.

    2016-09-01

    Obtaining exact solutions of the spherically symmetric general relativistic gravitational field equations describing the interior structure of an isotropic fluid sphere is a long-standing problem in theoretical and mathematical physics. The usual approach to this problem consists mainly in the numerical investigation of the Tolman-Oppenheimer-Volkoff and of the mass continuity equations, which describe the hydrostatic stability of dense stars. In the present paper we introduce an alternative approach for the study of the relativistic fluid sphere, based on the relativistic mass equation, obtained by eliminating the energy density in the Tolman-Oppenheimer-Volkoff equation. Despite its apparent complexity, the relativistic mass equation can be solved exactly by using a power series representation for the mass, and the Cauchy convolution for infinite power series. We obtain exact series solutions for general relativistic dense astrophysical objects described by the linear barotropic and the polytropic equations of state, respectively. For the polytropic case we obtain the exact power series solution corresponding to arbitrary values of the polytropic index n. The explicit form of the solution is presented for the polytropic index n=1, and for the indices n=1/2 and n=1/5, respectively. The case of n=3 is also considered. In each case the exact power series solution is compared with the exact numerical solutions, which are reproduced by the power series solutions truncated to seven terms only. The power series representations of the geometric and physical properties of the linear barotropic and polytropic stars are also obtained.
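
The Cauchy convolution step mentioned above, multiplying two power series by convolving their coefficient sequences, is easy to sketch. The series used for the check is a generic textbook example, not one of the stellar-structure series from the paper:

```python
import numpy as np

def cauchy_product(a, b):
    # Coefficients of the product of two power series,
    # c_n = sum_{k=0}^{n} a_k * b_{n-k}, truncated to len(a) terms.
    n = len(a)
    c = np.zeros(n)
    for i in range(n):
        c[i] = sum(a[k] * b[i - k] for k in range(i + 1))
    return c

# Check: 1/(1-x) has coefficients a_n = 1, and the square satisfies
# 1/(1-x)^2 = sum_n (n+1) x^n.
ones = np.ones(8)
prod = cauchy_product(ones, ones)
print(prod)  # [1. 2. 3. 4. 5. 6. 7. 8.]
```

In the paper's setting the same recursion lets one generate the mass coefficients term by term, which is why a seven-term truncation can already reproduce the numerical solution.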

  17. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    NASA Astrophysics Data System (ADS)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    Survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that diseases that occur rarely could have shorter survival times, or vice versa. Because of this, joint modelling of these two variables can provide more informative and improved results than modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model to jointly model survival and count data. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. These measures indicate that the GRNN model fits the data better than the GLMM model.
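
The fit measures used for the comparison (RMSE, AME, and R) are standard and easy to reproduce. The observations and the two sets of model predictions below are made-up placeholder numbers, not the Dengue data:

```python
import numpy as np

def fit_measures(y_obs, y_pred):
    # Model-fit measures used to compare competing models: root mean
    # square error, absolute mean error, and Pearson's correlation R.
    err = y_obs - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    ame = np.mean(np.abs(err))
    r = np.corrcoef(y_obs, y_pred)[0, 1]
    return rmse, ame, r

# Hypothetical predictions from two competing models.
y = np.array([3.0, 5.0, 7.0, 9.0])
pred_a = np.array([2.8, 5.3, 6.9, 9.2])  # model A: small errors
pred_b = np.array([2.0, 6.0, 6.0, 10.0])  # model B: larger errors

rmse_a, ame_a, r_a = fit_measures(y, pred_a)
rmse_b, ame_b, r_b = fit_measures(y, pred_b)
print(rmse_a, ame_a, r_a)
print(rmse_b, ame_b, r_b)
```

On these numbers model A has the lower RMSE and AME and the higher R, which is the same direction of comparison the study uses to favour the GRNN over the GLMM.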

  18. Fast optical signals in the sensorimotor cortex: General Linear Convolution Model applied to multiple source-detector distance-based data.

    PubMed

    Chiarelli, Antonio Maria; Romani, Gian Luca; Merla, Arcangelo

    2014-01-15

    In this study, we applied the General Linear Convolution Model to detect fast optical signals (FOS) in the somatosensory cortex, and to study their dependence on the source-detector separation distance (2.0 to 3.5 cm) and irradiated light wavelength (690 and 830 nm). We modeled the impulse response function as a rectangular function lasting 30 ms, with variable time delay with respect to the stimulus onset. The model was tested in a cohort of 20 healthy volunteers who underwent supra-motor-threshold electrical stimulation of the median nerve. The impulse response function located the maximal response at 70 ms to 110 ms after stimulus onset, in agreement with classical somatosensory-evoked potentials reported in the literature and with previous optical imaging studies based on grand-average processing. Phase signals at the longer wavelength identified FOS at all source-detector separation distances except the shortest one. Intensity signals detected FOS only at the greatest distance, i.e., for the largest channel depth. There was no activation for the shorter-wavelength light. Correlational analysis between the phase and intensity of FOS further confirmed that the changes associated with neuronal activity in the activated cortical volume were diffusive rather than due to optical absorption. Our study demonstrates the reliability of our method based on the General Linear Convolution Model for the detection of fast cortical activation through FOS. © 2013 Elsevier Inc. All rights reserved.
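
The modelling approach described, a stimulus train convolved with a 30 ms boxcar impulse response and used as a regressor in a general linear model, can be sketched on synthetic data. The sampling rate, onset times, delay, and noise level below are illustrative assumptions, not the study's acquisition parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1000                      # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)  # 2 s of signal
onsets = [200, 700, 1200]      # hypothetical stimulus onsets (samples)

stim = np.zeros_like(t)
stim[onsets] = 1.0

# Impulse response modelled as a 30 ms boxcar, delayed 80 ms
# after stimulus onset (delay is the free parameter in the model).
delay, width = 80, 30          # in samples (= ms at 1 kHz)
irf = np.zeros(delay + width)
irf[delay:] = 1.0

# Design regressor: stimulus train convolved with the boxcar IRF.
regressor = np.convolve(stim, irf)[: len(t)]

# Simulated optical signal: scaled regressor plus noise.
signal = 0.5 * regressor + rng.normal(0, 0.1, len(t))

# Ordinary least-squares GLM fit (design = regressor + intercept).
X = np.column_stack([regressor, np.ones(len(t))])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(beta[0])  # estimated response amplitude, near the true 0.5
```

Sweeping `delay` and keeping the value that maximizes the fitted amplitude (or its t-statistic) mirrors how a variable-latency response can be localized in time.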

  19. Dynamic Model Based Vector Control of Linear Induction Motor

    DTIC Science & Technology

    2012-05-01

    reference frame. In Section III, the basic structure of vector control is introduced. Proportional-Integral (PI) control is incorporated into vector...The load mass is then released from the slider. The performed simulation is based on selected PI control gains of Kp = 35 and KI = 75. Fig. 12 shows...controlled separately to maintain a desired flux level in the machine. The force current Isq is proportional to the load which is regulated using a PI

  20. Principles of models based engineering

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  1. Extending the simple linear regression model to account for correlated responses: an introduction to generalized estimating equations and multi-level mixed modelling.

    PubMed

    Burton, P; Gurrin, L; Sly, P

    1998-06-15

    Much of the research in epidemiology and clinical science is based upon longitudinal designs which involve repeated measurements of a variable of interest in each of a series of individuals. Such designs can be very powerful, both statistically and scientifically, because they enable one to study changes within individual subjects over time or under varied conditions. However, this power arises because the repeated measurements tend to be correlated with one another, and this must be taken into proper account at the time of analysis or misleading conclusions may result. Recent advances in statistical theory and in software development mean that studies based upon such designs can now be analysed more easily, in a valid yet flexible manner, using a variety of approaches which include the use of generalized estimating equations, and mixed models which incorporate random effects. This paper provides a particularly simple illustration of the use of these two approaches, taking as a practical example the analysis of a study which examined the response of portable peak expiratory flow meters to changes in true peak expiratory flow in 12 children with asthma. The paper takes the reader through the relevant practicalities of model fitting, interpretation and criticism and demonstrates that, in a simple case such as this, analyses based upon these model-based approaches produce reassuringly similar inferences to standard analyses based upon more conventional methods.
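
The central point of the abstract, that repeated measurements within subjects are correlated and that ignoring this understates uncertainty, can be illustrated with a small simulation. The cluster-robust "sandwich" variance computed here captures the idea behind GEE standard errors under an independence working correlation; all numbers are generic placeholders, not the peak-flow data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 12 subjects with 8 repeated measurements each:
# y = 2 + 0.5*x + subject effect + noise (hypothetical values).
n_subj, n_rep = 12, 8
subj = np.repeat(np.arange(n_subj), n_rep)
x = rng.normal(size=n_subj * n_rep)
u = rng.normal(0, 1.0, n_subj)          # random subject intercepts
y = 2.0 + 0.5 * x + u[subj] + rng.normal(0, 0.5, len(x))

X = np.column_stack([np.ones(len(x)), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

# Naive OLS variance (pretends all 96 observations are independent).
XtX_inv = np.linalg.inv(X.T @ X)
var_naive = resid @ resid / (len(y) - 2) * XtX_inv

# Cluster-robust "sandwich" variance: sum the score contributions
# within each subject before forming the middle ("meat") matrix.
meat = np.zeros((2, 2))
for s in range(n_subj):
    idx = subj == s
    g = X[idx].T @ resid[idx]
    meat += np.outer(g, g)
var_robust = XtX_inv @ meat @ XtX_inv

print(np.sqrt(var_naive[0, 0]), np.sqrt(var_robust[0, 0]))
```

With strong within-subject correlation the robust standard error for the intercept is substantially larger than the naive one, which is exactly the misleading-precision problem the paper warns about.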

  2. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    PubMed

    Hoyer, Annika; Kuss, Oliver

    2016-08-02

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared. © The Author(s) 2016.

  3. First principles approach to the Abraham-Minkowski controversy for the momentum of light in general linear non-dispersive media

    NASA Astrophysics Data System (ADS)

    Ramos, Tomás; Rubilar, Guillermo F.; Obukhov, Yuri N.

    2015-02-01

    We study the problem of the definition of the energy-momentum tensor of light in general moving non-dispersive media with linear constitutive law. Using the basic principles of classical field theory, we show that for the correct understanding of the problem, one needs to carefully distinguish situations when the material medium is modeled either as a background on which light propagates or as a dynamical part of the total system. In the former case, we prove that the (generalized) Belinfante-Rosenfeld (BR) tensor for the electromagnetic field coincides with the Minkowski tensor. We derive a complete set of balance equations for this open system and show that the symmetries of the background medium are directly related to the conservation of the Minkowski quantities. In particular, for isotropic media, the angular momentum of light is conserved despite the fact that the Minkowski tensor is non-symmetric. For the closed system of light interacting with matter, we model the material medium as a relativistic non-dissipative fluid and we prove that it is always possible to express the total BR tensor of the closed system either in the Abraham or in the Minkowski separation. However, in the case of dynamical media, the balance equations have a particularly convenient form in terms of the Abraham tensor. Our results generalize previous attempts and provide a first principles basis for a unified understanding of the long-standing Abraham-Minkowski controversy without ad hoc arguments.

  4. Limitations of Non Model-Based Recognition Schemes

    DTIC Science & Technology

    1991-05-01

    general classes: model-based vs. non model-based schemes. In this paper we establish some limitations on the class of non model-based recognition schemes. A ...perfect, but is allowed to make mistakes and misidentify each object from a substantial fraction of viewing directions. It follows that every...symmetric objects) a nontrivial recognition scheme exists. We define the notion of a discrimination power of a consistent recognition function for a class

  5. Application of the Generalized Scattering Matrix Method and Time Domain Computation of the Transverse Long Range Wake in Linear Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Jöstingmeier, A.; Dohlus, M.; Rieckmann, C.

    1997-05-01

    For a proper design of linear colliders it is important to know the transverse long range wake which can either be computed directly in time domain or alternatively from the corresponding higher order resonant modes. In this contribution, the MAFIA program package has been applied to calculate the wake function of the 180-cell accelerating structure used for the S-band linear collider at DESY. Furthermore, the corresponding resonant modes of the first, third and sixth dipole passband have been analyzed using an accurate and numerically efficient generalized scattering matrix method. For this method a special numerical technique is suggested allowing a reliable computation of the so-called trapped modes. The agreement of both methods turns out to be excellent. From the results one can predict that the sixth dipole passband significantly contributes to the transverse long range wakefield. From the calculation of the wake functions corresponding to the individual passbands it will finally be shown which modes have to be included in beam dynamics simulations.

  6. Novel and general approach to linear filter design for contrast-to-noise ratio enhancement of magnetic resonance images with multiple interfering features in the scene

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.

    1992-04-01

    Maximizing the minimum of the absolute contrast-to-noise ratios (CNRs) between a desired feature and multiple interfering processes, by linear combination of images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions for the case of two interfering features first, then for three interfering features, and, finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its applications to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement for the smallest absolute CNR is obtained.
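
The core of the orthogonalization idea, building linear filter weights that null the interfering features while preserving the desired one, can be illustrated in a few lines. The signature vectors below are arbitrary placeholders (one value per image in the sequence), not MRI data, and this sketch shows only the nulling step, not the full max-min CNR solution:

```python
import numpy as np

# Hypothetical mean signal vectors across a 4-image MRI scene
# sequence for the desired feature and two interfering features.
desired = np.array([1.0, 0.2, -0.5, 0.8])
interf1 = np.array([0.3, 1.0, 0.4, -0.2])
interf2 = np.array([-0.1, 0.5, 1.0, 0.6])

def gram_schmidt(vectors):
    # Orthonormalize a list of vectors (classical Gram-Schmidt).
    basis = []
    for v in vectors:
        w = v - sum((v @ b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

# Orthonormal basis for the interfering subspace.
q1, q2 = gram_schmidt([interf1, interf2])

# Filter weights: the desired signature with its components along
# the interfering subspace projected out, so both interfering
# features produce (near) zero filtered contrast.
w = desired - (desired @ q1) * q1 - (desired @ q2) * q2

print(w @ interf1, w @ interf2)  # both ~0: interference suppressed
print(w @ desired)               # nonzero: desired contrast survives
```

The filtered image is then the weighted sum of the sequence images with weights `w`, which is the linear combination the abstract refers to.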

  7. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    PubMed

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.

  8. Euclidean Closed Linear Transformations of Complex Spacetime and generally of Complex Spaces of dimension four endowed with the Same or Different Metric

    NASA Astrophysics Data System (ADS)

    Vossos, Spyridon; Vossos, Elias

    2016-08-01

    closed LSTT is reduced, if one RIO has small velocity wrt another RIO. Thus, we have an infinite number of closed LSTTs, each one with the corresponding SR theory. In case we relate accelerated observers with a variable metric of spacetime, we have the case of General Relativity (GR). To make this clear, we produce a generalized Schwarzschild metric, which is in accordance with any SR based on this closed complex LSTT and with the Einstein equations. The application of this kind of transformation to SR and GR is obvious. But the results may be applied to any linear space of dimension four endowed with a steady or variable metric, whose elements (four-vectors) have a spatial part (vector) with Euclidean metric.

  9. Model based control of polymer composite manufacturing processes

    NASA Astrophysics Data System (ADS)

    Potaraju, Sairam

    2000-10-01

    The objective of this research is to develop tools that help process engineers design, analyze and control polymeric composite manufacturing processes to achieve higher productivity and cost reduction. Current techniques for process design and control of composite manufacturing suffer from the paucity of good process models that can accurately represent these non-linear systems. Existing models developed by researchers in the past are designed to be process and operation specific, hence generating new simulation models is time consuming and requires significant effort. To address this issue, an Object Oriented Design (OOD) approach is used to develop a component-based model building framework. Process models for two commonly used industrial processes (Injected Pultrusion and Autoclave Curing) are developed using this framework to demonstrate the flexibility. Steady state and dynamic validation of this simulator is performed using a bench scale injected pultrusion process. This simulator could not be implemented online for control due to computational constraints. Models that are fast enough for online implementation, with nearly the same degree of accuracy, are developed using a two-tier scheme. First, lower dimensional models that capture essential resin flow, heat transfer and cure kinetics important from a process monitoring and control standpoint are formulated. The second step is to reduce these low dimensional models to Reduced Order Models (ROM) suited for online model based estimation, control and optimization. Model reduction is carried out using Proper Orthogonal Decomposition (POD) technique in conjunction with a Galerkin formulation procedure. Subsequently, a nonlinear model-based estimation and inferential control scheme based on the ROM is implemented. In particular, this research work contributes in the following general areas: (1) Design and implementation of versatile frameworks for modeling and simulation of manufacturing processes using object
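
The POD step mentioned above rests on taking the singular value decomposition of a snapshot matrix and truncating to the dominant modes. A minimal sketch on synthetic snapshot data (not the pultrusion or autoclave models) might look like:

```python
import numpy as np

# Snapshot matrix: each column is the process state at one time
# instant on a 1-D spatial grid (hypothetical two-mode dynamics).
x = np.linspace(0, 1, 200)
t = np.linspace(0, 1, 50)
snapshots = np.array([np.sin(np.pi * x) * np.exp(-ti)
                      + 0.3 * np.sin(3 * np.pi * x) * ti
                      for ti in t]).T          # shape (200, 50)

# POD: left singular vectors of the snapshot matrix are the modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

# Number of modes needed to capture 99.9% of the snapshot energy.
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1

# Reduced-order reconstruction using only the r leading modes.
recon = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(r, err)  # few modes, tiny relative error
```

A Galerkin projection of the governing equations onto the columns of `U[:, :r]` then yields the small ODE system that is fast enough for online estimation and control.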

  10. Argumentation in Science Education: A Model-Based Framework

    ERIC Educational Resources Information Center

    Bottcher, Florian; Meisert, Anke

    2011-01-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons…

  11. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems.

    PubMed

    Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

    2015-08-05

    Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency to treat medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and eligible methods.

  12. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems

    PubMed Central

    Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

    2015-01-01

    Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency to treat medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and eligible methods. © 2015 Wiley Periodicals, Inc. PMID:26345131

  13. The impact of a misspecified random-effects distribution on the estimation and the performance of inferential procedures in generalized linear mixed models.

    PubMed

    Litière, S; Alonso, A; Molenberghs, G

    2008-07-20

    Estimation in generalized linear mixed models (GLMMs) is often based on maximum likelihood theory, assuming that the underlying probability model is correctly specified. However, the validity of this assumption is sometimes difficult to verify. In this paper we study, through simulations, the impact of misspecifying the random-effects distribution on the estimation and hypothesis testing in GLMMs. It is shown that the maximum likelihood estimators are inconsistent in the presence of misspecification. The bias induced in the mean-structure parameters is generally small, as long as the variability of the underlying random-effects distribution is small as well. However, the estimates of this variability are always severely biased. Given that the variance components are the only tool to study the variability of the true distribution, it is difficult to assess whether problems in the estimation of the mean structure occur. The type I error rate and the power of the commonly used inferential procedures are also severely affected. The situation is aggravated if more than one random effect is included in the model. Further, we propose to deal with possible misspecification by way of sensitivity analysis, considering several random-effects distributions. All the results are illustrated using data from a clinical trial in schizophrenia.

  14. On the characterization of dynamic supramolecular systems: a general mathematical association model for linear supramolecular copolymers and application on a complex two-component hydrogen-bonding system.

    PubMed

    Odille, Fabrice G J; Jónsson, Stefán; Stjernqvist, Susann; Rydén, Tobias; Wärnmark, Kenneth

    2007-01-01

    A general mathematical model for the characterization of the dynamic (kinetically labile) association of supramolecular assemblies in solution is presented. It is an extension of the equal K (EK) model by the stringent use of linear algebra to allow for the simultaneous presence of an unlimited number of different units in the resulting assemblies. It allows for the analysis of highly complex dynamic equilibrium systems in solution, including both supramolecular homo- and copolymers, without recourse to extensive approximations, in a field in which other analytical methods are difficult to apply. The derived mathematical methodology makes it possible to analyze dynamic systems such as supramolecular copolymers regarding, for instance, the degree of polymerization, the distribution of a given monomer in different copolymers, as well as its position in an aggregate. It is to date the only general means to characterize weak supramolecular systems. The model was fitted to NMR dilution titration data by using MATLAB, and a detailed algorithm for the optimization of the different parameters has been developed. The methodology is applied to a case study, a hydrogen-bonded supramolecular system, salen 4+porphyrin 5. The system is formally a two-component system but in reality a three-component system. This results in a complex dynamic system in which all monomers are associated to each other by hydrogen bonding with different association constants, resulting in homo- and copolymers 4n5m as well as cyclic structures 6 and 7, in addition to free 4 and 5. The system was analyzed by extensive NMR dilution titrations at variable temperatures. All chemical shifts observed at different temperatures were used in the fitting to obtain the ΔH° and ΔS° values producing the best global fit. From the derived general mathematical expressions, system 4+5 could be characterized with respect to the above-mentioned parameters.

  15. Genetic parameters for feather pecking and aggressive behavior in a large F2-cross of laying hens using generalized linear mixed models.

    PubMed

    Bennewitz, J; Bögelein, S; Stratz, P; Rodehutscord, M; Piepho, H P; Kjaer, J B; Bessei, W

    2014-04-01

    Feather pecking and aggressive pecking is a well-known problem in egg production. In the present study, genetic parameters for 4 feather-pecking-related traits were estimated using generalized linear mixed models. The traits were bouts of feather pecking delivered (FPD), bouts of feather pecking received (FPR), bouts of aggressive pecking delivered (APD), and bouts of aggressive pecking received (APR). An F2-design was established from 2 divergent selected founder lines. The lines were selected for low or high feather pecking for 10 generations. The number of F2 hens was 910. They were housed in pens with around 40 birds. Each pen was observed in 21 sessions of 20 min, distributed over 3 consecutive days. An animal model was applied that treated the bouts observed within 20 min as repeated observations. An over-dispersed Poisson distribution was assumed for observed counts and the link function was a log link. The model included a random animal effect, a random permanent environment effect, and a random day-by-hen effect. Residual variance was approximated on the link scale by the delta method. The results showed a heritability around 0.10 on the link scale for FPD and APD and of 0.04 for APR. The heritability of FPR was zero. For all behavior traits, substantial permanent environmental effects were observed. The approximate genetic correlation between FPD and APD (FPD and APR) was 0.81 (0.54). Egg production and feather eating records were collected on the same hens as well and were analyzed with a generalized linear mixed model, assuming a binomial distribution and using a probit link function. The heritability on the link scale for egg production was 0.40 and for feather eating 0.57. The approximate genetic correlation between FPD and egg production was 0.50 and between FPD and feather eating 0.73. Selection might help to reduce feather pecking, but this might result in an unfavorable correlated selection response reducing egg production. Feather eating and

  16. Model-based OPC for first-generation 193-nm lithography

    NASA Astrophysics Data System (ADS)

    Lucas, Kevin D.; Word, James C.; Vandenberghe, Geert; Verhaegen, Staf; Jonckheere, Rik M.

    2001-09-01

    The first 193 nm lithography processes using model-based OPC will soon be in production for 0.13 micrometer technology semiconductor manufacturing. However, the relative immaturity of 193 nm resist, etch and reticle processes places considerable strain upon the OPC software to compensate for increased non-linearity, proximity bias, corner rounding and line-end pullback. We have evaluated three leading model-based OPC software packages with 193 nm lithography on random logic poly gate designs for the 0.13 micrometer generation. Our analysis has been performed for three different OPC reticle write processes, two leading 193 nm resists and multiple illumination conditions. The results indicate that the maturity of the model-based OPC software tools for 193 nm lithography is generally good, although specific improvements are recommended.

  17. Using generalized linear models to estimate selectivity from short-term recoveries of tagged red drum Sciaenops ocellatus: Effects of gear, fate, and regulation period

    USGS Publications Warehouse

    Bacheler, N.M.; Hightower, J.E.; Burdick, S.M.; Paramore, L.M.; Buckel, J.A.; Pollock, K.H.

    2010-01-01

    Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated. © 2009 Elsevier B.V.
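
The model-selection step described, fitting candidate generalized linear models and ranking them by AIC, can be sketched with simulated data. The logistic fits, the dome-shaped "true" selectivity curve, and all numeric values below are illustrative assumptions, not the red drum analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical tag-recovery-style data: recovery probability depends
# on fish length, with a dome-shaped underlying selectivity curve.
n = 400
length = rng.uniform(20, 80, n)
z = (length - 50) / 15                        # standardized length
p_true = 0.9 * np.exp(-((length - 50) / 12) ** 2)  # dome shape
y = (rng.uniform(size=n) < p_true).astype(float)

def fit_logistic(X, y):
    # Newton/IRLS fit of a logistic GLM; returns (-logL, n_params).
    beta = np.zeros(X.shape[1])
    for _ in range(40):
        mu = 1 / (1 + np.exp(-(X @ beta)))
        W = mu * (1 - mu)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - mu))
    eta = X @ beta
    return -np.sum(y * eta - np.log1p(np.exp(eta))), X.shape[1]

# Two candidate selectivity models on the logit scale.
X_lin = np.column_stack([np.ones(n), z])            # monotone
X_dome = np.column_stack([np.ones(n), z, z ** 2])   # dome-shaped

aic = {}
for name, X in [("linear", X_lin), ("dome", X_dome)]:
    nll, k = fit_logistic(X, y)
    aic[name] = 2 * k + 2 * nll
print(aic)  # the dome-shaped model should have the lower AIC
```

With dome-shaped data the quadratic (dome) model wins by a wide AIC margin, which parallels the paper's finding that dome-shaped selectivity was the most parsimonious description for harvested fish.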

  19. Pathways of the North Pacific Intermediate Water identified through the tangent linear and adjoint models of an ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Fujii, Yosuke; Nakano, Toshiya; Usui, Norihisa; Matsumoto, Satoshi; Tsujino, Hiroyuki; Kamachi, Masafumi

    2013-04-01

    This study develops a strategy for tracing a target water mass, and applies it to analyzing the pathway of the North Pacific Intermediate Water (NPIW) from the subarctic gyre to the northwestern part of the subtropical gyre south of Japan in a simulation of an ocean general circulation model. This strategy estimates the pathway of the water mass that travels from an origin to a destination area during a specific period using a conservation property concerning tangent linear and adjoint models. In our analysis, a large fraction of the low salinity origin water mass of NPIW initially comes from the Okhotsk or Bering Sea, flows through the southeastern side of the Kuril Islands, and is advected to the Mixed Water Region (MWR) by the Oyashio current. It then enters the Kuroshio Extension (KE) at the first KE ridge, and is advected eastward by the KE current. However, it deviates southward from the KE axis around 158°E over the Shatsky Rise, or around 170°E on the western side of the Emperor Seamount Chain, and enters the subtropical gyre. It is finally transported westward by the recirculation flow. This pathway corresponds well to the shortcut route of NPIW from MWR to the region south of Japan inferred from analysis of the long-term freshening trend of NPIW observation. Copyright 2013 John Wiley & Sons, Ltd.

  1. Generalized linear mixed model analysis of risk factors for contamination of moisture-enhanced pork with Campylobacter jejuni and Salmonella enterica Typhimurium.

    PubMed

    Wen, Xuesong; Li, Jing; Dickson, James S

    2014-10-01

    Translocation of foodborne pathogens into the interior tissues of pork through moisture enhancement may be of concern if the meat is undercooked. In the present study, a five-strain mixture of Campylobacter jejuni or Salmonella enterica Typhimurium was evenly spread on the surface of fresh pork loins. Pork loins were injected, sliced, vacuum packaged, and stored. After storage, sliced pork was cooked by traditional grilling. Survival of Salmonella Typhimurium and C. jejuni in the interior tissues of the samples was analyzed by enumeration. The populations of these pathogens dropped below the detection limit (10 colony-forming units/g) in most samples that were cooked to 71.1°C or above. The general linear mixed model procedure was used to model the association between risk factors and the presence/absence of these pathogens after cooking. Estimated regression coefficients associated with the fixed effects indicated that the recovery probability of Salmonella Typhimurium was negatively associated with increasing level of enhancement. The effects of moisture enhancement and cooking on the recovery probability of C. jejuni were moderated by storage temperature. Our findings will assist food processors and regulatory agencies with science-based evaluation of the current processing, storage conditions, and cooking guidelines for moisture-enhanced pork.

  2. Removing an intersubject variance component in a general linear model improves multiway factoring of event-related spectral perturbations in group EEG studies.

    PubMed

    Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C

    2013-03-01

    Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task.
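The article's central point, that the subject-level offset cancels out of within-subject contrasts so only intrasubject variance enters their standard error, can be illustrated with a small simulation; all variances, counts, and the effect size below are arbitrary.

```python
import random
import statistics

random.seed(0)

# 20 subjects x 50 trials per condition.  Each subject carries a
# shared offset (the intersubject component, SD = 2.0) plus trial
# noise (the intrasubject component, SD = 1.0).
subjects, trials, effect = 20, 50, 0.5
diffs = []
for _ in range(subjects):
    offset = random.gauss(0, 2.0)                     # intersubject
    a = [offset + random.gauss(0, 1.0) for _ in range(trials)]
    b = [offset + effect + random.gauss(0, 1.0) for _ in range(trials)]
    # The offset cancels in the within-subject difference, so its
    # spread reflects only the intrasubject variance.
    diffs.append(statistics.mean(b) - statistics.mean(a))

print(round(statistics.mean(diffs), 2), round(statistics.stdev(diffs), 2))
```

The spread of the per-subject differences stays far below the 2.0 intersubject SD, which is the power gain from separating the variance components correctly.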

  3. Linear shaped charge

    DOEpatents

    Peterson, David; Stofleth, Jerome H.; Saul, Venner W.

    2017-07-11

    Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.

  4. Generalization of color-difference formulas for any illuminant and any observer by assuming perfect color constancy in a color-vision model based on the OSA-UCS system.

    PubMed

    Oleari, Claudio; Melgosa, Manuel; Huertas, Rafael

    2011-11-01

    The most widely used color-difference formulas are based on color-difference data obtained under D65 illumination or similar and for a 10° visual field; i.e., these formulas hold true for the CIE 1964 observer adapted to D65 illuminant. This work considers the psychometric color-vision model based on the Optical Society of America-Uniform Color Scales (OSA-UCS) system previously published by the first author [J. Opt. Soc. Am. A 21, 677 (2004); Color Res. Appl. 30, 31 (2005)] with the additional hypothesis that complete illuminant adaptation with perfect color constancy exists in the visual evaluation of color differences. In this way a computational procedure is defined for color conversion between different illuminant adaptations, which is an alternative to the current chromatic adaptation transforms. This color conversion allows the passage between different observers, e.g., CIE 1964 and CIE 1931. An application of this color conversion is here made in the color-difference evaluation for any observer and in any illuminant adaptation: these transformations convert tristimulus values related to any observer and illuminant adaptation to those related to the observer and illuminant adaptation of the definition of the color-difference formulas, i.e., to the CIE 1964 observer adapted to the D65 illuminant, and then the known color-difference formulas can be applied. The adaptations to the illuminants A, C, F11, D50, Planckian and daylight at any color temperature and for CIE 1931 and CIE 1964 observers are considered as examples, and all the corresponding transformations are given for practical use.

  5. Relationship between neighbourhood socioeconomic position and neighbourhood public green space availability: An environmental inequality analysis in a large German city applying generalized linear models.

    PubMed

    Schüle, Steffen Andreas; Gabriel, Katharina M A; Bolte, Gabriele

    2017-06-01

    The environmental justice framework states that, besides environmental burdens, resources may also be socially unequally distributed, both on the individual and on the neighbourhood level. This ecological study investigated whether neighbourhood socioeconomic position (SEP) was associated with neighbourhood public green space availability in a large German city with more than 1 million inhabitants. Two different measures were defined for green space availability. Firstly, the percentage of green space within neighbourhoods was calculated, additionally considering various buffers around the boundaries. Secondly, the percentage of green space was calculated based on various radii around the neighbourhood centroid. An index of neighbourhood SEP was calculated with principal component analysis. Log-gamma regression, from the family of generalized linear models, was applied in order to account for the non-normal distribution of the response variable. All models were adjusted for population density. Low neighbourhood SEP was associated with decreasing neighbourhood green space availability for 200 m up to 1000 m buffers around the neighbourhood boundaries. Low neighbourhood SEP was also associated with decreasing green space availability based on catchment areas measured from neighbourhood centroids with different radii (1000 m up to 3000 m). With an increasing radius, the strength of the associations decreased. Socially unequally distributed green space may amplify environmental health inequalities in an urban context. Thus, the identification of vulnerable neighbourhoods and population groups plays an important role for epidemiological research and healthy city planning. As a methodical aspect, log-gamma regression offers an adequate parametric modelling strategy for positively distributed environmental variables. Copyright © 2017 Elsevier GmbH. All rights reserved.
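A minimal sketch of a log-link Gamma GLM of the kind used above, fitted by Fisher scoring: with a log link the working weights for the Gamma family are constant, so each scoring step reduces to ordinary least squares on the working response z = eta + (y - mu)/mu. The green-space shares and deprivation index below are made up, not the study's data.

```python
import math

# Hypothetical data: green-space share (y > 0, percent) declining
# with a neighbourhood deprivation index x.
x = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
y = [22.1, 18.3, 15.9, 11.2, 9.8, 7.4, 6.1, 4.9]

b0, b1 = math.log(sum(y) / len(y)), 0.0      # start at the null model
for _ in range(25):                          # Fisher scoring iterations
    eta = [b0 + b1 * xi for xi in x]
    mu = [math.exp(e) for e in eta]
    z = [e + (yi - m) / m for e, yi, m in zip(eta, y, mu)]
    xbar, zbar = sum(x) / len(x), sum(z) / len(z)
    b1 = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z)) \
         / sum((xi - xbar) ** 2 for xi in x)
    b0 = zbar - b1 * xbar

print(round(b0, 3), round(b1, 3))   # slope is negative for these data
```

A negative slope on the log scale corresponds to a multiplicative decrease in expected green space per unit of deprivation, which is how the association in the abstract would be read.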

  6. Atlas-guided volumetric diffuse optical tomography enhanced by generalized linear model analysis to image risk decision-making responses in young adults

    PubMed Central

    Lin, Zi-Jing; Li, Lin; Cazzell, Mary; Liu, Hanli

    2014-01-01

    Diffuse optical tomography (DOT) is a variant of functional near infrared spectroscopy (fNIRS) and has the capability of mapping or reconstructing three dimensional (3D) hemodynamic changes due to brain activity. Common methods used in DOT image analysis to define brain activation have limitations because the selection of activation period is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine the atlas-guided 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with risk decision-making processes. Risk decision-making is an important cognitive process and thus is an essential topic in the field of neuroscience. The Balloon Analog Risk Task (BART) is a valid experimental model and has been commonly used to assess human risk-taking actions and tendencies while facing risks. We have used the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making from 37 human participants (22 males and 15 females). Voxel-wise GLM analysis was performed after a human brain atlas template and a depth compensation algorithm were combined to form atlas-guided DOT images. In this work, we demonstrate the value of voxel-wise GLM analysis with DOT for imaging and studying cognitive functions in response to risk decision-making. Results have shown significant hemodynamic changes in the dorsal lateral prefrontal cortex (DLPFC) during the active-choice mode and a different activation pattern between genders; these findings correlate well with published literature in functional magnetic resonance imaging (fMRI) and fNIRS studies. PMID:24619964
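The voxel-wise GLM idea can be sketched as regressing a single voxel's time series on a task regressor built by convolving the block design with a hemodynamic response function; the gamma-shaped HRF, the block timing, and all numbers below are illustrative, not the study's actual model.

```python
import math
import random

n = 120                                       # time points, 1 s sampling
# Alternating 20 s off / 20 s on block design.
boxcar = [1.0 if (t // 20) % 2 else 0.0 for t in range(n)]

# A simple gamma-shaped hemodynamic response, normalized to unit area.
hrf = [t ** 5 * math.exp(-t) for t in range(25)]
total = sum(hrf)
hrf = [h / total for h in hrf]

# Task regressor: block design convolved with the HRF.
reg = [sum(boxcar[t - k] * hrf[k] for k in range(min(t + 1, len(hrf))))
       for t in range(n)]

# A fake voxel: scaled regressor plus Gaussian noise.
random.seed(1)
voxel = [2.0 * r + random.gauss(0, 0.1) for r in reg]

# Least-squares beta for the single regressor (with an intercept).
mr, mv = sum(reg) / n, sum(voxel) / n
beta = sum((r - mr) * (v - mv) for r, v in zip(reg, voxel)) \
       / sum((r - mr) ** 2 for r in reg)
print(round(beta, 1))
```

In a real analysis this fit is repeated at every voxel and the betas are tested against their standard errors to form the activation map.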

  7. Multitemporal Modelling of Socio-Economic Wildfire Drivers in Central Spain between the 1980s and the 2000s: Comparing Generalized Linear Models to Machine Learning Algorithms

    PubMed Central

    Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M. Pilar

    2016-01-01

    Socio-economic factors are of key importance during all phases of wildfire management, which include prevention, suppression and restoration. However, modeling these factors at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site represents a good example of how human activities play a key role over wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and in the 2000s to identify changes between each period in the socio-economic drivers affecting wildfire occurrence. GLM bases its estimation on wildfire presence-absence observations, whereas Maxent uses presence-only data. According to indicators like sensitivity and commission error, Maxent outperformed GLM in both periods. It achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s. GLM obtained 23.33, 64.97, 9.41 and 18.34%, respectively. However, GLM performed more steadily than Maxent in terms of the overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contribution. As a result of urban sprawl and an abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas Forest-Grassland Interface (FGI) influence decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension to integrate it into wildfire risk assessment. PMID:27557113
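The two model-comparison indicators quoted above can be computed from a confusion matrix of predicted versus observed fire occurrence; the counts below are made up for illustration.

```python
# Hypothetical confusion matrix for a presence/absence wildfire model.
tp, fp, fn, tn = 33, 7, 16, 44

sensitivity = tp / (tp + fn)        # share of observed fires predicted
commission_error = fp / (tp + fp)   # share of predictions that were false alarms

print(round(100 * sensitivity, 1), round(100 * commission_error, 1))
```

High sensitivity with a low commission error is the combination the abstract credits to Maxent in the 2000s.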

  8. Particle size, surface charge and concentration dependent ecotoxicity of three organo-coated silver nanoparticles: comparison between general linear model-predicted and observed toxicity.

    PubMed

    Silva, Thilini; Pokhrel, Lok R; Dubey, Brajesh; Tolaymat, Thabet M; Maier, Kurt J; Liu, Xuefeng

    2014-01-15

    The mechanism underlying nanotoxicity has remained elusive. Hence, efforts to understand whether nanoparticle properties might explain its toxicity are ongoing. Considering three different types of organo-coated silver nanoparticles (AgNPs): citrate-coated AgNP, polyvinylpyrrolidone-coated AgNP, and branched polyethyleneimine-coated AgNP, with different surface charge scenarios and core particle sizes, herein we systematically evaluate the potential role of particle size and surface charge on the toxicity of the three types of AgNPs against two model organisms, Escherichia coli and Daphnia magna. We find particle size-, surface charge-, and concentration-dependent toxicity of all three types of AgNPs against both test organisms. Notably, Ag(+) (as added AgNO3) toxicity is greater than that of each type of AgNP tested, and the toxicity follows the trend: AgNO3 > BPEI-AgNP > Citrate-AgNP > PVP-AgNP. Modeling particle properties using the general linear model (GLM), a significant interaction effect of primary particle size and surface charge emerges that can explain empirically-derived acute toxicity with great precision. The model explains 99.9% of the variation of toxicity in E. coli and 99.8% of the variation of toxicity in D. magna, revealing satisfactory predictability of the regression models developed to predict the toxicity of the three organo-coated AgNPs. We anticipate that the use of GLM to satisfactorily predict toxicity based on nanoparticle physico-chemical characteristics could contribute to our understanding of nanotoxicology, and underscores the need to consider potential interactions among nanoparticle properties when explaining nanotoxicity. © 2013.

  9. General model for estimating partition coefficients to organisms and their tissues using the biological compositions and polyparameter linear free energy relationships.

    PubMed

    Endo, Satoshi; Brown, Trevor N; Goss, Kai-Uwe

    2013-06-18

    Equilibrium partition coefficients of organic chemicals from water to an organism or its tissues are typically estimated by using the total lipid content in combination with the octanol-water partition coefficient (K(ow)). This estimation method can cause systematic errors if (1) different lipid types have different sorptive capacities, (2) nonlipid components such as proteins have a significant contribution, and/or (3) K(ow) is not a suitable descriptor. As an alternative, this study proposes a more general model that uses detailed organism and tissue compositions (i.e., contents of storage lipid, membrane lipid, albumin, other proteins, and water) and polyparameter linear free energy relationships (PP-LFERs). The values calculated by the established PP-LFER-composition-based model agree well with experimental in vitro partition coefficients and in vivo steady-state concentration ratios from the literature with a root mean squared error of 0.32-0.53 log units, without any additional fitting. This model estimates a high contribution of the protein fraction to the overall tissue sorptive capacity in lean tissues (e.g., muscle), in particular for H-bond donor polar compounds. Direct model comparison revealed that the simple lipid-octanol model still calculates many tissue-water partition coefficients within 1 log unit of those calculated by the PP-LFER-composition-based model. Thus, the lipid-octanol model can be used as an order-of-magnitude approximation, for example, for multimedia fate modeling, but may not be suitable for more accurate predictions. Storage lipid-rich phases (e.g., adipose, milk) are prone to particularly large systematic errors. The new model provides useful implications for validity of lipid-normalization of concentrations in organisms, interpretation of biomonitoring results, and assessment of toxicity.
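The composition-based idea reduces to a volume-fraction-weighted sum of component sorptive capacities; the fractions and component-water partition coefficients below are illustrative placeholders, not values from the paper (where each component K would itself come from a PP-LFER).

```python
# Volume fractions of a hypothetical lean tissue (e.g., muscle-like).
composition = {
    "storage_lipid": 0.02,
    "membrane_lipid": 0.01,
    "protein": 0.18,
    "water": 0.79,
}
# Illustrative component-water partition coefficients (linear scale).
k_component = {
    "storage_lipid": 5000.0,
    "membrane_lipid": 8000.0,
    "protein": 300.0,
    "water": 1.0,
}

# Tissue-water K as the composition-weighted sum of component K's.
k_tissue_water = sum(f * k_component[c] for c, f in composition.items())
protein_share = composition["protein"] * k_component["protein"] / k_tissue_water
print(round(k_tissue_water, 1), round(protein_share, 2))
```

Even with a modest per-unit sorptive capacity, the large protein fraction of a lean tissue contributes a sizable share of the total, which is the point the abstract makes against lipid-only normalization.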

  11. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed where four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  12. Women have relatively larger brains than men: a comment on the misuse of general linear models in the study of sexual dimorphism.

    PubMed

    Forstmeier, Wolfgang

    2011-11-01

    General linear models (GLM) have become such universal tools of statistical inference that their applicability to a particular data set is rarely questioned. These models are designed to minimize residuals along the y-axis, while assuming that the predictor (x-axis) is free of statistical noise (ordinary least squares regression, OLS). However, in practice, this assumption is often violated, which can lead to erroneous conclusions, particularly when two predictors are correlated with each other. This is best illustrated by two examples from the study of allometry, which have received great interest: (1) the question of whether men or women have relatively larger brains after accounting for body size differences, and (2) whether men indeed have shorter index fingers relative to ring fingers (digit ratio) than women. In-depth analysis of these examples clearly shows that GLMs produce spurious sexual dimorphism in body shape where there is none (e.g. relative brain size). Likewise, they may fail to detect existing sexual dimorphisms in which the larger sex has the lower trait values (e.g. digit ratio) and, conversely, tend to exaggerate sexual dimorphism in which the larger sex has the relatively larger trait value (e.g. most sexually selected traits). These artifacts can be avoided with reduced major axis regression (RMA), which simultaneously minimizes residuals along both the x- and y-axes. Alternatively, in cases where isometry can be established, there are no objections against, and good reasons for, the continued use of ratios as a simple means of correcting for size differences.
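The contrast between OLS and RMA can be made concrete: the RMA slope is sign(r) · sd(y)/sd(x), which does not shrink toward zero the way the OLS slope (r · sd(y)/sd(x)) does when the predictor carries noise. The data below are made up.

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

ols_slope = sxy / sxx                                  # r * sd(y)/sd(x)
rma_slope = math.copysign(math.sqrt(syy / sxx), sxy)   # sign(r) * sd(y)/sd(x)

print(round(ols_slope, 4), round(rma_slope, 4))
```

Because |r| < 1 whenever there is any scatter, the OLS slope is always attenuated relative to RMA, which is the source of the spurious dimorphism described above.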

  13. Model-based Hyperspectral Exploitation Algorithm Development

    DTIC Science & Technology

    2006-01-01

    pixels, and an iterative constrained optimization using generalized reduced gradients (GRG). Sample results are shown in Figure 5. Much progress has...in-water optical parameters from remote observations involved a non-linear optimization that required observations of several regions of interests...retrieval from long wave infrared airborne hyperspectral imagery. The optimized land surface temperature and emissivity retrieval (OLSTER) algorithm

  14. Model-based Hyperspectral Exploitation Algorithm Development

    DTIC Science & Technology

    2007-09-30

    near-blackbody pixels, and an iterative constrained optimization using generalized reduced gradients (GRG). Sample results are shown in Figure 5...derive the spectral in-water optical parameters from remote observations involved a non-linear optimization that required observations of several...and emissivity retrieval from long wave infrared airborne hyperspectral imagery. The optimized land surface temperature and emissivity retrieval

  15. Efficient Model-Based Diagnosis Engine

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Vatan, Farrokh; Barrett, Anthony; James, Mark; Mackey, Ryan; Williams, Colin

    2009-01-01

    An efficient diagnosis engine - a combination of mathematical models and algorithms - has been developed for identifying faulty components in a possibly complex engineering system. This model-based diagnosis engine embodies a twofold approach to reducing, relative to prior model-based diagnosis engines, the amount of computation needed to perform a thorough, accurate diagnosis. The first part of the approach involves a reconstruction of the general diagnostic engine to reduce the complexity of the mathematical-model calculations and of the software needed to perform them. The second part of the approach involves algorithms for computing a minimal diagnosis (the term "minimal diagnosis" is defined below). A somewhat lengthy background discussion is prerequisite to a meaningful summary of the innovative aspects of the present efficient model-based diagnosis engine. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD (see figure). Diagnosis - the task of finding faulty components - is reduced to finding those components, the abnormalities of which could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. A minimal diagnosis stands in contradistinction to the trivial solution, in which all components are deemed to be faulty, and which, therefore, always explains all inconsistencies.
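At toy scale, the diagnosis task described above can be brute-forced: given conflict sets (sets of components that cannot all be healthy), a diagnosis is a hitting set of every conflict, and the minimum-cardinality ones are found by trying candidate sets in order of size. The engine in the article exists precisely to avoid this exponential search on real systems; the conflicts below are invented.

```python
from itertools import combinations

# Each conflict set: components that cannot all be working correctly.
conflicts = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
components = sorted(set().union(*conflicts))

def minimal_diagnoses(conflicts, components):
    """Return all minimum-cardinality hitting sets of the conflicts."""
    for size in range(1, len(components) + 1):
        hits = [set(c) for c in combinations(components, size)
                if all(set(c) & conflict for conflict in conflicts)]
        if hits:
            return hits     # first non-empty size is the minimum
    return []

diagnoses = minimal_diagnoses(conflicts, components)
print(diagnoses)
```

For these conflicts no single component explains everything, so every minimal diagnosis names two faulty components, in contrast to the trivial all-faulty solution.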

  16. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Different from the Dorfman model, it uses a probit link function with a zero-one covariate variable to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method carried out by Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, posterior distributions of parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
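One standard consequence of the binormal probit formulation, ROC(t) = Φ(a + b·Φ⁻¹(t)), is a closed form for the area under the curve, AUC = Φ(a/√(1+b²)). The parameter values below are illustrative, not estimates from the paper.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a, b = 1.5, 1.0                      # binormal intercept and slope
auc = phi(a / math.sqrt(1.0 + b * b))
print(round(auc, 3))
```

In a Bayesian fit such as the one described above, applying this formula to each posterior draw of (a, b) yields a full posterior distribution for the AUC rather than a point estimate.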

  17. Model based design introduction: modeling game controllers to microprocessor architectures

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model-based design. Model-based design uses a visual representation, generally a block diagram, to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. The philosophy of model-based design is to solve a problem one step at a time; the approach can be compared to a series of steps that converge to a solution. A block-diagram simulation tool allows a design to be simulated with real-world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded, the digital control algorithm can be simulated with the real-world sensor data, and the output of the simulated digital control system can then be compared to that of the old analog control system. Model-based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model-based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model-based design to iterate the design toward a working system. We also describe a model-based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  18. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  19. Qualitative model-based diagnosis using possibility theory

    NASA Technical Reports Server (NTRS)

    Joslyn, Cliff

    1994-01-01

    The potential for the use of possibility in the qualitative model-based diagnosis of spacecraft systems is described. The first sections of the paper briefly introduce the Model-Based Diagnostic (MBD) approach to spacecraft fault diagnosis; Qualitative Modeling (QM) methodologies; and the concepts of possibilistic modeling in the context of Generalized Information Theory (GIT). Then the necessary conditions for the applicability of possibilistic methods to qualitative MBD, and a number of potential directions for such an application, are described.

  20. Comparing denominator degrees of freedom approximations for the generalized linear mixed model in analyzing binary outcome in small sample cluster-randomized trials.

    PubMed

    Li, Peng; Redden, David T

    2015-04-23

    A small number of clusters and large variation in cluster sizes commonly exist in cluster-randomized trials (CRTs) and are often the critical factors affecting the validity and efficiency of statistical analyses. F tests are commonly used in the generalized linear mixed model (GLMM) to test intervention effects in CRTs. The most challenging issue for the approximate Wald F test is the estimation of the denominator degrees of freedom (DDF). Some DDF approximation methods have been proposed, but their small sample performances in analysing binary outcomes in CRTs with few heterogeneous clusters are not well studied. The small sample performances of five DDF approximations for the F test are compared and contrasted under CRT frameworks with simulations. Specifically, we illustrate how the intraclass correlation (ICC), sample size, and the variation of cluster sizes affect the type I error and statistical power when different DDF approximation methods in GLMM are used to test the intervention effect in CRTs with binary outcomes. The results are also illustrated using a real CRT dataset. Our simulation results suggest that the Between-Within method maintains the nominal type I error rates even when the total number of clusters is as low as 10 and is robust to the variation of the cluster sizes. The Residual and Containment methods have inflated type I error rates when the cluster number is small (<30) and the inflation becomes more severe with increased variation in cluster sizes. In contrast, the Satterthwaite and Kenward-Roger methods can provide tests with very conservative type I error rates when the total cluster number is small (<30) and the conservativeness becomes more severe as variation in cluster sizes increases. Our simulations also suggest that the Between-Within method is statistically more powerful than the Satterthwaite or Kenward-Roger method in analysing CRTs with heterogeneous cluster sizes, especially when the cluster number is small. 
We conclude that the
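The degrees-of-freedom bookkeeping behind the Residual and Between-Within methods can be made concrete with a small hypothetical CRT; the cluster counts and parameter count below are illustrative, not taken from the paper:

```python
# Hypothetical small CRT: 10 clusters of 50 subjects each, with a
# treatment indicator and an intercept as the only fixed effects (p = 2).
n_clusters, cluster_size, p = 10, 50, 2

# Residual method: counts every subject as an independent observation.
ddf_residual = n_clusters * cluster_size - p

# Between-Within method: only clusters carry information about a
# cluster-level (between) effect such as the intervention.
ddf_between_within = n_clusters - p

print(ddf_residual, ddf_between_within)  # 498 vs 8
```

The far smaller Between-Within denominator df yields a heavier-tailed reference F distribution, which is consistent with its protection of the type I error rate when the cluster count is as low as 10.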

  1. Prevalence of antimicrobial resistance in enteric Escherichia coli from domestic pets and assessment of associated risk markers using a generalized linear mixed model.

    PubMed

    Leite-Martins, Liliana R; Mahú, Maria I M; Costa, Ana L; Mendes, Angelo; Lopes, Elisabete; Mendonça, Denisa M V; Niza-Ribeiro, João J R; de Matos, Augusto J F; da Costa, Paulo Martins

    2014-11-01

    Antimicrobial resistance (AMR) is a growing global public health problem, which is caused by the use of antimicrobials in both human and animal medical practice. The objectives of the present cross-sectional study were as follows: (1) to determine the prevalence of resistance in Escherichia coli isolated from the feces of pets from the Porto region of Portugal against 19 antimicrobial agents and (2) to assess the individual, clinical and environmental characteristics associated with each pet as risk markers for the AMR of the E. coli isolates. From September 2009 to May 2012, rectal swabs were collected from pets selected using a systematic random procedure from the ordinary population of animals attending the Veterinary Hospital of Porto University. A total of 78 dogs and 22 cats were sampled with the objective of isolating E. coli. The animals' owners, who allowed the collection of fecal samples from their pets, answered a questionnaire to collect information about the markers that could influence the AMR of the enteric E. coli. Chromocult tryptone bile X-glucuronide agar was used for E. coli isolation, and the disk diffusion method was used to determine the antimicrobial susceptibility. The data were analyzed using a multilevel, univariable and multivariable generalized linear mixed model (GLMM). Nearly half (49.7%) of the 396 isolates obtained in this study were multidrug-resistant. The E. coli isolates exhibited resistance to the antimicrobial agents ampicillin (51.3%), cephalothin (46.7%), tetracycline (45.2%) and streptomycin (43.4%). Previous quinolone treatment was the main risk marker for the presence of AMR for 12 (ampicillin, cephalothin, ceftazidime, cefotaxime, nalidixic acid, ciprofloxacin, gentamicin, tetracycline, streptomycin, chloramphenicol, trimethoprim-sulfamethoxazole and aztreonam) of the 15 antimicrobials assessed. Coprophagic habits were also positively associated with an increased risk of AMR for six drugs, ampicillin, amoxicillin

  2. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612
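The "model first, inference automatic" idea can be shown in miniature. Infer.NET itself is not reproduced here; the following toy, with all numbers hypothetical, specifies a model (a biased coin with a grid prior) and obtains its posterior by direct enumeration rather than by a hand-derived algorithm:

```python
def posterior(grid, prior, heads, tails):
    """Exact posterior over a coin-bias parameter on a discrete grid:
    multiply prior by Bernoulli likelihood, then normalize."""
    unnorm = [p * (th ** heads) * ((1 - th) ** tails)
              for th, p in zip(grid, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

grid = [0.1, 0.5, 0.9]          # candidate bias values
prior = [1 / 3, 1 / 3, 1 / 3]   # uniform prior over the grid
post = posterior(grid, prior, heads=8, tails=2)
# after 8 heads in 10 flips, most mass lies on the 0.9 hypothesis
```

Swapping in a different model (a different grid, prior, or likelihood) changes nothing about the inference step, which is the point of the model-based methodology.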

  3. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  4. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  5. Model-based reconfiguration: Diagnosis and recovery

    NASA Technical Reports Server (NTRS)

    Crow, Judy; Rushby, John

    1994-01-01

    We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.

  6. Model-based control of cardiac alternans on a ring

    NASA Astrophysics Data System (ADS)

    Garzón, Alejandro; Grigoriev, Roman O.; Fenton, Flavio H.

    2009-08-01

    Cardiac alternans, a beat-to-beat alternation of cardiac electrical dynamics, and ventricular tachycardia, generally associated with a spiral wave of electrical activity, have been identified as frequent precursors of the life-threatening spatiotemporally chaotic electrical state of ventricular fibrillation (VF). Schemes for the elimination of alternans and the stabilization of spiral waves through the injection of weak external currents have been proposed as methods to prevent VF but have not performed at the level required for clinical implementation. In this paper we propose a control method based on linear-quadratic regulator (LQR) control. Unlike most previously proposed approaches, our method incorporates information from the underlying model to increase efficiency. We use a one-dimensional ringlike geometry, with a single control electrode, to compare the performance of our method with that of two other approaches, quasi-instantaneous suppression of unstable modes (QISUM) and time-delay autosynchronization (TDAS). We find that QISUM fails to suppress alternans due to conduction block. Although both TDAS and LQR succeed in suppressing alternans, LQR is able to suppress the alternans faster and using a much weaker control current. Our results highlight the benefits of a model-based control approach despite its inherent complexity compared with nonmodel-based control such as TDAS.
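As background on LQR itself (the paper applies it to a spatially extended cardiac model, which is not reproduced here), a scalar discrete-time sketch shows how iterating the Riccati recursion yields a stabilizing feedback gain; the system numbers are hypothetical:

```python
def dlqr_gain_scalar(a, b, q, r, tol=1e-12):
    """Scalar discrete-time LQR: iterate the Riccati recursion
    P <- q + a^2 P - (a b P)^2 / (r + b^2 P) to a fixed point,
    then return the feedback gain K = a b P / (r + b^2 P)."""
    p = q
    while True:
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            break
        p = p_next
    return a * b * p / (r + b * b * p)

# Hypothetical unstable mode with growth factor a = 1.2 per beat:
k = dlqr_gain_scalar(1.2, 1.0, 1.0, 1.0)
# The closed loop a - b*k now has magnitude below 1, i.e. the mode decays.
```

The choice of the weights q and r is what lets a model-based design trade suppression speed against control-current amplitude, the trade-off discussed in the abstract.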

  7. Model-based control of cardiac alternans on a ring.

    PubMed

    Garzón, Alejandro; Grigoriev, Roman O; Fenton, Flavio H

    2009-08-01

    Cardiac alternans, a beat-to-beat alternation of cardiac electrical dynamics, and ventricular tachycardia, generally associated with a spiral wave of electrical activity, have been identified as frequent precursors of the life-threatening spatiotemporally chaotic electrical state of ventricular fibrillation (VF). Schemes for the elimination of alternans and the stabilization of spiral waves through the injection of weak external currents have been proposed as methods to prevent VF but have not performed at the level required for clinical implementation. In this paper we propose a control method based on linear-quadratic regulator (LQR) control. Unlike most previously proposed approaches, our method incorporates information from the underlying model to increase efficiency. We use a one-dimensional ringlike geometry, with a single control electrode, to compare the performance of our method with that of two other approaches, quasi-instantaneous suppression of unstable modes (QISUM) and time-delay autosynchronization (TDAS). We find that QISUM fails to suppress alternans due to conduction block. Although both TDAS and LQR succeed in suppressing alternans, LQR is able to suppress the alternans faster and using a much weaker control current. Our results highlight the benefits of a model-based control approach despite its inherent complexity compared with nonmodel-based control such as TDAS.

  8. Linear Accelerators

    NASA Astrophysics Data System (ADS)

    Sidorin, Anatoly

    2010-01-01

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  9. Linear Accelerators

    SciTech Connect

    Sidorin, Anatoly

    2010-01-05

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  10. A Generalization of Pythagoras's Theorem and Application to Explanations of Variance Contributions in Linear Models. Research Report. ETS RR-14-18

    ERIC Educational Resources Information Center

    Carlson, James E.

    2014-01-01

    Many aspects of the geometry of linear statistical models and least squares estimation are well known. Discussions of the geometry may be found in many sources. Some aspects of the geometry relating to the partitioning of variation that can be explained using a little-known theorem of Pappus and have not been discussed previously are the topic of…

  11. A general entry to linear, dendritic and branched thiourea-linked glycooligomers as new motifs for phosphate ester recognition in water.

    PubMed

    Jiménez Blanco, José L; Bootello, Purificación; Ortiz Mellet, Carmen; Gutiérrez Gallego, Ricardo; García Fernández, José M

    2004-01-07

    A blockwise iterative synthetic strategy for the preparation of linear, dendritic and branched full-carbohydrate architectures has been developed by using sugar azido(carbamate) isothiocyanates as key templates; the presence of intersaccharide thiourea bridges provides anchoring points for hydrogen bond-directed molecular recognition of phosphate esters in water.

  12. mb-FLIM: model-based fluorescence lifetime imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Qiaole; Young, Ian Ted; Schouten, Raymond; Stallinga, Sjoerd; Jalink, Kees; de Jong, Sander

    2012-03-01

    We have developed a model-based, parallel procedure to estimate fluorescence lifetimes. Multiple frequencies are present in the excitation signal. Modeling the entire fluorescence and measurement process produces an analytical ratio of polynomials in the lifetime variable τ. A non-linear model-fitting procedure is then used to estimate τ. We have analyzed this model-based approach by simulating a 10 μM fluorescein solution (τ = 4 ns) and all relevant noise sources. We have used real LED data to drive the simulation. Using 240 μs of data, we estimate τ = 3.99 ns. Preliminary experiments on real fluorescent images taken from fluorescein solutions (measured τ = 4.1 ns), green plastic test slides (measured τ = 3.0 ns), and GFP in U2OS (osteosarcoma) cells (measured τ = 2.1 ns) demonstrate that this model-based measurement technique works.
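For context, the classical single-frequency phase estimator of a mono-exponential fluorescence lifetime is τ = tan(φ)/ω; the paper's polynomial-ratio model generalizes beyond this. A minimal sketch with hypothetical numbers (40 MHz modulation, 4 ns lifetime):

```python
import math

def lifetime_from_phase(phase_rad, freq_hz):
    """Mono-exponential phase lifetime: tau = tan(phi) / omega,
    with omega = 2*pi*f the angular modulation frequency."""
    return math.tan(phase_rad) / (2 * math.pi * freq_hz)

# For tau = 4 ns at 40 MHz the expected phase shift is atan(omega*tau):
omega = 2 * math.pi * 40e6
phi = math.atan(omega * 4e-9)
tau = lifetime_from_phase(phi, 40e6)  # recovers 4 ns
```
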

  13. Effects of empirical versus model-based reflectance calibration on automated analysis of imaging spectrometer data: a case study from the Drum Mountains, Utah

    USGS Publications Warehouse

    Dwyer, John L.; Kruse, Fred A.; Lefkoff, Adam B.

    1995-01-01

    Data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) have been calibrated to surface reflectance using an empirical method and an atmospheric model-based method. Single spectra extracted from both calibrated data sets for locations with known mineralogy compared favorably with laboratory and field spectral measurements of samples from the same locations. Generally, spectral features were somewhat subdued in data calibrated using the model-based method when compared with those calibrated using the empirical method. Automated feature extraction and expert system analysis techniques have been successfully applied to both data sets to produce similar endmember probability images and spectral endmember libraries. Linear spectral unmixing procedures applied to both calibrated data sets produced similar image maps. These comparisons demonstrated the utility of the model-based approach for atmospherically correcting imaging spectrometer data prior to extraction of scientific information. The results indicated that imaging spectrometer data can be calibrated and analyzed without a priori knowledge of the remote target.
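Linear spectral unmixing of the kind applied here solves a least-squares problem per pixel. A minimal unconstrained two-endmember sketch, solving the normal equations by hand (the spectra are hypothetical, and real pipelines typically add sum-to-one and nonnegativity constraints):

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Least-squares abundances (f1, f2) for the linear mixing model
    pixel ~ f1*e1 + f2*e2, via the 2x2 normal equations."""
    a11 = sum(x * x for x in e1)
    a12 = sum(x * y for x, y in zip(e1, e2))
    a22 = sum(y * y for y in e2)
    b1 = sum(p * x for p, x in zip(pixel, e1))
    b2 = sum(p * y for p, y in zip(pixel, e2))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

# Hypothetical 3-band endmember spectra and a 60/40 mixture:
e1, e2 = [0.9, 0.2, 0.1], [0.1, 0.5, 0.8]
pixel = [0.6 * a + 0.4 * b for a, b in zip(e1, e2)]
f1, f2 = unmix_two_endmembers(pixel, e1, e2)  # recovers ~0.6 and ~0.4
```
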

  14. Sequential Bayesian Detection: A Model-Based Approach

    SciTech Connect

    Sullivan, E J; Candy, J V

    2007-08-13

    Sequential detection theory has a long history: it evolved in the late 1940s with Wald and was followed by Middleton's classic exposition in the 1960s, coupled with the concurrent enabling technology of digital computer systems and the development of sequential processors. Its development, when coupled to modern sequential model-based processors, offers a reasonable way to attack physics-based problems. In this chapter, the fundamentals of sequential detection are reviewed from the Neyman-Pearson theoretical perspective and formulated for both linear and nonlinear (approximate) Gauss-Markov, state-space representations. We review the development of modern sequential detectors and incorporate the sequential model-based processors as an integral part of their solution. Motivated by a wealth of physics-based detection problems, we show how both linear and nonlinear processors can seamlessly be embedded into the sequential detection framework to provide a powerful approach to solving non-stationary detection problems.
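Wald's sequential test at the root of this framework can be sketched for the simplest Gaussian mean-shift case, accumulating the log-likelihood ratio sample by sample until a threshold is crossed (thresholds from Wald's approximations; all parameter values hypothetical):

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Gaussian mean:
    H0: mean = mu0 vs H1: mean = mu1, known sigma. Returns the
    decision and the number of samples consumed."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # per-sample log-likelihood ratio log[p1(x)/p0(x)]
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)
```

Replacing the fixed Gaussian likelihoods with innovations from a model-based (Kalman-type) processor is, in spirit, how the chapter embeds sequential detection in a state-space setting.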

  15. Sequential Bayesian Detection: A Model-Based Approach

    SciTech Connect

    Candy, J V

    2008-12-08

    Sequential detection theory has a long history: it evolved in the late 1940s with Wald and was followed by Middleton's classic exposition in the 1960s, coupled with the concurrent enabling technology of digital computer systems and the development of sequential processors. Its development, when coupled to modern sequential model-based processors, offers a reasonable way to attack physics-based problems. In this chapter, the fundamentals of sequential detection are reviewed from the Neyman-Pearson theoretical perspective and formulated for both linear and nonlinear (approximate) Gauss-Markov, state-space representations. We review the development of modern sequential detectors and incorporate the sequential model-based processors as an integral part of their solution. Motivated by a wealth of physics-based detection problems, we show how both linear and nonlinear processors can seamlessly be embedded into the sequential detection framework to provide a powerful approach to solving non-stationary detection problems.

  16. Model based control of dynamic atomic force microscope

    SciTech Connect

    Lee, Chibum; Salapaka, Srinivasa M.

    2015-04-15

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for the dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H{sub ∞} control theory. This design yields a significant improvement when compared to the conventional proportional-integral designs and is verified by experiments.

  17. Model based control of dynamic atomic force microscope

    NASA Astrophysics Data System (ADS)

    Lee, Chibum; Salapaka, Srinivasa M.

    2015-04-01

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for the dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H∞ control theory. This design yields a significant improvement when compared to the conventional proportional-integral designs and is verified by experiments.

  18. Model based control of dynamic atomic force microscope.

    PubMed

    Lee, Chibum; Salapaka, Srinivasa M

    2015-04-01

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for the dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H(∞) control theory. This design yields a significant improvement when compared to the conventional proportional-integral designs and is verified by experiments.

  19. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  20. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.

  1. Generalizing Fisher's “reproductive value”: “Incipient” and “penultimate” reproductive-value functions when environment limits growth; linear approximants for nonlinear Mendelian mating models†

    PubMed Central

    Samuelson, Paul A.

    1978-01-01

    In the usual Darwinian case in which struggle for existence leads to density limitations on the environment's carrying capacity, R. A. Fisher's reproductive-value concept reduces to zero for every initial age group. To salvage some meaning for Fisher's notion, two variant reproductive-value concepts are defined here: an “incipient reproductive-value function,” applicable to a system's early dilute stage when density effects are still ignorable; and a “second-order penultimate reproductive-value function,” linking to a system's initial conditions near equilibrium its much later small deviations from carrying-capacity equilibrium. Also, slowly changing age-structured mortality and fertility parameters of Lotka and Mendelian mating systems are shown to suggest linear reproductive-value surrogates that provide approximations for truly nonlinear diploid and haploid models. PMID:16592600

  2. LINEAR ACCELERATOR

    DOEpatents

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  3. Piecewise Linear Slope Estimation.

    PubMed

    Ingle, A N; Sethares, W A; Varghese, T; Bucklew, J A

    2014-11-01

    This paper presents a method for directly estimating slope values in a noisy piecewise linear function. By imposing a Markov structure on the sequence of slopes, piecewise linear fitting is posed as a maximum a posteriori estimation problem. A dynamic program efficiently solves this by traversing a linearly growing trellis. The alternating maximization algorithm (a kind of pseudo-EM method) is used to estimate the model parameters from data and its convergence behavior is analyzed. Ultrasound shear wave imaging is presented as a primary application. The algorithm is general enough for applicability in other fields, as suggested by an application to the estimation of shifts in financial interest rate data.
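The MAP-by-dynamic-program idea can be sketched with a small Viterbi-style program over a quantized slope grid. The penalty model below is a simplification of the paper's Markov prior, with hypothetical weights:

```python
def map_slopes(y, slope_states, trans_penalty=1.0, noise_w=1.0):
    """Viterbi-style dynamic program: pick one slope state per step,
    minimizing squared fit error to the successive differences of y
    plus a fixed penalty each time the slope changes."""
    n = len(y) - 1
    diffs = [y[i + 1] - y[i] for i in range(n)]
    S = len(slope_states)
    cost = [noise_w * (diffs[0] - s) ** 2 for s in slope_states]
    back = []
    for t in range(1, n):
        new_cost, ptr = [], []
        for j, s in enumerate(slope_states):
            best_i = min(range(S),
                         key=lambda i: cost[i] + (trans_penalty if i != j else 0.0))
            c = cost[best_i] + (trans_penalty if best_i != j else 0.0)
            new_cost.append(c + noise_w * (diffs[t] - s) ** 2)
            ptr.append(best_i)
        cost = new_cost
        back.append(ptr)
    # backtrack from the cheapest final state
    j = min(range(S), key=lambda i: cost[i])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [slope_states[j] for j in path]

# A ramp followed by a plateau is segmented into two slope regimes:
slopes = map_slopes([0, 1, 2, 3, 3, 3, 3], slope_states=[0, 1])
```

The trellis grows linearly in the number of samples, which is the efficiency property the abstract refers to.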

  4. Piecewise Linear Slope Estimation

    PubMed Central

    Sethares, W. A.; Bucklew, J. A.

    2015-01-01

    This paper presents a method for directly estimating slope values in a noisy piecewise linear function. By imposing a Markov structure on the sequence of slopes, piecewise linear fitting is posed as a maximum a posteriori estimation problem. A dynamic program efficiently solves this by traversing a linearly growing trellis. The alternating maximization algorithm (a kind of pseudo-EM method) is used to estimate the model parameters from data and its convergence behavior is analyzed. Ultrasound shear wave imaging is presented as a primary application. The algorithm is general enough for applicability in other fields, as suggested by an application to the estimation of shifts in financial interest rate data. PMID:26229417

  5. Conserving the linear momentum in stochastic dynamics: Dissipative particle dynamics as a general strategy to achieve local thermostatization in molecular dynamics simulations.

    PubMed

    Passler, Peter P; Hofer, Thomas S

    2017-02-15

    Stochastic dynamics is a widely employed strategy to achieve local thermostatization in molecular dynamics simulation studies; however, it suffers from an inherent violation of momentum conservation. Although this shortcoming has little impact on structural and short-time dynamic properties, it can be shown that dynamics in the long-time limit such as diffusion is strongly dependent on the respective thermostat setting. Application of the methodologically similar dissipative particle dynamics (DPD) provides a simple, effective strategy to ensure the advantages of local, stochastic thermostatization while at the same time the linear momentum of the system remains conserved. In this work, the key parameters for employing the DPD thermostat in the framework of periodic boundary conditions are investigated, in particular the dependence of the system properties on the size of the DPD region as well as the treatment of forces near the cutoff. Structural and dynamical data for light and heavy water as well as a Lennard-Jones fluid have been compared to simulations executed via stochastic dynamics as well as via use of the widely employed Nosé-Hoover chain and Berendsen thermostats. It is demonstrated that a small size of the DPD region is sufficient to achieve local thermalization, while at the same time artifacts in the self-diffusion characteristic of stochastic dynamics are eliminated. © 2016 Wiley Periodicals, Inc.
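The momentum-conservation argument is structural: DPD applies its dissipative and random forces pairwise, in equal and opposite amounts, so the pair's total momentum change is zero by construction. A 1D sketch with hypothetical parameter values:

```python
import math
import random

def dpd_pair_force(vi, vj, gamma, sigma, dt, rng):
    """1D sketch of the DPD pair force: a dissipative term proportional
    to the relative velocity plus a random term; returning (f, -f)
    makes the thermostat conserve the pair's linear momentum."""
    f = -gamma * (vi - vj) + sigma * rng.gauss(0.0, 1.0) / math.sqrt(dt)
    return f, -f

rng = random.Random(0)
fi, fj = dpd_pair_force(1.0, -1.0, gamma=4.5, sigma=3.0, dt=0.01, rng=rng)
# fi + fj is exactly zero, unlike a per-particle stochastic thermostat,
# which draws an independent random force for each particle.
```
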

  6. Argumentation in Science Education: A Model-based Framework

    NASA Astrophysics Data System (ADS)

    Böttcher, Florian; Meisert, Anke

    2011-02-01

The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed that arguments be understood as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model, if necessary in relation to alternative models. Second, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach is presented in comparison with other analytical models to demonstrate the explanatory power and depth of the model-based perspective. Primarily, Toulmin's framework for structurally analysing arguments is contrasted with the approach presented here. It is demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second, more complex argumentative sequence is analysed according to the proposed analytical scheme to give a broader impression of its potential in practical use.

  7. General theory for multiple input-output perturbations in complex molecular systems. 1. Linear QSPR electronegativity models in physical, organic, and medicinal chemistry.

    PubMed

    González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M

    2013-01-01

In general, perturbation methods start with a known exact solution of a problem and add "small" variation terms in order to approach a solution for a related problem without a known exact solution. Perturbation theory has been widely used in almost all areas of science. Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases, we have an even larger list of related problems without known solutions. We need to know the change in all these properties after a perturbation of the initial boundary conditions, that is, when we test large sets of similar, but different, compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no QSPR general-purpose perturbation theory to solve this problem. In this work, we first review general aspects and applications of both perturbation theory and QSPR models. Second, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-Perturbation theory models. The first model correctly classifies >100,000 pairs of intra-molecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp). The model predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in boundary conditions (solvent, temperature

  8. Parametric Identification of Systems Via Linear Operators.

    DTIC Science & Technology

    1978-09-01

    A general parametric identification /approximation model is developed for the black box identification of linear time invariant systems in terms of... parametric identification techniques derive from the general model as special cases associated with a particular linear operator. Some possible

  9. Boilermodel: A Qualitative Model-Based Reasoning System Implemented in Ada

    DTIC Science & Technology

    1991-09-01

    knowledge base; therefore, the more facts and rules, (generally) the more robust the expert system. Model- based systems approach the problem from a...actual observed values. "The essence of [the] model-based expert system approach is to generate a model that acts as close to the real world as...on the experience of the experts and the questions posed by the designers. Since the model-based approach is founded on first principles which

  10. Model-based phase-shifting interferometer

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique replaces the traditional complicated system structure to achieve versatile, high-precision, quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of the non-null test, as well as for figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a ZYGO interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI holds large potential in modern optical shop testing.

  11. Model-based control of fuel cells:. (1) Regulatory control

    NASA Astrophysics Data System (ADS)

    Golbert, Joshua; Lewin, Daniel R.

    This paper describes a model-based controller for the regulation of a proton exchange membrane (PEM) fuel cell. The model accounts for spatial dependencies of voltage, current, material flows, and temperatures in the fuel channel. Analysis of the process model shows that the effective gain of the process undergoes a sign change in the normal operating range of the fuel cell, indicating that it cannot be stabilized using a linear controller with integral action. Consequently, a nonlinear model-predictive-controller based on a simplified model has been developed, enabling the use of optimal control to satisfy power demands robustly. The models and controller have been realized in the MATLAB and SIMULINK environment. Initial results indicate improved performance and robustness when using model-based control in comparison with that obtained using an adaptive controller.

  12. Comparison of chiller models for use in model-based fault detection

    SciTech Connect

    Sreedharan, Priya; Haves, Philip

    2001-06-07

Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools™, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.

  13. GENI: A graphical environment for model-based control

    SciTech Connect

    Kleban, S.; Lee, M.; Zambre, Y.

    1989-10-01

A new method to operate machine and beam simulation programs for accelerator control has been developed. Existing methods, although cumbersome, have been used in control systems for commissioning and operation of many machines. We developed GENI, a generalized graphical interface to these programs for model-based control. This "object-oriented"-like environment is described and some typical applications are presented. 4 refs., 5 figs.

  14. GENI: A graphical environment for model-based control

    NASA Astrophysics Data System (ADS)

    Kleban, Stephen; Lee, Martin; Zambre, Yadunath

    1990-08-01

    A new method of operating machine-modeling and beam-simulation programs for accelerator control has been developed. Existing methods, although cumbersome, have been used in control systems for commissioning and operation of many machines. We developed GENI, a generalized graphical interface to these programs for model-based control. This "object-oriented"-like environment is described and some typical applications are presented.

  15. A tool for model based diagnostics of the AGS Booster

    SciTech Connect

    Luccio, A.

    1993-12-31

A model-based algorithmic tool was developed to search for lattice errors by a systematic analysis of orbit data in the AGS Booster synchrotron. The algorithm employs transfer matrices calculated with MAD between points in the ring. Iterative model fitting of the data allows one to find and eventually correct magnet displacements and angles or field errors. The tool, implemented on an HP-Apollo workstation system, has proved to be very general and to lend itself to immediate physical interpretation.

  16. Stochastic Model-Based Control of Multi-Robot Systems

    DTIC Science & Technology

    2009-06-30

dual [6]. For example, we use the optimal control theory to derive linear quadratic regulator (LQR), and in the same theoretical framework we can derive...Final Technical Report 23-09-2008 - 22-06-2009 Stochastic Model-Based Control of Multi-Robot Systems W911NF-08-1-0503 Dejan Milutinovic and Devendra P

  17. Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul G.

    2000-01-01

The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using the Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize the advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is shown that, in spite of the model-based nature of GPC, it has good robustness properties, being a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.

  18. Gaussian model-based partitioning using iterated local search.

    PubMed

    Brusco, Michael J; Shireman, Emilie; Steinley, Douglas; Brudvig, Susan; Cradit, J Dennis

    2017-02-01

    The emergence of Gaussian model-based partitioning as a viable alternative to K-means clustering fosters a need for discrete optimization methods that can be efficiently implemented using model-based criteria. A variety of alternative partitioning criteria have been proposed for more general data conditions that permit elliptical clusters, different spatial orientations for the clusters, and unequal cluster sizes. Unfortunately, many of these partitioning criteria are computationally demanding, which makes the multiple-restart (multistart) approach commonly used for K-means partitioning less effective as a heuristic solution strategy. As an alternative, we propose an approach based on iterated local search (ILS), which has proved effective in previous combinatorial data analysis contexts. We compared multistart, ILS and hybrid multistart-ILS procedures for minimizing a very general model-based criterion that assumes no restrictions on cluster size or within-group covariance structure. This comparison, which used 23 data sets from the classification literature, revealed that the ILS and hybrid heuristics generally provided better criterion function values than the multistart approach when all three methods were constrained to the same 10-min time limit. In many instances, these differences in criterion function values reflected profound differences in the partitions obtained.
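The ILS idea can be sketched concretely. The sketch below uses the simple within-cluster sum-of-squares (K-means) criterion rather than the paper's more general Gaussian model-based criterion, and the kick size and restart count are illustrative assumptions.

```python
import numpy as np

def within_ss(X, labels, k):
    """Within-cluster sum of squares (the simple K-means criterion)."""
    return sum(((X[labels == j] - X[labels == j].mean(axis=0)) ** 2).sum()
               for j in range(k) if (labels == j).any())

def local_search(X, labels, k):
    """Greedy single-point relocation until no move improves the criterion."""
    improved = True
    while improved:
        improved = False
        for i in range(len(X)):
            best_j, best_c = labels[i], within_ss(X, labels, k)
            for j in range(k):
                if j == labels[i]:
                    continue
                labels[i] = j
                c = within_ss(X, labels, k)
                if c < best_c:
                    best_j, best_c, improved = j, c, True
            labels[i] = best_j
    return labels

def iterated_local_search(X, k, restarts=5, kick=3, seed=0):
    """ILS: local search, then a random 'kick' (reassign a few points),
    keeping the best partition found."""
    rng = np.random.default_rng(seed)
    best = local_search(X, rng.integers(k, size=len(X)), k)
    for _ in range(restarts):
        trial = best.copy()
        trial[rng.choice(len(X), kick, replace=False)] = rng.integers(k, size=kick)
        trial = local_search(X, trial, k)
        if within_ss(X, trial, k) < within_ss(X, best, k):
            best = trial
    return best
```

The kick-then-descend loop is what distinguishes ILS from a plain multistart: each restart begins near the incumbent rather than from scratch.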

  19. LINEAR ACCELERATOR

    DOEpatents

    Colgate, S.A.

    1958-05-27

An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and introducing the beam of particles into the field at an angle. The result of the foregoing is a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperates to provide a stable and focused particle beam.

  20. Model-based drug development: the road to quantitative pharmacology.

    PubMed

    Zhang, Liping; Sinha, Vikram; Forgue, S Thomas; Callies, Sophie; Ni, Lan; Peck, Richard; Allerheiligen, Sandra R B

    2006-06-01

High development costs and low success rates in bringing new medicines to the market demand more efficient and effective approaches. Identified by the FDA as a valuable prognostic tool for fulfilling such a demand, model-based drug development is a mathematical and statistical approach that constructs, validates, and utilizes disease models, drug exposure-response models, and pharmacometric models to facilitate drug development. Quantitative pharmacology is a discipline that learns and confirms the key characteristics of new molecular entities in a quantitative manner, with the goal of providing explicit, reproducible, and predictive evidence for optimizing drug development plans and enabling critical decision making. Model-based drug development serves as an integral part of quantitative pharmacology. This work reviews the general concept, basic elements, and evolving role of model-based drug development in quantitative pharmacology. Two case studies are presented to illustrate how the model-based drug development approach can facilitate knowledge management and decision making during drug development. The case studies also highlight the organizational learning that comes through implementation of quantitative pharmacology as a discipline. Finally, the prospects of quantitative pharmacology as an emerging discipline are discussed. Advances in this discipline will require continued collaboration between academia, industry, and regulatory agencies.

  1. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    SciTech Connect

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-15

Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches therefore represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, is introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries are anticipated.
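The memory and speed gains from rotational symmetry can be illustrated with a one-dimensional analogue: under such symmetry the model matrix becomes circulant, so a single column suffices for storage and matrix-vector products can use the FFT. This is a minimal sketch of the principle, not the paper's actual optoacoustic forward model.

```python
import numpy as np

def circulant_matvec(first_col, x):
    """Multiply the circulant matrix defined by its first column with x
    in O(n log n) via the convolution theorem, instead of storing and
    applying the full n x n matrix."""
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))

# Check against the explicit dense circulant matrix C[i, j] = c[(i-j) mod n].
c = np.array([1.0, 2.0, 3.0, 4.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
x = np.array([0.5, -1.0, 2.0, 0.0])
```

For an n x n system this reduces storage from O(n^2) to O(n), which is the one-order-of-magnitude flavor of saving the abstract describes for the full-view geometry.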

  2. Linear Clouds

    NASA Technical Reports Server (NTRS)

    2006-01-01

Context image for PIA03667: Linear Clouds

    These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.

    Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution.

Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track directions to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  3. Inkjet printer model-based halftoning.

    PubMed

    Lee, Je-Ho; Allebach, Jan P

    2005-05-01

    The quality of halftone prints produced by inkjet (IJ) printers can be limited by random dot-placement errors. While a large literature addresses model-based halftoning for electrophotographic printers, little work has been done on model-based halftoning for IJ printers. In this paper, we propose model-based approaches to both iterative least-squares halftoning and tone-dependent error diffusion (TDED). The particular approach to iterative least-squares halftoning that we use is direct binary search (DBS). For DBS, we use a stochastic model for the equivalent gray-scale image, based on measured dot statistics of printed IJ halftone patterns. For TDED, we train the tone-dependent weights and thresholds to mimic the spectrum of halftone textures generated by model-based DBS. We do this under a metric that enforces both the correct radially averaged spectral profile and angular symmetry at each radial frequency. Experimental results generated with simulated printers and a real printer show that both IJ model-based DBS and IJ model-based TDED very effectively suppress IJ printer-induced artifacts.
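For context, classic Floyd-Steinberg error diffusion, the fixed-weight baseline that tone-dependent error diffusion generalizes, can be sketched as follows. The weights here are the standard Floyd-Steinberg constants, not the paper's trained tone-dependent weights and thresholds.

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image in [0, 1] by error diffusion: each
    pixel is thresholded, and the quantization error is pushed onto
    not-yet-visited neighbors with the classic 7/16, 3/16, 5/16, 1/16
    weights, preserving local average tone."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

Tone-dependent variants replace the fixed weights and 0.5 threshold with lookup tables indexed by the input gray level, which is the degree of freedom the abstract describes training against model-based DBS textures.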

  4. Linear Programming Problems for Generalized Uncertainty

    ERIC Educational Resources Information Center

    Thipwiwatpotjana, Phantipa

    2010-01-01

Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…

  5. Linear derivative Cartan formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Kummer, W.; Schütz, H.

    2005-07-01

Besides diffeomorphism invariance, manifest SO(3,1) local Lorentz invariance is also implemented in a formulation of Einstein gravity (with or without a cosmological term) in terms of initially completely independent vielbein and spin connection variables and auxiliary two-form fields. In the systematic study of all possible embeddings of Einstein gravity into that formulation with auxiliary fields, the introduction of a "bi-complex" algebra offers crucial technical advantages. Certain components of the new two-form fields directly provide canonical momenta for the spatial components of all Cartan variables, whereas the remaining ones act as Lagrange multipliers for a large number of constraints, some of which have already been proposed in different, less radical approaches. The time-like components of the Cartan variables play that role for the Lorentz constraints and others associated with the vierbein fields. Although some ternary ones also appear, we show that relations exist between these constraints, and how the Lagrange multipliers are to be determined to take care of second-class ones. We believe that our formulation of standard Einstein gravity as a gauge theory with a consistent local Poincaré algebra is superior to earlier similar attempts.

  7. Generalized Ultrametric Semilattices of Linear Signals

    DTIC Science & Technology

    2014-01-23

    second edition, 2001. [10] Robert C. Flagg and Ralph Kopperman. Continuity spaces: Reconciling domains and metric spaces. Theoretical Computer Science, 177...point. Theoretical Computer Science, 238(1-2):483–488, 2000. [31] Alan V. Oppenheim , Alan S. Willsky, and S. Hamid Nawab. Signals & Systems. Prentice

  8. hi_class: Horndeski in the Cosmic Linear Anisotropy Solving System

    NASA Astrophysics Data System (ADS)

    Zumalacárregui, Miguel; Bellini, Emilio; Sawicki, Ignacy; Lesgourgues, Julien; Ferreira, Pedro G.

    2017-08-01

    We present the public version of hi_class (www.hiclass-code.net), an extension of the Boltzmann code CLASS to a broad ensemble of modifications to general relativity. In particular, hi_class can calculate predictions for models based on Horndeski's theory, which is the most general scalar-tensor theory described by second-order equations of motion and encompasses any perfect-fluid dark energy, quintessence, Brans-Dicke, f(R) and covariant Galileon models. hi_class has been thoroughly tested and can be readily used to understand the impact of alternative theories of gravity on linear structure formation as well as for cosmological parameter extraction.

  9. Multiple Linear Regression

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

This chapter deals with multiple linear regression: that is, we investigate the situation where the mean of a variable depends linearly on a set of covariables. The noise is supposed to be Gaussian. We develop the least squares method to obtain the parameter estimators and estimates of their precision. This leads to the design of confidence intervals, prediction intervals, global tests, individual tests and, more generally, tests of submodels defined by linear constraints. Methods for model choice and variable selection, measures of the quality of the fit, residual studies, and diagnostic methods are presented. Finally, identification of departures from the model's assumptions and ways to deal with these problems are addressed. A real data set is used to illustrate the methodology with the software R. Note that this chapter is intended to serve as a guide for other regression methods, like logistic regression or AFT models and Cox regression.
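The chapter's core computation can be sketched in a few lines (synthetic, illustrative data; the design matrix and coefficients are invented for the example):

```python
import numpy as np

# Multiple linear regression by least squares, with the usual estimates
# of the coefficients' precision.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariables
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(scale=0.1, size=n)                # Gaussian noise

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # least squares estimator
resid = y - X @ beta_hat
p = X.shape[1]
sigma2_hat = resid @ resid / (n - p)           # unbiased noise variance estimate
se = np.sqrt(np.diag(sigma2_hat * np.linalg.inv(X.T @ X)))
# Approximate 95% confidence intervals: beta_hat +/- 1.96 * se
ci_low, ci_high = beta_hat - 1.96 * se, beta_hat + 1.96 * se
```

The individual tests and submodel tests the abstract mentions are built from exactly these quantities (each coefficient divided by its standard error, and residual sums of squares of nested fits).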

  10. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as an internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.

  11. Generalized Parabolas

    ERIC Educational Resources Information Center

    Joseph, Dan; Hartman, Gregory; Gibson, Caleb

    2011-01-01

    In this article we explore the consequences of modifying the common definition of a parabola by considering the locus of all points equidistant from a focus and (not necessarily linear) directrix. The resulting derived curves, which we call "generalized parabolas," are often quite beautiful and possess many interesting properties. We show that…

  12. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  13. Reduced-order-model based feedback control of the Modified Hasegawa-Wakatani equations

    NASA Astrophysics Data System (ADS)

    Goumiri, Imene; Rowley, Clarence; Ma, Zhanhua; Gates, David; Parker, Jeffrey; Krommes, John

    2012-10-01

In this study, we demonstrate the development of model-based feedback control for stabilization of an unstable equilibrium of the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, balanced truncation, a model reduction technique that has proved successful in flow control design problems, is applied to obtain a low-dimensional model of the linearized MHW equations. A model-based feedback controller is then designed for the reduced-order model using a linear quadratic regulator (LQR) and then linear quadratic Gaussian (LQG) control. The controllers are then applied to the original linearized and nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave-induced turbulence.
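The LQR design step in such a pipeline can be sketched generically. The two-state system below is a toy stand-in for a reduced-order model, and all matrices are illustrative; the MHW work uses its own balanced-truncation model, not these values.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=1000):
    """Discrete-time LQR gain K via fixed-point iteration of the Riccati
    equation; the feedback u = -K x minimizes sum(x'Qx + u'Ru)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy unstable system standing in for the reduced-order model.
A = np.array([[1.1, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))
# The closed-loop matrix A - B K should have spectral radius below 1.
closed_loop_radius = np.abs(np.linalg.eigvals(A - B @ K)).max()
```

An LQG design adds a Kalman filter to estimate the state from noisy measurements and feeds the estimate, rather than the true state, into the same gain K.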

  14. Model-based internal wave processing

    SciTech Connect

    Candy, J.V.; Chambers, D.H.

    1995-06-09

A model-based approach is proposed to solve the oceanic internal wave signal processing problem, based on state-space representations of the normal-mode vertical velocity and plane-wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Väisälä frequency profile, etc.) developed from the solution of the associated boundary value problem, as well as the horizontal velocity components. Based on this framework, model-based solutions to the signal enhancement problem for internal waves are investigated.

  15. Multimode model based defect characterization in composites

    NASA Astrophysics Data System (ADS)

    Roberts, R.; Holland, S.; Gregory, E.

    2016-02-01

    A newly-initiated research program for model-based defect characterization in CFRP composites is summarized. The work utilizes computational models of the interaction of NDE probing energy fields (ultrasound and thermography), to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of performance-critical defect properties from analysis of measured NDE signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing delamination and porosity. Forward predictions of measurement response are presented, as well as examples of model-based inversion of measured data for the estimation of defect parameters.

  16. Model-Based Inquiries in Chemistry

    ERIC Educational Resources Information Center

    Khan, Samia

    2007-01-01

    In this paper, instructional strategies for sustaining model-based inquiry in an undergraduate chemistry class were analyzed through data collected from classroom observations, a student survey, and in-depth problem-solving sessions with the instructor and students. Analysis of teacher-student interactions revealed a cyclical pattern in which…

  17. Sandboxes for Model-Based Inquiry

    ERIC Educational Resources Information Center

    Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri

    2015-01-01

    In this article, we introduce a class of constructionist learning environments that we call "Emergent Systems Sandboxes" ("ESSs"), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual…

  20. Opinion dynamics model based on quantum formalism

    SciTech Connect

    Artawan, I. Nengah; Trisnawati, N. L. P.

    2016-03-11

An opinion dynamics model based on quantum formalism is proposed. The core of the quantum formalism is the half-spin dynamics system. In this research the implicit time evolution operators are derived. The analogy between the model and the Deffuant and Sznajd models is discussed.