Science.gov

Sample records for generalized linear model-based

  1. Time series models based on generalized linear models: some further results.

    PubMed

    Li, W K

    1994-06-01

    This paper considers the problem of extending the classical moving average models to time series with conditional distributions given by generalized linear models. These models have the advantage of easy construction and estimation. Statistical modelling techniques are also proposed. Simulation results and an illustrative example are reported to demonstrate the methodology. The models have potential applications in longitudinal data analysis. PMID:8068850
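
    As a minimal illustration of this model class (a hedged sketch, not the paper's own construction): a Poisson GLM whose linear predictor contains lagged counts, an observation-driven analogue of a moving-average model. The data, lag structure, and use of statsmodels are assumptions for illustration.

      import numpy as np
      import statsmodels.api as sm

      # Toy count series; in practice y would be observed longitudinal data.
      rng = np.random.default_rng(0)
      y = rng.poisson(5, size=200)

      # Conditional distribution given the past: Poisson GLM with two lagged
      # counts in the linear predictor (plus an intercept).
      X = sm.add_constant(np.column_stack([y[1:-1], y[:-2]]))
      fit = sm.GLM(y[2:], X, family=sm.families.Poisson()).fit()
      print(fit.params)  # intercept and lag coefficients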

  2. Kalman estimator- and general linear model-based on-line brain activation mapping by near-infrared spectroscopy

    PubMed Central

    2010-01-01

    Background Near-infrared spectroscopy (NIRS) is a non-invasive neuroimaging technique that has recently been developed to measure changes of cerebral blood oxygenation associated with brain activity. To date, for functional brain mapping applications, there is no standard on-line method for analysing NIRS data. Methods In this paper, a novel on-line NIRS data analysis framework taking advantage of both the general linear model (GLM) and the Kalman estimator is devised. The Kalman estimator is used to update the GLM coefficients recursively, and one critical coefficient regarding brain activities is then passed to a t-statistical test. The t-statistical test result is used to update a topographic brain activation map. Meanwhile, a set of high-pass filters is plugged into the GLM to suppress very low-frequency noise, and an autoregressive (AR) model is used to account for the temporal correlation caused by physiological noise in NIRS time series. A set of data recorded in finger-tapping experiments is studied using the proposed framework. Results The obtained results suggest that the method can effectively track the task-related brain activation areas and suppress noise distortion in the estimation while the experiment is running. Thereby, the potential of the proposed method for real-time NIRS-based brain imaging was demonstrated. Conclusions This paper presents a novel on-line approach for analysing NIRS data for functional brain mapping applications. This approach demonstrates the potential of a real-time-updating topographic brain activation map. PMID:21138595
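
    A minimal numpy sketch of the core recursion described in the Methods: a Kalman (recursive least-squares) update of the GLM coefficients, followed by a t-statistic on the task regressor. The noise variances q and r and the regressor layout are invented for illustration; the paper's high-pass filters and AR noise model are omitted.

      import numpy as np

      def kalman_glm_update(beta, P, x, y, q=1e-6, r=1.0):
          """One on-line update of GLM coefficients beta with covariance P.
          x: regressor row for this sample, y: new NIRS sample,
          q, r: assumed process/measurement noise variances."""
          P = P + q * np.eye(len(beta))      # predict
          k = P @ x / (x @ P @ x + r)        # Kalman gain
          beta = beta + k * (y - x @ beta)   # correct
          P = P - np.outer(k, x) @ P         # covariance update
          return beta, P

      def t_stat(beta, P, idx=0):
          # Treat P as the coefficient covariance; test the task regressor.
          return beta[idx] / np.sqrt(P[idx, idx])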

  3. General linear chirplet transform

    NASA Astrophysics Data System (ADS)

    Yu, Gang; Zhou, Yiqi

    2016-03-01

    Time-frequency (TF) analysis (TFA) is an effective tool for characterizing the time-varying features of a signal, and it has drawn much attention over a long period. With the development of TFA, many advanced methods have been proposed that can provide more precise TF results. However, some restrictions are inevitably introduced. In this paper, we introduce a novel TFA method, termed the general linear chirplet transform (GLCT), which can overcome some limitations of current TFA methods. In numerical and experimental validations, comparison with current TFA methods demonstrates several advantages of GLCT: it well characterizes multi-component signals with distinct non-linear features, is independent of the mathematical model and the initial TFA method, allows for reconstruction of the component of interest, and is insensitive to noise.
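
    For context, one common form of the linear chirplet transform that the GLCT generalizes (a sketch in conventional notation; the paper's definition may differ in windowing and phase conventions):

      \mathrm{CT}(t_0,\omega;c) = \int_{-\infty}^{\infty} s(t)\, w(t-t_0)\,
          e^{-\mathrm{i}\left[\omega t + \frac{c}{2}(t-t_0)^2\right]}\,\mathrm{d}t,

    where w is an analysis window and c a chirp rate; the GLCT, roughly speaking, frees the analysis from a single fixed chirp rate c.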

  4. Generalized Linear Covariance Analysis

    NASA Astrophysics Data System (ADS)

    Markley, F. Landis; Carpenter, J. Russell

    2009-01-01

    This paper presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  5. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  6. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2008-01-01

    We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  7. Quantization of general linear electrodynamics

    SciTech Connect

    Rivera, Sergio; Schuller, Frederic P.

    2011-03-15

    General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.

  8. Linear equality constraints in the general linear mixed model.

    PubMed

    Edwards, L J; Stewart, P W; Muller, K E; Helms, R W

    2001-12-01

    Scientists may wish to analyze correlated outcome data with constraints among the responses. For example, piecewise linear regression in a longitudinal data analysis can require use of a general linear mixed model combined with linear parameter constraints. Although well developed for standard univariate models, there are no general results that allow a data analyst to specify a mixed model equation in conjunction with a set of constraints on the parameters. We resolve the difficulty by precisely describing conditions that allow specifying linear parameter constraints that ensure the validity of estimates and tests in a general linear mixed model. The recommended approach requires only straightforward and noniterative calculations to implement. We illustrate the convenience and advantages of the methods with a comparison of cognitive developmental patterns in a study of individuals from infancy to early adulthood for children from low-income families.
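
    A generic numpy/scipy sketch of the standard device for imposing linear equality constraints C·beta = d: reparameterize onto the null space of C. It is shown here for ordinary least squares rather than the paper's full mixed-model setting, and the constraint and data are invented.

      import numpy as np
      from scipy.linalg import lstsq, null_space

      C = np.array([[1.0, -1.0, 0.0]])  # hypothetical constraint: beta_1 == beta_2
      d = np.array([0.0])

      beta0 = lstsq(C, d)[0]            # any particular solution of C beta = d
      N = null_space(C)                 # basis for the feasible directions

      rng = np.random.default_rng(1)
      X = rng.normal(size=(50, 3))
      y = X @ np.array([1.0, 1.0, 2.0]) + 0.1 * rng.normal(size=50)

      # Solve the reduced, unconstrained problem in gamma, then map back.
      gamma = lstsq(X @ N, y - X @ beta0)[0]
      beta_hat = beta0 + N @ gamma      # satisfies C @ beta_hat = d exactly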

  9. Generalized Linear Models in Family Studies

    ERIC Educational Resources Information Center

    Wu, Zheng

    2005-01-01

    Generalized linear models (GLMs), as defined by J. A. Nelder and R. W. M. Wedderburn (1972), unify a class of regression models for categorical, discrete, and continuous response variables. As an extension of classical linear models, GLMs provide a common body of theory and methodology for some seemingly unrelated models and procedures, such as…
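
    The unification is concrete in statistical software: one fitting routine handles continuous, categorical, and count responses by swapping the distribution family (and implicitly the link). A minimal statsmodels sketch with invented data:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      X = sm.add_constant(rng.normal(size=(100, 2)))
      y_cont = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=100)
      y_bin = (rng.random(100) < 0.5).astype(float)   # categorical response
      y_cnt = rng.poisson(3, size=100)                # discrete counts

      # Same estimator, three families: the sense in which GLMs unify models.
      for y, fam in [(y_cont, sm.families.Gaussian()),
                     (y_bin, sm.families.Binomial()),
                     (y_cnt, sm.families.Poisson())]:
          print(sm.GLM(y, X, family=fam).fit().params)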

  10. Extended Generalized Linear Latent and Mixed Model

    ERIC Educational Resources Information Center

    Segawa, Eisuke; Emery, Sherry; Curry, Susan J.

    2008-01-01

    The generalized linear latent and mixed modeling (GLLAMM) framework includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome this limitation by adding a submodel that specifies a…

  11. Identification of general linear mechanical systems

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.; Longman, R. W.; Juang, J. N.

    1983-01-01

    Previous work in identification theory has been concerned with the general first order time derivative form. Linear mechanical systems, a large and important class, naturally have a second order form. This paper utilizes this additional structural information for the purpose of identification. A realization is obtained from input-output data, and then knowledge of the system input, output, and inertia matrices is used to determine a set of linear equations whereby we identify the remaining unknown system matrices. Necessary and sufficient conditions on the number, type and placement of sensors and actuators are given which guarantee identifiability, and less stringent conditions are given which guarantee generic identifiability. Both a priori identifiability and a posteriori identifiability are considered, i.e., identifiability being ensured prior to obtaining data, and identifiability being assured with a given data set.

  12. Reduced Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.

  13. Reduced-Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.

  14. Role of symbolic computation in linear and model-based controller development

    NASA Astrophysics Data System (ADS)

    Tripathi, Sumit

    Model-based controllers for articulated mechanical systems are gaining popularity among control system designers by virtue of their significant performance gains. However, a critical precursor to deployment is the availability of plant model equations together with a systematic means for generating them, typically by applying the postulates of physics. The complexity and tractability of first generating and then analyzing models often serve to limit the type and complexity of the example systems. However, using simpler examples alone may sometimes fail to capture important physical phenomena (e.g., gyroscopic and Coriolis effects), while larger systems remain intractable, which restricts the exploration of non-linear controller design techniques. Hence, we examine the use of some contemporary symbolic- and numeric-computation tools to assist with automated symbolic equation generation and subsequent analysis. The principal underlying goal of this thesis is to establish a linkage between the traditional approach and block-diagram modeling and controller development. The inverted Furuta pendulum example allows us to showcase the emergence of model complexity even in a relatively simple two-jointed mechanical system. Advanced concepts, e.g., manipulator singularity and constrained-system modeling, are studied with a 6-degree-of-freedom manipulator. We focus on various aspects of model creation and model linearization, as well as the development and performance of both model-independent and model-based controller designs.

  15. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

    In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  16. Non-linear analysis indicates chaotic dynamics and reduced resilience in model-based Daphnia populations exposed to environmental stress.

    PubMed

    Ottermanns, Richard; Szonn, Kerstin; Preuß, Thomas G; Roß-Nickoll, Martina

    2014-01-01

    In this study we present evidence that anthropogenic stressors can reduce the resilience of age-structured populations. Enhancement of disturbance in a model-based Daphnia population led to a repression of chaotic population dynamics while increasing the degree of synchrony between the population's age classes. Based on the theory of chaos-mediated survival, an increased risk of extinction was revealed for this population when exposed to high concentrations of a chemical stressor. The Lyapunov coefficient proved to be a useful indicator for detecting disturbance thresholds that lead to alterations in population dynamics. One possible explanation could be a discrete change in attractor orientation due to external disturbance. The statistical analysis of the Lyapunov coefficient distribution is proposed as a methodology to test for significant non-linear effects of general disturbance on populations. Although many new questions arose, this study forms a theoretical basis for a dynamical definition of population recovery. PMID:24809537

  17. Permutation inference for the general linear model

    PubMed Central

    Winkler, Anderson M.; Ridgway, Gerard R.; Webster, Matthew A.; Smith, Stephen M.; Nichols, Thomas E.

    2014-01-01

    Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (GLMs) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on GLM parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm – the “randomise” algorithm – for permutation inference with the GLM. PMID:24530839
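
    A minimal sketch of the simplest permutation scheme for one GLM regressor, assuming fully exchangeable errors: permute the response, refit, and compare t-statistics. The paper's randomise algorithm covers more general designs and nuisance-aware schemes (e.g., permutation of residuals); this shows only the basic idea.

      import numpy as np

      def perm_test(y, X, col, n_perm=5000, seed=0):
          """Permutation p-value for the coefficient of X[:, col].
          X must include an intercept column; errors assumed exchangeable."""
          rng = np.random.default_rng(seed)
          xtx_inv = np.linalg.inv(X.T @ X)

          def tval(yy):
              beta = np.linalg.lstsq(X, yy, rcond=None)[0]
              dof = len(yy) - X.shape[1]
              sigma2 = np.sum((yy - X @ beta) ** 2) / dof
              return beta[col] / np.sqrt(sigma2 * xtx_inv[col, col])

          t_obs = tval(y)
          t_null = np.array([tval(rng.permutation(y)) for _ in range(n_perm)])
          return (1 + np.sum(np.abs(t_null) >= abs(t_obs))) / (n_perm + 1)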

  18. Irreducible Characters of General Linear Superalgebra and Super Duality

    NASA Astrophysics Data System (ADS)

    Cheng, Shun-Jen; Lam, Ngau

    2010-09-01

    We develop a new method to solve the irreducible character problem for a wide class of modules over the general linear superalgebra, including all the finite-dimensional modules, by directly relating the problem to the classical Kazhdan-Lusztig theory. Furthermore, we prove that certain parabolic BGG categories over the general linear algebra and over the general linear superalgebra are equivalent. We also verify a parabolic version of a conjecture of Brundan on the irreducible characters in the BGG category of the general linear superalgebra.

  19. Development of a CFD-compatible transition model based on linear stability theory

    NASA Astrophysics Data System (ADS)

    Coder, James G.

    A new laminar-turbulent transition model for low-turbulence external aerodynamic applications is presented that incorporates linear stability theory in a manner compatible with modern computational fluid dynamics solvers. The model uses a new transport equation that describes the growth of the maximum Tollmien-Schlichting instability amplitude in the presence of a boundary layer. To avoid the need for integration paths and non-local operations, a locally defined non-dimensional pressure-gradient parameter is used that serves as an estimator of the integral boundary-layer properties. The model has been implemented into the OVERFLOW 2.2f solver and interacts with the Spalart-Allmaras and Menter SST eddy-viscosity turbulence models. Comparisons of predictions using the new transition model with high-quality wind-tunnel measurements of airfoil section characteristics validate the predictive qualities of the model. Predictions for three-dimensional aircraft and wing geometries show the correct qualitative behavior even though limited experimental data are available. These cases also demonstrate that the model is well-behaved about general aeronautical configurations. These cases confirm that the new transition model is an improvement over the current state of the art in computational fluid dynamics transition modeling by providing more accurate solutions at approximately half the added computational expense.

  20. Centering, Scale Indeterminacy, and Differential Item Functioning Detection in Hierarchical Generalized Linear and Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Cheong, Yuk Fai; Kamata, Akihito

    2013-01-01

    In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…

  1. Linear stability of general magnetically insulated electron flow

    NASA Astrophysics Data System (ADS)

    Swegle, J. A.; Mendel, C. W., Jr.; Seidel, D. B.; Quintenz, J. P.

    1984-03-01

    A linear stability theory for magnetically insulated systems was formulated by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. It is found that, in the case of electron trajectories which are nearly laminar, with only small transverse motion, several suggestive simplifications occur in the eigenvalue equations.

  2. Linear stability of general magnetically insulated electron flow

    SciTech Connect

    Swegle, J.A.; Mendel, C.W. Jr.; Seidel, D.B.; Quintenz, J.P.

    1984-01-01

    We have formulated a linear stability theory for magnetically insulated systems by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. In the physically interesting case of electron trajectories which are nearly laminar, with only small transverse motion, we have found that several suggestive simplifications occur in the eigenvalue equations.

  3. The General Linear Model and Direct Standardization: A Comparison.

    ERIC Educational Resources Information Center

    Little, Roderick J. A.; Pullum, Thomas W.

    1979-01-01

    Two methods of analyzing nonorthogonal (uneven cell sizes) cross-classified data sets are compared. The methods are direct standardization and the general linear model. The authors illustrate when direct standardization may be a desirable method of analysis. (JKS)

  4. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to simultaneously estimate animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel. PMID:27250181
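
    A hedged scipy sketch of the nonlinear inversion step: abundance and tilt angle are fit jointly to multi-frequency backscatter through a forward scattering model. The forward model below is an invented stand-in, not the physics-based scattering kernel the paper uses.

      import numpy as np
      from scipy.optimize import least_squares

      freqs_khz = np.array([38.0, 70.0, 120.0, 200.0])

      def forward_model(params, f):
          """Stand-in forward model: predicted backscatter at frequency f
          for abundance n and mean tilt angle theta (radians)."""
          n, theta = params
          return n * np.cos(theta) ** 2 / (1.0 + 38.0 / f)

      sv_obs = forward_model([120.0, 0.2], freqs_khz)   # synthetic "data"

      fit = least_squares(
          lambda p: forward_model(p, freqs_khz) - sv_obs,
          x0=[50.0, 0.0], bounds=([0.0, -np.pi / 4], [np.inf, np.pi / 4]))
      print(fit.x)  # jointly recovered abundance and tilt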

  5. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their ad hoc applications. Formally, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from the intuitive/heuristic stage to rigorous/formal/comprehensive studies.

  6. From linear to generalized linear mixed models: A case study in repeated measures

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  7. A novel crowd flow model based on linear fractional stable motion

    NASA Astrophysics Data System (ADS)

    Wei, Juan; Zhang, Hong; Wu, Zhenya; He, Junlin; Guo, Yangyong

    2016-03-01

    For evacuation dynamics in indoor spaces, a novel crowd flow model is put forward based on linear fractional stable motion. Based on position attraction and queuing time, a calculation formula for movement probability is defined, and the queuing time is described according to linear fractional stable motion. Finally, an experiment and simulation platform is used for performance analysis, studying in depth the relations among system evacuation time, crowd density, and exit flow rate. It is concluded that the evacuation time and the exit flow rate have positive correlations with the crowd density, and that once the exit width reaches a threshold value, further increasing it does not effectively decrease the evacuation time.

  8. Linear equations in general purpose codes for stiff ODEs

    SciTech Connect

    Shampine, L. F.

    1980-02-01

    It is noted that it is possible to improve significantly the handling of linear problems in a general-purpose code with very little trouble to the user or change to the code. In such situations analytical evaluation of the Jacobian is a lot cheaper than numerical differencing. A slight change in the point at which the Jacobian is evaluated results in a more accurate Jacobian in linear problems. (RWR)

  9. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
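
    The log-linear selection rule at the heart of the method, as a minimal numpy sketch; the features, weights, and behavior set are invented for illustration.

      import numpy as np

      def select_behavior(features, weights, rng):
          """Log-linear behavior selection: P(b) proportional to exp(w_b . features)."""
          scores = weights @ features
          p = np.exp(scores - scores.max())   # numerically stable softmax
          p /= p.sum()
          return rng.choice(len(p), p=p), p

      # Hypothetical: 3 state features, 4 behaviors (prey, swarm, follow, move).
      w = np.array([[0.5, 1.0, -0.2],
                    [1.2, -0.3, 0.4],
                    [0.1, 0.8, 0.9],
                    [-0.5, 0.2, 0.3]])
      behavior, probs = select_behavior(np.array([0.3, 0.9, 0.5]), w,
                                        np.random.default_rng(0))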

  10. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    PubMed Central

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895

  11. Optimal explicit strong-stability-preserving general linear methods.

    SciTech Connect

    Constantinescu, E.; Sandu, A.

    2010-07-01

    This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.
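
    For orientation, the SSP idea in its best-known special case (a classical Runge-Kutta scheme, not one of the paper's new general linear methods): each stage is a convex combination of forward-Euler steps, which is what preserves strong stability.

      import numpy as np

      def ssprk22_step(f, u, dt):
          """Optimal 2-stage, 2nd-order SSP Runge-Kutta step (Shu-Osher form)."""
          u1 = u + dt * f(u)
          return 0.5 * u + 0.5 * (u1 + dt * f(u1))

      # Method of lines for advection u_t + u_x = 0, first-order upwind in space.
      n = 100
      dx = 1.0 / n
      f = lambda u: -(u - np.roll(u, 1)) / dx
      u = np.exp(-100 * (np.linspace(0, 1, n) - 0.5) ** 2)
      for _ in range(50):
          u = ssprk22_step(f, u, 0.5 * dx)   # CFL-limited step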

  12. A review of some extensions to generalized linear models.

    PubMed

    Lindsey, J K

    Although generalized linear models are reasonably well known, they are not as widely used in medical statistics as might be appropriate, with the exception of logistic, log-linear, and some survival models. At the same time, the generalized linear modelling methodology is decidedly outdated in that more powerful methods, involving wider classes of distributions, non-linear regression, censoring and dependence among responses, are required. Limitations of the generalized linear modelling approach include the need for the iterated weighted least squares (IWLS) procedure for estimation and deviances for inferences; these restrict the class of models that can be used and do not allow direct comparisons among models from different distributions. Powerful non-linear optimization routines are now available and comparisons can more fruitfully be made using the complete likelihood function. The link function is an artefact, necessary for IWLS to function with linear models, but that disappears once the class is extended to truly non-linear models. Restricting comparisons of responses under different treatments to differences in means can be extremely misleading if the shape of the distribution is changing. This may involve changes in dispersion, or of other shape-related parameters such as the skewness in a stable distribution, with the treatments or covariates. Any exact likelihood function, defined as the probability of the observed data, takes into account the fact that all observable data are interval censored, thus directly encompassing the various types of censoring possible with duration-type data. In most situations this can now be as easily used as the traditional approximate likelihood based on densities. Finally, methods are required for incorporating dependencies among responses in models including conditioning on previous history and on random effects. One important procedure for constructing such likelihoods is based on Kalman filtering. PMID:10474135

  13. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim) with areas under the receiver operating characteristic curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were

  14. Beam envelope calculations in general linear coupled lattices

    SciTech Connect

    Chung, Moses; Qin, Hong; Groening, Lars; Xiao, Chen; Davidson, Ronald C.

    2015-01-15

    The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.

  15. Beam envelope calculations in general linear coupled lattices

    NASA Astrophysics Data System (ADS)

    Chung, Moses; Qin, Hong; Groening, Lars; Davidson, Ronald C.; Xiao, Chen

    2015-01-01

    The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.

  16. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  17. Canonical Correlation Analysis as the General Linear Model.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    The concept of the general linear model (GLM) is illustrated and how canonical correlation analysis is the GLM is explained, using a heuristic data set to demonstrate how canonical correlation analysis subsumes various multivariate and univariate methods. The paper shows how each of these analyses produces a synthetic variable, like the Yhat…

  18. Model based generalization analysis of common spatial pattern in brain computer interfaces

    PubMed Central

    Liu, Guangquan; Meng, Jianjun; Zhang, Dingguo; Zhu, Xiangyang

    2010-01-01

    In motor-imagery-based brain-computer interface (BCI) research, the Common Spatial Pattern (CSP) algorithm is used widely as a spatial filter on multi-channel electroencephalogram (EEG) recordings. Recently the overfitting effect of CSP has been gradually noticed, but what influences the overfitting is still unclear. In this work, the generalization of CSP is investigated by a simple linear mixing model. Several factors in this model are discussed, and the simulation results indicate that channel numbers and the correlation between signals influence the generalization of CSP significantly. A larger number of training trials and a longer time length of the trial would prevent overfitting. The experiments on real data also verify our conclusion. PMID:21886674

  19. A metahillslope model based on an analytical solution to a linearized Boussinesq equation for temporally variable recharge rates

    NASA Astrophysics Data System (ADS)

    Pauwels, Valentijn R. N.; Verhoest, Niko E. C.; de Troch, François P.

    2002-12-01

    In hydrology the slow, subsurface component of the discharge is usually referred to as base flow. One method to model base flow is the conceptual approach, in which the complex physical reality is simplified using hypotheses and assumptions, and the various physical processes are described mathematically. The purpose of this paper is to develop and validate a conceptual method, based on hydraulic theory, to calculate the base flow of a catchment, under observed precipitation rates. The governing groundwater equation, the Boussinesq equation, valid for a unit width sloping aquifer, is linearized and solved for a temporally variable recharge rate. The solution allows the calculation of the transient water table profile in and the outflow from an aquifer under temporally variable recharge rates. When a catchment is considered a metahillslope, the solution can be used, when coupled to a routing model, to calculate the catchment base flow. The model is applied to the Zwalm catchment and four subcatchments in Belgium. The results suggest that it is possible to model base flow at the catchment scale, using a Boussinesq-based metahillslope model. The results further indicate that it is sufficient to use a relatively simple formulation of the infiltration, overland flow, and base flow processes to obtain reasonable estimates of the total catchment discharge.
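
    For reference, a commonly used linearized form of the Boussinesq equation for a unit-width sloping aquifer of the kind the paper solves (a sketch; sign conventions and the linearization constant vary across papers, so this is not necessarily the paper's exact equation):

      f \frac{\partial h}{\partial t} = k D \cos\alpha \, \frac{\partial^2 h}{\partial x^2}
          + k \sin\alpha \, \frac{\partial h}{\partial x} + N(t),

    where h is the water table height above the sloping bed, f the drainable porosity, k the hydraulic conductivity, D the average saturated thickness used for linearization, α the bed slope, and N(t) the temporally variable recharge rate.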

  1. The generalized sidelobe canceller based on quaternion widely linear processing.

    PubMed

    Tao, Jian-wu; Chang, Wen-xiu

    2014-01-01

    We investigate the problem of quaternion beamforming based on widely linear processing. First, a quaternion model of a linear symmetric array with two-component electromagnetic (EM) vector sensors is presented. Based on the array's quaternion model, we propose the general expression of a quaternion semiwidely linear (QSWL) beamformer. Unlike the complex widely linear beamformer, the QSWL beamformer is based on the simultaneous operation on the quaternion vector, which is composed of two jointly proper complex vectors, and its involution counterpart. Second, we propose a useful implementation of the QSWL beamformer, that is, the QSWL generalized sidelobe canceller (GSC), and derive the simple expressions of the weight vectors. The QSWL GSC consists of two-stage beamformers. By designing the weight vectors of the two-stage beamformers, the interference is completely canceled in the output of the QSWL GSC and the desired signal is not distorted. We derive the array's gain expression and analyze the performance of the QSWL GSC in the presence of one type of interference. The advantage of the QSWL GSC is that the main beam can always point to the desired signal's direction and the robustness to DOA mismatch is improved. Finally, simulations are used to verify the performance of the proposed QSWL GSC. PMID:24955425

  2. The Generalized Sidelobe Canceller Based on Quaternion Widely Linear Processing

    PubMed Central

    Tao, Jian-wu; Chang, Wen-xiu

    2014-01-01

    We investigate the problem of quaternion beamforming based on widely linear processing. First, a quaternion model of a linear symmetric array with two-component electromagnetic (EM) vector sensors is presented. Based on the array's quaternion model, we propose the general expression of a quaternion semiwidely linear (QSWL) beamformer. Unlike the complex widely linear beamformer, the QSWL beamformer is based on the simultaneous operation on the quaternion vector, which is composed of two jointly proper complex vectors, and its involution counterpart. Second, we propose a useful implementation of the QSWL beamformer, that is, the QSWL generalized sidelobe canceller (GSC), and derive the simple expressions of the weight vectors. The QSWL GSC consists of two-stage beamformers. By designing the weight vectors of the two-stage beamformers, the interference is completely canceled in the output of the QSWL GSC and the desired signal is not distorted. We derive the array's gain expression and analyze the performance of the QSWL GSC in the presence of one type of interference. The advantage of the QSWL GSC is that the main beam can always point to the desired signal's direction and the robustness to DOA mismatch is improved. Finally, simulations are used to verify the performance of the proposed QSWL GSC. PMID:24955425

  3. Estimating classification images with generalized linear and additive models.

    PubMed

    Knoblauch, Kenneth; Maloney, Laurence T

    2008-12-22

    Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
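
    A minimal sketch of the GLM approach, assuming statsmodels: simulate yes/no responses from a noisy linear observer, then recover the template as the weight map of a Bernoulli-family GLM. The template, trial counts, and observer model are invented for illustration.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_trials, n_pix = 2000, 64
      template = np.sin(np.linspace(0, np.pi, n_pix))   # hypothetical template

      noise = rng.normal(size=(n_trials, n_pix))        # per-trial noise fields
      p_yes = 1 / (1 + np.exp(-(noise @ template)))     # observer decision rule
      resp = (rng.random(n_trials) < p_yes).astype(float)

      # Bernoulli GLM: the fitted weights are the classification-image estimate.
      glm = sm.GLM(resp, sm.add_constant(noise),
                   family=sm.families.Binomial()).fit()
      class_image = glm.params[1:]                      # drop the intercept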

  4. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in nonlife insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for credible risk classes are interpreted.
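
    The limited fluctuation standard the abstract refers to, in a small Python sketch; the 90% probability and 5% tolerance are conventional choices, not necessarily the paper's.

      from scipy.stats import norm

      def full_credibility_n(p=0.90, k=0.05):
          """Claims needed for full credibility of a Poisson claim frequency:
          estimate within +/-k of the mean with probability p."""
          z = norm.ppf((1 + p) / 2)
          return (z / k) ** 2

      def credibility_factor(n, n_full):
          """Square-root rule for partial credibility."""
          return min(1.0, (n / n_full) ** 0.5)

      n_full = full_credibility_n()         # about 1082 claims for p=0.90, k=0.05
      Z = credibility_factor(400, n_full)   # partial credibility for 400 claims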

  5. Generalization of continuous-variable quantum cloning with linear optics

    NASA Astrophysics Data System (ADS)

    Zhai, Zehui; Guo, Juan; Gao, Jiangrui

    2006-05-01

    We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.

  6. Linear spin-2 fields in most general backgrounds

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael

    2016-04-01

    We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.

  7. Obtaining General Relativity's N-body non-linear Lagrangian from iterative, linear algebraic scaling equations

    NASA Astrophysics Data System (ADS)

    Nordtvedt, K.

    2015-11-01

    A local system of bodies in General Relativity whose exterior metric field asymptotically approaches the Minkowski metric effaces any effects of the matter distribution exterior to its Minkowski boundary condition. To enforce to all orders this property of gravity which appears to hold in nature, a method using linear algebraic scaling equations is developed which generates by an iterative process an N-body Lagrangian expansion for gravity's motion-independent potentials which fulfills exterior effacement along with needed metric potential expansions. Then additional properties of gravity - interior effacement and Lorentz time dilation and spatial contraction - produce additional iterative, linear algebraic equations for obtaining the full non-linear and motion-dependent N-body gravity Lagrangian potentials as well.

  8. Comparative Study of Algorithms for Automated Generalization of Linear Objects

    NASA Astrophysics Data System (ADS)

    Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.

    2014-11-01

    Automated generalization, rooted in conventional cartography, has become an increasing concern in both geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, as it is impossible for human beings to observe the Earth and the processes in it without decreasing its scale. To get optimal results, cartographers and map-making agencies develop sets of rules and constraints; however, these rules remain under consideration and a topic of much research up to the present day. Reducing map-generation time and introducing objectivity are possible by developing automated map generalization algorithms (McMaster and Shea, 1988). Modification of the scale is traditionally a manual process which requires the knowledge of an expert cartographer and depends on the experience of the user, which makes the process very subjective, as every user may generate a different map from the same requirements. However, automating generalization based on cartographic rules and constraints can give consistent results. Also, developing an automated system for map generation is a demand of this rapidly changing world. The research we have conducted considers only generalization of roads, as they are one of the indispensable parts of a map. Dehradun city, in the Uttarakhand state of India, was selected as the study area. The study carried out a comparative study of the generalization software sets, operations, and algorithms currently available, and also considers the advantages and drawbacks of existing software used worldwide. The research concludes with the development of a road network generalization tool and with the final generalized road map of the study area, which explores the use of the open-source Python programming language and attempts to compare different road network generalization algorithms. Thus, the paper discusses alternative solutions for the automated generalization of linear objects using GIS technologies. Research made on automated of road network

  9. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    SciTech Connect

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver

    2015-06-28

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
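
    For reference, the linear generalized Langevin equation and the Volterra relation that such parametrizations invert, in a standard textbook form (conventions for mass factors and signs vary, so this is not necessarily the paper's notation):

      \dot{p}(t) = -\int_0^{t} K(\tau)\, p(t-\tau)\, \mathrm{d}\tau + \xi(t),
      \qquad
      \dot{C}(t) = -\int_0^{t} K(\tau)\, C(t-\tau)\, \mathrm{d}\tau,

    with C(t) = ⟨p(t)p(0)⟩ the momentum autocorrelation function. A Fourier-Laplace transform turns the convolution into a product, giving \hat{K}(s) = C(0)/\hat{C}(s) - s, which is the frequency-domain route to the memory kernel that the abstract advocates.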

  10. Elastic capsule deformation in general irrotational linear flows

    PubMed Central

    Szatmary, Alex C.; Eggleton, Charles D.

    2012-01-01

    Knowledge of the response of elastic capsules to imposed fluid flow is necessary for predicting deformation and motion of biological cells and synthetic capsules in microfluidic devices and in the microcirculation. Capsules have been studied in shear, planar extensional, and axisymmetric extensional flows. Here, the flow gradient matrix of a general irrotational linear flow is characterized by two parameters, its strain rate, defined as the maximum of the principal strain rates, and by a new term, q, the difference in the two lesser principal strain rates, scaled by the maximum principal strain rate; this characterization is valid for ellipsoids in irrotational linear flow, and it gives good results for spheres in general linear flows at low capillary numbers. We demonstrate that deformable non-spherical particles align with the principal axes of an imposed irrotational flow. Thus, it is most practical to model deformation of non-spherical particles already aligned with the flow, rather than considering each arbitrary orientation. Capsule deformation was modeled for a sphere, a prolate spheroid, and an oblate spheroid, subjected to combinations of uniaxial, biaxial, and planar extensional flows; modeling was performed using the immersed boundary method. The time response of each capsule to each flow was found, as were the steady-state deformation factor, mean strain energy, and surface area. For a given capillary number, planar flows led to more deformation than uniaxial or biaxial extensional flows. Capsule behavior in all cases was bounded by the response of capsules to uniaxial, biaxial, and planar extensional flow. PMID:23426110
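
    A small numpy sketch of the two-parameter characterization, under one consistent reading of the abstract's definition of q and assuming an incompressible (traceless) irrotational flow; the ordering and sign conventions here are assumptions.

      import numpy as np

      def flow_gradient(strain_rate, q):
          """Velocity-gradient matrix with principal strain rates s1 >= s2 >= s3,
          s1 the maximum strain rate and q = (s2 - s3) / s1; tracelessness fixes
          s2 + s3 = -s1. Then q = 0 is uniaxial, q = 1 planar, q = 3 biaxial."""
          s1 = strain_rate
          s2 = 0.5 * s1 * (q - 1.0)
          s3 = -0.5 * s1 * (q + 1.0)
          return np.diag([s1, s2, s3])

      G = flow_gradient(1.0, 1.0)   # planar extensional flow: diag(1, 0, -1)
      assert abs(np.trace(G)) < 1e-12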

  11. Generalization of continuous-variable quantum cloning with linear optics

    SciTech Connect

    Zhai Zehui; Guo Juan; Gao Jiangrui

    2006-05-15

    We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.

  12. General linear mode conversion coefficient in one dimension

    NASA Astrophysics Data System (ADS)

    Littlejohn, Robert G.; Flynn, William G.

    1993-03-01

    A general formula is presented for the mode conversion coefficient for linear mode conversion in one dimension, in terms of an arbitrary 2 x 2 reduced dispersion matrix describing the coupling of the modes. The mode conversion coefficient has three invariance properties which are discussed, namely, invariance under scaling transformations, canonical transformations, and a certain kind of Lorentz transformation. Formulas for the S matrix of mode conversion are also presented. The example of the conversion of electromagnetic waves to electrostatic waves in the ionosphere is used to illustrate the formulas.

  14. Generalized space and linear momentum operators in quantum mechanics

    SciTech Connect

    Costa, Bruno G. da

    2014-06-15

    We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which leads to the definition of a Hermitian deformed linear momentum operator hat{p}_q and its canonically conjugate deformed position operator hat{x}_q. A canonical transformation maps the Hamiltonian of a position-dependent mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is presented as an instance. Uncertainty and correspondence principles are analyzed.

  15. Generalized space and linear momentum operators in quantum mechanics

    NASA Astrophysics Data System (ADS)

    da Costa, Bruno G.; Borges, Ernesto P.

    2014-06-01

    We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which leads to the definition of a Hermitian deformed linear momentum operator hat{p}_q and its canonically conjugate deformed position operator hat{x}_q. A canonical transformation maps the Hamiltonian of a position-dependent mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is presented as an instance. Uncertainty and correspondence principles are analyzed.

  16. General mirror pairs for gauged linear sigma models

    NASA Astrophysics Data System (ADS)

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-11-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  17. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    ERIC Educational Resources Information Center

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  18. GENERALIZED PARTIALLY LINEAR MIXED-EFFECTS MODELS INCORPORATING MISMEASURED COVARIATES

    PubMed Central

    Liang, Hua

    2009-01-01

    In this article we consider a semiparametric generalized mixed-effects model, and propose combining local linear regression with penalized quasi-likelihood and local quasi-likelihood techniques to estimate both population and individual parameters and nonparametric curves. The proposed estimators take into account the local correlation structure of the longitudinal data. We establish asymptotic normality for the estimators of the parameters and an asymptotic expansion for the estimators of the nonparametric part. For practical implementation, we propose an appropriate algorithm. We also consider the measurement error problem in covariates in our model, and suggest a strategy for adjusting for the effects of measurement errors. We apply the proposed models and methods to study the relation between virologic and immunologic responses in AIDS clinical trials, in which virologic response is classified into binary variables. A dataset from an AIDS clinical study is analyzed. PMID:20160899

  19. Diagnostic Measures for Generalized Linear Models with Missing Covariates

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; SHI, XIAOYAN

    2009-01-01

    In this paper, we carry out an in-depth investigation of diagnostic measures for assessing the influence of observations and model misspecification in the presence of missing covariate data for generalized linear models. Our diagnostic measures include case-deletion measures and conditional residuals. We use the conditional residuals to construct goodness-of-fit statistics for testing possible misspecifications in model assumptions, including the sampling distribution. We develop specific strategies for incorporating missing data into goodness-of-fit statistics in order to increase the power of detecting model misspecification. A resampling method is proposed to approximate the p-value of the goodness-of-fit statistics. Simulation studies are conducted to evaluate our methods and a real data set is analysed to illustrate the use of our various diagnostic measures. PMID:20037674

  20. Optimization in generalized linear models: A case study

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina

    2016-06-01

    The maximum likelihood method is usually chosen to estimate the regression parameters of generalized linear models (GLMs), as well as for hypothesis testing and goodness-of-fit tests. The classical algorithm for estimating GLM parameters is Fisher scoring. In this work we propose computing the parameter estimates with two alternative methods: a derivative-based optimization method, namely BFGS, one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search method with a global particle swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the parameter estimates of a GLM.
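
    As a rough illustration of the comparison the authors describe, the log-likelihood of a GLM can be handed directly to a quasi-Newton optimizer. A minimal sketch with a Poisson GLM (log link) on synthetic data, since the reservoir dataset itself is not reproduced here; SciPy's BFGS stands in for the derivative-based alternative:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
      beta_true = np.array([0.5, 1.0, -0.7])
      y = rng.poisson(np.exp(X @ beta_true))

      def neg_log_lik(beta):
          # Poisson log-likelihood with log link, constant terms dropped
          eta = X @ beta
          return np.sum(np.exp(eta) - y * eta)

      res = minimize(neg_log_lik, x0=np.zeros(3), method="BFGS")
      print(res.x)   # should land close to beta_true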

  1. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1994-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speedup is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
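
    A serial toy version of the scheme conveys the role of the shift: factor K - sigma*M once, then alternate inverse solves with a Rayleigh-Ritz projection. This sketch replaces the paper's parallel banded solvers with a dense LU factorization and uses an assumed tridiagonal test matrix:

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve, eigh

      n, p, sigma = 200, 4, 0.0           # problem size, subspace size, shift
      K = (np.diag(2.0 * np.ones(n))
           - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
      M = np.eye(n)

      lu = lu_factor(K - sigma * M)       # factor the shifted matrix once
      X = np.random.default_rng(1).normal(size=(n, p))
      for _ in range(50):
          X = lu_solve(lu, M @ X)         # inverse-iteration step
          Kp, Mp = X.T @ K @ X, X.T @ M @ X
          w, V = eigh(Kp, Mp)             # Rayleigh-Ritz on the subspace
          X = X @ V
      print(w)                            # the p eigenvalues nearest sigma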

  2. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.

  3. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large…

  4. Models for cultural inheritance: a general linear model.

    PubMed

    Feldman, M W; Cavalli-Sforza, L L

    1975-07-01

    A theory of cultural evolution is proposed based on a general linear model of cultural transmission. The trait of an individual is assumed to depend on the values of the same trait in other individuals of the same, the previous, or earlier generations. The transmission matrix W has as its elements the proportional contributions of each individual (i) of one generation to each individual (j) of another. In addition, there is random variation (copy error or innovation) for each individual. Means and variances of a group of N individuals change with time and will stabilize asymptotically if the matrix W is irreducible and aperiodic. The rate of convergence is geometric and is governed by the largest non-unit eigenvalue of W. Groups fragment and evolve independently if W is reducible. The means of independent groups vary at random at a predicted rate, a phenomenon termed "random cultural drift". Variances within a group tend to be small, assuming cultural homogeneity. Transmission matrices of the teacher/leader type and of the parental type have been specifically considered, as well as social hierarchies. Various limitations, extensions, and possible applications are discussed.
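
    The linear transmission rule is easy to simulate. A minimal sketch with an arbitrary 4-person row-stochastic matrix W and Gaussian copy error; the rate of convergence is read off from the largest non-unit eigenvalue of W, as the abstract states:

      import numpy as np

      rng = np.random.default_rng(2)
      W = rng.random((4, 4))
      W /= W.sum(axis=1, keepdims=True)          # each row sums to 1 (stochastic)

      eigvals = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
      print("largest non-unit eigenvalue:", eigvals[1])

      x = rng.normal(size=4)                     # initial trait values
      for _ in range(100):
          x = W @ x + 0.01 * rng.normal(size=4)  # transmission plus copy error
      print("trait values after 100 generations:", x)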

  5. Bayesian Inference for Generalized Linear Models for Spiking Neurons

    PubMed Central

    Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias

    2010-01-01

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627

  6. A general protocol to afford enantioenriched linear homoprenylic amines.

    PubMed

    Bosque, Irene; Foubelo, Francisco; Gonzalez-Gomez, Jose C

    2013-11-21

    The reaction of a readily obtained chiral branched homoprenylammonium salt with a range of aldehydes, including aliphatic substrates, affords the corresponding linear isomers in good yields and enantioselectivities.

  7. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model

    PubMed Central

    Ma, Rongfei

    2015-01-01

    In this paper, a method for quantitative analysis of ammonia based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al-plate anodic gas-ionization sensor was used to obtain the current-voltage (I-V) data, which were then processed with the non-linear bistable dynamics model. Results showed that the proposed method quantitatively determines ammonia concentrations. PMID:25975362

  8. GENERAL: Linear Optical Scheme for Implementing Optimal Real State Cloning

    NASA Astrophysics Data System (ADS)

    Wan, Hong-Bo; Ye, Liu

    2010-06-01

    We propose an experimental scheme for implementing the optimal 1 → 3 real state cloning via linear optical elements. This method relies on one polarized qubit and two location qubits and is feasible with current experimental technology.

  9. Understanding general and specific connections between psychopathology and marital distress: a model based approach.

    PubMed

    South, Susan C; Krueger, Robert F; Iacono, William G

    2011-11-01

    Marital distress is linked to many types of mental disorders; however, no study to date has examined this link in the context of empirically based hierarchical models of psychopathology. There may be general associations between low levels of marital quality and broad groups of comorbid psychiatric disorders as well as links between marital adjustment and specific types of mental disorders. The authors examined this issue in a sample (N = 929 couples) of currently married couples from the Minnesota Twin Family Study who completed self-report measures of relationship adjustment and were also assessed for common mental disorders. Structural equation modeling indicated that (a) higher standing on latent factors of internalizing (INT) and externalizing (EXT) psychopathology was associated with lower standing on latent factors of general marital adjustment for both husbands and wives, (b) the magnitude of these effects was similar across husbands and wives, and (c) there were no residual associations between any specific mental disorder and overall relationship adjustment after controlling for the INT and EXT factors. These findings point to the utility of hierarchical models in understanding psychopathology and its correlates. Much of the link between mental disorder and marital distress operated at the level of broad spectrums of psychopathological variation (i.e., higher levels of marital distress were associated with disorder comorbidity), suggesting that the temperamental core of these spectrums contributes not only to symptoms of mental illness but to the behaviors that lead to impaired marital quality in adulthood.

  10. Accelerated Hazards Model based on Parametric Families Generalized with Bernstein Polynomials

    PubMed Central

    Chen, Yuhui; Hanson, Timothy; Zhang, Jiajia

    2015-01-01

    A transformed Bernstein polynomial that is centered at standard parametric families, such as Weibull or log-logistic, is proposed for use in the accelerated hazards model. This class provides a convenient way of creating a Bayesian non-parametric prior for smooth densities, blending the merits of parametric and non-parametric methods, that is amenable to standard estimation approaches. For example, optimization methods in SAS or R can yield the posterior mode and asymptotic covariance matrix. This novel nonparametric prior is employed in the accelerated hazards model, which is further generalized to time-dependent covariates. The proposed approach fares considerably better than previous approaches in simulations; data on the effectiveness of biodegradable carmustine polymers on recurrent malignant brain gliomas are investigated. PMID:24261450

  11. Hydraulic fracturing model based on the discrete fracture model and the generalized J integral

    NASA Astrophysics Data System (ADS)

    Liu, Z. Q.; Liu, Z. F.; Wang, X. H.; Zeng, B.

    2016-08-01

    The hydraulic fracturing technique is an effective stimulation for low-permeability reservoirs. In fracturing models, one key point is to accurately calculate the flux across the fracture surface and the stress intensity factor. To achieve high precision, the discrete fracture model is recommended for calculating the flux. Using the generalized J integral, the present work obtains an accurate simulation of the stress intensity factor. Based on the above elements, an alternative hydraulic fracturing model is presented. Examples are included to demonstrate the reliability of the proposed model and its ability to model fracture propagation. Subsequently, the model is used to describe the relationship between the geometry of the fracture and the fracturing equipment parameters. The numerical results indicate that the working pressure and the pump power will significantly influence the fracturing process.

  12. Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs

    ERIC Educational Resources Information Center

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-01-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…

  13. PYESSENCE: Generalized Coupled Quintessence Linear Perturbation Python Code

    NASA Astrophysics Data System (ADS)

    Leithes, Alexander

    2016-09-01

    PYESSENCE evolves linearly perturbed coupled quintessence models with multiple cold dark matter (CDM) fluid species and multiple dark energy (DE) scalar fields. It can be used to generate quantities such as the growth factor of large-scale structure for any coupled quintessence model with an arbitrary number of fields and fluids and arbitrary couplings.
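
    For orientation, the growth factor such a code computes reduces, in the uncoupled single-fluid limit, to the standard linear growth equation. A minimal sketch of that limiting case for an assumed flat LCDM background (Om0 = 0.3 is illustrative; the code's coupled multi-field equations are not reproduced here):

      import numpy as np
      from scipy.integrate import solve_ivp

      Om0 = 0.3                 # assumed matter density today

      def E2(a):                # (H/H0)^2 for flat LCDM
          return Om0 / a**3 + (1.0 - Om0)

      def Om(a):                # Omega_m(a)
          return Om0 / a**3 / E2(a)

      def dlnH_dlna(a):
          return -1.5 * Om0 / a**3 / E2(a)

      def rhs(lna, y):          # growth equation written in ln(a)
          a = np.exp(lna)
          delta, ddelta = y
          return [ddelta, 1.5 * Om(a) * delta - (2.0 + dlnH_dlna(a)) * ddelta]

      a_i = 1e-3                # start deep in matter domination, delta ~ a
      sol = solve_ivp(rhs, [np.log(a_i), 0.0], [a_i, a_i], rtol=1e-8)
      print("growth since a = 1e-3:", sol.y[0, -1] / a_i)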

  14. A General Linear Method for Equating with Small Samples

    ERIC Educational Resources Information Center

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  15. Spikelet structure and development in Cyperoideae (Cyperaceae): a monopodial general model based on ontogenetic evidence

    PubMed Central

    Vrijdaghs, Alexander; Reynders, Marc; Larridon, Isabel; Muasya, A. Muthama; Smets, Erik; Goetghebeur, Paul

    2010-01-01

    Background and Aims In Cyperoideae, one of the two subfamilies in Cyperaceae, unresolved homology questions about spikelets remained. This was particularly the case in taxa with distichously organized spikelets and in Cariceae, a tribe with complex compound inflorescences comprising male (co)florescences and deciduous female single-flowered lateral spikelets. Using ontogenetic techniques, a wide range of taxa were investigated, including some controversial ones, in order to find morphological arguments to understand the nature of the spikelet in Cyperoideae. This paper presents a review of both new ontogenetic data and current knowledge, discussing a cyperoid, general, monopodial spikelet model. Methods Scanning electron microscopy and light microscopy were used to examine spikelets of 106 species from 33 cyperoid genera. Results Ontogenetic data presented allow a consistent cyperoid spikelet model to be defined. Scanning and light microscopic images in controversial taxa such as Schoenus nigricans, Cariceae and Cypereae are interpreted accordingly. Conclusions Spikelets in all species studied consist of an indeterminate rachilla, and one to many spirally to distichously arranged glumes, each subtending a flower or empty. Lateral spikelets are subtended by a bract and have a spikelet prophyll. In distichously organized spikelets, combined concaulescence of the flowers and epicaulescence (a newly defined metatopic displacement) of the glumes has caused interpretational controversy in the past. In Cariceae, the male (co)florescences are terminal spikelets. Female single-flowered spikelets are positioned proximally on the rachis. To explain both this and the secondary spikelets in some Cypereae, the existence of an ontogenetic switch determining the development of a primordium into flower, or lateral axis is postulated. PMID:20197291

  16. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, A.; Edwards, T.C.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.

  17. Generalizing a categorization of students' interpretations of linear kinematics graphs

    NASA Astrophysics Data System (ADS)

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-06-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.

  18. A Nonlinear Multigrid Solver for an Atmospheric General Circulation Model Based on Semi-Implicit Semi-Lagrangian Advection of Potential Vorticity

    NASA Technical Reports Server (NTRS)

    McCormick, S.; Ruge, John W.

    1998-01-01

    This work represents a part of a project to develop an atmospheric general circulation model based on the semi-Lagrangian advection of potential vorticity (PV) with divergence as the companion prognostic variable.

  19. The general RF tuning for IH-DTL linear accelerators

    NASA Astrophysics Data System (ADS)

    Lu, Y. R.; Ratzinger, U.; Schlitt, B.; Tiede, R.

    2007-11-01

    RF tuning is essential for achieving the design resonant frequency and a flat electric field distribution along the axis of RF accelerating structures. Six different tuning concepts and their impact on the longitudinal field distribution are discussed in detail, in connection with the RF tuning of a 1:2 scale model of the 20.85 MV compact IH-DTL cavity, which was designed to accelerate protons, helium, oxygen, or C4+ ions from 400 keV/u to 7 MeV/u and to serve as the linear injector of a 430 MeV/u synchrotron [Y.R. Lu, S. Minaev, U. Ratzinger, B. Schlitt, R. Tiede, The Compact 20 MV IH-DTL for the Heidelberg Therapy Facility, in: Proceedings of the LINAC Conference, Luebeck, Germany, 2004 [1]; Y.R. Lu, Frankfurt University Dissertation, 2005 [2]].

  20. Dynamic modelling and simulation of linear Fresnel solar field model based on molten salt heat transfer fluid

    NASA Astrophysics Data System (ADS)

    Hakkarainen, Elina; Tähtinen, Matti

    2016-05-01

    Demonstrations of direct steam generation (DSG) in linear Fresnel collectors (LFC) have given promising results related to higher steam parameters compared to the current state-of-the-art parabolic trough collector (PTC) technology using oil as heat transfer fluid (HTF). However, DSG technology lacks a feasible solution for long-term thermal energy storage (TES), an option that is important for CSP technology in order to offer dispatchable power. Recently, molten salts have been proposed for use as HTF and directly as storage medium in both line-focusing solar fields, offering storage capacities of several hours. This direct molten salt (DMS) storage concept has already gained operational experience in a solar tower power plant, and it is in the demonstration phase for both LFC and PTC systems. Dynamic simulation programs are a valuable tool for the design and optimization of solar power plants. In this work, the APROS dynamic simulation program is used to model a DMS linear Fresnel solar field with a two-tank TES system, and example simulation results are presented in order to verify the functionality of the model and the capability of APROS for CSP modelling and simulation.

  1. New Linear Partitioning Models Based on Experimental Water: Supercritical CO2 Partitioning Data of Selected Organic Compounds.

    PubMed

    Burant, Aniela; Thompson, Christopher; Lowry, Gregory V; Karamalidis, Athanasios K

    2016-05-17

    Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch-reactor system with dual spectroscopic detectors: a near-infrared spectrometer for measuring the organic analyte in the CO2 phase and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published polyparameter linear free-energy relationship and to develop five new linear free-energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models each target a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use the vapor pressure and aqueous solubility of the organic compound at 25 °C and the CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound-class models provide better estimates of partitioning behavior for compounds in their class than does the model built for the entire data set.
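
    The published relationships are reported to use only vapor pressure, aqueous solubility (both at 25 °C), and CO2 density as predictors. A minimal sketch of that functional form; the shape of the equation and the coefficients a, b, c below are placeholders, not the paper's fitted values:

      import numpy as np

      def log_k_partition(p_vap_pa, solubility_mol_m3, rho_co2_kg_m3,
                          a=-1.0, b=1.0, c=0.003):
          # Hypothetical LFER: log K = a + b*log10(Pvap/Sw) + c*rho_CO2
          return a + b * np.log10(p_vap_pa / solubility_mol_m3) + c * rho_co2_kg_m3

      # Illustrative call for a thiophene-like compound at ~40 degC / 10 MPa
      print(log_k_partition(p_vap_pa=10600.0, solubility_mol_m3=36.0,
                            rho_co2_kg_m3=628.0))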

  2. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  3. Computer analysis of general linear networks using digraphs.

    NASA Technical Reports Server (NTRS)

    Mcclenahan, J. O.; Chan, S.-P.

    1972-01-01

    Investigation of the application of digraphs in analyzing general electronic networks, and development of a computer program based on a particular digraph method developed by Chen. The Chen digraph method is a topological method for solution of networks and serves as a shortcut when hand calculations are required. The advantage offered by this method of analysis is that the results are in symbolic form. It is limited, however, by the size of network that may be handled. Usually hand calculations become too tedious for networks larger than about five nodes, depending on how many elements the network contains. Direct determinant expansion for a five-node network is a very tedious process also.

  4. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    SciTech Connect

    Ureba, A.; Salguero, F. J.; Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A.; Miras, H.; Linares, R.; Perucha, M.

    2014-08-15

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose escalation; a partial breast…

  5. Development of an atmospheric model based on a generalized vertical coordinate. Final report, September 12, 1991--August 31, 1997

    SciTech Connect

    Arakawa, Akio; Konor, C.S.

    1997-12-31

    There are great conceptual advantages in the use of an isentropic vertical coordinate in atmospheric models. Design of such a model, however, requires overcoming computational problems due to the intersection of coordinate surfaces with the earth's surface. Under this project, the authors have completed the development of a model based on a generalized vertical coordinate, ζ = F(Θ, p, p_s), in which an isentropic coordinate can be combined with a terrain-following σ-coordinate, with a smooth transition between the two. One of the key issues in developing such a model is to satisfy the consistency between the predictions of pressure and potential temperature. In the model, the consistency is satisfied by the use of an equation that determines the vertical mass flux. A procedure to properly choose ζ = F(Θ, p, p_s) is also developed, which guarantees that ζ is a monotonic function of height even when unstable stratification occurs. There are two versions of the model constructed in parallel: one is the middle-latitude β-plane version and the other is the global version. Both versions include moisture prediction, relaxed large-scale condensation, and relaxed moist-convective adjustment schemes. A well-mixed planetary boundary layer (PBL) is also added.

  6. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    USGS Publications Warehouse

    Moisen, G.G.; Edwards, T.C.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.

  7. The Generalized Logit-Linear Item Response Model for Binary-Designed Items

    ERIC Educational Resources Information Center

    Revuelta, Javier

    2008-01-01

    This paper introduces the generalized logit-linear item response model (GLLIRM), which represents the item-solving process as a series of dichotomous operations or steps. The GLLIRM assumes that the probability function of the item response is a logistic function of a linear composite of basic parameters which describe the operations, and the…

  8. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed using a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
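
    The lower-bounding step in such a branch and bound scheme amounts to solving a linear program. A minimal sketch for a single bilinear term, relaxing w = x*y over a box with McCormick envelopes (a standard relaxation, not necessarily the paper's two-phase technique) and minimizing w with SciPy's linprog; branching would then split the box. The box bounds are arbitrary:

      import numpy as np
      from scipy.optimize import linprog

      xl, xu, yl, yu = -1.0, 2.0, 1.0, 3.0
      # Variables z = (x, y, w); minimize w subject to the four McCormick
      # inequalities for w = x*y, written as A_ub @ z <= b_ub.
      c = np.array([0.0, 0.0, 1.0])
      A_ub = np.array([
          [ yl,  xl, -1.0],   # w >= yl*x + xl*y - xl*yl
          [ yu,  xu, -1.0],   # w >= yu*x + xu*y - xu*yu
          [-yl, -xu,  1.0],   # w <= yl*x + xu*y - xu*yl
          [-yu, -xl,  1.0],   # w <= yu*x + xl*y - xl*yu
      ])
      b_ub = np.array([xl * yl, xu * yu, -xu * yl, -xl * yu])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(xl, xu), (yl, yu), (None, None)])
      print("lower bound on min x*y:", res.fun)   # -3.0, attained at (-1, 3)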

  10. On the dynamics of canopy resistance: Generalized linear estimation and relationships with primary micrometeorological variables

    NASA Astrophysics Data System (ADS)

    Irmak, Suat; Mutiibwa, Denis

    2010-08-01

    The 1-D and single layer combination-based energy balance Penman-Monteith (PM) model has limitations in practical application due to the lack of canopy resistance (rc) data for different vegetation surfaces. rc could be estimated by inversion of the PM model if the actual evapotranspiration (ETa) rate is known, but this approach has its own set of issues. Instead, an empirical method of estimating rc is suggested in this study. We investigated the relationships between primary micrometeorological parameters and rc and developed seven models to estimate rc for a nonstressed maize canopy on an hourly time step using a generalized-linear modeling approach. The most complex rc model uses net radiation (Rn), air temperature (Ta), vapor pressure deficit (VPD), relative humidity (RH), wind speed at 3 m (u3), aerodynamic resistance (ra), leaf area index (LAI), and solar zenith angle (Θ). The simplest model requires Rn, Ta, and RH. We present the practical implementation of all models via experimental validation using scaled up rc data obtained from the dynamic diffusion porometer-measured leaf stomatal resistance through an extensive field campaign in 2006. For further validation, we estimated ETa by solving the PM model using the modeled rc from all seven models and compared the PM ETa estimates with the Bowen ratio energy balance system (BREBS)-measured ETa for an independent data set in 2005. The relationships between hourly rc versus Ta, RH, VPD, Rn, incoming shortwave radiation (Rs), u3, wind direction, LAI, Θ, and ra were presented and discussed. We demonstrated the negative impact of exclusion of LAI when modeling rc, whereas exclusion of ra and Θ did not impact the performance of the rc models. Compared to the calibration results, the validation root mean square difference between observed and modeled rc increased by 5 s m-1 for all rc models developed, ranging from 9.9 s m-1 for the most complex model to 22.8 s m-1 for the simplest model, as compared with the
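
    The simplest model named above regresses rc on Rn, Ta, and RH alone. A minimal sketch of that regression on synthetic data (the coefficients and noise level are invented, not the study's fitted values), reporting an RMSD comparable in spirit to the 9.9-22.8 s m-1 range quoted:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 500
      Rn = rng.uniform(50, 700, n)       # net radiation, W m^-2
      Ta = rng.uniform(10, 35, n)        # air temperature, deg C
      RH = rng.uniform(30, 90, n)        # relative humidity, %
      rc = 120 - 0.08 * Rn - 1.2 * Ta + 0.5 * RH + rng.normal(0, 8, n)

      X = np.column_stack([np.ones(n), Rn, Ta, RH])
      coef, *_ = np.linalg.lstsq(X, rc, rcond=None)
      rmsd = np.sqrt(np.mean((X @ coef - rc) ** 2))
      print(coef, rmsd)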

  11. Discussion on climate oscillations: CMIP5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles

    NASA Astrophysics Data System (ADS)

    Scafetta, Nicola

    2013-11-01

    Power spectra of global surface temperature (GST) records (available since 1850) reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. Equivalent oscillations are found in numerous multisecular paleoclimatic records. The Coupled Model Intercomparison Project 5 (CMIP5) general circulation models (GCMs), to be used in the IPCC Fifth Assessment Report (AR5, 2013), are analyzed and found not able to reconstruct this variability. In particular, from 2000 to 2013.5 a GST plateau is observed while the GCMs predicted a warming rate of about 2 °C/century. In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. For example, a quasi 60-year natural oscillation simultaneously explains the 1850-1880, 1910-1940 and 1970-2000 warming periods, the 1880-1910 and 1940-1970 cooling periods and the post 2000 GST plateau. This hypothesis implies that about 50% of the ~ 0.5 °C global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5 °C range (as claimed by the IPCC, 2007) to 1.0-2.3 °C with a likely median of ~ 1.5 °C instead of ~ 3.0 °C. Also modern paleoclimatic temperature reconstructions showing a larger preindustrial variability than the hockey-stick shaped temperature reconstructions developed in early 2000 imply a weaker anthropogenic effect and a stronger solar contribution to climatic changes. The observed natural oscillations could be driven by astronomical forcings. The ~ 9.1 year oscillation appears to be a combination of long soli-lunar tidal oscillations, while quasi 10-11, 20 and 60 year oscillations are typically found among major solar and heliospheric oscillations driven mostly by Jupiter and Saturn movements. Solar models based

  12. Consistent linearization of the element-independent corotational formulation for the structural analysis of general shells

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.

    1988-01-01

    A consistent linearization is provided for the element-independent corotational formulation, providing the proper first and second variations of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.

  13. Optimal explicit strong-stability-preserving general linear methods : complete results.

    SciTech Connect

    Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.

    2009-03-03

    This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.

  14. A generalized concordance correlation coefficient based on the variance components generalized linear mixed models for overdispersed count data.

    PubMed

    Carrasco, Josep L

    2010-09-01

    The classical concordance correlation coefficient (CCC) for measuring agreement among a set of observers assumes normally distributed data and a linear relationship between the mean and the subject and observer effects. Here, the CCC is generalized to accommodate any distribution from the exponential family by means of generalized linear mixed model (GLMM) theory, and it is applied to the case of overdispersed count data. An example of CD34+ cell count data is provided to show the applicability of the procedure. In the latter case, different CCCs are defined and applied to the data by changing the GLMM that fits the data. A simulation study is carried out to explore the behavior of the procedure with small and moderate sample sizes.
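
    Under the variance-components view used here, the CCC is the share of total variance attributable to subjects. A minimal sketch with illustrative GLMM variance-component estimates (the numbers are made up):

      def concordance_cc(var_subject, var_observer, var_error):
          """Variance-components CCC: subject variance over total variance."""
          return var_subject / (var_subject + var_observer + var_error)

      # Illustrative variance-component estimates from a fitted GLMM
      print(concordance_cc(var_subject=4.2, var_observer=0.3, var_error=1.1))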

  15. Prediction of formability for non-linear deformation history using generalized forming limit concept (GFLC)

    NASA Astrophysics Data System (ADS)

    Volk, Wolfram; Suh, Joungsik

    2013-12-01

    The prediction of formability is one of the most important tasks in sheet metal forming simulation. The common criterion in industrial applications is the Forming Limit Curve (FLC). The big advantage of FLCs is the easy interpretation of simulation or measurement data, in combination with an ISO standard for their experimental determination. However, conventional FLCs are limited to almost linear and unbroken strain paths; deformation histories with non-linear strain increments often lead to large discrepancies from the FLC prediction. In this paper a phenomenological approach, the so-called Generalized Forming Limit Concept (GFLC), is introduced to predict localized necking for arbitrary deformation histories with an unlimited number of non-linear strain increments. The GFLC consists of the conventional FLC and an acceptable number of experiments with bi-linear deformation histories. With the newly defined "Principle of Equivalent Pre-Forming", every deformation state built up of two linear strain increments can be transformed into a pure linear strain path with the same amount of used formability of the material. This procedure can be repeated as often as necessary. It therefore allows a robust and cost-effective analysis of incipient instability in Finite Element Analysis (FEA) for arbitrary deformation histories. In addition, the GFLC is fully downward compatible with the established FLC for pure linear strain paths.

  16. A BGG-Type Resolution for Tensor Modules over General Linear Superalgebra

    NASA Astrophysics Data System (ADS)

    Cheng, Shun-Jen; Kwon, Jae-Hoon; Lam, Ngau

    2008-04-01

    We construct a Bernstein-Gelfand-Gelfand type resolution in terms of direct sums of Kac modules for the finite-dimensional irreducible tensor representations of the general linear superalgebra. As a consequence, it follows that the unique maximal submodule of a corresponding reducible Kac module is generated by its proper singular vector.

  17. Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero

    2011-01-01

    This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…

  18. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  19. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  20. Generalized model of double random phase encoding based on linear algebra

    NASA Astrophysics Data System (ADS)

    Nakano, Kazuya; Takeda, Masafumi; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2013-01-01

    We propose a generalized model for double random phase encoding (DRPE) based on linear algebra. We define the DRPE procedure in six steps: the first three steps form the encryption procedure, while the latter three make up the decryption procedure. We note that the first (mapping) and second (transform) steps can be generalized. As an example of this generalization, we use 3D mapping and a transform matrix that is a combination of a discrete cosine transform and two permutation matrices. Finally, we investigate the sensitivity of the proposed model to errors in the decryption key.
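
    The six-step procedure specializes, for the usual choice of Fourier-domain masks, to classical DRPE. A minimal sketch of that classical case with NumPy FFTs (the paper's generalized 3D mapping and DCT-based transform are not reproduced here):

      import numpy as np

      rng = np.random.default_rng(4)
      img = rng.random((64, 64))                        # stand-in for an input image
      phi1 = np.exp(2j * np.pi * rng.random(img.shape)) # input-plane mask (key 1)
      phi2 = np.exp(2j * np.pi * rng.random(img.shape)) # Fourier-plane mask (key 2)

      # Encryption: mask, Fourier transform, mask again, inverse transform
      cipher = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

      # Decryption: undo the Fourier-plane mask, then the input-plane mask
      decrypted = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phi2)) * np.conj(phi1)
      print(np.allclose(decrypted.real, img))           # True up to rounding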

  1. Implementing general quantum measurements on linear optical and solid-state qubits

    NASA Astrophysics Data System (ADS)

    Ota, Yukihiro; Ashhab, Sahel; Nori, Franco

    2013-03-01

    We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.

  2. H∞ filtering of Markov jump linear systems with general transition probabilities and output quantization.

    PubMed

    Shen, Mouquan; Park, Ju H

    2016-07-01

    This paper addresses the H∞ filtering of continuous Markov jump linear systems with general transition probabilities and output quantization. The S-procedure is employed to handle the adverse influence of the quantization, and a new approach is developed to deal with the nonlinearity induced by uncertain and unknown transition probabilities. Sufficient conditions are then presented to ensure that the filtering error system is stochastically stable with the prescribed performance requirement. Without a specified structure imposed on the introduced slack variables, a flexible filter design method is established in terms of linear matrix inequalities. The effectiveness of the proposed method is validated by a numerical example. PMID:27129765

  3. MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems

    SciTech Connect

    Young, D.M.; Chen, Jen Yuan

    1996-11-01

    This paper is concerned with the solution of the linear system Au = b, where A is a real square nonsingular matrix which is large, sparse, and nonsymmetric. We consider the use of Krylov subspace methods. We first choose an initial approximation u^(0) to the solution ū = A^(-1)b. The GMRES (Generalized Minimum Residual) algorithm for solving nonsymmetric linear systems was developed by Saad and Schultz (1986) and has been used extensively for many years for sparse systems. This paper considers a generalization of GMRES; it is similar to GMRES except that we let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.
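
    For context, the baseline method being generalized is available off the shelf. A minimal sketch running restarted GMRES on a small sparse nonsymmetric system with SciPy; the MGMRES modification itself (choosing Z = A^T Y for a symmetric nonsingular Y) is not implemented here:

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import gmres

      n = 1000
      A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1],
                shape=(n, n), format="csr")     # nonsymmetric tridiagonal test matrix
      b = np.ones(n)

      x, info = gmres(A, b, atol=1e-10, restart=50)
      print(info, np.linalg.norm(A @ x - b))    # info == 0 means converged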

  4. Local influence to detect influential data structures for generalized linear mixed models.

    PubMed

    Ouwens, M J; Tan, F E; Berger, M P

    2001-12-01

    This article discusses the generalization of the local influence measures for normally distributed responses to local influence measures for generalized linear models with random effects. For these models, it is shown that the subject-oriented influence measure is a special case of the proposed observation-oriented influence measure. A two-step diagnostic procedure is proposed. The first step is to search for influential subjects. A search for influential observations is proposed as the second step. An illustration of a two-treatment, multiple-period crossover trial demonstrates the practical importance of the detection of influential observations in addition to the detection of influential subjects.
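
    The paper's measures target GLMMs; for the ordinary-GLM analogue, recent statsmodels releases expose standard case-deletion influence diagnostics, sketched here on simulated Poisson data. All data are illustrative, and the get_influence API is assumed from recent statsmodels versions.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    X = sm.add_constant(rng.normal(size=(200, 2)))
    y = rng.poisson(np.exp(X @ np.array([0.3, 0.4, -0.2])))

    res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    infl = res.get_influence()                  # case-deletion influence diagnostics
    cooks_d, _ = infl.cooks_distance            # one distance per observation
    print(np.argsort(cooks_d)[-5:])             # indices of the five most influential observations
    ```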

  5. Integrable generalized spin ladder models based on the SU(1|3) and SU(3|1) algebras

    NASA Astrophysics Data System (ADS)

    Tonel, Arlei Prestes; Foerster, Angela; Hibberd, Katrina; Links, Jon

    2003-12-01

    We present two integrable spin ladder models which possess a general free parameter besides the rung coupling J. The models are exactly solvable by means of the Bethe ansatz method and we present the Bethe ansatz equations. We analyze the elementary excitations of the models which reveal the existence of a gap for both models that depends on the free parameter.

  6. Fitting host-parasitoid models with CV² > 1 using hierarchical generalized linear models.

    PubMed Central

    Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K

    2000-01-01

    The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907
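
    For the no-HDI case described above, the model reduces to an ordinary binomial GLM. A hedged sketch with simulated patch data follows; statsmodels accepts a successes/failures pair as the response, and all values are made up.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    hosts = rng.integers(5, 50, size=60)                     # hosts per patch (simulated)
    p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 0.04 * hosts)))    # density-dependent parasitism (HDD)
    parasitized = rng.binomial(hosts, p_true)

    X = sm.add_constant(hosts.astype(float))                 # host density as covariate
    endog = np.column_stack([parasitized, hosts - parasitized])
    fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()
    print(fit.params)                                        # roughly recovers [-1.0, 0.04]
    ```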

  7. A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories

    NASA Astrophysics Data System (ADS)

    Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.; Noller, Johannes

    2016-08-01

    We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and ``Beyond Horndeski'' theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.

  8. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
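
    The point is easy to reproduce numerically: simulate a two-group design whose raw response is strongly bimodal, hence decidedly non-normal, while the residuals are Gaussian. All values below are arbitrary.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    group = np.repeat([0.0, 5.0], 200)                  # two-group design with a large mean difference
    y = group + rng.normal(size=group.size)             # raw response: bimodal, clearly non-normal
    resid = y - np.where(group == 0.0, y[:200].mean(), y[200:].mean())  # residuals about group means

    print(stats.shapiro(y)[1])       # tiny p-value: the raw data "fail" the normality test
    print(stats.shapiro(resid)[1])   # large p-value: the residuals are well behaved
    ```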

  9. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

    Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
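
    The reference implementation is the R package randomGLM; the sketch below is a heavily simplified Python rendering of the core idea (bagging plus a random feature subspace around a GLM), omitting the forward selection and interaction terms. The helper name and all parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rglm_proba(X, y, X_new, n_bags=50, seed=0):
        """Bagged logistic-GLM ensemble: bootstrap rows and a random feature subspace per bag."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        prob, used = np.zeros(len(X_new)), 0
        for _ in range(n_bags):
            rows = rng.integers(0, n, n)                          # bootstrap sample of observations
            if np.unique(y[rows]).size < 2:                       # skip degenerate single-class resamples
                continue
            feats = rng.choice(p, size=max(1, p // 2), replace=False)  # random subspace
            glm = LogisticRegression(max_iter=1000).fit(X[rows][:, feats], y[rows])
            prob += glm.predict_proba(X_new[:, feats])[:, 1]
            used += 1
        return prob / used                                        # averaged class-1 probability
    ```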

  10. Capelli bitableaux and Z-forms of general linear Lie superalgebras.

    PubMed Central

    Brini, A; Teolis, A G

    1990-01-01

    The combinatorics of the enveloping algebra UQ(pl(L)) of the general linear Lie superalgebra of a finite dimensional Z2-graded Q-vector space is studied. Three non-equivalent Z-forms of UQ(pl(L)) are introduced: one of these Z-forms is a version of the Kostant Z-form and the others are Lie algebra analogs of Rota and Stein's straightening formulae for the supersymmetric algebra Super[L P] and for its dual Super[L* P*]. The method is based on an extension of Capelli's technique of variabili ausiliarie (auxiliary variables) to algebras containing positively and negatively signed elements. PMID:11607048

  11. To transform or not to transform: using generalized linear mixed models to analyse reaction time data

    PubMed Central

    Lo, Steson; Andrews, Sally

    2015-01-01

    Linear mixed-effect models (LMMs) are being increasingly widely used in psychology to analyse multi-level research designs. Because they do not average across individual responses, LMMs can address some of the problems identified by Speelman and McGann (2013) with the use of mean data. However, recent guidelines for using LMM to analyse skewed reaction time (RT) data collected in many cognitive psychological studies recommend the application of non-linear transformations to satisfy assumptions of normality. Uncritical adoption of this recommendation has important theoretical implications which can yield misleading conclusions. For example, Balota et al. (2013) showed that analyses of raw RT produced additive effects of word frequency and stimulus quality on word identification, which conflicted with the interactive effects observed in analyses of transformed RT. Generalized linear mixed-effect models (GLMM) provide a solution to this problem by satisfying normality assumptions without the need for transformation. This allows differences between individuals to be properly assessed, using the metric most appropriate to the researcher's theoretical context. We outline the major theoretical decisions involved in specifying a GLMM, and illustrate them by reanalysing Balota et al.'s datasets. We then consider the broader benefits of using GLMM to investigate individual differences. PMID:26300841
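
    statsmodels offers only limited GLMM support, so the sketch below drops the random effects and shows just the key modeling move: a Gamma GLM with an identity link fits skewed RTs on the raw millisecond scale while keeping effects additive. The data are simulated, and the link-class name assumes a recent statsmodels release.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    freq = rng.integers(0, 2, size=500)            # word frequency: 0 = low, 1 = high
    mu = 650.0 - 60.0 * freq                       # additive effect on raw RT (ms)
    rt = rng.gamma(shape=9.0, scale=mu / 9.0)      # right-skewed RTs with mean mu

    X = sm.add_constant(freq.astype(float))
    fam = sm.families.Gamma(link=sm.families.links.Identity())
    fit = sm.GLM(rt, X, family=fam).fit()
    print(fit.params)                              # approximately [650, -60], no transformation needed
    ```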

  12. Digit Span is (mostly) related linearly to general intelligence: Every extra bit of span counts.

    PubMed

    Gignac, Gilles E; Weiss, Lawrence G

    2015-12-01

    Historically, Digit Span has been regarded as a relatively poor indicator of general intellectual functioning (g). In fact, Wechsler (1958) contended that beyond an average level of Digit Span performance, there was little benefit to possessing a greater memory span. Although Wechsler's position does not appear to have ever been tested empirically, it does appear to have become clinical lore. Consequently, the purpose of this investigation was to test Wechsler's contention on the Wechsler Adult Intelligence Scale-Fourth Edition normative sample (N = 1,800; ages: 16 - 69). Based on linear and nonlinear contrast analyses of means, as well as linear and nonlinear bifactor model analyses, all 3 Digit Span indicators (LDSF, LDSB, and LDSS) were found to exhibit primarily linear associations with FSIQ/g. Thus, the commonly held position that Digit Span performance beyond an average level is not indicative of greater intellectual functioning was not supported. The results are discussed in light of the increasing evidence across multiple domains that memory span plays an important role in intellectual functioning.

  13. Linear and nonlinear light scattering and absorption in free-electron nanoclusters with diffuse surface: General considerations and linear response

    SciTech Connect

    Fomichev, S. V.; Becker, W.

    2010-06-15

    Both linear and nonlinear scattering and absorption of a laser pulse by spherical nanoclusters with free electrons and with a diffuse surface are considered in the collisionless hydrodynamics approximation. The developed model of forced collective motion of electrons confined to a cluster permits one to introduce consistently into the theory all the sources of nonlinearity, as well as the inhomogeneity of the cluster near its boundary. Two different perturbation theories corresponding to different laser intensity ranges are developed in this context, and both cold metal clusters and hot laser-heated or -ionized clusters are considered within the same approach. In the present article, after developing the full nonlinear model, we investigate in detail the linear response of the free-electron cluster with diffuse surface to the laser field, especially the properties of the linear Mie resonance (width and position). Under certain conditions, depending on the various cluster parameters, secondary resonances are found. The properties of resonance-enhanced third-order harmonic generation and nonlinear laser absorption and their dependence on the shape of the diffuse surface will be presented separately.

  14. A time series generalized functional model based method for vibration-based damage precise localization in structures consisting of 1D, 2D, and 3D elements

    NASA Astrophysics Data System (ADS)

    Sakaris, C. S.; Sakellariou, J. S.; Fassois, S. D.

    2016-06-01

    This study focuses on the problem of vibration-based damage precise localization via data-based, time series type, methods for structures consisting of 1D, 2D, or 3D elements. A Generalized Functional Model Based method is postulated, built on an expanded Vector-dependent Functionally Pooled ARX (VFP-ARX) model form capable of accounting for an arbitrary structural topology. The FP model's operating parameter vector elements are properly constrained to reflect any given topology. Damage localization is based on operating parameter vector estimation within the specified topology, so that the location estimate and its uncertainty bounds are statistically optimal. The method's effectiveness is experimentally demonstrated through damage precise localization on a laboratory spatial truss structure using various damage scenarios and a single pair of random excitation-vibration response signals in a low and limited frequency bandwidth.

  15. The heritability of general cognitive ability increases linearly from childhood to young adulthood.

    PubMed

    Haworth, C M A; Wright, M J; Luciano, M; Martin, N G; de Geus, E J C; van Beijsterveldt, C E M; Bartels, M; Posthuma, D; Boomsma, D I; Davis, O S P; Kovas, Y; Corley, R P; Defries, J C; Hewitt, J K; Olson, R K; Rhea, S-A; Wadsworth, S J; Iacono, W G; McGue, M; Thompson, L A; Hart, S A; Petrill, S A; Lubinski, D; Plomin, R

    2010-11-01

    Although common sense suggests that environmental influences increasingly account for individual differences in behavior as experiences accumulate during the course of life, this hypothesis has not previously been tested, in part because of the large sample sizes needed for an adequately powered analysis. Here we show for general cognitive ability that, to the contrary, genetic influence increases with age. The heritability of general cognitive ability increases significantly and linearly from 41% in childhood (9 years) to 55% in adolescence (12 years) and to 66% in young adulthood (17 years) in a sample of 11 000 pairs of twins from four countries, a larger sample than all previous studies combined. In addition to its far-reaching implications for neuroscience and molecular genetics, this finding suggests new ways of thinking about the interface between nature and nurture during the school years. Why, despite life's 'slings and arrows of outrageous fortune', do genetically driven differences increasingly account for differences in general cognitive ability? We suggest that the answer lies with genotype-environment correlation: as children grow up, they increasingly select, modify and even create their own experiences in part based on their genetic propensities. PMID:19488046

  16. Linear stability of a generalized multi-anticipative car following model with time delays

    NASA Astrophysics Data System (ADS)

    Ngoduy, D.

    2015-05-01

    In traffic flow, multi-anticipative driving behavior describes the reaction of a vehicle to the driving behavior of many vehicles in front, whereas the time delay is defined as a physiological parameter reflecting the period of time between perceiving a stimulus of leading vehicles and performing a relevant action such as acceleration or deceleration. A lot of effort has been undertaken to understand the effects of either multi-anticipative driving behavior or time delays on traffic flow dynamics. This paper is a first attempt to analytically investigate the dynamics of a generalized class of car-following models with multi-anticipative driving behavior and different time delays associated with such multi-anticipations. To this end, this paper derives the (long-wavelength) linear stability condition of such a car-following model and studies how the combination of different choices of multi-anticipations and time delays affects the instabilities of traffic flow with respect to a small perturbation. It is found that the effects of delays and multi-anticipations are model-dependent; that is, the destabilization effect of delays is suppressed by the stabilization effect of multi-anticipations. Moreover, the weight factor reflecting the distribution of the driver's sensing to the relative gaps of leading vehicles is less sensitive to the linear stability condition of traffic flow than the weight factor for the relative speed of those leading vehicles.

  17. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    PubMed

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.

  18. General theory of electronic transport in molecular crystals. I. Local linear electron-phonon coupling

    NASA Astrophysics Data System (ADS)

    Silbey, R.; Munn, R. W.

    1980-02-01

    An improved general theory of electronic transport in molecular crystals with local linear electron-phonon coupling is presented. It is valid for arbitrary electronic and phonon bandwidths and for arbitrary electron-phonon coupling strength, yielding small-polaron theory for narrow electronic bands and strong coupling, and semiconductor theory for wide electronic bands and weak coupling. Detailed results are derived for electronic excitations fully clothed with phonons and having a bandwidth no larger than the phonon frequency; the electronic and phonon densities of states are taken as Gaussian for simplicity. The dependence of the diffusion coefficient on temperature and on the other parameters is analyzed thoroughly. The calculated behavior provides a rational interpretation of observed trends in the magnitude and temperature dependence of charge-carrier drift mobilities in molecular crystals.

  19. Generalized Linear Models for Identifying Predictors of the Evolutionary Diffusion of Viruses

    PubMed Central

    Beard, Rachel; Magee, Daniel; Suchard, Marc A.; Lemey, Philippe; Scotch, Matthew

    2014-01-01

    Bioinformatics and phylogeography models use viral sequence data to analyze spread of epidemics and pandemics. However, few of these models have included analytical methods for testing whether certain predictors such as population density, rates of disease migration, and climate are drivers of spatial spread. Understanding the specific factors that drive spatial diffusion of viruses is critical for targeting public health interventions and curbing spread. In this paper we describe the application and evaluation of a model that integrates demographic and environmental predictors with molecular sequence data. The approach parameterizes evolutionary spread of RNA viruses as a generalized linear model (GLM) within a Bayesian inference framework using Markov chain Monte Carlo (MCMC). We evaluate this approach by reconstructing the spread of H5N1 in Egypt while assessing the impact of individual predictors on evolutionary diffusion of the virus. PMID:25717395

  20. Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem

    NASA Technical Reports Server (NTRS)

    Lu, Huei-Iin; Robertson, Franklin R.

    1999-01-01

    A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can alternatively be solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.
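
    The determinacy behavior is generic and can be seen with NumPy's pseudoinverse on toy systems; the random matrices below are arbitrary stand-ins for the spectral LBE operator.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    b = rng.normal(size=9)

    A_od = rng.normal(size=(9, 5))              # over-determined: more equations than unknowns
    x_od = np.linalg.pinv(A_od) @ b             # least-squares solution; residual generally nonzero
    print(np.linalg.norm(A_od @ x_od - b))

    A_ud = rng.normal(size=(5, 9))              # under-determined: many exact solutions
    x_ud = np.linalg.pinv(A_ud) @ b[:5]         # pseudoinverse picks the minimum-norm one
    print(np.linalg.norm(A_ud @ x_ud - b[:5]))  # ~0: solved exactly
    ```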

  1. Allowable sampling period for consensus control of multiple general linear dynamical agents in random networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ya; Tian, Yu-Ping

    2010-11-01

    This article studies the consensus problem for a group of sampled-data general linear dynamical agents over random communication networks. Dynamic output feedback protocols are applied to solve the consensus problem. When the sampling period is sufficiently small, it is shown that as long as the mean topology has globally reachable nodes, mean square consensus can be achieved by selecting protocol parameters so that n - 1 specified subsystems are simultaneously stabilised. However, when the sampling period is comparatively large, it is revealed that, unlike for low-order integrator multi-agent systems, the consensus problem may be unsolvable. By using hybrid dynamical system theory, an allowable upper bound on the sampling period is further proposed. Two approaches to designing protocols are also provided. Simulations are given to illustrate the validity of the proposed approaches.
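
    A hedged toy version with single-integrator agents (the paper treats general linear agents with dynamic output feedback over random networks) shows why an upper bound on the sampling period exists: the sampled update x_{k+1} = (I - T L) x_k converges only when T is small relative to the graph Laplacian spectrum. The graph and values below are arbitrary.

    ```python
    import numpy as np

    L = np.array([[ 1., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])              # Laplacian of a 3-agent path graph (eigenvalues 0, 1, 3)
    x = np.array([1.0, 5.0, -2.0])               # initial states; their average is 4/3

    for T in (0.1, 0.8):                         # allowable vs too-large sampling period (bound here: T < 2/3)
        z = x.copy()
        for _ in range(300):
            z = z - T * (L @ z)                  # sampled-data consensus update
        print(T, z)                              # T=0.1 converges to ~4/3; T=0.8 diverges
    ```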

  2. A Bayesian approach for inducing sparsity in generalized linear models with multi-category response

    PubMed Central

    2015-01-01

    Background The dimension and complexity of high-throughput gene expression data create many challenges for downstream analysis. Several approaches exist to reduce the number of variables with respect to small sample sizes. In this study, we utilized the Generalized Double Pareto (GDP) prior to induce sparsity in a Bayesian Generalized Linear Model (GLM) setting. The approach was evaluated using a publicly available microarray dataset containing 99 samples corresponding to four different prostate cancer subtypes. Results A hierarchical Sparse Bayesian GLM using GDP prior (SBGG) was developed to take into account the progressive nature of the response variable. We obtained an average overall classification accuracy between 82.5% and 94%, which was higher than Support Vector Machine, Random Forest or a Sparse Bayesian GLM using double exponential priors. Additionally, SBGG outperforms the other 3 methods in correctly identifying pre-metastatic stages of cancer progression, which can prove extremely valuable for therapeutic and diagnostic purposes. Importantly, using Geneset Cohesion Analysis Tool, we found that the top 100 genes produced by SBGG had an average functional cohesion p-value of 2.0E-4 compared to 0.007 to 0.131 produced by the other methods. Conclusions Using GDP in a Bayesian GLM model applied to cancer progression data results in better subclass prediction. In particular, the method identifies pre-metastatic stages of prostate cancer with substantially better accuracy and produces more functionally relevant gene sets. PMID:26423345

  3. Unification of the general non-linear sigma model and the Virasoro master equation

    SciTech Connect

    Boer, J. de; Halpern, M.B. |

    1997-06-01

    The Virasoro master equation describes a large set of conformal field theories known as the affine-Virasoro constructions, in the operator algebra (affine Lie algebra) of the WZW model, while the Einstein equations of the general non-linear sigma model describe another large set of conformal field theories. This talk summarizes recent work which unifies these two sets of conformal field theories, together with a presumably large class of new conformal field theories. The basic idea is to consider spin-two operators of the form L_ij ∂x^i ∂x^j in the background of a general sigma model. The requirement that these operators satisfy the Virasoro algebra leads to a set of equations called the unified Einstein-Virasoro master equation, in which the spin-two spacetime field L_ij couples to the usual spacetime fields of the sigma model. The one-loop form of this unified system is presented, and some of its algebraic and geometric properties are discussed.

  4. On a general theory for compressing process and aeroacoustics: linear analysis

    NASA Astrophysics Data System (ADS)

    Mao, F.; Shi, Y. P.; Wu, J. Z.

    2010-06-01

    Of the three mutually coupled fundamental processes (shearing, compressing, and thermal) in a general fluid motion, only the general formulation for the compressing process and a subprocess of it, the subject of aeroacoustics, as well as their physical coupling with shearing and thermal processes, have so far not reached a consensus. This situation has caused difficulties for various in-depth complex multiprocess flow diagnoses, optimal configuration design, and flow/noise control. As the first step toward the desired formulation in the fully nonlinear regime, this paper employs the operator factorization method to revisit the analytic linear theories of the fundamental processes and their decomposition, especially the further splitting of the compressing process into acoustic and entropy modes, developed from the 1940s to the 1980s. The flow treated here is small disturbances of a compressible, viscous, and heat-conducting polytropic gas in an unbounded domain with arbitrary sources of mass, external body force, and heat addition. Previous results are thereby revised and extended to a complete and unified theory. The theory provides a necessary basis and valuable guidance for developing the corresponding nonlinear theory by clarifying certain basic issues, such as the proper choice of characteristic variables of the compressing process and the features of their governing equations.

  5. Generalized linear transport theory in dilute neutral gases and dispersion relation of sound waves.

    PubMed

    Bendib, A; Bendib-Kalache, K; Gombert, M M; Imadouchene, N

    2006-10-01

    The transport processes in dilute neutral gases are studied by using the kinetic equation with a collision relaxation model that meets all conservation requirements. The kinetic equation is solved keeping the whole anisotropic part of the distribution function with the use of continued fractions. The conservation laws of the collision operator are taken into account with projection operator techniques. The generalized heat flux and stress tensor are calculated in the linear approximation, as functions of the lower moments, i.e., the density, the flow velocity and the temperature. The results obtained are valid for arbitrary collision frequency ν with respect to k v_t and the characteristic frequency ω, where k^{-1} is the characteristic length scale of the system and v_t is the thermal velocity. The transport coefficients constitute accurate closure relations for the generalized hydrodynamic equations. An application to the dispersion and the attenuation of sound waves in the whole collisionality regime is presented. The results obtained are in very good agreement with the experimental data. PMID:17155048

  6. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². PMID:26584470
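
    For orientation, the grouped decile-of-risk construction behind HL-type statistics can be sketched for fitted probabilities from any link; the TG generalization itself is not reproduced, and the helper below is illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def hosmer_lemeshow(y, p_hat, g=10):
        """Decile-of-risk GOF statistic: compare observed and expected events per fitted-probability group."""
        order = np.argsort(p_hat)
        chi2 = 0.0
        for idx in np.array_split(order, g):     # g groups of roughly equal size, sorted by fitted risk
            n_g = idx.size
            obs = y[idx].sum()                   # observed events in the group
            exp = p_hat[idx].sum()               # expected events under the model
            pbar = exp / n_g
            chi2 += (obs - exp) ** 2 / (n_g * pbar * (1.0 - pbar))
        return chi2, stats.chi2.sf(chi2, g - 2)  # classical df = g - 2 reference distribution
    ```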

  7. Model based manipulator control

    NASA Technical Reports Server (NTRS)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1989-01-01

    The feasibility of using model-based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.
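
    One standard concrete form of model-based control is an LQR gain computed from the plant model; the sketch below uses a single linearized pendulum as a hedged stand-in for the paper's double-inverted-pendulum rig, with arbitrary weights.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    g, l = 9.81, 1.0
    A = np.array([[0.0, 1.0],
                  [g / l, 0.0]])            # pendulum linearized about the upright (unstable) equilibrium
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])     # state and control weights (illustrative)

    P = solve_continuous_are(A, B, Q, R)    # Riccati equation solved from the model
    K = np.linalg.solve(R, B.T @ P)         # model-based feedback u = -K x
    print(np.linalg.eigvals(A - B @ K))     # closed loop is stable: negative real parts
    ```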

  8. Validity of tests under covariate-adaptive biased coin randomization and generalized linear models.

    PubMed

    Shao, Jun; Yu, Xinxin

    2013-12-01

    Some covariate-adaptive randomization methods have been used in clinical trials for a long time, but little theoretical work has been done about testing hypotheses under covariate-adaptive randomization until Shao et al. (2010), who provided a theory with detailed discussion for responses under linear models. In this article, we establish some asymptotic results for covariate-adaptive biased coin randomization under generalized linear models with possibly unknown link functions. We show that the simple t-test without using any covariate is conservative under covariate-adaptive biased coin randomization in terms of its Type I error rate, and that a valid test using the bootstrap can be constructed. This bootstrap test, utilizing covariates in the randomization scheme, is shown to be asymptotically as efficient as Wald's test correctly using covariates in the analysis. Thus, the efficiency loss due to not using covariates in the analysis can be recovered by utilizing covariates in covariate-adaptive biased coin randomization. Our theory is illustrated with the two most popular types of discrete outcomes, binary responses and event counts under the Poisson model, and with exponentially distributed continuous responses. We also show that an alternative simple test without using any covariate under the Poisson model has an inflated Type I error rate under simple randomization, but is valid under covariate-adaptive biased coin randomization. Effects on the validity of tests due to model misspecification are also discussed. Simulation studies of the Type I errors and powers of several tests are presented for both discrete and continuous responses. PMID:23848580

  9. Power Calculations for General Linear Multivariate Models Including Repeated Measures Applications.

    PubMed

    Muller, Keith E; Lavange, Lisa M; Ramey, Sharon Landesman; Ramey, Craig T

    1992-12-01

    Recently developed methods for power analysis expand the options available for study design. We demonstrate how easily the methods can be applied by (1) reviewing their formulation and (2) describing their application in the preparation of a particular grant proposal. The focus is a complex but ubiquitous setting: repeated measures in a longitudinal study. Describing the development of the research proposal allows demonstrating the steps needed to conduct an effective power analysis. Discussion of the example also highlights issues that typically must be considered in designing a study. First, we discuss the motivation for using detailed power calculations, focusing on multivariate methods in particular. Second, we survey available methods for the general linear multivariate model (GLMM) with Gaussian errors and recommend those based on F approximations. The treatment includes coverage of the multivariate and univariate approaches to repeated measures, MANOVA, ANOVA, multivariate regression, and univariate regression. Third, we describe the design of the power analysis for the example, a longitudinal study of a child's intellectual performance as a function of mother's estimated verbal intelligence. Fourth, we present the results of the power calculations. Fifth, we evaluate the tradeoffs in using reduced designs and tests to simplify power calculations. Finally, we discuss the benefits and costs of power analysis in the practice of statistics. We make three recommendations: align the design and hypothesis of the power analysis with the planned data analysis, as best as practical; embed any power analysis in a defensible sensitivity analysis; and have the extent of the power analysis reflect the ethical, scientific, and monetary costs. We conclude that power analysis catalyzes the interaction of statisticians and subject matter specialists. Using the recent advances for power analysis in linear models can further invigorate the interaction. PMID:24790282
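
    The F-approximation route ultimately reduces to a noncentral-F tail probability; a minimal sketch follows (the GLMM-specific noncentrality and degrees-of-freedom computations are not shown, and the numbers are arbitrary).

    ```python
    from scipy import stats

    def f_power(df1, df2, ncp, alpha=0.05):
        """Power of an F test: P(F > critical value) under the noncentral F alternative."""
        crit = stats.f.ppf(1.0 - alpha, df1, df2)     # critical value under the null
        return stats.ncf.sf(crit, df1, df2, ncp)      # tail probability under the alternative

    print(f_power(df1=2, df2=40, ncp=8.0))            # e.g. a 3-group contrast with a moderate effect
    ```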

  10. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.

  11. MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems

    SciTech Connect

    Young, D.M.; Chen, J.Y.

    1994-12-31

    The authors are concerned with the solution of the linear system (1): Au = b, where A is a real square nonsingular matrix which is large, sparse and nonsymmetric. They consider the use of Krylov subspace methods. They first choose an initial approximation u^(0) to the solution ū = A^{-1}b of (1). They also choose an auxiliary matrix Z which is nonsingular. For n = 1, 2, … they determine u^(n) such that u^(n) - u^(0) ∈ K_n(r^(0), A), where K_n(r^(0), A) is the (Krylov) subspace spanned by the Krylov vectors r^(0), Ar^(0), …, A^(n-1)r^(0) and where r^(0) = b - Au^(0). If ZA is SPD they also require that (u^(n) - ū, ZA(u^(n) - ū)) be minimized. If, on the other hand, ZA is not SPD, then they require that the Galerkin condition (Zr^(n), v) = 0 be satisfied for all v ∈ K_n(r^(0), A), where r^(n) = b - Au^(n). In this paper the authors consider a generalization of GMRES. This generalized method, which they refer to as MGMRES, is very similar to GMRES except that they let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.

  12. The overlooked potential of Generalized Linear Models in astronomy-II: Gamma regression and photometric redshifts

    NASA Astrophysics Data System (ADS)

    Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.

    2015-04-01

    Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.
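
    A hedged simulation of the core model (gamma family, log link) with stand-in photometry; the paper's package and catalogues are not used here, and the link-class name assumes a recent statsmodels release.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    colors = rng.normal(size=(2000, 4))                 # stand-in multi-band colors
    beta = np.array([-1.0, 0.3, -0.2, 0.1, 0.25])
    mu = np.exp(sm.add_constant(colors) @ beta)         # strictly positive mean "redshift"
    z = rng.gamma(shape=25.0, scale=mu / 25.0)          # gamma-distributed responses around mu

    fam = sm.families.Gamma(link=sm.families.links.Log())
    fit = sm.GLM(z, sm.add_constant(colors), family=fam).fit()
    print(fit.params)                                   # close to beta for a sample this size
    ```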

  13. The negative binomial-Lindley generalized linear model: characteristics and application using crash data.

    PubMed

    Geedipally, Srinivas Reddy; Lord, Dominique; Dhavala, Soma Sekhar

    2012-03-01

    There has been a considerable amount of work devoted by transportation safety analysts to the development and application of new and innovative models for analyzing crash data. One important characteristic of crash data that has been documented in the literature relates to datasets that contain a large number of zeros and a long or heavy tail (which creates highly dispersed data). For such datasets, the number of sites where no crash is observed is so large that traditional distributions and regression models, such as the Poisson and Poisson-gamma or negative binomial (NB) models, cannot be used efficiently. To overcome this problem, the NB-Lindley (NB-L) distribution has recently been introduced for analyzing count data that are characterized by excess zeros. The objective of this paper is to document the application of a NB generalized linear model with Lindley mixed effects (NB-L GLM) for analyzing traffic crash data. The study objective was accomplished using simulated and observed datasets. The simulated dataset was used to show the general performance of the model. The model was then applied to two datasets based on observed data, one of which was characterized by a large number of zeros. The NB-L GLM was compared with the NB and zero-inflated models. Overall, the research study shows that the NB-L GLM offers superior performance over the NB and zero-inflated models not only when datasets are characterized by a large number of zeros and a long tail, but also when the crash dataset is highly dispersed. PMID:22269508
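
    No off-the-shelf Python implementation of the NB-L mixture is asserted here; the plain NB GLM baseline that it is compared against can be sketched on simulated, zero-heavy data. The alpha parameterization assumes statsmodels' NB variance function mu + alpha*mu^2, and all values are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    traffic = rng.uniform(1.0, 10.0, size=400)           # stand-in exposure covariate
    mu = np.exp(-1.5 + 0.3 * traffic)                    # expected crash counts per site
    r = 0.8                                              # small NB shape: heavy tail, many zeros
    y = rng.negative_binomial(n=r, p=r / (r + mu))       # overdispersed counts with mean mu

    X = sm.add_constant(traffic)
    fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0 / r)).fit()
    print(fit.params, (y == 0).mean())                   # coefficients and the zero fraction
    ```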

  14. Generalized Jeans' Escape of Pick-Up Ions in Quasi-Linear Relaxation

    NASA Technical Reports Server (NTRS)

    Moore, T. E.; Khazanov, G. V.

    2011-01-01

    Jeans escape is a well-validated formulation of upper atmospheric escape that we have generalized to estimate plasma escape from ionospheres. It involves the computation of the parts of particle velocity space that are unbound by the gravitational potential at the exobase, followed by a calculation of the flux carried by such unbound particles as they escape from the potential well. To generalize this approach for ions, we superposed an electrostatic ambipolar potential and a centrifugal potential, for motions across and along a divergent magnetic field. We then considered how the presence of superthermal electrons, produced by precipitating auroral primary electrons, controls the ambipolar potential. We also showed that the centrifugal potential plays a small role in controlling the mass escape flux from the terrestrial ionosphere. We then applied the transverse ion velocity distribution produced when ions, picked up by supersonic (i.e., auroral) ionospheric convection, relax via quasi-linear diffusion, as estimated for cometary comas [1]. The results provide a theoretical basis for observed ion escape response to electromagnetic and kinetic energy sources. They also suggest that supersonic but sub-Alfvenic flow, with ion pick-up, is a unique and important regime of ion-neutral coupling, in which plasma wave-particle interactions are driven by ion-neutral collisions at densities for which the collision frequency falls near or below the gyro-frequency. As another possible illustration of this process, the heliopause ribbon discovered by the IBEX mission involves interactions between the solar wind ions and the interstellar neutral gas, in a regime that may be analogous [2].

  15. Fast inference in generalized linear models via expected log-likelihoods.

    PubMed

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
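
    The trick is concrete for a canonical Poisson GLM with Gaussian covariates, where the covariate expectation has a closed form; a minimal numeric check follows (dimensions and parameters are arbitrary).

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n, S = 20000, np.eye(3)
    X = rng.multivariate_normal(np.zeros(3), S, size=n)   # covariates with a known distribution
    theta = np.array([0.2, -0.1, 0.3])
    y = rng.poisson(np.exp(X @ theta))

    # Exact Poisson log-likelihood (up to a constant) contains a sum over all observations;
    # for x ~ N(0, S), E[exp(x @ theta)] = exp(theta @ S @ theta / 2) replaces it in closed form.
    exact = y @ (X @ theta) - np.exp(X @ theta).sum()
    expected = y @ (X @ theta) - n * np.exp(theta @ S @ theta / 2)
    print(exact, expected)                                # agree closely for large n
    ```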

  16. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    PubMed

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as a dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance of modelled results against the measured data was achieved for the model restricted to air temperatures above 25°C, compared with the model considering all ranges of air temperature and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and the methodology could be adopted for other cities to predict PM10 concentrations when these data are not available from measurements at air quality monitoring stations or by other acquisition means. PMID:26839052

  17. Maximal freedom at minimum cost: linear large-scale structure in general modifications of gravity

    SciTech Connect

    Bellini, Emilio; Sawicki, Ignacy E-mail: ignacy.sawicki@outlook.com

    2014-07-01

    We present a turnkey solution, ready for implementation in numerical codes, for the study of linear structure formation in general scalar-tensor models involving a single universally coupled scalar field. We show that the totality of cosmological information on the gravitational sector can be compressed — without any redundancy — into five independent and arbitrary functions of time only and one constant. These describe physical properties of the universe: the observable background expansion history, the fractional matter density today, and four functions of time describing the properties of the dark energy. We show that two of those dark-energy property functions control the existence of anisotropic stress and the other two control dark-energy clustering, both of which can be scale-dependent. All these properties can in principle be measured, but no information on the underlying theory of acceleration beyond this can be obtained. We present a translation between popular models of late-time acceleration (e.g. perfect fluids, f(R), kinetic gravity braiding, galileons), as well as the effective field theory framework, and our formulation. In this way, implementing this formulation numerically would give a single tool which could consistently test the majority of models of late-time acceleration heretofore proposed.

  18. Use of the generalized linear models in data related to dental caries index.

    PubMed

    Javali, S B; Pandit, Parameshwar V

    2007-01-01

    The aim of this study is to encourage and initiate the application of generalized linear models (GLMs) in the analysis of the covariates of decayed, missing, and filled teeth (DMFT) index data, which are not necessarily normally distributed. GLMs can be fitted assuming many underlying distributions; here, the Poisson distribution with the built-in log link function and the binomial distribution with the built-in logit and probit link functions are considered. The Poisson model is used for modeling the DMFT index data, and the logit and probit models are employed to model the dichotomous outcome of DMFT = 0 versus DMFT ≠ 0 (caries free/caries present). The data comprised 7188 subjects aged 18-30 years from the study on the oral health status of Karnataka state conducted by SDM College of Dental Sciences and Hospital, Dharwad, Karnataka, India. The Poisson model and the binomial models (logit and probit) displayed dissimilar results at the 5% level of significance (P < 0.05). The binomial models were a poor fit, whereas the Poisson model showed a good fit for the DMFT index data. Therefore, a suitable modeling approach for DMFT index data is to use a Poisson model for the DMFT response and a binomial model for caries free versus caries present (DMFT = 0 and DMFT ≠ 0). These GLMs allow separate estimation of those covariates which influence the magnitude of caries. PMID:17938491

  1. Statistical Methods for Quality Control of Steel Coils Manufacturing Process using Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    García-Díaz, J. Carlos

    2009-11-01

    Fault detection and diagnosis is an important problem in process engineering, as process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is particularly important in continuous hot dip galvanizing, where the increasingly stringent quality requirements of the automotive industry have demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationship among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures, and the bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: the first, a training set of 25 conforming coils, and the second, a set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that the logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.

  3. A general parallel sparse-blocked matrix multiply for linear scaling SCF theory

    NASA Astrophysics Data System (ADS)

    Challacombe, Matt

    2000-06-01

    A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with “separation”. Load balance is achieved by solving the bin packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.

  4. Master equation solutions in the linear regime of characteristic formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Cedeño M., C. E.; de Araujo, J. C. N.

    2015-12-01

    From the field equations in the linear regime of the characteristic formulation of general relativity, Bishop, for a Schwarzschild background, and Mädler, for a Minkowski background, were able to show that it is possible to derive a fourth-order ordinary differential equation, called the master equation, for the J metric variable of the Bondi-Sachs metric. Once β, another Bondi-Sachs potential, is obtained from the field equations, and J is obtained from the master equation, the other metric variables are found by directly integrating the remaining field equations. In the past, the master equation was solved for the first multipolar terms, for both the Minkowski and Schwarzschild backgrounds. Mädler also recently reported a generalisation of the exact solutions to the linearised field equations for a Minkowski background, expressing the master equation's family of vacuum solutions in terms of Bessel functions of the first and second kind. Here, we report new solutions to the master equation for any multipolar moment l, with and without matter sources, in terms only of Bessel functions of the first kind for the Minkowski background, and in terms of confluent Heun functions (generalised hypergeometric functions) for the radiative (nonradiative) case in the Schwarzschild background. We particularize our families of solutions to the known l = 2 cases reported previously in the literature and find complete agreement, showing the robustness of our results.

  5. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
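
    A toy illustration of the core idea for a canonical Poisson GLM, not the paper's code: when the covariate distribution is known (here Gaussian), the costly sum of exp(x·θ) over all observations collapses to one analytic expectation, so each likelihood evaluation costs O(d) after a single pass over the data to form the sufficient statistic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 100_000, 20
    mu, Sigma = np.zeros(d), np.eye(d)
    X = rng.multivariate_normal(mu, Sigma, size=N)  # covariate distribution known
    theta = rng.normal(scale=0.1, size=d)
    y = rng.poisson(np.exp(X @ theta))

    s = y @ X  # sufficient statistic sum_i y_i x_i, computed once

    def exact_loglik(th):
        return s @ th - np.exp(X @ th).sum()        # O(N d) every evaluation

    def expected_loglik(th):
        # E[exp(x.th)] = exp(mu.th + th' Sigma th / 2) for Gaussian covariates
        return s @ th - N * np.exp(mu @ th + 0.5 * th @ Sigma @ th)

    print(exact_loglik(theta), expected_loglik(theta))  # close for large N
    ```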

  6. Assessment of cross-frequency coupling with confidence using generalized linear models

    PubMed Central

    Kramer, M. A.; Eden, U. T.

    2013-01-01

    Background: Brain voltage activity displays distinct neuronal rhythms spanning a wide frequency range. How rhythms of different frequency interact – and the function of these interactions – remains an active area of research. Many methods have been proposed to assess the interactions between different frequency rhythms, in particular measures that characterize the relationship between the phase of a low frequency rhythm and the amplitude envelope of a high frequency rhythm. However, an optimal analysis method to assess this cross-frequency coupling (CFC) does not yet exist. New method: Here we describe a new procedure to assess CFC that utilizes the generalized linear modeling (GLM) framework. Results: We illustrate the utility of this procedure in three synthetic examples. The proposed GLM-CFC procedure allows a rapid and principled assessment of CFC with confidence bounds, scales with the intensity of the CFC, and accurately detects biphasic coupling. Comparison with existing methods: Compared to existing methods, the proposed GLM-CFC procedure is easily interpretable, possesses confidence intervals that are easy and efficient to compute, and accurately detects biphasic coupling. Conclusions: The GLM-CFC statistic provides a method for accurate and statistically rigorous assessment of CFC. PMID:24012829
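
    A simplified sketch of the GLM approach to phase-amplitude coupling (the paper uses a richer model; this is illustrative, with synthetic 6 Hz / 80 Hz coupled data): regress the high-frequency amplitude envelope on sinusoids of the low-frequency phase and inspect the phase-term statistics.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.signal import hilbert, butter, sosfiltfilt

    fs = 1000
    t = np.arange(0, 20, 1 / fs)
    rng = np.random.default_rng(1)
    high = (1 + 0.4 * np.cos(2 * np.pi * 6 * t)) * np.cos(2 * np.pi * 80 * t)
    x = np.cos(2 * np.pi * 6 * t) + high + 0.5 * rng.standard_normal(t.size)

    def band(sig, lo, hi):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, sig)

    phase = np.angle(hilbert(band(x, 4, 8)))   # low-frequency phase
    amp = np.abs(hilbert(band(x, 70, 90)))     # high-frequency amplitude envelope

    X = sm.add_constant(np.column_stack([np.cos(phase), np.sin(phase)]))
    fit = sm.GLM(amp, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
    print(fit.pvalues[1:])                     # small p-values indicate coupling
    ```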

  7. LLM3D: a log-linear modeling-based method to predict functional gene regulatory interactions from genome-wide expression data

    PubMed Central

    Geeven, Geert; MacGillavry, Harold D.; Eggers, Ruben; Sassen, Marion M.; Verhaagen, Joost; Smit, August B.; de Gunst, Mathisca C. M.; van Kesteren, Ronald E.

    2011-01-01

    All cellular processes are regulated by condition-specific and time-dependent interactions between transcription factors and their target genes. While in simple organisms, e.g. bacteria and yeast, a large amount of experimental data is available to support functional transcription regulatory interactions, in mammalian systems reconstruction of gene regulatory networks still heavily depends on the accurate prediction of transcription factor binding sites. Here, we present a new method, log-linear modeling of 3D contingency tables (LLM3D), to predict functional transcription factor binding sites. LLM3D combines gene expression data, gene ontology annotation and computationally predicted transcription factor binding sites in a single statistical analysis, and offers a methodological improvement over existing enrichment-based methods. We show that LLM3D successfully identifies novel transcriptional regulators of the yeast metabolic cycle, and correctly predicts key regulators of mouse embryonic stem cell self-renewal more accurately than existing enrichment-based methods. Moreover, in a clinically relevant in vivo injury model of mammalian neurons, LLM3D identified peroxisome proliferator-activated receptor γ (PPARγ) as a neuron-intrinsic transcriptional regulator of regenerative axon growth. In conclusion, LLM3D provides a significant improvement over existing methods in predicting functional transcription regulatory interactions in the absence of experimental transcription factor binding data. PMID:21422075

  8. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    SciTech Connect

    Yock, Adam D. Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-05-15

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography.

  9. Characterization of a generalized elliptical phase retarder by using equivalent theorem of a linear phase retarder and a polarization rotator

    NASA Astrophysics Data System (ADS)

    Yu, Chih-Jen; Chou, Chien

    2011-03-01

    An equivalence theory based on a unitary optical system of a generalized elliptical phase retarder was derived, whereby an elliptical phase retarder can be treated as the equivalent combination of a linear phase retarder and a polarization rotator. Three fundamental parameters were derived: the elliptical phase retardation, and the azimuth angle and ellipticity angle of the fast elliptical eigen-polarization state. All parameters of a generalized elliptical phase retarder can be determined from the analytical solution for the characteristic parameters of the optical components: the linear phase retardation and fast-axis angle of the equivalent linear phase retarder, and the polarization rotation angle of the equivalent polarization rotator. In this study, experimental verification was carried out by testing a twisted nematic liquid crystal device (TNLCD) treated as a generalized elliptical phase retarder. A dual-frequency heterodyne ellipsometer was set up, and the experimental results demonstrate the capability of the equivalence theory for elliptical birefringence measurement at high sensitivity using the heterodyne technique.

  10. Generalized Functional Linear Models for Gene-based Case-Control Association Studies

    PubMed Central

    Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao

    2014-01-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  11. Protein structure validation by generalized linear model root-mean-square deviation prediction.

    PubMed

    Bagaria, Anurag; Jaravine, Victor; Huang, Yuanpeng J; Montelione, Gaetano T; Güntert, Peter

    2012-02-01

    Large-scale initiatives for obtaining spatial protein structures by experimental or computational means have accentuated the need for the critical assessment of protein structure determination and prediction methods. These include blind test projects such as the critical assessment of protein structure prediction (CASP) and the critical assessment of protein structure determination by nuclear magnetic resonance (CASD-NMR). An important aim is to establish structure validation criteria that can reliably assess the accuracy of a new protein structure. Various quality measures derived from the coordinates have been proposed. A universal structural quality assessment method should combine multiple individual scores in a meaningful way, which is challenging because of their different measurement units. Here, we present a method based on a generalized linear model (GLM) that combines diverse protein structure quality scores into a single quantity with intuitive meaning, namely the predicted coordinate root-mean-square deviation (RMSD) value between the present structure and the (unavailable) "true" structure (GLM-RMSD). For two sets of structural models from the CASD-NMR and CASP projects, this GLM-RMSD value was compared with the actual accuracy given by the RMSD value to the corresponding, experimentally determined reference structure from the Protein Data Bank (PDB). The correlation coefficients between actual (model vs. reference from PDB) and predicted (model vs. "true") heavy-atom RMSDs were 0.69 and 0.76, for the two datasets from CASD-NMR and CASP, respectively, which is considerably higher than those for the individual scores (-0.24 to 0.68). The GLM-RMSD can thus predict the accuracy of protein structures more reliably than individual coordinate-based quality scores.

  12. Power analysis for generalized linear mixed models in ecology and evolution

    PubMed Central

    Johnson, Paul C D; Barry, Sarah J E; Ferguson, Heather M; Müller, Pie

    2015-01-01

    ‘Will my study answer my research question?’ is the most fundamental question a researcher can ask when designing a study, yet when phrased in statistical terms – ‘What is the power of my study?’ or ‘How precise will my parameter estimate be?’ – few researchers in ecology and evolution (EE) try to answer it, despite the detrimental consequences of performing under- or over-powered research. We suggest that this reluctance is due in large part to the unsuitability of simple methods of power analysis (broadly defined as any attempt to quantify prospectively the ‘informativeness’ of a study) for the complex models commonly used in EE research. With the aim of encouraging the use of power analysis, we present simulation from generalized linear mixed models (GLMMs) as a flexible and accessible approach to power analysis that can account for random effects, overdispersion and diverse response distributions. We illustrate the benefits of simulation-based power analysis in two research scenarios: estimating the precision of a survey to estimate tick burdens on grouse chicks and estimating the power of a trial to compare the efficacy of insecticide-treated nets in malaria mosquito control. We provide a freely available R function, sim.glmm, for simulating from GLMMs. Analysis of simulated data revealed that the effects of accounting for realistic levels of random effects and overdispersion on power and precision estimates were substantial, with correspondingly severe implications for study design in the form of up to fivefold increases in sampling effort. We also show the utility of simulations for identifying scenarios where GLMM-fitting methods can perform poorly. These results illustrate the inadequacy of standard analytical power analysis methods and the flexibility of simulation-based power analysis for GLMMs. The wider use of these methods should contribute to improving the quality of study design in EE. PMID:25893088
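
    The paper's sim.glmm is an R function; as a language-neutral illustration of the same simulate-and-refit logic, the sketch below uses Python, simulating a Poisson outcome with cluster random intercepts and fitting with GEE as a stand-in for GLMM estimation. All effect sizes are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    def one_trial(n_clusters=20, n_per=10, beta=0.3, sd_re=0.5):
        cluster = np.repeat(np.arange(n_clusters), n_per)
        treat = (cluster % 2).astype(float)               # cluster-level treatment
        re = rng.normal(0.0, sd_re, n_clusters)[cluster]  # random intercepts
        y = rng.poisson(np.exp(1.0 + beta * treat + re))
        df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})
        fit = smf.gee("y ~ treat", groups="cluster", data=df,
                      family=sm.families.Poisson(),
                      cov_struct=sm.cov_struct.Exchangeable()).fit()
        return fit.pvalues["treat"] < 0.05

    # Power = proportion of simulated studies that detect the effect
    power = np.mean([one_trial() for _ in range(200)])
    print(f"estimated power: {power:.2f}")
    ```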

  13. The linear co-variance between joint muscle torques is not a generalized principle.

    PubMed

    Sande de Souza, Luciane Aparecida Pascucci; Dionísio, Valdeci Carlos; Lerena, Mario Adrian Misailidis; Marconi, Nadia Fernanda; Almeida, Gil Lúcio

    2009-06-01

    In 1996, Gottlieb et al. [Gottlieb GL, Song Q, Hong D, Almeida GL, Corcos DM. Coordinating movement at two joints: A principle of linear covariance. J Neurophysiol 1996;75(4):1760-4] identified a linear co-variance between the joint muscle torques generated at two connected joints. The joint muscle torques changed directions and magnitudes in a synchronized and linear fashion, a relationship they called the principle of linear co-variance. Here we showed that this principle cannot hold for some classes of movements. Neurologically normal subjects performed multijoint movements involving the elbow and shoulder, with reversal, towards three targets in the sagittal plane without any constraints. The movement kinematics were calculated using the X and Y coordinates of markers positioned over the joints. Inverse dynamics was used to calculate the joint muscle, interaction and net torques. We found that for the class of voluntary movements analyzed, the joint muscle torques of the elbow and the shoulder were not linearly correlated. The same was observed for the interaction torques. But the net torques at both joints, i.e., the sum of the interaction and the joint muscle torques, were linearly correlated. We showed that by decoupling the joint muscle torques, but keeping the net torques linearly correlated, the CNS was able to generate fast and accurate movements with straight fingertip paths. The movement paths were typical of the ones in which the joint muscle torques were linearly correlated.

  14. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.

  15. General methods for determining the linear stability of coronal magnetic fields

    NASA Technical Reports Server (NTRS)

    Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.

    1988-01-01

    A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.

  16. Generalizations of the theorem of minimum entropy production to linear systems involving inertia

    NASA Astrophysics Data System (ADS)

    Rebhan, E.

    1985-07-01

    The temporal behavior of the excess entropy production Pex is investigated in linear electrical networks and in systems which can be described either by the linearized equations of viscous hydrodynamics or of resistive magnetohydrodynamics. As a result of inertial effects, Pex is an oscillatory quantity. A kinetic potential is constructed which contains Pex additively. It is an upper bound of Pex and decreases monotonically in time, enforcing Pex → 0 as t → ∞.

  17. The elastostatic plane strain mode I crack tip stress and displacement fields in a generalized linear neo-Hookean elastomer

    NASA Astrophysics Data System (ADS)

    Begley, Matthew R.; Creton, Costantino; McMeeking, Robert M.

    2015-11-01

    A general asymptotic plane strain crack tip stress field is constructed for linear versions of neo-Hookean materials, which spans a wide variety of special cases including incompressible Mooney elastomers, the compressible Blatz-Ko elastomer, several cases of the Ogden constitutive law and a new result for a compressible linear neo-Hookean material. The nominal stress field has dominant terms that have a square root singularity with respect to the distance of material points from the crack tip in the undeformed reference configuration. At second order, there is a uniform tension parallel to the crack. The associated displacement field in plane strain at leading order has dependence proportional to the square root of the same coordinate. The relationship between the amplitude of the crack tip singularity (a stress intensity factor) and the plane strain energy release rate is outlined for the general linear material, with simplified relationships presented for notable special cases.

  18. Meta-analysis of Complex Diseases at Gene Level with Generalized Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Chiu, Chi-Yang; Chen, Wei; Ren, Haobo; Li, Yun; Boehnke, Michael; Amos, Christopher I; Moore, Jason H; Xiong, Momiao

    2016-02-01

    We developed generalized functional linear models (GFLMs) to perform a meta-analysis of multiple case-control studies to evaluate the relationship of genetic data to dichotomous traits adjusting for covariates. Unlike the previously developed meta-analysis for sequence kernel association tests (MetaSKATs), which are based on mixed-effect models to make the contributions of major gene loci random, GFLMs are fixed models; i.e., genetic effects of multiple genetic variants are fixed. Based on GFLMs, we developed chi-squared-distributed Rao's efficient score test and likelihood-ratio test (LRT) statistics to test for an association between a complex dichotomous trait and multiple genetic variants. We then performed extensive simulations to evaluate the empirical type I error rates and power performance of the proposed tests. The Rao's efficient score test statistics of GFLMs are very conservative and have higher power than MetaSKATs when some causal variants are rare and some are common. When the causal variants are all rare [i.e., minor allele frequencies (MAF) < 0.03], the Rao's efficient score test statistics have similar or slightly lower power than MetaSKATs. The LRT statistics generate accurate type I error rates for homogeneous genetic-effect models and may inflate type I error rates for heterogeneous genetic-effect models owing to the large numbers of degrees of freedom and have similar or slightly higher power than the Rao's efficient score test statistics. GFLMs were applied to analyze genetic data of 22 gene regions of type 2 diabetes data from a meta-analysis of eight European studies and detected significant association for 18 genes (P < 3.10 × 10⁻⁶), tentative association for 2 genes (HHEX and HMGA2; P ≈ 10⁻⁵), and no association for 2 genes, while MetaSKATs detected none. In addition, the traditional additive-effect model detects association at gene HHEX. GFLMs and related tests can analyze rare or common variants or a combination of the two and

  19. Applications of multivariate modeling to neuroimaging group analysis: a comprehensive alternative to univariate general linear model.

    PubMed

    Chen, Gang; Adleman, Nancy E; Saad, Ziad S; Leibenluft, Ellen; Cox, Robert W

    2014-10-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance-covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse-Geisser and Huynh-Feldt) with MVT-WS. To validate the MVM methodology, we performed simulations to assess the controllability for false positives and power achievement. A real FMRI dataset was analyzed to demonstrate the capability of the MVM approach. The methodology has been implemented into an open source program 3dMVM in AFNI, and all the statistical tests can be performed through symbolic coding with variable names instead of the tedious process of dummy coding. Our data indicates that the severity of sphericity violation varies substantially across brain regions. The differences among various modeling methodologies were addressed through direct comparisons between the MVM approach and some of the GLM implementations in

  20. Reversibility of a quantum channel: General conditions and their applications to Bosonic linear channels

    SciTech Connect

    Shirokov, M. E.

    2013-11-15

    The method of complementary channels for analysis of reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular, to Bosonic Gaussian channels. The obtained reversibility conditions for Bosonic linear channels have a clear physical interpretation, and their sufficiency is also shown by explicit construction of reversing channels. The method of complementary channels makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrödinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.

  1. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy.

    PubMed

    Huppert, Theodore J

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts. PMID:26989756
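
    One of the modifications discussed, prewhitening structured (coloured) noise with an autoregressive model, can be sketched with statsmodels' GLSAR on synthetic stand-in data; the boxcar regressor and AR(1) noise below are illustrative, not fNIRS recordings.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 600
    task = ((np.arange(n) // 60) % 2).astype(float)  # boxcar task regressor
    noise = np.zeros(n)
    for t in range(1, n):                            # AR(1) "physiological" noise
        noise[t] = 0.8 * noise[t - 1] + rng.standard_normal()
    y = 0.5 * task + noise

    X = sm.add_constant(task)
    fit = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=20)  # feasible GLS with AR(1)
    print(fit.params, fit.bse)   # prewhitened estimates and standard errors
    ```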

  2. Recent advances toward a general purpose linear-scaling quantum force field.

    PubMed

    Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M

    2014-09-16

    Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to

  3. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    ERIC Educational Resources Information Center

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  4. Linear and Nonlinear Optical Properties in Spherical Quantum Dots: Generalized Hulthén Potential

    NASA Astrophysics Data System (ADS)

    Onyeaju, M. C.; Idiodi, J. O. A.; Ikot, A. N.; Solaimani, M.; Hassanabadi, H.

    2016-09-01

    In this work, we studied the optical properties of spherical quantum dots confined in a Hulthén potential with the appropriate centrifugal term included. Approximate bound-state solutions and wave functions were obtained from the Schrödinger wave equation by applying the factorization method. We have also used the density matrix formalism to investigate the linear and third-order nonlinear absorption coefficients and refractive index changes.

  5. General-linear-models approach for comparing the response of several species in acute-toxicity tests

    SciTech Connect

    Daniels, K.L.; Goyert, J.C.; Farrell, M.P.; Strand, R.H.

    1982-01-01

    Acute toxicity tests (bioassays) estimate the concentration of a chemical required to produce a response (usually death) in fifty percent of a population (the LC50). Simple comparisons of LC50 values among several species are often inadequate because species can have identical LC50 values while their overall response to a chemical may differ in either the threshold concentration (intercept) or the rate of response (slope). A sequential approach using a general linear model is presented for testing differences among species in their overall response to a chemical; see the sketch below. This method tests for equality of slopes followed by a test for equality of regression lines. The procedure employs the Statistical Analysis System's General Linear Models procedure for conducting a weighted least squares analysis with a covariable.
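
    A hedged sketch of such a sequential test, here in Python rather than SAS and with hypothetical column names: fit binomial GLMs with and without a species-by-concentration interaction and compare them with a likelihood-ratio test for equality of slopes.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from scipy import stats

    tox = pd.read_csv("bioassay.csv")  # hypothetical: dead, total, log_conc, species
    tox["prop"] = tox["dead"] / tox["total"]
    w = np.asarray(tox["total"], dtype=float)

    common = smf.glm("prop ~ log_conc + species", data=tox,
                     family=sm.families.Binomial(), var_weights=w).fit()
    separate = smf.glm("prop ~ log_conc * species", data=tox,
                       family=sm.families.Binomial(), var_weights=w).fit()

    lr = 2 * (separate.llf - common.llf)          # equality-of-slopes LR statistic
    df_diff = separate.df_model - common.df_model
    print("p =", stats.chi2.sf(lr, df_diff))      # small p: slopes differ by species
    ```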

  6. Generalized linear Boltzmann equation, describing non-classical particle transport, and related asymptotic solutions for small mean free paths

    NASA Astrophysics Data System (ADS)

    Rukolaine, Sergey A.

    2016-05-01

    In classical kinetic models a particle free path distribution is exponential, but this is more likely to be an exception than a rule. In this paper we derive a generalized linear Boltzmann equation (GLBE) for a general free path distribution in the framework of Alt's model. In the case that the free path distribution has finite first and second moments, we construct an asymptotic solution to the initial value problem for the GLBE for small mean free paths. In the special case of the one-speed transport problem the asymptotic solution results in a diffusion approximation to the GLBE.

  7. Dose-response relationship between total cadmium intake and prevalence of renal dysfunction using general linear models.

    PubMed

    Hochi, Y; Kido, T; Nogawa, K; Kito, H; Shaikh, Z A

    1995-01-01

    To determine the maximum allowable intake limits for chronic dietary exposure to cadmium (Cd), the dose-response relationship between total Cd intake and prevalence of renal dysfunction was examined using general linear models considering the effect of age as a confounder. The target population comprised 1850 Cd-exposed and 294 non-exposed inhabitants of Ishikawa, Japan. They were divided into 96 subgroups by sex, age (four categories), cadmium concentration in rice (three categories) and length of residence (four categories). As indicators of cadmium-induced renal dysfunction, glucose, total protein, amino nitrogen, beta 2-microglobulin and metallothionein in urine were employed. General linear models were fitted statistically to the relationship among prevalence of renal dysfunction, sex, age and total Cd intake. Prevalence of abnormal urinary findings other than glucosuria had significant associations with total Cd intake. When the total Cd intake corresponding to the mean prevalence of each abnormal urinary finding in the non-exposed subjects was calculated using general linear models, the total Cd intakes corresponding to glucosuria, proteinuria, aminoaciduria (men only) and proteinuria with glucosuria were determined to be ca. 2.2-3.8 g, and those corresponding to the prevalence of metallothioneinuria were calculated as ca. 1.5-2.6 g. Low-molecular-weight protein in urine was confirmed to be a more sensitive indicator of renal dysfunction, and these total Cd intake values were close to those calculated previously by simple regression analysis, suggesting them to be reasonable values for the maximum allowable intake of Cd.

  8. A general purpose non-linear curve fitting program for the British Broadcasting Corporation Microcomputer.

    PubMed

    Beynon, R J

    1985-01-01

    Software for non-linear curve fitting has been written in BASIC to execute on the British Broadcasting Corporation Microcomputer. The program uses the direct search algorithm Pattern-search, a robust algorithm with the additional advantage of requiring only specification of the function, without its partial derivatives. Although less efficient than gradient methods, the program can be readily configured to solve the low-dimensional optimization problems normally encountered in the life sciences. In writing the software, emphasis has been placed upon the 'user interface' and on making the most efficient use of the facilities provided by the minimal configuration of this system.
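
    The pattern-search idea is easy to state in a few lines; the sketch below (in Python rather than BBC BASIC) is an illustrative coordinate-wise direct search, not the published program: probe each parameter in both directions, keep improving moves, and shrink the step when no move helps.

    ```python
    import numpy as np

    def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=10_000):
        x, fx = np.asarray(x0, float), f(x0)
        for _ in range(max_iter):
            improved = False
            for i in range(x.size):
                for s in (+step, -step):          # probe both directions
                    trial = x.copy(); trial[i] += s
                    ft = f(trial)
                    if ft < fx:                   # keep the first improving move
                        x, fx, improved = trial, ft, True
                        break
            if not improved:
                step *= shrink                    # nothing helped: refine the mesh
                if step < tol:
                    break
        return x, fx

    # Fit y = a * exp(-k * t) to noisy data without any derivatives
    t = np.linspace(0, 5, 50)
    y = 3.0 * np.exp(-0.7 * t) + 0.05 * np.random.default_rng(0).standard_normal(50)
    sse = lambda p: np.sum((y - p[0] * np.exp(-p[1] * t)) ** 2)
    print(pattern_search(sse, [1.0, 1.0]))
    ```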

  9. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root; in these regions, however, the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
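
    A hedged sketch of the fallback regime described above, steepest descent on the squared residual with an adaptive step size; the accelerated eigenvector step near the root is not reproduced here, and the test system is illustrative.

    ```python
    import numpy as np

    def solve_descent(F, J, x0, step=1.0, tol=1e-10, max_iter=500):
        x = np.asarray(x0, float)
        for _ in range(max_iter):
            r = F(x)
            phi = r @ r                    # scalar merit function ||F||^2
            if phi < tol:
                break
            g = 2.0 * J(x).T @ r           # gradient of the merit function
            while step > 1e-12:            # adaptive step: backtrack until improving
                x_new = x - step * g
                if F(x_new) @ F(x_new) < phi:
                    x, step = x_new, step * 1.5
                    break
                step *= 0.5
        return x

    F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
    J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
    print(solve_descent(F, J, [3.0, 0.5]))  # approaches (sqrt(2), sqrt(2))
    ```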

  10. The Exact Solution for Linear Thermoelastic Axisymmetric Deformations of Generally Laminated Circular Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.; Schultz, Marc R.

    2012-01-01

    A detailed exact solution is presented for laminated-composite circular cylinders that have general wall construction and that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.

  11. A substructure coupling procedure applicable to general linear time-invariant dynamic systems

    NASA Technical Reports Server (NTRS)

    Howsman, T. G.; Craig, R. R., Jr.

    1984-01-01

    A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the non-self-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.

  12. Optimization of biochemical systems by linear programming and general mass action model representations.

    PubMed

    Marín-Sanguino, Alberto; Torres, Néstor V

    2003-08-01

    A new method is proposed for the optimization of biochemical systems. The method, based on the separation of the stoichiometric and kinetic aspects of the system, follows the general approach used in the previously presented indirect optimization method (IOM) developed within biochemical systems theory. It is called GMA-IOM because it uses the generalized mass action (GMA) form as the model representation. The GMA representation avoids flux aggregation and thus prevents possible stoichiometric errors. The optimization of a system is used to illustrate and compare the features, advantages and shortcomings of both versions of the IOM method as a general strategy for designing improved microbial strains of biotechnological interest. Special attention has been paid to practical problems in the actual implementation of the proposed strategy, such as the total protein content of the engineered strain or the deviation from the original steady state and its influence on cell viability.
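
    The key computational trick can be illustrated in a few lines: in a GMA model, power-law rates v = γ·∏ X_i^{f_i} become linear in y = ln X, so maximizing a target flux under bounds reduces to a linear program. The toy pathway and numbers below are hypothetical, not from the paper.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical 2-metabolite system: maximize ln v_target = ln g + f . y
    f = np.array([0.8, -0.5])                   # kinetic orders of the target flux
    bounds = [(np.log(0.5), np.log(2.0))] * 2   # each X_i within 2-fold of basal

    # One steady-state-style linear constraint in log space: a . y = b
    a, b = np.array([1.0, 1.0]), 0.0            # e.g. keep the pool product constant

    res = linprog(c=-f, A_eq=a.reshape(1, -1), b_eq=[b], bounds=bounds)
    print("optimal metabolite fold-changes:", np.exp(res.x))
    ```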

  14. Heuristics for Understanding the Concepts of Interaction, Polynomial Trend, and the General Linear Model.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…

  15. General polynomial factorization-based design of sparse periodic linear arrays.

    PubMed

    Mitra, Sanjit K; Mondal, Kalyan; Tchobanou, Mikhail K; Dolecek, Gordana Jovanovic

    2010-09-01

    We have developed several methods of designing sparse periodic arrays based upon the polynomial factorization method. In these methods, transmit and receive aperture polynomials are selected such that their product results in a polynomial representing the desired combined transmit/receive (T/R) effective aperture function. A desired combined T/R effective aperture is simply an aperture with an appropriate width exhibiting a spectrum that corresponds to the desired two-way radiation pattern. At least one of the two aperture functions that constitute the combined T/R effective aperture function will be a sparse polynomial. A measure of sparsity of the designed array is defined in terms of the element reduction factor. We show that elements of a linear array can be reduced with varying degrees of beam mainlobe width to sidelobe reduction properties.
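
    The central identity, that the product of the transmit and receive aperture polynomials equals the combined T/R effective aperture, is easy to demonstrate numerically; the coefficients below are illustrative, not a design from the paper.

    ```python
    import numpy as np

    # Sparse transmit aperture and dense receive aperture (illustrative designs)
    transmit = np.array([1, 0, 0, 1, 0, 0, 1])   # 3 active elements out of 7
    receive = np.array([1, 1, 1])                # 3-element receive aperture

    # Their polynomial product is the combined T/R effective aperture
    combined = np.polymul(transmit, receive)
    print(combined)                              # -> [1 1 1 1 1 1 1 1 1], uniform

    # One crude sparsity measure: elements used vs. two full 9-element apertures
    n_used = np.count_nonzero(transmit) + np.count_nonzero(receive)
    print("element reduction factor:", (2 * combined.size) / n_used)  # 3.0
    ```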

  16. Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob

    2007-01-01

    For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.

  17. Microcontroller-based intelligent low-cost-linear-sensor-camera for general edge detection

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Justen, Detlef

    1997-09-01

    This paper presents an intelligent low-cost camera. Intelligent means that a microcontroller does all the controlling and provides several inputs and outputs. The camera is a stand-alone system. The basic element of the camera is a linear sensor consisting of a photodiode array (PDA). Compared with standard CCD chips, this type of sensor is a low-cost component and its operation is very simple. Furthermore, this paper shows the mechanical, electrical and electro-optical differences between CCD and PDA sensors, so the reader will be able to choose the right sensor for a particular task. Two cases of industrial applications are listed at the end of this paper.

  18. Iterative solution of general sparse linear systems on clusters of workstations

    SciTech Connect

    Lo, Gen-Ching; Saad, Y.

    1996-12-31

    Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels, such as inner products, could erase any gains from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an ongoing project for building a library of parallel iterative sparse matrix solvers.

  19. Evaluation of cavity occurrence in the Maynardville Limestone and the Copper Ridge Dolomite at the Y-12 Plant using logistic and general linear models

    SciTech Connect

    Shevenell, L.A.; Beauchamp, J.J.

    1994-11-01

    Several waste disposal sites are located on or adjacent to the karstic Maynardville Limestone (Cmn) and the Copper Ridge Dolomite (Ccr) at the Oak Ridge Y-12 Plant. These formations receive contaminants in groundwater from nearby disposal sites, and these contaminants can be transported quite rapidly through the karst flow system. In order to evaluate transport processes through the karst aquifer, the solutional aspects of the formations must be characterized. As one component of this characterization effort, statistical analyses were conducted on the cavity data to determine whether a suitable model could be identified that is capable of predicting the probability of cavity size or distribution in locations for which drilling data are not available. Existing data on the locations (East, North coordinates), depths (and elevations), and sizes of known conduits and other water zones were used in the analyses. Two different models were constructed in an attempt to predict the distribution of cavities in the vicinity of the Y-12 Plant: general linear models (GLM) and logistic regression models (LOG). Each of the models attempted was very sensitive to the data set used. Models based on subsets of the full data set were found to do an inadequate job of predicting the behavior of the full data set. The fact that the Ccr and Cmn data sets differ significantly is not surprising, considering that the hydrogeology of the two formations differs. Flow in the Cmn is generally at elevations between 600 and 950 ft and is dominantly strike-parallel through submerged, partially mud-filled cavities with sizes up to 40 ft, but more typically less than 5 ft. Recognized flow in the Ccr is generally above 950 ft elevation, with flow both parallel and perpendicular to geologic strike through conduits, which tend to be larger than those in the Cmn and are often not fully saturated at shallower depths.

  20. Development of the complex general linear model in the Fourier domain: application to fMRI multiple input-output evoked responses for single subjects.

    PubMed

    Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.
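
    A toy version of the Fourier-domain estimate, far simpler than the paper's full multivariate model: a nonparametric transfer-function estimate from an input and output series via cross- and auto-spectra. The filter and noise level are synthetic stand-ins.

    ```python
    import numpy as np
    from scipy.signal import csd, welch, lfilter

    rng = np.random.default_rng(3)
    x = rng.standard_normal(4096)                    # stimulus input series
    h = np.exp(-np.arange(64) / 8.0); h /= h.sum()   # "unknown" hemodynamic-like filter
    y = lfilter(h, [1.0], x) + 0.1 * rng.standard_normal(4096)

    f, Sxy = csd(x, y, fs=1.0, nperseg=256)          # cross-spectral density
    _, Sxx = welch(x, fs=1.0, nperseg=256)           # input power spectrum
    H_hat = Sxy / Sxx                                # nonparametric transfer function
    print(np.abs(H_hat[:5]))                         # low-frequency gain near h.sum()=1
    ```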

  1. FIDDLE: A Computer Code for Finite Difference Development of Linear Elasticity in Generalized Curvilinear Coordinates

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2005-01-01

    A three-dimensional numerical solver based on finite-difference solution of three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data such as radial and tangential stresses over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve a lumped-parameter or distributed-parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems of geophysics, where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations, along with the appropriate boundary conditions, have been cast in generalized curvilinear coordinates in the present study.

  2. Generalized linear stability of non-inertial rimming flow in a rotating horizontal cylinder.

    PubMed

    Aggarwal, Himanshu; Tiwari, Naveen

    2015-10-01

    The stability of a thin film of viscous liquid inside a horizontally rotating cylinder is studied using modal and non-modal analysis. The equation governing the film thickness is derived within the lubrication approximation, up to first order in the aspect ratio (average film thickness to cylinder radius). The effects of gravity, viscous stress and capillary pressure are considered in the model. Steady base profiles that are uniform in the axial direction are computed in the parameter space of interest. A linear stability analysis is performed on these base profiles to study their stability to axial perturbations. The destabilizing effects of aspect ratio and surface tension are demonstrated and attributed to capillary instability. The transient growth that gives the maximum amplification of any initial disturbance and the pseudospectra of the stability operator are computed. These computations reveal a weak effect of the non-normality of the operator, and the results of the eigenvalue analysis are recovered after a brief transient period. Results from nonlinear simulations are also presented, and these confirm the validity of the modal analysis for the flow considered in this study. PMID:26496740

  3. General, database-driven fast-feedback system for the Stanford Linear Collider

    SciTech Connect

    Rouse, F.; Allison, S.; Castillo, S.; Gromme, T.; Hall, B.; Hendrickson, L.; Himel, T.; Krauter, K.; Sass, B.; Shoaee, H.

    1991-05-01

    A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs.

  4. General dispersion formulae for atomic third-order non-linear optical properties

    NASA Astrophysics Data System (ADS)

    Bishop, David M.

    1988-12-01

    Dispersion formulae for the parallel and perpendicular optical hyperpolarizabilities $\gamma_\parallel^{\omega}=\gamma_{xxxx}(-\omega_\sigma;\omega_1,\omega_2,\omega_3)$ and $\gamma_\perp^{\omega}=\gamma_{xzzx}(-\omega_\sigma;\omega_1,\omega_2,\omega_3)$, where $\omega_\sigma=\omega_1+\omega_2+\omega_3$, are (for atoms): $\gamma_\parallel^{\omega}/\gamma_\parallel^{0}=1+A\omega_L^2+O(\omega^4)$, $\gamma_\perp^{\omega}/\gamma_\perp^{0}=1+B\omega_L^2+O(\omega^4)$, and $\tfrac{1}{3}\,\gamma_\parallel^{\omega}/\gamma_\perp^{\omega}=1+C\omega_L^2+O(\omega^4)$, where $A$ is independent of the process, $B$ is proportional to $1+az$ with $z$ independent of the process and $a=(\omega_\sigma\omega_3-\omega_1\omega_2)/\omega_L^2$, $C$ is proportional to $1-6a$, and $\omega_L^2=\omega_\sigma^2+\omega_1^2+\omega_2^2+\omega_3^2$. The coefficients are related by $C=A-B$. These results are more general than those previously reported and are asymptotically exact at low frequencies.

  5. Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum

    PubMed Central

    Wilson, Emma D.; Assaf, Tareq; Pearson, Martin J.; Rossiter, Jonathan M.; Dean, Paul; Anderson, Sean R.; Porrill, John

    2015-01-01

    The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks. PMID:26257638
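    A minimal least-mean-squares (LMS) sketch of the decorrelation-style learning at the heart of the adaptive filter model; the unknown plant, filter length, and learning rate are placeholders, and this is not the authors' full biohybrid MRAC scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_taps, mu = 16, 0.01                 # filter length and learning rate (assumed)
    w = np.zeros(n_taps)                  # adaptive filter weights ("synaptic" gains)

    x = rng.standard_normal(5000)         # motor command / efference copy
    target = np.convolve(x, np.ones(8) / 8, mode="full")[:x.size]  # unknown plant

    for k in range(n_taps, x.size):
        u = x[k - n_taps:k][::-1]         # tapped delay line of recent inputs
        e = target[k] - w @ u             # sensory error signal
        w += mu * e * u                   # LMS / covariance learning rule
    ```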

  6. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX*

    PubMed Central

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    2015-01-01

    We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties of the kernel and regression spline methods combined in a nested fashion have not been studied prior to this work, even in the independent data case. PMID:26283801

  7. Point particle binary system with components of different masses in the linear regime of the characteristic formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Cedeño M, C. E.; de Araujo, J. C. N.

    2016-05-01

    A study of binary systems composed of two point particles with different masses in the linear regime of the characteristic formulation of general relativity with a Minkowski background is provided. The present paper generalizes a previous study by Bishop et al. The boundary conditions at the world tubes generated by the particles' orbits are explored, where the metric variables are decomposed in spin-weighted spherical harmonics. The power lost by the emission of gravitational waves is computed using the Bondi News function. The power found is the well-known result obtained by Peters and Mathews using a different approach. This agreement validates the approach considered here. Several multipole term contributions to the gravitational radiation field are also shown.

  8. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    SciTech Connect

    Fowler, Michael James

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy

  9. The generalized cross-validation method applied to geophysical linear traveltime tomography

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Oliveira, N. P.

    2009-12-01

    The oil industry is the major user of applied geophysics methods for subsurface imaging. Among the different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced in exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, a kinematic approach, and those that use the wave amplitude itself, a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the involved matrix is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. A crucial problem in regularization is the selection of the regularization parameter lambda. We use generalized cross validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is used for an application in traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured values of the traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, like a fault and a reservoir. The results using GCV are very good, including those contaminated with noise, and also using different regularization orders, attesting to the feasibility of this technique.
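    A small sketch of GCV-based selection of lambda for a regularized linear inverse problem y = Gx with second-derivative smoothing; the toy operator, noise level, and lambda grid are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 40, 60
    G = rng.standard_normal((m, n))                 # toy tomography matrix
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    y = G @ x_true + 0.5 * rng.standard_normal(m)

    D = np.diff(np.eye(n), n=2, axis=0)             # second-derivative (smoothing) matrix

    def gcv(lam):
        # Influence matrix A(lam) maps the data y to the fitted values
        Ainv = np.linalg.inv(G.T @ G + lam * D.T @ D)
        A = G @ Ainv @ G.T
        resid = y - A @ y
        return m * (resid @ resid) / np.trace(np.eye(m) - A) ** 2

    lams = np.logspace(-4, 4, 60)
    lam_best = lams[np.argmin([gcv(l) for l in lams])]  # minimizer of the GCV function
    ```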

  10. Application of a generalized linear mixed model to analyze mixture toxicity: survival of brown trout affected by copper and zinc.

    PubMed

    Iwasaki, Yuichi; Brinkman, Stephen F

    2015-04-01

    Increased concerns about the toxicity of chemical mixtures have led to greater emphasis on analyzing the interactions among the mixture components based on observed effects. The authors applied a generalized linear mixed model (GLMM) to analyze survival of brown trout (Salmo trutta) acutely exposed to metal mixtures that contained copper and zinc. Compared with dominant conventional approaches based on an assumption of concentration addition and the concentration of a chemical that causes x% effect (ECx), the GLMM approach has 2 major advantages. First, binary response variables such as survival can be modeled without any transformations, and thus sample size can be taken into consideration. Second, the importance of the chemical interaction can be tested in a simple statistical manner. Through this application, the authors investigated whether the estimated concentration of the 2 metals binding to humic acid, which is assumed to be a proxy of nonspecific biotic ligand sites, provided a better prediction of survival effects than dissolved and free-ion concentrations of metals. The results suggest that the estimated concentration of metals binding to humic acid is a better predictor of survival effects, and thus the metal competition at the ligands could be an important mechanism responsible for effects of metal mixtures. Application of the GLMM (and the generalized linear model) presents an alternative or complementary approach to analyzing mixture toxicity. PMID:25524054
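    To make the approach concrete, the hedged sketch below fits a binomial GLM to made-up survival counts with a copper x zinc interaction term whose Wald test gauges the importance of the interaction; it is a fixed-effects simplification that omits the random effects of the authors' GLMM.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical design: survivors out of 10 fish per tank (values made up)
    df = pd.DataFrame({
        "cu": [0, 0, 5, 5, 10, 10, 0, 5, 10, 10],       # Cu, ug/L
        "zn": [0, 50, 0, 50, 0, 50, 100, 100, 100, 0],  # Zn, ug/L
        "alive": [10, 9, 9, 7, 8, 5, 8, 6, 3, 7],
    })
    df["dead"] = 10 - df["alive"]

    # Two-column binomial endog (successes, failures); interaction tests synergy
    endog = df[["alive", "dead"]].to_numpy()
    exog = sm.add_constant(df[["cu", "zn"]].assign(cu_zn=df["cu"] * df["zn"]))
    fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
    print(fit.summary())   # inspect the cu_zn coefficient and its p-value
    ```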

  11. A general linear mathematical model of power flow analysis and control for integrated structure-control systems

    NASA Astrophysics Data System (ADS)

    Xiong, Y. P.; Xing, J. T.; Price, W. G.

    2003-10-01

    Generalized integrated structure-control dynamical systems consisting of any number of active/passive controllers and three-dimensional rigid/flexible substructures are investigated. The developed mathematical model assessing the behaviour of these complex systems includes a description of general boundary conditions, the interaction mechanisms between structures, power flows and control characteristics. Three active control strategies are examined: multiple-channel absolute/relative velocity feedback controllers, their hybrid combination, and an existing passive control system to which the former control systems are attached in order to improve overall control efficiency. From the viewpoint of continuum mechanics, an analytical solution of this generalized structure-control system has been developed, allowing predictions of the dynamic responses at any point on or in substructures of the coupled system. Absolute and relative dynamic response quantities (receptance, transmissibility, mobility and transfer functions) have been derived to evaluate complex dynamic interaction mechanisms through various transmission paths. The instantaneous and time-averaged power flows of energy input, transmission and dissipation or absorption within and between the source substructure, control subsystems and controlled substructure are presented. The general theory developed provides an integrated framework to solve various vibration isolation and control problems and a basis to develop a general algorithm that may allow the user to build arbitrarily complex linear control models using simple commands and inputs. The proposed approach is applied to a practical example to illustrate and validate the mathematical model, to assess control effectiveness and to provide important guidelines to assist vibration control designers.

  12. A semiparametric negative binomial generalized linear model for modeling over-dispersed count data with a heavy tail: Characteristics and applications to crash data.

    PubMed

    Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy

    2016-06-01

    Crash data can often be characterized by over-dispersion, a heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers better performance than the NB model when data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large number of zeros. In addition to greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion. PMID:26945472

  13. A generalized electrostatic micro-mirror (GEM) model for a two-axis convex piecewise linear shaped MEMS mirror

    NASA Astrophysics Data System (ADS)

    Edwards, C. L.; Edwards, M. L.

    2009-05-01

    MEMS micro-mirror technology offers the opportunity to replace larger optical actuators with smaller, faster ones for lidar, network switching, and other beam steering applications. Recent developments in modeling and simulation of MEMS two-axis (tip-tilt) mirrors have resulted in closed-form solutions that are expressed in terms of physical, electrical and environmental parameters related to the MEMS device. The closed-form analytical expressions enable dynamic time-domain simulations without excessive computational overhead and are referred to as the Micro-mirror Pointing Model (MPM). Additionally, these first-principle models have been experimentally validated with in-situ static, dynamic, and stochastic measurements illustrating their reliability. These models have assumed that the mirror has a rectangular shape. Because the corners can limit the dynamic operation of a rectangular mirror, it is desirable to shape the mirror, e.g., mitering the corners. Presented in this paper is the formulation of a generalized electrostatic micromirror (GEM) model with an arbitrary convex piecewise linear shape that is readily implemented in MATLAB and SIMULINK for steady-state and dynamic simulations. Additionally, such a model permits an arbitrary shaped mirror to be approximated as a series of linearly tapered segments. Previously, "effective area" arguments were used to model a non-rectangular shaped mirror with an equivalent rectangular one. The GEM model shows the limitations of this approach and provides a pre-fabrication tool for designing mirror shapes.

  14. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research.

    PubMed

    Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2012-08-01

    An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above.
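    A minimal sketch of the geographical weighting at the core of GWGLM: observations are weighted by a kernel of their distance to a regression point before a locally weighted fit. The Gaussian kernel, fixed bandwidth, and synthetic data are assumptions; the macros are said to offer several kernel choices.

    ```python
    import numpy as np

    def gaussian_weights(coords, point, bandwidth):
        """Gaussian kernel weights for one regression point (fixed bandwidth assumed)."""
        d = np.linalg.norm(coords - point, axis=1)
        return np.exp(-0.5 * (d / bandwidth) ** 2)

    # Locally weighted least squares at one location (illustrative linear case)
    rng = np.random.default_rng(3)
    coords = rng.uniform(0, 100, size=(200, 2))       # site locations
    X = np.column_stack([np.ones(200), rng.standard_normal(200)])
    y = X @ np.array([1.0, 2.0]) + rng.standard_normal(200)

    w = gaussian_weights(coords, coords[0], bandwidth=20.0)
    W = np.diag(w)
    beta_local = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    ```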

  15. Variable selection in Bayesian generalized linear-mixed models: an illustration using candidate gene case-control association studies.

    PubMed

    Tsai, Miao-Yu

    2015-03-01

    The problem of variable selection in generalized linear-mixed models (GLMMs) is pervasive in statistical practice. For the purpose of variable selection, many methodologies for determining the best subset of explanatory variables currently exist according to the model complexity and differences between applications. In this paper, we develop a "higher posterior probability model with bootstrap" (HPMB) approach to select explanatory variables without fitting all possible GLMMs involving a small or moderate number of explanatory variables. Furthermore, to reduce the computational load, we propose an efficient approximation approach using Laplace's method and Taylor's expansion to approximate intractable integrals in GLMMs. Simulation studies and an application to HapMap data provide evidence that this selection approach is computationally feasible and reliable for exploring true candidate genes and gene-gene associations, after adjusting for complex structures among clusters.

  16. A re-formulation of generalized linear mixed models to fit family data in genetic association studies

    PubMed Central

    Wang, Tao; He, Peng; Ahn, Kwang Woo; Wang, Xujing; Ghosh, Soumitra; Laud, Purushottam

    2015-01-01

    The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to be specified using standard statistical software. In this study, we propose a Cholesky decomposition based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently via “proc nlmixed” and “proc glimmix” in SAS, or OpenBUGS via R package BRugs. Performances of these procedures in fitting the re-formulated GLMM are examined through simulation studies. We also apply this re-formulated GLMM to analyze a real data set from Type 1 Diabetes Genetics Consortium (T1DGC). PMID:25873936
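    A small numpy illustration of the re-formulation idea, assuming family random effects b ~ N(0, sigma^2 K) with a known kinship matrix K: writing K = LL^T lets one express b = Lu with i.i.d. u, which standard software can fit. The 3x3 kinship matrix is a toy example.

    ```python
    import numpy as np

    # Toy kinship matrix for three related family members (illustrative values)
    K = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])
    L = np.linalg.cholesky(K)             # K = L @ L.T

    rng = np.random.default_rng(4)
    u = rng.standard_normal((3, 100_000)) # i.i.d. effects, easy for any software
    b = L @ u                             # correlated effects b ~ N(0, K)
    print(np.cov(b))                      # empirically close to K

    # In the re-formulated GLMM the random-effect design matrix Z is replaced
    # by Z @ L, so the fitted model only ever sees the i.i.d. effects u.
    ```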

  17. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  18. Mixture and non-mixture cure fraction models based on the generalized modified Weibull distribution with an application to gastric cancer data.

    PubMed

    Martinez, Edson Z; Achcar, Jorge A; Jácome, Alexandre A A; Santos, José S

    2013-12-01

    Cure fraction models are usually used to model lifetime data with long-term survivors. In the present article, we introduce a Bayesian analysis of the four-parameter generalized modified Weibull (GMW) distribution in the presence of a cure fraction, censored data and covariates. In order to include the proportion of "cured" patients, both mixture and non-mixture model formulations are considered. To demonstrate the ability of using this model in the analysis of real data, we consider an application to data from patients with gastric adenocarcinoma. Inferences are obtained by using MCMC (Markov Chain Monte Carlo) methods.
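    For orientation, the standard population survival functions of the two formulations are sketched below, with pi the cured proportion, theta the promotion-time parameter, and S0(t), F0(t) the survival and distribution functions of the susceptible (here GMW) component; this is the generic form, not the authors' full Bayesian specification.

    ```latex
    % Mixture cure model: a fraction \pi never experiences the event
    S_{\mathrm{pop}}(t) = \pi + (1 - \pi)\, S_0(t)

    % Non-mixture (promotion-time) cure model; the cure fraction is e^{-\theta}
    S_{\mathrm{pop}}(t) = \exp\{-\theta\, F_0(t)\}
    ```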

  19. An application of the complex general linear model to analysis of fMRI single subjects multiple stimuli input data

    NASA Astrophysics Data System (ADS)

    Rio, Daniel; Rawlings, Robert; Woltz, Lawrence; Gilman, Jodi; Hommer, Daniel

    2009-02-01

    The general linear model (GLM) has been extensively applied to fMRI data in the time domain. However, time series data can also be analyzed in the Fourier domain, where the assumptions made as to the noise in the signal can be less restrictive and statistical tests are mathematically more rigorous. A complex form of the GLM in the Fourier domain has been applied to the analysis of fMRI (BOLD) data. This methodology has a number of advantages over temporal methods: (1) noise in the fMRI data is modeled more generally and closer to that actually seen in the data; (2) any input function is allowed regardless of the timing; (3) non-parametric estimation of the transfer function at each voxel is possible; (4) rigorous statistical inference for single subjects is possible. This is demonstrated in the analysis of an experimental design with random, exponentially distributed stimulus inputs (a two-way ANOVA design with input stimuli images of alcohol, non-alcohol beverages and positive or negative images) sampled at 400 milliseconds. This methodology, applied to a pair of subjects, showed precise and interesting results (e.g., alcoholic beverage images attenuate the response to negative images in an alcoholic as compared to a control subject).

  20. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by standard and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I-k_4) as well as macro parameters, such as the volume of distribution (V_d) and binding potential (BP_I and BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
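    A compact sketch of the standard fuzzy C-means iteration underlying the pre-segmentation step; the synthetic data, cluster count, and fuzzifier m = 2 are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    c, m = 2, 2.0                                  # cluster count and fuzzifier
    U = rng.dirichlet(np.ones(c), size=len(X))     # memberships, rows sum to 1

    for _ in range(100):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
    ```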

  1. Accounting for uncertainty in confounder and effect modifier selection when estimating average causal effects in generalized linear models.

    PubMed

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-09-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.

  2. Jamming and percolation in generalized models of random sequential adsorption of linear k -mers on a square lattice

    NASA Astrophysics Data System (ADS)

    Lebovka, Nikolai I.; Tarasevich, Yuri Yu.; Dubinin, Dmitri O.; Laptev, Valeri V.; Vygornitskii, Nikolai V.

    2015-12-01

    The jamming and percolation for two generalized models of random sequential adsorption (RSA) of linear k-mers (particles occupying k adjacent sites) on a square lattice are studied by means of Monte Carlo simulation. The classical RSA model assumes the absence of overlapping of the new incoming particle with the previously deposited ones. The first model is a generalized variant of the RSA model for both k-mers and a lattice with defects. Some of the occupying k adjacent sites are considered as insulating and some of the lattice sites are occupied by defects (impurities). For this model even a small concentration of defects can inhibit percolation for relatively long k-mers. The second model is the cooperative sequential adsorption one where, for each new k-mer, only a restricted number of lateral contacts z with previously deposited k-mers is allowed. Deposition occurs when z ≤ (1-d)z_m, where z_m = 2(k+1) is the maximum number of contacts of a k-mer and d is the fraction of forbidden contacts. Percolation is observed only in some interval k_min ≤ k ≤ k_max, where the values k_min and k_max depend upon the fraction of forbidden contacts d. The value k_max decreases as d increases. A logarithmic dependence of the type log10(k_max) = a + bd, with a = 4.04 ± 0.22 and b = -4.93 ± 0.57, is obtained.

  3. Accounting for Uncertainty in Confounder and Effect Modifier Selection when Estimating Average Causal Effects in Generalized Linear Models

    PubMed Central

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-01-01

    Summary Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012) and Lefebvre et al. (2014), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to non-collapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100 to 150 observations and 50 covariates. The method is applied to data on 15060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within thirty days of diagnosis. PMID:25899155

  4. Kitaev models based on unitary quantum groupoids

    SciTech Connect

    Chang, Liang

    2014-04-15

    We establish a generalization of Kitaev models based on unitary quantum groupoids. In particular, when inputting a Kitaev-Kong quantum groupoid H_C, we show that the ground state manifold of the generalized model is canonically isomorphic to that of the Levin-Wen model based on a unitary fusion category C. Therefore, the generalized Kitaev models provide realizations of the target space of the Turaev-Viro topological quantum field theory based on C.

  5. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-01-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio; the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the southern hemisphere over longer periods in spring time. It is concluded that for the study of climatic scenarios or the assimilation of ozone data, the present

  6. Jamming and percolation in generalized models of random sequential adsorption of linear k-mers on a square lattice.

    PubMed

    Lebovka, Nikolai I; Tarasevich, Yuri Yu; Dubinin, Dmitri O; Laptev, Valeri V; Vygornitskii, Nikolai V

    2015-12-01

    The jamming and percolation for two generalized models of random sequential adsorption (RSA) of linear k-mers (particles occupying k adjacent sites) on a square lattice are studied by means of Monte Carlo simulation. The classical RSA model assumes the absence of overlapping of the new incoming particle with the previously deposited ones. The first model is a generalized variant of the RSA model for both k-mers and a lattice with defects. Some of the occupying k adjacent sites are considered as insulating and some of the lattice sites are occupied by defects (impurities). For this model even a small concentration of defects can inhibit percolation for relatively long k-mers. The second model is the cooperative sequential adsorption one where, for each new k-mer, only a restricted number of lateral contacts z with previously deposited k-mers is allowed. Deposition occurs in the case when z≤(1-d)z(m), where z(m)=2(k+1) is the maximum number of contacts of a k-mer and d is the fraction of forbidden contacts. Percolation is observed only in some interval k(min)≤k≤k(max), where the values k(min) and k(max) depend upon the fraction of forbidden contacts d. The value k(max) decreases as d increases. A logarithmic dependence of the type log(10)(k(max))=a+bd, with a=4.04±0.22 and b=-4.93±0.57, is obtained. PMID:26764641
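    A toy Monte Carlo sketch of classical RSA of k-mers on a square lattice (no defects and no contact rule, unlike the generalized models above), estimating the jamming coverage; the lattice size, open boundaries, and stopping rule are crude assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    Lsize, k = 64, 4
    lattice = np.zeros((Lsize, Lsize), dtype=bool)

    failures = 0
    while failures < 200_000:                      # crude jamming criterion
        x, y = rng.integers(0, Lsize, 2)
        if rng.random() < 0.5:                     # horizontal k-mer
            seg = lattice[x, y:y + k]
        else:                                      # vertical k-mer
            seg = lattice[x:x + k, y]
        if seg.size == k and not seg.any():
            seg[:] = True                          # deposit (slice view writes lattice)
            failures = 0
        else:
            failures += 1

    print(lattice.mean())                          # jamming coverage estimate
    ```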

  7. SNP_NLMM: A SAS Macro to Implement a Flexible Random Effects Density for Generalized Linear and Nonlinear Mixed Models.

    PubMed

    Vock, David M; Davidian, Marie; Tsiatis, Anastasios A

    2014-01-01

    Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing if the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on a NLMM of intravenous drug concentration over time.

  8. Multisite multivariate modeling of daily precipitation and temperature in the Canadian Prairie Provinces using generalized linear models

    NASA Astrophysics Data System (ADS)

    Asong, Zilefac E.; Khaliq, M. N.; Wheater, H. S.

    2016-02-01

    Based on the Generalized Linear Model (GLM) framework, a multisite stochastic modelling approach is developed using daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. Temperature is modeled using a two-stage normal-heteroscedastic model by fitting mean and variance components separately. Likewise, precipitation occurrence and conditional precipitation intensity processes are modeled separately. The relationship between precipitation and temperature is accounted for by using transformations of precipitation as covariates to predict temperature fields. Large scale atmospheric covariates from the National Center for Environmental Prediction Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate these models for the 1971-2000 period. Validation of the developed models is performed on both pre- and post-calibration period data. Results of the study indicate that the developed models are able to capture spatiotemporal characteristics of observed precipitation and temperature fields, such as inter-site and inter-variable correlation structure, and systematic regional variations present in observed sequences. A number of simulated weather statistics ranging from seasonal means to characteristics of temperature and precipitation extremes and some of the commonly used climate indices are also found to be in close agreement with those derived from observed data. This GLM-based modelling approach will be developed further for multisite statistical downscaling of Global Climate Model outputs to explore climate variability and change in this region of Canada.
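    A hedged sketch of the two-part structure described above: a logistic GLM for precipitation occurrence and a Gamma GLM with log link for wet-day intensity, driven by one synthetic covariate standing in for the NCEP predictors; none of this reproduces the authors' multisite calibration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 2000
    z = rng.standard_normal(n)            # stand-in for a large-scale NCEP predictor
    X = sm.add_constant(z)

    p_wet = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * z)))
    wet = rng.random(n) < p_wet           # synthetic wet/dry occurrence
    amount = rng.gamma(2.0, np.exp(0.5 + 0.3 * z) / 2.0)  # synthetic intensities

    occ_fit = sm.GLM(wet.astype(float), X, family=sm.families.Binomial()).fit()
    int_fit = sm.GLM(amount[wet], X[wet],
                     family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    print(occ_fit.params, int_fit.params)
    ```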

  9. SNP_NLMM: A SAS Macro to Implement a Flexible Random Effects Density for Generalized Linear and Nonlinear Mixed Models.

    PubMed

    Vock, David M; Davidian, Marie; Tsiatis, Anastasios A

    2014-01-01

    Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing if the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on a NLMM of intravenous drug concentration over time. PMID:24688453

  10. SNP_NLMM: A SAS Macro to Implement a Flexible Random Effects Density for Generalized Linear and Nonlinear Mixed Models

    PubMed Central

    Vock, David M.; Davidian, Marie; Tsiatis, Anastasios A.

    2014-01-01

    Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing if the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on a NLMM of intravenous drug concentration over time. PMID:24688453

  11. Acute toxicity of ammonia (NH3-N) in sewage effluent to Chironomus riparius: II. Using a generalized linear model

    USGS Publications Warehouse

    Monda, D.P.; Galat, D.L.; Finger, S.E.; Kaiser, M.S.

    1995-01-01

    Toxicity of un-ionized ammonia (NH3-N) to the midge Chironomus riparius was compared using laboratory culture (well) water and sewage effluent (≈0.4 mg/L NH3-N) in two 96-h, static-renewal toxicity experiments. A generalized linear model was used for data analysis. For the first and second experiments, respectively, LC50 values were 9.4 mg/L (Test 1A) and 6.6 mg/L (Test 2A) for ammonia in well water, and 7.8 mg/L (Test 1B) and 4.1 mg/L (Test 2B) for ammonia in sewage effluent. Slopes of dose-response curves for Tests 1A and 2A were equal, but mortality occurred at lower NH3-N concentrations in Test 2A (unequal intercepts). The response of C. riparius to NH3 in effluent was not consistent; dose-response curves for Tests 1B and 2B differed in slope and intercept. Nevertheless, C. riparius was more sensitive to ammonia in effluent than in well water in both experiments, indicating a synergistic effect of ammonia in sewage effluent. These results demonstrate the advantages of analyzing the organism's entire range of response, as opposed to generating LC50 values, which represent only one point on the dose-response curve.
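    To make the dose-response machinery concrete, the sketch below fits a logit-link binomial GLM to made-up mortality counts and recovers the LC50 as the concentration where the linear predictor crosses zero; the data are placeholders, and the paper's comparison of slopes and intercepts across water types goes beyond this.

    ```python
    import numpy as np
    import statsmodels.api as sm

    conc = np.array([2.0, 4.0, 6.0, 8.0, 12.0, 16.0])   # NH3-N, mg/L (made up)
    dead = np.array([1, 3, 8, 12, 17, 19])
    total = np.full_like(dead, 20)

    X = sm.add_constant(np.log10(conc))
    fit = sm.GLM(np.column_stack([dead, total - dead]), X,
                 family=sm.families.Binomial()).fit()

    b0, b1 = fit.params
    lc50 = 10 ** (-b0 / b1)       # logit = 0  <=>  50% mortality
    print(f"LC50 ~ {lc50:.1f} mg/L")
    ```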

  12. Depth-compensated diffuse optical tomography enhanced by general linear model analysis and an anatomical atlas of human head

    PubMed Central

    Tian, Fenghua; Liu, Hanli

    2013-01-01

    One of the main challenges in functional diffuse optical tomography (DOT) is to accurately recover the depth of brain activation, which is even more essential when differentiating true brain signals from task-evoked artifacts in the scalp. Recently, we developed a depth-compensated algorithm (DCA) to minimize the depth localization error in DOT. However, the semi-infinite model that was used in DCA deviated significantly from the realistic human head anatomy. In the present work, we incorporated depth-compensated DOT (DC-DOT) with a standard anatomical atlas of human head. Computer simulations and human measurements of sensorimotor activation were conducted to examine and prove the depth specificity and quantification accuracy of brain atlas-based DC-DOT. In addition, node-wise statistical analysis based on the general linear model (GLM) was also implemented and performed in this study, showing the robustness of DC-DOT that can accurately identify brain activation at the correct depth for functional brain imaging, even when co-existing with superficial artifacts. PMID:23859922

  13. Linear stability analysis of immiscible displacement including continuously changing mobility and capillary effects: Part II - general basic flow profiles

    SciTech Connect

    Huang, A.B.; Yortsos, Y.C.

    1984-09-01

    This paper continues previous work on the linear stability of immiscible, two-phase displacement processes in porous media that include continuously changing mobility and capillary effects. In Part I, simple basic-flow profiles that allow exact solutions were investigated. Here, the stability of non-capillary flows corresponding to a straight-line fractional flow is examined first. Next, the stability of capillary flows for general basic flow profiles is examined. For values of the viscosity ratio above the critical value, the numerical results show that the displacement is unstable to small disturbances of wavelength larger than a critical value, and stable otherwise. This effect is attributed to the stabilizing action of capillarity. Values of the wavelength corresponding to the highest rate of growth are determined numerically. It is found that stability is enhanced at lower values of the capillary number and the injection rate. Finally, a limited sensitivity study of the effect of the functional forms of relative permeability and capillary pressure on stability is carried out.

  14. General characterization of Tityus fasciolatus scorpion venom. Molecular identification of toxins and localization of linear B-cell epitopes.

    PubMed

    Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C

    2015-06-01

    This communication describes the general characteristics of the venom from the Brazilian scorpion Tityus fasciolatus, an endemic species found in central Brazil (states of Goiás and Minas Gerais) that is responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with an LD50 of 2.984 mg/kg (subcutaneous). SDS-PAGE of the soluble venom resolved 10 fractions ranging in size from 6 to 80 kDa. Sheep were employed for anti-T. fasciolatus venom serum production. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we have identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membrane (spot-synthesis technique). The epitopes were located on the 3D structures and some residues important for structure/function were identified.

  15. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy.

  16. Misconceptions in the use of the General Linear Model applied to functional MRI: a tutorial for junior neuro-imagers

    PubMed Central

    Pernet, Cyril R.

    2014-01-01

    This tutorial presents several misconceptions related to the use of the General Linear Model (GLM) in functional Magnetic Resonance Imaging (fMRI). The goal is not to present mathematical proofs but to educate using examples and computer code (in Matlab). In particular, I address issues related to (1) model parameterization (modeling baseline or null events) and scaling of the design matrix; (2) hemodynamic modeling using basis functions; and (3) computing percentage signal change. Using a simple controlled block design and an alternating block design, I first show why "baseline" should not be modeled (model over-parameterization), and how this affects effect sizes. I also show that, depending on what is tested, over-parameterization does not necessarily impact statistical results. Next, using a simple periodic vs. random event-related design, I show how the hemodynamic model (hemodynamic function only or with derivatives) can affect parameter estimates, and detail the role of orthogonalization. I then relate the above results to the computation of percentage signal change. Finally, I discuss how these issues affect group analyses and give some recommendations. PMID:24478622
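    A tiny numpy demonstration of the over-parameterization point, assuming a toy two-condition block design rather than the tutorial's Matlab code: adding an explicit baseline regressor to a design that already contains an intercept and a task regressor makes the columns linearly dependent.

    ```python
    import numpy as np

    task = np.tile(np.r_[np.ones(5), np.zeros(5)], 4)   # on/off blocks, 40 scans
    baseline = 1.0 - task                               # explicit "rest" regressor
    intercept = np.ones(task.size)

    X_bad = np.column_stack([intercept, task, baseline])
    X_ok = np.column_stack([intercept, task])

    print(np.linalg.matrix_rank(X_bad))  # 2: task + baseline equals the intercept
    print(np.linalg.matrix_rank(X_ok))   # 2: same column span without redundancy
    ```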

  17. Optimizing the general linear model for functional near-infrared spectroscopy: an adaptive hemodynamic response function approach

    PubMed Central

    Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju

    2014-01-01

    An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded activations comparable to the oxy-Hb data in statistical power and spatial pattern. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with different cognitive loads over the time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
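    A sketch of the adaptive-HRF idea under stated assumptions (single-gamma HRF, 10 Hz sampling, simple block task): regenerate the GLM regressor for each candidate peak delay and keep the delay that best explains the measured series.

    ```python
    import numpy as np
    from scipy import stats, signal

    dt = 0.1                                        # fNIRS-like sampling (assumed)
    t = np.arange(0, 300, dt)
    box = ((t % 60) < 20).astype(float)             # 20 s task blocks (assumed)

    def regressor(peak_delay):
        h_t = np.arange(0, 30, dt)
        h = stats.gamma.pdf(h_t, a=peak_delay)      # gamma HRF, peak near a - 1 s
        return signal.fftconvolve(box, h)[:t.size] * dt

    rng = np.random.default_rng(8)
    y = 1.5 * regressor(7.0) + 0.3 * rng.standard_normal(t.size)  # synthetic oxy-Hb

    delays = np.arange(3.0, 12.0, 0.5)
    sse = []
    for d in delays:
        X = np.column_stack([np.ones(t.size), regressor(d)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse.append(((y - X @ beta) ** 2).sum())
    print(delays[int(np.argmin(sse))])              # recovered peak delay
    ```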

  18. The overlooked potential of generalized linear models in astronomy - III. Bayesian negative binomial regression and globular cluster populations

    NASA Astrophysics Data System (ADS)

    de Souza, R. S.; Hilbe, J. M.; Buelens, B.; Riggs, J. D.; Cameron, E.; Ishida, E. E. O.; Chies-Santos, A. L.; Killedar, M.

    2015-10-01

    In this paper, the third in a series illustrating the power of generalized linear models (GLMs) for the astronomical community, we elucidate the potential of the class of GLMs which handles count data. The size of a galaxy's globular cluster (GC) population (NGC) is a long-standing puzzle in the astronomical literature. It falls in the category of count data analysis, yet it is usually modelled as if it were a continuous response variable. We have developed a Bayesian negative binomial regression model to study the connection between NGC and the following galaxy properties: central black hole mass, dynamical bulge mass, bulge velocity dispersion and absolute visual magnitude. The methodology introduced herein naturally accounts for heteroscedasticity, intrinsic scatter, errors in measurements in both axes (either discrete or continuous) and allows modelling the population of GCs on its natural scale as a non-negative integer variable. Prediction intervals of 99 per cent around the trend for expected NGC comfortably envelope the data, notably including the Milky Way, which has hitherto been considered a problematic outlier. Finally, we demonstrate how random intercept models can incorporate information on each particular galaxy morphological type. Bayesian variable selection methodology allows for automatically identifying galaxy types with different productions of GCs, suggesting that on average S0 galaxies have a GC population 35 per cent smaller than other types with similar brightness.
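    A minimal maximum-likelihood counterpart of the count-data point, assuming synthetic data: a negative binomial regression of a GC-like count on a brightness-like covariate via statsmodels; the paper's Bayesian model with measurement errors in both axes goes well beyond this.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 300
    mag = rng.uniform(-22, -16, n)                   # absolute magnitude (synthetic)
    mu = np.exp(1.0 - 0.4 * (mag + 20))              # brighter -> more clusters
    y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # overdispersed counts

    X = sm.add_constant(mag)
    fit = sm.NegativeBinomial(y, X).fit(disp=False)  # estimates slope and dispersion
    print(fit.params)
    ```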

  19. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  20. Projected changes in precipitation and temperature over the Canadian Prairie Provinces using the Generalized Linear Model statistical downscaling approach

    NASA Astrophysics Data System (ADS)

    Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.

    2016-08-01

    In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. Then the calibrated models are used to generate daily sequences of precipitation and temperature for the 1962-2005 historical (conditioned on NCEP predictors), and future period (2006-2100) using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase-5) Earth System Models corresponding to Representative Concentration Pathway (RCP): RCP2.6, RCP4.5, and RCP8.5 scenarios. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter while minimum temperature is expected to warm faster than the maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.

  1. General characterization of Tityus fasciolatus scorpion venom. Molecular identification of toxins and localization of linear B-cell epitopes.

    PubMed

    Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C

    2015-06-01

    This communication describes the general characteristics of the venom from the Brazilian scorpion Tityus fasciolatus, an endemic species found in central Brazil (states of Goiás and Minas Gerais) that is responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with an LD50 of 2.984 mg/kg (subcutaneous). SDS-PAGE of the soluble venom resolved 10 fractions ranging in size from 6 to 80 kDa. Sheep were employed for anti-T. fasciolatus venom serum production. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we have identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membrane (spot-synthesis technique). The epitopes were located on the 3D structures and some residues important for structure/function were identified. PMID:25817000

  2. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model with an ordinal response variable and several covariates. The paper comprises three main parts: the parameter estimation procedure, a simulation study, and an application of the model to real data. For parameter estimation, the concepts of thresholds and nested random effects, and the computational algorithm, are described. Simulated data are generated under three conditions to assess the effect of different parameter values of the random-effect distributions. The last part is the application of the model to data on poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly from West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this model, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan), nested within district (kabupaten), and districts are nested within province. Simulation results are evaluated using the absolute relative bias (ARB) and the relative root mean square error (RRMSE). They show that the province parameters have the highest bias, but the most stable RRMSE, across all conditions. The simulation design could be improved by adding further conditions, such as higher correlations between covariates. Furthermore, in the application to the real data, only the number of farmer families and the number of health personnel contribute significantly to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The data are taken from PODES (Potensi Desa) 2008, published by BPS (Badan Pusat Statistik).
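
    As a hedged illustration of the response structure (not the authors' estimation code), a minimal simulation of an ordinal cumulative-logit response with nested province/district random effects; thresholds, variances and effect sizes below are invented:

```python
# Simulate an ordinal (cumulative-logit) response with nested random
# effects: sub-district within district within province. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_prov, n_dist, n_sub = 3, 3, 20
thresholds = np.array([-1.0, 0.5, 2.0])    # cut-points for 4 ordered levels
beta = 0.7                                  # covariate effect

rows = []
for p in range(n_prov):
    u_p = rng.normal(0, 0.5)                # province random effect
    for d in range(n_dist):
        u_d = rng.normal(0, 0.8)            # district effect, nested in p
        for s in range(n_sub):
            x = rng.normal()                # standardized covariate
            eta = beta * x + u_p + u_d
            cum = 1.0 / (1.0 + np.exp(-(thresholds - eta)))   # P(Y <= k)
            probs = np.diff(np.concatenate(([0.0], cum, [1.0])))
            rows.append((p, d, s, x, rng.choice(4, p=probs)))

print("first records (prov, dist, sub, x, y):", rows[:3])
```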

  3. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-05-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. It is concluded that for the study of climate scenarios or the assimilation of
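
    A hedged sketch of what such a linear (Cariolle-type) photochemistry scheme computes; the coefficient values below are invented for illustration and are not those of the parameterization described above:

```python
# Hedged sketch of a Cariolle-type linear ozone parameterization: the ozone
# tendency is a first-order expansion around a 2-D model climatology in
# ozone mixing ratio, temperature and overhead column, plus a crude
# heterogeneous-loss term driven by a "cold tracer". All coefficients are
# invented for illustration only.

r0, T0, S0 = 5.0e-6, 220.0, 8.0e18   # climatology: mixing ratio, K, mol/cm^2
A1 = 0.0                              # net tendency at the climatology
A2 = -1.0 / (40.0 * 86400.0)          # relaxation rate wrt ozone (1/s)
A3 = -5.0e-14                         # sensitivity to temperature (ratio/s/K)
A4 = 1.0e-31                          # sensitivity to overhead column

def ozone_tendency(r, T, S, cold_tracer=0.0, k_het=1.0e-6):
    """dr/dt of the linear scheme; the last term mimics the two-equation
    version in which a cold tracer activates heterogeneous destruction."""
    return (A1 + A2 * (r - r0) + A3 * (T - T0) + A4 * (S - S0)
            - k_het * cold_tracer * r)

# forward-Euler integration of one air parcel over 30 days
r, dt = 5.5e-6, 3600.0
for _ in range(24 * 30):
    r += dt * ozone_tendency(r, T=215.0, S=7.8e18)
print(f"ozone mixing ratio after 30 days: {r:.3e}")
```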

  4. Modelling and mapping spatio-temporal trends of heavy metal accumulation in moss and natural surface soil monitored 1990-2010 throughout Norway by multivariate generalized linear models and geostatistics

    NASA Astrophysics Data System (ADS)

    Nickel, Stefan; Hertel, Anne; Pesch, Roland; Schröder, Winfried; Steinnes, Eiliv; Uggerud, Hilde Thelle

    2014-12-01

    Objective. This study explores the statistical relations between the accumulation of heavy metals in moss and natural surface soil and potential influencing factors, such as atmospheric deposition, by use of multivariate regression-kriging and generalized linear models. Based on data collected in 1995, 2000, 2005 and 2010 throughout Norway, the statistical correlation of a set of potential predictors (elevation, precipitation, density of different land uses, population density, physical properties of soil) with the concentrations of cadmium (Cd), mercury and lead in moss and natural surface soil (response variables), respectively, was evaluated. Spatio-temporal trends were estimated by applying generalized linear models and geostatistics to spatial data covering Norway. The resulting maps were used to investigate to what extent the HM concentrations in moss and natural surface soil are correlated. Results. From a set of ten potential predictor variables, the modelled atmospheric deposition showed the highest correlation with heavy metal concentrations in moss and natural surface soil. The density of various land uses within a 5 km radius reveals significant correlations with the lead and cadmium concentrations in moss and the mercury concentration in natural surface soil. Elevation also appeared as a relevant factor for the accumulation of lead and mercury in moss and of cadmium in natural surface soil. Precipitation was found to be a significant factor for cadmium in moss and mercury in natural surface soil. The integrated use of multivariate generalized linear models and kriging interpolation enabled the creation of heavy metal maps at a high level of spatial resolution. The spatial patterns of cadmium and lead concentrations in moss and natural surface soil in 1995 and 2005 are similar. The heavy metal concentrations in moss and natural surface soil are significantly correlated, with high coefficients for lead, medium for cadmium and moderate for mercury. From 1995 up to 2010 the
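
    A sketch of the regression-kriging workflow named above: fit a GLM trend of concentration on predictors, krige the residuals, and add the two parts for mapping. Data are synthetic, the predictor set is an illustrative stand-in, and the pykrige package is assumed to be available:

```python
# Regression-kriging sketch: GLM trend + ordinary kriging of residuals.
import numpy as np
import statsmodels.api as sm
from pykrige.ok import OrdinaryKriging   # assumed installed

rng = np.random.default_rng(3)
n = 200
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)
deposition = rng.gamma(2.0, 1.0, n)      # e.g. modelled atmospheric deposition
elevation = rng.uniform(0, 1500, n)
trend = 0.8 * np.log(deposition) + 3e-4 * elevation
conc = np.exp(trend + 0.3 * np.sin(x / 15.0) + rng.normal(0, 0.1, n))

# 1) trend model: Gamma GLM with log link (concentrations are positive)
X = sm.add_constant(np.column_stack([np.log(deposition), elevation]))
glm = sm.GLM(conc, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
resid = np.log(conc) - np.log(glm.fittedvalues)

# 2) ordinary kriging of the GLM residuals, 3) recombination
ok = OrdinaryKriging(x, y, resid, variogram_model="spherical")
r_hat, _ = ok.execute("points", x[:5], y[:5])
pred = np.exp(np.log(glm.fittedvalues[:5]) + np.asarray(r_hat))
print("regression-kriging predictions:", np.round(pred, 2))
```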

  5. Speed limit reduction in urban areas: a before-after study using Bayesian generalized mixed linear models.

    PubMed

    Heydari, Shahram; Miranda-Moreno, Luis F; Fu, Liping

    2014-12-01

    In fall 2009, a new speed limit of 40 km/h was introduced on local streets in Montreal (previous speed limit: 50 km/h). This paper proposes a methodology to efficiently estimate the effect of such a reduction on speeding behaviors. We employ a full Bayes before-after approach, which overcomes the limitations of the empirical Bayes method. The proposed methodology allows for the analysis of speed data using hourly observations. Therefore, the entire daily profile of speed is considered. Furthermore, it accounts for the entire distribution of speed, in contrast to the traditional approach of considering only a point estimate such as the 85th percentile speed. Different reference speeds were used to examine variations in the treatment effectiveness in terms of speeding rate and frequency. In addition to comparing rates of vehicles exceeding reference speeds of 40 km/h and 50 km/h (speeding), we verified how the implemented treatment affected "excessive speeding" behaviors (exceeding 80 km/h). To model operating speeds, two Bayesian generalized mixed linear models were utilized. These models have the advantage of addressing the heterogeneity problem in observations and efficiently capturing potential intra-site correlations. A variety of site characteristics, temporal variables, and environmental factors were considered. The analyses indicated that variables such as lane width and night hour had an increasing effect on speeding. Conversely, roadside parking had a decreasing effect on speeding. One-way streets and lane width had an increasing effect on excessive speeding, whereas evening hour had a decreasing effect. This study concluded that although the treatment was effective with respect to speed references of 40 km/h and 50 km/h, its effectiveness was not significant with respect to excessive speeding, which carries a great risk to pedestrians and cyclists in urban areas. Therefore, caution must be taken in drawing conclusions about the effectiveness of speed limit reduction. This
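
    A minimal full-Bayes sketch of the kind of mixed linear model described above, written with PyMC (assumed installed): hourly speeds with a site random effect and a before/after indicator. The data, priors and effect sizes are illustrative, not the study's:

```python
# Full-Bayes mixed linear model sketch for before-after speed data.
import numpy as np
import pymc as pm

rng = np.random.default_rng(4)
n_sites, n_obs = 20, 48
site = np.repeat(np.arange(n_sites), n_obs)
after = np.tile(np.r_[np.zeros(24), np.ones(24)], n_sites)
true_site = rng.normal(0, 2, n_sites)
speed = 46 + true_site[site] - 3.0 * after + rng.normal(0, 4, site.size)

with pm.Model():
    mu0 = pm.Normal("mu0", 45, 10)            # baseline mean speed
    delta = pm.Normal("delta", 0, 5)          # before-after treatment effect
    sd_site = pm.HalfNormal("sd_site", 5)     # between-site heterogeneity
    u = pm.Normal("u", 0, sd_site, shape=n_sites)
    sigma = pm.HalfNormal("sigma", 10)
    pm.Normal("y", mu0 + u[site] + delta * after, sigma, observed=speed)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print("posterior mean treatment effect:",
      idata.posterior["delta"].mean().item())
```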

  6. A Comparison between Linear IRT Observed-Score Equating and Levine Observed-Score Equating under the Generalized Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen

    2012-01-01

    In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…

  7. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  8. Symposium on General Linear Model Approach to the Analysis of Experimental Data in Educational Research (Athens, Georgia, June 29-July 1, 1967). Final Report.

    ERIC Educational Resources Information Center

    Bashaw, W. L., Ed.; Findley, Warren G., Ed.

    This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…

  9. Improving model-based diagnosis through algebraic analysis: The Petri net challenge

    SciTech Connect

    Portinale, L.

    1996-12-31

    The present paper describes the empirical evaluation of a linear algebra approach to model-based diagnosis for the case in which the behavioral model of the device under examination is described through a Petri net model. In particular, we show that algebraic analysis based on P-invariants of the net model can significantly improve the performance of a model-based diagnostic system, while keeping the integrity of a general framework defined from a formal logical theory. A system called INVADS is described, and experimental results, performed on a car fault domain and involving the comparison of different implementations of P-invariant based diagnosis, are then discussed.
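
    A sketch of the algebraic core mentioned above: P-invariants of a Petri net are the solutions x of C^T x = 0, where C is the place-by-transition incidence matrix. The tiny net below is illustrative, not the paper's car-fault model:

```python
# Compute P-invariants of a toy Petri net via the nullspace of C^T.
from math import lcm
from sympy import Matrix

C = Matrix([[-1,  1],    # p1: consumed by t1, produced by t2
            [ 1, -1],    # p2: produced by t1, consumed by t2
            [ 0,  0]])   # p3: untouched by both transitions

for v in C.T.nullspace():                  # solve C^T x = 0
    scale = lcm(*[term.q for term in v])   # clear rational denominators
    print("P-invariant:", list(v * scale))
# expected: [1, 1, 0] (tokens on p1+p2 conserved) and [0, 0, 1]
```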

  10. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

    SciTech Connect

    Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

    2006-08-01

    We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
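
    A hedged sketch of the augmented-Lagrangian outer loop with a derivative-free inner solver. Nelder-Mead stands in here for the paper's generating set direct search (which additionally treats the linear constraints explicitly); the test problem and update rules are textbook simplifications, not the authors' algorithm:

```python
# Augmented-Lagrangian outer loop with a derivative-free inner minimizer.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # smooth objective, derivatives never used
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def c(x):                      # nonlinear equality constraint c(x) = 0
    return x[0] ** 2 + x[1] ** 2 - 4.0

lam, mu = 0.0, 1.0
x = np.array([0.0, 0.0])
for _ in range(15):
    def L(xv):                 # augmented Lagrangian for current lam, mu
        return f(xv) + lam * c(xv) + 0.5 * mu * c(xv) ** 2
    x = minimize(L, x, method="Nelder-Mead",
                 options={"xatol": 1e-9, "fatol": 1e-9}).x
    lam += mu * c(x)           # first-order multiplier update
    if abs(c(x)) > 1e-4:
        mu *= 2.0              # tighten the penalty while infeasible
print("x* =", np.round(x, 4), "  c(x*) =", f"{c(x):.1e}")
```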

  11. ELAS: A general-purpose computer program for the equilibrium problems of linear structures. Volume 2: Documentation of the program. [subroutines and flow charts

    NASA Technical Reports Server (NTRS)

    Utku, S.

    1969-01-01

    A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimal input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of a piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by strain tensors best-fitted in the least-squares sense at the mesh points where the deflections are given. The selection of local coordinate systems, whenever necessary, is automatic. Core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions during assembly.
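
    A tiny sketch of the displacement-method/finite-element idea described above: a 1-D bar of linear elements, assembled stiffness matrix, one fixed end, axial tip load. Values are illustrative:

```python
# 1-D bar FEM via the displacement method: assemble K, apply BCs, solve.
import numpy as np

E, A, Ltot, n_el = 210e9, 1e-4, 1.0, 8               # steel bar, 8 elements
L = Ltot / n_el
k = E * A / L * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):                                 # assembly
    K[e:e + 2, e:e + 2] += k

F = np.zeros(n_el + 1)
F[-1] = 1000.0                                        # 1 kN tip load

# impose u(0) = 0 by eliminating the first row/column
u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])
print(f"tip displacement: {u[-1]:.3e} m "
      f"(exact {1000 * Ltot / (E * A):.3e} m)")
```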

  12. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.

  13. Non-linear oscillation of inter-connected satellites system under the combined influence of the solar radiation pressure and dissipative force of general nature

    NASA Astrophysics Data System (ADS)

    Sharma, S.; Narayan, A.

    2001-06-01

    The non-linear oscillation of an inter-connected satellites system about its equilibrium position in the neighbourhood of the main resonance ω = 1, under the combined effects of solar radiation pressure and dissipative forces of general nature, is discussed. It is found that the oscillation of the system is disturbed when the frequency of the natural oscillation approaches the resonance frequency.

  14. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time, and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data. They interrelate all of the data to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. Because the consistency and integrity of the model are assured, the consistency and integrity of the various specification documents are ensured as well. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed, and how these needs are being addressed by international standards writing teams.

  15. Generalized two-dimensional (2D) linear system analysis metrics (GMTF, GDQE) for digital radiography systems including the effect of focal spot, magnification, scatter, and detector characteristics

    PubMed Central

    Kuhls-Gilcrist, Andrew T.; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen

    2010-01-01

    The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks. PMID:21243038

  16. Wronskian solutions of the T-, Q- and Y-systems related to infinite dimensional unitarizable modules of the general linear superalgebra gl (M | N)

    NASA Astrophysics Data System (ADS)

    Tsuboi, Zengo

    2013-05-01

    In [1] (Z. Tsuboi, Nucl. Phys. B 826 (2010) 399, arXiv:0906.2039), we proposed Wronskian-like solutions of the T-system for the [M, N]-hook of the general linear superalgebra gl(M | N). We have generalized these Wronskian-like solutions to the ones for the general T-hook, which is a union of the [M1, N1]-hook and the [M2, N2]-hook (M = M1 + M2, N = N1 + N2). These solutions are related to Weyl-type supercharacter formulas of infinite dimensional unitarizable modules of gl(M | N). Our solutions also include a Wronskian-like solution discussed in [2] (N. Gromov, V. Kazakov, S. Leurent, Z. Tsuboi, JHEP 1101 (2011) 155, arXiv:1010.2720) in relation to the AdS5/CFT4 spectral problem.

  17. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustino, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric and surface parameters, presents the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs), and states the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed: one of the forward RT problem and one of the adjoint RT problem, from which all WFs and PDs of interest can be computed. In this presentation we discuss applications of both the linearization and adjoint approaches.

  18. Polynomial approximation of functions of matrices and its application to the solution of a general system of linear equations

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1987-01-01

    During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator that can be expressed as f(A), where A is an NxN matrix (e.g., exp(A), sin(A), A^-1). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly; usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). The task is reduced to a problem of approximating f(z) by a polynomial in z, where z belongs to a domain D in the complex plane which includes all the eigenvalues of A. This approximation problem is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties. The approximation thus achieved is an almost-best one. Implementation of the algorithm for practical problems is described. Since the solution of a linear system Ax = b is x = A^-1 b, an iterative solution of it can be regarded as a polynomial approximation to f(A) = A^-1. Implementing the algorithm in this case is also described.
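
    A hedged sketch of the idea: interpolate f(z) at Chebyshev points on an interval containing the spectrum of A, then apply the polynomial to A acting on v using only matrix-vector products. A symmetric A is assumed so the spectrum is real, and f = exp is used as the example; this is an illustration of the general technique, not the paper's algorithm:

```python
# Apply a Chebyshev interpolant of f to A*v via the Clenshaw recurrence.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(5)
n = 50
B = rng.normal(size=(n, n))
A = (B + B.T) / 10                       # symmetric => real spectrum
v = rng.normal(size=n)

evals = np.linalg.eigvalsh(A)
lo, hi = evals[0], evals[-1]             # interval containing the spectrum
p = C.Chebyshev.interpolate(np.exp, deg=20, domain=[lo, hi])

# Clenshaw recurrence for p(A) v: only matrix-vector products are needed
At = (2 * A - (hi + lo) * np.eye(n)) / (hi - lo)   # map spectrum to [-1, 1]
coef = p.coef
b1 = np.zeros_like(v)
b2 = np.zeros_like(v)
for ck in coef[:0:-1]:
    b1, b2 = ck * v + 2 * (At @ b1) - b2, b1
w = coef[0] * v + At @ b1 - b2

# reference: exact exp(A) v via the eigendecomposition
evals, evecs = np.linalg.eigh(A)
w_ref = evecs @ (np.exp(evals) * (evecs.T @ v))
print("relative error:", np.linalg.norm(w - w_ref) / np.linalg.norm(w_ref))
```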

  19. VISCEL: A general-purpose computer program for analysis of linear viscoelastic structures (user's manual), volume 1

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Akyuz, F. A.; Heer, E.

    1972-01-01

    This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN II for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables the employment of constant time steps on the logarithmic scale, thereby reducing the computational effort resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in the FORTRAN V language for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K of core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11,700 (decimal) words of core storage.

  20. Model Based Definition

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage, and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System, the Design Data Management System (DDMS), based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) system. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings as auxiliary information to the model. The design data lifecycle implemented several new release states, to be used prior to formal release, that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments, and initial application to the completion of the Upper Stage design. Some of the high-value examples are reviewed.

  1. Quantum, classical, and hybrid QM/MM calculations in solution: general implementation of the ddCOSMO linear scaling strategy.

    PubMed

    Lipparini, Filippo; Scalmani, Giovanni; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Frisch, Michael J; Mennucci, Benedetta

    2014-11-14

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute. PMID:25399133

  2. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    SciTech Connect

    Lipparini, Filippo; Scalmani, Giovanni; Frisch, Michael J.; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Mennucci, Benedetta

    2014-11-14

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.

  3. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    NASA Astrophysics Data System (ADS)

    Lipparini, Filippo; Scalmani, Giovanni; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Frisch, Michael J.; Mennucci, Benedetta

    2014-11-01

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.

  4. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-07-01

    Using the locally-enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and the computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  5. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-11-01

    Using the locally-enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and the computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  6. Exact power series solutions of the structure equations of the general relativistic isotropic fluid stars with linear barotropic and polytropic equations of state

    NASA Astrophysics Data System (ADS)

    Harko, T.; Mak, M. K.

    2016-09-01

    Obtaining exact solutions of the spherically symmetric general relativistic gravitational field equations describing the interior structure of an isotropic fluid sphere is a long-standing problem in theoretical and mathematical physics. The usual approach to this problem consists mainly in the numerical investigation of the Tolman-Oppenheimer-Volkoff and mass continuity equations, which describe the hydrostatic stability of dense stars. In the present paper we introduce an alternative approach for the study of the relativistic fluid sphere, based on the relativistic mass equation, obtained by eliminating the energy density in the Tolman-Oppenheimer-Volkoff equation. Despite its apparent complexity, the relativistic mass equation can be solved exactly by using a power series representation for the mass and the Cauchy convolution for infinite power series. We obtain exact series solutions for general relativistic dense astrophysical objects described by the linear barotropic and the polytropic equations of state, respectively. For the polytropic case we obtain the exact power series solution corresponding to arbitrary values of the polytropic index n. The explicit form of the solution is presented for the polytropic index n=1, and for the indices n=1/2 and n=1/5, respectively. The case of n=3 is also considered. In each case the exact power series solution is compared with the exact numerical solution, which is reproduced by the power series solution truncated to only seven terms. The power series representations of the geometric and physical properties of the linear barotropic and polytropic stars are also obtained.
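
    As a hedged numerical companion to the abstract (an illustration of the Tolman-Oppenheimer-Volkoff integration such series solutions are checked against, not the authors' series method), a sketch for an n=1 polytrope with parameter values chosen to mimic a standard test case:

```python
# Integrate the TOV equations for a polytropic EOS p = K*rho^(1+1/n),
# geometrized units G = c = 1. Illustrative parameter values only.
import numpy as np
from scipy.integrate import solve_ivp

K, n_poly = 100.0, 1.0
gamma = 1.0 + 1.0 / n_poly

def tov(r, y):
    m, p = y
    rho = (max(p, 0.0) / K) ** (1.0 / gamma)
    dm = 4.0 * np.pi * r**2 * rho
    dp = -(rho + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    return [dm, dp]

rho_c = 1.28e-3                          # central density (geometrized)
p_c = K * rho_c**gamma
r0 = 1e-6                                # start slightly off-center
y0 = [4.0 / 3.0 * np.pi * r0**3 * rho_c, p_c]

surface = lambda r, y: y[1] - 1e-12      # pressure ~ 0 marks the surface
surface.terminal = True
sol = solve_ivp(tov, [r0, 50.0], y0, events=surface, rtol=1e-10, atol=1e-14)
print(f"R = {sol.t[-1]:.3f}, M = {sol.y[0, -1]:.4f} (geometrized units)")
```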

  7. Principles of models based engineering

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  8. Sauer's non-linear voltage division.

    PubMed

    Schwan, H P; McAdams, E T; Jossinet, J

    2002-09-01

    The non-linearity of the electrode-tissue interface impedance gives rise to harmonics and thus degrades the accuracy of impedance measurements. Also, electrodes are often driven into the non-linear range of their polarisation impedance. This is particularly true in clinical applications. Techniques to correct for electrode effects are usually based on linear electrode impedance data. However, these data can be very different from the non-linear values needed. Non-linear electrode data suggested a model based on simple assumptions, useful in predicting the frequency dependence of non-linear effects from linear properties. Sauer's treatment is a first attempt to provide a more general and rigorous basis for modelling the non-linear state. The paper reports Sauer's treatment of the non-linear case, points out its limitations, and considers his treatment of a series arrangement of two impedances. The tissue impedance is represented by a linear voltage-current characteristic. The interface impedance is represented by a Volterra expansion. The response of this network to periodic signals is calculated up to the second-order term of the series expansion. The resultant, time-dependent current is found to contain a DC term (rectification), as well as frequency-dependent terms. Sauer's treatment assumes a voltage clamp across the impedances and neglects higher-order terms in the series expansion. As a consequence, it fails to adequately represent some experimentally observed phenomena. It is therefore suggested that Sauer's expressions for the voltage divider should be combined with the non-linear treatments previously published by the co-authors. Although Sauer's work on the non-linear voltage divider was originally applied to the study of the non-linear behaviour of the electrode-electrolyte interface and biological tissues, the work is applicable to a wide range of research areas.
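
    A numerical illustration of the effect described above: a second-order (Volterra-type) nonlinearity turns a pure sinusoidal drive into a DC (rectification) component plus a second harmonic. The coefficients g1, g2 are arbitrary illustrative values, not measured electrode data:

```python
# Show rectification and harmonic generation from a truncated Volterra term.
import numpy as np

fs, f0 = 10_000.0, 50.0
t = np.arange(0.0, 1.0, 1.0 / fs)
v = 0.1 * np.cos(2.0 * np.pi * f0 * t)   # interface voltage (V)

g1, g2 = 1e-3, 5e-3                       # first/second-order coefficients
i = g1 * v + g2 * v ** 2                  # current response (A)

spec = np.abs(np.fft.rfft(i)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
for f in (0.0, f0, 2.0 * f0):
    k = int(np.argmin(np.abs(freqs - f)))
    print(f"{f:5.0f} Hz component: {spec[k]:.2e} A")
```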

  9. First principles approach to the Abraham-Minkowski controversy for the momentum of light in general linear non-dispersive media

    NASA Astrophysics Data System (ADS)

    Ramos, Tomás; Rubilar, Guillermo F.; Obukhov, Yuri N.

    2015-02-01

    We study the problem of the definition of the energy-momentum tensor of light in general moving non-dispersive media with a linear constitutive law. Using the basic principles of classical field theory, we show that for the correct understanding of the problem, one needs to carefully distinguish situations when the material medium is modeled either as a background on which light propagates or as a dynamical part of the total system. In the former case, we prove that the (generalized) Belinfante-Rosenfeld (BR) tensor for the electromagnetic field coincides with the Minkowski tensor. We derive a complete set of balance equations for this open system and show that the symmetries of the background medium are directly related to the conservation of the Minkowski quantities. In particular, for isotropic media, the angular momentum of light is conserved despite the fact that the Minkowski tensor is non-symmetric. For the closed system of light interacting with matter, we model the material medium as a relativistic non-dissipative fluid and we prove that it is always possible to express the total BR tensor of the closed system either in the Abraham or in the Minkowski separation. However, in the case of dynamical media, the balance equations have a particularly convenient form in terms of the Abraham tensor. Our results generalize previous attempts and provide a first-principles basis for a unified understanding of the long-standing Abraham-Minkowski controversy without ad hoc arguments.

  10. Quantitative measurement of temperature by proton resonance frequency shift at low field: a general method to correct non-linear spatial and temporal phase deformations

    NASA Astrophysics Data System (ADS)

    Grimault, S.; Lucas, T.; Quellec, S.; Mariette, F.

    2004-09-01

    MRI thermometry methods are usually based on the temperature dependence of the proton resonance frequency. Unfortunately, these methods are very sensitive to the phase drift induced by the instability of the scanner, which prevents any temperature mapping over long periods of time. A general method based on 3D spatial modelling of the phase drift as a function of time is presented. The MRI temperature measurements were validated on gel samples with uniform and constant temperature and with a linear temperature gradient. In the case of uniform temperature conditions, correction of the phase drift proved to be essential when long periods of acquisition were required, as the bias could reach values of up to 200 °C in its absence. The temperature uncertainty measured by MRI was 1.2 °C on average over 290 min. This accuracy is consistent with the requirements for food applications, especially when thermocouples are useless.
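
    A hedged sketch of the drift-correction idea (a 1-D spatial toy, not the paper's 3D model): fit a low-order polynomial in space and time to the phase of unheated reference voxels, subtract it, then convert phase to temperature. The PRF constants are typical assumed values:

```python
# Polynomial spatio-temporal phase-drift correction for PRF thermometry.
import numpy as np

rng = np.random.default_rng(6)
nx, nt = 32, 60
X, T = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(0, 1, nt),
                   indexing="ij")

drift = 0.4 * T + 0.2 * X * T + 0.1 * X ** 2        # synthetic drift (rad)
heating = np.zeros((nx, nt))
heating[12:20, 30:] = -0.3                          # heated region (rad)
phase = drift + heating + rng.normal(0.0, 0.01, (nx, nt))

ref = np.ones(nx, dtype=bool)
ref[10:22] = False                                  # unheated reference voxels

def design(xf, tf):                                 # basis: 1, x, x^2, t, x*t
    return np.column_stack([np.ones_like(xf), xf, xf ** 2, tf, xf * tf])

A = design(X[ref].ravel(), T[ref].ravel())
coef, *_ = np.linalg.lstsq(A, phase[ref].ravel(), rcond=None)
corrected = phase - (design(X.ravel(), T.ravel()) @ coef).reshape(nx, nt)

alpha, gamma, B0, TE = -0.01e-6, 267.5e6, 1.5, 20e-3  # assumed PRF constants
dT = corrected / (alpha * gamma * B0 * TE)            # rad -> degrees C
print(f"mean temperature rise in heated region: {dT[12:20, 30:].mean():.1f} C")
```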

  11. Argumentation in Science Education: A Model-Based Framework

    ERIC Educational Resources Information Center

    Bottcher, Florian; Meisert, Anke

    2011-01-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons…

  12. Euclidean Closed Linear Transformations of Complex Spacetime and generally of Complex Spaces of dimension four endowed with the Same or Different Metric

    NASA Astrophysics Data System (ADS)

    Vossos, Spyridon; Vossos, Elias

    2016-08-01

    …closed LSTT is reduced if one RIO has a small velocity with respect to another RIO. Thus, we have an infinite number of closed LSTTs, each one with the corresponding SR theory. In the case where we relate accelerated observers with a variable metric of spacetime, we have the case of General Relativity (GR). To make this clear, we produce a generalized Schwarzschild metric, which is in accordance with any SR based on this closed complex LSTT and with the Einstein equations. The application of this kind of transformation to SR and GR is obvious, but the results may be applied to any linear space of dimension four endowed with a steady or variable metric, whose elements (four-vectors) have a spatial part (vector) with Euclidean metric.

  13. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems

    PubMed Central

    Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

    2015-01-01

    Models going beyond the rigid-rotor and harmonic-oscillator levels are mandatory for providing accurate theoretical predictions of several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in a power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency in treating medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed to show the capabilities of these developments, as regards both the variety of treatable systems and the eligible methods. © 2015 Wiley Periodicals, Inc. PMID:26345131

  14. Dispersive wave processing: a model-based solution

    SciTech Connect

    Candy, J.V.; Chambers, D.C.

    1996-10-01

    Wave propagation through various media represents a significant problem in many applications in acoustics and electromagnetics, especially when the medium is dispersive. We posit a general dispersive wave propagation model that can easily represent many classes of dispersive waves, and proceed to develop a model-based processor employing this underlying structure. The general solution to the model-based dispersive wave estimation problem is developed using the Bayesian maximum a posteriori approach, which leads to a nonlinear extended Kalman filter processor.
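
    A generic extended-Kalman-filter skeleton of the kind such a model-based processor is built on: estimating the parameters of a nonlinear measurement model from noisy data. The dispersive-phase-like measurement model and all values are invented for illustration; this is not the authors' processor:

```python
# EKF for static-parameter estimation under a nonlinear measurement model.
import numpy as np

rng = np.random.default_rng(7)
true = np.array([1.0, 3.0])                 # amplitude, dispersion-like rate
ts = np.linspace(0.0, 5.0, 400)
z = true[0] * np.sin(true[1] * np.sqrt(ts + 1)) + rng.normal(0, 0.05, ts.size)

def h(x, t):                                # nonlinear measurement model
    return x[0] * np.sin(x[1] * np.sqrt(t + 1))

def H(x, t):                                # Jacobian of h wrt the state
    s = np.sqrt(t + 1)
    return np.array([np.sin(x[1] * s), x[0] * s * np.cos(x[1] * s)])

x = np.array([0.8, 2.8])                    # initial guess
P = np.eye(2)
R = 0.05 ** 2
for t, zk in zip(ts, z):
    P = P + 1e-8 * np.eye(2)                # static state: trivial predict
    Hk = H(x, t)
    S = Hk @ P @ Hk + R                     # innovation variance (scalar)
    K = P @ Hk / S                          # Kalman gain
    x = x + K * (zk - h(x, t))              # measurement update
    P = (np.eye(2) - np.outer(K, Hk)) @ P
print("EKF estimate:", np.round(x, 3), " true:", true)
```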

  15. Genetic parameters for feather pecking and aggressive behavior in a large F2-cross of laying hens using generalized linear mixed models.

    PubMed

    Bennewitz, J; Bögelein, S; Stratz, P; Rodehutscord, M; Piepho, H P; Kjaer, J B; Bessei, W

    2014-04-01

    Feather pecking and aggressive pecking are well-known problems in egg production. In the present study, genetic parameters for 4 feather-pecking-related traits were estimated using generalized linear mixed models. The traits were bouts of feather pecking delivered (FPD), bouts of feather pecking received (FPR), bouts of aggressive pecking delivered (APD), and bouts of aggressive pecking received (APR). An F2-design was established from 2 divergently selected founder lines. The lines had been selected for low or high feather pecking for 10 generations. The number of F2 hens was 910. They were housed in pens with around 40 birds. Each pen was observed in 21 sessions of 20 min, distributed over 3 consecutive days. An animal model was applied that treated the bouts observed within 20 min as repeated observations. An over-dispersed Poisson distribution was assumed for the observed counts, and the link function was a log link. The model included a random animal effect, a random permanent environment effect, and a random day-by-hen effect. The residual variance was approximated on the link scale by the delta method. The results showed a heritability of around 0.10 on the link scale for FPD and APD, and of 0.04 for APR. The heritability of FPR was zero. For all behavior traits, substantial permanent environmental effects were observed. The approximate genetic correlation between FPD and APD (FPD and APR) was 0.81 (0.54). Egg production and feather eating records were collected on the same hens as well and were analyzed with a generalized linear mixed model, assuming a binomial distribution and using a probit link function. The heritability on the link scale for egg production was 0.40 and for feather eating 0.57. The approximate genetic correlation between FPD and egg production was 0.50 and between FPD and feather eating 0.73. Selection might help to reduce feather pecking, but this might result in an unfavorable correlated selection response reducing egg production. Feather eating and
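
    A small sketch of a Poisson GLMM with log link for bout counts, using the variational Bayes mixed GLM in statsmodels as an accessible stand-in for the animal model described above (a true animal model additionally needs pedigree information that plain statsmodels cannot represent). Data are simulated:

```python
# Poisson GLMM sketch: bout counts with a per-hen random effect.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

rng = np.random.default_rng(8)
n_hens, n_sess = 60, 21
hen = np.repeat(np.arange(n_hens), n_sess)
u = rng.normal(0, 0.5, n_hens)                        # permanent hen effect
line = (np.arange(n_hens) < 30).astype(float)[hen]    # high/low FP line
lam = np.exp(0.2 + 0.8 * line + u[hen])               # log link
df = pd.DataFrame({"bouts": rng.poisson(lam), "line": line, "hen": hen})

model = PoissonBayesMixedGLM.from_formula(
    "bouts ~ line", {"hen": "0 + C(hen)"}, df)
fit = model.fit_vb()                                  # variational Bayes fit
print(fit.summary())
```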

  16. Model-based clustered-dot screening

    NASA Astrophysics Data System (ADS)

    Kim, Sang Ho

    2006-01-01

    I propose a halftone screen design method based on a human visual system model and the characteristics of the electro-photographic (EP) printer engine. Generally, screen design methods based on human visual models produce dispersed-dot type screens while design methods considering EP printer characteristics generate clustered-dot type screens. In this paper, I propose a cost function balancing the conflicting characteristics of the human visual system and the printer. By minimizing the obtained cost function, I design a model-based clustered-dot screen using a modified direct binary search algorithm. Experimental results demonstrate the superior quality of the model-based clustered-dot screen compared to a conventional clustered-dot screen.

  17. Model Based Testing for Agent Systems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiyong; Thangarajah, John; Padgham, Lin

    Although agent technology is gaining world wide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and which require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent based systems. The testing framework is a model based approach using the design models of the Prometheus agent development methodology. In this paper we focus on model based unit testing and identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, present a brief overview of the unit testing process and an example. Although we use the design artefacts from Prometheus the approach is suitable for any plan and event based agent system.

  18. Using generalized linear models to estimate selectivity from short-term recoveries of tagged red drum Sciaenops ocellatus: Effects of gear, fate, and regulation period

    USGS Publications Warehouse

    Bacheler, N.M.; Hightower, J.E.; Burdick, S.M.; Paramore, L.M.; Buckel, J.A.; Pollock, K.H.

    2010-01-01

    Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated. © 2009 Elsevier B.V.

  19. Using generalized linear models to estimate selectivity from short-term recoveries of tagged red drum Sciaenops ocellatus: Effects of gear, fate, and regulation period

    USGS Publications Warehouse

    Burdick, Summer M.; Hightower, Joseph E.; Bacheler, Nathan M.; Paramore, Lee M.; Buckel, Jeffrey A.; Pollock, Kenneth H.

    2010-01-01

    Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated.
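
    A hedged sketch of the tag-recovery selectivity idea: model recovery counts with a GLM including gear, fate and period effects and their interactions, where a quadratic length term lets the fit take the dome shape noted above. All data are simulated and the specification is illustrative, not the study's model:

```python
# GLM selectivity sketch: recovery counts vs. length, gear and period.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 3000
df = pd.DataFrame({
    "length": rng.uniform(300, 900, n),                  # mm
    "gear": rng.choice(["hookline", "gillnet", "pound"], n),
    "period": rng.choice(["early", "mid", "late"], n),
})
# dome-shaped "true" selectivity centered near 550 mm
eta = -1.0 - ((df.length - 550) / 150) ** 2 + (df.gear == "gillnet") * 0.5
df["recovered"] = rng.poisson(np.exp(eta))

glm = smf.glm(
    "recovered ~ length + I(length**2) + gear * period",
    data=df, family=sm.families.Poisson()).fit()
print(glm.summary().tables[1])
```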

  20. Pathways of the North Pacific Intermediate Water identified through the tangent linear and adjoint models of an ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Fujii, Y.; Nakano, T.; Usui, N.; Matsumoto, S.; Tsujino, H.; Kamachi, M.

    2014-12-01

    This study develops a strategy for tracing a target water mass, and applies it to analyzing the pathway of the North Pacific Intermediate Water (NPIW) from the subarctic gyre to the northwestern part of the subtropical gyre south of Japan in a simulation of an ocean general circulation model. This strategy estimates the pathway of the water mass that travels from an origin to a destination area during a specific period using a conservation property concerning tangent linear and adjoint models. In our analysis, a large fraction of the low salinity origin water mass of NPIW initially comes from the Okhotsk or Bering Sea, flows through the southeastern side of the Kuril Islands, and is advected to the Mixed Water Region (MWR) by the Oyashio current. It then enters the Kuroshio Extension (KE) at the first KE ridge, and is advected eastward by the KE current. However, it deviates southward from the KE axis around 158°E over the Shatsky Rise, or around 170°E on the western side of the Emperor Seamount Chain, and enters the subtropical gyre. It is finally transported westward by the recirculation flow. This pathway corresponds well to the shortcut route of NPIW from MWR to the region south of Japan inferred from analysis of the long-term freshening trend of NPIW observations.

  1. Generalized Linear Model (GLM) framework for the association of host variables and viral strains with liver fibrosis in HCV/HIV coinfected patients.

    PubMed

    Matas, Marina; Picornell, Antònia; Cifuentes, Carmen; Payeras, Antoni; Bassa, Antoni; Homar, Francesc; González-Candelas, Fernando; López-Labrador, F Xavier; Moya, Andrés; Ramon, Maria M; Castro, José A

    2013-01-01

    Chronic hepatitis C virus (HCV) infection is the main cause of advanced and end-stage liver disease world-wide, and an important factor of morbidity and mortality in Human Immunodeficiency Virus-1 (HIV-1) co-infected individuals. Whereas the genetic variability of HCV has been studied extensively in monoinfected patients, comprehensive analyses of both patient and virus characteristics are still scarce in HCV/HIV co-infection. In order to find correlates of liver damage, we analyzed demographic, epidemiological and clinical features of HCV/HIV co-infected patients along with the genetic makeup of HCV (viral subtypes and lineage, studied by nucleotide sequencing and phylogenetic analysis of the NS5B region). We used the Generalized Linear Model (GLM) methodology to integrate data from the virus and the infected host and to find predictors of liver damage. The degree of liver disease was evaluated indirectly by means of two indices (APRI and FIB-4), accounting for the time since infection to estimate fibrosis progression rates. Our analyses identified a reduced number of variables (both from the virus and the host) implicated in liver damage, which included the stage of HIV infection, levels of gamma-glutamyl transferase and cholesterol, and some distinct HCV phylogenetic clades. PMID:23174528

  2. Serial follow-up study on renal handling of calcium and phosphorus after soil replacement in Cd-polluted rice paddies estimated using a general linear mixed model.

    PubMed

    Kobayashi, Etsuko; Suwazono, Yasushi; Honda, Ryumon; Dochi, Mire; Nishijo, Muneko; Kido, Teruhiko; Nakagawa, Hideaki

    2009-01-01

    A 10-year follow-up study was conducted to investigate the effects of renal handling of calcium (Ca) and phosphorus (P) after the removal of cadmium-polluted soil in rice paddies and replacing it with nonpolluted soil. Using a general linear mixed model, serial changes of Ca and P concentrations in urine and serum (Ca-U/S, P-U/S), fractional excretion of Ca (FECa), and percent tubular reabsorption of P (%TRP) were determined in 37 persons requiring observation in the Cd-polluted Kakehashi River Basin, Japan. Ca-U and Ca-S remained within the normal range in both sexes. FECa in men returned to the normal level within 3.3 years from the completion of soil replacement. Overall, it is suggested that the renal handling of Ca showed no or only a slight change throughout the observation period in both sexes. P-U decreased gradually. P-S showed lower than normal values in the men and values at the lower end of the normal range in women, although the values recovered gradually to normal. %TRP values remained low throughout the observation period and the values did not recover in either sex. However, the results of P-U and P-S suggested that the renal handling of P may recover after the completion of soil replacement.

  3. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of iterations required. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as how the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was performed by a genetic algorithm.
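
    The following is a minimal Gaussian radial basis function network of the kind described, mapping segment features to an edge-shift initial guess; the two features, the training targets, and all hyperparameters are synthetic assumptions.

        import numpy as np

        def rbf_fit(X, y, centers, gamma):
            """Fit linear output weights for a Gaussian RBF network."""
            # Activation matrix: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            w, *_ = np.linalg.lstsq(np.exp(-gamma * d2), y, rcond=None)
            return w

        def rbf_predict(X, centers, gamma, w):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-gamma * d2) @ w

        # Toy training set: segment features (e.g. local pattern density and
        # segment length) versus converged edge shifts from past OPC runs.
        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(200, 2))
        y = 5.0 * X[:, 0] - 3.0 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

        centers = X[rng.choice(200, 20, replace=False)]
        w = rbf_fit(X, y, centers, gamma=10.0)
        print(rbf_predict(X[:5], centers, gamma=10.0, w=w))  # initial guesses handed to OPC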

  4. Generalization of color-difference formulas for any illuminant and any observer by assuming perfect color constancy in a color-vision model based on the OSA-UCS system.

    PubMed

    Oleari, Claudio; Melgosa, Manuel; Huertas, Rafael

    2011-11-01

    The most widely used color-difference formulas are based on color-difference data obtained under D65 illumination or similar and for a 10° visual field; i.e., these formulas hold true for the CIE 1964 observer adapted to the D65 illuminant. This work considers the psychometric color-vision model based on the Optical Society of America-Uniform Color Scales (OSA-UCS) system previously published by the first author [J. Opt. Soc. Am. A 21, 677 (2004); Color Res. Appl. 30, 31 (2005)] with the additional hypothesis that complete illuminant adaptation with perfect color constancy exists in the visual evaluation of color differences. In this way a computational procedure is defined for color conversion between different illuminant adaptations, which is an alternative to the current chromatic adaptation transforms. This color conversion allows passage between different observers, e.g., CIE 1964 and CIE 1931. An application of this color conversion is made here to color-difference evaluation for any observer and any illuminant adaptation: these transformations convert tristimulus values related to any observer and illuminant adaptation to those related to the observer and illuminant adaptation for which the color-difference formulas are defined, i.e., to the CIE 1964 observer adapted to the D65 illuminant, and then the known color-difference formulas can be applied. The adaptations to the illuminants A, C, F11, D50, Planckian and daylight at any color temperature and for the CIE 1931 and CIE 1964 observers are considered as examples, and all the corresponding transformations are given for practical use.

  5. Multitemporal Modelling of Socio-Economic Wildfire Drivers in Central Spain between the 1980s and the 2000s: Comparing Generalized Linear Models to Machine Learning Algorithms.

    PubMed

    Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M Pilar

    2016-01-01

    Socio-economic factors are of key importance during all phases of wildfire management, including prevention, suppression and restoration. However, modeling these factors at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site represents a good example of how human activities play a key role in wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and in the 2000s to identify changes between the two periods in the socio-economic drivers affecting wildfire occurrence. GLM bases its estimation on wildfire presence-absence observations, whereas Maxent uses presence-only data. According to indicators like sensitivity and commission error, Maxent outperformed GLM in both periods. It achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s. GLM, in contrast, obtained 23.33%, 64.97%, 9.41% and 18.34%, respectively. However, GLM performed more consistently than Maxent in terms of overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contribution. As a result of urban sprawl and the abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas the influence of the Forest-Grassland Interface (FGI) decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension to integrate it into wildfire risk assessment. PMID:27557113
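
    A hedged sketch of the GLM side of such a comparison: a binomial (logistic) GLM fitted to synthetic presence-absence data, followed by the sensitivity and commission-error indicators quoted above. Predictor names and coefficients are invented for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)

        # Synthetic grid cells: stand-ins for population density and distance
        # to the wildland-urban interface (WUI).
        n = 1000
        pop_density = rng.gamma(2.0, 50.0, n)
        wui_dist = rng.uniform(0, 10, n)
        logit = -1.0 + 0.02 * pop_density - 0.3 * wui_dist
        fire = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # presence-absence

        X = sm.add_constant(np.column_stack([pop_density, wui_dist]))
        glm = sm.GLM(fire, X, family=sm.families.Binomial()).fit()

        pred = (glm.predict(X) > 0.5).astype(int)
        tp = ((pred == 1) & (fire == 1)).sum()
        fp = ((pred == 1) & (fire == 0)).sum()
        fn = ((pred == 0) & (fire == 1)).sum()
        print("sensitivity:", tp / (tp + fn))        # share of fires detected
        print("commission error:", fp / (tp + fp))   # share of false alarms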

  6. Multitemporal Modelling of Socio-Economic Wildfire Drivers in Central Spain between the 1980s and the 2000s: Comparing Generalized Linear Models to Machine Learning Algorithms

    PubMed Central

    Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M. Pilar

    2016-01-01

    Socio-economic factors are of key importance during all phases of wildfire management, including prevention, suppression and restoration. However, modeling these factors at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site represents a good example of how human activities play a key role in wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and in the 2000s to identify changes between the two periods in the socio-economic drivers affecting wildfire occurrence. GLM bases its estimation on wildfire presence-absence observations, whereas Maxent uses presence-only data. According to indicators like sensitivity and commission error, Maxent outperformed GLM in both periods. It achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s. GLM, in contrast, obtained 23.33%, 64.97%, 9.41% and 18.34%, respectively. However, GLM performed more consistently than Maxent in terms of overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contribution. As a result of urban sprawl and the abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas the influence of the Forest-Grassland Interface (FGI) decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension to integrate it into wildfire risk assessment. PMID:27557113

  7. Multitemporal Modelling of Socio-Economic Wildfire Drivers in Central Spain between the 1980s and the 2000s: Comparing Generalized Linear Models to Machine Learning Algorithms.

    PubMed

    Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M Pilar

    2016-01-01

    Socio-economic factors are of key importance during all phases of wildfire management, including prevention, suppression and restoration. However, modeling these factors at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site represents a good example of how human activities play a key role in wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and in the 2000s to identify changes between the two periods in the socio-economic drivers affecting wildfire occurrence. GLM bases its estimation on wildfire presence-absence observations, whereas Maxent uses presence-only data. According to indicators like sensitivity and commission error, Maxent outperformed GLM in both periods. It achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s. GLM, in contrast, obtained 23.33%, 64.97%, 9.41% and 18.34%, respectively. However, GLM performed more consistently than Maxent in terms of overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contribution. As a result of urban sprawl and the abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas the influence of the Forest-Grassland Interface (FGI) decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension to integrate it into wildfire risk assessment.

  8. Efficient Model-Based Diagnosis Engine

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Vatan, Farrokh; Barrett, Anthony; James, Mark; Mackey, Ryan; Williams, Colin

    2009-01-01

    An efficient diagnosis engine - a combination of mathematical models and algorithms - has been developed for identifying faulty components in a possibly complex engineering system. This model-based diagnosis engine embodies a twofold approach to reducing, relative to prior model-based diagnosis engines, the amount of computation needed to perform a thorough, accurate diagnosis. The first part of the approach involves a reconstruction of the general diagnostic engine to reduce the complexity of the mathematical-model calculations and of the software needed to perform them. The second part of the approach involves algorithms for computing a minimal diagnosis (the term "minimal diagnosis" is defined below). A somewhat lengthy background discussion is prerequisite to a meaningful summary of the innovative aspects of the present efficient model-based diagnosis engine. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD (see figure). Diagnosis - the task of finding faulty components - is reduced to finding those components, the abnormalities of which could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. A minimal diagnosis stands in contradistinction to the trivial solution, in which all components are deemed to be faulty, and which, therefore, always explains all inconsistencies.
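
    The notion of a minimal diagnosis can be made concrete with a small sketch: in Reiter's framework a diagnosis must intersect every conflict set, so minimal diagnoses are the minimal hitting sets of the conflicts. The brute-force enumeration below is illustrative only, not the engine's far more efficient algorithm.

        from itertools import combinations

        def minimal_diagnoses(components, conflicts):
            """Enumerate minimal hitting sets of the conflict sets.

            Each conflict is a set of components that cannot all be healthy;
            a diagnosis must intersect every conflict. Brute force over
            candidate sizes, so only suitable for small systems.
            """
            diagnoses = []
            for size in range(len(components) + 1):
                for cand in combinations(components, size):
                    cand = set(cand)
                    if all(cand & c for c in conflicts):
                        # keep only candidates that contain no smaller diagnosis
                        if not any(d <= cand for d in diagnoses):
                            diagnoses.append(cand)
            return diagnoses

        # Toy system: observations rule out {A1, A2} and {A2, M1} all being healthy.
        conflicts = [{"A1", "A2"}, {"A2", "M1"}]
        print(minimal_diagnoses(["A1", "A2", "M1"], conflicts))
        # -> [{'A2'}, {'A1', 'M1'}]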

  9. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H.; Lehman, Sean K.; Goodman, Dennis M.

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  10. Qualitative model-based diagnosis using possibility theory

    NASA Technical Reports Server (NTRS)

    Joslyn, Cliff

    1994-01-01

    The potential for the use of possibility in the qualitative model-based diagnosis of spacecraft systems is described. The first sections of the paper briefly introduce the Model-Based Diagnostic (MBD) approach to spacecraft fault diagnosis; Qualitative Modeling (QM) methodologies; and the concepts of possibilistic modeling in the context of Generalized Information Theory (GIT). Then the necessary conditions for the applicability of possibilistic methods to qualitative MBD, and a number of potential directions for such an application, are described.

  11. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  12. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612
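
    To illustrate the model-based idea of writing down a bespoke probabilistic model and letting the inference follow mechanically, here is a dependency-free toy: exact posterior inference by enumeration in a two-variable discrete model. This sketches the concept only; it is not Infer.NET.

        # A bespoke two-variable model: latent skill -> observed test result.
        # P(skill) and P(pass | skill) are the modeling assumptions.
        p_skill = {True: 0.3, False: 0.7}
        p_pass_given_skill = {True: 0.9, False: 0.2}

        def posterior_skill(passed):
            """Exact inference by enumeration: P(skill | observed result)."""
            joint = {}
            for skill in (True, False):
                like = p_pass_given_skill[skill] if passed else 1 - p_pass_given_skill[skill]
                joint[skill] = p_skill[skill] * like
            z = sum(joint.values())                 # model evidence
            return {s: p / z for s, p in joint.items()}

        print(posterior_skill(passed=True))
        # P(skill | pass) = 0.27 / (0.27 + 0.14) ~= 0.659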

  13. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  14. Model-based reconfiguration: Diagnosis and recovery

    NASA Technical Reports Server (NTRS)

    Crow, Judy; Rushby, John

    1994-01-01

    We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.

  15. Linear Accelerators

    SciTech Connect

    Sidorin, Anatoly

    2010-01-05

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  16. A general entry to linear, dendritic and branched thiourea-linked glycooligomers as new motifs for phosphate ester recognition in water.

    PubMed

    Jiménez Blanco, José L; Bootello, Purificación; Ortiz Mellet, Carmen; Gutiérrez Gallego, Ricardo; García Fernández, José M

    2004-01-01

    A blockwise iterative synthetic strategy for the preparation of linear, dendritic and branched full-carbohydrate architectures has been developed by using sugar azido(carbamate) isothiocyanates as key templates; the presence of intersaccharide thiourea bridges provides anchoring points for hydrogen bond-directed molecular recognition of phosphate esters in water.

  17. A Generalization of Pythagoras's Theorem and Application to Explanations of Variance Contributions in Linear Models. Research Report. ETS RR-14-18

    ERIC Educational Resources Information Center

    Carlson, James E.

    2014-01-01

    Many aspects of the geometry of linear statistical models and least squares estimation are well known. Discussions of the geometry may be found in many sources. Some aspects of the geometry relating to the partitioning of variation that can be explained using a little-known theorem of Pappus and have not been discussed previously are the topic of…

  18. Sequential Bayesian Detection: A Model-Based Approach

    SciTech Connect

    Sullivan, E J; Candy, J V

    2007-08-13

    Sequential detection theory has a long history: it evolved in the late 1940s through the work of Wald, was followed by Middleton's classic exposition in the 1960s, and was coupled with the concurrent enabling technology of digital computer systems and the development of sequential processors. Its development, when coupled to modern sequential model-based processors, offers a reasonable way to attack physics-based problems. In this chapter, the fundamentals of sequential detection are reviewed from the Neyman-Pearson theoretical perspective and formulated for both linear and nonlinear (approximate) Gauss-Markov, state-space representations. We review the development of modern sequential detectors and incorporate the sequential model-based processors as an integral part of their solution. Motivated by a wealth of physics-based detection problems, we show how both linear and nonlinear processors can seamlessly be embedded into the sequential detection framework to provide a powerful approach to solving non-stationary detection problems.
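
    A minimal sketch of the sequential idea in Wald's original form, for an i.i.d. Gaussian mean-shift test; in the chapter's state-space setting the per-sample log-likelihood would instead come from Kalman-filter innovations. Thresholds follow the standard Wald approximations.

        import numpy as np

        def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
            """Wald's sequential probability ratio test for a Gaussian mean shift.

            Accumulates the log-likelihood ratio sample by sample and stops
            as soon as it crosses either threshold.
            """
            upper = np.log((1 - beta) / alpha)    # decide H1
            lower = np.log(beta / (1 - alpha))    # decide H0
            llr = 0.0
            for k, x in enumerate(samples, start=1):
                llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
                if llr >= upper:
                    return "H1", k                # decision and samples used
                if llr <= lower:
                    return "H0", k
            return "undecided", len(samples)

        rng = np.random.default_rng(3)
        data = rng.normal(1.0, 1.0, 200)          # truth: mean 1, i.e. H1
        print(sprt(data, mu0=0.0, mu1=1.0, sigma=1.0))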

  19. Sequential Bayesian Detection: A Model-Based Approach

    SciTech Connect

    Candy, J V

    2008-12-08

    Sequential detection theory has a long history: it evolved in the late 1940s through the work of Wald, was followed by Middleton's classic exposition in the 1960s, and was coupled with the concurrent enabling technology of digital computer systems and the development of sequential processors. Its development, when coupled to modern sequential model-based processors, offers a reasonable way to attack physics-based problems. In this chapter, the fundamentals of sequential detection are reviewed from the Neyman-Pearson theoretical perspective and formulated for both linear and nonlinear (approximate) Gauss-Markov, state-space representations. We review the development of modern sequential detectors and incorporate the sequential model-based processors as an integral part of their solution. Motivated by a wealth of physics-based detection problems, we show how both linear and nonlinear processors can seamlessly be embedded into the sequential detection framework to provide a powerful approach to solving non-stationary detection problems.

  20. Model based control of dynamic atomic force microscope

    SciTech Connect

    Lee, Chibum; Salapaka, Srinivasa M.

    2015-04-15

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H∞ control theory. This design yields a significant improvement when compared to conventional proportional-integral designs, as verified by experiments.

  1. Model based control of dynamic atomic force microscope.

    PubMed

    Lee, Chibum; Salapaka, Srinivasa M

    2015-04-01

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H∞ control theory. This design yields a significant improvement when compared to conventional proportional-integral designs, as verified by experiments.

  2. Linear Colliders

    NASA Astrophysics Data System (ADS)

    Yamamoto, Akira; Yokoya, Kaoru

    2015-02-01

    An overview of linear collider programs is given. The history and technical challenges are described and the pioneering electron-positron linear collider, the SLC, is first introduced. For future energy frontier linear collider projects, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) are introduced and their technical features are discussed. The ILC is based on superconducting RF technology and the CLIC is based on two-beam acceleration technology. The ILC collaboration completed the Technical Design Report in 2013, and has come to the stage of "Design to Reality." The CLIC collaboration published the Conceptual Design Report in 2012, and the key technology demonstration is in progress. The prospects for further advanced acceleration technology are briefly discussed for possible long-term future linear colliders.

  3. Linear Colliders

    NASA Astrophysics Data System (ADS)

    Yamamoto, Akira; Yokoya, Kaoru

    An overview of linear collider programs is given. The history and technical challenges are described and the pioneering electron-positron linear collider, the SLC, is first introduced. For future energy frontier linear collider projects, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) are introduced and their technical features are discussed. The ILC is based on superconducting RF technology and the CLIC is based on two-beam acceleration technology. The ILC collaboration completed the Technical Design Report in 2013, and has come to the stage of "Design to Reality." The CLIC collaboration published the Conceptual Design Report in 2012, and the key technology demonstration is in progress. The prospects for further advanced acceleration technology are briefly discussed for possible long-term future linear colliders.

  4. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  5. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.

  6. Linear Collisions

    ERIC Educational Resources Information Center

    Walkiewicz, T. A.; Newby, N. D., Jr.

    1972-01-01

    A discussion of linear collisions between two or three objects is related to a junior-level course in analytical mechanics. The theoretical discussion uses a geometrical approach that treats elastic and inelastic collisions from a unified point of view. Experiments with a linear air track are described. (Author/TS)

  7. LINEAR ACCELERATOR

    DOEpatents

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  8. Model-based ocean acoustic passive localization. Revision 1

    SciTech Connect

    Candy, J.V.; Sullivan, E.J.

    1994-06-01

    A model-based approach is developed (theoretically) to solve the passive localization problem. Here the authors investigate the design of a model-based identifier for a shallow water ocean acoustic problem characterized by a normal-mode model. In this problem they show how the processor can be structured to estimate the vertical wave numbers directly from pressure-field and sound-speed measurements, thereby eliminating the need for synthetic aperture processing or even a propagation model solution. Finally, they investigate various special cases of the source localization problem, designing a model-based localizer for each and evaluating the underlying structure with the expectation of gaining more and more insight into the general problem.

  9. General theory for multiple input-output perturbations in complex molecular systems. 1. Linear QSPR electronegativity models in physical, organic, and medicinal chemistry.

    PubMed

    González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M

    2013-01-01

    In general, perturbation methods start with a known exact solution of a problem and add "small" variation terms in order to approach a solution for a related problem without a known exact solution. Perturbation theory has been widely used in almost all areas of science. Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases, we have an even larger list of related problems without known solutions. We need to know the change in all these properties after a perturbation of the initial boundary conditions. That is, when we test large sets of similar, but different, compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no QSPR general-purpose perturbation theory to solve this problem. In this work, we first review general aspects and applications of both perturbation theory and QSPR models. Second, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-Perturbation theory models. The first model correctly classifies >100,000 pairs of intra-molecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp). The model predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in boundary conditions (solvent, temperature

  10. General theory for multiple input-output perturbations in complex molecular systems. 1. Linear QSPR electronegativity models in physical, organic, and medicinal chemistry.

    PubMed

    González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M

    2013-01-01

    In general, perturbation methods start with a known exact solution of a problem and add "small" variation terms in order to approach a solution for a related problem without a known exact solution. Perturbation theory has been widely used in almost all areas of science. Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases, we have an even larger list of related problems without known solutions. We need to know the change in all these properties after a perturbation of the initial boundary conditions. That is, when we test large sets of similar, but different, compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no QSPR general-purpose perturbation theory to solve this problem. In this work, we first review general aspects and applications of both perturbation theory and QSPR models. Second, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-Perturbation theory models. The first model correctly classifies >100,000 pairs of intra-molecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp). The model predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in boundary conditions (solvent, temperature

  11. Model-based phase-shifting interferometer

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high precision and quantitative surface tests. In the MPI, the partial null lens (PNL) is employed to implement the non-null test. With some alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing reconstruction (ROR) method is employed for the retrace error correction of the non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for the accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI holds great potential in modern optical shop testing.

  12. Feedbacks from Green House Gas Emissions on Roads: A General Methodology for Analyzing Global Warming on Linear Infrastructure with a Case Study in the Northeastern U.S

    NASA Astrophysics Data System (ADS)

    Jacobs, J. M.; Meagher, W.; Daniel, J.; Linder, E.

    2011-12-01

    The Intergovernmental Panel on Climate Change attributes the observed pattern of change to the influence of anthropogenic forcing, stating that it is extremely unlikely that the global pattern of warming can be explained without external forcing, and that it is very likely that greenhouse gases caused the warming globally over the last 50 years. Consequently, much effort has been focused on understanding the contribution of road transportation to the emissions of greenhouse gases. Strikingly little research has been conducted to understand the implications of climate change on the performance and design of road networks. When using water and energy balance approaches, climate is an integral part of modeling pavement deterioration processes including rutting, thermal cracking, frost heave, and thaw weakening. The potential of climate change raises the possibility that the frequency, duration, and severity of these deterioration processes may increase. This research explores the value of NARCCAP climate data sets in transportation infrastructure models. Here, we present a general methodology to demonstrate how built infrastructure might be affected, using various RCM climate scenarios and pavement designs to quantify the climate change impact on pavement performance in a case study approach. We present challenges and results in using the Regional Climate Model datasets as inputs, through intermediary hydrologic functions, into the Federal Department of Transportation's Mechanistic-Empirical Pavement Design Guide Model.

  13. Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul G.

    2000-01-01

    The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using the Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize the advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is shown that, in spite of its model-based nature, GPC has good robustness properties, being a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
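
    A sketch of the receding-horizon mechanics underlying GPC for a linear state-space plant: build the horizon prediction matrices, minimize a quadratic tracking-plus-control-effort cost, and apply only the first move. The plant matrices, horizon, and weighting are illustrative assumptions, not the thesis's setup.

        import numpy as np

        def gpc_matrices(A, B, C, horizon):
            """Prediction y = F x0 + G u over the horizon, for x+ = Ax + Bu, y = Cx."""
            F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(horizon)])
            G = np.zeros((horizon, horizon))
            for i in range(horizon):
                for j in range(i + 1):
                    G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
            return F, G

        # Toy discretized double integrator (assumed for illustration).
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.005], [0.1]])
        C = np.array([[1.0, 0.0]])

        horizon, lam = 10, 0.01
        F, G = gpc_matrices(A, B, C, horizon)
        # Minimizer of ||G u - (r - F x)||^2 + lam ||u||^2.
        K = np.linalg.solve(G.T @ G + lam * np.eye(horizon), G.T)

        x, r = np.array([[1.0], [0.0]]), np.zeros((horizon, 1))
        for _ in range(50):                  # receding-horizon loop
            u = (K @ (r - F @ x))[0, 0]      # apply only the first move
            x = A @ x + B * u
        print("regulated output:", (C @ x).item())   # driven toward 0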

  14. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    SciTech Connect

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-15

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches then represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, is introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.
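
    The payoff of such symmetries can be seen in a one-dimensional analogue: when the forward model is shift-invariant its matrix is circulant, the FFT diagonalizes it, and storage plus regularized inversion collapse from O(n^2) to O(n log n). The kernel, data, and regularization below are synthetic, not the paper's optoacoustic model.

        import numpy as np

        n = 512
        kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
        kernel = np.roll(kernel, -n // 2)          # center the kernel at index 0

        x_true = np.zeros(n)
        x_true[100], x_true[300] = 1.0, 0.5        # two synthetic sources
        K = np.fft.fft(kernel)                     # the whole "matrix" is one vector
        b = np.real(np.fft.ifft(K * np.fft.fft(x_true)))   # forward model

        # Tikhonov-regularized inversion, done entirely in the Fourier domain.
        eps = 1e-3
        x_rec = np.real(np.fft.ifft(np.conj(K) * np.fft.fft(b) / (np.abs(K) ** 2 + eps)))
        print("strongest source recovered at index:", int(np.argmax(x_rec)))   # ~100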

  15. LINEAR ACCELERATOR

    DOEpatents

    Colgate, S.A.

    1958-05-27

    An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperates to provide a stable and focused particle beam.

  16. Nonlinear model-based control algorithm for a distillation column using software sensor.

    PubMed

    Jana, Amiya Kumar; Samanta, Amar Nath; Ganguly, Saibal

    2005-04-01

    This paper presents the design of a model-based globally linearizing control (GLC) structure for a distillation process within the differential geometric framework. The model of a nonideal binary distillation column, whose characteristics were highly nonlinear and strongly interactive, is used as the real process. The classical GLC law comprises a transformer (input-output linearizing state feedback), a nonlinear state observer, and an external PI controller. The tray temperature based short-cut observer (TTBSCO) has been used as a state estimator within the control structure, in which all tray temperatures were considered to be measured. Accordingly, the liquid-phase composition of each tray was calculated online using the derived temperature-composition correlation. In the simulation experiment, the proposed GLC coupled with TTBSCO (GLC-TTBSCO) outperformed a conventional PI controller in servo performance, with and without measurement noise, as well as in regulatory behavior. In the subsequent part, the GLC law has been synthesized in conjunction with a tray temperature based reduced-order observer (GLC-TTBROO), where the distillate and bottom compositions of the distillation process have been inferred from the top and bottom product temperatures, respectively, which were measured online. Finally, the comparative performance of the GLC-TTBSCO and the GLC-TTBROO has been addressed under parametric uncertainty, and the GLC-TTBSCO algorithm provided slightly better performance than the GLC-TTBROO. The resulting control laws are rather general and can be easily adopted for other binary distillation columns.
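
    The core GLC idea, stripped down to a toy scalar plant (not the distillation model): a state feedback cancels the nonlinearity so the input-output map becomes linear, and the external PI loop then drives the output to its setpoint. The plant and gains are assumptions for illustration.

        # Toy plant: xdot = -x^3 + u with y = x. The feedback u = x^3 + v
        # renders ydot = v exactly, so a PI controller on v suffices.
        dt, x, integ, setpoint = 0.01, 2.0, 0.0, 0.5
        kp, ki = 4.0, 2.0
        for _ in range(2000):                # 20 s of simulated time
            err = setpoint - x
            integ += err * dt
            v = kp * err + ki * integ        # external PI controller
            u = x**3 + v                     # input-output linearizing feedback
            x += dt * (-x**3 + u)            # Euler step of the plant
        print("y after 20 s:", x)            # ~0.5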

  17. Linear Clouds

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [Figure removed for brevity; see original site.] Context image for PIA03667: Linear Clouds.

    These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.

    Image information: VIS instrument. Latitude 80.1°S, Longitude 52.1°E. 17 meters/pixel resolution.

    Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  18. Linear Programming Problems for Generalized Uncertainty

    ERIC Educational Resources Information Center

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when there is more than one realization that can represent an information. This dissertation concerns merely discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…

  19. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  20. Reduced-order-model based feedback control of the Modified Hasegawa-Wakatani equations

    NASA Astrophysics Data System (ADS)

    Goumiri, Imene; Rowley, Clarence; Ma, Zhanhua; Gates, David; Parker, Jeffrey; Krommes, John

    2012-10-01

    In this study, we demonstrate the development of model-based feedback control for stabilization of an unstable equilibrium of the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, balanced truncation, a model reduction technique that has proved successful in flow control design problems, is applied to obtain a low-dimensional model of the linearized MHW equations. A model-based feedback controller is then designed for the reduced order model using a linear quadratic regulator (LQR) and then linear quadratic Gaussian (LQG) control. The controllers are then applied to the original linearized and nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.
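
    A minimal sketch of the LQR step on a reduced-order model, using SciPy's continuous algebraic Riccati equation solver; the two-state unstable system stands in for a balanced truncation of the linearized MHW dynamics and is invented for illustration.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr(A, B, Q, R):
            """LQR gain for xdot = Ax + Bu with cost integral of x'Qx + u'Ru."""
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)

        # Toy two-state model with one unstable mode (illustrative only).
        A = np.array([[0.2, 1.0], [0.0, -1.0]])
        B = np.array([[0.0], [1.0]])
        K = lqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))

        closed = A - B @ K
        print("closed-loop eigenvalues:", np.linalg.eigvals(closed))  # all Re < 0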

  1. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  2. Generalized Parabolas

    ERIC Educational Resources Information Center

    Joseph, Dan; Hartman, Gregory; Gibson, Caleb

    2011-01-01

    In this article we explore the consequences of modifying the common definition of a parabola by considering the locus of all points equidistant from a focus and (not necessarily linear) directrix. The resulting derived curves, which we call "generalized parabolas," are often quite beautiful and possess many interesting properties. We show that…

  3. Multimode model based defect characterization in composites

    NASA Astrophysics Data System (ADS)

    Roberts, R.; Holland, S.; Gregory, E.

    2016-02-01

    A newly-initiated research program for model-based defect characterization in CFRP composites is summarized. The work utilizes computational models of the interaction of NDE probing energy fields (ultrasound and thermography), to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of performance-critical defect properties from analysis of measured NDE signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing delamination and porosity. Forward predictions of measurement response are presented, as well as examples of model-based inversion of measured data for the estimation of defect parameters.

  4. Model-based internal wave processing

    SciTech Connect

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Väisälä frequency profile, etc.) developed from the solution of the associated boundary value problem, as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.

  5. Sandboxes for Model-Based Inquiry

    ERIC Educational Resources Information Center

    Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri

    2015-01-01

    In this article, we introduce a class of constructionist learning environments that we call "Emergent Systems Sandboxes" ("ESSs"), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual…

  6. Model-Based Inquiries in Chemistry

    ERIC Educational Resources Information Center

    Khan, Samia

    2007-01-01

    In this paper, instructional strategies for sustaining model-based inquiry in an undergraduate chemistry class were analyzed through data collected from classroom observations, a student survey, and in-depth problem-solving sessions with the instructor and students. Analysis of teacher-student interactions revealed a cyclical pattern in which…

  7. An Application of Explanatory Item Response Modeling for Model-Based Proficiency Scaling

    ERIC Educational Resources Information Center

    Hartig, Johannes; Frey, Andreas; Nold, Gunter; Klieme, Eckhard

    2012-01-01

    The article compares three different methods to estimate effects of task characteristics and to use these estimates for model-based proficiency scaling: prediction of item difficulties from the Rasch model, the linear logistic test model (LLTM), and an LLTM including random item effects (LLTM+e). The methods are applied to empirical data from a…

  8. Applying FSL to the FIAC data: model-based and model-free analysis of voice and sentence repetition priming.

    PubMed

    Beckmann, Christian F; Jenkinson, Mark; Woolrich, Mark W; Behrens, Timothy E J; Flitney, David E; Devlin, Joseph T; Smith, Stephen M

    2006-05-01

    This article presents results obtained from applying various tools from FSL (FMRIB Software Library) to data from the repetition priming experiment used for the HBM'05 Functional Image Analysis Contest. We present analyses from the model-based General Linear Model (GLM) tool (FEAT) and from the model-free independent component analysis tool (MELODIC). We also discuss the application of tools for the correction of image distortions prior to the statistical analysis and the utility of recent advances in functional magnetic resonance imaging (FMRI) time series modeling and inference such as the use of optimal constrained HRF basis function modeling and mixture modeling inference. The combination of hemodynamic response function (HRF) and mixture modeling, in particular, revealed that both sentence content and speaker voice priming effects occurred bilaterally along the length of the superior temporal sulcus (STS). These results suggest that both are processed in a single underlying system without any significant asymmetries for content vs. voice processing. PMID:16565953
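
    The model-based (GLM) stage of such an FMRI analysis can be sketched from scratch: convolve the stimulus timing with a canonical double-gamma HRF to form a regressor, then fit each voxel's timeseries by least squares. The HRF parameters, timing, and synthetic voxel below are illustrative, not FEAT's implementation.

        import numpy as np
        from scipy.stats import gamma

        tr, n_vols = 2.0, 150
        t = np.arange(0, 30, tr)
        hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # double-gamma HRF
        hrf /= hrf.sum()

        stim = np.zeros(n_vols)
        stim[10:15] = stim[60:65] = stim[110:115] = 1.0   # task blocks
        regressor = np.convolve(stim, hrf)[:n_vols]

        X = np.column_stack([regressor, np.ones(n_vols)]) # design matrix
        rng = np.random.default_rng(6)
        voxel = 2.0 * regressor + rng.normal(0, 0.5, n_vols)   # synthetic voxel
        beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
        print("activation estimate:", beta[0])            # ~2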

  9. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
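
    A compact sketch of the particle-filter machinery used in such model-based prognostics: propagate particles through an assumed degradation model, reweight by the measurement likelihood, resample on degeneracy, and extrapolate the damage estimate to a failure threshold. The models, noise levels, and threshold are all invented.

        import numpy as np

        rng = np.random.default_rng(4)

        def pf_step(particles, weights, z, f, h, q_std, r_std):
            """One predict/update/resample cycle of a bootstrap particle filter."""
            particles = f(particles) + rng.normal(0, q_std, particles.shape)
            weights = weights * np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2)
            weights = weights / weights.sum()
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):   # effective sample size check
                idx = rng.choice(len(particles), len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights

        f = lambda x: x + 0.01          # assumed degradation model per cycle
        h = lambda x: x                 # damage measured directly (assumed)
        particles = rng.normal(0.1, 0.02, 500)
        weights = np.full(500, 1 / 500)
        for k in range(30):
            z = 0.1 + 0.01 * (k + 1) + rng.normal(0, 0.01)   # synthetic data
            particles, weights = pf_step(particles, weights, z, f, h, 0.005, 0.01)

        damage = np.sum(particles * weights)
        print("damage:", damage, "-> cycles to failure threshold 1.0:", (1.0 - damage) / 0.01)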

  10. Connectotyping: model based fingerprinting of the functional connectome.

    PubMed

    Miranda-Dominguez, Oscar; Mills, Brian D; Carpenter, Samuel D; Grant, Kathleen A; Kroenke, Christopher D; Nigg, Joel T; Fair, Damien A

    2014-01-01

    A better characterization of how an individual's brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called "connectotype", or functional fingerprint in individual participants. The approach rests on a simple linear model that proposes the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model's ability to discriminate an individual is driven by unique connections in higher order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach.
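
    The linear model described here reduces to one least-squares fit per region: each region's timeseries is regressed on all the others, and the stacked coefficient rows form the personalized connectivity matrix. A minimal sketch on synthetic data (the helper name `connectotype` and the toy dimensions are ours, not from the paper):

```python
import numpy as np

def connectotype(ts):
    """ts: (T, R) array of region timeseries. Returns an (R, R) matrix B whose
    row i holds the weights predicting region i from all other regions
    (the diagonal stays zero: a region is excluded from its own model)."""
    T, R = ts.shape
    B = np.zeros((R, R))
    for i in range(R):
        others = np.delete(np.arange(R), i)
        coef, *_ = np.linalg.lstsq(ts[:, others], ts[:, i], rcond=None)
        B[i, others] = coef
    return B

rng = np.random.default_rng(1)
ts = rng.standard_normal((200, 10))   # toy data: 200 frames, 10 regions
B = connectotype(ts)
pred = ts @ B.T                       # model-predicted timeseries, one column per region
print("prediction r:", np.corrcoef(pred.ravel(), ts.ravel())[0, 1])
```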

  11. A probabilistic graphical model based stochastic input model construction

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2014-09-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models.

    Highlights:
    • Data-driven stochastic input models without the assumption of independence of the reduced random variables.
    • The problem is transformed to a Bayesian network structure learning problem.
    • Examples are given in flows in random media.

  12. Applying generalized linear models as an explanatory tool of sex steroids, thyroid hormones and their relationships with environmental and physiologic factors in immature East Pacific green sea turtles (Chelonia mydas).

    PubMed

    Labrada-Martagón, Vanessa; Méndez-Rodríguez, Lia C; Mangel, Marc; Zenteno-Savín, Tania

    2013-09-01

    Generalized linear models were fitted to evaluate the relationship between 17β-estradiol (E2), testosterone (T) and thyroxine (T4) levels in immature East Pacific green sea turtles (Chelonia mydas) and their body condition, size, mass, blood biochemistry parameters, handling time, year, season and site of capture. According to external (tail size) and morphological (<77.3 cm straight carapace length) characteristics, 95% of the individuals were juveniles. Hormone levels, assessed on sea turtles subjected to a capture stress protocol, were <34.7 nmol T L(-1), <532.3 pmol E2 L(-1) and <43.8 nmol T4 L(-1). The statistical model explained biologically plausible metabolic relationships between hormone concentrations and blood biochemistry parameters (e.g. glucose, cholesterol) and the potential effect of environmental variables (season and study site). The variables handling time and year did not contribute significantly to explaining hormone levels. Differences in sex steroids between seasons and study sites found by the models coincided with specific nutritional, physiological and body condition differences related to the specific habitat conditions. The models correctly predicted the median levels of the measured hormones in green sea turtles, which confirms the fitted model's utility. It is suggested that quantitative predictions could be possible when the model is tested with additional data.
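
    As a rough illustration of fitting such a model, the sketch below regresses a synthetic, positive hormone level on stand-in covariates using a Gamma GLM with a log link via statsmodels. All variable names and data here are invented; the paper's actual model families and predictors may well differ:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({
    "glucose": rng.normal(100, 15, n),            # blood biochemistry stand-in
    "body_condition": rng.normal(1.2, 0.1, n),
    "season": rng.choice(["warm", "cold"], n),
})
df["T4"] = np.exp(0.01 * df.glucose + 0.5 * (df.season == "warm")) \
           + rng.gamma(2, 0.5, n)                 # positive, right-skewed response

# Gamma family with log link: a common choice for positive, skewed hormone data
model = smf.glm("T4 ~ glucose + body_condition + C(season)", data=df,
                family=sm.families.Gamma(sm.families.links.Log()))
print(model.fit().summary())
```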

  13. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

    Currently: Ops Rev developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. Implementation de-couples interfaces and instances of interaction Future: A Mission MOSE implements the approach and uses the model based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.

  14. Reduced-order model based feedback control of the modified Hasegawa-Wakatani model

    SciTech Connect

    Goumiri, I. R.; Rowley, C. W.; Ma, Z.; Gates, D. A.; Krommes, J. A.; Parker, J. B.

    2013-04-15

    In this work, model-based feedback control that stabilizes an unstable equilibrium is developed for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low-dimensional model of the linearized MHW equations. Then, a model-based feedback controller is designed for the reduced-order model using linear quadratic regulators. Finally, a linear quadratic Gaussian controller, which is more resistant to disturbances, is deduced. The controller is applied to the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.
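
    The pipeline the abstract outlines, reduce a linear model by balanced truncation and then design an LQR controller on the reduced model, can be sketched with the python-control package (assumed available, together with its slycot dependency); the random stable system below merely stands in for the linearized MHW dynamics:

```python
import numpy as np
import control  # python-control, with slycot, assumed installed

rng = np.random.default_rng(3)

# Stand-in stable linear system playing the role of the linearized dynamics
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
sys = control.ss(A, B, C, 0)

# Balanced truncation down to a 6-state model
rsys = control.balred(sys, 6, method="truncate")

# LQR design on the reduced model
Q = rsys.C.T @ rsys.C
R = np.eye(1)
K, S, E = control.lqr(rsys.A, rsys.B, Q, R)
stable = np.all(np.linalg.eigvals(rsys.A - rsys.B @ K).real < 0)
print("reduced order:", rsys.nstates, "closed loop stable:", stable)
```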

  15. Reduced-order model based feedback control of the modified Hasegawa-Wakatani model

    NASA Astrophysics Data System (ADS)

    Goumiri, I. R.; Rowley, C. W.; Ma, Z.; Gates, D. A.; Krommes, J. A.; Parker, J. B.

    2013-04-01

    In this work, model-based feedback control that stabilizes an unstable equilibrium is developed for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low-dimensional model of the linearized MHW equations. Then, a model-based feedback controller is designed for the reduced-order model using linear quadratic regulators. Finally, a linear quadratic Gaussian controller, which is more resistant to disturbances, is deduced. The controller is applied to the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.

  16. Sparse linear programming subprogram

    SciTech Connect

    Hanson, R.J.; Hiebert, K.L.

    1981-12-01

    This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
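
    A comparable problem statement, a sparse constraint matrix, upper/lower/absent bounds on the variables, and both primal and dual solutions returned, can be expressed today with SciPy's `linprog` (HiGHS backend); a minimal sketch with made-up data:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

# minimize c @ x  subject to  A_ub @ x <= b_ub  and per-variable bounds
c = np.array([-1.0, -2.0, 0.5])
A_ub = csr_matrix([[1.0, 1.0, 0.0],
                   [0.0, 1.0, 2.0]])      # sparse constraint matrix
b_ub = np.array([4.0, 6.0])
bounds = [(0, 3), (0, None), (None, 2)]   # upper, lower, or no bound per variable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("primal solution:", res.x)
print("dual values of the inequalities:", res.ineqlin.marginals)
```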

  17. Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation.

    PubMed

    Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua

    2015-09-01

    Cable-driven exoskeletons use active cables to actuate the system and are worn on subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.

  18. Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation

    NASA Astrophysics Data System (ADS)

    Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua

    2015-09-01

    Cable-driven exoskeletons use active cables to actuate the system and are worn on subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.

  19. Inference regarding multiple structural changes in linear models with endogenous regressors☆

    PubMed Central

    Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
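
    The 2SLS estimator at the heart of this methodology is compact to state in code. The sketch below (synthetic data, no break-point search) illustrates why 2SLS recovers the structural coefficient where OLS is biased by the endogenous regressor:

```python
import numpy as np

def tsls(y, X, Z):
    """Two Stage Least Squares with instruments Z and endogenous regressors X."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)            # projection onto instrument space
    X_hat = Pz @ X                                    # first stage: fitted regressors
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)  # second stage

rng = np.random.default_rng(4)
n = 500
Z = rng.standard_normal((n, 2))                       # instruments
u = rng.standard_normal(n)                            # structural error
x = Z @ np.array([1.0, -0.5]) + 0.8 * u + rng.standard_normal(n)  # endogenous
y = 2.0 * x + u                                       # true coefficient: 2.0
X = x[:, None]
print("OLS :", np.linalg.lstsq(X, y, rcond=None)[0])  # biased away from 2.0
print("2SLS:", tsls(y, X, Z))                         # close to 2.0
```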

  20. Identification of Relevant Phytochemical Constituents for Characterization and Authentication of Tomatoes by General Linear Model Linked to Automatic Interaction Detection (GLM-AID) and Artificial Neural Network Models (ANNs)

    PubMed Central

    Hernández Suárez, Marcos; Astray Dopazo, Gonzalo; Larios López, Dina; Espinosa, Francisco

    2015-01-01

    There are a large number of tomato cultivars with a wide range of morphological, chemical, nutritional and sensorial characteristics. Many factors are known to affect the nutrient content of tomato cultivars. A complete understanding of the effect of these factors would require an exhaustive experimental design, a multidisciplinary scientific approach and a suitable statistical method. Some multivariate analytical techniques such as Principal Component Analysis (PCA) or Factor Analysis (FA) have been widely applied in order to search for patterns in the behaviour and to reduce the dimensionality of a data set by a new set of uncorrelated latent variables. However, in some cases it is not useful to replace the original variables with these latent variables. In this study, the Automatic Interaction Detection (AID) algorithm and Artificial Neural Network (ANN) models were applied as alternatives to PCA, FA and other multivariate analytical techniques in order to identify the relevant phytochemical constituents for characterization and authentication of tomatoes. To prove the feasibility of the AID algorithm and ANN models to achieve the purpose of this study, both methods were applied to a data set with twenty-five chemical parameters analysed on 167 tomato samples from Tenerife (Spain). Each tomato sample was defined by three factors: cultivar, agricultural practice and harvest date. The General Linear Model linked to AID (GLM-AID) tree structure was organized into 3 levels according to the number of factors. p-Coumaric acid was the compound that allowed the tomato samples to be distinguished according to the day of harvest. More than one chemical parameter was necessary to distinguish among different agricultural practices and among the tomato cultivars. Several ANN models, with 25 and 10 input variables, for the prediction of cultivar, agricultural practice and harvest date, were developed. Finally, the models with 10 input variables were chosen, with goodness of fit between 44 and

  1. Linear analysis of incompressible Rayleigh-Taylor instability in solids.

    PubMed

    Piriz, A R; Cela, J J López; Tahir, N A

    2009-10-01

    The study of the linear stage of the incompressible Rayleigh-Taylor instability in elastic-plastic solids is performed by considering thick plates under a constant acceleration that is also uniform except for a small sinusoidal ripple in the horizontal plane. The analysis is carried out by using an analytical model based on Newton's second law and it is complemented with extensive two-dimensional numerical simulations. The conditions for marginal stability that determine the instability threshold are derived. In addition, the boundary for the transition from the elastic to the plastic regime is obtained and it is demonstrated that such a transition is not a sufficient condition for instability. The model yields complete analytical solutions for the perturbation amplitude evolution and reveals the main physical process that governs the instability. The theory is in general agreement with the numerical simulations and provides useful quantitative results. Implications for high-energy-density-physics experiments are also discussed.

  2. Linear analysis of incompressible Rayleigh-Taylor instability in solids

    SciTech Connect

    Piriz, A. R.; Lopez Cela, J. J.; Tahir, N. A.

    2009-10-15

    The study of the linear stage of the incompressible Rayleigh-Taylor instability in elastic-plastic solids is performed by considering thick plates under a constant acceleration that is also uniform except for a small sinusoidal ripple in the horizontal plane. The analysis is carried out by using an analytical model based on Newton's second law and it is complemented with extensive two-dimensional numerical simulations. The conditions for marginal stability that determine the instability threshold are derived. In addition, the boundary for the transition from the elastic to the plastic regime is obtained and it is demonstrated that such a transition is not a sufficient condition for instability. The model yields complete analytical solutions for the perturbation amplitude evolution and reveals the main physical process that governs the instability. The theory is in general agreement with the numerical simulations and provides useful quantitative results. Implications for high-energy-density-physics experiments are also discussed.

  3. Model-based neuroimaging for cognitive computing.

    PubMed

    Poznanski, Roman R

    2009-09-01

    The continuity of the mind is suggested to mean the continuous spatiotemporal dynamics arising from the electrochemical signature of the neocortex: (i) globally through volume transmission in the gray matter as fields of neural activity, and (ii) locally through extrasynaptic signaling between fine distal dendrites of cortical neurons. If the continuity of dynamical systems across spatiotemporal scales defines a stream of consciousness, then intentional metarepresentations as templates of dynamic continuity allow qualia to be semantically mapped during neuroimaging of specific cognitive tasks. When interfaced with a computer, such model-based neuroimaging, requiring new mathematics of the brain, will begin to decipher higher cognitive operations not possible with existing brain-machine interfaces.

  4. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.

  5. Model-based Tomographic Reconstruction Literature Search

    SciTech Connect

    Chambers, D H; Lehman, S K

    2005-11-30

    In the process of preparing a proposal for internal research funding, a literature search was conducted on the subject of model-based tomographic reconstruction (MBTR). The purpose of the search was to ensure that the proposed research would not replicate any previous work. We found that the overwhelming majority of work on MBTR which used parameterized models of the object was theoretical in nature. Only three researchers had applied the technique to actual data. In this note, we summarize the findings of the literature search.

  6. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.

  7. Model-based damage evaluation of layered CFRP structures

    NASA Astrophysics Data System (ADS)

    Munoz, Rafael; Bochud, Nicolas; Rus, Guillermo; Peralta, Laura; Melchor, Juan; Chiachío, Juan; Chiachío, Manuel; Bond, Leonard J.

    2015-03-01

    An ultrasonic evaluation technique for damage identification of layered CFRP structures is presented. This approach relies on a model-based estimation procedure that combines experimental data and simulation of ultrasonic damage-propagation interactions. The CFRP structure, a [0/90]4s lay-up, has been tested in an immersion through-transmission experiment, where a scan has been performed on a damaged specimen. Most ultrasonic techniques in industrial practice consider only a few features of the received signals, namely, time of flight, amplitude, attenuation, frequency content, and so forth. In this case, once signals are captured, an algorithm is used to reconstruct the complete signal waveform and extract the unknown damage parameters by means of modeling procedures. A linear version of the data processing has been performed, where only the Young modulus has been monitored and, in a second nonlinear version, the first-order nonlinear coefficient β was incorporated to test the possibility of detecting early damage. The aforementioned physical simulation models are solved by the Transfer Matrix formalism, which has been extended from the linear to the nonlinear harmonic generation technique. The damage parameter search strategy is based on minimizing the mismatch between the captured and simulated signals in the time domain in an automated way using Genetic Algorithms. Processing all scanned locations, a C-scan of the parameter of each layer can be reconstructed, obtaining the information describing the state of each layer and each interface. Damage can be located and quantified in terms of changes in the selected parameter with a measurable extension. In the case of the first-order nonlinear coefficient, evidence of higher sensitivity to damage than imaging the linearly estimated Young modulus is provided.

  8. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.

  9. Sandboxes for Model-Based Inquiry

    NASA Astrophysics Data System (ADS)

    Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri

    2015-04-01

    In this article, we introduce a class of constructionist learning environments that we call Emergent Systems Sandboxes (ESSs), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual construction environment that support students in creating, exploring, and sharing computational models of dynamic systems that exhibit emergent phenomena. They provide learners with "entity"-level construction primitives that reflect an underlying scientific model. These primitives can be directly "painted" into a sandbox space, where they can then be combined, arranged, and manipulated to construct complex systems and explore the emergent properties of those systems. We argue that ESSs offer a means of addressing some of the key barriers to adopting rich, constructionist model-based inquiry approaches in science classrooms at scale. Situating the ESS in a large-scale science modeling curriculum we are implementing across the USA, we describe how the unique "entity-level" primitive design of an ESS facilitates knowledge system refinement at both an individual and social level, we describe how it supports flexible modeling practices by providing both continuous and discrete modes of executability, and we illustrate how it offers students a variety of opportunities for validating their qualitative understandings of emergent systems as they develop.

  10. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  11. Generalized smooth models

    SciTech Connect

    Glosup, J.

    1992-07-23

    The class of generalized linear models is extended to develop a class of nonparametric regression models known as generalized smooth models. The technique of local scoring is used to estimate a generalized smooth model, and the estimation procedure based on locally weighted regression is shown to produce local likelihood estimates. The asymptotically correct distribution of the deviance difference is derived and its use in comparing the fits of generalized linear models and generalized smooth models is illustrated. The relationship between generalized smooth models and generalized additive models is also discussed.

  12. LRGS: Linear Regression by Gibbs Sampling

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2016-02-01

    LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
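
    LRGS itself is more general (multiple response variables, measurement errors, a Dirichlet-process covariate prior), but the core idea, Gibbs sampling for linear regression by alternately drawing coefficients and noise variance from their conditional posteriors, fits in a short sketch (flat priors and synthetic data assumed):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: y = 0.5 + 1.5*x + noise
n = 200
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([0.5, 1.5]) + rng.normal(0, 0.3, n)

XtX, Xty = X.T @ X, X.T @ y
sigma2, samples = 1.0, []
for it in range(2000):
    # beta | sigma2, y  ~  N((X'X)^-1 X'y, sigma2 (X'X)^-1)
    beta = rng.multivariate_normal(np.linalg.solve(XtX, Xty),
                                   sigma2 * np.linalg.inv(XtX))
    # sigma2 | beta, y  ~  Inverse-Gamma(n/2, ||y - X beta||^2 / 2)
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
    samples.append([*beta, sigma2])

post = np.array(samples[500:])   # drop burn-in
print("posterior means [intercept, slope, sigma2]:", post.mean(axis=0))
```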

  13. Internal wave signal processing: A model-based approach

    SciTech Connect

    Candy, J.V.; Chambers, D.H.

    1995-02-22

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile etc.) developed from the solution of the associated boundary value problem as well as the horizontal velocity components. These models are then generalized to the stochastic case where an approximate Gauss-Markov theory applies. The resulting Gauss-Markov representation, in principle, allows the inclusion of stochastic phenomena such as noise and modeling errors in a consistent manner. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves. In particular, a processor is designed that allows in situ recursive estimation of the required velocity functions. Finally, it is shown that the associated residual or so-called innovation sequence that ensues from the recursive nature of this formulation can be employed to monitor the model's fit to the data.
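
    The recursive estimation and innovation-monitoring idea generalizes well beyond ocean acoustics and can be shown on a scalar Gauss-Markov example: when the model fits, the Kalman filter's normalized innovations are zero-mean, unit-variance white noise, so departures flag model mismatch. A minimal sketch with toy parameters:

```python
import numpy as np

rng = np.random.default_rng(6)

# Scalar Gauss-Markov state with noisy measurements
F, H, Q, R = 0.95, 1.0, 0.05, 0.5
x_true, x_est, P = 0.0, 0.0, 1.0
innovations = []

for k in range(300):
    x_true = F * x_true + rng.normal(0, np.sqrt(Q))   # simulate truth
    z = H * x_true + rng.normal(0, np.sqrt(R))        # measurement
    x_pred = F * x_est                                # predict
    P_pred = F * P * F + Q
    nu = z - H * x_pred                               # innovation
    S = H * P_pred * H + R                            # its predicted variance
    innovations.append(nu / np.sqrt(S))
    K = P_pred * H / S                                # update
    x_est = x_pred + K * nu
    P = (1 - K * H) * P_pred

innov = np.array(innovations)
print("normalized innovation mean %.3f, variance %.3f" % (innov.mean(), innov.var()))
```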

  14. Model-based image processing using snakes and mutual information

    NASA Astrophysics Data System (ADS)

    von Klinski, Sebastian; Derz, Claus; Weese, David; Tolxdorff, Thomas

    2000-06-01

    Any segmentation approach assumes certain knowledge concerning data modalities, relevant organs and their imaging characteristics. These assumptions are necessary for developing criteria by which to separate the organ in question from the surrounding tissue. Typical assumptions are that the organs have homogeneous gray-value characteristics (region growing, region merging, etc.), specific gray-value patterns (classification methods), continuous edges (edge-based approaches), smooth and strong edges (snake approaches), or any combination of these. In most cases, such assumptions are invalid, at least locally. Consequently, these approaches prove to be time-consuming either in their parameterization or execution. Further, the low result quality makes post-processing necessary. Our aim was to develop a segmentation approach for large 3D data sets (e.g., CT and MRI) that requires a short interaction time and that can easily be adapted to different organs and data materials. This has been achieved by exploiting available knowledge about data material and organ topology using anatomical models that have been constructed from previously segmented data sets. In the first step, the user manually specifies the general context of the data material and specifies anatomical landmarks. Then this information is used to automatically select a corresponding reference model, which is geometrically adjusted to the current data set. In the third step, a model-based snake approach is applied to determine the correct segmentation of the organ in question. Analogously, this approach can be used for model-based interpolation and registration.

  15. MODEL-BASED CLUSTERING OF LARGE NETWORKS

    PubMed Central

    Vu, Duy Q.; Hunter, David R.; Schweinberger, Michael

    2015-01-01

    We describe a network clustering framework, based on finite mixture models, that can be applied to discrete-valued networks with hundreds of thousands of nodes and billions of edge variables. Relative to other recent model-based clustering work for networks, we introduce a more flexible modeling framework, improve the variational-approximation estimation algorithm, discuss and implement standard error estimation via a parametric bootstrap approach, and apply these methods to much larger data sets than those seen elsewhere in the literature. The more flexible framework is achieved through introducing novel parameterizations of the model, giving varying degrees of parsimony, using exponential family models whose structure may be exploited in various theoretical and algorithmic ways. The algorithms are based on variational generalized EM algorithms, where the E-steps are augmented by a minorization-maximization (MM) idea. The bootstrapped standard error estimates are based on an efficient Monte Carlo network simulation idea. Last, we demonstrate the usefulness of the model-based clustering framework by applying it to a discrete-valued network with more than 131,000 nodes and 17 billion edge variables. PMID:26605002

  16. Model-based patterns in prostate cancer mortality worldwide

    PubMed Central

    Fontes, F; Severo, M; Castro, C; Lourenço, S; Gomes, S; Botelho, F; La Vecchia, C; Lunet, N

    2013-01-01

    Background: Prostate cancer mortality has been decreasing in several high-income countries and previous studies analysed the trends mostly according to geographical criteria. We aimed to identify patterns in the time trends of prostate cancer mortality across countries using a model-based approach. Methods: Model-based clustering was used to identify patterns of variation in prostate cancer mortality (1980–2010) across 37 European, five non-European high-income countries and four leading emerging economies. We characterised the patterns observed regarding the geographical distribution and gross national income of the countries, as well as the trends observed in mortality/incidence ratios. Results: We identified three clusters of countries with similar variation in prostate cancer mortality: pattern 1 ('no mortality decline'), characterised by a continued increase throughout the whole period; patterns 2 ('later mortality decline') and 3 ('earlier mortality decline') depict mortality declines, starting in the late and early 1990s, respectively. These clusters are also homogeneous regarding the variation in the prostate cancer mortality/incidence ratios, while they are heterogeneous with reference to the geographical region of the countries and the distribution of the gross national income. Conclusion: We provide a general model for the description and interpretation of the trends in prostate cancer mortality worldwide, based on three main patterns. PMID:23660943

  17. Neural mass model-based tracking of anesthetic brain states.

    PubMed

    Kuhlmann, Levin; Freestone, Dean R; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2016-06-01

    Neural mass model-based tracking of brain states from electroencephalographic signals holds the promise of simultaneously tracking brain states while inferring underlying physiological changes in various neuroscientific and clinical applications. Here, neural mass model-based tracking of brain states using the unscented Kalman filter applied to estimate parameters of the Jansen-Rit cortical population model is evaluated through the application of propofol-based anesthetic state monitoring. In particular, 15 subjects underwent propofol anesthesia induction from awake to anesthetised while behavioral responsiveness was monitored and frontal electroencephalographic signals were recorded. The unscented Kalman filter Jansen-Rit model approach applied to frontal electroencephalography achieved reasonable testing performance for classification of the anesthetic brain state (sensitivity: 0.51; chance sensitivity: 0.17; nearest neighbor sensitivity: 0.75) when compared to approaches based on linear (autoregressive moving average) modeling (sensitivity: 0.58; nearest neighbor sensitivity: 0.91) and a high-performing standard depth of anesthesia monitoring measure, Higuchi Fractal Dimension (sensitivity: 0.50; nearest neighbor sensitivity: 0.88). Moreover, it was found that the unscented Kalman filter based parameter estimates of the inhibitory postsynaptic potential amplitude varied in the physiologically expected direction with increases in propofol concentration, while the estimates of the inhibitory postsynaptic potential rate constant did not. These results, combined with analysis of monotonicity of parameter estimates, error analysis of parameter estimates, and observability analysis of the Jansen-Rit model, along with considerations of extensions of the Jansen-Rit model, suggest that the Jansen-Rit model combined with unscented Kalman filtering provides a valuable reference point for future real-time brain state tracking studies. This is especially true for studies of

  18. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  19. OREGANO_VE: a new parallelised 3D solver for the general (non-)linear Maxwell visco-elastic problem: validation and application to the calculation of surface deformation in the earthquake cycle

    NASA Astrophysics Data System (ADS)

    Yamasaki, Tadashi; Houseman, Gregory; Hamling, Ian; Postek, Elek

    2010-05-01

    We have developed a new parallelized 3-D numerical code, OREGANO_VE, for the solution of the general visco-elastic problem in a rectangular block domain. The mechanical equilibrium equation is solved using the finite element method for a (non-)linear Maxwell visco-elastic rheology. Time-dependent displacement and/or traction boundary conditions can be applied. Matrix assembly is based on a tetrahedral element defined by 4 vertex nodes and 6 nodes located at the midpoints of the edges, within which displacement is described by a quadratic interpolation function. For evaluating viscoelastic relaxation, an explicit time-stepping algorithm (Zienkiewicz and Cormeau, Int. J. Num. Meth. Eng., 8, 821-845, 1974) is employed. We verify the implementation of OREGANO_VE by comparing numerical and analytic (or semi-analytic half-space) solutions to different problems in a range of applications: (1) equilibration of stress in a constant-density layer after gravity is switched on at t = 0 tests the implementation of spatially variable viscosity and non-Newtonian viscosity; (2) displacement of the welded interface between two blocks of differing viscosity tests the implementation of viscosity discontinuities; (3) displacement of the upper surface of a layer under applied normal load tests the implementation of time-dependent surface tractions; and (4) visco-elastic response to dyke intrusion (compared with the solution in a half-space) tests the implementation of all aspects. In each case, the accuracy of the code is validated subject to use of a sufficiently small time step, providing assurance that the OREGANO_VE code can be applied to a range of visco-elastic relaxation processes in three dimensions, including post-seismic deformation and post-glacial uplift. The OREGANO_VE code includes a capability for representation of prescribed fault slip on an internal fault. The surface displacement associated with large earthquakes can be detected by some geodetic observations

  20. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two improved new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail the amounts of computation that grow exponentially with the number of components of the system.

  1. On the Performance of Stochastic Model-Based Image Segmentation

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Sewchand, Wilfred

    1989-11-01

    A new stochastic model-based image segmentation technique for X-ray CT images has been developed and has been extended to the more general nondiffraction CT images, which include MRI, SPECT, and certain types of ultrasound images [1,2]. The nondiffraction CT image is modeled by a Finite Normal Mixture. The technique utilizes an information-theoretic criterion to detect the number of region images, uses the Expectation-Maximization algorithm to estimate the parameters of the image, and uses the Bayesian classifier to segment the observed image. How does this technique over/under-estimate the number of region images? What is the probability of errors in the segmentation produced by this technique? This paper addresses these two problems and is a continuation of [1,2].
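
    The pipeline described, pick the number of regions with an information-theoretic criterion, estimate the finite normal mixture by EM, then segment with a Bayes classifier, maps directly onto scikit-learn's `GaussianMixture`. A toy sketch on synthetic intensities (BIC stands in here for whichever criterion the paper uses):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Synthetic image intensities drawn from three "tissue" classes
pix = np.concatenate([rng.normal(40, 5, 3000),
                      rng.normal(90, 8, 4000),
                      rng.normal(150, 10, 3000)])[:, None]

# Model selection: fit mixtures with 1..6 components, keep the lowest BIC
models = {k: GaussianMixture(k, random_state=0).fit(pix) for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(pix))

# EM estimated the mixture; the Bayes classifier is the posterior argmax
labels = models[best_k].predict(pix)
print("selected number of regions:", best_k)
print("estimated class means:", np.sort(models[best_k].means_.ravel()))
```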

  2. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    SciTech Connect

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to use, either to transport information more easily throughout the corporation or as a conduit for

  3. Comparison between a model-based and a conventional pyramid sensor reconstructor.

    PubMed

    Korkiakoski, Visa; Vérinaud, Christophe; Le Louarn, Miska; Conan, Rodolphe

    2007-08-20

    A model of a non-modulated pyramid wavefront sensor (P-WFS) based on Fourier optics has been presented. Linearizations of the model represented as Jacobian matrices are used to improve the P-WFS phase estimates. It has been shown in simulations that a linear approximation of the P-WFS is sufficient in closed-loop adaptive optics. Also a method to compute model-based synthetic P-WFS command matrices is shown, and its performance is compared to the conventional calibration. It was observed that in poor visibility the new calibration is better than the conventional.

  4. Comparison between a model-based and a conventional pyramid sensor reconstructor.

    PubMed

    Korkiakoski, Visa; Vérinaud, Christophe; Le Louarn, Miska; Conan, Rodolphe

    2007-08-20

    A model of a non-modulated pyramid wavefront sensor (P-WFS) based on Fourier optics has been presented. Linearizations of the model represented as Jacobian matrices are used to improve the P-WFS phase estimates. It has been shown in simulations that a linear approximation of the P-WFS is sufficient in closed-loop adaptive optics. Also a method to compute model-based synthetic P-WFS command matrices is shown, and its performance is compared to the conventional calibration. It was observed that in poor visibility the new calibration is better than the conventional. PMID:17712383

  5. Using rule-based shot dose assignment in model-based MPC applications

    NASA Astrophysics Data System (ADS)

    Bork, Ingo; Buck, Peter; Wang, Lin; Müller, Uwe

    2014-10-01

    Shrinking feature sizes and the need for tighter CD (Critical Dimension) control require the introduction of new technologies in mask making processes. One of those methods is the dose assignment of individual shots on VSB (Variable Shaped Beam) mask writers to compensate for CD non-linearity effects and improve dose edge slope. Using increased dose levels only for the most critical features, generally only for the smallest CDs on a mask, the change in mask write time is minimal while the increase in image quality can be significant. This paper describes a method combining rule-based shot dose assignment with model-based shot size correction. This combination proves to be very efficient in correcting mask linearity errors while also improving the dose edge slope of small features. Shot dose assignment is based on tables assigning certain dose levels to a range of feature sizes. The dose-to-feature-size assignment is derived from mask measurements in such a way that shape corrections are kept to a minimum. For example, if a 50 nm drawn line on mask results in a 45 nm chrome line using nominal dose, a dose level is chosen which is closest to getting the line back on target. Since CD non-linearity is different for lines, line-ends and contacts, different tables are generated for the different shape categories. The actual dose assignment is done via DRC rules in a pre-processing step before executing the shape correction in the MPC engine. Dose assignment to line ends can be restricted to critical line/space dimensions since it might not be required for all line ends. In addition, adding dose assignment to a wide range of line ends might increase shot count, which is undesirable. The dose assignment algorithm is very flexible and can be adjusted based on the type of layer and the best balance between accuracy and shot count. These methods can be optimized for the number of dose levels available for specific mask writers. The MPC engine now needs to be able to handle different dose
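
    The rule-based part of this flow amounts to a table lookup from (shape class, feature size) to a dose level. A schematic sketch with invented ranges and dose factors; real tables would be derived from mask measurements as described above:

```python
# Illustrative dose tables (all numbers invented): feature-size ranges in nm
# mapped to relative dose factors, one table per shape category.
DOSE_TABLES = {
    "line":     [(0, 60, 1.15), (60, 80, 1.08), (80, 120, 1.03)],
    "line_end": [(0, 70, 1.20), (70, 100, 1.10)],
    "contact":  [(0, 90, 1.12), (90, 130, 1.05)],
}

def assign_dose(shape_class, cd_nm, default=1.0):
    """Return the relative dose for a shot given its shape class and CD."""
    for lo, hi, dose in DOSE_TABLES.get(shape_class, []):
        if lo <= cd_nm < hi:
            return dose
    return default  # features outside all ranges print on target at nominal dose

print(assign_dose("line", 50))      # 1.15: the smallest lines get the largest boost
print(assign_dose("contact", 200))  # 1.0: large contact, nominal dose
```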

  6. A Kp forecast model based on neural network

    NASA Astrophysics Data System (ADS)

    Gong, J.; Liu, Y.; Luo, B.; Liu, S.

    2013-12-01

    As an important global geomagnetic disturbance index, Kp is difficult to predict, especially when Kp reaches 5, which means that the disturbance has reached the scale of a geomagnetic storm and can cause spacecraft and power system anomalies. Statistical results show that there is a high correlation between solar wind-magnetosphere coupling functions and the Kp index, and a linear combination of two solar wind-magnetosphere coupling terms, a merging term and a viscous term, proved to be good at predicting the Kp index. In this study, using the upstream solar wind parameters measured by the ACE satellite since 1998 and the two derived coupling terms mentioned above, a Kp forecast model based on an artificial neural network is developed. For the operational need of predicting the geomagnetic disturbance as soon as possible, we construct the solar wind data and develop the model in an innovative way. For each Kp value at time t (the universal times of the 8 Kp values in each day are noted as t=3, 6, 9, ..., 18, 21, 24), the model gives 6 predicted values every half an hour at t-3.5, t-3.0, t-2.5, t-2.0, t-1.5, t-1.0, based on the half-hour averaged model inputs (solar wind parameters and derived solar wind-magnetosphere coupling terms). The last predicted value at t-1.0 provides the final prediction. Evaluated with the test set data including years 1998, 2002 and 2006, the model yields a linear correlation coefficient (LC) of 0.88 and a root mean square error (RMSE) of 0.65 between the modeled and observed Kp values. Furthermore, if the nowcast Kp is available and included in the model input, the model can be improved and gives an LC of 0.90 and an RMSE of 0.62.
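
    A scaled-down analogue of such a model, a small feed-forward network regressing a Kp-like index on solar wind inputs plus coupling terms, evaluated by LC and RMSE as in the paper, can be sketched with scikit-learn. All data below are synthetic stand-ins, not ACE measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)

# Stand-in inputs: e.g. speed, density, Bz, merging term, viscous term
n = 5000
X = rng.standard_normal((n, 5))
# Synthetic target loosely mimicking a Kp-like index on the 0-9 scale
kp = np.clip(4 + 1.5 * X[:, 3] + 0.8 * X[:, 4]
             + 0.3 * rng.standard_normal(n), 0, 9)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X[:4000], kp[:4000])
pred = model.predict(X[4000:])
lc = np.corrcoef(pred, kp[4000:])[0, 1]
rmse = np.sqrt(np.mean((pred - kp[4000:]) ** 2))
print(f"LC = {lc:.2f}, RMSE = {rmse:.2f}")
```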

  7. Non Linear Conjugate Gradient

    2006-11-17

    Software that simulates and inverts electromagnetic field data for subsurface electrical properties (electrical conductivity) of geological media. The software treats data produced by a time-harmonic source field excitation arising from the following antenna geometries: loops and grounded bipoles, as well as point electric and magnetic dipoles. The inversion process is carried out using a non-linear conjugate gradient optimization scheme, which minimizes the misfit between field data and model data using a least-squares criterion. The software is an upgrade of the code NLCGCS_MP ver. 1.0. The upgrade includes the following components: incorporation of new 1-D field sourcing routines to more accurately simulate the 3D electromagnetic field for arbitrary geological media, and treatment of generalized finite-length transmitting antenna geometry (antennas with vertical and horizontal component directions). In addition, the software has been upgraded to treat transverse anisotropy in electrical conductivity.
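
    The inversion loop, minimizing a least-squares misfit between observed and modeled data with a nonlinear conjugate gradient scheme, can be illustrated with SciPy's CG minimizer; here a random linear operator stands in for the electromagnetic forward simulation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)

# Stand-in linear forward model d = G m; the real code simulates EM fields
G = rng.standard_normal((40, 10))
m_true = rng.standard_normal(10)
d_obs = G @ m_true + rng.normal(0, 0.01, 40)

def misfit(m):
    """Least-squares data misfit, the objective minimized by the NLCG scheme."""
    r = G @ m - d_obs
    return 0.5 * r @ r

def misfit_grad(m):
    return G.T @ (G @ m - d_obs)

res = minimize(misfit, np.zeros(10), jac=misfit_grad, method="CG")
print("converged:", res.success, " final misfit:", res.fun)
```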

  8. The Impact of Acquiescence on Forced-Choice Responses: A Model-Based Analysis

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Anguiano-Carrasco, Cristina; Chico, Eliseo

    2011-01-01

    The general aim of the present study is to assess the potential usefulness of the normative Forced Choice (FC) format for reducing the impact of acquiescent responding (AR). To this end it makes two types of contributions: methodological and substantive. Methodologically, it proposes a model-based procedure, derived from a basic response…

  9. Language acquisition is model-based rather than model-free.

    PubMed

    Wang, Felix Hao; Mintz, Toben H

    2016-01-01

    Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.

  10. Model-based estimation of knee stiffness.

    PubMed

    Pfeifer, Serge; Vallery, Heike; Hardegger, Michael; Riener, Robert; Perreault, Eric J

    2012-09-01

    During natural locomotion, the stiffness of the human knee is modulated continuously and subconsciously according to the demands of activity and terrain. Given modern actuator technology, powered transfemoral prostheses could theoretically provide a similar degree of sophistication and function. However, experimentally quantifying knee stiffness modulation during natural gait is challenging. Alternatively, joint stiffness could be estimated in a less disruptive manner using electromyography (EMG) combined with kinetic and kinematic measurements to estimate muscle force, together with models that relate muscle force to stiffness. Here we present the first step in that process, where we develop such an approach and evaluate it in isometric conditions, where experimental measurements are more feasible. Our EMG-guided modeling approach allows us to consider conditions with antagonistic muscle activation, a phenomenon commonly observed in physiological gait. Our validation shows that model-based estimates of knee joint stiffness coincide well with experimental data obtained using conventional perturbation techniques. We conclude that knee stiffness can be accurately estimated in isometric conditions without applying perturbations, which presents an important step toward our ultimate goal of quantifying knee stiffness during gait.

  11. Model-Based Estimation of Knee Stiffness

    PubMed Central

    Pfeifer, Serge; Vallery, Heike; Hardegger, Michael; Riener, Robert; Perreault, Eric J.

    2013-01-01

    During natural locomotion, the stiffness of the human knee is modulated continuously and subconsciously according to the demands of activity and terrain. Given modern actuator technology, powered transfemoral prostheses could theoretically provide a similar degree of sophistication and function. However, experimentally quantifying knee stiffness modulation during natural gait is challenging. Alternatively, joint stiffness could be estimated in a less disruptive manner using electromyography (EMG) combined with kinetic and kinematic measurements to estimate muscle force, together with models that relate muscle force to stiffness. Here we present the first step in that process, where we develop such an approach and evaluate it in isometric conditions, where experimental measurements are more feasible. Our EMG-guided modeling approach allows us to consider conditions with antagonistic muscle activation, a phenomenon commonly observed in physiological gait. Our validation shows that model-based estimates of knee joint stiffness coincide well with experimental data obtained using conventional perturbation techniques. We conclude that knee stiffness can be accurately estimated in isometric conditions without applying perturbations, which presents an important step towards our ultimate goal of quantifying knee stiffness during gait. PMID:22801482

  12. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
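
    The point-to-line-segment distance underlying the proposed similarity metric is a small geometric primitive; a sketch of the 2-D case (the helper name is ours):

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to the segment from a to b (2-D numpy arrays)."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)  # clamp projection onto the segment
    return np.linalg.norm(p - (a + t * ab))

# Image point versus a projected model edge
p = np.array([2.0, 1.0])
a, b = np.array([0.0, 0.0]), np.array([4.0, 0.0])
print(point_to_segment(p, a, b))  # 1.0: p projects onto the segment's interior
```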

  13. Model based systems engineering for astronomical projects

    NASA Astrophysics Data System (ADS)

    Karban, R.; Andolfato, L.; Bristow, P.; Chiozzi, G.; Esselborn, M.; Schilling, M.; Schmid, C.; Sommer, H.; Zamparelli, M.

    2014-08-01

    Model Based Systems Engineering (MBSE) is an emerging field of systems engineering for which the System Modeling Language (SysML) is a key enabler for descriptive, prescriptive and predictive models. This paper surveys some of the capabilities, expectations and peculiarities of tools-assisted MBSE experienced in real-life astronomical projects. The examples range in depth and scope across a wide spectrum of applications (for example, documentation, requirements, analysis, trade studies) and purposes (addressing a particular development need, or accompanying a project throughout many - if not all - of its lifecycle phases, fostering reuse and minimizing ambiguity). From the beginnings of the Active Phasing Experiment, through VLT instrumentation, VLTI infrastructure, Telescope Control System for the E-ELT, until Wavefront Control for the E-ELT, we show how stepwise refinements of tools, processes and methods have provided tangible benefits to customary system engineering activities like requirement flow-down, design trade studies, interfaces definition, and validation, by means of a variety of approaches (like Model Checking, Simulation, Model Transformation) and methodologies (like OOSEM, State Analysis).

  14. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundancy relations (ARRs).
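
    To make the ARR idea concrete, the sketch below shows how residual signatures over redundancy relations can logically isolate a faulty sensor. It is an illustrative toy, not the paper's algorithm: the two relations, the tolerance, and the single-fault/exoneration reasoning are all invented for the example.

    ```python
    def diagnose(s1, s2, s3, tol=1e-3):
        """Isolate a faulty sensor from two hypothetical analytical
        redundancy relations (ARRs) over sensors s1, s2, s3."""
        residuals = {"ARR1": s1 + s2 - s3,   # e.g. a flow balance
                     "ARR2": s1 - 2.0 * s2}  # e.g. a known static relation
        support = {"ARR1": {"s1", "s2", "s3"},  # sensors each relation depends on
                   "ARR2": {"s1", "s2"}}
        fired = [a for a, r in residuals.items() if abs(r) > tol]
        passed = [a for a in residuals if a not in fired]
        if not fired:
            return set()  # all relations consistent: no evidence of a fault
        # Single-fault reasoning: the fault must lie in every fired relation's
        # support and (exoneration) in no passed relation's support.
        suspects = set.intersection(*(support[a] for a in fired))
        for a in passed:
            suspects -= support[a]
        return suspects

    print(diagnose(2.0, 1.0, 3.0))  # set(): consistent readings
    print(diagnose(2.0, 1.0, 4.0))  # {'s3'}: only ARR1 fires, s1/s2 exonerated
    ```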

  15. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of the model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line-segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter. PMID:16238061
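
    The similarity metric above is built on point-to-line-segment distances between image features and projected model edges. A small numpy sketch of that geometric primitive (the authors' exact weighting and matching scheme is not reproduced here) might look like:

    ```python
    import numpy as np

    def point_segment_distance(p, a, b):
        """Euclidean distance from 2-D point p to the segment from a to b."""
        p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
        ab = b - a
        denom = ab @ ab
        if denom == 0.0:  # degenerate segment (a == b)
            return np.linalg.norm(p - a)
        t = np.clip((p - a) @ ab / denom, 0.0, 1.0)  # projection clamped to [0, 1]
        return np.linalg.norm(p - (a + t * ab))

    # e.g. distance from an edge pixel to a projected model segment
    print(point_segment_distance([2.0, 1.0], [0.0, 0.0], [4.0, 0.0]))  # 1.0
    ```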

  16. [Fast spectral modeling based on Voigt peaks].

    PubMed

    Li, Jin-rong; Dai, Lian-kui

    2012-03-01

    Indirect hard modeling (IHM) is a recently introduced method for quantitative spectral analysis, applicable to nonlinear relations between mixture spectra and component concentrations. IHM is also an effective technique for analyzing mixtures with molecular interactions and strongly overlapping bands. Before the regression model is established, IHM models the measured spectrum as a sum of Voigt peaks. The precision of this spectral model has an immediate impact on the accuracy of the regression model. A spectrum often includes dozens or even hundreds of Voigt peaks, which means that spectral modeling is in fact an optimization problem of high dimensionality; a large computational effort is needed, and the solution may not be numerically unique due to the ill-conditioning of the optimization problem. An improved spectral modeling method is presented in the present paper, which reduces the dimensionality of the optimization problem by determining the overlapped peaks in the spectrum. Experimental results show that spectral modeling based on the new method is more accurate and needs much shorter running times than the conventional method. PMID:22582612
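
    The sum-of-Voigt-peaks representation that IHM fits can be written compactly via the Faddeeva function. The snippet below is a generic sketch of that spectrum model only (peak parameters are arbitrary examples; the paper's dimensionality-reduction step is not shown):

    ```python
    import numpy as np
    from scipy.special import wofz

    def voigt(x, center, sigma, gamma, amplitude=1.0):
        """Voigt profile: Gaussian (width sigma) convolved with Lorentzian (gamma)."""
        z = ((x - center) + 1j * gamma) / (sigma * np.sqrt(2.0))
        return amplitude * wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

    def spectrum_model(x, peaks):
        """Sum of Voigt peaks; peaks is a list of (center, sigma, gamma, amplitude)."""
        return sum(voigt(x, *p) for p in peaks)

    x = np.linspace(0.0, 10.0, 500)
    y = spectrum_model(x, [(3.0, 0.2, 0.1, 1.0), (3.4, 0.3, 0.2, 0.5)])  # two overlapping peaks
    print(float(y.max()))
    ```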

  17. Model Order and Identifiability of Non-Linear Biological Systems in Stable Oscillation.

    PubMed

    Wigren, Torbjörn

    2015-01-01

    The paper presents a theoretical result that clarifies when it is at all possible to determine the nonlinear dynamic equations of a biological system in stable oscillation from measured data. As it turns out, the minimal order needed for this depends on the minimal dimension in which the stable orbit of the system does not intersect itself. This is illustrated with a simulated fourth-order Hodgkin-Huxley spiking neuron model, which is identified using a non-linear second-order differential equation model. The simulated result illustrates that the underlying higher-order model of the spiking neuron cannot be uniquely determined given only the periodic measured data. The result of the paper is of general validity when the dynamics of biological systems in stable oscillation are identified, and it illustrates the need to carefully address non-linear identifiability aspects when validating models based on periodic data. PMID:26671817

  19. Model-based optimization of tapered free-electron lasers

    NASA Astrophysics Data System (ADS)

    Mak, Alan; Curbis, Francesca; Werin, Sverker

    2015-04-01

    The energy extraction efficiency is a figure of merit for a free-electron laser (FEL). It can be enhanced by the technique of undulator tapering, which enables the sustained growth of radiation power beyond the initial saturation point. In the development of a single-pass x-ray FEL, it is important to exploit the full potential of this technique and optimize the taper profile a_w(z). Our approach to the optimization is based on the theoretical model by Kroll, Morton, and Rosenbluth, whereby the taper profile a_w(z) is not a predetermined function (such as linear or exponential) but is determined by the physics of a resonant particle. For further enhancement of the energy extraction efficiency, we propose a modification to the model, which involves manipulations of the resonant particle's phase. Using the numerical simulation code GENESIS, we apply our model-based optimization methods to a case of the future FEL at the MAX IV Laboratory (Lund, Sweden), as well as a case of the LCLS-II facility (Stanford, USA).

  20. Lithium battery aging model based on Dakin's degradation approach

    NASA Astrophysics Data System (ADS)

    Baghdadi, Issam; Briat, Olivier; Delétage, Jean-Yves; Gyan, Philippe; Vinassa, Jean-Michel

    2016-09-01

    This paper proposes and validates a calendar and power-cycling aging model for two different lithium battery technologies. The model development is based on data from the earlier SIMCAL and SIMSTOCK projects, in which the effect of the battery state of charge, temperature and current magnitude on aging was studied on a large panel of different battery chemistries. In this work, the data are analyzed using Dakin's degradation approach: the logarithms of the battery capacity fade and of the resistance increase evolve linearly over aging, and the slopes identified from these straight lines correspond to battery aging rates. An expression for the battery aging rate as a function of the aging factors was thus deduced and found to be governed by Eyring's law. The proposed model simulates the capacity fade and resistance increase as functions of the influencing aging factors. Its Taylor-series expansion was consistent with semi-empirical models based on the square root of time, which are widely studied in the literature. Finally, the influence of the current magnitude and temperature on aging was simulated. Interestingly, the aging rate increases sharply with decreasing temperature in the range -5 °C to 25 °C and with increasing temperature in the range 25 °C to 60 °C.
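
    The core of Dakin's approach as used here (straight-line behavior of the logarithm of the degradation measure, whose slope is the aging rate) reduces to a linear fit. A toy sketch on synthetic data, with all constants fabricated:

    ```python
    import numpy as np

    t = np.linspace(0.0, 1000.0, 50)                # aging time in hours (synthetic)
    true_rate = 2.0e-3                              # fabricated aging rate
    capacity_fade = 0.05 * np.exp(true_rate * t)    # fabricated fade trajectory

    # Dakin's approach: log(fade) is linear in time; the slope is the aging rate.
    slope, intercept = np.polyfit(t, np.log(capacity_fade), 1)
    print(f"estimated aging rate: {slope:.2e} per hour")  # ~2.0e-3
    ```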

  1. Model-based Processing of Micro-cantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Lee, C L; Rudd, R E; Burnham, A K

    2004-11-17

    We develop a model-based processor (MBP) for a micro-cantilever array sensor to detect target species in solution. After discussing the generalized framework for this problem, we develop the specific model used in this study. We perform a proof-of-concept experiment, fit the model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: (1) averaged deflection data, and (2) multi-channel data. In both cases the evaluation proceeds by first performing a model-based parameter estimation to extract the model parameters, next performing a Gauss-Markov simulation, designing the optimal MBP and finally applying it to measured experimental data. The simulation is used to evaluate the performance of the MBP in the multi-channel case and compare it to a "smoother" ("averager") typically used in this application. It was shown that the MBP not only provides a significant gain (approximately 80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, though it includes a correctable systematic bias error. The project's primary accomplishment was the successful application of model-based processing to signals from micro-cantilever arrays: a 40-60 dB improvement over the smoother algorithm was demonstrated. This result was achieved through the development of appropriate mathematical descriptions for the chemical and mechanical phenomena, and incorporation of these descriptions directly into the model-based signal processor. A significant challenge was the development of a framework that would maximize the usefulness of the signal processing algorithms while ensuring the accuracy of the mathematical description of the chemical-mechanical signal. Experimentally, the difficulty was to identify and characterize the non
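
    The reported comparison (model-based processor versus a simple averager) can be mimicked on a scalar Gauss-Markov signal. In the sketch below a plain Kalman filter stands in for the MBP and all noise parameters are invented; it only illustrates why exploiting the model beats smoothing:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, a, q, r = 500, 0.95, 0.1, 1.0              # steps, AR coefficient, noise variances
    x = np.zeros(n)
    for k in range(1, n):                         # Gauss-Markov state
        x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r), n)        # noisy measurements

    xhat = np.zeros(n); p = 1.0                   # model-based processor: scalar Kalman filter
    for k in range(1, n):
        xp, pp = a * xhat[k - 1], a * a * p + q   # predict
        g = pp / (pp + r)                         # Kalman gain
        xhat[k], p = xp + g * (y[k] - xp), (1.0 - g) * pp  # update

    smoothed = np.convolve(y, np.ones(10) / 10, mode="same")  # the "averager" baseline
    print(np.mean((x - xhat) ** 2), np.mean((x - smoothed) ** 2))  # MBP error is lower
    ```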

  2. Linear superposition solutions to nonlinear wave equations

    NASA Astrophysics Data System (ADS)

    Liu, Yu

    2012-11-01

    The solutions to a linear wave equation satisfy the principle of superposition, i.e., a linear superposition of two or more known solutions is still a solution of the linear wave equation. We show in this article that many nonlinear wave equations possess exact traveling wave solutions involving hyperbolic, trigonometric, and exponential functions, and that suitable linear combinations of these known solutions can also constitute linear superposition solutions to some nonlinear wave equations with special structural characteristics. The linear superposition solutions to the generalized KdV equation K(2,2,1), the Olver water wave equation, and the K(n,n) equation are given. The structural characteristic of the nonlinear wave equations that admit linear superposition solutions is analyzed, and the reason why solutions with the forms of hyperbolic, trigonometric, and exponential functions can form linear superposition solutions is also discussed.

  3. A nanoscale linear-to-linear motion converter of graphene.

    PubMed

    Dai, Chunchun; Guo, Zhengrong; Zhang, Hongwei; Chang, Tienchong

    2016-08-14

    Motion conversion plays an irreplaceable role in a variety of machinery. Although many macroscopic motion converters have been widely used, it remains a challenge to convert motion at the nanoscale. Here we propose a nanoscale linear-to-linear motion converter, made of a flake-substrate system of graphene, which can convert the out-of-plane motion of the substrate into the in-plane motion of the flake. The curvature-gradient-induced van der Waals potential gradient between the flake and the substrate provides the driving force to achieve motion conversion. The proposed motion converter may have general implications for the design of nanomachinery and nanosensors.

  5. Evaluating face trustworthiness: a model based approach.

    PubMed

    Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N

    2008-06-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter area extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic: strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension. PMID:19015102

  6. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent variables.

  7. Model Based Autonomy for Robust Mars Operations

    NASA Technical Reports Server (NTRS)

    Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts, cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or to perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.

  8. Toward a model-based cognitive neuroscience of mind wandering.

    PubMed

    Hawkins, G E; Mittner, M; Boekel, W; Heathcote, A; Forstmann, B U

    2015-12-01

    People often "mind wander" during everyday tasks, temporarily losing track of time, place, or current task goals. In laboratory-based tasks, mind wandering is often associated with performance decrements in behavioral variables and changes in neural recordings. Such empirical associations provide descriptive accounts of mind wandering - how it affects ongoing task performance - but fail to provide true explanatory accounts - why it affects task performance. In this perspectives paper, we consider mind wandering as a neural state or process that affects the parameters of quantitative cognitive process models, which in turn affect observed behavioral performance. Our approach thus uses cognitive process models to bridge the explanatory divide between neural and behavioral data. We provide an overview of two general frameworks for developing a model-based cognitive neuroscience of mind wandering. The first approach uses neural data to segment observed performance into a discrete mixture of latent task-related and task-unrelated states, and the second regresses single-trial measures of neural activity onto structured trial-by-trial variation in the parameters of cognitive process models. We discuss the relative merits of the two approaches, and the research questions they can answer, and highlight that both approaches allow neural data to provide additional constraint on the parameters of cognitive models, which will lead to a more precise account of the effect of mind wandering on brain and behavior. We conclude by summarizing prospects for mind wandering as conceived within a model-based cognitive neuroscience framework, highlighting the opportunities for its continued study and the benefits that arise from using well-developed quantitative techniques to study abstract theoretical constructs.

  9. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter estimation.

  10. Photochemistry of tricyclo[5.2.2.0(2,6)]undeca-4,10-dien-8-ones: an efficient general route to substituted linear triquinanes from 2-methoxyphenols. Total synthesis of (±)-Δ9(12)-capnellene.

    PubMed

    Hsu, Day-Shin; Chou, Yu-Yu; Tung, Yen-Shih; Liao, Chun-Chen

    2010-03-01

    An efficient and short entry to polyfunctionalized linear triquinanes from 2-methoxyphenols is described by utilizing the following chemistry. The Diels-Alder reactions of masked o-benzoquinones, derived from 2-methoxyphenols, with cyclopentadiene afford tricyclo[5.2.2.0(2,6)]undeca-4,10-dien-8-ones. Photochemical oxa-di-π-methane (ODPM) rearrangements and 1,3-acyl shifts of the Diels-Alder adducts are investigated. The ODPM-rearranged products are further converted to linear triquinanes by using an O-stannyl ketyl fragmentation. Application of this efficient strategy to the total synthesis of (±)-Δ9(12)-capnellene was accomplished from 2-methoxy-4-methylphenol in nine steps and 20% overall yield.

  11. Model-based Adaptive Control of Resistive Wall Modes in DIII-D

    NASA Astrophysics Data System (ADS)

    Xie, F.; Schuster, E.; Humphreys, D. A.; Walker, M. L.

    2009-11-01

    One of the major non-axisymmetric instabilities under study in the DIII-D tokamak is the resistive wall mode (RWM), a form of plasma kink instability whose growth rate is moderated by the influence of a resistive wall. The General Atomics/FARTECH DIII-D/RWM dynamic model represents the plasma surface as a toroidal current sheet and the wall using an eigenmode approach. We first report on the experimental validation and reconciliation of the proposed dynamic model, a step required before any model-based controller can be implemented in the Plasma Control System (PCS). The dynamic model is then used to synthesize an adaptive control law for the stabilization of the RWM under time-varying β conditions. Simulation results are presented comparing the performance of the model-based adaptive controller with that of present non-model-based PD controllers.

  12. Belos Block Linear Solvers Package

    2004-03-01

    Belos is an extensible and interoperable framework for large-scale, iterative methods for solving systems of linear equations with multiple right-hand sides. The motivation for this framework is to provide a generic interface to a collection of algorithms for solving large-scale linear systems. Belos is interoperable because both the matrix and vectors are considered to be opaque objects; only knowledge of the matrix and vectors via elementary operations is necessary. An implementation of Belos is accomplished via the use of interfaces. One of the goals of Belos is to allow the user flexibility in specifying the data representation for the matrix and vectors and so leverage any existing software investment. The algorithms included in the package are Krylov-based linear solvers, like Block GMRES (Generalized Minimal RESidual) and Block CG (Conjugate Gradient).
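
    Belos itself is a C++ (Trilinos) package, so the snippet below is not its API. It is a language-neutral numpy sketch of the algorithmic core of one of the named methods, block conjugate gradients, which shares work across several right-hand sides (O'Leary-style recurrences; a well-conditioned SPD matrix is assumed):

    ```python
    import numpy as np

    def block_cg(A, B, tol=1e-10, maxiter=200):
        """Solve A X = B for SPD A and a block of right-hand sides B (n x m)."""
        X = np.zeros_like(B)
        R = B - A @ X
        P = R.copy()
        for _ in range(maxiter):
            AP = A @ P
            alpha = np.linalg.solve(P.T @ AP, R.T @ R)  # m x m block step sizes
            X += P @ alpha
            R_new = R - AP @ alpha
            if np.linalg.norm(R_new) < tol:
                break
            beta = np.linalg.solve(R.T @ R, R_new.T @ R_new)
            P = R_new + P @ beta
            R = R_new
        return X

    rng = np.random.default_rng(1)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50.0 * np.eye(50)               # SPD test matrix
    B = rng.standard_normal((50, 3))              # three right-hand sides
    print(np.allclose(block_cg(A, B), np.linalg.solve(A, B)))  # True
    ```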

  13. Reduced-Order Model Based Feedback Control For Modified Hasegawa-Wakatani Model

    SciTech Connect

    Goumiri, I. R.; Rowley, C. W.; Ma, Z.; Gates, D. A.; Krommes, J. A.; Parker, J. B.

    2013-01-28

    In this work, model-based feedback control that stabilizes an unstable equilibrium is developed for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low-dimensional model of the linearized MHW equations. Then a model-based feedback controller is designed for the reduced-order model using linear quadratic regulators (LQR). Finally, a linear quadratic Gaussian (LQG) controller, which is more resistant to disturbances, is derived. The controller is applied to the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.
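
    The LQR step in such a workflow boils down to a Riccati solve. A generic SciPy sketch (placeholder matrices, not the reduced MHW model):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy reduced-order linear model xdot = A x + B u (placeholder matrices).
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)            # state penalty
    R = np.array([[1.0]])    # control penalty

    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati solution
    K = np.linalg.solve(R, B.T @ P)        # LQR gain: u = -K x
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```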

  14. Linear Logistic Test Modeling with R

    ERIC Educational Resources Information Center

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  15. Linear pose estimation from points or lines

    NASA Technical Reports Server (NTRS)

    Ansar, A.; Daniilidis, K.

    2002-01-01

    We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We present a number of simulations which compare our results to two other recent linear algorithms as well as to iterative approaches.
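
    The record's own formulation is not reproduced here, but the flavor of "linear" pose/projection estimation is captured by the classic DLT, which recovers a projection matrix from point correspondences through a null-space (SVD) computation. A self-contained sketch with a synthetic check:

    ```python
    import numpy as np

    def dlt_projection(X, x):
        """Estimate a 3x4 projection matrix from n >= 6 3-D points X (n x 3)
        and their images x (n x 2), via the classic DLT and an SVD."""
        rows = []
        for (Xw, Yw, Zw), (u, v) in zip(X, x):
            Pt = [Xw, Yw, Zw, 1.0]
            rows.append([0.0] * 4 + [-c for c in Pt] + [v * c for c in Pt])
            rows.append(Pt + [0.0] * 4 + [-u * c for c in Pt])
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        return Vt[-1].reshape(3, 4)        # right singular vector of smallest sigma

    # Synthetic check: project points with a known P, then recover it up to scale.
    P = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
    Xw = np.random.default_rng(0).uniform(-1, 1, (8, 3)) + [0.0, 0.0, 5.0]
    xh = (P @ np.column_stack([Xw, np.ones(8)]).T).T
    xi = xh[:, :2] / xh[:, 2:]
    Pe = dlt_projection(Xw, xi)
    Pe *= np.sign(Pe[2, 3])                # fix sign (Vt[-1] already has unit norm)
    print(np.allclose(Pe, P / np.linalg.norm(P), atol=1e-8))  # True
    ```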

  16. Electrothermal linear actuator

    NASA Technical Reports Server (NTRS)

    Derr, L. J.; Tobias, R. A.

    1969-01-01

    Converting electric power into powerful linear thrust without the generation of magnetic fields is accomplished with an electrothermal linear actuator. When heated by an energized filament, a stack of bimetallic washers expands and drives the end of the shaft upward.

  17. A linear programming manual

    NASA Technical Reports Server (NTRS)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.

  18. Distributed model-based nonlinear sensor fault diagnosis in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Lo, Chun; Lynch, Jerome P.; Liu, Mingyan

    2016-01-01

    Wireless sensors operating in harsh environments have the potential to be error-prone. This paper presents a distributed model-based diagnosis algorithm that identifies nonlinear sensor faults. The diagnosis algorithm has advantages over existing fault diagnosis methods such as centralized model-based and distributed model-free methods. An algorithm is presented for detecting common non-linearity faults without using reference sensors. The study introduces a model-based fault diagnosis framework that is implemented within a pair of wireless sensors. The detection of sensor nonlinearities is shown to be equivalent to solving the largest empty rectangle (LER) problem, given a set of features extracted from an analysis of sensor outputs. A low-complexity algorithm that gives an approximate solution to the LER problem is proposed for embedding in resource-constrained wireless sensors. By solving the LER problem, sensors corrupted by non-linearity faults can be isolated and identified. Extensive analysis evaluates the performance of the proposed algorithm through simulation.
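
    For intuition about the LER reduction, the following brute-force reference solver finds the largest empty axis-aligned rectangle among feature points. It is cubic-time, i.e. the opposite of the paper's low-complexity approximation, and the points below are made up:

    ```python
    from itertools import combinations

    def largest_empty_rectangle(points, width, height):
        """Maximum area of an axis-aligned rectangle inside [0,width] x [0,height]
        that contains none of `points` (brute force over candidate side supports)."""
        xs = sorted({0.0, width} | {x for x, _ in points})
        best = 0.0
        for x1, x2 in combinations(xs, 2):        # candidate left/right edges
            inside = sorted(y for x, y in points if x1 < x < x2)
            for lo, hi in zip([0.0] + inside, inside + [height]):
                best = max(best, (x2 - x1) * (hi - lo))  # empty horizontal band
        return best

    pts = [(0.3, 0.4), (0.7, 0.8), (0.5, 0.1)]
    print(largest_empty_rectangle(pts, 1.0, 1.0))  # ~0.49
    ```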

  19. Anisotropic model-based SAR processing

    NASA Astrophysics Data System (ADS)

    Knight, Chad; Gunther, Jake; Moon, Todd

    2013-05-01

    Synthetic aperture radar (SAR) collections that integrate over a wide range of aspect angles hold the potential for improved resolution and foster improved scene interpretability and target detection. In practice, however, it is difficult to realize this potential due to the anisotropic scattering of objects in the scene. The radar cross section (RCS) of most objects changes as a function of aspect angle, yet the isotropic assumption is tacitly made by most common image formation algorithms (IFAs). For wide-aspect scenarios, one way to account for anisotropy would be to employ a piecewise linear model. This paper focuses on such a model, but incorporates aspect and spatial magnitude filters in the image formation process. This is advantageous when prior knowledge is available regarding the desired targets' RCS signature, spatially and in aspect. The appropriate filters can be incorporated into the image formation processing so that specific targets are emphasized while other targets are suppressed. The utility of the proposed approach is demonstrated on the Air Force Research Laboratory (AFRL) GOTCHA data set.

  20. Linear-Algebra Programs

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.

  1. A terabyte linear tape recorder

    NASA Technical Reports Server (NTRS)

    Webber, John C.

    1994-01-01

    A plan has been formulated and selected for a NASA Phase 2 SBIR award for using the VLBA tape recorder for recording general data. The VLBA tape recorder is a high-speed, high-density linear tape recorder developed for Very Long Baseline Interferometry (VLBI) which is presently capable of recording at rates up to 2 Gbit/sec and holding up to 1 Terabyte of data on one tape, using a special interface and not employing error correction. A general-purpose interface and error correction will be added so that the recorder can be used in other high-speed, high-capacity applications.

  2. Tyre pressure monitoring using a dynamical model-based estimator

    NASA Astrophysics Data System (ADS)

    Reina, Giulio; Gentile, Angelo; Messina, Arcangelo

    2015-04-01

    In the last few years, various control systems have been investigated in the automotive field with the aim of increasing the level of safety and stability, avoiding roll-over, and customising handling characteristics. One critical issue connected with their integration is the lack of state and parameter information. As an example, vehicle handling depends to a large extent on tyre inflation pressure. When inflation pressure drops, handling and comfort performance generally deteriorate; in addition, it results in increased fuel consumption and decreased tyre lifetime. Therefore, it is important to keep tyres within the normal inflation pressure range. This paper introduces a model-based approach to estimate tyre inflation pressure online. First, basic vertical dynamic modelling of the vehicle is discussed. Then, a parameter estimation framework for dynamic analysis is presented. Several important vehicle parameters, including tyre inflation pressure, can be estimated using the estimated states. This method is intended to work during normal driving using information from standard sensors only. On the one hand, the driver is informed about the inflation pressure and warned of sudden changes; on the other hand, accurate estimation of the vehicle states is available as possible input to onboard control systems.

  3. Model-based approach to partial tracking for musical transcription

    NASA Astrophysics Data System (ADS)

    Sterian, Andrew; Wakefield, Gregory H.

    1998-10-01

    We present a new method for musical partial tracking in the context of musical transcription, using a time-frequency Kalman filter structure. The filter is based upon a model of the evolution of partial behavior across a wide range of pitches, derived from four brass instruments. Statistics are computed independently for the partial attributes of frequency and log-power first differences. We present observed power spectral density shapes, total powers, and histograms, as well as least-squares approximations to these. We demonstrate that a Kalman filter tracker using this partial model is capable of tracking partials in music. We discuss how the filter structure naturally provides quality-of-fit information about the data for use in further processing, and how this information can be used to perform partial track initiation and termination within a common framework. We propose that a model-based approach to partial tracking is preferable to existing approaches, which generally use heuristic rules or birth/death notions over a small time neighborhood. The advantages include better performance in the presence of cluttered data and simplified tracking over missed observations.

  4. Model based iterative reconstruction for Bright Field electron tomography

    NASA Astrophysics Data System (ADS)

    Venkatakrishnan, Singanallur V.; Drummy, Lawrence F.; De Graef, Marc; Simmons, Jeff P.; Bouman, Charles A.

    2013-02-01

    Bright Field (BF) electron tomography (ET) has been widely used in the life sciences to characterize biological specimens in 3D. While BF-ET is the dominant modality in the life sciences, it has generally been avoided in the physical sciences because of anomalous measurements in the data caused by a phenomenon called "Bragg scatter", visible when crystalline samples are imaged. These measurements cause undesirable artifacts in the reconstruction when typical algorithms such as Filtered Back Projection (FBP) and the Simultaneous Iterative Reconstruction Technique (SIRT) are applied to the data. Model based iterative reconstruction (MBIR) provides a powerful framework for tomographic reconstruction that incorporates a model for the data acquisition, the measurement noise, and the object, to obtain reconstructions that are qualitatively superior and quantitatively accurate. In this paper we present a novel MBIR algorithm for BF-ET which accounts for the presence of anomalous measurements from Bragg scatter during the iterative reconstruction. Our method formulates the reconstruction as the minimization of a cost function which rejects measurements that deviate significantly from the typical Beer's law model widely assumed for BF-ET. Results on simulated as well as real data show that our method can dramatically improve the reconstructions compared to FBP and to MBIR without anomaly rejection, suppressing the artifacts due to the Bragg anomalies.

  5. Learning of Chemical Equilibrium through Modelling-Based Teaching

    ERIC Educational Resources Information Center

    Maia, Poliana Flavia; Justi, Rosaria

    2009-01-01

    This paper presents and discusses students' learning process of chemical equilibrium from a modelling-based approach developed from the use of the "Model of Modelling" diagram. The investigation was conducted in a regular classroom (students 14-15 years old) and aimed at discussing how modelling-based teaching can contribute to students learning…

  6. Model-Based Software Testing for Object-Oriented Software

    ERIC Educational Resources Information Center

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  7. Models-Based Practice: Great White Hope or White Elephant?

    ERIC Educational Resources Information Center

    Casey, Ashley

    2014-01-01

    Background: Many critical curriculum theorists in physical education have advocated a model- or models-based approach to teaching in the subject. This paper explores the literature base around models-based practice (MBP) and asks if this multi-models approach to curriculum planning has the potential to be the great white hope of pedagogical change…

  8. On Solving Non-Autonomous Linear Difference Equations with Applications

    ERIC Educational Resources Information Center

    Abu-Saris, Raghib M.

    2006-01-01

    An explicit formula is established for the general solution of the homogeneous non-autonomous linear difference equation. The formula developed is then used to characterize globally periodic linear difference equations with constant coefficients.

  9. Temporal generalization.

    PubMed

    Church, R M; Gibbon, J

    1982-04-01

    Responses of 26 rats were reinforced following a signal of a certain duration, but not following signals of shorter or longer durations. This led to a positive temporal generalization gradient with a maximum at the reinforced duration in six experiments. Spacing of the nonreinforced signals did not influence the gradient, but the location of the maximum and breadth of the gradient increased with the duration of the reinforced signal. Reduction of reinforcement, either by partial reinforcement or reduction in the probability of a positive signal, led to a decrease in the height of the generalization gradient. There were large, reliable individual differences in the height and breadth of the generalization gradient. When the conditions of reinforcement were reversed (responses reinforced following all signals longer or shorter than a single nonreinforced duration), eight additional rats had a negative generalization gradient with a minimum at a signal duration shorter than the single nonreinforced duration. A scalar timing theory is described that provided a quantitative fit of the data. This theory involved a clock that times in linear units with an accurate mean and a negligible variance, a distribution of memory times that is normally distributed with an accurate mean and a scalar standard deviation, and a rule to respond if the clock is "close enough" to a sample of the memory time distribution. This decision is based on the ratio of the discrepancy between the clock time and the remembered time to the remembered time. When this ratio is below a (variable) threshold, subjects respond. When three timing parameters (the coefficient of variation of the memory time, and the mean and standard deviation of the threshold) were set at their median values, a theory with two free parameters accounted for 96% of the variance. The two parameters reflect the probability of attention to time and the probability of a response given inattention. These parameters were not influenced
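
    The scalar timing account is compact enough to simulate directly. A rough sketch under the stated assumptions (accurate linear clock with negligible variance, scalar memory variability, variable ratio threshold; the parameter values are invented) reproduces a generalization gradient peaked at the reinforced duration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def p_respond(signal, reinforced=4.0, cv=0.25, thr_mean=0.3, thr_sd=0.1, n=20000):
        """Probability of responding to a signal of `signal` seconds."""
        memory = rng.normal(reinforced, cv * reinforced, n)  # scalar memory samples
        threshold = rng.normal(thr_mean, thr_sd, n)          # variable decision threshold
        ratio = np.abs(signal - memory) / memory             # relative discrepancy
        return float(np.mean(ratio < threshold))             # respond if "close enough"

    for s in [1, 2, 3, 4, 5, 6, 8]:
        print(s, round(p_respond(s), 3))  # gradient peaks near the reinforced 4 s
    ```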

  10. Nonparametric Hammerstein model based model predictive control for heart rate regulation.

    PubMed

    Su, Steven W; Huang, Shoudong; Wang, Lu; Celler, Branko G; Savkin, Andrey V; Guo, Ying; Cheng, Teddy

    2007-01-01

    This paper proposes a novel nonparametric-model-based model predictive control approach for the regulation of heart rate during treadmill exercise. As the model structure of the human cardiovascular system is often hard to determine, nonparametric modelling is a more realistic way to describe its complex behaviours. This paper presents a new nonparametric Hammerstein model identification approach for heart rate response modelling. Based on pseudo-random binary sequence experiment data, we decouple the identification of the linear dynamic part and the input nonlinearity of the Hammerstein system. Correlation analysis is applied to acquire the step response of the linear dynamic component. Support Vector Regression is adopted to obtain a nonparametric description of the inverse of the static input nonlinearity, which is utilized to form an approximate linear model of the Hammerstein system. Based on the established model, a model predictive controller under predefined speed and acceleration constraints is designed to achieve safer treadmill exercise. Simulation results show that the proposed control algorithm can achieve optimal heart rate tracking performance under the predefined constraints.
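
    The decoupling idea (identify the static input nonlinearity, then fit the linear block) can be sketched on synthetic data. Two simplifications relative to the paper should be kept in mind: the intermediate signal is assumed observable in a calibration step, and a polynomial fit stands in for Support Vector Regression:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic Hammerstein system: static nonlinearity v = tanh(2u), then
    # linear dynamics y[k] = 0.8 y[k-1] + 0.2 v[k-1], plus output noise.
    u = rng.uniform(-1.0, 1.0, 2000)
    v = np.tanh(2.0 * u)
    y = np.zeros_like(v)
    for k in range(1, len(v)):
        y[k] = 0.8 * y[k - 1] + 0.2 * v[k - 1]
    y += rng.normal(0.0, 0.005, len(y))

    # Step 1 (stand-in for SVR): model the static nonlinearity from calibration pairs.
    coeffs = np.polyfit(u, v, 9)
    v_hat = np.polyval(coeffs, u)

    # Step 2: with the nonlinearity absorbed, v_hat -> y is linear; fit ARX by least squares.
    phi = np.column_stack([y[:-1], v_hat[:-1]])
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    print(theta)  # ~[0.8, 0.2]
    ```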

  11. Model-based cartilage thickness measurement in the submillimeter range

    SciTech Connect

    Streekstra, G. J.; Strackee, S. D.; Maas, M.; Wee, R. ter; Venema, H. W.

    2007-09-15

    Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical
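
    The conventional estimator that this record improves upon locates layer boundaries at second-derivative zero crossings. A 1-D sketch of that baseline (invented PSF and geometry) makes the PSF-induced bias visible:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    dx = 0.01                                        # mm per sample
    x = np.arange(0.0, 5.0, dx)
    profile = ((x > 2.0) & (x < 2.5)).astype(float)  # 0.5 mm sheet
    blurred = gaussian_filter1d(profile, sigma=0.25 / dx)  # PSF with FWHM ~ 0.6 mm

    d2 = np.gradient(np.gradient(blurred, dx), dx)
    sign_change = np.sign(d2[:-1]) != np.sign(d2[1:])
    mask = np.abs(d2[:-1]) + np.abs(d2[1:]) > 1e-6   # ignore numerically flat tails
    crossings = x[:-1][sign_change & mask]
    print(crossings)  # spacing ~0.6 mm here, overestimating the true 0.5 mm sheet
    ```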

  12. Connection between Dirichlet distributions and a scale-invariant probabilistic model based on Leibniz-like pyramids

    NASA Astrophysics Data System (ADS)

    Rodríguez, A.; Tsallis, C.

    2014-12-01

    We show that the N → ∞ limiting probability distributions of a recently introduced family of d-dimensional scale-invariant probabilistic models based on Leibniz-like (d + 1)-dimensional hyperpyramids (Rodríguez and Tsallis 2012 J. Math. Phys. 53 023302) are given by Dirichlet distributions for d = 1, 2, …. It was formerly proved by Rodríguez et al that, for the one-dimensional case (d = 1), the corresponding limiting distributions are q-Gaussians ($\propto e_q^{-\beta x^2}$, with $e_1^{-\beta x^2} = e^{-\beta x^2}$). The Dirichlet distributions generalize the so-called Beta distributions to higher dimensions. Consistently, we make a connection between one-dimensional q-Gaussians and Beta distributions via a linear transformation. In addition, we discuss the probabilistically admissible region of the parameters q and β defining a normalizable q-Gaussian, focusing particularly on the possibility of having both bell-shaped and U-shaped q-Gaussians, the latter corresponding, in an appropriate physical interpretation, to negative temperatures.
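
    The d = 1 statement can be checked numerically: a symmetric Beta sample mapped linearly onto [-1, 1] should match a compact-support q-Gaussian. The specific q and β correspondence below follows from matching exponents in the two densities and is my reading of the result, so treat it as an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    alpha = 3.0                          # symmetric Beta(alpha, alpha); alpha > 1 is bell-shaped
    y = rng.beta(alpha, alpha, 200_000)
    u = 2.0 * y - 1.0                    # linear map to [-1, 1]: density ~ (1 - u^2)^(alpha-1)

    # q-Gaussian (1 - (1-q) beta x^2)^(1/(1-q)) with matching exponent and support:
    q = (alpha - 2.0) / (alpha - 1.0)    # assumed correspondence: 1/(1-q) = alpha - 1
    beta = alpha - 1.0                   # so that (1-q) * beta = 1 (support is [-1, 1])
    # (alpha < 1 would give the U-shaped case, with q > 1 and negative beta.)

    hist, edges = np.histogram(u, bins=60, density=True)
    xc = 0.5 * (edges[:-1] + edges[1:])
    f = (1.0 - (1.0 - q) * beta * xc**2) ** (1.0 / (1.0 - q))
    f /= f.sum() * (xc[1] - xc[0])       # numerical normalization
    print(np.max(np.abs(hist - f)))      # small: the two shapes agree
    ```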

  13. Development of Hierarchical Bayesian Model Based on Regional Frequency Analysis and Its Application to Estimate Areal Rainfall in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, J.; Kwon, H. H.

    2014-12-01

    Existing regional frequency analysis has the disadvantage that it is difficult to incorporate geographical characteristics when estimating areal rainfall. In this regard, this study aims to develop a hierarchical-Bayesian-model-based regional frequency analysis in which spatial patterns of the design rainfall are explicitly tied to geographical information. This study assumes that the parameters of the Gumbel distribution are a function of geographical characteristics (e.g. altitude, latitude and longitude) within a general linear regression framework. Posterior distributions of the regression parameters are estimated by the Bayesian Markov Chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the Gumbel distribution, using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han River watershed. It was found that the proposed Bayesian regional frequency analysis model showed results similar to those of L-moment-based regional frequency analysis. In addition, the model showed an advantage in quantifying the uncertainty of the design rainfall and in estimating areal rainfall with geographical information taken into account. Acknowledgement: This research was supported by a grant (14AWMP-B079364-01) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
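
    A full hierarchical Bayesian MCMC fit is beyond a short snippet, but the structural assumption (Gumbel parameters as a linear function of geographic covariates) can be sketched with a simple two-stage fit on fabricated data. The actual model instead places priors on the regression coefficients and samples them with MCMC:

    ```python
    import numpy as np
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(0)

    # Fabricated network: annual-maximum rainfall at 20 sites, with a Gumbel
    # location parameter that grows linearly with elevation.
    elevation = rng.uniform(0.0, 1500.0, 20)            # m
    true_loc = 50.0 + 0.02 * elevation                  # mm
    samples = [gumbel_r.rvs(loc=m, scale=12.0, size=40, random_state=rng)
               for m in true_loc]

    # Stage 1: per-site Gumbel fits (maximum likelihood).
    locs = np.array([gumbel_r.fit(s)[0] for s in samples])

    # Stage 2: linear model loc ~ a + b * elevation.
    b, a = np.polyfit(elevation, locs, 1)
    print(a, b)  # ~50, ~0.02
    ```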

  14. LESS: a model-based classifier for sparse subspaces.

    PubMed

    Veenman, Cor J; Tax, David M J

    2005-09-01

    In this paper, we specifically focus on high-dimensional data sets for which the number of dimensions is an order of magnitude higher than the number of objects. From a classifier design standpoint, such small sample size problems have some interesting challenges. The first challenge is to find, from all hyperplanes that separate the classes, a separating hyperplane which generalizes well for future data. A second important task is to determine which features are required to distinguish the classes. To attack these problems, we propose the LESS (Lowest Error in a Sparse Subspace) classifier that efficiently finds linear discriminants in a sparse subspace. In contrast with most classifiers for high-dimensional data sets, the LESS classifier incorporates a (simple) data model. Further, by means of a regularization parameter, the classifier establishes a suitable trade-off between subspace sparseness and classification accuracy. In the experiments, we show how LESS performs on several high-dimensional data sets and compare its performance to related state-of-the-art classifiers like, among others, linear ridge regression with the LASSO and the Support Vector Machine. It turns out that LESS performs competitively while using fewer dimensions.

  15. Linear collider: a preview

    SciTech Connect

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet, it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center-of-mass energy above, say, 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies, a number of problems have to be solved. There are two kinds of problems: one related to the feasibility of the principle, and the other associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype, I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.

  16. Linear mass actuator

    NASA Technical Reports Server (NTRS)

    Holloway, Sidney E., III (Inventor); Crossley, Edward A., Jr. (Inventor); Jones, Irby W. (Inventor); Miller, James B. (Inventor); Davis, C. Calvin (Inventor); Behun, Vaughn D. (Inventor); Goodrich, Lewis R., Sr. (Inventor)

    1992-01-01

    A linear mass actuator includes an upper housing and a lower housing connectable to each other and having a central passageway passing axially through a mass that is linearly movable in the central passageway. Rollers mounted in the upper and lower housings, in frictional engagement with the mass, translate the mass linearly in the central passageway, and drive motors operatively coupled to the rollers rotate them to drive the mass axially in the central passageway.

  17. Locating the Optic Nerve in Retinal Images: Comparing Model-Based and Bayesian Decision Methods

    SciTech Connect

    Karnowski, Thomas Paul; Tobin Jr, Kenneth William; Muthusamy Govindasamy, Vijaya Priya; Chaum, Edward

    2006-01-01

    In this work we compare two methods for automatic optic nerve (ON) localization in retinal imagery. The first method uses a Bayesian decision theory discriminator based on four spatial features of the retina imagery. The second method uses a principal component-based reconstruction to model the ON. We report on an improvement to the model-based technique by incorporating linear discriminant analysis and Bayesian decision theory methods. We explore a method to combine both techniques to produce a composite technique with high accuracy and rapid throughput. Results are shown for a data set of 395 images with 2-fold validation testing.

  18. Model-Based, Closed-Loop Control of PZT Creep for Cavity Ring-Down Spectroscopy

    PubMed Central

    McCartt, A D; Ognibene, T J; Bench, G; Turteltaub, K W

    2014-01-01

    Cavity ring-down spectrometers typically employ a PZT stack to modulate the cavity transmission spectrum. While PZTs reduce instrument complexity and aid measurement sensitivity, PZT hysteresis hinders the implementation of cavity-length-stabilized data-acquisition routines. Once the cavity length is stabilized, the cavity's free spectral range imparts extreme linearity and precision to the measured spectrum's wavelength axis. Methods such as frequency-stabilized cavity ring-down spectroscopy have successfully mitigated PZT hysteresis, but their complexity limits commercial applications. Described herein is a single-laser, model-based, closed-loop method for cavity length control. PMID:25395738

  19. Experimental evaluation of neural, statistical, and model-based approaches to FLIR ATR

    NASA Astrophysics Data System (ADS)

    Li, Baoxin; Zheng, Qinfen; Der, Sandor Z.; Chellappa, Rama; Nasrabadi, Nasser M.; Chan, Lipchen A.; Wang, LinCheng

    1998-09-01

    This paper presents an empirical evaluation of a number of recently developed Automatic Target Recognition algorithms for Forward-Looking InfraRed (FLIR) imagery using a large database of real second-generation FLIR images. The algorithms evaluated are based on convolutional neural networks (CNN), principal component analysis (PCA), linear discriminant analysis (LDA), learning vector quantization (LVQ), and modular neural networks (MNN). Two model-based algorithms, using Hausdorff-metric-based matching and geometric hashing, are also evaluated. A hierarchical pose estimation system using CNN plus either PCA or LDA, developed by the authors, is also evaluated using the same data set.

  20. Linear phase compressive filter

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.

  2. Fault tolerant linear actuator

    DOEpatents

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  3. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders. PMID:26379239

  4. Model-Based Reasoning in Humans Becomes Automatic with Training

    PubMed Central

    Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J.

    2015-01-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load—a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders. PMID:26379239

  5. Linearly polarized fiber amplifier

    SciTech Connect

    Kliner, Dahv A.; Koplow, Jeffery P.

    2004-11-30

    Optically pumped rare-earth-doped polarizing fibers exhibit significantly higher gain for one linear polarization state than for the orthogonal state. Such a fiber can be used to construct a single-polarization fiber laser, amplifier, or amplified-spontaneous-emission (ASE) source without the need for additional optical components to obtain stable, linearly polarized operation.

  6. SLAC Linear Collider

    SciTech Connect

    Richter, B.

    1985-12-01

    A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.

  7. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
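
    As a concrete illustration of class (1) above, the following is a minimal sketch of an unweighted least-squares fit with bootstrap resampling of the slope error (Python with NumPy; the data, coefficients, and sample sizes are illustrative assumptions, not values from the paper):

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical calibration sample, e.g. log line width vs. log luminosity.
      x = rng.uniform(0.0, 10.0, 50)
      y = 1.0 + 2.5 * x + rng.normal(0.0, 2.0, 50)

      def fit_line(x, y):
          # Unweighted least-squares line y = a + b*x.
          b, a = np.polyfit(x, y, 1)
          return a, b

      a_hat, b_hat = fit_line(x, y)

      # Bootstrap: resample (x, y) pairs with replacement and refit.
      slopes = []
      for _ in range(2000):
          idx = rng.integers(0, len(x), len(x))
          slopes.append(fit_line(x[idx], y[idx])[1])
      print(f"slope = {b_hat:.3f} +/- {np.std(slopes):.3f} (bootstrap)")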

  8. Linear force device

    NASA Technical Reports Server (NTRS)

    Clancy, John P.

    1988-01-01

    The object of the invention is to provide a mechanical force actuator which is lightweight and manipulatable and utilizes linear motion for push or pull forces while maintaining a constant overall length. The mechanical force producing mechanism comprises a linear actuator mechanism and a linear motion shaft mounted parallel to one another. The linear motion shaft is connected to a stationary or fixed housing and to a movable housing, where the movable housing is mechanically actuated through the actuator mechanism by either manual means or motor means. The housings are adapted to releasably receive a variety of jaw or pulling elements adapted for clamping or prying action. The stationary housing is adapted to be pivotally mounted to permit an angular position of the housing to allow the tool to adapt to skewed interfaces. The actuator mechanism is operated by a gear train to obtain linear motion of the actuator mechanism.

  9. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.; Everitt, B.S.; Howell, D.C.

    2005-01-01

    Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well-known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
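
    A minimal sketch of a permutation test for a regression slope, in the spirit described above (Python with NumPy; the data and the permutation count are illustrative assumptions):

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.normal(size=40)
      y = 0.4 * x + rng.normal(size=40)       # hypothetical behavioral data

      def slope(x, y):
          return np.polyfit(x, y, 1)[0]

      observed = slope(x, y)
      # Under the null of no association, shuffling y breaks any x-y link,
      # so the permuted slopes form the reference distribution.
      perm = np.array([slope(x, rng.permutation(y)) for _ in range(5000)])
      p_value = np.mean(np.abs(perm) >= np.abs(observed))
      print(f"observed slope {observed:.3f}, permutation p = {p_value:.4f}")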

  10. Cognitive Control Predicts Use of Model-Based Reinforcement-Learning

    PubMed Central

    Otto, A. Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D.

    2015-01-01

    Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggests that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system may dominate over the other in the control of behavior are a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in utilization of goal-related contextual information—in the service of overcoming habitual, stimulus-driven responses—in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. The behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior. PMID:25170791
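
    A common way to formalize the arbitration described above is a weighted mixture of model-free and model-based action values; the sketch below shows that scheme in its simplest form (Python with NumPy; the values, weight, and inverse temperature are illustrative assumptions, not parameters from the study):

      import numpy as np

      def combined_value(q_mf, q_mb, w):
          # w = 1: purely model-based control; w = 0: purely model-free.
          return w * q_mb + (1.0 - w) * q_mf

      def softmax_choice(q, beta, rng):
          # Higher beta makes the choice more deterministic.
          p = np.exp(beta * q - np.max(beta * q))
          p /= p.sum()
          return rng.choice(len(q), p=p)

      rng = np.random.default_rng(2)
      q_mf = np.array([0.3, 0.6])   # cached values from past reinforcement
      q_mb = np.array([0.7, 0.2])   # values computed by planning in a world model
      action = softmax_choice(combined_value(q_mf, q_mb, w=0.8), beta=3.0, rng=rng)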

  11. Working-memory capacity protects model-based learning from stress.

    PubMed

    Otto, A Ross; Raio, Candace M; Chiang, Alice; Phelps, Elizabeth A; Daw, Nathaniel D

    2013-12-24

    Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive-dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response--believed to have detrimental effects on prefrontal cortex function--should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that the stress response attenuates model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress.

  12. Distributed real-time model-based diagnosis

    NASA Technical Reports Server (NTRS)

    Barrett, A. C.; Chung, S. H.

    2003-01-01

    This paper presents an approach to onboard anomaly diagnosis that combines the simplicity and real-time guarantee of a rule-based diagnosis system with the specification ease and coverage guarantees of a model-based diagnosis system.

  13. Model-Based Development of Automotive Electronic Climate Control Software

    NASA Astrophysics Data System (ADS)

    Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan

    With increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task. Instead, an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on the domain of their expertise rather than on writing large amounts of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. The back-to-back testing approach is presented, which ensures a flawless and smooth transition from legacy designs to model-based development. The Simulink Report Generator, used to create design documents from the models, is presented along with its use to run the simulation model and capture the results in the test report. Test automation using a model-based development tool that supports the use of a unique set of test cases across several testing levels, with a test procedure independent of the software and hardware platform, is also presented.

  14. Rasch model based analysis of the Force Concept Inventory

    NASA Astrophysics Data System (ADS)

    Planinic, Maja; Ivanjek, Lana; Susac, Ana

    2010-06-01

    The Force Concept Inventory (FCI) is an important diagnostic instrument which is widely used in the field of physics education research. It is therefore very important to evaluate and monitor its functioning using different tools for statistical analysis. One such tool is the stochastic Rasch model, which enables construction of linear measures for persons and items from raw test scores and which can provide important insight into the structure and functioning of the test (how item difficulties are distributed within the test, how well the items fit the model, and how well the items work together to define the underlying construct). The data for the Rasch analysis come from the large-scale research conducted in 2006-07, which investigated Croatian high school students’ conceptual understanding of mechanics on a representative sample of 1676 students (age 17-18 years). The instrument used in the research was the FCI. The average FCI score for the whole sample was found to be (27.7±0.4)%, indicating that most of the students were still non-Newtonians at the end of high school, despite the fact that physics is a compulsory subject in Croatian schools. The large set of obtained data was analyzed with the Rasch measurement computer software WINSTEPS 3.66. Since the FCI is routinely used as pretest and post-test on two very different types of population (non-Newtonian and predominantly Newtonian), an additional predominantly Newtonian sample (N=141, average FCI score of 64.5%) of first year students enrolled in an introductory physics course at the University of Zagreb was also analyzed. The Rasch model based analysis suggests that the FCI has succeeded in defining a sufficiently unidimensional construct for each population. The analysis of fit of data to the model found no grossly misfitting items which would degrade measurement. Some items with larger misfit and items with significantly different difficulties in the two samples of students do require further examination.
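
    The Rasch model referred to above gives the probability of a correct response as a logistic function of the difference between person ability and item difficulty. A minimal sketch follows (Python with NumPy/SciPy; the difficulties and response pattern are illustrative assumptions, not FCI values):

      import numpy as np
      from scipy.optimize import brentq

      def rasch_p(theta, b):
          # P(correct) = 1 / (1 + exp(-(theta - b))) for ability theta, difficulty b.
          return 1.0 / (1.0 + np.exp(-(theta - b)))

      def estimate_ability(responses, difficulties):
          # ML estimate: the ability at which the expected score equals the raw score.
          score = responses.sum()
          return brentq(lambda th: rasch_p(th, difficulties).sum() - score, -6.0, 6.0)

      b = np.array([-1.0, -0.3, 0.2, 0.9, 1.5])   # hypothetical item difficulties (logits)
      x = np.array([1, 1, 1, 0, 0])               # one student's scored responses
      print(f"ability estimate: {estimate_ability(x, b):.2f} logits")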

  15. Research on infrared imaging illumination model based on materials

    NASA Astrophysics Data System (ADS)

    Hu, Hai-he; Feng, Chao-yin; Guo, Chang-geng; Zheng, Hai-jing; Han, Qiang; Hu, Hai-yan

    2013-09-01

    In order to effectively simulate the infrared features of a scene, including infrared highlights, infrared imaging illumination models are proposed for different materials, building on the visible-light illumination model and the optical properties of the material types in the scene. For smooth materials with specular characteristics, an infrared imaging illumination model based on the Blinn-Phong reflection model is adopted, with a self-emission term introduced. For ordinary materials that behave like black bodies and show no highlights, the specular reflection computation is omitted, and the model simply combines the material's self-emission with its reflection of the surroundings. The radiation energy at zero visual range can be obtained from these two models. OpenGL rendering is used to construct an infrared scene simulation system that can also simulate an infrared electro-optical imaging system and generate synthetic infrared images from any viewing angle of the 3D scenes. To validate the models, two typical 3D scenes are built, and their computed infrared images are compared with real infrared images collected by a long-wave infrared imaging camera. The experimental results support two main points: first, the models are capable of producing infrared images very similar to those captured by a thermal infrared camera; second, they can simulate both the infrared specular features of the relevant materials and the common infrared features of general materials, which validates the models. Quantitative analysis shows that the simulated images resemble the collected images in their main features, although their histogram distributions do not match very well.
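
    For the specular branch of such a model, a minimal sketch of a Blinn-Phong-style reflection term with an added self-emission component follows (Python with NumPy; the vectors and coefficients are illustrative assumptions, not values from the paper):

      import numpy as np

      def normalize(v):
          v = np.asarray(v, dtype=float)
          return v / np.linalg.norm(v)

      def ir_radiance(n, l, v, kd, ks, shininess, emission):
          # Blinn-Phong-style diffuse + specular terms plus a self-emission
          # term, standing in for the model for specular materials; for
          # black-body-like materials the specular term would be dropped.
          n, l, v = normalize(n), normalize(l), normalize(v)
          h = normalize(l + v)                            # half vector
          diffuse = kd * max(np.dot(n, l), 0.0)
          specular = ks * max(np.dot(n, h), 0.0) ** shininess
          return emission + diffuse + specular

      rad = ir_radiance(n=[0, 0, 1], l=[1, 1, 1], v=[0, 1, 1],
                        kd=0.2, ks=0.5, shininess=32, emission=0.7)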

  16. A Navigational Analysis of Linear and Non-Linear Hypermedia Interfaces.

    ERIC Educational Resources Information Center

    Hall, Richard H.; Balestra, Joel; Davis, Miles

    The purpose of this experiment was to assess the effectiveness of a comprehensive model for the analysis of hypermap navigation patterns through a comparison of navigation patterns associated with a traditional linear interface versus a non-linear "hypermap" interface. Twenty-six general psychology university students studied material on bipolar…

  17. Reduced model-based decision-making in schizophrenia.

    PubMed

    Culbreth, Adam J; Westbrook, Andrew; Daw, Nathaniel D; Botvinick, Matthew; Barch, Deanna M

    2016-08-01

    Individuals with schizophrenia have a diminished ability to use reward history to adaptively guide behavior. However, tasks traditionally used to assess such deficits often rely on multiple cognitive and neural processes, leaving etiology unresolved. In the current study, we adopted recent computational formalisms of reinforcement learning to distinguish between model-based and model-free decision-making in hopes of specifying mechanisms associated with reinforcement-learning dysfunction in schizophrenia. Under this framework, decision-making is model-free to the extent that it relies solely on prior reward history, and model-based if it relies on prospective information such as motivational state, future consequences, and the likelihood of obtaining various outcomes. Model-based and model-free decision-making was assessed in 33 schizophrenia patients and 30 controls using a 2-stage 2-alternative forced choice task previously demonstrated to discern individual differences in reliance on the 2 forms of reinforcement-learning. We show that, compared with controls, schizophrenia patients demonstrate decreased reliance on model-based decision-making. Further, parameter estimates of model-based behavior correlate positively with IQ and working memory measures, suggesting that model-based deficits seen in schizophrenia may be partially explained by higher-order cognitive deficits. These findings demonstrate specific reinforcement-learning and decision-making deficits and thereby provide valuable insights for understanding disordered behavior in schizophrenia.

  18. Robust master-slave synchronization for general uncertain delayed dynamical model based on adaptive control scheme.

    PubMed

    Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin

    2014-03-01

    In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated by using the adaptive control method. Different from some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can be easily solved by the LMI Toolbox in MATLAB. A numerical example is given to illustrate the effectiveness of the method.

  19. Linear magnetic bearing

    NASA Technical Reports Server (NTRS)

    Studer, P. A. (Inventor)

    1983-01-01

    A linear magnetic bearing system having electromagnetic vernier flux paths in shunt relation with permanent magnets, so that the vernier flux does not traverse the permanent magnet, is described. Novelty is believed to reside in providing a linear magnetic bearing having electromagnetic flux paths that bypass high reluctance permanent magnets. Particular novelty is believed to reside in providing a linear magnetic bearing with a pair of axially spaced elements having electromagnets for establishing vernier x and y axis control. The magnetic bearing system has possible use in connection with a long life reciprocating cryogenic refrigerator that may be used on the space shuttle.

  20. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
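
    The linear co-activation rule described above sums the reciprocals of single-feature reaction times to predict multi-feature search; a minimal numeric sketch follows (Python; the reaction times are illustrative assumptions, not data from the study):

      # Hypothetical single-feature search times (seconds).
      rt_intensity = 1.2    # target differs from distracters only in intensity
      rt_length = 1.5       # target differs only in length

      # Co-activation: search rates (reciprocal RTs) add linearly when the
      # target differs in both features.
      predicted_rate = 1.0 / rt_intensity + 1.0 / rt_length
      print(f"predicted multi-feature RT: {1.0 / predicted_rate:.2f} s")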

  1. Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces

    PubMed Central

    Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.

    2013-01-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, non-model-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can also be used for motor imagery task discriminations with performance (with 8–23% task discrimination errors) comparable to the discrimination performance of the commonly used features such as frequency specific band powers and the AR model parameters directly used. An optimal MPC has significant implications for high performance BCI applications. PMID:21267657
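
    A minimal sketch of extracting autoregressive (AR) coefficients from an EEG epoch by least squares, of the kind that could serve as features for task discrimination or for such a controller (Python with NumPy; the epoch and model order are illustrative assumptions):

      import numpy as np

      def ar_coefficients(x, order):
          # Least-squares fit of x[t] = sum_k a[k] * x[t-k] + e[t].
          X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
          a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
          return a

      rng = np.random.default_rng(3)
      epoch = rng.normal(size=256)                 # stand-in for one EEG epoch
      features = ar_coefficients(epoch, order=6)   # feature vector for the classifier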

  2. Position and torque tracking: series elastic actuation versus model-based-controlled hydraulic actuation.

    PubMed

    Otten, Alexander; van Vuuren, Wieke; Stienen, Arno; van Asseldonk, Edwin; Schouten, Alfred; van der Kooij, Herman

    2011-01-01

    Robots used for diagnostic measurements on, e.g., stroke survivors require actuators that are both stiff and compliant. Stiffness is required for identification purposes, and compliance to compensate for the robot's dynamics, so that the subject can move freely while using the robot. A hydraulic actuator can act as a position (stiff) or a torque (compliant) actuator. The drawback of a hydraulic actuator is that it behaves nonlinearly. This article examines two methods for controlling a nonlinear hydraulic actuator. The first method that is often applied uses an elastic element (i.e. spring) connected in series with the hydraulic actuator so that the torque can be measured as the deflection of the spring. This torque measurement is used for proportional integral control. The second method of control uses the inverse of the model of the actuator as a linearizing controller. Both methods are compared using simulation results. The controller designed for the series elastic hydraulic actuator is faster to implement, but only shows good performance for the working range for which the controller is designed, due to the system's nonlinear behavior. The elastic element is a limiting factor when designing a position controller due to its low torsional stiffness. The model-based controller linearizes the nonlinear system and shows good performance when used for torque and position control. Implementing the model-based controller does require building and validating a detailed model. PMID:22275654
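
    The second method above amounts to feedback linearization: an outer linear controller commands a flow or torque, and the inverse of the actuator model maps that command to a valve signal that cancels the nonlinearity. A minimal sketch with a stand-in valve law follows (Python with NumPy; the square-root law is an illustrative assumption, not the authors' validated model):

      import numpy as np

      # Stand-in static valve nonlinearity: flow = g(u).
      g = lambda u: np.sign(u) * np.sqrt(np.abs(u))
      g_inv = lambda q: np.sign(q) * q ** 2            # exact inverse of g

      def linearizing_command(q_desired):
          # The inverse model turns the desired flow from the outer linear
          # controller into a valve command; g(g_inv(q)) == q, so the outer
          # controller effectively sees a linear plant.
          u = g_inv(q_desired)
          assert np.isclose(g(u), q_desired)
          return u

      print(linearizing_command(0.5))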

  3. Particle Filtering for Model-Based Anomaly Detection in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Solano, Wanda; Banerjee, Bikramjit; Kraemer, Landon

    2012-01-01

    A novel technique has been developed for anomaly detection of rocket engine test stand (RETS) data. The objective was to develop a system that postprocesses a csv file containing the sensor readings and activities (time-series) from a rocket engine test, and detects any anomalies that might have occurred during the test. The output consists of the names of the sensors that show anomalous behavior, and the start and end time of each anomaly. In order to reduce the involvement of domain experts significantly, several data-driven approaches have been proposed where models are automatically acquired from the data, thus bypassing the cost and effort of building system models. Many supervised learning methods can efficiently learn operational and fault models, given large amounts of both nominal and fault data. However, for domains such as RETS data, the amount of anomalous data that is actually available is relatively small, making most supervised learning methods rather ineffective, and in general met with limited success in anomaly detection. The fundamental problem with existing approaches is that they assume that the data are iid, i.e., independent and identically distributed, which is violated in typical RETS data. None of these techniques naturally exploit the temporal information inherent in time series data from the sensor networks. There are correlations among the sensor readings, not only at the same time, but also across time. However, these approaches have not explicitly identified and exploited such correlations. Given these limitations of model-free methods, there has been renewed interest in model-based methods, specifically graphical methods that explicitly reason temporally. The Gaussian Mixture Model (GMM) in a Linear Dynamic System approach assumes that the multi-dimensional test data is a mixture of multi-variate Gaussians, and fits a given number of Gaussian clusters with the help of the well-known Expectation Maximization (EM) algorithm.
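
    A minimal sketch of the GMM idea: fit a mixture to nominal sensor data, then flag test samples whose log-likelihood falls below a low percentile of the nominal scores (Python with scikit-learn; the data, component count, and threshold are illustrative assumptions, and this static sketch omits the linear dynamic system that the approach above couples to the GMM to exploit temporal correlations):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      nominal = rng.normal(0.0, 1.0, size=(500, 3))          # stand-in sensor readings
      test = np.vstack([rng.normal(0.0, 1.0, size=(95, 3)),
                        rng.normal(6.0, 1.0, size=(5, 3))])  # 5 injected anomalies

      gmm = GaussianMixture(n_components=3, random_state=0).fit(nominal)
      threshold = np.percentile(gmm.score_samples(nominal), 1.0)
      anomalies = np.where(gmm.score_samples(test) < threshold)[0]
      print(f"flagged sample indices: {anomalies}")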

  4. Linear Accelerator (LINAC)

    MedlinePlus

    ... is the device most commonly used for external beam radiation treatments for patients with cancer. The linear ... shape of the patient's tumor and the customized beam is directed to the patient's tumor. The beam ...

  5. Isolated linear blaschkoid psoriasis.

    PubMed

    Nasimi, M; Abedini, R; Azizpour, A; Nikoo, A

    2016-10-01

    Linear psoriasis (LPs) is considered a rare clinical presentation of psoriasis, which is characterized by linear erythematous and scaly lesions along the lines of Blaschko. We report the case of a 20-year-old man who presented with asymptomatic linear and S-shaped erythematous, scaly plaques on the right side of his trunk. The plaques were arranged along the lines of Blaschko with a sharp demarcation at the midline. Histological examination of a skin biopsy confirmed the diagnosis of psoriasis. Topical calcipotriol and betamethasone dipropionate ointments were prescribed for 2 months. A good clinical improvement was achieved, with reduction in lesion thickness and scaling. In patients with linear erythematous and scaly plaques along the lines of Blaschko, the diagnosis of LPs should be kept in mind, especially in patients with asymptomatic lesions of late onset. PMID:27663156

  6. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This reduces the estimation errors associated with the linearization process; the resulting approach is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
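
    A minimal sketch of one extended Kalman filter step, with the Jacobians F and H evaluated at the current estimate (Python with NumPy; the function names and matrices are generic placeholders, not the C-MAPSS40k model):

      import numpy as np

      def ekf_step(x, P, u, z, f, h, F, H, Q, R):
          # Predict through the nonlinear model f, then correct with sensors z.
          # F = df/dx and H = dh/dx, evaluated at the current estimate.
          x_pred = f(x, u)
          P_pred = F @ P @ F.T + Q
          innovation = z - h(x_pred)
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
          x_new = x_pred + K @ innovation
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new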

  7. Inertial Linear Actuators

    NASA Technical Reports Server (NTRS)

    Laughlin, Darren

    1995-01-01

    Inertial linear actuators developed to suppress residual accelerations of nominally stationary or steadily moving platforms. Function like long-stroke version of voice coil in conventional loudspeaker, with superimposed linear variable-differential transformer. Basic concept also applicable to suppression of vibrations of terrestrial platforms. For example, laboratory table equipped with such actuators plus suitable vibration sensors and control circuits made to vibrate much less in presence of seismic, vehicular, and other environmental vibrational disturbances.

  8. Linear Alopecia Areata.

    PubMed

    Shetty, Shricharith; Rao, Raghavendra; Kudva, R Ranjini; Subramanian, Kumudhini

    2016-01-01

    Alopecia areata (AA) over the scalp is known to present in various shapes and extents of hair loss. Typically it presents as circumscribed patches of alopecia with the underlying skin remaining normal. We describe a rare variant of AA presenting in a linear band-like form. Only four cases of linear alopecia have been reported in the medical literature to date, all four being diagnosed as lupus erythematosus profundus. PMID:27625568

  9. Linear Alopecia Areata

    PubMed Central

    Shetty, Shricharith; Rao, Raghavendra; Kudva, R Ranjini; Subramanian, Kumudhini

    2016-01-01

    Alopecia areata (AA) over the scalp is known to present in various shapes and extents of hair loss. Typically it presents as circumscribed patches of alopecia with the underlying skin remaining normal. We describe a rare variant of AA presenting in a linear band-like form. Only four cases of linear alopecia have been reported in the medical literature to date, all four being diagnosed as lupus erythematosus profundus.

  10. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    SciTech Connect

    Domm, T.D.; Underwood, R.S.

    1999-04-26

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a

  11. General Dentist

    MedlinePlus

    What Is a General Dentist? Reviewed: January 2012. Related articles: General Dentists; FAGD and MAGD: What Do These Awards Mean?

  12. Using Model-Based Reasoning for Autonomous Instrument Operation - Lessons Learned From IMAGE/LENA

    NASA Technical Reports Server (NTRS)

    Johnson, Michael A.; Rilee, Michael L.; Truszkowski, Walt; Bailin, Sidney C.

    2001-01-01

    Model-based reasoning has been applied as an autonomous control strategy on the Low Energy Neutral Atom (LENA) instrument currently flying on board the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) spacecraft. Explicit models of instrument subsystem responses have been constructed and are used to dynamically adapt the instrument to the spacecraft's environment. These functions are cast as part of a Virtual Principal Investigator (VPI) that autonomously monitors and controls the instrument. In the VPI's current implementation, LENA's command uplink volume has been decreased significantly from its previous volume; typically, no uplinks are required for operations. This work demonstrates that a model-based approach can be used to enhance science instrument effectiveness. The components of LENA are common in space science instrumentation, and lessons learned by modeling this system may be applied to other instruments. Future work involves the extension of these methods to cover more aspects of LENA operation and the generalization to other space science instrumentation.

  13. Model-based detector and extraction of weak signal frequencies from chaotic data.

    PubMed

    Zhou, Cangtao; Cai, Tianxing; Heng Lai, Choy; Wang, Xingang; Lai, Ying-Cheng

    2008-03-01

    Detecting a weak signal from chaotic time series is of general interest in science and engineering. In this work we introduce and investigate a signal detection algorithm for which chaos theory, nonlinear dynamical reconstruction techniques, neural networks, and time-frequency analysis are put together in a synergistic manner. By applying the scheme to numerical simulation and different experimental measurement data sets (Henon map, chaotic circuit, and NH(3) laser data sets), we demonstrate that weak signals hidden beneath the noise floor can be detected by using a model-based detector. Particularly, the signal frequencies can be extracted accurately in the time-frequency space. By comparing the model-based method with the standard denoising wavelet technique as well as supervised principal components analysis detector, we further show that the nonlinear dynamics and neural network-based approach performs better in extracting frequencies of weak signals hidden in chaotic time series.
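
    A minimal sketch of the model-based idea: fit a one-step predictor to the measured series, then look for the buried tone in the spectrum of the prediction residual. Here a polynomial predictor and a logistic-map background stand in for the paper's neural-network model and experimental data (Python with NumPy; all values are illustrative assumptions):

      import numpy as np

      # Chaotic background (logistic map) hiding a weak tone at 0.05 cycles/sample.
      x = np.empty(4096)
      x[0] = 0.3
      for t in range(1, len(x)):
          x[t] = 3.9 * x[t - 1] * (1.0 - x[t - 1])
      s = x + 0.01 * np.sin(2 * np.pi * 0.05 * np.arange(len(x)))

      # Model-based step: a quadratic one-step predictor fit to the data.
      coef = np.polyfit(s[:-1], s[1:], 2)
      residual = s[1:] - np.polyval(coef, s[:-1])

      # The residual spectrum should show energy near the hidden frequency.
      spectrum = np.abs(np.fft.rfft(residual))
      freqs = np.fft.rfftfreq(len(residual))
      print(f"residual spectrum peaks near f = {freqs[spectrum[1:].argmax() + 1]:.3f}")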

  14. Applying model-based diagnostics to space power distribution

    NASA Astrophysics Data System (ADS)

    Quinn, Todd M.; Schlegelmilch, Richard F.

    1994-03-01

    When engineers diagnose system failures, they often use models to confirm system operation. This concept has produced a class of advanced expert systems which perform model-based diagnosis. A model-based diagnostic expert system for a Space Station Freedom electrical power distribution testbed is currently being developed at the NASA Lewis Research Center. The objective of this expert system is to autonomously detect and isolate electrical fault conditions. Marple, a software package developed at TRW, provides a model-based environment utilizing constraint suspension. Originally, constraint suspension techniques were developed for digital systems. However, Marple provides the mechanisms for applying this approach to analog systems, such as the testbed, as well. The expert system was developed using Marple and Lucid Common Lisp running on a Sun SPARC-2 workstation. The Marple modeling environment has proved to be a useful tool for investigating the various aspects of model-based diagnostics. This paper describes work completed to date and lessons learned while employing model-based diagnostics using constraint suspension within an analog system.

  15. An approach to accidents modeling based on compounds road environments.

    PubMed

    Fernandes, Ana; Neves, Jose

    2013-04-01

    The most common approach to study the influence of certain road features on accidents has been the consideration of uniform road segments characterized by a unique feature. However, when an accident is related to the road infrastructure, its cause is usually not a single characteristic but rather a complex combination of several characteristics. The main objective of this paper is to describe a methodology developed in order to consider the road as a complete environment by using compound road environments, overcoming the limitations inherent in considering only uniform road segments. The methodology consists of: dividing a sample of roads into segments; grouping them into quite homogeneous road environments using cluster analysis; and identifying the influence of skid resistance and texture depth on road accidents in each environment by using generalized linear models. The application of this methodology is demonstrated for eight roads. Based on real data from accidents and road characteristics, three compound road environments were established where the pavement surface properties significantly influence the occurrence of accidents. Results have clearly shown that road environments where braking maneuvers are more common, or those with small radii of curvature and high speeds, require higher skid resistance and texture depth as an important contribution to accident prevention. PMID:23376544
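
    A minimal sketch of the final modelling step, a generalized linear model relating accident counts to pavement surface properties (Python with statsmodels; the simulated data, coefficients, and the choice of a Poisson family with log link are illustrative assumptions, since the abstract does not state the family used):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(6)
      n = 200
      df = pd.DataFrame({
          "skid_resistance": rng.uniform(0.3, 0.8, n),   # hypothetical covariates
          "texture_depth": rng.uniform(0.2, 1.5, n),     # mm
      })
      rate = np.exp(1.0 - 3.0 * df["skid_resistance"] - 0.8 * df["texture_depth"])
      df["accidents"] = rng.poisson(rate)

      # Poisson GLM with log link: log E[accidents] = b0 + b1*skid + b2*texture.
      fit = smf.glm("accidents ~ skid_resistance + texture_depth",
                    data=df, family=sm.families.Poisson()).fit()
      print(fit.params)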

  16. Model-based hierarchical reinforcement learning and human action control.

    PubMed

    Botvinick, Matthew; Weinstein, Ari

    2014-11-01

    Recent work has reawakened interest in goal-directed or 'model-based' choice, where decisions are based on prospective evaluation of potential action outcomes. Concurrently, there has been growing attention to the role of hierarchy in decision-making and action control. We focus here on the intersection between these two areas of interest, considering the topic of hierarchical model-based control. To characterize this form of action control, we draw on the computational framework of hierarchical reinforcement learning, using this to interpret recent empirical findings. The resulting picture reveals how hierarchical model-based mechanisms might play a special and pivotal role in human decision-making, dramatically extending the scope and complexity of human behaviour.

  17. Model Based Mission Assurance: Emerging Opportunities for Robotic Systems

    NASA Technical Reports Server (NTRS)

    Evans, John W.; DiVenti, Tony

    2016-01-01

    The emergence of Model Based Systems Engineering (MBSE) in a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiencies across the assurance functions. The MBSE environment supports not only system architecture development, but provides for support of Systems Safety, Reliability and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for supporting a comprehensive viewpoint in which to support Model Based Mission Assurance (MBMA).

  18. Model-based control of transitional and turbulent wall-bounded shear flows

    NASA Astrophysics Data System (ADS)

    Moarref, Rashad

    Turbulent flows are ubiquitous in nature and engineering. Dissipation of kinetic energy by turbulent flow around airplanes, ships, and submarines increases resistance to their motion (drag). In this dissertation, we have designed flow control strategies for enhancing performance of vehicles and other systems involving turbulent flows. While traditional flow control techniques combine physical intuition with costly numerical simulations and experiments, we have developed control-oriented models of wall-bounded shear flows that enable simulation-free and computationally-efficient design of flow controllers. The model-based approach to flow control design has been motivated by the realization that progressive loss of robustness and consequential noise amplification initiate the departure from the laminar flow. In view of this, we have used the Navier-Stokes equations with uncertainty linearized around the laminar flow as a control-oriented model for transitional flows, and we have shown that reducing the sensitivity of fluctuations to external disturbances represents a powerful paradigm for preventing transition. In addition, we have established that turbulence modeling in conjunction with judiciously selected linearization of the flow with control can be used as a powerful control-oriented model for turbulent flows. We have illustrated the predictive power of our model-based control design in three concrete problems: preventing transition by (i) a sensorless strategy based on traveling waves and (ii) an optimal state-feedback controller based on local flow information; and (iii) skin-friction drag reduction in turbulent flows by transverse wall oscillations. We have developed analytical and computational tools based on perturbation analysis (in the control amplitude) for control design by means of spatially- and temporally- periodic flow manipulation in problems (i) and (iii), respectively. In problem (ii), we have utilized tools for designing structured optimal state-feedback controllers.

  19. When Does Model-Based Control Pay Off?

    PubMed

    Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J

    2016-08-01

    Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, does not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand. PMID:27564094

  1. Superconducting linear actuator

    NASA Technical Reports Server (NTRS)

    Johnson, Bruce; Hockney, Richard

    1993-01-01

    Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.

  2. Model-based estimation of changes in air temperature seasonality

    NASA Astrophysics Data System (ADS)

    Barbosa, Susana; Trigo, Ricardo

    2010-05-01

    Seasonality is a ubiquitous feature in climate time series. Climate change is expected to involve not only changes in the mean of climate parameters but also changes in the characteristics of the corresponding seasonal cycle. Therefore the identification and quantification of changes in seasonality is a highly relevant topic in climate analysis, particularly in a global warming context. However, the analysis of seasonality is far from a trivial task. A key challenge is the discrimination between long-term changes in the mean and long-term changes in the seasonal pattern itself, which requires the use of appropriate statistical approaches in order to be able to distinguish between overall trends in the mean and trends in the seasons. Model based approaches are particularly suitable for the analysis of seasonality, enabling assessment of uncertainties in the amplitude and phase of seasonal patterns within a well defined statistical framework. This work addresses the changes in the seasonality of air temperature over the 20th century. The analysed data are global air temperature values close to surface (2m above ground) and mid-troposphere (500 hPa geopotential height) from the recently developed 20th century reanalysis. This new 3-D reanalysis dataset is available from 1891, considerably extending all other reanalyses currently in use (e.g. NCAR, ECMWF), and was obtained with the Ensemble Filter (Compo et al., 2006) by assimilation of pressure observations into a state-of-the-art atmospheric general circulation model that includes the radiative effects of historical time-varying CO2 concentrations, volcanic aerosol emissions and solar output variations. A modeling approach based on autoregression (Barbosa et al., 2008; Barbosa, 2009) is applied within a Bayesian framework for the estimation of a time varying seasonal pattern and further quantification of changes in the amplitude and phase of air temperature over the 20th century.
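
    A minimal sketch of the underlying idea of separating a trend from a changing seasonal cycle: fit a harmonic regression on sub-periods and compare the recovered seasonal amplitudes (Python with NumPy; a least-squares stand-in for the Bayesian autoregressive approach above, with illustrative synthetic data):

      import numpy as np

      rng = np.random.default_rng(7)
      t = np.arange(100 * 12) / 12.0       # 100 years of monthly data (in years)
      # Synthetic temperature: warming trend plus a slowly growing seasonal cycle.
      temp = (10 + 0.01 * t + (5 + 0.005 * t) * np.cos(2 * np.pi * t)
              + rng.normal(0, 1, len(t)))

      def seasonal_amplitude(t, y):
          # Least squares on y = c0 + c1*t + a*cos(2*pi*t) + b*sin(2*pi*t).
          X = np.column_stack([np.ones_like(t), t,
                               np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
          c = np.linalg.lstsq(X, y, rcond=None)[0]
          return np.hypot(c[2], c[3])

      half = len(t) // 2
      print(f"amplitude, first half:  {seasonal_amplitude(t[:half], temp[:half]):.2f}")
      print(f"amplitude, second half: {seasonal_amplitude(t[half:], temp[half:]):.2f}")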

  3. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  4. Verification and Validation of Model-Based Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles; Koga, Dennis (Technical Monitor)

    2001-01-01

    This paper presents a three year project (FY99 to FY01) on the verification and validation of model based autonomous systems. The topics include: 1) Project Profile; 2) Model-Based Autonomy; 3) The Livingstone MIR; 4) MPL2SMV; 5) Livingstone to SMV Translation; 6) Symbolic Model Checking; 7) From Livingstone Models to SMV Models; 8) Application In-Situ Propellant Production; 9) Closed-Loop Verification Principle; 10) Livingstone PathFinder (LPF); 11) Publications and Presentations; and 12) Future Directions. This paper is presented in viewgraph form.

  5. Designing linear systolic arrays

    SciTech Connect

    Kumar, V.K.P.; Tsai, Y.C. . Dept. of Electrical Engineering)

    1989-12-01

    The authors develop a simple mapping technique to design linear systolic arrays. The basic idea of the technique is to map the computations of a certain class of two-dimensional systolic arrays onto one-dimensional arrays. Using this technique, systolic algorithms are derived for problems such as matrix multiplication and transitive closure on linearly connected arrays of PEs with constant I/O bandwidth. Compared to known designs in the literature, the technique leads to modular systolic arrays with constant hardware in each PE, few control lines, lexicographic data input/output, and improved delay time. The unidirectional flow of control and data in this design assures implementation of the linear array in the known fault models of wafer scale integration.
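
    A minimal software simulation of a linear systolic schedule for matrix-vector multiplication, in which partial sums advance one cell per clock tick along the anti-diagonals (Python with NumPy; a simplified schedule for illustration, not the mapping derived in the paper):

      import numpy as np

      def systolic_matvec(A, x):
          # Simulate an n-cell linear array computing y = A @ x: on clock
          # tick `step`, cell j (holding column j of A) performs the MAC
          # for output i = step - j, so each wavefront is one anti-diagonal.
          n = len(x)
          y = np.zeros(n)
          for step in range(2 * n - 1):
              for j in range(n):
                  i = step - j
                  if 0 <= i < n:
                      y[i] += A[i, j] * x[j]
          return y

      A = np.arange(9.0).reshape(3, 3)
      x = np.array([1.0, 2.0, 3.0])
      assert np.allclose(systolic_matvec(A, x), A @ x)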

  6. Linear encoding device

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    1993-01-01

    A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed in which a light source is mounted on the moving object and a position sensitive detector, such as an array photodetector, is mounted on a nearby stationary object. The light source emits a light beam directed towards the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector, is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate data provided by the analog-to-digital converter on the position of the spot and to compute the linear displacement of the moving object based upon the data from the analog-to-digital converter.

  7. Linearly Adjustable International Portfolios

    NASA Astrophysics Data System (ADS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions, however, can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  8. Educational Value and Models-Based Practice in Physical Education

    ERIC Educational Resources Information Center

    Kirk, David

    2013-01-01

    A models-based approach has been advocated as a means of overcoming the serious limitations of the traditional approach to physical education. One of the difficulties with this approach is that physical educators have sought to use it to achieve diverse and sometimes competing educational benefits, and these wide-ranging aspirations are rarely if…

  9. Model-based choices involve prospective neural activity

    PubMed Central

    Doll, Bradley B.; Duncan, Katherine D.; Simon, Dylan A.; Shohamy, Daphna; Daw, Nathaniel D.

    2015-01-01

    Decisions may arise via “model-free” repetition of previously reinforced actions, or by “model-based” evaluation, which is widely thought to follow from prospective anticipation of action consequences using a learned map or model. While choices and neural correlates of decision variables sometimes reflect knowledge of their consequences, it remains unclear whether this actually arises from prospective evaluation. Using functional MRI and a sequential reward-learning task in which paths contained decodable object categories, we found that humans’ model-based choices were associated with neural signatures of future paths observed at decision time, suggesting a prospective mechanism for choice. Prospection also covaried with the degree of model-based influences on neural correlates of decision variables, and was inversely related to prediction error signals thought to underlie model-free learning. These results dissociate separate mechanisms underlying model-based and model-free evaluation and support the hypothesis that model-based influences on choices and neural decision variables result from prospection. PMID:25799041

  10. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  11. Improved Electrohydraulic Linear Actuators

    NASA Technical Reports Server (NTRS)

    Hamtil, James

    2004-01-01

    A product line of improved electrohydraulic linear actuators has been developed. These actuators are designed especially for use in actuating valves in rocket-engine test facilities. They are also adaptable to many industrial uses, such as steam turbines, process control valves, dampers, motion control, etc. The advantageous features of the improved electrohydraulic linear actuators are best described with respect to shortcomings of prior electrohydraulic linear actuators that the improved ones are intended to supplant. The flow of hydraulic fluid to the two ports of the actuator cylinder is controlled by a servo valve that is controlled by a signal from a servo amplifier that, in turn, receives an analog position-command signal (a current having a value between 4 and 20 mA) from a supervisory control system of the facility. As the position command changes, the servo valve shifts, causing a greater flow of hydraulic fluid to one side of the cylinder and thereby causing the actuator piston to move to extend or retract a piston rod from the actuator body. A linear variable differential transformer (LVDT) directly linked to the piston provides a position-feedback signal, which is compared with the position-command signal in the servo amplifier. When the position-feedback and position-command signals match, the servo valve moves to its null position, in which it holds the actuator piston at a steady position.
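
    A minimal sketch of the control loop described above: the 4-20 mA command is mapped to a target position, compared with the LVDT feedback, and the valve is driven in proportion to the error (Python; the stroke length and gain are illustrative assumptions):

      def ma_to_position(cmd_ma, stroke_mm=100.0):
          # Map the 4-20 mA analog command onto the actuator stroke.
          return (cmd_ma - 4.0) / 16.0 * stroke_mm

      def servo_drive(cmd_ma, lvdt_mm, gain=0.5):
          # Servo amplifier: drive the valve in proportion to position error;
          # at zero error the valve sits at its null position (output 0).
          return gain * (ma_to_position(cmd_ma) - lvdt_mm)

      print(servo_drive(12.0, 45.0))   # mid-stroke command with the piston at 45 mm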

  12. Tissue non-linearity.

    PubMed

    Duck, F

    2010-01-01

    The propagation of acoustic waves is a fundamentally non-linear process, and only waves with infinitesimally small amplitudes may be described by linear expressions. In practice, all ultrasound propagation is associated with a progressive distortion in the acoustic waveform and the generation of frequency harmonics. At the frequencies and amplitudes used for medical diagnostic scanning, the waveform distortion can result in the formation of acoustic shocks, excess deposition of energy, and acoustic saturation. These effects occur most strongly when ultrasound propagates within liquids with comparatively low acoustic attenuation, such as water, amniotic fluid, or urine. Attenuation by soft tissues limits but does not extinguish these non-linear effects. Harmonics may be used to create tissue harmonic images. These offer improvements over conventional B-mode images in spatial resolution and, more significantly, in the suppression of acoustic clutter and side-lobe artefacts. The quantity B/A has promise as a parameter for tissue characterization, but methods for imaging B/A have shown only limited success. Standard methods for the prediction of tissue in-situ exposure from acoustic measurements in water, whether for regulatory purposes, for safety assessment, or for planning therapeutic regimes, may be in error because of unaccounted non-linear losses. Biological effects mechanisms are altered by finite-amplitude effects. PMID:20349813

  13. Linear motion valve

    NASA Technical Reports Server (NTRS)

    Chandler, J. A. (Inventor)

    1985-01-01

    The linear motion valve is described. The valve spool employs magnetically permeable rings, spaced apart axially, which engage a sealing assembly having magnetically permeable pole pieces in magnetic relationship with a magnet. The gap between the ring and the pole pieces is sealed with a ferrofluid. Depletion of the ferrofluid is minimized.

  14. Resistors Improve Ramp Linearity

    NASA Technical Reports Server (NTRS)

    Kleinberg, L. L.

    1982-01-01

    Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.

  15. On Solving Linear Recurrences

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2013-01-01

    A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
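
    As a concrete instance (an illustration added here, not taken from the note): for x_{n+1} = a*x_n + b with a != 1, the closed form is x_n = a^n*x_0 + b*(1 - a^n)/(1 - a), which tends to b/(1 - a) as n grows when |a| < 1. A short Python check:

        # Iterating the recurrence versus evaluating the closed form (requires a != 1).
        def iterate(a, b, x0, n):
            x = x0
            for _ in range(n):
                x = a * x + b
            return x

        def closed_form(a, b, x0, n):
            return a**n * x0 + b * (1 - a**n) / (1 - a)

        a, b, x0 = 0.5, 3.0, 1.0
        print(iterate(a, b, x0, 20))      # ~5.999995
        print(closed_form(a, b, x0, 20))  # identical
        print(b / (1 - a))                # limiting value 6.0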

  16. On-line and Model-based Approaches to the Visual Control of Action

    PubMed Central

    Zhao, Huaiyong; Warren, William H.

    2014-01-01

    Two general approaches to the visual control of action have emerged in the last few decades, known as the on-line and model-based approaches. The key difference between them is whether action is controlled by current visual information or on the basis of an internal world model. In this paper, we evaluate three hypotheses: strong on-line control, strong model-based control, and a hybrid solution that combines on-line control with weak off-line strategies. We review experimental research on the control of locomotion and manual actions, which indicates that (a) an internal world model is neither sufficient nor necessary to control action at normal levels of performance; (b) current visual information is necessary and sufficient to control action at normal levels; and (c) under certain conditions (e.g., occlusion) action is controlled by less accurate, simple strategies such as heuristics, visual-motor mappings, or spatial memory. We conclude that the strong model-based hypothesis is not sustainable. Action is normally controlled on-line when current information is available, consistent with the strong on-line control hypothesis. In exceptional circumstances, action is controlled by weak, context-specific, off-line strategies. This hybrid solution is comprehensive, parsimonious, and able to account for a variety of tasks under a range of visual conditions. PMID:25454700

  17. Principal Component Analysis of breast DCE-MRI Adjusted with a Model Based Method

    PubMed Central

    Eyal, Erez.; Badikhi, Daria; Furman-Haran, Edna; Kelcz, Fredrick; Kirshenbaum, Kevin J.; Degani, Hadassa

    2010-01-01

    Purpose: To investigate a fast, objective and standardized method for analyzing breast DCE-MRI applying principal component analysis (PCA) adjusted with a model-based method. Materials and Methods: 3D gradient-echo dynamic contrast-enhanced breast images of 31 malignant and 38 benign lesions, recorded on a 1.5 Tesla scanner, were retrospectively analyzed by PCA and by the model-based three-time-point (3TP) method. Results: Intensity-scaled (IS) and enhancement-scaled (ES) datasets were reduced by PCA, yielding a 1st IS-eigenvector that captured the signal variation between fat and fibroglandular tissue; two IS-eigenvectors and the first two ES-eigenvectors captured contrast-enhanced changes, whereas the remaining eigenvectors captured predominantly noise changes. Rotation of the two contrast-related eigenvectors led to a high congruence between the projection coefficients and the 3TP parameters. The ES-eigenvectors and the rotation angle were highly reproducible across malignant lesions, enabling calculation of a general rotated eigenvector base. ROC curve analysis of the projection coefficients of the two eigenvectors indicated high sensitivity of the 1st rotated eigenvector for detecting lesions (AUC>0.97) and of the 2nd rotated eigenvector for differentiating malignancy from benignancy (AUC=0.87). Conclusion: PCA adjusted with a model-based method provided a fast and objective computer-aided diagnostic tool for breast DCE-MRI. PMID:19856419
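
    A minimal numerical sketch of the adjustment step (synthetic data; the array shapes and rotation angle are illustrative assumptions, not the authors' values):

        import numpy as np

        rng = np.random.default_rng(0)
        curves = rng.standard_normal((500, 7))   # 500 voxels x 7 time points (synthetic)
        curves -= curves.mean(axis=0)            # center before PCA

        _, _, vt = np.linalg.svd(curves, full_matrices=False)
        v1, v2 = vt[0], vt[1]                    # first two eigenvectors

        theta = np.deg2rad(30.0)                 # hypothetical rotation angle
        r1 = np.cos(theta) * v1 + np.sin(theta) * v2    # rotated eigenvector 1
        r2 = -np.sin(theta) * v1 + np.cos(theta) * v2   # rotated eigenvector 2

        coeff_detect = curves @ r1               # projection used for lesion detection
        coeff_malign = curves @ r2               # projection used for benign/malignant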

  18. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
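
    For readers unfamiliar with the estimator, a generic EKF predict/update step looks roughly as follows (a textbook sketch, not the C-MAPSS40k implementation; f and h stand for the on-board nonlinear model and measurement map, F and H for their Jacobians):

        import numpy as np

        def ekf_step(x, P, u, z, f, h, F, H, Q, R):
            # Predict with the nonlinear model
            x_pred = f(x, u)
            F_k = F(x, u)
            P_pred = F_k @ P @ F_k.T + Q
            # Update with the sensed engine outputs
            H_k = H(x_pred)
            y = z - h(x_pred)                      # innovation (residual)
            S = H_k @ P_pred @ H_k.T + R
            K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
            return x_new, P_new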

  19. Model-based Roentgen stereophotogrammetry of orthopaedic implants.

    PubMed

    Valstar, E R; de Jong, F W; Vrooman, H A; Rozing, P M; Reiber, J H

    2001-06-01

    Attaching tantalum markers to prostheses for Roentgen stereophotogrammetry (RSA) may be difficult and is sometimes even impossible. In this study, a model-based RSA method that avoids the attachment of markers to prostheses is presented and validated. This model-based RSA method uses a triangulated surface model of the implant. A projected contour of this model is calculated and this calculated model contour is matched onto the detected contour of the actual implant in the RSA radiograph. The difference between the two contours is minimized by variation of the position and orientation of the model. When a minimal difference between the contours is found, an optimal position and orientation of the model has been obtained. The method was validated by means of a phantom experiment. Three prosthesis components were used in this experiment: the femoral and tibial components of an Interax total knee prosthesis (Stryker Howmedica Osteonics Corp., Rutherford, USA) and the femoral component of a Profix total knee prosthesis (Smith & Nephew, Memphis, USA). For the prosthesis components used in this study, the accuracy of the model-based method is lower than the accuracy of traditional RSA. For the Interax femoral and tibial components, significant dimensional tolerances were found that were probably caused by the casting process and manual polishing of the components' surfaces. The largest standard deviation for any translation was 0.19 mm and for any rotation it was 0.52 degrees. For the Profix femoral component, which had no large dimensional tolerances, the largest standard deviation for any translation was 0.22 mm and for any rotation it was 0.22 degrees. From this study we may conclude that the accuracy of the current model-based RSA method is sensitive to dimensional tolerances of the implant. Research is now being conducted to make model-based RSA less sensitive to dimensional tolerances and thereby improve its accuracy. PMID:11470108
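
    The core of the method is a pose search that minimizes the mismatch between the projected model contour and the detected contour. The toy 2D sketch below conveys the idea only; the real method uses the full RSA projection geometry:

        import numpy as np
        from scipy.optimize import minimize

        t = np.linspace(0, 2 * np.pi, 80)
        model = np.column_stack(((1 + 0.3 * np.cos(t)) * np.cos(t),
                                 (1 + 0.3 * np.cos(t)) * np.sin(t)))

        def transform(points, pose):             # rigid 2D pose: tx, ty, angle
            tx, ty, a = pose
            R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
            return points @ R.T + np.array([tx, ty])

        detected = transform(model, (0.3, -0.2, np.deg2rad(10)))  # faked "detected" contour

        def cost(pose):                          # summed nearest-point distances
            d = np.linalg.norm(transform(model, pose)[:, None, :]
                               - detected[None, :, :], axis=2)
            return d.min(axis=1).sum()

        fit = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
        print(fit.x)                             # ~ (0.3, -0.2, 0.175 rad)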

  20. When Does Model-Based Control Pay Off?

    PubMed Central

    2016-01-01

    Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to “model-free” and “model-based” strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, does not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand. PMID:27564094
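
    The model-free/model-based distinction is easy to make concrete. The sketch below (illustrative, not the paper's task) contrasts a look-up-table Q-learning update with planning by value iteration in a learned transition model:

        import numpy as np

        n_states, n_actions = 3, 2

        # Model-free: cheap incremental update of a look-up table
        Q = np.zeros((n_states, n_actions))
        def q_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

        # Model-based: learn a model, then plan (more accurate, more computation)
        counts = np.ones((n_states, n_actions, n_states))  # transition counts (+1 smoothing)
        rewards = np.zeros((n_states, n_actions))          # estimated immediate rewards
        def plan(gamma=0.9, sweeps=50):
            T = counts / counts.sum(axis=2, keepdims=True) # estimated transition probabilities
            V = np.zeros(n_states)
            for _ in range(sweeps):
                V = (rewards + gamma * T @ V).max(axis=1)  # value iteration
            return V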

  1. Optimal Scaling of Interaction Effects in Generalized Linear Models

    ERIC Educational Resources Information Center

    van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.

    2009-01-01

    Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…

  2. Misspecification of the covariance structure in generalized linear mixed models.

    PubMed

    Chavance, M; Escolano, S

    2016-04-01

    When fitting marginal models to correlated outcomes, the so-called sandwich variance is commonly used. However, this is not the case when fitting mixed models. Using two data sets, we illustrate the problems that can be encountered. We show that the differences or the ratios between the naive and sandwich standard deviations of the fixed effects estimators provide convenient means of assessing the fit of the model, as both are consistent when the covariance structure is correctly specified, but only the latter is when that structure is misspecified. When the number of statistical units is not too small, the sandwich formula correctly estimates the variance of the fixed effects estimator even if the random effects are misspecified, and it can be used as a diagnostic tool for assessing the misspecification of the random effects. A simple comparison with the naive variance is sufficient, and we propose treating a naive-to-sandwich standard deviation ratio outside the [3/4, 4/3] interval as signaling a risk of erroneous inference due to model misspecification. We strongly advocate broader use of the sandwich variance for statistical inference about the fixed effects in mixed models.
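
    The proposed diagnostic reduces to a one-line computation once both kinds of standard deviations are in hand (the values below are placeholders, not data from the paper):

        import numpy as np

        sd_naive = np.array([0.12, 0.40, 0.08])     # hypothetical model-based SDs
        sd_sandwich = np.array([0.13, 0.70, 0.08])  # hypothetical sandwich (robust) SDs

        ratio = sd_naive / sd_sandwich
        suspect = (ratio < 3 / 4) | (ratio > 4 / 3) # outside [3/4, 4/3] signals risk
        print(ratio, suspect)                       # second effect is flagged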

  3. Noiseless Linear Amplification with General Local Unitary Operations

    NASA Astrophysics Data System (ADS)

    Song, Yang; Ning-Juan, Ruan; Yun, Su; Xu-Ling, Lin; Zhi-Qiang, Wu

    2016-07-01

    Not Available. Supported by the National Natural Science Foundation of China under Grant Nos 11304013, 11204197, 11204379 and 11074244, the National Basic Research Program of China under Grant No 2011CBA00200, the Doctor Science Research Foundation of Ministry of Education of China under Grant No 20113402110059, and Civil Aerospace 2013669.

  4. General Purpose Unfolding Program with Linear and Nonlinear Regularizations.

    1987-05-07

    Version 00 The interpretation of several physical measurements requires the unfolding or deconvolution of the solution of Fredholm integral equations of the first kind. Examples include neutron spectroscopy with activation detectors, moderating spheres, or proton recoil measurements. LOUHI82 is designed to be applicable to a large number of physical problems and to be extended to incorporate other unfolding methods.

  5. Item Response Theory Using Hierarchical Generalized Linear Models

    ERIC Educational Resources Information Center

    Ravand, Hamdollah

    2015-01-01

    Multilevel models (MLMs) are flexible in that they can be employed to obtain item and person parameters, test for differential item functioning (DIF) and capture both local item and person dependence. Papers on the MLM analysis of item response data have focused mostly on theoretical issues where applications have been add-ons to simulation…

  6. [Predicting suicide or predicting the unpredictable in an uncertain world: Reinforcement Learning Model-Based analysis].

    PubMed

    Desseilles, Martin

    2012-01-01

    In general, it appears that the suicidal act is highly unpredictable with the current scientific means available. In this article, the author submits the hypothesis that predicting suicide is complex because it amounts to predicting a choice, which is in itself unpredictable. The article proposes a reinforcement-learning model-based analysis. In this model, we integrate, on the one hand, four ascending modulatory neurotransmitter systems (acetylcholine, noradrenaline, serotonin, and dopamine) with their respective projection and afferent regions and, on the other hand, various brain-imaging observations identified to date in the suicidal process.

  7. Noise limitations in optical linear algebra processors.

    PubMed

    Batsell, S G; Jong, T L; Walkup, J F; Krile, T F

    1990-05-10

    A general statistical noise model is presented for optical linear algebra processors. A statistical analysis which includes device noise, the multiplication process, and the addition operation is undertaken. We focus on those processes which are architecturally independent. Finally, experimental results which verify the analytical predictions are also presented.

  8. Families of Linear Recurrences for Catalan Numbers

    ERIC Educational Resources Information Center

    Gauthier, N.

    2011-01-01

    Four different families of linear recurrences are derived for Catalan numbers. The derivations rest on John Riordan's 1973 generalization of Catalan numbers to a set of polynomials. Elementary differential and integral calculus techniques are used and the results should be of interest to teachers and students of introductory courses in calculus…
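
    For context, the most familiar linear recurrence for Catalan numbers is (n + 2)*C_{n+1} = 2*(2n + 1)*C_n with C_0 = 1; whether it coincides with a member of the article's four families is not stated in the abstract. A quick check:

        def catalan(n_max):
            c = [1]
            for n in range(n_max):
                c.append(c[-1] * 2 * (2 * n + 1) // (n + 2))  # exact integer arithmetic
            return c

        print(catalan(8))  # [1, 1, 2, 5, 14, 42, 132, 429, 1430]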

  9. DUSTYWAVE: Linear waves in gas and dust

    NASA Astrophysics Data System (ADS)

    Laibe, Guillaume; Price, Daniel J.

    2016-02-01

    Written in Fortran, DUSTYWAVE computes the exact solution for linear waves in a two-fluid mixture of gas and dust. The solutions are general with respect to both the dust-to-gas ratio and the amplitude of the drag coefficient.

  10. Using Quartile-Quartile Lines as Linear Models

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2015-01-01

    This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
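
    On one natural reading of the construction (an assumption here, since the abstract does not give the exact recipe), the line passes through the point pairing the first quartiles of x and y and the point pairing their third quartiles:

        import numpy as np

        def quartile_quartile_line(x, y):
            q1x, q3x = np.percentile(x, [25, 75])
            q1y, q3y = np.percentile(y, [25, 75])
            slope = (q3y - q1y) / (q3x - q1x)
            return slope, q1y - slope * q1x     # slope, intercept

        x = np.arange(20.0)
        y = 2.0 * x + 1.0
        print(quartile_quartile_line(x, y))     # (2.0, 1.0) for exactly linear data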

  11. Linear irreversible heat engines based on local equilibrium assumptions

    NASA Astrophysics Data System (ADS)

    Izumida, Yuki; Okuda, Koji

    2015-08-01

    We formulate an endoreversible finite-time Carnot cycle model based on the assumptions of local equilibrium and constant energy flux, where the efficiency and the power are expressed in terms of the thermodynamic variables of the working substance. By analyzing the entropy production rate caused by the heat transfer in each isothermal process during the cycle, and using the endoreversible condition applied to the linear response regime, we identify the thermodynamic flux and force of the present system and obtain a linear relation that connects them. We calculate the efficiency at maximum power in the linear response regime by using the linear relation, which agrees with the Curzon-Ahlborn (CA) efficiency known as the upper bound in this regime. The reason for this agreement is also elucidated by rewriting our model in the form of the Onsager relations, where it turns out to satisfy the tight-coupling condition that leads to the CA efficiency.
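
    For reference, the Curzon-Ahlborn bound recovered by the model, next to the Carnot efficiency (example temperatures only):

        import math

        Tc, Th = 300.0, 500.0              # example reservoir temperatures in kelvin
        eta_ca = 1 - math.sqrt(Tc / Th)    # efficiency at maximum power
        eta_carnot = 1 - Tc / Th           # reversible upper bound
        print(eta_ca, eta_carnot)          # ~0.225 versus 0.4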

  12. Reciprocating linear motor

    NASA Technical Reports Server (NTRS)

    Goldowsky, Michael P. (Inventor)

    1987-01-01

    A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.

  13. A Linear Bicharacteristic FDTD Method

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2001-01-01

    The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics [1]-[7]. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility by the ability to adopt a characteristic based method. The use of characteristic variables allows the LBS to treat the outer computational boundaries naturally using the exact compatibility equations. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much easier to nonuniform grids. It has previously been applied to two and three-dimensional freespace electromagnetic propagation and scattering problems [3], [6], [7]. This paper extends the LBS to model lossy dielectric and magnetic materials. Results are presented for several one-dimensional model problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
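
    As a flavor-only sketch of the time integrator involved (the upwind-biased characteristic treatment that defines the LBS is not reproduced here), a classical leapfrog update for the 1D advection equation u_t + c*u_x = 0:

        import numpy as np

        nx, c, dx = 200, 1.0, 1.0
        dt = 0.5 * dx / c                       # CFL-stable step
        x = np.arange(nx) * dx
        u_old = np.exp(-0.01 * (x - 50.0) ** 2) # initial pulse
        u = u_old.copy()                        # crude startup: u^0 reused as u^{-1}

        for _ in range(100):
            u_new = np.empty_like(u)
            u_new[1:-1] = u_old[1:-1] - c * dt / dx * (u[2:] - u[:-2])
            u_new[0], u_new[-1] = u[0], u[-1]   # simple fixed boundaries
            u_old, u = u, u_new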

  14. Electrostatic Linear Actuator

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr.; Curry, Kenneth C.

    1990-01-01

    Electrically charged helices attract or repel each other. Proposed electrostatic linear actuator made with intertwined dual helices, which holds charge-bearing surfaces. Dual-helix configuration provides relatively large unbroken facing charged surfaces (relatively large electrostatic force) within small volume. Inner helix slides axially in outer helix in response to voltages applied to conductors. Spiral form also makes components more rigid. Actuator conceived to have few moving parts and to be operable after long intervals of inactivity.

  15. Linear induction accelerator

    DOEpatents

    Buttram, M.T.; Ginn, J.W.

    1988-06-21

    A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.

  16. Relativistic Linear Restoring Force

    ERIC Educational Resources Information Center

    Clark, D.; Franklin, J.; Mann, N.

    2012-01-01

    We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions: d"p"/d"t" or d"p"/d["tau"]. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…

  17. Combustion powered linear actuator

    DOEpatents

    Fischer, Gary J.

    2007-09-04

    The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot and, are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.

  18. Scintillation event energy measurement via a pulse model based iterative deconvolution method

    NASA Astrophysics Data System (ADS)

    Deng, Zhenzhou; Xie, Qingguo; Duan, Zhiwen; Xiao, Peng

    2013-11-01

    This work focuses on event energy measurement, a crucial task of scintillation detection systems. We modeled the scintillation detector as a linear system and treated the energy measurement as a deconvolution problem. We proposed a pulse model based iterative deconvolution (PMID) method, which can process pileup events without detection and is adaptive for different signal pulse shapes. The proposed method was compared with digital gated integrator (DGI) and digital delay-line clipping (DDLC) using real world experimental data. For singles data, the energy resolution (ER) produced by PMID matched that of DGI. For pileups, the PMID method outperformed both DGI and DDLC in ER and counts recovery. The encouraging results suggest that the PMID method has great potentials in applications like photon-counting systems and pulse height spectrometers, in which multiple-event pileups are common.
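
    The abstract does not give the PMID iteration itself; the sketch below substitutes a standard Richardson-Lucy iterative deconvolution with a known pulse model to illustrate the linear-system view of recovering pile-up event amplitudes:

        import numpy as np

        def rl_deconvolve(y, h, n_iter=200):
            x = np.full_like(y, y.mean())       # nonnegative initial estimate
            for _ in range(n_iter):
                blurred = np.convolve(x, h, mode="same")
                ratio = y / np.maximum(blurred, 1e-12)
                x = x * np.convolve(ratio, h[::-1], mode="same")
            return x

        t = np.arange(200)
        h = np.exp(-t[:50] / 10.0); h /= h.sum()               # model pulse shape
        x_true = np.zeros(200); x_true[[60, 70]] = [5.0, 3.0]  # two piled-up events
        y = np.convolve(x_true, h, mode="same")                # simulated detector output
        x_hat = rl_deconvolve(y, h)                            # recovered event amplitudes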

  19. Scintillation event energy measurement via a pulse model based iterative deconvolution method.

    PubMed

    Deng, Zhenzhou; Xie, Qingguo; Duan, Zhiwen; Xiao, Peng

    2013-11-01

    This work focuses on event energy measurement, a crucial task of scintillation detection systems. We modeled the scintillation detector as a linear system and treated the energy measurement as a deconvolution problem. We proposed a pulse model based iterative deconvolution (PMID) method, which can process pileup events without detection and is adaptive for different signal pulse shapes. The proposed method was compared with digital gated integrator (DGI) and digital delay-line clipping (DDLC) using real world experimental data. For singles data, the energy resolution (ER) produced by PMID matched that of DGI. For pileups, the PMID method outperformed both DGI and DDLC in ER and counts recovery. The encouraging results suggest that the PMID method has great potentials in applications like photon-counting systems and pulse height spectrometers, in which multiple-event pileups are common. PMID:24145134

  20. Model-based control of vortex shedding at low Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Illingworth, Simon J.

    2016-10-01

    Model-based feedback control of vortex shedding at low Reynolds numbers is considered. The feedback signal is provided by velocity measurements in the wake, and actuation is achieved using blowing and suction on the cylinder's surface. Using two-dimensional direct numerical simulations and reduced-order modelling techniques, linear models of the wake are formed at Reynolds numbers between 45 and 110. These models are used to design feedback controllers using H∞ loop-shaping. Complete suppression of shedding is demonstrated up to Re = 110—both for a single-sensor arrangement and for a three-sensor arrangement. The robustness of the feedback controllers is also investigated by applying them over a range of off-design Reynolds numbers, and good robustness properties are seen. It is also observed that it becomes increasingly difficult to achieve acceptable control performance—measured in a suitable way—as Reynolds number increases.

  1. Predictive models based on sensitivity theory and their application to practical shielding problems

    SciTech Connect

    Bhuiyan, S.I.; Roussin, R.W.; Lucius, J.L.; Bartine, D.E.

    1983-01-01

    Two new calculational models based on the use of cross-section sensitivity coefficients have been devised for calculating radiation transport in relatively simple shields. The two models, one an exponential model and the other a power model, have been applied, together with the traditional linear model, to 1- and 2-m-thick concrete-slab problems in which the water content, reinforcing-steel content, or composition of the concrete was varied. Comparing the results obtained with the three models with those obtained from exact one-dimensional discrete-ordinates transport calculations indicates that the exponential model, named the BEST model (for basic exponential shielding trend), is a particularly promising predictive tool for shielding problems dominated by exponential attenuation. When applied to a deep-penetration sodium problem, the BEST model also yields better results than do calculations based on second-order sensitivity theory.
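
    Written in terms of sensitivity coefficients S_i and fractional parameter changes d_i, the three predictive forms are roughly as follows (the exact power-model expression is an assumption, since the abstract does not spell it out):

        import numpy as np

        S = np.array([-2.1, 0.4])           # hypothetical sensitivity coefficients
        d = np.array([0.10, -0.05])         # fractional changes, e.g. +10% water content
        R0 = 1.0                            # reference transmitted response

        linear = R0 * (1 + S @ d)           # traditional linear model
        best = R0 * np.exp(S @ d)           # BEST: basic exponential shielding trend
        power = R0 * np.prod((1 + d) ** S)  # power-law form (assumed definition)
        print(linear, best, power)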

  2. Photonic Beamformer Model Based on Analog Fiber-Optic Links’ Components

    NASA Astrophysics Data System (ADS)

    Volkov, V. A.; Gordeev, D. A.; Ivanov, S. I.; Lavrov, A. P.; Saenko, I. I.

    2016-08-01

    A model of a photonic beamformer for a wideband microwave phased array antenna is investigated. The main features of the photonic beamformer model, based on the true-time-delay technique, DWDM technology and fiber chromatic dispersion, are briefly analyzed. The performance characteristics of the key components of the photonic beamformer for a phased array antenna in the receive mode are examined. A beamformer model composed of components available on the market for analog fiber-optic communication links is designed and preliminarily investigated. Experimental demonstration of the model's beamforming features includes measurement of the far-field patterns of a 5-element microwave linear array antenna over the 6-16 GHz frequency range, with antenna pattern steering up to 40°. The results of the experimental testing show good agreement with the calculated estimates.
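
    The true-time-delay geometry behind such a beamformer is compact enough to state directly: element i of a linear array needs a delay of i*d*sin(theta)/c to steer the beam to angle theta (the element spacing below is an assumed value, not taken from the paper):

        import numpy as np

        c = 3e8                                 # speed of light, m/s
        d = 0.01                                # element spacing, m (assumed)
        theta = np.deg2rad(40.0)                # steering angle
        delays = np.arange(5) * d * np.sin(theta) / c
        print(delays * 1e12)                    # per-element delays in picoseconds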

  3. Model-based fault detection of blade pitch system in floating wind turbines

    NASA Astrophysics Data System (ADS)

    Cho, S.; Gao, Z.; Moan, T.

    2016-09-01

    This paper presents a model-based scheme for fault detection of a blade pitch system in floating wind turbines. A blade pitch system is one of the most critical components due to its effect on the operational safety and the dynamics of wind turbines. Faults in this system should be detected at an early stage to prevent failures. To detect faults of blade pitch actuators and sensors, an appropriate observer must be designed to estimate the states of the system. Residuals are generated by a Kalman filter, and a threshold based on H-infinity optimization and linear matrix inequalities (LMIs) is used for residual evaluation. The proposed method is demonstrated in a case study covering bias and fixed-output faults in pitch sensors and stuck pitch actuators. The simulation results show that the proposed method detects different realistic fault scenarios of wind turbines under stochastic external winds.
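
    A minimal residual-generation loop of this general shape is sketched below (with a fixed filter gain; the paper's H-infinity threshold design and LMI-based evaluation are not reproduced):

        import numpy as np

        def detect(zs, A, H, K, threshold, x0):
            """Flag samples whose innovation norm exceeds the threshold."""
            x, alarms = x0, []
            for z in zs:
                x_pred = A @ x
                r = z - H @ x_pred              # residual (innovation)
                alarms.append(np.linalg.norm(r) > threshold)
                x = x_pred + K @ r              # state update with fixed gain K
            return alarms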

  4. High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software.

    PubMed

    Fabregat-Traver, Diego; Sharapov, Sodbo Zh; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo

    2014-01-01

    To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
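
    One standard computational trick behind fast mixed-model GWAS (a sketch of the general idea, not necessarily the algorithm implemented in OmicABEL): eigendecompose the kinship matrix once, rotate the phenotype and genotypes, and each SNP test becomes a cheap weighted regression:

        import numpy as np

        def mixed_model_scan(y, G, K, h2=0.5):
            vals, U = np.linalg.eigh(K)         # one-time O(n^3) decomposition
            w = 1.0 / (h2 * vals + (1 - h2))    # inverse variance weights (assumed h2)
            y_r = U.T @ y                       # rotated phenotype
            betas = []
            for g in G.T:                       # loop over SNP columns
                g_r = U.T @ g
                betas.append((w * g_r) @ y_r / ((w * g_r) @ g_r))
            return np.array(betas)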

  5. High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software

    PubMed Central

    Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo

    2014-01-01

    To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363

  6. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
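
    The residual-monitoring core can be sketched in a few lines (illustrative trim-point values and detection band, not NASA's model or data):

        import numpy as np

        trim_power = np.array([20.0, 50.0, 80.0, 100.0])   # hypothetical trim schedule
        trim_egt = np.array([500.0, 620.0, 760.0, 860.0])  # nominal output at each trim

        def residuals(power, egt_sensed):
            egt_model = np.interp(power, trim_power, trim_egt)  # piecewise linear model
            return egt_sensed - egt_model

        power = np.array([30.0, 55.0, 90.0])
        egt = np.array([545.0, 640.0, 835.0])
        r = residuals(power, egt)
        print(r, np.abs(r) > 15.0)              # third sample flagged as anomalous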

  7. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  8. Representation of linear orders.

    PubMed

    Taylor, D A; Kim, J O; Sudevan, P

    1984-01-01

    Two binary classification tasks were used to explore the associative structure of linear orders. In Experiment 1, college students classified English letters as targets or nontargets, the targets being consecutive letters of the alphabet. The time to reject nontargets was a decreasing function of the distance from the target set, suggesting response interference mediated by automatic associations from the target to the nontarget letters. The way in which this interference effect depended on the placement of the boundaries between the target and nontarget sets revealed the relative strengths of individual interletter associations. In Experiment 2, students were assigned novel linear orders composed of letterlike symbols and asked to classify pairs of symbols as being adjacent or nonadjacent in the assigned sequence. Reaction time was found to be a joint function of the distance between any pair of symbols and the relative positions of those symbols within the sequence. The effects of both distance and position decreased systematically over 6 days of practice with a particular order, beginning at a level typical of unfamiliar orders and converging on a level characteristic of familiar orders such as letters and digits. These results provide an empirical unification of two previously disparate sets of findings in the literature on linear orders, those concerning familiar and unfamiliar orders, and the systematic transition between the two patterns of results suggests the gradual integration of a new associative structure.

  9. Anti- (conjugate) linearity

    NASA Astrophysics Data System (ADS)

    Uhlmann, Armin

    2016-03-01

    This is an introduction to antilinear operators. Following Wigner, the term antilinear is used, as is standard in physics; mathematicians prefer to say conjugate linear. By restricting to finite-dimensional complex-linear spaces, the exposition becomes elementary in the functional analytic sense. Nevertheless it shows the amazing differences to the linear case. The basics of antilinearity are explained in sects. 2, 3, 4, 7 and in sect. 1.2: spectrum, canonical Hermitian form, antilinear rank one and two operators, the Hermitian adjoint, classification of antilinear normal operators, (skew) conjugations, involutions, and acq-lines, the antilinear counterparts of 1-parameter operator groups. Applications include the representation of the Lagrangian Grassmannian by conjugations and its covering by acq-lines, as well as results on equivalence relations. After recalling elementary Tomita-Takesaki theory, antilinear maps, associated to a vector of a two-partite quantum system, are defined. By allowing modular objects to be written as twisted products of pairs of such maps, they open some new ways to express EPR and teleportation tasks. The appendix presents a look onto the rich structure of antilinear operator spaces.
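
    Concretely, every antilinear operator on C^n can be written as x -> M conj(x) for some matrix M; a quick numerical check of the defining property A(a x + b y) = conj(a) A(x) + conj(b) A(y):

        import numpy as np

        rng = np.random.default_rng(0)
        M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
        A = lambda x: M @ np.conj(x)            # an antilinear operator

        x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
        y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
        a, b = 2.0 + 1.0j, -0.5 + 0.3j
        print(np.allclose(A(a * x + b * y),
                          np.conj(a) * A(x) + np.conj(b) * A(y)))  # True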

  10. Linearized Kernel Dictionary Learning

    NASA Astrophysics Data System (ADS)

    Golts, Alona; Elad, Michael

    2016-06-01

    In this paper we present a new approach for incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart, K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns using the Nyström method; secondly, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance boosting properties.
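
    A compact sketch of the two steps described above (the Gaussian kernel and sample sizes are assumptions; this is not the authors' code):

        import numpy as np

        def virtual_samples(X, m, gamma=1.0, seed=0):
            """Nystrom-approximate the kernel matrix of X (features x samples),
            then return 'virtual samples' F with F.T @ F ~ K."""
            rng = np.random.default_rng(seed)
            n = X.shape[1]
            idx = rng.choice(n, size=m, replace=False)   # sampled columns

            def k(A, B):                                 # Gaussian kernel (assumed)
                d = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)
                return np.exp(-gamma * d)

            C = k(X, X[:, idx])                          # n x m
            W = C[idx, :]                                # m x m
            vals, V = np.linalg.eigh(W)
            vals = np.maximum(vals, 1e-12)
            # K ~ C W^{-1} C.T = F.T F with F = W^{-1/2} C.T
            return (V / np.sqrt(vals)) @ V.T @ C.T       # m x n virtual samples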

  11. The Stanford Linear Collider

    SciTech Connect

    Rees, J.R.

    1989-10-01

    In April 1989, the first Z zero particle was observed at the Stanford Linear Collider (SLC). The SLC collides high-energy beams of electrons and positrons into each other. In a break with tradition, the SLC aims two linear beams at each other. Strong motives impelled the Stanford team to choose the route of innovation. One reason is that linear colliders promise to be less expensive to build and operate than storage ring colliders. An equally powerful motive was the desire to build a Z zero factory, a facility at which the Z zero particle can be studied in detail. More than 200 Z zero particles have been detected at the SLC, and more continue to be churned out regularly. It is in measuring the properties of the Z zero that the SLC has a seminal contribution to make. One of the primary goals of the SLC experimental program is to determine the mass of the Z zero as precisely as possible. In the end, the SLC's greatest significance will be in having proved a new accelerator technology. 7 figs.

  12. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal.
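
    The "model-free" analysis referred to here is numerical deconvolution of the tissue curve by the local arterial input function; a minimal truncated-SVD version (truncation being one common regularization choice, assumed here) looks like:

        import numpy as np

        def svd_deconvolve(aif, tissue, dt, thresh=0.1):
            n = len(aif)
            A = dt * np.array([[aif[i - j] if i >= j else 0.0
                                for j in range(n)] for i in range(n)])
            U, s, Vt = np.linalg.svd(A)
            s_inv = np.where(s > thresh * s[0], 1.0 / s, 0.0)  # truncation regularizes
            residue = Vt.T @ (s_inv * (U.T @ tissue))          # flow-scaled residue function
            return residue.max()                               # ~ perfusion estimate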

  13. The Effects of a Model-Based Physics Curriculum Program with a Physics First Approach: a Causal-Comparative Study

    NASA Astrophysics Data System (ADS)

    Liang, Ling L.; Fulmer, Gavin W.; Majerich, David M.; Clevenstine, Richard; Howanski, Raymond

    2012-02-01

    The purpose of this study is to examine the effects of a model-based introductory physics curriculum on conceptual learning in a Physics First (PF) Initiative. This is the first comparative study in physics education that applies the Rasch modeling approach to examine the effects of a model-based curriculum program combined with PF in the United States. Five teachers and 301 students (in grades 9 through 12) in two mid-Atlantic high schools participated in the study. The students' conceptual learning was measured by the Force Concept Inventory (FCI). It was found that the ninth-graders enrolled in the model-based program in a PF initiative achieved substantially greater conceptual understanding of the physics content than those 11th-/12th-graders enrolled in the conventional non-modeling, non-PF program (Honors strand). For the 11th-/12th-graders enrolled in the non-PF, non-honors strands, the modeling classes also outperformed the conventional non-modeling classes. The instructional activity reports by students indicated that the model-based approach was generally implemented in modeling classrooms. A closer examination of the field notes and the classroom observation profiles revealed that the greatest inconsistencies in model-based teaching practices observed were related to classroom interactions or discourse. Implications and recommendations for future studies are also discussed.

  14. Progress in linear optics, non-linear optics and surface alignment of liquid crystals

    SciTech Connect

    Ong, H.L.; Meyer, R.B.; Hurd, A.J.; Karn, A.J.; Arakelian, S.M.; Shen, Y.R.; Sanda, P.N.; Dove, D.B.; Jansen, S.A.; Hoffmann, R.

    1989-01-01

    We first discuss the progress in linear optics, in particular, the formulation and application of geometrical-optics approximation and its generalization. We then discuss the progress in non-linear optics, in particular, the enhancement of a first-order Freedericksz transition and intrinsic optical bistability in homeotropic and parallel oriented nematic liquid crystal cells. Finally, we discuss the liquid crystal alignment and surface effects on field-induced Freedericksz transition. 50 refs.

  15. General Law of Electromagnetic Radiation Conversion Efficiency in Systems with Linear and Non-Linear Irreversibility

    NASA Astrophysics Data System (ADS)

    Chukova, Yu. P.

    2011-12-01

    It is shown that the efficiency of conversion of solar radiation obeys the same law in living and nonliving (technical) systems. For different processes in living systems, evolution has selected different ranges of solar intensity and different conditions of irreversibility.

  16. Using Model-Based Reasoning for Autonomous Instrument Operation

    NASA Technical Reports Server (NTRS)

    Johnson, Mike; Rilee, M.; Truszkowski, W.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    of environmental hazards, frame the problem of constructing autonomous science instruments. We are developing a model of the Low Energy Neutral Atom instrument (LENA) that is currently flying on board the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) spacecraft. LENA is a particle detector that uses high-voltage electrostatic optics and time-of-flight mass spectrometry to image neutral atom emissions from the denser regions of the Earth's magnetosphere. As with most spacecraft-borne science instruments, phenomena in addition to neutral atoms are detected by LENA. Solar radiation and energetic particles from Earth's radiation belts are of particular concern because they may help generate currents that may compromise LENA's long-term performance. An explicit model of the instrument response has been constructed and is currently in use on board IMAGE to dynamically adapt LENA to the presence or absence of energetic background radiations. The components of LENA are common in space science instrumentation, and lessons learned by modelling this system may be applied to other instruments. This work demonstrates that a model-based approach can be used to enhance science instrument effectiveness. Our future work involves the extension of these methods to cover more aspects of LENA operation and the generalization to other space science instrumentation.

  17. Identifying Model-Based Reconfiguration Goals through Functional Deficiencies

    NASA Technical Reports Server (NTRS)

    Benazera, Emmanuel; Trave-Massuyes, Louise

    2004-01-01

    Model-based diagnosis is now advanced to the point where autonomous systems can face uncertain and faulty situations with success. The next step toward more autonomy is to have the system recover itself after faults occur, a process known as model-based reconfiguration. After faults occur, given a prediction of the nominal behavior of the system and the result of the diagnosis operation, this paper details how to automatically determine the functional deficiencies of the system. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.

  18. Outlier Identification in Model-Based Cluster Analysis

    PubMed Central

    Evans, Katie; Love, Tanzy; Thurston, Sally W.

    2015-01-01

    In model-based clustering based on normal-mixture models, a few outlying observations can influence the cluster structure and number. This paper develops a method to identify these observations; however, it does not attempt to identify clusters amidst a large field of noisy observations. We identify outliers as those observations in a cluster with minimal membership proportion or for which the cluster-specific variance with and without the observation is very different. Results from a simulation study demonstrate the ability of our method to detect true outliers without falsely identifying many non-outliers, and improved performance over other approaches under most scenarios. We use the contributed R package MCLUST for model-based clustering, but propose a modified prior for the cluster-specific variance which avoids degeneracies in estimation procedures. We also compare results from our outlier method to published results on National Hockey League data. PMID:26806993
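
    The first criterion (minimal membership proportion) is easy to illustrate with a Gaussian mixture fit; the sketch below uses scikit-learn rather than MCLUST and plants one deliberately ambiguous point:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 2)),
                       rng.normal(6, 1, (50, 2)),
                       [[3.0, 3.0]]])               # one planted in-between point

        gm = GaussianMixture(n_components=2, random_state=0).fit(X)
        strength = gm.predict_proba(X).max(axis=1)  # membership in assigned cluster
        print(np.argmin(strength))                  # index 100: the planted point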

  19. The Challenge of Configuring Model-Based Space Mission Planners

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy D.; Clement, Bradley J.; Chachere, John M.; Smith, Tristan B.; Swanson, Keith J.

    2011-01-01

    Mission planning is central to space mission operations, and has benefited from advances in model-based planning software. Constraints arise from many sources, including simulators and engineering specification documents, and ensuring that constraints are correctly represented in the planner is a challenge. As mission constraints evolve, planning domain modelers need help with modeling constraints efficiently using the available source data, catching errors quickly, and correcting the model. This paper describes the current state of the practice in designing model-based mission planning tools, the challenges facing model developers, and a proposed Interactive Model Development Environment (IMDE) to configure mission planning systems. We describe current and future technology developments that can be integrated into an IMDE.

  20. MTK: An AI tool for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Erickson, William K.; Schwartz, Mary R.

    1987-01-01

    A 1988 goal for the Systems Autonomy Demonstration Project Office of the NASA Ames Research Center is to apply model-based representation and reasoning techniques in a knowledge-based system that will provide monitoring, fault diagnosis, control and trend analysis of the space station Thermal Management System (TMS). A number of issues raised during the development of the first prototype system inspired the design and construction of a model-based reasoning tool called MTK, which was used in the building of the second prototype. These issues are outlined, along with examples from the thermal system to highlight the motivating factors behind them. An overview of the capabilities of MTK is given.