Sample records for general model based

  1. Assessing College Students' Understanding of Acid Base Chemistry Concepts

    ERIC Educational Resources Information Center

    Wan, Yanjun Jean

    2014-01-01

    Typically most college curricula include three acid base models: Arrhenius', Brønsted-Lowry's, and Lewis'. Although Lewis' acid base model is generally thought to be the most sophisticated among these three models, and can be further applied in reaction mechanisms, most general chemistry curricula either do not include Lewis' acid base model, or…

  2. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for approaching the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for the Generalized Renewal Processes in general and for the Weibull-based Generalized Renewal Processes in particular. In a departure from the literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
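
    For concreteness, the Kijima virtual-age recursions at the heart of such mixed models can be sketched in a few lines; the parameter names and the rejuvenation interpretation below are illustrative assumptions, not the authors' exact formulation:

    ```python
    # Kijima virtual-age updates for a generalized renewal process (GRP).
    # Type I:  a repair removes a fraction of the damage accrued in the last sojourn.
    # Type II: a repair removes a fraction of all damage accrued so far.

    def virtual_ages(sojourn_times, q, kijima_type=1):
        """Virtual age after each intervention; q in [0, 1] is the rejuvenation parameter."""
        v = 0.0
        ages = []
        for x in sojourn_times:
            if kijima_type == 1:
                v = v + q * x          # Kijima Type I
            else:
                v = q * (v + x)        # Kijima Type II
            ages.append(v)
        return ages

    # q = 0 -> perfect repair (as good as new); q = 1 -> minimal repair (as bad as old).
    print(virtual_ages([10.0, 7.0, 5.0], q=0.3, kijima_type=1))
    print(virtual_ages([10.0, 7.0, 5.0], q=0.3, kijima_type=2))
    ```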

  3. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology

    USGS Publications Warehouse

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-01-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  4. Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology

    NASA Astrophysics Data System (ADS)

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-11-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  5. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.

  6. Modelling vortex-induced fluid-structure interaction.

    PubMed

    Benaroya, Haym; Gabbai, Rene D

    2008-04-13

    The principal goal of this research is developing physics-based, reduced-order, analytical models of nonlinear fluid-structure interactions associated with offshore structures. Our primary focus is to generalize Hamilton's variational framework so that systems of flow-oscillator equations can be derived from first principles. This is an extension of earlier work that led to a single energy equation describing the fluid-structure interaction. It is demonstrated here that flow-oscillator models are a subclass of the general, physics-based framework. A flow-oscillator model is a reduced-order mechanical model, generally comprising two mechanical oscillators: one modelling the structural oscillation and the other a nonlinear oscillator representing the fluid behaviour coupled to the structural motion. Reduced-order analytical model development continues to be carried out using a Hamilton's principle-based variational approach. This provides flexibility in the long run for generalizing the modelling paradigm to complex, three-dimensional problems with multiple degrees of freedom, although such extension is very difficult. As both experimental and analytical capabilities advance, the critical research path to developing and implementing fluid-structure interaction models entails formulating generalized equations of motion as a superset of the flow-oscillator models, and developing experimentally derived, semi-analytical functions to describe key terms in the governing equations of motion. The developed variational approach yields a system of governing equations that allows modelling of multiple degree-of-freedom systems. The extensions derived generalize Hamilton's variational formulation for such problems. The Navier-Stokes equations are derived and coupled to the structural oscillator. This general model has been shown to be a superset of the flow-oscillator model. Based on different assumptions, one can derive a variety of flow-oscillator models.
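
    As one concrete instance of the flow-oscillator subclass described above, a widely used pairing couples a linear structural oscillator to a van der Pol wake oscillator. The sketch below integrates such a system; the coefficients and coupling form are illustrative assumptions, not the equations derived in the paper:

    ```python
    # Illustrative two-oscillator model of vortex-induced vibration (dimensionless):
    # structure:  y'' + 2*zeta*y' + y = M*q
    # wake:       q'' + eps*(q**2 - 1)*q' + q = A*y''   (van der Pol form)
    import numpy as np
    from scipy.integrate import solve_ivp

    zeta, M, eps, A = 0.01, 0.05, 0.3, 12.0   # illustrative parameters

    def rhs(t, s):
        y, yd, q, qd = s
        ydd = -2 * zeta * yd - y + M * q          # structural acceleration
        qdd = -eps * (q**2 - 1) * qd - q + A * ydd  # wake variable forced by the structure
        return [yd, ydd, qd, qdd]

    sol = solve_ivp(rhs, (0, 200), [0.0, 0.0, 2.0, 0.0], max_step=0.05)
    print("peak structural amplitude:", np.abs(sol.y[0]).max())
    ```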

  7. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    PubMed

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process modeling are often deteriorated by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Unlike traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set during parameter learning, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization, and robustness, such that it offers a potential application merit for GSH fermentation process modeling.

  8. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    ERIC Educational Resources Information Center

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  9. The General Education Collaboration Model: A Model for Successful Mainstreaming.

    ERIC Educational Resources Information Center

    Simpson, Richard L.; Myles, Brenda Smith

    1990-01-01

    The General Education Collaboration Model is designed to support general educators teaching mainstreamed disabled students, through collaboration with special educators. The model is based on flexible departmentalization, program ownership, identification and development of supportive attitudes, student assessment as a measure of program…

  10. Propagation Effects in Space-Based Surveillance Systems

    DTIC Science & Technology

    1982-02-01

    This report describes the first year's effort to investigate propagation effects in space-based radars. A model was developed for analyzing the...deleterious systems effects by first developing a generalized aperture distribution that ultimately can be applied to any space-based radar configuration...The propagation effects are characterized in terms of the SATCOM model striation parameters. The form of a generalized channel model for space-based radars

  11. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
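
    As a toy illustration of the penetration-rate idea: if a prioritized search ranks equally sized database bins by matching probability, the expected fraction of the database searched is the probability-weighted search depth (uniform bin sizes are a simplifying assumption here, not part of the paper's general derivation):

    ```python
    # Expected penetration rate for a prioritized search over N equally sized bins.
    # p[k] = probability that the true match lies in the bin ranked k+1 by the prior.
    def penetration_rate(p):
        n = len(p)
        # Searching stops after the bin containing the match, so depth is (k+1)/n.
        return sum(pk * (k + 1) for k, pk in enumerate(p)) / n

    uniform = [1 / 8] * 8   # uninformative prior: expect ~56% of the database searched
    skewed = [0.5, 0.2, 0.1, 0.08, 0.05, 0.04, 0.02, 0.01]  # informative prior
    print(penetration_rate(uniform), penetration_rate(skewed))
    ```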

  12. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
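
    For reference, once a threshold u has been chosen (by wavelets in this paper), the EVT step fits a generalized Pareto distribution to the exceedances and inverts it for value-at-risk. A minimal sketch using the standard peaks-over-threshold formula; the simulated returns and the simple quantile threshold below stand in for real data and the paper's wavelet-based choice:

    ```python
    # Peaks-over-threshold VaR with a generalized Pareto tail fit.
    import numpy as np
    from scipy.stats import genpareto

    returns = np.random.default_rng(0).standard_t(df=4, size=2000) * 0.01  # placeholder data
    losses = -returns
    u = np.quantile(losses, 0.95)              # threshold (wavelet-based in the paper)
    exc = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(exc, floc=0)  # shape and scale of the GPD tail

    def var(q, n=len(losses), nu=len(exc)):
        # Standard POT quantile: VaR_q = u + (sigma/xi) * (((n/nu)*(1-q))**(-xi) - 1)
        return u + (sigma / xi) * (((n / nu) * (1 - q)) ** (-xi) - 1)

    print("99% VaR:", var(0.99))
    ```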

  13. An interactive modeling program for the generation of planar polygons for boundary type solids representations of wire frame models

    NASA Technical Reports Server (NTRS)

    Ozsoy, T.; Ochs, J. B.

    1984-01-01

    The development of a general link between three dimensional wire frame models and rigid solid models is discussed. An interactive computer graphics program was developed to serve as a front end to an algorithm (COSMIC Program No. ARC-11446) which offers a general solution to the hidden line problem, where the input data is either line segments or n-sided planar polygons of the most general type with internal boundaries. The program provides a general interface to CAD/CAM data bases and is implemented for models created on the Unigraphics VAX 11/780-based CAD/CAM systems with the display software written for DEC's VS11 color graphics devices.

  14. The Development of Web-based Graphical User Interface for Unified Modeling Data with Multi (Correlated) Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian

    2018-04-01

    Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore statistical models for independent responses, such as the Generalized Linear Model (GLM) and Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open source software R, but they can only be accessed through a command line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based Graphical User Interfaces (GUI). Using the Shiny framework, we develop a standard pull-down menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to run and compare various models for repeated measures data (GEE, GLMM, HGLM, and GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM, and HGLM give very similar results.
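
    The paper's Web-GUI wraps R packages, but the marginal-model idea it exposes is straightforward to show in code. A minimal GEE fit for clustered binary responses using Python's statsmodels; the data frame and column names are hypothetical:

    ```python
    # Marginal model for clustered binary data via generalized estimating equations.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(50), 4),        # 50 subjects, 4 visits each
        "time": np.tile(np.arange(4), 50),
        "dose": rng.choice([0, 1], size=200),
    })
    df["y"] = rng.binomial(1, 0.3 + 0.2 * df["dose"])  # synthetic correlated-ish response

    model = smf.gee("y ~ dose + time", groups="subject", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())
    ```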

  15. Improved Characters and Student Learning Outcomes through Development of Character Education Based General Physics Learning Model

    ERIC Educational Resources Information Center

    Derlina; Sabani; Mihardi, Satria

    2015-01-01

    Education research in Indonesia has begun to move toward the development of character education and is no longer fixated on cognitive learning outcomes. This study aimed to produce a character education based general physics learning model (CEBGP Learning Model), together with valid, effective and practical peripheral devices, to improve character…

  16. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X². This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X² are developed for the generalized graded unfolding model (GGUM). The GGUM is a…

  17. A Nonlinear Multigrid Solver for an Atmospheric General Circulation Model Based on Semi-Implicit Semi-Lagrangian Advection of Potential Vorticity

    NASA Technical Reports Server (NTRS)

    McCormick, S.; Ruge, John W.

    1998-01-01

    This work represents part of a project to develop an atmospheric general circulation model based on the semi-Lagrangian advection of potential vorticity (PV) with divergence as the companion prognostic variable.

  18. Generalized random sign and alert delay models for imperfect maintenance.

    PubMed

    Dijoux, Yann; Gaudoin, Olivier

    2014-04-01

    This paper considers the modelling of the process of corrective and condition-based preventive maintenance for complex repairable systems. In order to take into account the dependency between both types of maintenance and the possibility of imperfect maintenance, Generalized Competing Risks models were introduced in Doyen and Gaudoin (J Appl Probab 43:825-839, 2006). In this paper, we study two classes of these models: the Generalized Random Sign and Generalized Alert Delay models. A Generalized Competing Risks model can be built as a generalization of a particular Usual Competing Risks model, either by using a virtual age framework or not. The models' properties are studied and their parameterizations are discussed. Finally, simulation results and an application to real data are presented.

  19. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
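
    The distinction the authors draw is easy to demonstrate: with a nonlinear link, the response at the mean covariate differs from the mean response. A minimal sketch with a logistic model on synthetic data (this illustrates the bias, not the authors' proposed estimator):

    ```python
    # Response-at-mean-covariate vs. mean-of-responses in a logistic model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    x = rng.normal(size=500)
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.5 * x))))

    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

    at_mean_cov = fit.predict(np.array([[1.0, x.mean()]]))[0]  # what "group means" in software report
    marginal = fit.predict(X).mean()                           # population-averaged group mean
    print(at_mean_cov, marginal)  # these differ because the inverse link is nonlinear
    ```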

  20. Comparison of Multiscale Method of Cells-Based Models for Predicting Elastic Properties of Filament Wound C/C-SiC

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Fassin, Marek; Bednarcyk, Brett A.; Reese, Stefanie; Simon, Jaan-Willem

    2017-01-01

    Three different multiscale models, based on the method of cells (generalized and high fidelity) micromechanics models were developed and used to predict the elastic properties of C/C-SiC composites. In particular, the following multiscale modeling strategies were employed: Concurrent multiscale modeling of all phases using the generalized method of cells, synergistic (two-way coupling in space) multiscale modeling with the generalized method of cells, and hierarchical (one-way coupling in space) multiscale modeling with the high fidelity generalized method of cells. The three models are validated against data from a hierarchical multiscale finite element model in the literature for a repeating unit cell of C/C-SiC. Furthermore, the multiscale models are used in conjunction with classical lamination theory to predict the stiffness of C/C-SiC plates manufactured via a wet filament winding and liquid silicon infiltration process recently developed by the German Aerospace Institute.

  1. Electrophilic assistance to the cleavage of an RNA model phosphodiester via specific and general base-catalyzed mechanisms.

    PubMed

    Corona-Martínez, David Octavio; Gomez-Tagle, Paola; Yatsimirsky, Anatoly K

    2012-10-19

    Kinetics of transesterification of the RNA model substrate 2-hydroxypropyl 4-nitrophenyl phosphate promoted by Mg²⁺ and Ca²⁺, the most common biological metals acting as cofactors for nuclease enzymes and ribozymes, as well as by Co(NH₃)₆³⁺, Co(en)₃³⁺, Li⁺, and Na⁺ cations, often employed as mechanistic probes, was studied in 80% v/v (50 mol %) aqueous DMSO, a medium that allows one to discriminate easily between specific base (OH⁻-catalyzed) and general base (buffer-catalyzed) reaction paths. All cations assist the specific base reaction, but only Mg²⁺ and Na⁺ assist the general base reaction. For Mg²⁺-assisted reactions, the solvent deuterium isotope effects are 1.23 and 0.25 for the general base and specific base mechanisms, respectively. Rate constants for Mg²⁺-assisted general base reactions measured with different bases fit the Brønsted correlation with a slope of 0.38, significantly lower than the slope for the unassisted general base reaction (0.77). Transition state binding constants for catalysts in the specific base reaction (K‡(OH)), both in aqueous DMSO and pure water, correlate with their binding constants to the 4-nitrophenyl phosphate dianion (K(NPP)) used as a minimalist transition state model. It was found that K‡(OH) ≈ K(NPP) for "protic" catalysts (Co(NH₃)₆³⁺, Co(en)₃³⁺, guanidinium), but K‡(OH) ≫ K(NPP) for Mg²⁺ and Ca²⁺ acting as Lewis acids. It appears from the results of this study that Mg²⁺ is unique in its ability to efficiently assist the general base-catalyzed transesterification often occurring in active sites of nuclease enzymes and ribozymes.

  2. Generalized Constitutive-Based Theoretical and Empirical Models for Hot Working Behavior of Functionally Graded Steels

    NASA Astrophysics Data System (ADS)

    Vanini, Seyed Ali Sadough; Abolghasemzadeh, Mohammad; Assadi, Abbas

    2013-07-01

    Functionally graded steels with graded ferritic and austenitic regions including bainite and martensite intermediate layers produced by electroslag remelting have attracted much attention in recent years. In this article, an empirical model based on the Zener-Hollomon (Z-H) constitutive equation with generalized material constants is presented to investigate the effects of temperature and strain rate on the hot working behavior of functionally graded steels. Next, a theoretical model, generalized by strain compensation, is developed for the flow stress estimation of functionally graded steels under hot compression based on the phase mixture rule and boundary layer characteristics. The model is used for different strains and grading configurations. Specifically, the results for αβγMγ steels from the empirical and theoretical models showed excellent agreement, within acceptable error, with experiments reported in other references.
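
    For context, the Z-H description referenced here is commonly written as Z = (strain rate) x exp(Q/RT) together with a sinh-type flow-stress law. A minimal numeric sketch; the material constants below are placeholders, not the article's fitted values:

    ```python
    # Sinh-Arrhenius (Zener-Hollomon) flow stress:
    #   Z = strain_rate * exp(Q / (R*T)),  sigma = (1/alpha) * asinh((Z/A)**(1/n))
    import numpy as np

    R = 8.314                                  # gas constant, J/(mol*K)
    Q, A, alpha, n = 300e3, 1e12, 0.012, 5.0   # placeholder material constants

    def flow_stress(strain_rate, T_kelvin):
        Z = strain_rate * np.exp(Q / (R * T_kelvin))     # Zener-Hollomon parameter
        return np.arcsinh((Z / A) ** (1.0 / n)) / alpha  # MPa if alpha is in 1/MPa

    print(flow_stress(0.1, 1273.0))  # e.g. strain rate 0.1 1/s at 1000 degrees C
    ```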

  3. Dualities in CHL-models

    NASA Astrophysics Data System (ADS)

    Persson, Daniel; Volpato, Roberto

    2018-04-01

    We define a very general class of CHL-models associated with any string theory S (bosonic or supersymmetric) compactified on an internal CFT C × T^d. We take the orbifold by a pair (g, δ), where g is a (possibly non-geometric) symmetry of C and δ is a translation along T^n. We analyze the T-dualities of these models and show that in general they contain Atkin–Lehner type symmetries. This generalizes our previous work on N=4 CHL-models based on heterotic string theory on T^6 or type II on K3 × T^2, as well as the ‘monstrous’ CHL-models based on a compactification of heterotic string theory on the Frenkel–Lepowsky–Meurman CFT V♮.

  4. Invariant operators, orthogonal bases and correlators in general tensor models

    NASA Astrophysics Data System (ADS)

    Diaz, Pablo; Rey, Soo-Jong

    2018-07-01

    We study invariant operators in general tensor models. We show that representation theory provides an efficient framework to count and classify invariants in tensor models of (gauge) symmetry G_d = U(N_1) ⊗ ⋯ ⊗ U(N_d). As a continuation and completion of our earlier work, we present two natural ways of counting invariants, one for arbitrary G_d and another valid for large rank of G_d. We construct bases of invariant operators based on the counting, and compute correlators of their elements. The basis associated with finite rank of G_d diagonalizes the two-point function of the free theory. It is analogous to the restricted Schur basis used in matrix models. We show that the constructions get almost identical as we swap the Littlewood-Richardson numbers in multi-matrix models with Kronecker coefficients in general tensor models. We explore the parallelism between matrix model and tensor model in depth from the perspective of representation theory and comment on several ideas for future investigation.

  5. Range image segmentation using Zernike moment-based generalized edge detector

    NASA Technical Reports Server (NTRS)

    Ghosal, S.; Mehrotra, R.

    1992-01-01

    The authors proposed a novel Zernike moment-based generalized step edge detection method which can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all these parameters of the edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate the edge-based segmentation technique.

  6. GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.

    USGS Publications Warehouse

    Chen, Cheng-lung

    1988-01-01

    The earliest model, developed by R. A. Bagnold, was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist of both a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts as well as two major rheological properties (i.e., the normal stress effect and a soil yield criterion) is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplified assumption of constant grain concentration.

  7. Enhancing Retrieval with Hyperlinks: A General Model Based on Propositional Argumentation Systems.

    ERIC Educational Resources Information Center

    Picard, Justin; Savoy, Jacques

    2003-01-01

    Discusses the use of hyperlinks for improving information retrieval on the World Wide Web and proposes a general model for using hyperlinks based on Probabilistic Argumentation Systems. Topics include propositional logic, knowledge, and uncertainty; assumptions; using hyperlinks to modify document score and rank; and estimating the popularity of a…

  8. Chronic heart failure management in Australia -- time for general practice centred models of care?

    PubMed

    Scott, Ian; Jackson, Claire

    2013-05-01

    Chronic heart failure (CHF) is an increasingly prevalent problem within ageing populations and accounts for thousands of hospitalisations and deaths annually in Australia. Disease management programs for CHF (CHF-DMPs) aim to optimise care, with the predominant model being cardiologist led, hospital based multidisciplinary clinics with cardiac nurse outreach. However, findings from contemporary observational studies and clinical trials raise uncertainty around the effectiveness and sustainability of traditional CHF-DMPs in real-world clinical practice. This article suggests an alternative model of care that involves general practitioners with a special interest in CHF liaising with, and being up-skilled by, specialists within community based, multidisciplinary general practice settings. Preliminary data from trials evaluating primary care based CHF-DMPs are encouraging, and further studies are underway comparing this model of care with traditional hospital based, specialist led CHF-DMPs. Results of studies of similar primary care models targeting diabetes and other chronic diseases suggest potential for its application to CHF.

  9. An internet graph model based on trade-off optimization

    NASA Astrophysics Data System (ADS)

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou to grow a random tree with a heavily tailed degree distribution. We propose a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
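
    The underlying trade-off rule from Fabrikant, Koutsoupias and Papadimitriou attaches each new node to the existing node minimizing a weighted sum of geometric distance and centrality. A minimal sketch of that tree-growth step; the paper's generalization to general graphs adds further edges not shown here:

    ```python
    # FKP-style trade-off tree growth: node i attaches to argmin_j alpha*d(i,j) + h(j),
    # where d is Euclidean distance and h(j) is the hop distance from j to the root.
    import math
    import random

    def grow_fkp_tree(n, alpha, seed=0):
        rng = random.Random(seed)
        pts = [(rng.random(), rng.random()) for _ in range(n)]  # random points in unit square
        hops = [0]        # hop distance to the root (node 0)
        parent = [None]
        for i in range(1, n):
            best = min(range(i),
                       key=lambda j: alpha * math.dist(pts[i], pts[j]) + hops[j])
            parent.append(best)
            hops.append(hops[best] + 1)
        return parent

    print(grow_fkp_tree(10, alpha=4.0))  # large alpha favors nearby nodes over central ones
    ```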

  10. A new model for fluid velocity slip on a solid surface.

    PubMed

    Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong

    2016-10-12

    A general adsorption model is developed to describe the interactions between near-wall fluid molecules and solid surfaces. This model serves as a framework for the theoretical modelling of boundary slip phenomena. Based on this adsorption model, a new general model for the slip velocity of fluids on solid surfaces is introduced. The slip boundary condition at a fluid-solid interface has hitherto been considered separately for gases and liquids. In this paper, we show that the slip velocity in both gases and liquids may originate from dynamical adsorption processes at the interface. A unified analytical model that is valid for both gas-solid and liquid-solid slip boundary conditions is proposed based on surface science theory. The corroboration with the experimental data extracted from the literature shows that the proposed model provides an improved prediction compared to existing analytical models for gases at higher shear rates and close agreement for liquid-solid interfaces in general.

  11. A fuzzy case based reasoning tool for model based approach to rocket engine health monitoring

    NASA Technical Reports Server (NTRS)

    Krovvidy, Srinivas; Nolan, Adam; Hu, Yong-Lin; Wee, William G.

    1992-01-01

    In this system we develop a fuzzy case-based reasoner that can build a case representation for several past detected anomalies, and we develop case retrieval methods, based on fuzzy sets, that can be used to index a relevant case when a new problem (case) is presented. The choice of fuzzy sets is justified by the uncertainty in the data. The new problem can be solved using knowledge of the model along with the old cases. This system can then be used to generalize the knowledge from previous cases and use this generalization to refine the existing model definition. This in turn can help to detect failures using the model-based algorithms.

  12. A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models.

    PubMed

    Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio

    2013-08-01

    The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.

  13. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    NASA Astrophysics Data System (ADS)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule to contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We show how the introduced model can be applied to decision problems.

  14. Acoustic Tomographic Estimate of Ocean Advective Heat Flux: A Numerical Assessment in the Norwegian Sea

    DTIC Science & Technology

    1990-06-01

    of transceivers used and the characteristics of the sound channel. In the assessment we use the General Digital Environmental Model (GDEM), a climatological data base, to simulate an ocean area 550 x 550...

  15. Smart Climatology Applications for Undersea Warfare

    DTIC Science & Technology

    2008-09-01

    Comparisons of these climatologies with existing Navy climatologies based on the Generalized Digital Environmental Model (GDEM) reveal differences in sonic...undersea warfare. Comparing climatologies based on the Generalized Digital Environmental Model (GDEM) to our smart ocean climatologies reveals a number of differences.

  16. Infrared and Raman Spectroscopy: A Discovery-Based Activity for the General Chemistry Curriculum

    ERIC Educational Resources Information Center

    Borgsmiller, Karen L.; O'Connell, Dylan J.; Klauenberg, Kathryn M.; Wilson, Peter M.; Stromberg, Christopher J.

    2012-01-01

    A discovery-based method is described for incorporating the concepts of IR and Raman spectroscopy into the general chemistry curriculum. Students use three sets of springs to model the properties of single, double, and triple covalent bonds. Then, Gaussian 03W molecular modeling software is used to illustrate the relationship between bond…

  17. A Logical Account of Diagnosis with Multiple Theories

    NASA Technical Reports Server (NTRS)

    Pandurang, P.; Lum, Henry Jr. (Technical Monitor)

    1994-01-01

    Model-based diagnosis is a powerful, first-principles approach to diagnosis. The primary drawback with model-based diagnosis is that it is based on a system model, and this model might be inappropriate. The inappropriateness of models usually stems from the fundamental tradeoff between completeness and efficiency. Recently, Struss has developed an elegant proposal for diagnosis with multiple models. Struss characterizes models as relations and develops a precise notion of abstraction. He defines relations between models and analyzes the effect of a model switch on the space of possible diagnoses. In this paper we extend Struss's proposal in three ways. First, our account of diagnosis with multiple models is based on representing models as more expressive first-order theories, rather than as relations. A key technical contribution is the use of a general notion of abstraction based on interpretations between theories. Second, Struss conflates component modes with models, requiring him to define model relations, such as choices, that result in non-relational models. We avoid this problem by differentiating component modes from models. Third, we present a more general account of simplifications that correctly handles situations where the simplification contradicts the base theory.

  18. A General Water Resources Regulation Software System in China

    NASA Astrophysics Data System (ADS)

    LEI, X.

    2017-12-01

    To avoid repeated re-implementation of core modules for normal and emergency water resources regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common requirements for water resources regulation and emergency management. It provides a customizable, extensible software framework supporting secondary development for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system enables business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve national water resources regulation decision-making. The general software system comprises four main modules: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-build water resources regulation decision-making systems; 2) a complete set of model base and model computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; and 4) a database that satisfies the business functions and functional requirements of general water resources regulation software, providing technical support for building basin or regional water resources regulation models.

  19. An Open Trial of an Acceptance-Based Behavior Therapy for Generalized Anxiety Disorder

    ERIC Educational Resources Information Center

    Roemer, Lizabeth; Orsillo, Susan M.

    2007-01-01

    Research suggests that experiential avoidance may play an important role in generalized anxiety disorder (GAD; see Roemer, L., & Orsillo, S.M. (2002). "Expanding our conceptualization of and treatment for generalized anxiety disorder: Integrating mindfulness/acceptance-based approaches with existing cognitive-behavioral models." "Clinical…

  20. Testing the generality of above-ground biomass allometry across plant functional types at the continent scale.

    PubMed

    Paul, Keryn I; Roxburgh, Stephen H; Chave, Jerome; England, Jacqueline R; Zerihun, Ayalsew; Specht, Alison; Lewis, Tom; Bennett, Lauren T; Baker, Thomas G; Adams, Mark A; Huxtable, Dan; Montagu, Kelvin D; Falster, Daniel S; Feller, Mike; Sochacki, Stan; Ritson, Peter; Bastin, Gary; Bartle, John; Wildy, Dan; Hobbs, Trevor; Larmour, John; Waterworth, Rob; Stewart, Hugh T L; Jonson, Justin; Forrester, David I; Applegate, Grahame; Mendham, Daniel; Bradford, Matt; O'Grady, Anthony; Green, Daryl; Sudmeyer, Rob; Rance, Stan J; Turner, John; Barton, Craig; Wenk, Elizabeth H; Grove, Tim; Attiwill, Peter M; Pinkard, Elizabeth; Butler, Don; Brooksbank, Kim; Spencer, Beren; Snowdon, Peter; O'Brien, Nick; Battaglia, Michael; Cameron, David M; Hamilton, Steve; McAuthur, Geoff; Sinclair, Jenny

    2016-06-01

    Accurate ground-based estimation of the carbon stored in terrestrial ecosystems is critical to quantifying the global carbon budget. Allometric models provide cost-effective methods for biomass prediction. But do such models vary with ecoregion or plant functional type? We compiled 15 054 measurements of individual tree or shrub biomass from across Australia to examine the generality of allometric models for above-ground biomass prediction. This provided a robust case study because Australia includes ecoregions ranging from arid shrublands to tropical rainforests, and has a rich history of biomass research, particularly in planted forests. Regardless of ecoregion, for five broad categories of plant functional type (shrubs; multistemmed trees; trees of the genus Eucalyptus and closely related genera; other trees of high wood density; and other trees of low wood density), relationships between biomass and stem diameter were generic. Simple power-law models explained 84-95% of the variation in biomass, with little improvement in model performance when other plant variables (height, bole wood density), or site characteristics (climate, age, management) were included. Predictions of stand-based biomass from allometric models of varying levels of generalization (species-specific, plant functional type) were validated using whole-plot harvest data from 17 contrasting stands (range: 9-356 Mg ha⁻¹). Losses in efficiency of prediction were <1% if generalized models were used in place of species-specific models. Furthermore, application of generalized multispecies models did not introduce significant bias in biomass prediction in 92% of the 53 species tested. Further, overall efficiency of stand-level biomass prediction was 99%, with a mean absolute prediction error of only 13%. Hence, for cost-effective prediction of biomass across a wide range of stands, we recommend use of generic allometric models based on plant functional types. Development of new species-specific models is only warranted when gains in accuracy of stand-based predictions are relatively high (e.g. high-value monocultures). © 2015 John Wiley & Sons Ltd.
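
    The generic power-law form recommended here, B = a * D^b, is conventionally fitted by least squares on log-transformed data. A minimal sketch on synthetic measurements; a back-transformation bias correction is often applied in practice but omitted for brevity:

    ```python
    # Fit the power-law allometry B = a * D**b by linear regression in log-log space.
    import numpy as np

    rng = np.random.default_rng(3)
    D = rng.uniform(5, 60, size=200)                        # stem diameter, cm (synthetic)
    B = 0.05 * D**2.5 * rng.lognormal(0.0, 0.3, size=200)   # biomass, kg, with noise

    b, log_a = np.polyfit(np.log(D), np.log(B), 1)
    print("a =", np.exp(log_a), "b =", b)                   # expect roughly a = 0.05, b = 2.5
    ```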

  1. Estimation of Standard Error of Regression Effects in Latent Regression Models Using Binder's Linearization. Research Report. ETS RR-07-09

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas

    2007-01-01

    Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…

  2. Software Design Description for the HYbrid Coordinate Ocean Model (HYCOM), Version 2.2

    DTIC Science & Technology

    2009-02-12

    scalars. J. Phys. Oceanogr. 32: 240–264. Carnes, M. (2002). Data base description for the Generalized Digital Environmental Model (GDEM-V)... Acronyms: FCT, Flux-Corrected Transport scheme; GDEM, Generalized Digital Environmental Model; GISS, NASA Goddard Institute for Space Studies...

  3. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models), and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (~50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
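
    A minimal sketch of the presence-absence comparison pattern described above, using scikit-learn stand-ins (logistic regression for the GLM, gradient boosting for BRT, and a random forest) plus a simple prediction-averaging ensemble; the GAM is omitted because scikit-learn has no native implementation, and the synthetic data are placeholders:

    ```python
    # Compare presence/absence classifiers by AUC, plus an averaged ensemble.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "GLM": LogisticRegression(max_iter=1000),
        "BRT": GradientBoostingClassifier(),
        "RF": RandomForestClassifier(),
    }
    probs = {}
    for name, m in models.items():
        probs[name] = m.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
        print(name, roc_auc_score(yte, probs[name]))

    ensemble = np.mean(list(probs.values()), axis=0)  # average the model predictions
    print("ensemble", roc_auc_score(yte, ensemble))
    ```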

  4. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
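
    A minimal sketch of the inference loop described above: simulate the stochastic model at proposed parameters, fit a parametric (here Gaussian) approximation to the simulated summary statistics, and use the resulting likelihood inside Metropolis-Hastings. The toy simulator below stands in for a forest model such as FORMIND:

    ```python
    # Parametric (synthetic) likelihood inside a Metropolis-Hastings sampler.
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_summary(theta, n_rep=100):
        """Stand-in stochastic model: returns replicated summary statistics."""
        return theta + rng.normal(0, 1.0, size=n_rep)

    observed = 3.2  # observed summary statistic (placeholder)

    def log_synthetic_likelihood(theta):
        sims = simulate_summary(theta)
        mu, sd = sims.mean(), sims.std(ddof=1)
        return -0.5 * ((observed - mu) / sd) ** 2 - np.log(sd)

    theta, samples = 0.0, []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.5)  # random-walk proposal
        if np.log(rng.uniform()) < log_synthetic_likelihood(prop) - log_synthetic_likelihood(theta):
            theta = prop
        samples.append(theta)
    print("posterior mean ~", np.mean(samples[1000:]))  # should approach 3.2
    ```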

  5. Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO.

    PubMed

    Hamad, I J; Manuel, L O; Aligia, A A

    2018-04-27

    Tetragonal CuO (T-CuO) has attracted attention because of its structure similar to that of the cuprates. It has been recently proposed as a compound whose study can give an end to the long debate about the proper microscopic modeling for cuprates. In this work, we rigorously derive an effective one-band generalized t-J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and make an estimative calculation of its parameters, based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped in antiferromagnetically ordered half filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t-J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.

  6. Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO

    NASA Astrophysics Data System (ADS)

    Hamad, I. J.; Manuel, L. O.; Aligia, A. A.

    2018-04-01

    Tetragonal CuO (T-CuO) has attracted attention because of its structure similar to that of the cuprates. It has been recently proposed as a compound whose study can give an end to the long debate about the proper microscopic modeling for cuprates. In this work, we rigorously derive an effective one-band generalized t-J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and make an estimative calculation of its parameters, based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped in antiferromagnetically ordered half filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t-J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.

  7. Investigating the Effect of Damage Progression Model Choice on Prognostics Performance

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Narasimhan, Sriram; Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2011-01-01

    The success of model-based approaches to systems health management depends largely on the quality of the underlying models. In model-based prognostics, it is especially the quality of the damage progression models, i.e., the models describing how damage evolves as the system operates, that determines the accuracy and precision of remaining useful life predictions. Several common forms of these models are generally assumed in the literature, but are often not supported by physical evidence or physics-based analysis. In this paper, using a centrifugal pump as a case study, we develop different damage progression models. In simulation, we investigate how model changes influence prognostics performance. Results demonstrate that, in some cases, simple damage progression models are sufficient. But, in general, the results show a clear need for damage progression models that are accurate over long time horizons under varied loading conditions.
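
    A tiny illustration of why the assumed form matters: two damage-progression curves with different functional forms can imply very different end-of-life times, and hence different remaining-useful-life predictions. The forms and constants below are illustrative, not the paper's pump wear models:

    ```python
    # Two common damage-progression forms and the end-of-life times they imply.
    import numpy as np

    t = np.linspace(0, 100, 1001)
    D_lin = 0.01 * t                    # linear damage growth
    D_exp = 0.002 * np.exp(0.08 * t)    # exponential damage growth

    threshold = 0.8                     # damage level defining end of life
    eol_lin = t[np.argmax(D_lin >= threshold)]
    eol_exp = t[np.argmax(D_exp >= threshold)]
    print("EOL (linear):", eol_lin, " EOL (exponential):", eol_exp)
    ```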

  8. Relational similarity-based model of data part 1: foundations and query systems

    NASA Astrophysics Data System (ADS)

    Belohlavek, Radim; Vychodil, Vilem

    2017-10-01

    We present a general rank-aware model of data which supports handling of similarity in relational databases. The model is based on the assumption that in many cases it is desirable to replace equalities on values in data tables by similarity relations expressing degrees to which the values are similar. In this context, we study various phenomena which emerge in the model, including similarity-based queries and similarity-based data dependencies. The central notion in our model is that of a ranked data table over domains with similarities, which is our counterpart to the notion of a relation on a relation scheme from the classical relational model. Compared to other approaches which cover related problems, we do not propose a similarity-based or ranking module on top of the classical relational model. Instead, we generalize the very core of the model by replacing the classical, two-valued logic upon which the classical model is built by a more general logic involving a scale of truth degrees that, in addition to the classical truth degrees 0 and 1, contains intermediate truth degrees. While the classical truth degrees 0 and 1 represent nonequality and equality of values, and subsequently mismatch and match of queries, the intermediate truth degrees in the new model represent similarity of values and partial match of queries. Moreover, the truth functions of many-valued logical connectives in the new model serve to aggregate degrees of similarity. The presented approach is conceptually clean, logically sound, and retains most properties of the classical model while enabling us to employ new types of queries and data dependencies. Most importantly, similarity is not handled in an ad hoc way or by putting a "similarity module" atop the classical model in our approach. Rather, it is consistently viewed as a notion that generalizes and replaces equality in the very core of the relational model. We present fundamentals of the formal model and two equivalent query systems which are analogues of the classical relational algebra and domain relational calculus with range declarations. In the sequel to this paper, we deal with similarity-based dependencies.
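
    A minimal sketch of the core idea: replace equality tests in a selection query with similarity functions and aggregate the per-attribute degrees with a t-norm, yielding a ranked result. The product t-norm, the similarity functions, and the sample relation below are illustrative choices, not the paper's formal system:

    ```python
    # Similarity-based selection: each tuple gets a truth degree in [0, 1] instead of in/out.
    def sim_price(a, b, scale=50.0):
        return max(0.0, 1.0 - abs(a - b) / scale)   # similarity on a numeric domain

    def t_norm(x, y):
        return x * y                                # product t-norm as the aggregator

    rows = [{"city": "Oslo", "price": 120}, {"city": "Bergen", "price": 95},
            {"city": "Oslo", "price": 200}]
    query = {"city": "Oslo", "price": 100}

    ranked = sorted(
        ((t_norm(1.0 if r["city"] == query["city"] else 0.6,  # 0.6: assumed city similarity
                 sim_price(r["price"], query["price"])), r) for r in rows),
        reverse=True, key=lambda pair: pair[0])
    for degree, row in ranked:
        print(round(degree, 2), row)                # a ranked data table, not a crisp set
    ```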

  9. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
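
    The basis properties relied on here (nonnegativity and unity of support, i.e., partition of unity) are those of the Bernstein polynomials B(i,n)(x) = C(n,i) x^i (1-x)^(n-i); a quick numeric check:

    ```python
    # Bernstein polynomial basis: nonnegative on [0, 1] and sums to one (partition of unity).
    from math import comb

    def bernstein_basis(n, x):
        return [comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

    vals = bernstein_basis(4, 0.3)
    print(vals)        # all entries >= 0
    print(sum(vals))   # 1.0 -- hence interpretable as fuzzy membership degrees
    ```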

  10. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. The estimator's mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
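
    A minimal numeric sketch of the two ingredients combined in this article: differencing to remove the nonparametric component of a partial linear model, then a ridge-type estimate on the differenced data. The simple first-difference weights and the unrestricted setting are simplifying assumptions; the paper treats generalized differences, correlated errors, and linear restrictions:

    ```python
    # Difference-based ridge estimation for y = X*beta + f(t) + error.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 200
    t = np.sort(rng.uniform(0, 1, n))           # ordering by t makes f(t) locally smooth
    X = rng.normal(size=(n, 2))
    beta = np.array([1.0, -2.0])
    y = X @ beta + np.sin(2 * np.pi * t) + rng.normal(0, 0.2, n)

    # First differencing largely cancels the smooth nonparametric component f(t).
    Xd, yd = np.diff(X, axis=0), np.diff(y)

    k = 0.1                                     # ridge parameter
    beta_hat = np.linalg.solve(Xd.T @ Xd + k * np.eye(2), Xd.T @ yd)
    print(beta_hat)                             # close to [1, -2]
    ```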

  11. Argumentation in Science Education: A Model-Based Framework

    ERIC Educational Resources Information Center

    Bottcher, Florian; Meisert, Anke

    2011-01-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons…

  12. Development of a General Form CO 2 and Brine Flux Input Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansoor, K.; Sun, Y.; Carroll, S.

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO 2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO 2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.
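
    The ROM workflow the report describes, fitting a cheap surrogate to expensive simulator output and then running Monte Carlo on the surrogate, can be sketched as follows (an illustrative toy stand-in for the wellbore simulator, not NRAP code):

      import numpy as np

      def expensive_process_model(perm, pressure):
          """Hypothetical stand-in for the physics-based wellbore simulator."""
          return np.exp(-1.0 / perm) * pressure ** 1.5

      rng = np.random.default_rng(0)
      # Training runs: design points spanning the parameter ranges.
      perm = rng.uniform(0.5, 2.0, 200)
      pres = rng.uniform(1.0, 3.0, 200)
      flux = expensive_process_model(perm, pres)

      # Reduced-order model: a quadratic response surface fit by least squares.
      basis = lambda a, b: np.column_stack([np.ones_like(a), a, b, a*a, b*b, a*b])
      coef, *_ = np.linalg.lstsq(basis(perm, pres), flux, rcond=None)

      # Monte Carlo on the ROM at a fraction of the simulator's cost.
      p = rng.uniform(0.5, 2.0, 100_000)
      q = rng.uniform(1.0, 3.0, 100_000)
      print("mean leakage flux estimate:", (basis(p, q) @ coef).mean())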

  13. Real-time simulation of biological soft tissues: a PGD approach.

    PubMed

    Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F

    2013-05-01

    We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition (PGD) techniques, a generalization of proper orthogonal decomposition (POD). Proper generalized decomposition can be considered as a means of a priori model order reduction and provides a physics-based metamodel without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general metamodel is computed, and an online evaluation phase in which the results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
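
    The offline/online split is what makes kilohertz rates attainable: the expensive offline phase precomputes a separated representation u(x, p) ≈ Σ F_i(x) G_i(p), and the online phase merely sums a few precomputed modes. A schematic Python sketch (assumed mode counts and shapes, not the authors' solver):

      import numpy as np

      # Offline (slow, done once): suppose a PGD solver produced M separated modes.
      M, nx, npar = 8, 1000, 50
      rng = np.random.default_rng(1)
      F = rng.standard_normal((M, nx))     # spatial modes F_i(x)
      G = rng.standard_normal((M, npar))   # parametric modes G_i(p) on a grid

      def online_evaluate(j):
          """Online (fast): field for parameter index j as sum_i F_i(x) G_i(p_j)."""
          return F.T @ G[:, j]             # O(M * nx) work: cheap enough for haptics

      u = online_evaluate(17)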

  14. Problem-Based Learning in Web-Based Science Classroom.

    ERIC Educational Resources Information Center

    Kim, Heeyoung; Chung, Ji-Sook; Kim, Younghoon

    The purpose of this paper is to discuss how general problem-based learning (PBL) models and social-constructivist perspectives are applied to the design and development of a Web-based science program, which emphasizes inquiry-based learning for fifth grade students. The paper also deals with the general features and learning process of a Web-based…

  15. Mimoza: web-based semantic zooming and navigation in metabolic networks.

    PubMed

    Zhukova, Anna; Sherman, David J

    2015-02-26

    The complexity of genome-scale metabolic models makes them quite difficult for human users to read, since they contain thousands of reactions that must be included for accurate computer simulation. Interestingly, hidden similarities between groups of reactions can be discovered and generalized to reveal higher-level patterns. The web-based navigation system Mimoza allows a human expert to explore metabolic network models in a semantically zoomable manner: the most general view represents the compartments of the model; the next view shows the generalized versions of reactions and metabolites in each compartment; and the most detailed view represents the initial network with a generalization-based layout (where similar metabolites and reactions are placed next to each other). This allows a human expert to grasp the general structure of the network and analyze it in a top-down manner. Mimoza can be installed standalone, used on-line at http://mimoza.bordeaux.inria.fr/, or installed in a Galaxy server for use in workflows. Mimoza views can be embedded in web pages or downloaded as COMBINE archives.

  16. Space station ECLSS integration analysis: Simplified General Cluster Systems Model, ECLS System Assessment Program enhancements

    NASA Technical Reports Server (NTRS)

    Ferguson, R. E.

    1985-01-01

    The database verification of the ECLS System Assessment Program (ESAP) is documented, and the changes made to enhance the flexibility of the water recovery subsystem simulations are given. All changes made to the database values are described, as are the software enhancements performed. The refined model documented herein constitutes the submittal of the General Cluster Systems Model. A source listing of the current version of ESAP is provided in Appendix A.

  17. Rank-preserving regression: a more robust rank regression model against outliers.

    PubMed

    Chen, Tian; Kowalski, Jeanne; Chen, Rui; Wu, Pan; Zhang, Hui; Feng, Changyong; Tu, Xin M

    2016-08-30

    Mean-based semi-parametric regression models such as the popular generalized estimating equations are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon-score-based rank regression (RR) provides estimates that are more robust to outliers than generalized estimating equations. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on functional response models. This functional-response-model-based alternative not only addresses limitations of the RR and its extensions for longitudinal data but, with its rank-preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd.
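
    Rank regression of the Wilcoxon-score type estimates the slope by minimizing Jaeckel's dispersion of the residuals rather than their squares; a compact Python sketch (a generic R-estimator, not the authors' functional-response extension):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import rankdata

      def jaeckel_dispersion(beta, X, y):
          """Wilcoxon-score dispersion: sum of a(R(e_i)) * e_i with
          scores a(i) = sqrt(12) * (i/(n+1) - 1/2)."""
          e = y - X @ beta
          n = e.size
          scores = np.sqrt(12.0) * (rankdata(e) / (n + 1) - 0.5)
          return np.sum(scores * e)

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 1))
      y = 1.0 + 2.0 * X[:, 0] + rng.standard_normal(100)
      y[:5] += 25.0                       # gross outliers that distort least squares

      fit = minimize(jaeckel_dispersion, x0=np.zeros(1), args=(X, y),
                     method="Nelder-Mead")
      slope = fit.x[0]                    # stays near 2 despite the outliers
      intercept = np.median(y - X[:, 0] * slope)  # the dispersion is location-
      print(slope, intercept)                     # invariant, so the intercept is
                                                  # estimated separately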

  18. Requirements Modeling with the Aspect-oriented User Requirements Notation (AoURN): A Case Study

    NASA Astrophysics Data System (ADS)

    Mussbacher, Gunter; Amyot, Daniel; Araújo, João; Moreira, Ana

    The User Requirements Notation (URN) is a recent ITU-T standard that supports requirements engineering activities. The Aspect-oriented URN (AoURN) adds aspect-oriented concepts to URN, creating a unified framework that allows for scenario-based, goal-oriented, and aspect-oriented modeling. AoURN is applied to the car crash crisis management system (CCCMS), modeling its functional and non-functional requirements (NFRs). AoURN generally models all use cases, NFRs, and stakeholders as individual concerns and provides general guidelines for concern identification. AoURN handles interactions between concerns, capturing their dependencies and conflicts as well as the resolutions. We present a qualitative comparison of aspect-oriented techniques for scenario-based and goal-oriented requirements engineering. An evaluation based on metrics adapted from the literature, together with a task-based evaluation, suggests that AoURN models are more scalable than URN models and exhibit better modularity, reusability, and maintainability.

  19. Discovering the Power of Individual-Based Modelling in Teaching and Learning: The Study of a Predator-Prey System

    ERIC Educational Resources Information Center

    Ginovart, Marta

    2014-01-01

    The general aim is to promote the use of individual-based models (biological agent-based models) in teaching and learning contexts in life sciences and to make their progressive incorporation into academic curricula easier, complementing other existing modelling strategies more frequently used in the classroom. Modelling activities for the study…

  20. Multiple memory systems as substrates for multiple decision systems

    PubMed Central

    Doll, Bradley B.; Shohamy, Daphna; Daw, Nathaniel D.

    2014-01-01

    It has recently become widely appreciated that value-based decision making is supported by multiple computational strategies. In particular, animal and human behavior in learning tasks appears to include habitual responses described by prominent model-free reinforcement learning (RL) theories, but also more deliberative or goal-directed actions that can be characterized by a different class of theories, model-based RL. The latter theories evaluate actions by using a representation of the contingencies of the task (as with a learned map of a spatial maze), called an “internal model.” Given the evidence of behavioral and neural dissociations between these approaches, they are often characterized as dissociable learning systems, though they likely interact and share common mechanisms. In many respects, this division parallels a longstanding dissociation in cognitive neuroscience between multiple memory systems, describing, at the broadest level, separate systems for declarative and procedural learning. Procedural learning has notable parallels with model-free RL: both involve learning of habits and both are known to depend on parts of the striatum. Declarative memory, by contrast, supports memory for single events or episodes and depends on the hippocampus. The hippocampus is thought to support declarative memory by encoding temporal and spatial relations among stimuli and thus is often referred to as a relational memory system. Such relational encoding is likely to play an important role in learning an internal model, the representation that is central to model-based RL. Thus, insofar as the memory systems represent more general-purpose cognitive mechanisms that might subserve performance on many sorts of tasks including decision making, these parallels raise the question whether the multiple decision systems are served by multiple memory systems, such that one dissociation is grounded in the other. Here we investigated the relationship between model-based RL and relational memory by comparing individual differences across behavioral tasks designed to measure either capacity. Human subjects performed two tasks, a learning and generalization task (acquired equivalence) which involves relational encoding and depends on the hippocampus; and a sequential RL task that could be solved by either a model-based or model-free strategy. We assessed the correlation between subjects’ use of flexible, relational memory, as measured by generalization in the acquired equivalence task, and their differential reliance on either RL strategy in the decision task. We observed a significant positive relationship between generalization and model-based, but not model-free, choice strategies. These results are consistent with the hypothesis that model-based RL, like acquired equivalence, relies on a more general-purpose relational memory system. PMID:24846190
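
    The distinction the authors draw can be made concrete: a model-free learner caches action values directly from reward prediction errors, whereas a model-based learner maintains an internal model (transitions and rewards) and plans by iterating over it. A minimal sketch of the two update rules in Python (illustrative only, not the task used in the paper):

      import numpy as np

      n_states, n_actions, alpha, gamma = 5, 2, 0.1, 0.95
      Q_mf = np.zeros((n_states, n_actions))                    # model-free cache
      T = np.ones((n_states, n_actions, n_states)) / n_states   # internal model: transitions
      R = np.zeros((n_states, n_actions))                       # internal model: rewards

      def model_free_update(s, a, r, s_next):
          """Habit-like temporal-difference update from a reward prediction error."""
          target = r + gamma * Q_mf[s_next].max()
          Q_mf[s, a] += alpha * (target - Q_mf[s, a])

      def model_based_values(n_sweeps=50):
          """Goal-directed evaluation: plan by value iteration over the learned model."""
          Q = np.zeros((n_states, n_actions))
          for _ in range(n_sweeps):
              Q = R + gamma * T @ Q.max(axis=1)  # one-step lookahead through the model
          return Q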

  1. Supervision--growing and building a sustainable general practice supervisor system.

    PubMed

    Thomson, Jennifer S; Anderson, Katrina J; Mara, Paul R; Stevenson, Alexander D

    2011-06-06

    This article explores various models and ideas for future sustainable general practice vocational training supervision in Australia. The general practitioner supervisor in the clinical practice setting is currently central to training the future general practice workforce. Finding ways to recruit, retain and motivate both new and experienced GP teachers is discussed, as is the creation of career paths for such teachers. Some of the newer methods of practice-based teaching are considered for further development, including vertically integrated teaching, e-learning, wave consulting and teaching on the run, teaching teams and remote teaching. Approaches to supporting and resourcing teaching and the required infrastructure are also considered. Further research into sustaining the practice-based general practice supervision model will be required.

  2. On Nonequivalence of Several Procedures of Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2005-01-01

    The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…

  3. Adaptive Long-Term Monitoring at Environmental Restoration Sites (ER-0629)

    DTIC Science & Technology

    2009-05-01

    [Excerpt garbled in extraction; recoverable content follows.] The report's figures include Figure 2-1 (General Flowchart of Software Application), Figure 2-2 (Overview of the Genetic Algorithm Approach), and Figure 2-3 (an example). Figure 2-1 is a general flowchart illustrating the application of the software; modeling of each monitoring event (e.g., contaminant mass based on interpolation) is provided by the Model Builder component.

  4. A parametric generalization of the Hayne estimator for line transect sampling

    USGS Publications Warehouse

    Burnham, Kenneth P.

    1979-01-01

    The Hayne model for line transect sampling is generalized by using an elliptical (rather than circular) flushing model for animal detection. By assuming the ratio of major- and minor-axis lengths is constant for all animals, a model results which allows estimation of population density based directly upon sighting distances and sighting angles. The derived estimator of animal density is a generalization of the Hayne estimator for line transect sampling.
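
    For reference, the classical Hayne estimator that the paper generalizes can be written (in standard line-transect notation; the elliptical flushing model modifies the distance term) as
    \[
      \hat{D} = \frac{n}{2L} \cdot \frac{1}{n} \sum_{i=1}^{n} \frac{1}{r_i},
    \]
    where $n$ is the number of animals sighted, $L$ is the transect length, and $r_i$ is the $i$-th sighting distance.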

  5. Renewal processes based on generalized Mittag-Leffler waiting times

    NASA Astrophysics Data System (ADS)

    Cahoy, Dexter O.; Polito, Federico

    2013-03-01

    The fractional Poisson process has recently attracted experts from several fields of study. Because it naturally generalizes the ordinary Poisson process, the model is appealing for real-world applications. In this paper, we generalize the standard and fractional Poisson processes through the waiting time distribution and show their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. These generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived; these procedures are necessary to make the models usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
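
    Sample paths follow directly once the waiting times can be drawn. A Python sketch using the inversion formula commonly used to simulate Mittag-Leffler waiting times for the fractional Poisson process (formula assumed from the simulation literature; beta = 1 recovers exponential waiting times):

      import numpy as np

      def mittag_leffler_waits(n, beta, scale=1.0, rng=None):
          """Draw n Mittag-Leffler(beta) waiting times, 0 < beta <= 1."""
          rng = np.random.default_rng(rng)
          u, v = rng.uniform(size=n), rng.uniform(size=n)
          factor = (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
                    - np.cos(beta * np.pi)) ** (1.0 / beta)
          return -scale * np.log(u) * factor   # heavy-tailed for beta < 1

      waits = mittag_leffler_waits(10_000, beta=0.8, rng=0)
      arrivals = np.cumsum(waits)              # renewal epochs of the process
      # The sample mean is unstable by design: the theoretical mean is
      # infinite for beta < 1, which is the point of the generalization.
      print(np.median(waits))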

  6. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms: the so-called fast function extraction, which is a generalized linear regression algorithm, and genetic programming, which is a very general method. Both are able to combine functions in such a way that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and, as a real-world application, by predicting solar power production based on energy production observations at a given site together with the weather forecast.
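
    The flavor of fast function extraction can be conveyed in a few lines: enumerate a library of candidate basis functions, then let a sparse linear fit select the few that matter. A simplified FFX-style sketch in Python (not the published algorithm):

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      x = rng.uniform(-2.0, 2.0, 300)
      y = 1.5 * np.sin(x) + 0.5 * x**2 + 0.05 * rng.standard_normal(300)

      # Candidate basis library: linear in the coefficients, nonlinear in x.
      library = {"x": x, "x^2": x**2, "x^3": x**3,
                 "sin(x)": np.sin(x), "cos(x)": np.cos(x), "exp(x)": np.exp(x)}
      names = list(library)
      Phi = np.column_stack([library[k] for k in names])

      model = Lasso(alpha=0.01).fit(Phi, y)
      expr = " + ".join(f"{c:.2f}*{k}" for c, k in zip(model.coef_, names)
                        if abs(c) > 1e-3)
      print("identified model: y =", expr)  # expect roughly 0.5*x^2 + 1.5*sin(x)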

  7. Simulating Runoff from a Grid Based Mercury Model: Flow Comparisons

    EPA Science Inventory

    Several mercury cycling models, including general mass balance approaches, mixed-batch reactors in streams or lakes, or regional process-based models, exist to assess the ecological exposure risks associated with anthropogenically increased atmospheric mercury (Hg) deposition, so...

  8. Model-based synthesis of locally contingent responses to global market signals

    NASA Astrophysics Data System (ADS)

    Magliocca, N. R.

    2015-12-01

    Rural livelihoods and the land systems on which they depend are increasingly influenced by distant markets through economic globalization. Place-based analyses of land and livelihood system sustainability must then consider both proximate and distant influences on local decision-making. Thus, advancing land change theory in the context of economic globalization calls for a systematic understanding of the general processes as well as local contingencies shaping local responses to global signals. Synthesis of insights from place-based case studies of land and livelihood change is a path forward for developing such systematic knowledge. This paper introduces a model-based synthesis approach to investigating the influence of local socio-environmental and agent-level factors in mediating land-use and livelihood responses to changing global market signals. A generalized agent-based modeling framework is applied to six case-study sites that differ in environmental conditions, market access and influence, and livelihood settings. The largest modeled land conversions and livelihood transitions to market-oriented production occurred in sites with relatively productive agricultural land and/or with limited livelihood options. Experimental shifts in the distributions of agents' risk tolerances generally acted to attenuate or amplify responses to changes in global market signals. Importantly, however, responses of agents at different points in the risk tolerance distribution varied widely, with the wealth gap growing wider between agents with higher or lower risk tolerance. These results demonstrate that model-based synthesis is a promising approach to overcome many of the challenges of current synthesis methods in land change science, and to identify generalized as well as locally contingent responses to global market signals.
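
    The mechanism described, heterogeneous risk tolerances mediating agents' responses to a global price signal, reduces to a simple per-agent decision rule. A deliberately stripped-down Python sketch (hypothetical parameters, not the paper's modeling framework):

      import numpy as np

      rng = np.random.default_rng(42)
      n_agents = 1000
      risk_tolerance = rng.beta(2, 5, n_agents)   # heterogeneous across agents
      wealth = np.ones(n_agents)

      def step(global_price, volatility=0.3):
          """Agents convert to market-oriented production when the expected
          return, weighted by their risk tolerance, beats a threshold."""
          expected_return = global_price - 1.0
          converts = expected_return * risk_tolerance > 0.05
          shock = rng.normal(expected_return, volatility, n_agents)
          wealth[converts] += shock[converts]      # risk-takers ride the market
          wealth[~converts] += 0.02                # subsistence growth
          return converts.mean()

      for price in (1.0, 1.2, 1.5, 1.2):           # a market boom and retreat
          print("share converting:", step(price))
      gap = (wealth[risk_tolerance > 0.5].mean()
             - wealth[risk_tolerance <= 0.5].mean())
      print("wealth gap (high vs low risk tolerance):", gap)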

  9. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origins often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach is further refined, and particular issues that arise when coupling models in which pH is a state variable are pointed out.

  10. Applying multibeam sonar and mathematical modeling for mapping seabed substrate and biota of offshore shallows

    NASA Astrophysics Data System (ADS)

    Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander

    2017-06-01

    Both basic science and marine spatial planning are in need of high-resolution, spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings are essential for moving towards a unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms was tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.
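
    In practice the workflow pairs sonar-derived predictors with video ground truth; a condensed random forest sketch with scikit-learn (illustrative synthetic data, not the study's):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n = 500
      mean_depth = rng.uniform(5.0, 40.0, n)           # from multibeam bathymetry
      mean_backscatter = rng.uniform(-35.0, -10.0, n)  # from backscatter mosaics
      # Hypothetical video-based ground truth: True = hard substrate present.
      hard = (mean_backscatter + 0.3 * mean_depth
              + rng.normal(0.0, 3.0, n)) > -15.0

      X = np.column_stack([mean_depth, mean_backscatter])
      rf = RandomForestClassifier(n_estimators=300, random_state=0)
      print("CV accuracy:", cross_val_score(rf, X, hard, cv=5).mean())
      rf.fit(X, hard)
      print("importances:", dict(zip(["mean_depth", "mean_backscatter"],
                                     rf.feature_importances_)))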

  11. Cluster and propensity based approximation of a network

    PubMed Central

    2013-01-01

    Background: The models in this article generalize current models for both correlation networks and multigraph networks. Correlation networks are widely applied in genomics research. In contrast to general networks, it is straightforward to test the statistical significance of an edge in a correlation network. It is also easy to decompose the underlying correlation matrix and generate informative network statistics such as the module eigenvector. However, correlation networks only capture the connections between numeric variables. An open question is whether one can find suitable decompositions of the similarity measures employed in constructing general networks. Multigraph networks are attractive because they support likelihood based inference. Unfortunately, it is unclear how to adjust current statistical methods to detect the clusters inherent in many data sets. Results: Here we present an intuitive and parsimonious parametrization of a general similarity measure such as a network adjacency matrix. The cluster and propensity based approximation (CPBA) of a network not only generalizes correlation network methods but also multigraph methods. In particular, it gives rise to a novel and more realistic multigraph model that accounts for clustering and provides likelihood based tests for assessing the significance of an edge after controlling for clustering. We present a novel Majorization-Minimization (MM) algorithm for estimating the parameters of the CPBA. To illustrate the practical utility of the CPBA of a network, we apply it to gene expression data and to a bi-partite network model for diseases and disease genes from the Online Mendelian Inheritance in Man (OMIM). Conclusions: The CPBA of a network is theoretically appealing since a) it generalizes correlation and multigraph network methods, b) it improves likelihood based significance tests for edge counts, c) it directly models higher-order relationships between clusters, and d) it suggests novel clustering algorithms. The CPBA of a network is implemented in Fortran 95 and bundled in the freely available R package PropClust. PMID:23497424

  12. Generalized Ordinary Differential Equation Models

    PubMed Central

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-01-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787

  13. Generalized Ordinary Differential Equation Models.

    PubMed

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-10-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.
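
    The estimation problem such models address can be pictured with a standard likelihood-based ODE fit in which the ODE is solved numerically inside the objective; a generic SciPy sketch (not the authors' GODE machinery, which additionally treats discrete data and the numerical solver error rigorously):

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      def logistic(t, x, r, K):
          return [r * x[0] * (1.0 - x[0] / K)]

      # Hypothetical noisy observations of a logistic trajectory.
      rng = np.random.default_rng(0)
      t_obs = np.linspace(0.0, 10.0, 25)
      truth = solve_ivp(logistic, (0.0, 10.0), [0.1], args=(0.8, 5.0),
                        t_eval=t_obs).y[0]
      y_obs = truth + rng.normal(0.0, 0.1, t_obs.size)

      def neg_log_lik(theta):
          """Gaussian negative log-likelihood; the inner ODE solve is the source
          of the numerical error that the GODE asymptotics account for."""
          r, K, sigma = theta
          if min(r, K, sigma) <= 0.0:
              return np.inf
          pred = solve_ivp(logistic, (0.0, 10.0), [0.1], args=(r, K),
                           t_eval=t_obs).y[0]
          return (0.5 * np.sum((y_obs - pred) ** 2) / sigma**2
                  + t_obs.size * np.log(sigma))

      fit = minimize(neg_log_lik, x0=[0.5, 4.0, 0.2], method="Nelder-Mead")
      print(fit.x)  # estimates of (r, K, sigma)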

  14. Working covariance model selection for generalized estimating equations.

    PubMed

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
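
    The Gaussian pseudolikelihood criterion scores a working covariance model by plugging the GEE residuals into a multivariate normal log-likelihood; a self-contained Python sketch comparing independence and exchangeable working structures (a simplified illustration, not the paper's software):

      import numpy as np

      rng = np.random.default_rng(1)
      n_subj, n_time = 200, 4
      x = rng.standard_normal((n_subj, n_time))
      # Data generated with exchangeable correlation: a shared subject effect.
      y = 2.0 * x + rng.normal(size=(n_subj, 1)) + rng.normal(size=(n_subj, n_time))

      beta = (x * y).sum() / (x * x).sum()  # working-independence GEE estimate
      r = y - beta * x                      # residuals by cluster, (n_subj, n_time)
      sigma2 = r.var()
      C = np.corrcoef(r.T)                  # empirical within-cluster correlation
      rho = (C.sum() - n_time) / (n_time * (n_time - 1))

      def gaussian_pseudolik(V):
          """Sum over clusters of the N(0, V) log-density of the residuals."""
          _, logdet = np.linalg.slogdet(V)
          quad = np.einsum("it,tu,iu->", r, np.linalg.inv(V), r)
          return -0.5 * (n_subj * logdet + quad)

      V_ind = sigma2 * np.eye(n_time)
      V_exch = sigma2 * ((1 - rho) * np.eye(n_time) + rho * np.ones((n_time, n_time)))
      print("GPL, independence :", gaussian_pseudolik(V_ind))
      print("GPL, exchangeable :", gaussian_pseudolik(V_exch))  # larger under this truth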

  15. THE FUTURE OF COMPUTER-BASED TOXICITY PREDICTION: MECHANISM-BASED MODELS VS. INFORMATION MINING APPROACHES

    EPA Science Inventory

    When we speak of computer-based toxicity prediction, we are generally referring to a broad array of approaches which rely primarily upon chemical structure ...

  16. Spectrum Analysis of Inertial and Subinertial Motions Based on Analyzed Winds and Wind-Driven Currents from a Primitive Equation General Ocean Circulation Model.

    DTIC Science & Technology

    1982-12-01

    [Report documentation page garbled in extraction; recoverable content follows.] Fourier and rotary spectrum analysis of modeled inertial and subinertial motions, based on analyzed winds and wind-driven currents from a primitive equation general ocean circulation model (December 1982).

  17. Issues in assessing multi-institutional performance of BI-RADS-based CAD systems

    NASA Astrophysics Data System (ADS)

    Markey, Mia K.; Lo, Joseph Y.

    2005-04-01

    The purpose of this study was to investigate factors that impact the generalization of breast cancer computer-aided diagnosis (CAD) systems that utilize the Breast Imaging Reporting and Data System (BI-RADS). Data sets from four institutions were analyzed: Duke University Medical Center, University of Pennsylvania Medical Center, Massachusetts General Hospital, and Wake Forest University. The latter two data sets are subsets of the Digital Database for Screening Mammography. Each data set consisted of descriptions of mammographic lesions according to the BI-RADS lexicon, patient age, and pathology status (benign/malignant). Models were developed to predict pathology status from the BI-RADS descriptors and the patient age. Comparisons between the models built on data from the different institutions were made in terms of empirical (non-parametric) receiver operating characteristic (ROC) curves. Results suggest that BI-RADS-based CAD systems focused on specific classes of lesions may be more generally applicable than models that cover several lesion types. However, better generalization was seen in terms of the area under the ROC curve than in the partial area index (>90% sensitivity). Previous studies have illustrated the challenges in translating a BI-RADS-based CAD system from one institution to another. This study provides new insights into possible approaches to improve the generalization of BI-RADS-based CAD systems.
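
    Both comparison metrics, the full area under the empirical ROC curve and the partial index restricted to the high-sensitivity region, are straightforward to compute; a Python sketch (synthetic scores; note that definitions of the partial area index vary slightly across papers):

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, 400)  # pathology status: 1 = malignant
      score = np.where(y == 1, rng.normal(1.0, 1.0, 400),
                       rng.normal(0.0, 1.0, 400))

      fpr, tpr, _ = roc_curve(y, score)
      print("full AUC:", roc_auc_score(y, score))

      # Partial area index over sensitivity >= 0.9: area between the empirical
      # ROC curve and the TPR axis above 0.9, normalized by the maximum (0.1).
      mask = tpr >= 0.9
      partial = np.trapz(1.0 - fpr[mask], tpr[mask]) / 0.1
      print("partial area index (>90% sensitivity):", partial)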

  18. A geometric theory for Lévy distributions

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2014-08-01

    Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.

  19. A geometric theory for Lévy distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2014-08-15

    Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.

  20. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least square model and the autoregressive technique, etc. Due to the complex non-linear features of the LOD variation, the performances of the linear model predictors are not fully satisfactory. This paper applies a non-linear neural network - general regression neural network (GRNN) model to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
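
    A GRNN is essentially Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of the training targets, governed by a single smoothing parameter. A minimal Python sketch (a generic GRNN on toy data, not the authors' LOD series):

      import numpy as np

      def grnn_predict(X_train, y_train, X_new, sigma=0.2):
          """General regression neural network: kernel-weighted average of targets."""
          d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
          w = np.exp(-d2 / (2.0 * sigma**2))
          return (w @ y_train) / w.sum(axis=1)

      # Toy nonlinear series standing in for LOD change values.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 200)
      y = np.sin(t) + 0.1 * rng.standard_normal(200)
      X = t[:, None]
      X_tr, y_tr, X_te, y_te = X[::2], y[::2], X[1::2], y[1::2]

      y_hat = grnn_predict(X_tr, y_tr, X_te)
      print("RMSE:", np.sqrt(np.mean((y_hat - y_te) ** 2)))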

  1. Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation

    DTIC Science & Technology

    2010-01-01

    [Report documentation page garbled in extraction; recoverable content follows.] Matthew J. Carrier and Hans Ngodock, "Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation." The work builds on an earlier approach (2001) that sought to model error correlations based on the explicit solution of a generalized diffusion equation; here the implicit solution is used instead.

  2. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields conclusions similar to those of the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
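
    The two routes to the expansion coefficients differ only in how the polynomial metamodel is fit: regression solves a least-squares problem on sampled model runs, while projection integrates against the orthogonal basis. A one-parameter Python sketch with probabilists' Hermite polynomials (illustrative; the paper's model has many parameters and uses the Smolyak algorithm for the projection):

      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He

      def model(xi):
          """Toy stand-in for the pulse wave model output; xi ~ N(0, 1)."""
          return np.exp(0.5 * xi)

      deg = 6
      fact = np.array([math.factorial(k) for k in range(deg + 1)], dtype=float)

      # gPCE-R: least-squares regression on random model evaluations.
      rng = np.random.default_rng(0)
      xi = rng.standard_normal(2000)
      coef_R, *_ = np.linalg.lstsq(He.hermevander(xi, deg), model(xi), rcond=None)

      # gPCE-P: spectral projection with Gauss-Hermite(e) quadrature.
      nodes, weights = He.hermegauss(40)  # for the weight function exp(-x^2 / 2)
      moments = (weights * model(nodes)) @ He.hermevander(nodes, deg)
      coef_P = moments / (np.sqrt(2.0 * np.pi) * fact)

      print(np.allclose(coef_R, coef_P, atol=1e-3))
      # Variance (hence Sobol-type sensitivity) falls out of the coefficients:
      print("variance:", np.sum(coef_R[1:] ** 2 * fact[1:]))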

  3. Version Control in Project-Based Learning

    ERIC Educational Resources Information Center

    Milentijevic, Ivan; Ciric, Vladimir; Vojinovic, Oliver

    2008-01-01

    This paper deals with the development of a generalized model for version control systems application as a support in a range of project-based learning methods. The model is given as UML sequence diagram and described in detail. The proposed model encompasses a wide range of different project-based learning approaches by assigning a supervisory…

  4. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    ERIC Educational Resources Information Center

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  5. Development of an irrigation scheduling software based on model predicted crop water stress

    USDA-ARS?s Scientific Manuscript database

    Modern irrigation scheduling methods are generally based on sensor-monitored soil moisture regimes rather than crop water stress, which is difficult to measure in real time but can be computed using agricultural system models. In this study, an irrigation scheduling software based on the RZWQM2 model pr...

  6. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach exploits the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration while applying only one disturbance to the deformable mirror (one correction per disturbance); the modes are reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which achieves one aberration correction only after applying N disturbances to the deformable mirror (one correction per N disturbances).

  7. The effectiveness of clinical problem-based learning model of medico-jurisprudence education on general law knowledge for Obstetrics/Gynecological interns.

    PubMed

    Chang, Hui-Chin; Wang, Ning-Yen; Ko, Wen-Ru; Yu, You-Tsz; Lin, Long-Yau; Tsai, Hui-Fang

    2017-06-01

    The effective method of teaching medico-jurisprudence to medical students is unclear. This study was designed to evaluate the effectiveness of a problem-based learning (PBL) model for teaching medico-jurisprudence in a clinical setting on General Law Knowledge (GLK) for medical students. Senior medical students attending either a campus-based law curriculum or Obstetrics/Gynecology (Ob/Gyn) clinical setting morning meetings from February to July 2015 were enrolled. A validated questionnaire comprising 45 questions was completed before and after the law education. The interns attending the clinical setting small-group improvisation medico-jurisprudence problem-based learning education had significantly better GLK scores than the students attending the campus-based medical law education course after the period studied. The PBL model of teaching medico-jurisprudence is an ideal alternative pedagogy for the medical law education curriculum. Copyright © 2017. Published by Elsevier B.V.

  8. Seasonal changes in the atmospheric heat balance simulated by the GISS general circulation model

    NASA Technical Reports Server (NTRS)

    Stone, P. H.; Chow, S.; Helfand, H. M.; Quirk, W. J.; Somerville, R. C. J.

    1975-01-01

    Tests of the ability of numerical general circulation models to simulate the atmosphere have focused so far on simulations of the January climatology. These models generally prescribe boundary conditions such as sea surface temperature, but this does not prevent testing their ability to simulate seasonal changes in atmospheric processes that accompany prescribed seasonal changes in boundary conditions. Experiments to simulate changes in the zonally averaged heat balance are discussed, since many simplified models of climatic processes are based solely on this balance.

  9. A New Ductility Exhaustion Model for High Temperature Low Cycle Fatigue Life Prediction of Turbine Disk Alloys

    NASA Astrophysics Data System (ADS)

    Zhu, Shun-Peng; Huang, Hong-Zhong; Li, Haiqing; Sun, Rui; Zuo, Ming J.

    2011-06-01

    Based on ductility exhaustion theory and the generalized energy-based damage parameter, a new viscosity-based life prediction model is introduced to account for the mean strain/stress effects in the low cycle fatigue regime. The loading waveform parameters and cyclic hardening effects are also incorporated within this model. It is assumed that damage accrues by means of viscous flow and ductility consumption is only related to plastic strain and creep strain under high temperature low cycle fatigue conditions. In the developed model, dynamic viscosity is used to describe the flow behavior. This model provides a better prediction of Superalloy GH4133's fatigue behavior when compared to Goswami's ductility model and the generalized damage parameter. Under non-zero mean strain conditions, moreover, the proposed model provides more accurate predictions of Superalloy GH4133's fatigue behavior than that with zero mean strains.

  10. Long short-term memory for speaker generalization in supervised speech separation

    PubMed Central

    Chen, Jitong; Wang, DeLiang

    2017-01-01

    Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. To improve speaker generalization, a separation model based on long short-term memory (LSTM) is proposed, which naturally accounts for temporal dynamics of speech. Systematic evaluation shows that the proposed model substantially outperforms a DNN-based model on unseen speakers and unseen noises in terms of objective speech intelligibility. Analyzing LSTM internal representations reveals that LSTM captures long-term speech contexts. It is also found that the LSTM model is more advantageous for low-latency speech separation and it, without future frames, performs better than the DNN model with future frames. The proposed model represents an effective approach for speaker- and noise-independent speech separation. PMID:28679261
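
    The described architecture maps acoustic features of noisy speech to a time-frequency mask, with the LSTM supplying the long-term temporal context. A skeletal PyTorch version (sizes are illustrative, not the paper's configuration):

      import torch
      import torch.nn as nn

      class MaskEstimator(nn.Module):
          """LSTM mapping noisy-speech features to a T-F mask in [0, 1]."""
          def __init__(self, n_features=64, n_freq=161, hidden=256, layers=2):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, layers, batch_first=True)
              self.out = nn.Linear(hidden, n_freq)

          def forward(self, feats):              # feats: (batch, time, n_features)
              h, _ = self.lstm(feats)            # temporal context accumulates in h
              return torch.sigmoid(self.out(h))  # mask: (batch, time, n_freq)

      model = MaskEstimator()
      mask = model(torch.randn(8, 100, 64))      # a batch of feature sequences
      # Training would minimize, e.g., MSE between mask * noisy magnitude and the
      # clean magnitude, accumulated over many speakers and noise types.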

  11. PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems

    NASA Astrophysics Data System (ADS)

    Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai

    2017-09-01

    In response to the identification problem concerning multi-degree-of-freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the nonlinear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. Using the algorithm, we build a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model avoids the over-fitting and poor generalization performance that the average error reduction ratio (AERR)-based EFOR algorithm can produce. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
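
    The core of a PRESS-guided forward selection is a greedy loop: at each step, add the candidate term that most reduces the leave-one-out PRESS statistic, which the hat matrix yields without refitting. A generic Python sketch (plain linear algebra, not the authors' EFOR implementation):

      import numpy as np

      def press(X, y):
          """Leave-one-out PRESS via the hat-matrix shortcut e_i / (1 - h_ii)."""
          H = X @ np.linalg.pinv(X)
          e = y - H @ y
          return np.sum((e / (1.0 - np.diag(H))) ** 2)

      def forward_select(candidates, y, max_terms=5):
          """Greedily add columns (model terms) while PRESS keeps improving."""
          chosen, best = [], np.inf
          while len(chosen) < max_terms:
              scores = {j: press(candidates[:, chosen + [j]], y)
                        for j in range(candidates.shape[1]) if j not in chosen}
              j_star = min(scores, key=scores.get)
              if scores[j_star] >= best:
                  break  # stop once the generalization estimate stops improving
              chosen.append(j_star)
              best = scores[j_star]
          return chosen, best

      rng = np.random.default_rng(0)
      u = rng.standard_normal(200)
      terms = np.column_stack([u, u**2, u**3, np.sin(u), np.cos(u)])
      y = 2.0 * u - 0.5 * u**3 + 0.05 * rng.standard_normal(200)
      print(forward_select(terms, y))  # the u and u^3 terms should dominate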

  12. Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model

    NASA Astrophysics Data System (ADS)

    Al Sobhi, Mashail M.

    2015-02-01

    Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.

  13. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  14. Elementary School Teachers' Desired Model for the Inclusion of Students with Disabilities

    ERIC Educational Resources Information Center

    Gavish, Bella; Shimoni, Sarah

    2013-01-01

    The study sought to determine which kind of models are suitable for the inclusion of students with disabilities in school and classroom settings, based on the views of general education teachers. Israeli general education teachers were asked to create a constraints free, "Best case scenario" model, for the implementation of inclusion.…

  15. Wave–turbulence interaction-induced vertical mixing and its effects in ocean and climate models

    PubMed Central

    Qiao, Fangli; Yuan, Yeli; Deng, Jia; Dai, Dejun; Song, Zhenya

    2016-01-01

    Heated from above, the oceans are stably stratified. Therefore, the performance of general ocean circulation models and climate studies through coupled atmosphere–ocean models depends critically on vertical mixing of energy and momentum in the water column. Many of the traditional general circulation models are based on total kinetic energy (TKE), in which the roles of waves are averaged out. Although theoretical calculations suggest that waves could greatly enhance coexisting turbulence, no field measurements on turbulence have ever validated this mechanism directly. To address this problem, a specially designed field experiment has been conducted. The experimental results indicate that the wave–turbulence interaction-induced enhancement of the background turbulence is indeed the predominant mechanism for turbulence generation and enhancement. Based on this understanding, we propose a new parametrization for vertical mixing as an additive part to the traditional TKE approach. This new result reconfirmed the past theoretical model that had been tested and validated in numerical model experiments and field observations. It firmly establishes the critical role of wave–turbulence interaction effects in both general ocean circulation models and atmosphere–ocean coupled models, which could greatly improve the understanding of the distributions of sea surface temperature and water column properties, and hence model-based climate forecasting capability. PMID:26953182

  16. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
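
    The grouping step is a small amount of linear algebra on the fitted habitat models: a generalized Mahalanobis distance between two species' coefficient vectors, using their estimated covariance matrices, feeds standard clustering. A Python sketch (generic construction, with hypothetical fitted coefficients b and covariances V per species):

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import squareform

      def gen_mahalanobis(b1, V1, b2, V2):
          """Squared distance (b1-b2)' (V1+V2)^{-1} (b1-b2) between coefficients."""
          d = b1 - b2
          return float(d @ np.linalg.solve(V1 + V2, d))

      # Hypothetical fitted habitat models: (coefficients, covariance) per species.
      rng = np.random.default_rng(0)
      species = [(rng.standard_normal(4), 0.1 * np.eye(4)) for _ in range(6)]

      n = len(species)
      D = np.zeros((n, n))
      for i in range(n):
          for j in range(i + 1, n):
              D[i, j] = D[j, i] = gen_mahalanobis(*species[i], *species[j])

      groups = fcluster(linkage(squareform(D), method="average"),
                        t=2, criterion="maxclust")
      print(groups)  # candidate species groups for management decisions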

  17. "Shape function + memory mechanism"-based hysteresis modeling of magnetorheological fluid actuators

    NASA Astrophysics Data System (ADS)

    Qian, Li-Jun; Chen, Peng; Cai, Fei-Long; Bai, Xian-Xu

    2018-03-01

    A hysteresis model based on "shape function + memory mechanism" is presented and its feasibility is verified through modeling the hysteresis behavior of a magnetorheological (MR) damper. A hysteresis phenomenon in resistor-capacitor (RC) circuit is first presented and analyzed. In the hysteresis model, the "memory mechanism" originating from the charging and discharging processes of the RC circuit is constructed by adopting a virtual displacement variable and updating laws for the reference points. The "shape function" is achieved and generalized from analytical solutions of the simple semi-linear Duhem model. Using the approach, the memory mechanism reveals the essence of specific Duhem model and the general shape function provides a direct and clear means to fit the hysteresis loop. In the frame of the structure of a "Restructured phenomenological model", the original hysteresis operator, i.e., the Bouc-Wen operator, is replaced with the new hysteresis operator. The comparative work with the Bouc-Wen operator based model demonstrates superior performances of high computational efficiency and comparable accuracy of the new hysteresis operator-based model.

  18. Complexity analysis based on generalized deviation for financial markets

    NASA Astrophysics Data System (ADS)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method is proposed as a measure to investigate the correlation between past price and future volatility for financial time series, known as the complexity analysis based on generalized deviation. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function presents an exhaustive way of quantifying the rules of the financial market. Robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After the data analysis of the experimental model, we found that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  19. A multi-ion generalized transport model of the polar wind

    NASA Technical Reports Server (NTRS)

    Demars, H. G.; Schunk, R. W.

    1994-01-01

    The higher-order generalizations of the equations of standard hydrodynamics, known collectively as generalized transport theories, have been used since the early 1980s to describe the terrestrial polar wind. Inherent in the structure of generalized transport theories is the ability to describe not only interparticle collisions but also certain non-Maxwellian processes, such as heat flow and viscous stress, that are characteristic of any plasma flow that is not collision dominated. Because the polar wind exhibits a transition from collision-dominated to collisionless flow, generalized transport theories possess advantages for polar wind modeling not shared by either collision-dominated models (such as standard hydrodynamics) or collisionless models (such as those based on solving the collisionless Boltzmann equation). In general, previous polar wind models have used generalized transport equations to describe electrons and only one species of ion (H(+)). If other ion species were included in the models at all, it was in a simplified or semiempirical manner. The model described in this paper is the first polar wind model that uses a generalized transport theory (bi-Maxwellian-based 16-moment theory) to describe all of the species, both major and minor, in the polar wind plasma. In the model, electrons and three ion species (H(+), He(+), O(+)) are assumed to be major and several ion species are assumed to be minor (NO(+), Fe(+), O(++)). For all species, a complete 16-moment transport formulation is used, so that profiles of density, drift velocity, parallel and perpendicular temperatures, and the field-aligned parallel and perpendicular energy flows are obtained. In the results presented here, emphasis is placed on describing those constituents of the polar wind that have received little attention in past studies. In particular, characteristic solutions are presented for supersonic H(+) outflow and for both supersonic and subsonic outflows of the major ion He(+). Solutions are also presented for various minor ions, both atomic and molecular and both singly and multiply charged.

  20. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.

  1. Automatic liver segmentation in computed tomography using general-purpose shape modeling methods.

    PubMed

    Spinczyk, Dominik; Krasoń, Agata

    2018-05-29

    Liver segmentation in computed tomography is required in many clinical applications. The segmentation methods used can be classified according to a number of criteria. One important criterion for method selection is the shape representation of the segmented organ. The aim of the work is automatic liver segmentation using general-purpose shape modeling methods. As part of the research, methods based on shape information at various levels of advancement were used. The single-atlas-based segmentation method was used as the simplest shape-based method; it deforms a single atlas using free-form deformation of control-point curves. Subsequently, the classic and modified Active Shape Model (ASM) were used, employing mean body shape models. As the most advanced and main method, generalized statistical shape models (Gaussian Process Morphable Models) were used, which are based on multi-dimensional Gaussian distributions of the shape deformation field. Mutual information and the sum of square distances were used as similarity measures. The poorest results were obtained for the single-atlas method. For the ASM method, in 10 analyzed cases the Dice coefficient was above 55% for seven test images, of which for three it was over 70%, which placed the method in second place. The best results were obtained for the method based on the generalized statistical distribution of the deformation field: the Dice coefficient for this method was 88.5%. CONCLUSIONS: This value of the Dice coefficient can be explained by the use of general-purpose shape modeling methods with a large variance of the shape of the modeled object - the liver - and by the size of our training data set, which was limited to 10 cases. The results obtained with the presented fully automatic method are comparable with those of dedicated methods for liver segmentation. In addition, the deformation features of the model can be modeled mathematically by using various kernel functions, which allows the liver to be segmented at a comparable level using a smaller learning set.

  2. Representative Structural Element - A New Paradigm for Multi-Scale Structural Modeling

    DTIC Science & Technology

    2016-07-05

    developed by NASA Glenn Research Center based on Aboudi's micromechanics theories [5] that provides a wide range of capabilities for modeling ... interface of heterogeneous materials but also help engineers to use appropriate models for related problems based on the capability of corresponding approaches. Moreover, the analyses will give a general ...

  3. A Nakanishi-based model illustrating the covariant extension of the pion GPD overlap representation and its ambiguities

    NASA Astrophysics Data System (ADS)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2018-05-01

    A systematic approach to model building for Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model-building technique and, in addition, allows the ambiguities related to the covariant extension, grounded in the Double Distribution (DD) representation, to be constrained by requiring that a soft-pion theorem be properly observed.

  4. Estimating Parameters in the Generalized Graded Unfolding Model: Sensitivity to the Prior Distribution Assumption and the Number of Quadrature Points Used.

    ERIC Educational Resources Information Center

    Roberts, James S.; Donoghue, John R.; Laughlin, James E.

    The generalized graded unfolding model (J. Roberts, J. Donoghue, and J. Laughlin, 1998, 1999) is an item response theory model designed to unfold polytomous responses. The model is based on a proximity relation that postulates higher levels of expected agreement with a given statement to the extent that a respondent is located close to the…

  5. Conditional Monte Carlo randomization tests for regression models.

    PubMed

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design-based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, so we introduce a method for computing randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
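
    A minimal sketch of the Monte Carlo idea described above, for a simple two-group comparison on model residuals; plain label permutation stands in here for re-drawing sequences from the trial's actual randomization procedure (permuted blocks, biased coins, etc.), which the design-based tests would use.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mc_randomization_test(residuals, treat, n_mc=10_000):
        """Monte Carlo randomization test: build the reference distribution by
        re-randomizing treatment labels and recomputing the statistic on the
        fitted-model residuals."""
        def stat(assign):
            return residuals[assign == 1].mean() - residuals[assign == 0].mean()
        observed = stat(treat)
        count = sum(
            abs(stat(rng.permutation(treat))) >= abs(observed)
            for _ in range(n_mc))
        return (count + 1) / (n_mc + 1)   # Monte Carlo two-sided p-value

    # toy example: residuals from some fitted model; 1 = treatment, 0 = control
    res = rng.normal(size=40); res[:20] += 0.8   # injected treatment effect
    labels = np.array([1] * 20 + [0] * 20)
    print(mc_randomization_test(res, labels))    # small p-value expected
    ```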

  6. Effective Biot theory and its generalization to poroviscoelastic models

    NASA Astrophysics Data System (ADS)

    Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark

    2018-02-01

    A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model whose moduli are represented by the relaxation functions of the generalized fractional Zener model; the latter covers the general Zener and Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed that are based on measured seismic attenuation functions and ensure that the model dispersion characteristics match the observations.
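
    As a rough illustration of the relaxation functions involved, the sketch below evaluates the complex modulus of a single fractional Zener element; the functional form and parameter names follow a common textbook convention and are not necessarily the authors' exact parametrization.

    ```python
    import numpy as np

    def fractional_zener_modulus(omega, m_r, tau_eps, tau_sig, alpha):
        """Complex modulus of one fractional Zener element,
        M(w) = M_R * (1 + (i*w*tau_eps)**a) / (1 + (i*w*tau_sig)**a);
        a = 1 recovers the classic Zener model, and the fractional power
        law is the Cole-Cole-type generalization."""
        iwa = (1j * omega) ** alpha
        return m_r * (1 + iwa * tau_eps**alpha) / (1 + iwa * tau_sig**alpha)

    w = np.logspace(-2, 4, 7)                       # angular frequencies
    M = fractional_zener_modulus(w, 1.0, 0.1, 0.05, 0.7)
    print(np.round(np.imag(M) / np.real(M), 4))     # Q^-1, an attenuation proxy
    ```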

  7. HexSim - A general purpose framework for spatially-explicit, individual-based modeling

    EPA Science Inventory

    HexSim is a framework for constructing spatially-explicit, individual-based computer models designed for simulating terrestrial wildlife population dynamics and interactions. HexSim is useful for a broad set of modeling applications. This talk will focus on a subset of those ap...

  8. Provision of mental health care in general practice in Italy.

    PubMed Central

    Tansella, M; Bellantuono, C

    1991-01-01

    The main features of the psychiatric system and of the general practice system in Italy since the psychiatric reform and the introduction of a national health service are briefly described. Research conducted in Italy confirms that a large proportion of patients seen by general practitioners have psychological disorders and that only some of those patients whose psychological problems are identified by general practitioners are referred to specialist psychiatric care. Thus, the need to identify the best model of collaboration between psychiatric services and general practice services is becoming increasingly urgent. The chances of improving links between the two services and of developing a satisfactory liaison model are probably greater in countries such as Italy where psychiatric services are highly decentralized and community-based, than in countries where the psychiatric services are hospital-based. PMID:1807308

  9. Particle-hole symmetry in generalized seniority, microscopic interacting boson (fermion) model, nucleon-pair approximation, and other models

    NASA Astrophysics Data System (ADS)

    Jia, L. Y.

    2016-06-01

    The particle-hole symmetry (equivalence) of the full shell-model Hilbert space is straightforward and routinely used in practical calculations. In this work I show that this symmetry is preserved in the subspace truncated up to a certain generalized seniority and give the explicit transformation between the states in the two types (particle and hole) of representations. Based on the results, I study particle-hole symmetry in popular theories that could be regarded as further truncations on top of the generalized seniority, including the microscopic interacting boson (fermion) model, the nucleon-pair approximation, and other models.

  10. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated against a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid model based on the radiosity-radiance theorem demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  11. A Correlation-Based Transition Model Using Local Variables. Part 1: Model Formulation

    NASA Technical Reports Server (NTRS)

    Menter, F. R.; Langtry, R. B.; Likki, S. R.; Suzen, Y. B.; Huang, P. G.; Volker, S.

    2006-01-01

    A new correlation-based transition model has been developed, which is based strictly on local variables. As a result, the transition model is compatible with modern computational fluid dynamics (CFD) approaches, such as unstructured grids and massively parallel execution. The model is based on two transport equations, one for intermittency and one for the transition onset criterion in terms of momentum-thickness Reynolds number. The proposed transport equations do not attempt to model the physics of the transition process (unlike, e.g., turbulence models) but form a framework for the implementation of correlation-based models into general-purpose CFD methods.

  12. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    PubMed

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but it can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix, and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed numbers of events over risk deciles. For both mortality and serious morbidity, no qualitative difference in calibration was identified between the procedure groups. In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions as case mix is restricted and patients become more homogeneous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of model performance. Copyright © 2012 American College of Surgeons. All rights reserved.
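
    For readers unfamiliar with the c-statistic, the sketch below computes it from predicted risks and observed outcomes via the rank-sum (Mann-Whitney) identity; the data are synthetic and tie handling is omitted for brevity.

    ```python
    import numpy as np

    def c_statistic(pred, outcome):
        """c-statistic (ROC AUC): the probability that a randomly chosen event
        case receives a higher predicted risk than a randomly chosen non-event
        case, computed via the rank-sum identity (ordinal ranks, ties ignored)."""
        pred = np.asarray(pred, float)
        outcome = np.asarray(outcome, int)
        ranks = np.argsort(np.argsort(pred)) + 1
        n1 = outcome.sum()
        n0 = len(outcome) - n1
        return (ranks[outcome == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

    rng = np.random.default_rng(1)
    risk = rng.uniform(size=1000)                         # predicted risks
    events = (rng.uniform(size=1000) < risk).astype(int)  # well-calibrated toy outcomes
    print(round(c_statistic(risk, events), 3))            # about 0.83 for this setup
    ```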

  13. Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.

    PubMed

    Griffith, Daniel A; Peres-Neto, Pedro R

    2006-10-01

    Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed in order to consider spatial predictors explicitly. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows the equivalencies of, and differences between, the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
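
    A minimal sketch of the topology-based (CB) construction mentioned above: candidate spatial predictors are the eigenvectors of the doubly centred connectivity matrix MCM, with M = I - 11'/n. The tiny adjacency matrix is illustrative only.

    ```python
    import numpy as np

    def spatial_eigenvectors(C):
        """Eigenvectors of M C M with M = I - 11'/n; each eigenvector is a
        candidate spatial predictor, and large positive eigenvalues correspond
        to strong positive spatial autocorrelation."""
        n = C.shape[0]
        M = np.eye(n) - np.ones((n, n)) / n
        vals, vecs = np.linalg.eigh(M @ C @ M)
        order = np.argsort(vals)[::-1]          # strongest positive patterns first
        return vals[order], vecs[:, order]

    # toy example: 4 sites on a line, rook-style adjacency
    C = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], float)
    vals, vecs = spatial_eigenvectors(C)
    print(np.round(vals, 2))   # the columns of vecs would enter a regression
    ```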

  14. How glitter relates to gold: similarity-dependent reward prediction errors in the human striatum.

    PubMed

    Kahnt, Thorsten; Park, Soyoung Q; Burke, Christopher J; Tobler, Philippe N

    2012-11-14

    Optimal choices benefit from previous learning. However, it is not clear how previously learned stimuli influence behavior to novel but similar stimuli. One possibility is to generalize based on the similarity between learned and current stimuli. Here, we use neuroscientific methods and a novel computational model to inform the question of how stimulus generalization is implemented in the human brain. Behavioral responses during an intradimensional discrimination task showed similarity-dependent generalization. Moreover, a peak shift occurred; i.e., the peak of the behavioral generalization gradient was displaced from the rewarded conditioned stimulus in the direction away from the unrewarded conditioned stimulus. To account for the behavioral responses, we designed a similarity-based reinforcement learning model wherein prediction errors generalize across similar stimuli and update their value. We show that this model predicts a similarity-dependent neural generalization gradient in the striatum as well as changes in responding during extinction. Moreover, across subjects, the width of generalization was negatively correlated with functional connectivity between the striatum and the hippocampus. This result suggests that hippocampal-striatal connections contribute to stimulus-specific value updating by controlling the width of generalization. In summary, our results shed light on the neurobiology of a fundamental, similarity-dependent learning principle that allows learning the value of stimuli that have never been encountered.
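
    A toy sketch of the similarity-based update the authors describe, assuming a one-dimensional stimulus space and a Gaussian similarity kernel; parameter names and values are illustrative and not taken from the paper's fitted model.

    ```python
    import numpy as np

    def similarity_update(values, stimuli, s, reward, alpha=0.2, width=0.15):
        """One trial: the prediction error at stimulus s updates the values of
        all stimuli, weighted by a Gaussian similarity kernel whose width sets
        the generalization gradient."""
        delta = reward - values[s]                               # prediction error
        sim = np.exp(-(stimuli - stimuli[s]) ** 2 / (2 * width ** 2))
        return values + alpha * delta * sim

    stimuli = np.linspace(0.0, 1.0, 11)    # one perceptual dimension
    v = np.zeros(11)
    for _ in range(50):                    # stimulus 7 rewarded, stimulus 3 unrewarded
        v = similarity_update(v, stimuli, 7, 1.0)
        v = similarity_update(v, stimuli, 3, 0.0)
    print(np.round(v, 2))                  # graded gradient peaked near the rewarded CS
    ```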

  15. Artificial intelligence in process control: Knowledge base for the shuttle ECS model

    NASA Technical Reports Server (NTRS)

    Stiffler, A. Kent

    1989-01-01

    The general operation of KATE, an artificial intelligence controller, is outlined. A shuttle environmental control system (ECS) demonstration system for KATE is explained. The knowledge base model for this system is derived. An experimental test procedure is given to verify parameters in the model.

  16. A Rational Analysis of Rule-Based Concept Learning

    ERIC Educational Resources Information Center

    Goodman, Noah D.; Tenenbaum, Joshua B.; Feldman, Jacob; Griffiths, Thomas L.

    2008-01-01

    This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space--a concept language of logical rules. This article compares the model predictions to human generalization judgments in several…

  17. Empirical flow parameters - a tool for hydraulic model validity assessment: [summary].

    DOT National Transportation Integrated Search

    2013-10-01

    Hydraulic modeling assembles models based on generalizations of parameter values from textbooks, professional literature, computer program documentation, and engineering experience. Actual measurements adjacent to the model location are seldom availa...

  18. Efficient polarimetric BRDF model.

    PubMed

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; in common with, e.g., the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and to metallic bead-blasted surfaces. This simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.

  19. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multiple stepwise regressions is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model, which converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure the contributions of both the newly introduced and the previously included variables to the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through goodness-of-fit measurement or significance testing. Simulations and experiments on classic time-series data show that the proposed method is simple, reliable, and applicable to practical engineering.

  20. Language acquisition is model-based rather than model-free.

    PubMed

    Wang, Felix Hao; Mintz, Toben H

    2016-01-01

    Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.

  1. Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation

    DTIC Science & Technology

    2016-08-04

    soil type. The modeling approach is based on (i) a seamless integration of multibody dynamics and discrete element method (DEM) solvers, and (ii...ensure that the vehicle follows a desired path. The soil is modeled as a Discrete Element Model (DEM) with a general cohesive material model that is

  2. Online Statistical Modeling (Regression Analysis) for Independent Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Statistical models have been developed in various directions to capture various types of complex relationships in data. Rich varieties of advanced and recent statistical models are available in open-source software (one of which is R). However, these advanced modelling tools are not very friendly to novice R users, since they are based on programming scripts or a command-line interface. Our research aims to develop a web interface (based on R and Shiny) so that the most recent and advanced statistical models are readily available, accessible, and applicable on the web. We previously built interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM, and generalized additive models for location, scale and shape/GAMLSS). In this research we unify them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and makes it easier to compare models in order to find the most appropriate one for the data.

  3. The Topp-Leone generalized Rayleigh cure rate model and its application

    NASA Astrophysics Data System (ADS)

    Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena

    2017-11-01

    The cure rate model is a survival analysis model that accounts for a proportion of the censored subjects being cured. In clinical trials, data representing time to recurrence of an event or death of patients are used to evaluate the efficacy of treatments. Each dataset can be separated into two groups: censored and uncensored data. In this work, a new mixture cure rate model based on the Topp-Leone generalized Rayleigh distribution is introduced. The Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model shows better results than the Weibull and exponential cure rate models.
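
    The mixture cure structure underlying such models can be sketched as follows; a Weibull latency distribution stands in for the Topp-Leone generalized Rayleigh, whose survival function is not reproduced here.

    ```python
    import numpy as np

    def mixture_cure_survival(t, pi, shape, scale):
        """Population survival in a mixture cure rate model:
        S(t) = pi + (1 - pi) * S_u(t), where pi is the cured fraction and
        S_u the survival of the uncured; a Weibull S_u is used here only
        for illustration."""
        s_uncured = np.exp(-(t / scale) ** shape)   # Weibull survival
        return pi + (1 - pi) * s_uncured

    t = np.linspace(0.0, 10.0, 6)
    print(np.round(mixture_cure_survival(t, pi=0.3, shape=1.5, scale=4.0), 3))
    # the survival curve plateaus at the cure fraction pi as t grows
    ```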

  4. Dry Chemical Development - A Model for the Extinction of Hydrocarbon Flames.

    DTIC Science & Technology

    1984-02-08

    and predicts the suppression effectiveness of a wide variety of gaseous, liquid, and solid agents. The flame extinguishment model is based on the ... generalized by consideration of all endothermic reaction sinks, e.g., vaporization, dissociation, and decomposition. The general equation correlates ... Various fire-extinguishing agents are carried on board Navy ships to control

  5. Building Models for the Relationship between Attitudes toward Suicide and Suicidal Behavior: Based on Data from General Population Surveys in Sweden, Norway, and Russia

    ERIC Educational Resources Information Center

    Renberg, Ellinor Salander; Hjelmeland, Heidi; Koposov, Roman

    2008-01-01

    Our aim was to build a model delineating the relationship between attitudes toward suicide and suicidal behavior and to assess equivalence by applying the model on data from different countries. Representative samples from the general population were approached in Sweden, Norway, and Russia with the Attitudes Toward Suicide (ATTS) questionnaire.…

  6. Qualitative model-based diagnosis using possibility theory

    NASA Technical Reports Server (NTRS)

    Joslyn, Cliff

    1994-01-01

    The potential for the use of possibility theory in the qualitative model-based diagnosis of spacecraft systems is described. The first sections of the paper briefly introduce the Model-Based Diagnostic (MBD) approach to spacecraft fault diagnosis, Qualitative Modeling (QM) methodologies, and the concepts of possibilistic modeling in the context of Generalized Information Theory (GIT). The necessary conditions for the applicability of possibilistic methods to qualitative MBD, and a number of potential directions for such an application, are then described.

  7. General framework for dynamic large deformation contact problems based on phantom-node X-FEM

    NASA Astrophysics Data System (ADS)

    Broumand, P.; Khoei, A. R.

    2018-04-01

    This paper presents a general framework for modeling dynamic large deformation contact-impact problems based on the phantom-node extended finite element method. The large sliding penalty contact formulation is presented based on a master-slave approach which is implemented within the phantom-node X-FEM and an explicit central difference scheme is used to model the inertial effects. The method is compared with conventional contact X-FEM; advantages, limitations and implementational aspects are also addressed. Several numerical examples are presented to show the robustness and accuracy of the proposed method.

  8. Artificial neural network models for prediction of cardiovascular autonomic dysfunction in general Chinese population

    PubMed Central

    2013-01-01

    Background The present study aimed to develop an artificial neural network (ANN) based prediction model for cardiovascular autonomic (CA) dysfunction in the general population. Methods We analyzed a previous dataset based on a population sample consisting of 2,092 individuals aged 30-80 years. The prediction models were derived from an exploratory set using ANN analysis, and their performance was evaluated in the validation set. Results Univariate analysis indicated that 14 risk factors showed a statistically significant association with CA dysfunction (P < 0.05). The mean area under the receiver-operating curve was 0.762 (95% CI 0.732-0.793) for the prediction model developed using ANN analysis. The mean sensitivity, specificity, and positive and negative predictive values of the prediction models were 0.751, 0.665, 0.330, and 0.924, respectively. All HL statistics were less than 15.0. Conclusion ANN is an effective tool for developing prediction models with high value for predicting CA dysfunction among the general population. PMID:23902963

  9. Application of Complex Adaptive Systems in Portfolio Management

    ERIC Educational Resources Information Center

    Su, Zheyuan

    2017-01-01

    Simulation-based methods are becoming a promising research tool in financial markets. A general Complex Adaptive System can be tailored to different application scenarios. Based on the current research, we built two models that would benefit portfolio management by utilizing Complex Adaptive Systems (CAS) in Agent-based Modeling (ABM) approach.…

  10. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  11. Plant architecture, growth and radiative transfer for terrestrial and space environments

    NASA Technical Reports Server (NTRS)

    Norman, John M.; Goel, Narendra S.

    1993-01-01

    The overall objective of this research was to develop a hardware implemented model that would incorporate realistic and dynamic descriptions of canopy architecture in physiologically based models of plant growth and functioning, with an emphasis on radiative transfer while accommodating other environmental constraints. The general approach has five parts: a realistic mathematical treatment of canopy architecture, a methodology for combining this general canopy architectural description with a general radiative transfer model, the inclusion of physiological and environmental aspects of plant growth, inclusion of plant phenology, and integration.

  12. Energy Savings Forecast of Solid-State Lighting in General Illumination Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penning, Julie; Stober, Kelsey; Taylor, Victor

    2016-09-01

    The DOE report, Energy Savings Forecast of Solid-State Lighting in General Illumination Applications, is a biannual report which models the adoption of LEDs in the U.S. general-lighting market, along with associated energy savings, based on the full potential DOE has determined to be technically feasible over time. This version of the report uses an updated 2016 U.S. lighting-market model that is more finely calibrated and granular than previous models, and extends the forecast period to 2035 from the 2030 limit that was used in previous editions.

  13. Nonlinear evolution of coarse-grained quantum systems with generalized purity constraints

    NASA Astrophysics Data System (ADS)

    Burić, Nikola

    2010-12-01

    Constrained quantum dynamics is used to propose a nonlinear dynamical equation for pure states of a generalized coarse-grained system. The relevant constraint is given either by the generalized purity or by the generalized invariant fluctuation, and the coarse-grained pure states correspond to the generalized coherent, i.e., generalized non-entangled, states. An open-system model of the coarse-graining is discussed. It is shown that in this model, and in the weak-coupling limit, the constrained dynamical equations coincide with an equation for pointer states, based on the Hilbert-Schmidt distance, that was previously suggested in the context of decoherence theory.

  14. Optimizing Spectral Wave Estimates with Adjoint-Based Sensitivity Maps

    DTIC Science & Technology

    2014-02-18

    J, Orzech MD, Ngodock HE (2013) Validation of a wave data assimilation system based on SWAN. Geophys Res Abst, (15), EGU2013-5951-1, EGU General ... surface wave spectra. Sensitivity maps are generally constructed for a selected system indicator (e.g., vorticity) by computing the differential of ... spectral action balance Eq. 2, generally initialized at the offshore boundary with spectral wave and other outputs from regional models such as

  15. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    PubMed Central

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-01-01

    The operating condition of rolling bearings affects productivity and quality in rotating-machine processes. Developing an effective rolling bearing condition monitoring approach is critical to accurately identifying the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, in which interval-valued features are used to efficiently recognize and classify machine states. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation of aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals under different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify fault types and fault severity levels. Finally, the experimental results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. The monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
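
    As a sketch of the feature-extraction step, the code below computes single-scale permutation entropy; the paper applies the multi-scale variant to the VMD-decomposed signals, which would add a coarse-graining loop over scales.

    ```python
    import numpy as np
    from math import factorial

    def permutation_entropy(x, order=3, delay=1):
        """Normalized permutation entropy: Shannon entropy of the distribution
        of ordinal patterns of length `order` found in the signal."""
        x = np.asarray(x, float)
        n = len(x) - (order - 1) * delay
        counts = {}
        for i in range(n):
            pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
            counts[pattern] = counts.get(pattern, 0) + 1
        p = np.array(list(counts.values()), float) / n
        return -np.sum(p * np.log2(p)) / np.log2(factorial(order))

    rng = np.random.default_rng(0)
    print(round(permutation_entropy(rng.normal(size=2000)), 3))  # noise: near 1
    print(round(permutation_entropy(np.sin(np.linspace(0, 40, 2000))), 3))  # regular: low
    ```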

  16. Baryon symmetries and nomenclature: Proposal for a new nomenclature

    NASA Astrophysics Data System (ADS)

    Landry, Gaëtan

    Baryons, such as protons and neutrons, are matter particles made of three quarks. Their current nomenclature is based on the concept of isospin, introduced by Werner Heisenberg in 1932 to explain the similarity between the masses of protons and neutrons, as well as the similarity of their behaviour under the strong interaction. It is a refinement of a nomenclature designed in 1964, before the acceptance of the quark model, for light baryons. A historical review of baryon physics before the advent of the quark model is given to explain the motivations behind the light baryon nomenclature. An overview of the quark model is then given to explain the extensions made to this nomenclature in 1986, as well as the physics of baryons and of properties such as isospin and flavour quantum numbers. Since baryon properties are in general explained by the quark model, a nomenclature based on isospin leads to several issues of physics and of clarity. To resolve these issues, the concepts of isospin and mass groups are generalized to all flavours of quarks, the Gell-Mann-Okubo formalism is extended to generalized mass groups, and a baryon nomenclature based on the quark model, reflecting modern knowledge, is proposed.

  17. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by variations in the ground topography, yet most previous studies modeling the microwave backscattering signatures of forested areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) of PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters but also the magnitude associated with each scattering mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, something not achieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  18. General Training System; GENTRAS. Final Report.

    ERIC Educational Resources Information Center

    International Business Machines Corp., Gaithersburg, MD. Federal Systems Div.

    GENTRAS (General Training System) is a computer-based training model for the Marine Corps which makes use of a systems approach. The model defines the skill levels applicable for career growth and classifies and defines the training needed for this growth. It also provides a training cost subsystem which will provide a more efficient means of…

  19. A model for a career in a specialty of general surgery: One surgeon's opinion.

    PubMed

    Ko, Bona; McHenry, Christopher R

    2018-01-01

    The integration of general and endocrine surgery was studied as a potential career model for fellowship-trained general surgeons. Case logs collected from 1991-2016 and academic milestones were examined for a single general surgeon with a focused interest in endocrine surgery. Operations were categorized using CPT codes and the 2017 ACGME "Major Case Categories", and their frequencies were determined. 10,324 operations were performed on 8,209 patients. 412.9 ± 84.9 operations were performed yearly, including 279.3 ± 42.7 general and 133.7 ± 65.5 endocrine operations. A high-volume endocrine surgery practice and the rank of tenured professor were achieved by years 11 and 13, respectively. By year 25, the frequency of endocrine operations exceeded that of general surgery operations. Maintaining a foundation in broad-based general surgery with a specialty focus is a sustainable career model. Residents and fellows can use this model to help plan their careers with realistic expectations. Copyright © 2017. Published by Elsevier Inc.

  20. Science anxiety and social cognitive factors predicting STEM career aspirations of high school freshmen in general science class

    NASA Astrophysics Data System (ADS)

    Skells, Kristin Marie

    Extant data were used to examine the association between science anxiety, social cognitive factors, and the STEM career aspirations of high school freshmen in general science classes. An adapted model based on social cognitive career theory (SCCT) was used to consider these relationships, with science anxiety functioning as a barrier in the model. The study assessed the following research questions: (1) Do social cognitive variables relate in the expected way to STEM career aspirations based on SCCT for ninth graders taking general science classes? (2) Is there an association between science anxiety and the outcomes and processes identified in the SCCT model for ninth graders taking general science classes? (3) Does gender moderate these relationships? Results indicated support for many of the central tenets of the SCCT model. Science anxiety was associated with prior achievement, self-efficacy, and science interest, although it did not relate directly to STEM career goals. Gender was found to moderate only the relationship between prior achievement and science self-efficacy.

  1. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and the Bayesian methods with a Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions from Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of least squares regression and of Bayesian methods with a Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  2. Data-based Non-Markovian Model Inference

    NASA Astrophysics Data System (ADS)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces the quadratic-energy-preserving constraint of fluid-flow problems and successfully prevents blow-up. This work is based on a close collaboration with M.D. Chekroun, D. Kondrashov, S. Kravtsov and A.W. Robertson.

  3. Multi-Level Building Reconstruction for Automatic Enhancement of High Resolution DSMs

    NASA Astrophysics Data System (ADS)

    Arefi, H.; Reinartz, P.

    2012-07-01

    In this article a multi-level approach is proposed for the reconstruction-based improvement of high-resolution Digital Surface Models (DSMs). The concept of Levels of Detail (LOD) defined by the CityGML standard has been taken as the basis for the abstraction levels of building roof structures. Here, LOD1 and LOD2, which relate to prismatic and parametric roof shapes, are reconstructed. Besides proposing a new approach for automatic LOD1 and LOD2 generation from high-resolution DSMs, the algorithm contains two generalization levels, namely horizontal and vertical, both applied to the prismatic model of buildings. The horizontal generalization allows the approximation level of building footprints to be controlled, similar to the cartographic generalization of urban maps. In vertical generalization, the prismatic model is formed using an individual building height and extended to include all flat structures located at different height levels. The concept of LOD1 generation is based on the approximation of building footprints by rectangular or non-rectangular polygons. For a rectangular building with one main orientation, a method based on the Minimum Bounding Rectangle (MBR) is employed. In contrast, a Combined Minimum Bounding Rectangle (CMBR) approach is proposed for the regularization of non-rectilinear polygons, i.e., buildings without perpendicular edge directions. Both MBR- and CMBR-based approaches are applied iteratively to building segments to reduce the original building footprints to a minimum number of nodes with maximum similarity to the original shapes. A model-driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed for LOD2 generation. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines. A 3D model is derived for each building part, and finally a complete parametric model is formed by merging all the 3D models of the individual parts and adjusting the nodes after the merging step. In order to provide an enhanced DSM, a surface model is generated for each building by interpolation of the internal points of the generated models. All interpolated models are placed on a Digital Terrain Model (DTM) of the corresponding area to form the enhanced DSM. The proposed DSM enhancement approach has been tested on a dataset from the central area of Munich. The original DSM was created using robust stereo matching of Worldview-2 stereo images. A quantitative assessment of the new DSM, comparing the heights of ridges and eaves, shows a standard deviation of better than 50 cm.
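
    The MBR step for footprint regularization can be sketched as follows, using the standard rotating-calipers fact that the minimum-area rectangle is aligned with some convex-hull edge; the CMBR extension for non-rectilinear buildings is not shown. SciPy is assumed to be available.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def minimum_bounding_rectangle(points):
        """Minimum-area bounding rectangle of a 2-D point set: rotate the hull
        so each edge in turn is axis-aligned, measure the axis-aligned box,
        and keep the smallest."""
        hull = points[ConvexHull(points).vertices]
        edges = np.diff(np.vstack([hull, hull[:1]]), axis=0)
        best_area, best_box = np.inf, None
        for ex, ey in edges:
            a = np.arctan2(ey, ex)
            c, s = np.cos(a), np.sin(a)
            rot = np.array([[c, s], [-s, c]])   # rotates this edge onto the x-axis
            r = hull @ rot.T
            lo, hi = r.min(axis=0), r.max(axis=0)
            area = (hi - lo).prod()
            if area < best_area:
                box = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                                [hi[0], hi[1]], [lo[0], hi[1]]])
                best_area, best_box = area, box @ rot   # rotate corners back
        return best_box

    # toy "footprint" points with one dominant orientation
    pts = np.array([[0., 0.], [4., 1.], [5., 3.], [1.5, 4.5], [-0.5, 2.], [2., 2.]])
    print(np.round(minimum_bounding_rectangle(pts), 2))
    ```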

  4. Modeling damage in concrete pavements and bridges.

    DOT National Transportation Integrated Search

    2010-09-01

    This project focused on micromechanical modeling of damage in concrete under general, multi-axial loading. A continuum-level, three-dimensional constitutive model based on micromechanics was developed. The model accounts for damage in concrete by...

  5. A generalized spatiotemporal covariance model for stationary background in analysis of MEG data.

    PubMed

    Plis, S M; Schmidt, D M; Jun, S C; Ranken, D M

    2006-01-01

    A noise covariance model based on a single Kronecker product of spatial and temporal covariance has been demonstrated to improve the spatiotemporal analysis of MEG data over the commonly used diagonal noise covariance model. In this paper we present a model that is a generalization of all of the above models. It describes models based on a single Kronecker product of spatial and temporal covariance as well as more complicated multi-pair models, together with any intermediate form expressed as a sum of Kronecker products of spatial component matrices of reduced rank and their corresponding temporal covariance matrices. The model provides a framework for controlling the tradeoff between the described complexity of the background and the computational demand of the analysis using this model. Ways to estimate the value of the parameter controlling this tradeoff are also discussed.
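
    The generalized covariance family described above can be sketched directly with NumPy; the shapes and ranks below are arbitrary illustrations.

    ```python
    import numpy as np

    def kronecker_sum_covariance(spatial_parts, temporal_parts):
        """Noise covariance as a sum of Kronecker products,
        C = sum_k kron(S_k, T_k); a single pair recovers the one-product
        model, while reduced-rank S_k give the intermediate forms."""
        return sum(np.kron(S, T) for S, T in zip(spatial_parts, temporal_parts))

    rng = np.random.default_rng(0)

    def random_psd(n, rank):
        A = rng.normal(size=(n, rank))
        return A @ A.T                     # positive semidefinite, rank-limited

    S1, S2 = random_psd(5, 2), random_psd(5, 1)   # reduced-rank spatial components
    T1, T2 = random_psd(4, 4), random_psd(4, 4)   # temporal covariance components
    C = kronecker_sum_covariance([S1, S2], [T1, T2])
    print(C.shape)                                 # (20, 20) = (5*4, 5*4)
    ```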

  6. Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)

    DTIC Science & Technology

    2008-03-01

    4. REPAST (Java, Python, C#, Open Source) 5. MASON: Multi-Agent Modeling Language (Swarm Extension... Python, C#, Open Source) Repast (Recursive Porous Agent Simulation Toolkit) was designed for building agent-based models and simulations in the... Repast makes it easy for inexperienced users to build models by including a built-in simple model and provide interfaces through which menus and Python

  7. A realistic host-vector transmission model for describing malaria prevalence pattern.

    PubMed

    Mandal, Sandip; Sinha, Somdatta; Sarkar, Ram Rup

    2013-12-01

    Malaria continues to be a major public health concern all over the world, even after effective control policies have been employed and considerable understanding of the disease biology has been attained from both the experimental and the modelling perspective. Interactions between different general and local processes, such as the dependence on age and immunity of the human host, variations of temperature and rainfall in tropical and sub-tropical areas, and the continued presence of asymptomatic infections, regulate the host-vector interactions and are responsible for the continuing disease prevalence pattern. In this paper, a general mathematical model of malaria transmission is developed considering short- and long-term age-dependent immunity of the human host and its interaction with the pathogen-infected mosquito vector. The model is studied analytically and numerically to understand the role of different parameters related to mosquitoes and humans. To validate the model against the disease prevalence pattern in a particular region, real epidemiological data from the north-eastern part of India were used, and the effect of seasonal variation in mosquito density was modelled based on local climatic data. The model, developed from the general features of host-vector interactions and modified simply by incorporating local environmental factors with minimal changes, can successfully explain the disease transmission process in the region. This provides a general approach toward modelling malaria that can be adapted to control future outbreaks.
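
    A minimal host-vector sketch in the spirit of the model described above, using the classic Ross-Macdonald equations; the paper's model additionally includes age-dependent immunity, asymptomatic infection, and seasonally varying mosquito density (which would make the mosquito-to-human ratio m a function of time). SciPy is assumed to be available.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    def ross_macdonald(y, t, a, b, c, m, r, mu):
        """x = infected human fraction, z = infected mosquito fraction;
        a = biting rate, b/c = transmission probabilities per bite
        (mosquito->human / human->mosquito), m = mosquitoes per human,
        r = human recovery rate, mu = mosquito mortality rate."""
        x, z = y
        dx = m * a * b * z * (1 - x) - r * x   # humans infected by bites, recover at r
        dz = a * c * x * (1 - z) - mu * z      # mosquitoes infected, die at mu
        return [dx, dz]

    t = np.linspace(0, 365, 366)               # one year, daily steps
    sol = odeint(ross_macdonald, [0.01, 0.0], t,
                 args=(0.3, 0.3, 0.5, 5.0, 1 / 100, 1 / 10))
    print(np.round(sol[-1], 3))                # approaches an endemic equilibrium
    ```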

  8. Neural networks with multiple general neuron models: a hybrid computational intelligence approach using Genetic Programming.

    PubMed

    Barton, Alan J; Valdés, Julio J; Orchard, Robert

    2009-01-01

    Classical neural networks are composed of neurons whose nature is determined by a certain function (the neuron model), usually pre-specified. In this paper, a type of neural network (NN-GP) is presented in which: (i) each neuron may have its own neuron model in the form of a general function, (ii) any layout (i.e., network interconnection) is possible, and (iii) no bias nodes or weights are associated with the connections, neurons, or layers. The general functions associated with a neuron are learned by searching a function space. They are not provided a priori, but are rather built as part of an Evolutionary Computation process based on Genetic Programming. The resulting network solutions are evaluated based on a fitness measure, which may, for example, be based on classification or regression errors. Two real-world examples are presented to illustrate the promising behaviour on classification problems via construction of a low-dimensional representation of the high-dimensional parameter space associated with the set of all network solutions.

  9. Estimates of runoff using water-balance and atmospheric general circulation models

    USGS Publications Warehouse

    Wolock, D.M.; McCabe, G.J.

    1999-01-01

    The effects of potential climate change on mean annual runoff in the conterminous United States (U.S.) are examined using a simple water-balance model and output from two atmospheric general circulation models (GCMs). The two GCMs are from the Canadian Centre for Climate Prediction and Analysis (CCC) and the Hadley Centre for Climate Prediction and Research (HAD). In general, the CCC GCM climate results in decreases in runoff for the conterminous U.S., and the HAD GCM climate produces increases in runoff. These estimated changes in runoff primarily are the result of estimated changes in precipitation. The changes in mean annual runoff, however, mostly are smaller than the decade-to-decade variability in GCM-based mean annual runoff and errors in GCM-based runoff. The differences in simulated runoff between the two GCMs, together with decade-to-decade variability and errors in GCM-based runoff, cause the estimates of changes in runoff to be uncertain and unreliable.

  10. The substantive knowledge base for travel and tourism: a systems model

    Treesearch

    David S. Solan

    1992-01-01

    Strategies for education and professional preparation in travel and tourism have generally been based in traditional tourism-related disciplines, providing somewhat narrow perspectives on the tourism phenomenon. The need exists for models that provide comprehensive, holistic perspectives on travel and tourism. This paper presents one such systems model, showing that...

  11. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 1. OVERVIEW

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four report volumes. Moreover, the tests are generally applicable to other model evaluation problem...

  12. Elastic-viscoplastic modeling of soft biological tissues using a mixed finite element formulation based on the relative deformation gradient.

    PubMed

    Weickenmeier, J; Jabareen, M

    2014-11-01

    The characteristic highly nonlinear, time-dependent, and often inelastic material response of soft biological tissues can be expressed in a set of elastic-viscoplastic constitutive equations. The specific elastic-viscoplastic model for soft tissues proposed by Rubin and Bodner (2002) is generalized with respect to the constitutive equations for the scalar rate of inelasticity and the hardening parameter, in order to represent a general framework for elastic-viscoplastic models. A strongly objective integration scheme and a new mixed finite element formulation were developed based on the introduction of the relative deformation gradient, i.e., the deformation mapping between the last converged and the current configuration. The numerical implementation of both the generalized framework and the specific Rubin and Bodner model is presented. As an example of a challenging application of the new model equations, the mechanical response of facial skin tissue is characterized through an experimental campaign based on the suction method. The measurement data are used to identify a suitable set of model parameters that represents the experimentally observed tissue behavior well. Two different measurement protocols were defined to address specific tissue properties with respect to the instantaneous tissue response, inelasticity, and tissue recovery. Copyright © 2014 John Wiley & Sons, Ltd.

  13. Vocational-Technical Education Reforms in Germany, Netherlands, France and U.K. and Their Implications to Taiwan.

    ERIC Educational Resources Information Center

    Lee, Lung-Sheng

    Three major models of vocational education and training provision for the 16- to 19-year-old age group have been identified: schooling model, which emphasizes full-time schooling until age 18; dual model, which involves mainly work-based apprenticeship training with some school-based general education; and mixed model. Germany is an exemplar of…

  14. Computational study on cortical spreading depression based on a generalized cellular automaton model

    NASA Astrophysics Data System (ADS)

    Chen, Shangbin; Hu, Lele; Li, Bing; Xu, Changcheng; Liu, Qian

    2009-02-01

    Cortical spreading depression (CSD) is an important neurophysiological phenomenon correlated with neural disorders such as migraine, cerebral ischemia, and epilepsy. The mechanisms of CSD initiation and propagation, as well as the relationship between CSD and these neural diseases, remain unclear. Nevertheless, characterizing CSD, especially its spatiotemporal evolution, will promote understanding of its nature and mechanisms. In addition to previous experimental work characterizing the spatiotemporal evolution of CSD in rats by optical intrinsic signal imaging, a computational study based on a generalized cellular automaton (CA) model is proposed here. In the model, we exploit a generalized neighborhood connection rule: a central CA cell is related to a group of surrounding CA cells with different weight coefficients. By selecting special parameters, the generalized CA model can be reduced to traditional CA models with von Neumann, Moore, or hexagonal neighborhood connections. Hence, the new model reproduces several properties of CSD simulated in traditional CA models: 1) expansion from the origin site like a circular wave; 2) annihilation of two waves traveling in opposite directions after colliding; and 3) breaking of the CSD wavefront when encountering an obstacle and its recovery afterwards. By setting different refractory periods in different regions of the CA lattice and different connection coefficients in different directions within the defined neighborhood, inhomogeneous propagation of CSD was simulated with high fidelity. The computational results are analogous to the time-varying CSD waves reported by optical imaging. The generalized CA model is thus useful for studying CSD because of its intuitive appeal and computational efficiency.
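
    A compact sketch of a weighted-neighbourhood cellular automaton with a refractory period, in the spirit of the generalized CA described above; the weights, threshold, and state durations are illustrative choices, not the paper's calibrated values. SciPy is assumed to be available.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    FIRE, REFRACT, THRESH = 2, 8, 0.15   # depolarized steps, refractory steps, threshold

    def step(state, weights):
        """One CA update: resting cells (0) depolarize when the weighted input
        from depolarized neighbours reaches THRESH; active cells then advance
        through depolarized (1..FIRE) and refractory (FIRE+1..FIRE+REFRACT) states."""
        firing = ((state >= 1) & (state <= FIRE)).astype(float)
        drive = convolve2d(firing, weights, mode="same", boundary="fill")
        nxt = np.where(state > 0, state + 1, 0)       # advance the state clock
        nxt[nxt > FIRE + REFRACT] = 0                 # refractory cells recover
        nxt[(state == 0) & (drive >= THRESH)] = 1     # recruit resting cells
        return nxt

    # anisotropic neighbourhood weights -> inhomogeneous (direction-dependent) spread
    w = np.array([[0.10, 0.20, 0.10],
                  [0.15, 0.00, 0.15],
                  [0.10, 0.20, 0.10]])
    grid = np.zeros((80, 80), dtype=int)
    grid[40, 40] = 1                                  # CSD initiation site
    for _ in range(30):
        grid = step(grid, w)
    print(int((grid > 0).sum()))                      # expanding wave plus refractory trail
    ```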

  15. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1999-01-01

    Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability to general structures with modern materials. However, when applicable, they provide an easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on the simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
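
    To make the response-surface idea concrete, the hedged sketch below fits a full quadratic surrogate to a handful of evaluations of a stand-in "expensive" analysis; the analysis function, sample size and variable bounds are invented for illustration.

      import numpy as np

      # Stand-in "expensive" analysis: a smooth response over two design variables
      def expensive_analysis(x1, x2):
          return 10.0 - (x1 - 1.5) ** 2 - 2.0 * (x2 - 0.5) ** 2

      # Small design of experiments over the design space
      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 3.0, size=(15, 2))
      y = np.array([expensive_analysis(a, b) for a, b in X])

      # Fit a full quadratic response surface by linear least squares
      A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                           X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      def surrogate(x1, x2):
          return coef @ np.array([1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

      print("surrogate at (1.5, 0.5):", surrogate(1.5, 0.5))  # true value is 10.0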

  16. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1998-01-01

    Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability to general structures with modern materials. However, when applicable, they provide an easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on the simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.

  17. Abstract memory representations in the ventromedial prefrontal cortex and hippocampus support concept generalization.

    PubMed

    Bowman, Caitlin R; Zeithamova, Dagmar

    2018-02-07

    Memory function involves both the ability to remember details of individual experiences and the ability to link information across events to create new knowledge. Prior research has identified the ventromedial prefrontal cortex (VMPFC) and the hippocampus as important for integrating across events in service of generalization in episodic memory. The degree to which these memory integration mechanisms contribute to other forms of generalization, such as concept learning, is unclear. The present study used a concept-learning task in humans (both sexes) coupled with model-based fMRI to test whether VMPFC and hippocampus contribute to concept generalization, and whether they do so by maintaining specific category exemplars or abstract category representations. Two formal categorization models were fit to individual subject data: a prototype model that posits abstract category representations and an exemplar model that posits category representations based on individual category members. Latent variables from each of these models were entered into neuroimaging analyses to determine whether VMPFC and the hippocampus track prototype or exemplar information during concept generalization. Behavioral model fits indicated that almost three quarters of the subjects relied on prototype information when making judgments about new category members. Paralleling prototype dominance in behavior, correlates of the prototype model were identified in VMPFC and the anterior hippocampus with no significant exemplar correlates. These results indicate that the VMPFC and portions of the hippocampus play a broad role in memory generalization and that they do so by representing abstract information integrated from multiple events. SIGNIFICANCE STATEMENT Whether people represent concepts as sets of individual category members or as generalized representations abstracted across exemplars has been debated. In episodic memory, generalized memory representations have been shown to arise through integration across events supported by the ventromedial prefrontal cortex (VMPFC) and hippocampus. The current study combined formal categorization models with fMRI data analysis to show that the VMPFC and anterior hippocampus represent abstract prototype information during concept generalization, contributing novel evidence of generalized concept representations in the brain. Results indicate that VMPFC-hippocampal memory integration mechanisms contribute to knowledge generalization across multiple cognitive domains, with the degree of abstraction of memory representations varying along the long axis of the hippocampus. Copyright © 2018 the authors.
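
    As a hedged illustration of the two model classes compared in the study, the sketch below computes prototype-based and exemplar-based evidence for a probe stimulus over invented binary features; the exponential-decay similarity function and the shared sensitivity parameter are common textbook choices, not the exact models fitted to the subject data.

      import numpy as np

      # Hypothetical binary-feature stimuli: rows are training exemplars of a category
      exemplars = np.array([[1, 1, 1, 0],
                            [1, 1, 0, 1],
                            [1, 0, 1, 1],
                            [0, 1, 1, 1]])
      prototype = (exemplars.mean(axis=0) >= 0.5).astype(int)  # modal feature values
      c = 2.0  # sensitivity parameter shared by both models

      def similarity(a, b):
          # Exponential decay in Hamming distance between feature vectors
          return np.exp(-c * np.abs(a - b).sum())

      probe = np.array([1, 1, 1, 1])
      proto_evidence = similarity(probe, prototype)
      exemplar_evidence = np.mean([similarity(probe, e) for e in exemplars])
      print("prototype evidence:", proto_evidence)
      print("exemplar evidence:", exemplar_evidence)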

  18. The accuracy of general practitioner workforce projections

    PubMed Central

    2013-01-01

    Background Health workforce projections are important instruments to prevent imbalances in the health workforce. For both the tenability and further development of these projections, it is important to evaluate the accuracy of workforce projections. In the Netherlands, health workforce projections have been done since 2000 to support health workforce planning. What is the accuracy of the techniques of these Dutch general practitioner workforce projections? Methods We backtested the workforce projection model by comparing the ex-post projected number of general practitioners with the observed number of general practitioners between 1998 and 2011. Averages of historical data were used for all elements except for inflow in training. As the required training inflow is the key result of the workforce planning model, and has actually determined past adjustments of training inflow, the accuracy of the model was backtested using the observed training inflow and not an average of historical data to avoid the interference of past policy decisions. The accuracy of projections with different lengths of projection horizon and base period (on which the projections are based) was tested. Results The workforce projection model underestimated the number of active Dutch general practitioners in most years. The mean absolute percentage errors range from 1.9% to 14.9%, with the projections being more accurate in more recent years. Furthermore, projections with a shorter projection horizon have a higher accuracy than those with a longer horizon. Unexpectedly, projections with a shorter base period have a higher accuracy than those with a longer base period. Conclusions According to the results of the present study, forecasting the size of the future workforce did not become more difficult between 1998 and 2011, as we originally expected. Furthermore, the projections with a short projection horizon and a short base period are more accurate than projections with a longer projection horizon and base period. We cautiously conclude that health workforce projections can be made with data based on relatively short base periods, although detailed data are still required to monitor and evaluate the health workforce. PMID:23866676
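
    The backtest metric reported above is the mean absolute percentage error; the short sketch below computes it for invented observed and ex-post projected GP headcounts.

      import numpy as np

      # Hypothetical observed and ex-post projected GP headcounts for a backtest window
      observed = np.array([7200, 7350, 7500, 7680, 7800])
      projected = np.array([7050, 7260, 7440, 7500, 7710])

      mape = np.mean(np.abs((observed - projected) / observed)) * 100.0
      print(f"mean absolute percentage error: {mape:.1f}%")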

  19. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right-angle or sighting distances. The probability of observing a point given its right-angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right-angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right-angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires knowledge of the transformation of r given by f(0|r).
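
    Under the classical result that density is D = n·f(0)/(2L) when g(0) = 1, the sketch below forms a crude nonparametric estimate of f(0) from the first histogram bin of invented perpendicular distances; the data, transect length and bin width are illustrative assumptions only.

      import numpy as np

      # Hypothetical right-angle (perpendicular) distances in meters along one transect
      y = np.array([1.2, 0.4, 3.1, 0.9, 2.2, 0.1, 1.8, 0.6, 2.9, 1.1])
      L = 1000.0  # transect length (m)
      n = len(y)

      # Nonparametric estimate of f(0) from the first histogram bin, assuming g(0) = 1
      w = 1.0  # bin width (m)
      f0 = (y < w).sum() / (n * w)

      # Line-transect density estimator: D = n * f(0) / (2 * L)
      D = n * f0 / (2.0 * L)
      print("estimated density (animals per square meter):", D)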

  20. 40 CFR 93.159 - Procedures for conformity determinations of general Federal actions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... based on the applicable air quality models, data bases, and other requirements specified in the most... applicable air quality models, data bases, and other requirements specified in the most recent version of the... data are available, such as actual stack test data from stationary sources which are part of the...

  1. Reproducing (Dis)advantage: The Role of Family-Based, School-Based, and Cumulative-Based Processes

    ERIC Educational Resources Information Center

    Conner, Sonya

    2012-01-01

    Pierre Bourdieu's theory of cultural and social reproduction (Bourdieu 1973; Bourdieu and Passeron 1977) offers a model that can be used to explain the existence of persistent educational stratification in the United States, which contributes to the perpetuation of social inequality more generally. This theoretical model posits three…

  2. The Effects of Recreation Experience, Environmental Attitude, and Biospheric Value on the Environmentally Responsible Behavior of Nature-Based Tourists

    NASA Astrophysics Data System (ADS)

    Lee, Tsung Hung; Jan, Fen-Hauh

    2015-07-01

    The scientific understanding of the recreation experience and the environmentally responsible behavior of nature-based tourists is limited. This study examines the relationship among the recreation experience, environmental attitude, biospheric value, and the general and site-specific environmentally responsible behavior of nature-based tourists in Taomi, Liuqiu Island, and Aowanda and Najenshan in Taiwan. A total of 1342 usable questionnaires were collected for this study. The empirical results indicate that the recreation experience influences biospheric value and environmental attitude; subsequently, it then indirectly influences the general and site-specific environmentally responsible behavior of nature-based tourists. Our theoretical behavioral model elucidates previously proposed but unexamined behavioral models among nature-based tourists, and it offers a theoretical framework for researchers, decision makers, managers, and tourists in the field of nature-based tourism. We conclude that when an individual participates in nature-based tourism as described here, these recreation experiences strengthen their environmental attitude and biospheric value, and consequently increase their engagement in both general and site-specific environmentally responsible behaviors.

  3. The effects of recreation experience, environmental attitude, and biospheric value on the environmentally responsible behavior of nature-based tourists.

    PubMed

    Lee, Tsung Hung; Jan, Fen-Hauh

    2015-07-01

    The scientific understanding of the recreation experience and the environmentally responsible behavior of nature-based tourists is limited. This study examines the relationship among the recreation experience, environmental attitude, biospheric value, and the general and site-specific environmentally responsible behavior of nature-based tourists in Taomi, Liuqiu Island, and Aowanda and Najenshan in Taiwan. A total of 1342 usable questionnaires were collected for this study. The empirical results indicate that the recreation experience influences biospheric value and environmental attitude; subsequently, it then indirectly influences the general and site-specific environmentally responsible behavior of nature-based tourists. Our theoretical behavioral model elucidates previously proposed but unexamined behavioral models among nature-based tourists, and it offers a theoretical framework for researchers, decision makers, managers, and tourists in the field of nature-based tourism. We conclude that when an individual participates in nature-based tourism as described here, these recreation experiences strengthen their environmental attitude and biospheric value, and consequently increase their engagement in both general and site-specific environmentally responsible behaviors.

  4. The propagation of inventory-based positional errors into statistical landslide susceptibility models

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas

    2016-12-01

    There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. The systematic comparisons of 12 models provided valuable evidence that the respective error-propagation was not only determined by the degree of positional inaccuracy inherent in the landslide data, but also by the spatial representation of landslides and the environment, landslide magnitude, the characteristics of the study area, the selected classification method and an interplay of predictors within multiple variable models. Based on the results, we deduced that a direct propagation of minor to moderate inventory-based positional errors into modelling results can be partly counteracted by adapting the modelling design (e.g. generalization of input data, opting for strongly generalizing classifiers). Since positional errors within landslide inventories are common and subsequent modelling and validation results are likely to be distorted, the potential existence of inventory-based positional inaccuracies should always be considered when assessing landslide susceptibility by means of empirical models.
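
    As a hedged sketch of the error-propagation experiment, the code below builds a synthetic landslide data set, perturbs the presumed positions by offsets of increasing scale, refits a logistic regression at each error level and reports the apparent AUC; the terrain function, sample size and error magnitudes are invented and only echo the 5-120 m range used in the study.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(42)

      # Synthetic terrain: a slope-like predictor drives landslide occurrence
      n = 2000
      coords = rng.uniform(0.0, 5000.0, size=(n, 2))
      slope = np.sin(coords[:, 0] / 800.0) + np.cos(coords[:, 1] / 600.0)
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(2.0 * slope - 0.5))))

      def auc_with_positional_error(sigma):
          # Shift the presumed landslide positions by a random offset of scale sigma (m)
          shifted = coords + rng.normal(0.0, sigma, size=coords.shape)
          x = (np.sin(shifted[:, 0] / 800.0) + np.cos(shifted[:, 1] / 600.0)).reshape(-1, 1)
          model = LogisticRegression().fit(x, y)
          return roc_auc_score(y, model.predict_proba(x)[:, 1])

      for sigma in (0.0, 5.0, 20.0, 120.0):
          print(f"positional error {sigma:5.1f} m -> AUC {auc_with_positional_error(sigma):.3f}")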

  5. A constitutive model for magnetostriction based on thermodynamic framework

    NASA Astrophysics Data System (ADS)

    Ho, Kwangsoo

    2016-08-01

    This work presents a general framework for the continuum-based formulation of dissipative materials with magneto-mechanical coupling from the viewpoint of irreversible thermodynamics. The thermodynamically consistent model developed for magnetic hysteresis is extended to include the magnetostrictive effect. The dissipative and hysteretic response of magnetostrictive materials is captured through the introduction of internal state variables. The evolution rates of the magnetostrictive strain and the magnetization are derived from thermodynamic and dissipative potentials in accordance with the general principles of thermodynamics. It is then demonstrated that the constitutive model is capable of describing the magneto-mechanical behavior by comparing simulation results with experimental data reported in the literature.

  6. Non-line-of-sight single-scatter propagation model for noncoplanar geometries.

    PubMed

    Elshimy, Mohamed A; Hranilovic, Steve

    2011-03-01

    In this paper, a geometrical propagation model is developed that generalizes the classical single-scatter model under the assumption of first-order scattering and non-line-of-sight (NLOS) communication. The generalized model considers the case of a noncoplanar geometry, where it overcomes the restriction that the transmitter and the receiver cone axes lie in the same plane. To verify the model, a Monte Carlo (MC) radiative transfer model based on a photon transport algorithm is constructed. Numerical examples for a wavelength of 266 nm are illustrated, which corresponds to a solar-blind NLOS UV communication system. A comparison of the temporal responses of the generalized model and the MC simulation results shows close agreement. Path loss and delay spread are also shown for different pointing directions.

  7. Micromechanics of metal matrix composites using the Generalized Method of Cells model (GMC) user's guide

    NASA Technical Reports Server (NTRS)

    Aboudi, Jacob; Pindera, Marek-Jerzy

    1992-01-01

    A user's guide for the program gmc.f is presented. The program is based on the generalized method of cells (GMC) model, which is capable, via a micromechanical analysis, of predicting the overall inelastic behavior of unidirectional, multi-phase composites from knowledge of the properties of the viscoplastic constituents. In particular, the program is sufficiently general to predict the response of unidirectional composites having variable fiber shapes and arrays.

  8. ℤ3 parafermionic chain emerging from Yang-Baxter equation.

    PubMed

    Yu, Li-Wei; Ge, Mo-Lin

    2016-02-23

    We construct the 1D ℤ3 parafermionic model based on the solution of the Yang-Baxter equation and express the model in terms of three types of fermions. It is shown that the ℤ3 parafermionic chain possesses both triply degenerate ground states and a non-trivial topological winding number. Hence, the ℤ3 parafermionic model is a direct generalization of the 1D ℤ2 Kitaev model. Both the ℤ2 and ℤ3 models can be obtained from the Yang-Baxter equation. On the other hand, to show the algebra of parafermionic tripling intuitively, we define a new 3-body Hamiltonian H123 based on the Yang-Baxter equation. In contrast to Majorana doubling, H123 holds triple degeneracy at each of its energy levels. The triple degeneracy is protected by two symmetry operators of the system, the ω-parity P and the emergent parafermionic operator Γ, which are generalizations of the parity PM and the emergent Majorana operator in the Lee-Wilczek model, respectively. Both the ℤ3 parafermionic model and H123 can be viewed as SU(3) models in color space. In comparison with the Majorana models for SU(2), it turns out that the SU(3) models are truly the generalization of Majorana models resulting from the Yang-Baxter equation.

  9. Gravitational redshift of galaxies in clusters as predicted by general relativity.

    PubMed

    Wojtak, Radosław; Hansen, Steen H; Hjorth, Jens

    2011-09-28

    The theoretical framework of cosmology is mainly defined by gravity, of which general relativity is the current model. Recent tests of general relativity within the Lambda Cold Dark Matter (ΛCDM) model have found a concordance between predictions and the observations of the growth rate and clustering of the cosmic web. General relativity has not hitherto been tested on cosmological scales independently of the assumptions of the ΛCDM model. Here we report an observation of the gravitational redshift of light coming from galaxies in clusters at the 99 per cent confidence level, based on archival data. Our measurement agrees with the predictions of general relativity and its modification created to explain cosmic acceleration without the need for dark energy (the f(R) theory), but is inconsistent with alternative models designed to avoid the presence of dark matter. © 2011 Macmillan Publishers Limited. All rights reserved

  10. Are We Predicting the Actual or Apparent Distribution of Temperate Marine Fishes?

    PubMed Central

    Monk, Jacquomo; Ierodiaconou, Daniel; Harvey, Euan; Rattray, Alex; Versace, Vincent L.

    2012-01-01

    Planning for resilience is the focus of many marine conservation programs and initiatives. These efforts aim to inform conservation strategies for marine regions to ensure they have inbuilt capacity to retain biological diversity and ecological function in the face of global environmental change – particularly changes in climate and resource exploitation. In the absence of direct biological and ecological information for many marine species, scientists are increasingly using spatially explicit, predictive-modeling approaches. Through improved access to multibeam sonar and underwater video technology, these models provide spatial predictions of the most suitable regions for an organism at resolutions previously not possible. However, sensible-looking, well-performing models can provide very different predictions of distribution depending on which occurrence dataset is used. To examine this, we construct species distribution models for nine temperate marine sedentary fishes for a 25.7 km2 study region off the coast of southeastern Australia. We use generalized linear models (GLM), generalized additive models (GAM) and maximum entropy (MAXENT) to build models based on co-located occurrence datasets derived from two underwater video methods (i.e. baited and towed video) and fine-scale multibeam sonar based seafloor habitat variables. Overall, this study found that the choice of modeling approach did not considerably influence the prediction of distributions based on the same occurrence dataset. However, greater dissimilarity between model predictions was observed across the nine fish taxa when the two occurrence datasets were compared (relative to models based on the same dataset). Based on these results it is difficult to draw any general conclusions regarding which video method provides more reliable occurrence datasets. Nonetheless, we suggest that such predictions reflect the species' apparent distribution (i.e. a combination of the species' distribution and the probability of detecting it). Consequently, we also encourage researchers and marine managers to interpret model predictions carefully. PMID:22536325

  11. Parent Ratings of ADHD Symptoms: Generalized Partial Credit Model Analysis of Differential Item Functioning across Gender

    ERIC Educational Resources Information Center

    Gomez, Rapson

    2012-01-01

    Objective: Generalized partial credit model, which is based on item response theory (IRT), was used to test differential item functioning (DIF) for the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.), inattention (IA), and hyperactivity/impulsivity (HI) symptoms across boys and girls. Method: To accomplish this, parents completed…

  12. Applications of General Systems Theory to the Development of an Adjustable Tutorial Software Machine.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1994-01-01

    Describes the construction of a model of computer-assisted instruction using a qualitative block diagram based on general systems theory (GST) as a framework. Subject matter representation is discussed, and appendices include system variables and system equations of the GST model, as well as an example of developing flexible courseware. (Contains…

  13. A General and Flexible Approach to Estimating the Social Relations Model Using Bayesian Methods

    ERIC Educational Resources Information Center

    Ludtke, Oliver; Robitzsch, Alexander; Kenny, David A.; Trautwein, Ulrich

    2013-01-01

    The social relations model (SRM) is a conceptual, methodological, and analytical approach that is widely used to examine dyadic behaviors and interpersonal perception within groups. This article introduces a general and flexible approach to estimating the parameters of the SRM that is based on Bayesian methods using Markov chain Monte Carlo…

  14. A Correlation-Based Transition Model using Local Variables. Part 2; Test Cases and Industrial Applications

    NASA Technical Reports Server (NTRS)

    Langtry, R. B.; Menter, F. R.; Likki, S. R.; Suzen, Y. B.; Huang, P. G.; Volker, S.

    2006-01-01

    A new correlation-based transition model has been developed, which is built strictly on local variables. As a result, the transition model is compatible with modern computational fluid dynamics (CFD) methods using unstructured grids and massive parallel execution. The model is based on two transport equations, one for the intermittency and one for the transition onset criteria in terms of momentum thickness Reynolds number. The proposed transport equations do not attempt to model the physics of the transition process (unlike, e.g., turbulence models), but form a framework for the implementation of correlation-based models into general-purpose CFD methods.

  15. Numerically pricing American options under the generalized mixed fractional Brownian motion model

    NASA Astrophysics Data System (ADS)

    Chen, Wenting; Yan, Bowen; Lian, Guanghua; Zhang, Ying

    2016-06-01

    In this paper, we introduce a robust numerical method, based on the upwind scheme, for the pricing of American puts under the generalized mixed fractional Brownian motion (GMFBM) model. By using portfolio analysis and applying the Wick-Itô formula, a partial differential equation (PDE) governing the prices of vanilla options under the GMFBM is successfully derived for the first time. Based on this, we formulate the pricing of American puts under the current model as a linear complementarity problem (LCP). Unlike the classical Black-Scholes (B-S) model or the generalized B-S model discussed in Cen and Le (2011), the newly obtained LCP under the GMFBM model is difficult to solve accurately because of the numerical instability that results from the degeneration of the governing PDE as time approaches zero. To overcome this difficulty, a numerical approach based on the upwind scheme is adopted. It is shown that the coefficient matrix of the current method is an M-matrix, which ensures its stability in the maximum-norm sense. Remarkably, we have managed to provide a sharp theoretical error estimate for the current method, which is further verified numerically. The results of various numerical experiments also suggest that this new approach is quite accurate, and can be easily extended to price other types of financial derivatives with an American-style exercise feature under the GMFBM model.
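
    The sketch below shows the general shape of such a scheme in the classical Black-Scholes limit: an explicit upwind finite-difference step followed by projection onto the early-exercise payoff, which is one standard way to treat the linear complementarity problem. The grid sizes, parameters and the explicit (rather than implicit M-matrix) time stepping are simplifications for illustration, not the paper's method, and the GMFBM dynamics would change the diffusion coefficient.

      import numpy as np

      # Explicit upwind scheme with payoff projection for an American put (toy B-S case)
      K, r, sigma, T = 100.0, 0.05, 0.2, 1.0
      S_max, M, N = 300.0, 120, 1000
      dS, dt = S_max / M, T / N
      S = np.linspace(0.0, S_max, M + 1)
      payoff = np.maximum(K - S, 0.0)
      V = payoff.copy()

      for _ in range(N):
          Vn = V.copy()
          i = np.arange(1, M)
          diff = 0.5 * sigma ** 2 * S[i] ** 2 * (Vn[i + 1] - 2 * Vn[i] + Vn[i - 1]) / dS ** 2
          conv = r * S[i] * (Vn[i + 1] - Vn[i]) / dS   # upwind difference: drift r*S >= 0
          V[i] = Vn[i] + dt * (diff + conv - r * Vn[i])
          V[0] = K                   # deep in-the-money boundary
          V[M] = 0.0                 # far-field boundary
          V = np.maximum(V, payoff)  # LCP projection: enforce early-exercise constraint

      print("American put value at S = 100:", V[np.searchsorted(S, 100.0)])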

  16. Dynamic modeling and optimal joint torque coordination of advanced robotic systems

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun

    The development of an efficient dynamic modeling algorithm and the subsequent optimal joint input load coordination of advanced robotic systems for industrial application is documented. A closed-form dynamic modeling algorithm for general closed-chain robotic linkage systems is presented. The algorithm is based on the transfer of system dependence from a set of open-chain Lagrangian coordinates to any desired set of generalized coordinates of the closed chain. Three different techniques for evaluation of the kinematic closed-chain constraints allow the representation of the dynamic modeling parameters in terms of system generalized coordinates and place no restriction on kinematic redundancy. The total computational requirement of the closed-chain system model is largely dependent on the computation required for the dynamic model of an open kinematic chain. In order to improve computational efficiency, an existing open-chain KIC-based dynamic formulation is modified by the introduction of the generalized augmented body concept. This algorithm allows a 44% computational saving over the currently optimized formulation (O(N^4); 5995 when N = 6). As a means of resolving redundancies in advanced robotic systems, local joint torque optimization is applied to use actuator power effectively while avoiding joint torque limits. The stability problem in local joint torque optimization schemes is eliminated by using fictitious dissipating forces which act in the necessary null space. The performance index representing the global torque norm is shown to be satisfactory. In addition, the resulting joint motion trajectory becomes conservative, after a transient stage, for repetitive cyclic end-effector trajectories. The effectiveness of the null space damping method is shown. The modular robot, which is built of well-defined structural modules from a finite-size inventory and is controlled by one general computer system, is another class of evolving, highly versatile, advanced robotic systems. Therefore, finally, a module-based dynamic modeling algorithm is presented for the dynamic coordination of such reconfigurable modular robotic systems. A user-interactive module-based manipulator analysis program (MBMAP) has been coded in the C language running on a Silicon Graphics 4D/70.

  17. Generalized Tavis-Cummings models and quantum networks

    NASA Astrophysics Data System (ADS)

    Gorokhov, A. V.

    2018-04-01

    The properties of quantum networks based on generalized Tavis-Cummings models are theoretically investigated. We calculate the information transfer success rate from one node to another in a simple model of a quantum network realized with two-level atoms placed in cavities and interacting with an external laser field and cavity photons. The method of the dynamical group of the Hamiltonian and the technique of the corresponding coherent states were used to investigate the temporal dynamics of the two-node model.

  18. QCD Sum Rules and Models for Generalized Parton Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anatoly Radyushkin

    2004-10-01

    I use QCD sum rule ideas to construct models for generalized parton distributions (GPDs). To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs, and two models are discussed. One of them takes the double Borel transform at an adjusted value of the Borel parameter as a model for nonforward parton densities, and the other is based on the local duality relation. Possible ways of improving these Ansätze are briefly discussed.

  19. Substrate inhibition kinetics of phenol biodegradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goudar, C.T.; Ganji, S.H.; Pujar, B.G.

    Phenol biodegradation was studied in batch experiments using an acclimated inoculum and initial phenol concentrations ranging from 0.1 to 1.3 g/L. Phenol depletion and associated microbial growth were monitored over time to provide information that was used to estimate the kinetics of phenol biodegradation. Phenol inhibited biodegradation at high concentrations, and a generalized substrate inhibition model based on statistical thermodynamics was used to describe the dynamics of microbial growth on phenol. For the experimental data obtained in this study, the generalized substrate inhibition model reduced to a form that is analogous to the Andrews equation, and the biokinetic parameters μ_max (maximum specific growth rate), K_s (saturation constant), and K_i (inhibition constant) were estimated as 0.251 h^-1, 0.011 g/L, and 0.348 g/L, respectively, using a nonlinear least squares technique. Given the wide variability in substrate inhibition models used to describe phenol biodegradation, an attempt was made to justify the selection of a particular model based on theoretical considerations. Phenol biodegradation data from nine previously published studies were used in the generalized substrate inhibition model to determine the appropriate form of the substrate inhibition model. In all nine cases, the generalized substrate inhibition model reduced to a form analogous to the Andrews equation, suggesting the suitability of the Andrews equation for describing phenol biodegradation data.
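
    As a hedged sketch of the estimation step, the code below fits the Andrews (Haldane) substrate-inhibition law mu(S) = mu_max*S / (K_s + S + S^2/K_i) to invented specific-growth-rate data by nonlinear least squares; the data points are illustrative, chosen only to resemble the reported parameter magnitudes.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical specific growth rates (1/h) at initial phenol concentrations (g/L)
      S = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.9, 1.3])
      mu = np.array([0.19, 0.17, 0.16, 0.11, 0.09, 0.07, 0.05])

      def andrews(S, mu_max, Ks, Ki):
          # Andrews (Haldane) substrate-inhibition kinetics
          return mu_max * S / (Ks + S + S ** 2 / Ki)

      popt, _ = curve_fit(andrews, S, mu, p0=[0.25, 0.01, 0.35])
      print("mu_max = %.3f 1/h, Ks = %.3f g/L, Ki = %.3f g/L" % tuple(popt))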

  20. A One-System Theory Which is Not Propositional.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2009-04-01

    We argue that the propositional and link-based approaches to human contingency learning represent different levels of analysis because propositional reasoning requires a basis, which is plausibly provided by a link-based architecture. Moreover, in their attempt to compare two general classes of models (link-based and propositional), Mitchell et al. refer to only two generic models and ignore the large variety of different models within each class.

  1. Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation

    NASA Astrophysics Data System (ADS)

    Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong

    2017-05-01

    Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually hampers such nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
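
    For orientation, the sketch below simulates the block structure a Hammerstein-Wiener representation implies: a static input nonlinearity feeding a linear dynamic block whose output passes through a second static nonlinearity. The saturation, first-order dynamics and cubic output map are illustrative stand-ins, not an engine model.

      import numpy as np

      def input_nl(u):
          return np.tanh(u)            # actuator-style saturation (Hammerstein block)

      def output_nl(x):
          return x + 0.1 * x ** 3      # mild sensor nonlinearity (Wiener block)

      a, b = 0.9, 0.1                  # linear block: x[k+1] = a*x[k] + b*v[k]
      x = 0.0
      u_seq = np.concatenate([np.zeros(10), np.ones(40)])  # step demand input
      y_seq = []
      for u in u_seq:
          v = input_nl(u)              # static input nonlinearity
          x = a * x + b * v            # linear dynamics
          y_seq.append(output_nl(x))   # static output nonlinearity

      print("output approaches:", y_seq[-1])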

  2. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  3. A Generalized Model of E-trading for GSR Fair Exchange Protocol

    NASA Astrophysics Data System (ADS)

    Konar, Debajyoti; Mazumdar, Chandan

    In this paper we propose a generalized model of E-trading for the development of GSR Fair Exchange Protocols. Based on the model, a method is described for implementing E-trading protocols that ensure fairness in the true sense without using an additional trusted third party that either party has to pay. The model provides scope to include the correctness-of-product, money-atomicity and customer-anonymity properties within an E-trading protocol. We conclude the paper by indicating the area of applicability of our model.

  4. Paper-based and web-based intervention modeling experiments identified the same predictors of general practitioners' antibiotic-prescribing behavior.

    PubMed

    Treweek, Shaun; Bonetti, Debbie; Maclennan, Graeme; Barnett, Karen; Eccles, Martin P; Jones, Claire; Pitts, Nigel B; Ricketts, Ian W; Sullivan, Frank; Weal, Mark; Francis, Jill J

    2014-03-01

    To evaluate the robustness of the intervention modeling experiment (IME) methodology as a way of developing and testing behavioral change interventions before a full-scale trial by replicating an earlier paper-based IME. Web-based questionnaire and clinical scenario study. General practitioners across Scotland were invited to complete the questionnaire and scenarios, which were then used to identify predictors of antibiotic-prescribing behavior. These predictors were compared with the predictors identified in an earlier paper-based IME and used to develop a new intervention. Two hundred seventy general practitioners completed the questionnaires and scenarios. The constructs that predicted simulated behavior and intention were attitude, perceived behavioral control, risk perception/anticipated consequences, and self-efficacy, which match the targets identified in the earlier paper-based IME. The choice of persuasive communication as an intervention in the earlier IME was also confirmed. Additionally, a new intervention, an action plan, was developed. A web-based IME replicated the findings of an earlier paper-based IME, which provides confidence in the IME methodology. The interventions will now be evaluated in the next stage of the IME, a web-based randomized controlled trial. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. DEVELOPMENT OF A PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR DELTAMETHRIN IN DEVELOPING SPRAGUE-DAWLEY RATS

    EPA Science Inventory

    This work describes the development of a physiologically based pharmacokinetic (PBPK) model of deltamethrin, a type II pyrethroid, in the developing male Sprague-Dawley rat. Generalized Michaelis-Menten equations were used to calculate metabolic rate constants and organ weights ...

  6. The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2007-01-01

    A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state for a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
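
    The hedged sketch below simulates the single-step SMM with a reflecting minimum microsatellite length and compares the resulting variance with the unbounded Gaussian (continuum) prediction; the mutation rate, lineage count and boundary position are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Repeat counts under a single-step SMM with a reflecting minimum length
      n_lineages, generations, mu, L_min, start = 20000, 500, 0.002, 5, 12
      repeats = np.full(n_lineages, start)

      for _ in range(generations):
          mutate = rng.random(n_lineages) < mu
          step = rng.choice([-1, 1], size=n_lineages)
          repeats = np.where(mutate, repeats + step, repeats)
          repeats = np.maximum(repeats, L_min)  # reflect at the minimum length

      # The unbounded continuum (Gaussian) theory predicts variance = mu * t
      print("simulated variance:", repeats.var())
      print("Gaussian prediction:", mu * generations)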

  7. a Proposal for Generalization of 3d Models

    NASA Astrophysics Data System (ADS)

    Uyar, A.; Ulugtekin, N. N.

    2017-11-01

    In recent years, 3D models have been created of many cities around the world. Most of these 3D city models have been introduced as purely graphic or geometric models, while the semantic and topographic aspects of the models have been neglected. In order to use 3D city models beyond a single task, generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Detail (LoD), an important concept in 3D modelling, can be understood as the degree of abstraction with which real-world objects are represented. The paper first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years, and concludes with a summary and an outlook on open problems and future work.

  8. The SMM model as a boundary value problem using the discrete diffusion equation.

    PubMed

    Campbell, Joel

    2007-12-01

    A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.

  9. Books and Balls: Antecedents and Outcomes of College Identification

    ERIC Educational Resources Information Center

    Porter, Thomas; Hartman, Katherine; Johnson, John Seth

    2011-01-01

    Identification plays a central role in models of giving to an organization. This study presents and tests a general model of giving that highlights status-based and affect-based drivers of identification. The model was tested using a sample of 114 alumni from 74 different colleges who participated in an online survey. Identification was found to…

  10. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 3. PROGRAM USER'S GUIDE

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four volumes. Moreover, the tests are generally applicable to other model evaluation problems. Volu...

  11. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 4. EVALUATION GUIDE

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four volumes. Moreover, the tests are generally applicable to other model evaluation problems. Volu...

  12. IRT Models for Ability-Based Guessing

    ERIC Educational Resources Information Center

    Martin, Ernesto San; del Pino, Guido; De Boeck, Paul

    2006-01-01

    An ability-based guessing model is formulated and applied to several data sets regarding educational tests in language and in mathematics. The formulation of the model is such that the probability of a correct guess does not only depend on the item but also on the ability of the individual, weighted with a general discrimination parameter. By so…

  13. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    PubMed Central

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2018-01-01

    The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology. PMID:28079526
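
    As a hedged analogue of the spectrum-derivation step, the sketch below recovers the relative weights of a few mono-energetic depth-dose components from a noisy composite curve with a Levenberg-Marquardt least-squares fit; the exponential components, attenuation coefficients and noise level are toy assumptions, far simpler than a real PDD.

      import numpy as np
      from scipy.optimize import least_squares

      depth = np.linspace(0.0, 20.0, 41)        # depth in water (cm)
      mu_bins = np.array([0.25, 0.15, 0.08])    # hypothetical attenuation per energy bin
      rng = np.random.default_rng(3)

      def depth_dose(weights):
          # Weighted sum of mono-energetic exponential depth-dose components
          return sum(w * np.exp(-m * depth) for w, m in zip(weights, mu_bins))

      true_w = np.array([0.2, 0.5, 0.3])
      measured = depth_dose(true_w) + rng.normal(0.0, 0.005, depth.size)

      fit = least_squares(lambda w: depth_dose(w) - measured, x0=[0.3, 0.3, 0.3], method="lm")
      print("recovered spectral weights:", np.round(fit.x, 3))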

  14. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  15. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE PAGES

    Bacelli, Giorgio; Coe, Ryan; Patterson, David; ...

    2017-04-01

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data are then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.

  16. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacelli, Giorgio; Coe, Ryan; Patterson, David

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data are then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.

  17. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have text fields, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
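
    As a hedged sketch of the kind of Markov reliability model such a tool generates, the code below evaluates a three-state continuous-time chain for a hypothetical duplex unit by matrix exponentiation; the states and the failure and repair rates are invented for illustration.

      import numpy as np
      from scipy.linalg import expm

      # States: 0 = both units up, 1 = one unit up, 2 = system failed (absorbing)
      lam, mu_rep = 1e-3, 1e-1   # failure and repair rates per hour (hypothetical)
      Q = np.array([[-2 * lam,          2 * lam,  0.0],
                    [  mu_rep, -(mu_rep + lam),   lam],
                    [     0.0,              0.0,  0.0]])

      p0 = np.array([1.0, 0.0, 0.0])
      for t in (100.0, 1000.0, 10000.0):
          p_t = p0 @ expm(Q * t)     # state distribution at time t
          print(f"t = {t:7.0f} h  reliability = {1.0 - p_t[2]:.6f}")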

  18. The Traditional Model Does Not Explain Attitudes Toward Euthanasia: A Web-Based Survey of the General Public in Finland.

    PubMed

    Terkamo-Moisio, Anja; Kvist, Tarja; Laitila, Teuvo; Kangasniemi, Mari; Ryynänen, Olli-Pekka; Pietilä, Anna-Maija

    2017-08-01

    The debate about euthanasia is ongoing in several countries, including Finland. However, there is a lack of information on current attitudes toward euthanasia among the general Finnish public. The traditional model for predicting individuals' attitudes to euthanasia is based on their age, gender, educational level, and religiosity. However, a new evaluation of religiosity is needed due to the limited operationalization of this factor in previous studies. This study explores the connections between the factors of the traditional model and the attitudes toward euthanasia among the general public in the Finnish context. The Finnish public's attitudes toward euthanasia have become remarkably more positive over the last decade. Further research is needed on the factors that predict euthanasia attitudes. We suggest two different explanatory models for consideration: one that emphasizes the value of individual autonomy and another that approaches euthanasia from the perspective of fears of death or the process of dying.

  19. A General Accelerated Degradation Model Based on the Wiener Process.

    PubMed

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
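
    As a hedged sketch of the underlying degradation process, the code below simulates Wiener-process paths whose drift grows with an acceleration variable and recovers drift and diffusion from the observed increments; the stress levels, rates and path lengths are invented for illustration, and the inference is far simpler than the paper's full ADT procedure.

      import numpy as np

      rng = np.random.default_rng(7)

      # Wiener degradation X(t) = d(s)*t + sigma*B(t), drift d(s) raised by stress s
      def simulate_path(drift, sigma, dt, n_steps):
          increments = drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_steps)
          return np.cumsum(increments)

      dt, n_steps, sigma = 1.0, 200, 0.3
      for stress, drift in [(1.0, 0.05), (2.0, 0.12)]:   # hypothetical stress levels
          x = simulate_path(drift, sigma, dt, n_steps)
          dx = np.diff(np.concatenate([[0.0], x]))
          drift_hat = dx.mean() / dt                     # increment-based drift estimate
          sigma_hat = np.sqrt(dx.var() / dt)             # increment-based diffusion estimate
          print(f"stress {stress}: drift_hat = {drift_hat:.3f}, sigma_hat = {sigma_hat:.3f}")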

  20. Validation of the generalized model of two-phase thermosyphon loop based on experimental measurements of volumetric flow rate

    NASA Astrophysics Data System (ADS)

    Bieliński, Henryk

    2016-09-01

    The current paper presents the experimental validation of the generalized model of the two-phase thermosyphon loop. The generalized model is based on mass, momentum, and energy balances in the evaporators, rising tube, condensers and falling tube. The theoretical analysis and the experimental data have been obtained for a newly designed variant. The variant refers to a thermosyphon loop with both minichannels and conventional tubes. The thermosyphon loop consists of an evaporator on the lower vertical section and a condenser on the upper vertical section. One-dimensional homogeneous and separated two-phase flow models were used in the calculations. The latest minichannel heat transfer correlations available in the literature were applied. A numerical analysis of the volumetric flow rate in the steady state has been done. The experiment was conducted on a specially designed test apparatus. Ultrapure water was used as the working fluid. The results show that the theoretical predictions are in good agreement with the measured volumetric flow rate at steady state.

  1. A General Accelerated Degradation Model Based on the Wiener Process

    PubMed Central

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearizable degradation paths. However, those methods are not applicable in situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed for accelerated degradation data analysis. The general model can capture the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant-stress and step-stress ADT. A simulation example and two real applications demonstrate that the proposed method yields reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107

  2. Training Australian General Practitioners in Rural Public Health: Impact, Desirability and Adaptability of Hybrid Problem-Based Learning

    ERIC Educational Resources Information Center

    Gladman, Justin; Perkins, David

    2013-01-01

    Context and Objective: Australian rural general practitioners (GPs) require public health knowledge. This study explored the suitability of teaching complex public health issues related to Aboriginal health by way of a hybrid problem-based learning (PBL) model within an intensive training retreat for GP registrars, when numerous trainees have no…

  3. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  4. An Interdisciplinary Model for Teaching Evolutionary Ecology.

    ERIC Educational Resources Information Center

    Coletta, John

    1992-01-01

    Describes a general systems evolutionary model and demonstrates how a previously established ecological model is a function of its past development based on the evolution of the rock, nutrient, and water cycles. Discusses the applications of the model in environmental education. (MDH)

  5. Exposure Related Dose Estimating Model

    EPA Science Inventory

    ERDEM is a physiologically based pharmacokinetic (PBPK) modeling system consisting of a general model and an associated front end. An actual model is defined when the user prepares an input command file. Such a command file defines the chemicals, compartments and processes that...

  6. WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING

    EPA Science Inventory

    A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...

  7. Novel forecasting approaches using combination of machine learning and statistical models for flood susceptibility mapping.

    PubMed

    Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah

    2018-07-01

    In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic curve (AUROC) belonged to boosted regression trees (0.975) and the lowest value was recorded for the generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. Despite the outstanding performance of some individual models, variability among their predictions was considerable. Therefore, to reduce uncertainty and produce more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
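    The EMmedian ensemble is simple to reproduce in outline: fit several classifiers, take the per-point median of their predicted probabilities, and score with AUROC. The sketch below uses scikit-learn with synthetic stand-in data rather than the Haraz watershed dataset, and the three member models are arbitrary choices, not the paper's eight.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Stand-in data; the study used flood events and random non-occurrence points.
X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

models = [LogisticRegression(max_iter=1000),
          GradientBoostingClassifier(),
          MLPClassifier(max_iter=1000)]

# Column k holds model k's predicted susceptibility for each test point.
probs = np.column_stack([m.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
                         for m in models])

# EMmedian: per-point median of the individual predictions.
em_median = np.median(probs, axis=1)
print("EMmedian AUROC:", roc_auc_score(yte, em_median))
```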

  8. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  9. DEVELOPMENT OF A PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR DELTAMETHRIN IN ADULT AND DEVELOPING SPRAGUE-DAWLEY RATS

    EPA Science Inventory

    This work describes the development of a physiologically based pharmacokinetic (PBPK) model of deltamethrin, a type II pyrethroid, in the developing male Sprague-Dawley rat. Generalized Michaelis-Menten equations were used to calculate metabolic rate constants and organ weights ...

  10. A Case-Based Learning Model in Orthodontics.

    ERIC Educational Resources Information Center

    Engel, Francoise E.; Hendricson, William D.

    1994-01-01

    A case-based, student-centered instructional model designed to mimic orthodontic problem solving and decision making in dental general practice is described. Small groups of students analyze case data, then record and discuss their diagnoses and treatments. Students and instructors rated the seminars positively, and students reported improved…

  11. The valuation of the EQ-5D in Portugal.

    PubMed

    Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Oppe, Mark

    2014-03-01

    The EQ-5D is a preference-based measure widely used in cost-utility analysis (CUA). Several countries have conducted surveys to derive value sets, but this was not the case for Portugal. The purpose of this study was to estimate a value set for the EQ-5D for Portugal using the time trade-off (TTO). A representative sample of the Portuguese general population (n = 450) stratified by age and gender valued 24 health states. Face-to-face interviews were conducted by trained interviewers. Each respondent ranked and valued seven health states using the TTO. Several models were estimated at both the individual and aggregated levels to predict health state valuations. Alternative functional forms were considered to account for the skewed distribution of these valuations. The models were analyzed in terms of their coefficients, overall fit, and their ability to predict the TTO values. Random effects models were estimated using generalized least squares and were robust across model specifications. The results are generally consistent with other value sets. This research provides the Portuguese EQ-5D value set based on the preferences of the Portuguese general population as measured by the TTO. This value set is recommended for use in CUA conducted in Portugal.
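    As a rough sketch of the estimation step, a random-intercept model per respondent can be fitted with statsmodels; the dummy covariates and simulated TTO values below are placeholders, not the EQ-5D design or data of the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)

# Stand-in valuation data: each respondent values several health states
# described by dummy-coded dimension levels (two dummies for brevity).
n_resp, n_states = 50, 7
df = pd.DataFrame({
    "resp": np.repeat(np.arange(n_resp), n_states),
    "mobility2": rng.integers(0, 2, n_resp * n_states),
    "pain2": rng.integers(0, 2, n_resp * n_states),
})
df["value"] = (1 - 0.2 * df.mobility2 - 0.3 * df.pain2
               + rng.normal(0, 0.1, len(df)))

# Random intercept per respondent, analogous in spirit to the random
# effects GLS specification used for value set estimation.
fit = smf.mixedlm("value ~ mobility2 + pain2", df, groups=df["resp"]).fit()
print(fit.params)
```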

  12. A dynamic simulation based water resources education tool.

    PubMed

    Williams, Alison; Lansey, Kevin; Washburne, James

    2009-01-01

    Educational tools to assist the public in recognizing impacts of water policy in a realistic context are not generally available. This project developed modeling-based educational decision support simulation tools to satisfy this need. The goal of this model is to teach undergraduate students and the general public about the implications of common water management alternatives so that they can better understand or become involved in water policy and make more knowledgeable personal or community decisions. The model is based on Powersim, a dynamic simulation software package capable of producing web-accessible, intuitive, graphic, user-friendly interfaces. Modules are included to represent residential, agricultural, industrial, and turf uses, as well as non-market values, water quality, reservoir, flow, and climate conditions. Supplementary materials emphasize important concepts and lead learners through the model, culminating in an open-ended water management project. The model is used in a University of Arizona undergraduate class and within the Arizona Master Watershed Stewards Program. Evaluation results demonstrated improved understanding of concepts and system interactions, fulfilling the project's objectives.

  13. Alternative probability theories for cognitive psychology.

    PubMed

    Narens, Louis

    2014-01-01

    Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.

  14. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).
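    A minimal numerical illustration of the right-angle-distance case: when g(0) = 1, the line-transect density estimator takes the form D = n f(0) / (2L), so one nonparametric route is to estimate f(0) with a kernel density reflected about zero. The half-normal detection function and all values below are assumptions for the demo, not from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Simulated right-angle distances under a half-normal detection function,
# which satisfies g(0) = 1.
sigma_true, L, n = 20.0, 1000.0, 200
y = np.abs(rng.normal(0.0, sigma_true, n))

# Nonparametric estimate of f(0): reflect the folded distances about zero
# so the kernel density does not bleed mass past the boundary.
kde = gaussian_kde(np.concatenate([y, -y]))
f0 = 2 * kde(0.0)[0]

# Line-transect density estimator: D = n * f(0) / (2 L).
D_hat = n * f0 / (2 * L)
print(D_hat)
```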

  15. Focus on Learning: A Report on Reorganizing General and Special Education in New York City.

    ERIC Educational Resources Information Center

    Fruchter, Norm; And Others

    This report is the result of a year-long evaluation of special education in New York City (New York) and presents major recommendations for reorganizing general and special education. It proposes a school-based model with an integrated general/special education system, and use of an enrichment allocation from merged special and general education…

  16. No-reference image quality assessment based on natural scene statistics and gradient magnitude similarity

    NASA Astrophysics Data System (ADS)

    Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang

    2014-11-01

    The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in agreement with human opinion, in which feature extraction is an important issue. However, the features used in the state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant; therefore, the performance of these models is limited. To further improve the performance of NR-IQA, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract the point-wise statistics for single pixel values, which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. Then a mapping is learned to predict quality scores using a support vector regression. The experimental results on the benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
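    A toy version of the two feature families can be assembled as follows: a moment-matching estimate of the generalized Gaussian shape parameter for the spatial statistics, plus similarity statistics of neighbouring gradient magnitudes, fed to a support vector regressor. The images, scores, and exact feature set are stand-ins, not the paper's pipeline.

```python
import numpy as np
from scipy.special import gamma
from scipy.ndimage import sobel
from sklearn.svm import SVR

def ggd_shape(x):
    """Moment-matching estimate of the generalized Gaussian shape parameter,
    via the ratio E|x|^2 / E[x^2] = Gamma(2/b)^2 / (Gamma(1/b) Gamma(3/b))."""
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    shapes = np.linspace(0.2, 4.0, 2000)
    r = gamma(2 / shapes) ** 2 / (gamma(1 / shapes) * gamma(3 / shapes))
    return shapes[np.argmin(np.abs(r - rho))]

def features(img, eps=1e-3):
    gx, gy = sobel(img, 0), sobel(img, 1)
    gm = np.hypot(gx, gy)
    # Similarity of each gradient magnitude to its right-hand neighbour.
    sim = (2 * gm[:, :-1] * gm[:, 1:] + eps) / (gm[:, :-1] ** 2
                                                + gm[:, 1:] ** 2 + eps)
    z = img - img.mean()
    return [ggd_shape(z.ravel()), z.std(), sim.mean(), sim.std()]

rng = np.random.default_rng(3)
imgs = [rng.random((32, 32)) for _ in range(40)]
scores = rng.random(40)  # stand-in subjective quality scores
model = SVR().fit([features(i) for i in imgs], scores)
```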

  17. Model Selection Indices for Polytomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.; Sung, Hyun-Jung

    2009-01-01

    This study examines the utility of four indices for use in model selection with nested and nonnested polytomous item response theory (IRT) models: a cross-validation index and three information-based indices. Four commonly used polytomous IRT models are considered: the graded response model, the generalized partial credit model, the partial credit…

  18. Building Hierarchical Representations for Oracle Character and Sketch Recognition.

    PubMed

    Jun Guo; Changhu Wang; Roman-Rangel, Edgar; Hongyang Chao; Yong Rui

    2016-01-01

    In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems suffered when addressing these representations and determine several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performances over both approaches. This solution has beaten humans at recognizing general sketches.

  19. TOPICS IN THEORY OF GENERALIZED PARTON DISTRIBUTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, Anatoly V.

    Several topics in the theory of generalized parton distributions (GPDs) are reviewed. First, we give a brief overview of the basics of the theory of generalized parton distributions and their relationship with simpler phenomenological functions, viz. form factors, parton densities and distribution amplitudes. Then, we discuss recent developments in building models for GPDs that are based on the formalism of double distributions (DDs). Special attention is given to a careful analysis of the singularity structure of DDs. The DD formalism is applied to the construction of model GPDs with singular Regge behavior. Within the developed DD-based approach, we discuss the structure of GPD sum rules. It is shown that separation of DDs into the so-called ``plus'' part and the $D$-term part may be treated as a renormalization procedure for the GPD sum rules. This approach is compared with an alternative prescription based on analytic regularization.

  20. An EKV-based high voltage MOSFET model with improved mobility and drift model

    NASA Astrophysics Data System (ADS)

    Chauhan, Yogesh Singh; Gillon, Renaud; Bakeroot, Benoit; Krummenacher, Francois; Declercq, Michel; Ionescu, Adrian Mihai

    2007-11-01

    An EKV-based high voltage MOSFET model is presented. The intrinsic channel model is derived from the charge-based EKV formalism. An improved mobility model is used for the modeling of the intrinsic channel to improve the DC characteristics. The model uses a second-order dependence on the gate bias and an extra parameter for smoothing the saturation voltage of the intrinsic drain. An improved drift model [Chauhan YS, Anghel C, Krummenacher F, Ionescu AM, Declercq M, Gillon R, et al. A highly scalable high voltage MOSFET model. In: IEEE European solid-state device research conference (ESSDERC), September 2006. p. 270-3; Chauhan YS, Anghel C, Krummenacher F, Maier C, Gillon R, Bakeroot B, et al. Scalable general high voltage MOSFET model including quasi-saturation and self-heating effect. Solid State Electron 2006;50(11-12):1801-13] is used for the modeling of the drift region, which gives a smoother transition in the output characteristics and also models the quasi-saturation region of high voltage MOSFETs well. First, the model is validated on the numerical device simulation of the VDMOS transistor and then on the measured characteristics of the SOI-LDMOS transistor. The accuracy of the model is better than our previous model [Chauhan YS, Anghel C, Krummenacher F, Maier C, Gillon R, Bakeroot B, et al. Scalable general high voltage MOSFET model including quasi-saturation and self-heating effect. Solid State Electron 2006;50(11-12):1801-13], especially in the quasi-saturation region of the output characteristics.

  1. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  2. Representing Micro-Macro Linkages by Actor-Based Dynamic Network Models

    PubMed Central

    Snijders, Tom A.B.; Steglich, Christian E.G.

    2014-01-01

    Stochastic actor-based models for network dynamics have the primary aim of statistical inference about processes of network change, but may be regarded as a kind of agent-based models. Similar to many other agent-based models, they are based on local rules for actor behavior. Different from many other agent-based models, by including elements of generalized linear statistical models they aim to be realistic detailed representations of network dynamics in empirical data sets. Statistical parallels to micro-macro considerations can be found in the estimation of parameters determining local actor behavior from empirical data, and the assessment of goodness of fit from the correspondence with network-level descriptives. This article studies several network-level consequences of dynamic actor-based models applied to represent cross-sectional network data. Two examples illustrate how network-level characteristics can be obtained as emergent features implied by micro-specifications of actor-based models. PMID:25960578

  3. Rhombic micro-displacement amplifier for piezoelectric actuator and its linear and hybrid model

    NASA Astrophysics Data System (ADS)

    Chen, Jinglong; Zhang, Chunlin; Xu, Minglong; Zi, Yanyang; Zhang, Xinong

    2015-01-01

    This paper proposes a rhombic micro-displacement amplifier (RMDA) for a piezoelectric actuator (PA). First, the geometric amplification relations are analyzed and a linear model is built to analyze the mechanical and electrical properties of this amplifier. Next, an accurate modeling method for the amplifier is studied for the important application of precise servo control. The classical Preisach model (CPM) is generally implemented using a numerical technique based on first-order reversal curves (FORCs), and its accuracy mainly depends on the number of FORCs. However, it is generally difficult to obtain a sufficient number of FORCs in practice. A support vector machine (SVM) is therefore employed to circumvent this deficiency of the CPM. A hybrid model based on the discrete CPM and the SVM is then developed to account for hysteresis and dynamic effects. Finally, experimental validation is carried out. The results show that this amplifier, together with the hybrid model, is suitable for control applications.
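    For reference, a discrete classical Preisach model is just a weighted sum of relay hysterons, each switching up at a threshold alpha and down at a threshold beta. The sketch below, with randomly placed hysterons and weights (assumptions for illustration only), shows the mechanism that FORC-based identification, and the SVM surrogate, approximate.

```python
import numpy as np

def preisach(u, alphas, betas, weights):
    """Discrete classical Preisach model: weighted relay hysterons that
    switch to +1 when input >= alpha and to -1 when input <= beta."""
    state = -np.ones_like(weights)
    out = []
    for x in u:
        state = np.where(x >= alphas, 1.0,
                         np.where(x <= betas, -1.0, state))
        out.append(np.dot(weights, state))
    return np.array(out)

rng = np.random.default_rng(10)
alphas = rng.uniform(0.0, 1.0, 100)
betas = alphas - rng.uniform(0.05, 0.5, 100)  # ensures alpha > beta
weights = rng.uniform(0.0, 1.0, 100)

# Rising then falling input sweeps trace different branches: hysteresis.
u = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, -0.2, 50)])
y = preisach(u, alphas, betas, weights)
print(y[:3], y[-3:])
```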

  4. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  5. An Analysis of a Model for Developing Instructional Materials for Teaching Physical Science Concepts for Grade 8 Students in the Republic of China.

    ERIC Educational Resources Information Center

    Hsu, Shun-Yi

    An instructional model based on a learning cycle including correlation, analysis, and generalization (CAG) was developed and applied to design an instructional module for grade 8 students in Taiwan, Republic of China. The CAG model was based on Piagetian theory and a concept model (Pella, 1975). The module developed for heat and temperature was…

  6. The Economic Impact of the President’s 2013 Budget

    DTIC Science & Technology

    2012-04-01

    and capital. According to the Solow-type model, people base their decisions about working and saving primarily on current economic... model developed by Robert Solow. CBO's life-cycle growth model is an overlapping-generations general-equilibrium model that is based on a standard... services produced in a given period by the labor and capital supplied by the country's residents, regardless of where the labor

  7. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design databases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on an object-oriented language (Flavors).

  8. Information visualisation based on graph models

    NASA Astrophysics Data System (ADS)

    Kasyanov, V. N.; Kasyanova, E. V.

    2013-05-01

    Information visualisation is a key component of support tools for many applications in science and engineering. A graph is an abstract structure that is widely used to model information for its visualisation. In this paper, we consider a practical and general graph formalism called hierarchical graphs and present the Higres and Visual Graph systems aimed at supporting information visualisation on the basis of hierarchical graph models.

  9. Meeting in Turkey: WASP Transport Modeling and WASP Ecological Modeling

    EPA Science Inventory

    A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...

  10. Meeting in Korea: WASP Transport Modeling and WASP Ecological Modeling

    EPA Science Inventory

    A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...

  11. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  12. Investigating Island Evolution: A Galapagos-Based Lesson Using the 5E Instructional Model.

    ERIC Educational Resources Information Center

    DeFina, Anthony V.

    2002-01-01

    Introduces an inquiry-based lesson plan on evolution and the Galapagos Islands. Uses the 5E instructional model which includes phases of engagement, exploration, explanation, elaboration, and evaluation. Includes information on species for exploration and elaboration purposes, and a general rubric for student evaluation. (YDS)

  13. Stratigraphy of the crater Copernicus

    NASA Technical Reports Server (NTRS)

    Paquette, R.

    1984-01-01

    The stratigraphy of Copernicus, based on its olivine absorption bands, is presented. Earth-based spectral data are used to develop models that also employ cratering mechanics to devise theories for Copernican geomorphology. General geologic information, spectral information, upper and lower stratigraphic units, and a chart for model comparison are included in the stratigraphic analysis.

  14. General theoretical description of angle-resolved photoemission spectroscopy of van der Waals structures

    NASA Astrophysics Data System (ADS)

    Amorim, B.

    2018-04-01

    We develop a general theory to model the angle-resolved photoemission spectroscopy (ARPES) of commensurate and incommensurate van der Waals (vdW) structures, formed by lattice mismatched and/or misaligned stacked layers of two-dimensional materials. The present theory is based on a tight-binding description of the structure and the concept of generalized umklapp processes, going beyond previous descriptions of ARPES in incommensurate vdW structures, which are based on continuous, low-energy models, being limited to structures with small lattice mismatch/misalignment. As applications of the general formalism, we study the ARPES bands and constant energy maps for two structures: twisted bilayer graphene and twisted bilayer MoS2. The present theory should be useful in correctly interpreting experimental results of ARPES of vdW structures and other systems displaying competition between different periodicities, such as two-dimensional materials weakly coupled to a substrate and materials with density wave phases.

  15. Confirmatory Factor Analysis of the Combined Social Phobia Scale and Social Interaction Anxiety Scale: Support for a Bifactor Model.

    PubMed

    Gomez, Rapson; Watson, Shaun D

    2017-01-01

    For the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) together, this study examined support for a bifactor model, and also the internal consistency reliability and external validity of the factors in this model. Participants (N = 526) were adults from the general community who completed the SPS and SIAS. Confirmatory factor analysis (CFA) of their ratings indicated good support for the bifactor model. For this model, the loadings for all but six items were higher on the general factor than the specific factors. The three positively worded items had negligible loadings on the general factor. The general factor explained most of the common variance in the SPS and SIAS, and demonstrated good model-based internal consistency reliability (omega hierarchical) and a strong association with fear of negative evaluation and extraversion. The practical implications of the findings for the utilization of the SPS and SIAS, and the theoretical and clinical implications for social anxiety are discussed.
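    Omega hierarchical itself is a simple function of the standardized bifactor loadings: the squared sum of the general-factor loadings divided by the model-implied total variance. The loadings below are hypothetical, not the SPS/SIAS estimates from the study.

```python
import numpy as np

# Hypothetical standardized bifactor loadings for 12 items:
# one general factor plus two specific (group) factors.
general = np.array([.6, .7, .65, .6, .55, .7, .6, .5, .6, .65, .7, .6])
spec1 = np.r_[np.full(6, 0.30), np.zeros(6)]
spec2 = np.r_[np.zeros(6), np.full(6, 0.35)]

# Item uniquenesses under the bifactor structure.
uniq = 1 - (general**2 + spec1**2 + spec2**2)
total_var = (general.sum()**2 + spec1.sum()**2
             + spec2.sum()**2 + uniq.sum())

# Omega hierarchical: share of total variance due to the general factor.
omega_h = general.sum()**2 / total_var
print(round(omega_h, 3))
```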

  16. Confirmatory Factor Analysis of the Combined Social Phobia Scale and Social Interaction Anxiety Scale: Support for a Bifactor Model

    PubMed Central

    Gomez, Rapson; Watson, Shaun D.

    2017-01-01

    For the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) together, this study examined support for a bifactor model, and also the internal consistency reliability and external validity of the factors in this model. Participants (N = 526) were adults from the general community who completed the SPS and SIAS. Confirmatory factor analysis (CFA) of their ratings indicated good support for the bifactor model. For this model, the loadings for all but six items were higher on the general factor than the specific factors. The three positively worded items had negligible loadings on the general factor. The general factor explained most of the common variance in the SPS and SIAS, and demonstrated good model-based internal consistency reliability (omega hierarchical) and a strong association with fear of negative evaluation and extraversion. The practical implications of the findings for the utilization of the SPS and SIAS, and the theoretical and clinical implications for social anxiety are discussed. PMID:28210232

  17. XML-Based SHINE Knowledge Base Interchange Language

    NASA Technical Reports Server (NTRS)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  18. [The role of art therapy in the rehabilitation of psycho-socially disabled people].

    PubMed

    Simon, Lajos; Kovács, Emese

    2015-01-01

    The present review focuses on the generally accepted and applied community psychiatry based models of psycho-social rehabilitation. The basics of the Strengths model and the Recovery based model are introduced in this paper. Both models can be assisted by art therapy in various ways. The forms and the therapeutic factors of art therapy are also discussed, as well as the effects of the experience of creating during the art therapy sessions. The authors introduce the good practice of the Moravcsik Foundation, with highlights on two special areas that go beyond generally applied art therapy work and represent important support in reaching the goals set during the rehabilitation process. Further, the authors describe the Budapest Art Brut Gallery and the PsychArt24 art marathon project in detail.

  19. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrode and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCC is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
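    The acceleration-function part of such a model is often taken to be a Prokopowicz-Vaskas-style power law in voltage combined with an Arrhenius term in temperature; whether that matches the authors' exact function is not stated here, so treat the sketch as generic. The exponent and activation energy below are illustrative assumptions, and the structural/empirical term is omitted.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(V_use, V_stress, T_use, T_stress, n=3.0, Ea=1.2):
    """Generic voltage/temperature acceleration factor: power law in
    voltage times an Arrhenius term (n and Ea are assumed values)."""
    volt = (V_stress / V_use) ** n
    temp = np.exp(Ea / K_B * (1 / T_use - 1 / T_stress))
    return volt * temp

# Example: stress test at 2x rated voltage and 125 C vs. use at 85 C.
af = acceleration_factor(V_use=50, V_stress=100, T_use=358.0, T_stress=398.0)
print(af)
```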

  20. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation for potential space project applications of multilayer ceramic capacitors (MLCCs) with Ni electrode and BaTiO3 dielectric material requires an in-depth understanding of the MLCCs reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed in this paper. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitors reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.

  1. Processing Technology Selection for Municipal Sewage Treatment Based on a Multi-Objective Decision Model under Uncertainty.

    PubMed

    Chen, Xudong; Xu, Zhongwen; Yao, Liming; Ma, Ning

    2018-03-05

    This study considers the two factors of environmental protection and economic benefits to address municipal sewage treatment. Based on considerations regarding the sewage treatment plant construction site, processing technology, capital investment, operation costs, water pollutant emissions, water quality and other indicators, we establish a general multi-objective decision model for optimizing municipal sewage treatment plant construction. Using the construction of a sewage treatment plant in a suburb of Chengdu as an example, this paper tests the general multi-objective decision model for sewage treatment plant construction by implementing a genetic algorithm. The results show the applicability and effectiveness of the multi-objective decision model for the sewage treatment plant. This paper provides decision and technical support for the optimization of municipal sewage treatment.
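    A compact way to see the optimization step is a toy weighted-sum genetic algorithm over two conflicting objectives. The objective functions, weights, and GA settings below are invented for illustration and are far simpler than the plant-siting model in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy two-objective problem standing in for the sewage-plant model:
# x = (capacity, treatment intensity); both objectives are minimized.
def cost(x):      return 2.0 * x[:, 0] + 5.0 * x[:, 1] ** 2
def emissions(x): return 10.0 / (1.0 + x[:, 0] * x[:, 1])

def fitness(x, w=(0.5, 0.5)):
    # Weighted-sum scalarization of the two objectives.
    return w[0] * cost(x) + w[1] * emissions(x)

pop = rng.uniform(0.1, 5.0, size=(60, 2))
for _ in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:30]]                    # truncation selection
    kids = parents[rng.integers(0, 30, 60)] + rng.normal(0, 0.1, (60, 2))
    pop = np.clip(kids, 0.1, 5.0)                        # mutation + bounds

best = pop[np.argmin(fitness(pop))]
print(best, cost(best[None])[0], emissions(best[None])[0])
```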

  2. Ensemble-Based Parameter Estimation in a Coupled General Circulation Model

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-09-10

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
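    The mechanics of ensemble-based parameter estimation can be shown with a toy twin experiment: augment the state with the parameter and let the ensemble covariance between the parameter and the observed variable carry the correction. Everything below (the one-variable model, noise levels, ensemble size) is an illustrative stand-in for the coupled GCM setting.

```python
import numpy as np

rng = np.random.default_rng(5)

def step(x, a):
    """Toy 'model': x relaxes toward the unknown parameter a."""
    return x + 0.1 * (a - x)

a_true, n_ens, obs_err = 2.5, 50, 0.1
x_ens = rng.normal(0.0, 1.0, n_ens)
a_ens = rng.normal(1.0, 0.5, n_ens)   # prior parameter ensemble
x_truth = 0.0

for _ in range(300):
    x_truth = step(x_truth, a_true)
    obs = x_truth + rng.normal(0.0, obs_err)
    x_ens = step(x_ens, a_ens)
    # Augmented-state EnKF update with perturbed observations: the
    # parameter is corrected through its covariance with observed x.
    innov = obs + rng.normal(0.0, obs_err, n_ens) - x_ens
    var_x = np.var(x_ens, ddof=1)
    cov_ax = np.cov(a_ens, x_ens)[0, 1]
    denom = var_x + obs_err ** 2
    a_ens = a_ens + (cov_ax / denom) * innov
    x_ens = x_ens + (var_x / denom) * innov

print(a_ens.mean())  # drifts toward a_true = 2.5
```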

  3. A Function-Based Intervention to Increase a Second-Grade Student's On-Task Behavior in a General Education Classroom

    ERIC Educational Resources Information Center

    Germer, Kathryn A.; Kaplan, Lauren M.; Giroux, Lindsay N.; Markham, Elizabeth H.; Ferris, Geoffrey J.; Oakes, Wendy P.; Lane, Kathleen Lynne

    2011-01-01

    A functional assessment-based intervention (FABI) was designed and implemented to increase the on-task behavior of David, a second-grade student in a general education classroom. David attended an elementary school that used a comprehensive, integrated, three-tiered (CI3T) model of prevention. The school's principal nominated David for Project…

  4. Protege Career Aspirations: The Influence of Formal E-Mentor Networks and Family-Based Role Models

    ERIC Educational Resources Information Center

    DiRenzo, Marco S.; Weer, Christy H.; Linnehan, Frank

    2013-01-01

    Using longitudinal data from a nine-month e-mentoring program, we analyzed the influence of formal e-mentor networks and family-based role models on increases in both psychosocial and career-related outcomes. Findings indicate that e-mentor network relationship quality positively influenced general- and career-based self-efficacy which, in turn,…

  5. Energy Savings Forecast of SSL in General Illumination Report Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-09-30

    Summary of the DOE report Energy Savings Forecast of Solid-State Lighting in General Illumination Applications, a biannual report that models the adoption of LEDs in the U.S. general-lighting market, along with associated energy savings, based on the full potential DOE has determined to be technically feasible over time.

  6. Generalized Maintenance Trainer Simulator: Development of Hardware and Software. Final Report.

    ERIC Educational Resources Information Center

    Towne, Douglas M.; Munro, Allen

    A general purpose maintenance trainer, which has the potential to simulate a wide variety of electronic equipments without hardware changes or new computer programs, has been developed and field tested by the Navy. Based on a previous laboratory model, the Generalized Maintenance Trainer Simulator (GMTS) is a relatively low cost trainer that…

  7. Effects of Tiered Training on General Educators' Use of Specific Praise

    ERIC Educational Resources Information Center

    Thompson, Michele Terry; Marchant, Michelle; Anderson, Darlene; Prater, Mary Anne; Gibb, Gordon

    2012-01-01

    Research suggests a compelling correlation between teacher behavior and effective learning environments. Focusing on the evidence-based teaching skill of offering behavior-specific praise (BSP), the researchers worked with three elementary-level general educators in a tiered model of training generally known as response to intervention (RtI).…

  8. Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.

    PubMed

    van Elburg, Ronald A J; van Ooyen, Arjen

    2009-07-01

    An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
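    The essence of event-based integration is that the state has a closed-form solution between events, so the simulation only computes at spike times. Below is a minimal sketch for the simpler single-exponential excitatory current case (the letter also treats double-exponential inhibitory currents); the time constants are chosen arbitrarily, threshold/reset handling is omitted for brevity, and tau_m != tau_s is assumed.

```python
import numpy as np

tau_m, tau_s = 20.0, 5.0   # membrane and synaptic time constants (ms)

def advance(v, g, dt):
    """Exact solution of tau_m v' = -v + g, tau_s g' = -g over dt."""
    g_new = g * np.exp(-dt / tau_s)
    # Convolution of the exponential synaptic current with membrane decay.
    coef = tau_s / (tau_m - tau_s)
    v_new = (v * np.exp(-dt / tau_m)
             + coef * g * (np.exp(-dt / tau_m) - np.exp(-dt / tau_s)))
    return v_new, g_new

v, g, t_prev = 0.0, 0.0, 0.0
for t_spk, w in [(5.0, 0.8), (12.0, 0.5), (30.0, 1.1)]:  # (time, weight)
    v, g = advance(v, g, t_spk - t_prev)   # jump straight to the event
    g += w                                 # synaptic conductance kick
    t_prev = t_spk
print(v, g)
```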

  9. A shared-care model of obesity treatment for 3-10 year old children: protocol for the HopSCOTCH randomised controlled trial.

    PubMed

    Wake, Melissa; Lycett, Kate; Sabin, Matthew A; Gunn, Jane; Gibbons, Kay; Hutton, Cathy; McCallum, Zoe; York, Elissa; Stringer, Michael; Wittert, Gary

    2012-03-28

    Despite record rates of childhood obesity, effective evidence-based treatments remain elusive. While prolonged tertiary specialist clinical input has some individual impact, these services are only available to very few children. Effective treatments that are easily accessible to all overweight and obese children in the community are urgently required. General practitioners are logical care providers for obese children, but high-quality trials indicate that, even with substantial training and support, general practitioner care alone will not suffice to improve body mass index (BMI) trajectories. HopSCOTCH (the Shared Care Obesity Trial in Children) will determine whether a shared-care model, in which paediatric obesity specialists co-manage obesity with general practitioners, can improve adiposity in obese children. Design: randomised controlled trial nested within a cross-sectional BMI survey conducted across 22 general practices in Melbourne, Australia. Participants: children aged 3-10 years identified as obese by Centers for Disease Control criteria at their family practice, and randomised to either a shared-care intervention or usual care. Intervention: a single multidisciplinary obesity clinic appointment at Melbourne's Royal Children's Hospital, followed by regular appointments with the child's general practitioner over a 12-month period. To support both specialist and general practice consultations, web-based shared-care software was developed to record assessments, set goals and actions, provide information to caregivers, facilitate communication between the two professional groups, and jointly track progress. Outcomes: primary - change in BMI z-score; secondary - change in percentage fat and waist circumference, health status, body satisfaction, and global self-worth. This will be the first efficacy trial of a general practitioner-based, shared-care model of childhood obesity management. If effective, it could greatly improve access to care for obese children. Australian New Zealand Clinical Trials Registry ACTRN12608000055303.

  10. Spherically-symmetric solutions in general relativity using a tetrad-based approach

    NASA Astrophysics Data System (ADS)

    Kim, Do Young; Lasenby, Anthony N.; Hobson, Michael P.

    2018-03-01

    We present a tetrad-based method for solving the Einstein field equations for spherically-symmetric systems and compare it with the widely-used Lemaître-Tolman-Bondi (LTB) model. In particular, we focus on the issues of gauge ambiguity and the use of comoving versus `physical' coordinate systems. We also clarify the correspondences between the two approaches, and illustrate their differences by applying them to the classic examples of the Schwarzschild and Friedmann-Lemaître-Robertson-Walker spacetimes. We demonstrate that the tetrad-based method does not suffer from the gauge freedoms inherent to the LTB model, naturally accommodates non-uniform pressure and has a more transparent physical interpretation. We further apply our tetrad-based method to a generalised form of `Swiss cheese' model, which consists of an interior spherical region surrounded by a spherical shell of vacuum that is embedded in an exterior background universe. In general, we allow the fluid in the interior and exterior regions to support pressure, and do not demand that the interior region be compensated. We pay particular attention to the form of the solution in the intervening vacuum region and illustrate the validity of Birkhoff's theorem at both the metric and tetrad level. We then critically reconsider the original theoretical arguments underlying the so-called Rh = ct cosmological model, which has recently received considerable attention. These considerations in turn illustrate the interesting behaviour of a number of `horizons' in general cosmological models.

  11. Simulation of green roof runoff under different substrate depths and vegetation covers by coupling a simple conceptual and a physically based hydrological model.

    PubMed

    Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A

    2017-09-15

    In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing HYDRUS-1D software. The use of such an approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and ability to be easily integrated in decision support tools with the capacity of the physically based simulation model to be easily transferred in conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched very closely the observed hydrographs. In general, the conceptual model performed better than the physically based simulation model but the overall performance of both models was sufficient in most cases as it is revealed by the Nash-Sutcliffe Efficiency index which was generally greater than 0.70. Finally, it was showcased how a physically based and a simple conceptual model can be jointly used to allow the use of the simple conceptual model for a wider set of conditions than the available experimental data and in order to support green roof design. Copyright © 2017 Elsevier Ltd. All rights reserved.
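    A simple conceptual model of the kind described can be as small as a single storage reservoir with an overflow outlet, evaluated with the Nash-Sutcliffe Efficiency. The sketch below is a generic bucket model, not the authors' calibrated formulation; the parameters and rainfall series are synthetic.

```python
import numpy as np

def bucket_runoff(rain, capacity=30.0, k=0.2):
    """Single-reservoir conceptual green-roof model: the substrate stores
    water up to `capacity` (mm); storage above it drains at rate k."""
    storage, runoff = 0.0, []
    for p in rain:
        storage += p
        q = k * max(storage - capacity, 0.0)
        storage -= q
        runoff.append(q)
    return np.array(runoff)

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency; 1.0 indicates a perfect fit."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(6)
rain = rng.gamma(0.5, 4.0, 200)
obs = bucket_runoff(rain, capacity=28.0, k=0.25)  # stand-in "observations"
sim = bucket_runoff(rain)                         # default-parameter model
print(nse(obs, sim))
```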

  12. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume induced base convective heating and plume radiation. It should be considered to be an approximate method for evaluating trends as a function of configuration variables because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory in the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  13. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.
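    The feature-extraction step can be sketched with PyWavelets: take a multilevel DWT per trial, keep a common set of high-power coefficients, and regress performance on them. The simulated ERPs, performance score, and the power criterion (variance across trials) below are stand-ins for the reported procedure, not a reproduction of it.

```python
import numpy as np
import pywt  # PyWavelets
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Stand-in single-channel ERPs (100 trials x 256 samples) and a
# behavioral performance score loosely tied to an early ERP window.
erps = rng.standard_normal((100, 256))
perf = erps[:, 30:60].mean(axis=1) + 0.1 * rng.standard_normal(100)

# Multilevel DWT per trial, flattened into one coefficient vector.
all_coeffs = np.array([np.concatenate(pywt.wavedec(e, 'db4', level=4))
                       for e in erps])

# Keep a common set of "high-power" coefficients across trials.
idx = np.argsort(all_coeffs.var(axis=0))[-16:]
X = all_coeffs[:, idx]

model = LinearRegression().fit(X, perf)
print(model.score(X, perf))
```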

  14. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.

  15. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Results on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
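    The generalized p-shrinkage mapping mentioned here has a simple closed form in Chartrand's formulation, which we assume is the one intended: it thresholds at the same level as soft thresholding but shrinks large coefficients less, so p = 1 recovers ordinary soft thresholding. A minimal sketch:

```python
import numpy as np

def p_shrink(x, t, p=0.5):
    """Generalized p-shrinkage (Chartrand's form): same threshold t as
    soft thresholding, but milder shrinkage of large coefficients."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide='ignore'):
        mag = np.maximum(np.abs(x) - t ** (2 - p) * np.abs(x) ** (p - 1), 0.0)
    return np.sign(x) * mag

x = np.linspace(-3, 3, 7)
print(p_shrink(x, t=1.0, p=1.0))  # identical to soft thresholding
print(p_shrink(x, t=1.0, p=0.5))  # large entries kept closer to x
```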

  16. Experiments in monthly mean simulation of the atmosphere with a coarse-mesh general circulation model

    NASA Technical Reports Server (NTRS)

    Lutz, R. J.; Spar, J.

    1978-01-01

    The Hansen atmospheric model was used to compute five monthly forecasts (October 1976 through February 1977). The comparison is based on an energetics analysis, meridional and vertical profiles, error statistics, and prognostic and observed mean maps. The monthly mean model simulations suffer from several defects. There is, in general, no skill in the simulation of the monthly mean sea-level pressure field, and only marginal skill is indicated for the 850 mb temperatures and 500 mb heights. The coarse-mesh model appears to generate a less satisfactory monthly mean simulation than the finer mesh GISS model.

  17. A general modeling framework for describing spatially structured population dynamics

    USGS Publications Warehouse

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance that comparative analyses are colored by model details rather than general principles.
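
    A minimal sketch of the kind of discrete-time update such a network framework implies (local growth on each node, movement along weighted directed edges) is given below; the logistic growth law, rates, and matrices are illustrative assumptions, not the published code.

        import numpy as np

        def step(n, r, K, M):
            """One time step: logistic growth on each node, then movement
            along edges. M[i, j] is the proportion moving from i to j."""
            grown = n + r * n * (1.0 - n / K)
            stay = grown * (1.0 - M.sum(axis=1))   # individuals that remain
            return stay + grown @ M                # plus arrivals along edges

        n = np.array([100.0, 40.0, 10.0])
        K = np.array([200.0, 150.0, 80.0])
        M = np.array([[0.00, 0.10, 0.00],
                      [0.05, 0.00, 0.05],
                      [0.00, 0.20, 0.00]])
        for _ in range(50):
            n = step(n, r=0.2, K=K, M=M)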

  18. A Generalized QMRA Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting P_I(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, P_I(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0
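
    For orientation, the widely used two-parameter approximate beta-Poisson curve that this three-parameter model generalizes can be written in a few lines; the parameter values below are illustrative.

        import numpy as np

        def beta_poisson(dose, alpha, beta):
            """P(infection) at mean dose d under the common approximation."""
            return 1.0 - (1.0 + dose / beta) ** (-alpha)

        print(beta_poisson(np.logspace(0, 6, 7), alpha=0.3, beta=1000.0))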

  19. Development of an EVA systems cost model. Volume 3: EVA systems cost model

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The EVA systems cost model presented is based on proposed EVA equipment for the space shuttle program. General information on EVA crewman requirements in a weightless environment and an EVA capabilities overview are provided.

  20. Beta Regression Finite Mixture Models of Polarization and Priming

    ERIC Educational Resources Information Center

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  1. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344

  2. A connectionist model for dynamic control

    NASA Technical Reports Server (NTRS)

    Whitfield, Kevin C.; Goodall, Sharon M.; Reggia, James A.

    1989-01-01

The application of a connectionist modeling method known as competition-based spreading activation to a camera tracking task is described. The potential is explored for automation of control and planning applications using connectionist technology. The emphasis is on applications suitable for use in the NASA Space Station and in related space activities. The results are quite general and could be applicable to a wide range of control systems.

  3. Physically-Derived Dynamical Cores in Atmospheric General Circulation Models

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.; Lin, Shian-Jiann

    1999-01-01

    The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.

  4. Physically-Derived Dynamical Cores in Atmospheric General Circulation Models

    NASA Technical Reports Server (NTRS)

Rood, Richard B.; Lin, Shian-Jiann

    1999-01-01

    The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model of Lin and Rood (QJRMS, 1997) is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.

  5. Thermomechanical Characterization and Modeling of Superelastic Shape Memory Alloy Beams and Frames

    NASA Astrophysics Data System (ADS)

    Watkins, Ryan

Of existing applications, the majority of shape memory alloy (SMA) devices consist of beams (orthodontic wires, eyeglass frames, catheter guide wires) and framed structures (cardiovascular stents, vena cava filters). Although uniaxial tension data is often sufficient to model basic beam behavior (which has been the main focus of the research community), the tension-compression asymmetry and complex phase transformation behavior of SMAs suggest more information is necessary to properly model higher complexity states of loading. In this work, SMA beams are experimentally characterized under general loading conditions (including tension, compression, pure bending, and buckling); furthermore, a model is developed with respect to general beam deformation based on the relevant phenomena observed in the experimental characterization. Stress-induced phase transformation within superelastic SMA beams is shown to depend not only on the loading mode, but also on kinematic constraints imposed by beam geometry (such as beam cross-section and length). In the cases of tension and pure bending, the structural behavior is unstable and corresponds to phase transformation localization and propagation. This unstable behavior is the result of a local-level up--down--up stress/strain response in tension, which is measured here using a novel composite-based experimental technique. In addition to unstable phase transformation, intriguing post-buckling straightening is observed in short SMA columns during monotonic loading (termed unbuckling here). Based on this phenomenological understanding of SMA beam behavior, a trilinear-based material law is developed in the context of a Shanley column model and is found to capture many of the relevant features of column buckling, including the experimentally observed unbuckling behavior. Due to the success of this model, it is generalized within the context of beam theory and, in conjunction with Bloch wave stability analysis, is used to model and design SMA honeycombs.
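
    A minimal sketch of a trilinear (up--down--up) material law of the kind the thesis builds into the Shanley column model follows; all moduli and transformation strains are invented placeholders, not calibrated values.

        import numpy as np

        def trilinear_stress(eps, E1=40e3, E2=-5e3, E3=20e3, e1=0.01, e2=0.05):
            """Stress (MPa) vs strain: elastic branch, softening
            transformation plateau, then stiffening of the transformed phase."""
            s1 = E1 * e1
            s2 = s1 + E2 * (e2 - e1)
            return np.where(eps < e1, E1 * eps,
                            np.where(eps < e2, s1 + E2 * (eps - e1),
                                     s2 + E3 * (eps - e2)))

        print(trilinear_stress(np.array([0.005, 0.03, 0.08])))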

  6. The Recycling and Reclamation of Used Tank Track Pins

    DTIC Science & Technology

    1985-08-01

dislocation models that show crack formation as the accumulation of defects and subsequent loss of coherency across a slip plane; and, models based on...specifications, thus removing any damage that has occurred. The success of this project is based on the ability of the reheat treatment to eliminate the...develop into cracks. The general models of fatigue crack nucleation have been grouped into five main categories which are: models which consider the

  7. Flow assignment model for quantitative analysis of diverting bulk freight from road to railway

    PubMed Central

    Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian

    2017-01-01

Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then, general cost functions are formulated to embody the factors that shippers pay attention to when choosing a mode and path. These functions incorporate the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and general cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose their transportation mode and path independently. Since the model is nonlinear and challenging to solve, we adopt a method that uses tangent lines to constitute an envelope curve to linearize it. Finally, a numerical example is presented to test the model and to show how to make a quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536
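
    The road congestion term in such models is typically a flow-dependent link cost; a standard BPR-style function, used here as a stand-in for the paper's congestion cost (the coefficients are conventional defaults, not the paper's values), can be sketched as follows.

        import numpy as np

        def bpr_cost(flow, capacity, t0, alpha=0.15, beta=4.0):
            """Congestion cost: travel time grows with the flow/capacity
            ratio (stand-in for the model's road congestion term)."""
            return t0 * (1.0 + alpha * (flow / capacity) ** beta)

        print(bpr_cost(np.array([800.0, 1200.0]), capacity=1000.0, t0=30.0))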

  8. Getting governance right for a sustainable regionalised business model.

    PubMed

    Laurence, Caroline O; Black, Linda E; Rowe, Mark; Pearce, Rod

    2011-06-06

    The 1998 Ministerial Review of General Practice Training identified several areas for improvement that led to major changes in the provision of general practice training, including the establishment of General Practice Education and Training (GPET) and the regionalisation of training. The regionalised training business model has been in place for nearly 10 years, and several key organisations have been involved in its evolution, including the Australian Government, speciality colleges, GPET and regionalised training providers. Both the college-focused and regionalised-focused models have had some successes. These include recognition and support of general practice as a vocational specialty, increased numbers of junior doctors undertaking placements in general practice, and increased numbers of registrars training in rural areas. This period has also seen changes in the governance and decision-making processes with creation of a new framework that is inclusive of all the key players in the new regionalised training system. The future holds challenges for the regionalised training business model as the general practice education and training landscape becomes more complex. The framework in the current model will provide a base to help meet these challenges and allow for further sustainable expansion.

  9. A general science-based framework for dynamical spatio-temporal models

    USGS Publications Warehouse

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal to some extent with this issue by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been in the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of scientific-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.
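
    As a toy illustration of the linear, first-order Markovian case the overview begins with, the sketch below propagates a state vector with a PDE-motivated (discretized diffusion) propagator plus innovation noise; the grid size and coefficients are arbitrary.

        import numpy as np

        n = 50                                       # locations on a 1-D grid
        alpha = 0.2                                  # diffusion-like coefficient
        M = ((1 - 2 * alpha) * np.eye(n)
             + alpha * (np.eye(n, k=1) + np.eye(n, k=-1)))
        y = np.zeros(n)
        y[n // 2] = 1.0                              # initial pulse
        for _ in range(100):
            y = M @ y + 0.01 * np.random.randn(n)    # propagate plus innovation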

  10. Generalized constitutive equations for piezo-actuated compliant mechanism

    NASA Astrophysics Data System (ADS)

    Cao, Junyi; Ling, Mingxiang; Inman, Daniel J.; Lin, Jin

    2016-09-01

    This paper formulates analytical models to describe the static displacement and force interactions between generic serial-parallel compliant mechanisms and their loads by employing the matrix method. In keeping with the familiar piezoelectric constitutive equations, the generalized constitutive equations of compliant mechanism represent the input-output displacement and force relations in the form of a generalized Hooke’s law and as analytical functions of physical parameters. Also significantly, a new model of output displacement for compliant mechanism interacting with piezo-stacks and elastic loads is deduced based on the generalized constitutive equations. Some original findings differing from the well-known constitutive performance of piezo-stacks are also given. The feasibility of the proposed models is confirmed by finite element analysis and by experiments under various elastic loads. The analytical models can be an insightful tool for predicting and optimizing the performance of a wide class of compliant mechanisms that simultaneously consider the influence of loads and piezo-stacks.

  11. Continuing education for general practice. 2. Systematic learning from experience.

    PubMed Central

    al-Shehri, A; Stanley, I; Thomas, P

    1993-01-01

    Prompted by evidence that the recently-adopted arrangements for ongoing education among established general practitioners are unsatisfactory, the first of a pair of papers examined the theoretical basis of continuing education for general practice and proposed a model of self-directed learning in which the experience of established practitioners is connected, through the media of reading, reflection and audit, with competence for the role. In this paper a practical, systematic approach to self-directed learning by general practitioners is described based on the model. The contribution which appropriate participation in continuing medical education can make to enhancing learning from experience is outlined. PMID:8373649

  12. Propagation of flat-topped multi-Gaussian beams through a double-lens system with apertures.

    PubMed

    Gao, Yanqi; Zhu, Baoqiang; Liu, Daizhong; Lin, Zunqi

    2009-07-20

    A general model for different apertures and flat-topped laser beams based on the multi-Gaussian function is developed. The general analytical expression for the propagation of a flat-topped beam through a general double-lens system with apertures is derived using the above model. Then, the propagation characteristics of the flat-topped beam through a spatial filter are investigated by using a simplified analytical expression. Based on the Fluence beam contrast and the Fill factor, the influences of a pinhole size on the propagation of the flat-topped multi-Gaussian beam (FMGB) through the spatial filter are illustrated. An analytical expression for the propagation of the FMGB through the spatial filter with a misaligned pinhole is presented, and the influences of the pinhole offset are evaluated.
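
    A flat-topped multi-Gaussian profile of the kind used here can be sketched as a sum of laterally offset Gaussians; the number of terms, waist, and spacing below are illustrative assumptions, not the paper's parameters.

        import numpy as np

        def fmgb(x, N=4, w0=1.0, spacing=0.8):
            """Flat-topped profile as a sum of 2N+1 laterally offset Gaussians."""
            centers = spacing * w0 * np.arange(-N, N + 1)
            return np.exp(-((x[:, None] - centers) ** 2) / w0 ** 2).sum(axis=1)

        x = np.linspace(-8.0, 8.0, 401)
        profile = fmgb(x) / fmgb(np.array([0.0]))[0]   # normalize the flat top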

  13. Generalized framework for testing gravity with gravitational-wave propagation. I. Formulation

    NASA Astrophysics Data System (ADS)

    Nishizawa, Atsushi

    2018-05-01

    The direct detection of gravitational waves (GWs) from merging binary black holes and neutron stars marks the beginning of a new era in gravitational physics, and it brings forth new opportunities to test theories of gravity. To this end, it is crucial to search for anomalous deviations from general relativity in a model-independent way, irrespective of gravity theories, GW sources, and background spacetimes. In this paper, we propose a new universal framework for testing gravity with GWs, based on the generalized propagation of a GW in an effective field theory that describes modification of gravity at cosmological scales. Then, we perform a parameter estimation study, showing how well the future observation of GWs can constrain the model parameters in the generalized models of GW propagation.

  14. Developmental and Life-Stage Physiologically-Based Pharmacokinetic (PBPK) Models in Humans and Animal Models.

    EPA Science Inventory

    PBPK models provide a computational framework for incorporating pertinent physiological and biochemical information to estimate in vivo levels of xenobiotics in biological tissues. In general, PBPK models are used to correlate exposures to target tissue levels of chemicals and th...

  15. Standardizing Acute Toxicity Data for use in Ecotoxicology Models: Influence of Test Type, Life Stage, and Concentration Reporting

    EPA Science Inventory

    Ecotoxicological models generally have large data requirements and are frequently based on existing information from diverse sources. Standardizing data for toxicological models may be necessary to reduce extraneous variation and to ensure models reflect intrinsic relationships. ...

  16. Detection of greenhouse-gas-induced climatic change. Progress report, July 1, 1994--July 31, 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, P.D.; Wigley, T.M.L.

    1995-07-21

The objective of this research is to assemble and analyze instrumental climate data and to develop and apply climate models as a basis for detecting greenhouse-gas-induced climatic change and for validating General Circulation Models. In addition to changes due to variations in anthropogenic forcing, including greenhouse gas and aerosol concentration changes, the global climate system exhibits a high degree of internally-generated and externally-forced natural variability. To detect the anthropogenic effect, its signal must be isolated from the "noise" of this natural climatic variability. A high-quality, spatially extensive data base is required to define the noise and its spatial characteristics. To facilitate this, available land and marine data bases will be updated and expanded. The data will be analyzed to determine the potential effects on climate of greenhouse gas and aerosol concentration changes and other factors. Analyses will be guided by a variety of models, from simple energy balance climate models to coupled atmosphere-ocean General Circulation Models. These analyses are oriented towards obtaining early evidence of anthropogenic climatic change that would lead either to confirmation, rejection or modification of model projections, and towards the statistical validation of General Circulation Model control runs and perturbation experiments.

  17. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2001-10-25

Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18,5,17,6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for

  18. An Exospheric Temperature Model Based On CHAMP Observations and TIEGCM Simulations

    NASA Astrophysics Data System (ADS)

    Ruan, Haibing; Lei, Jiuhou; Dou, Xiankang; Liu, Siqing; Aa, Ercha

    2018-02-01

In this work, thermospheric densities from the accelerometer measurement on board the CHAMP satellite during 2002-2009 and the simulations from the National Center for Atmospheric Research Thermosphere Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM) are employed to develop an empirical exospheric temperature model (ETM). The two-dimensional basis functions of the ETM are first provided from the principal component analysis of the TIEGCM simulations. Based on the exospheric temperatures derived from CHAMP thermospheric densities, a global distribution of the exospheric temperatures is reconstructed. A parameterization is conducted for each basis function amplitude as a function of solar-geophysical and seasonal conditions. Thus, the ETM can be utilized to model the thermospheric temperature and mass density under specified conditions. Our results showed that the average standard deviation of the ETM is generally less than 10%, compared with approximately 30% for the MSIS model. In addition, the ETM reproduces global thermospheric variations, including the equatorial thermosphere anomaly.
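
    A schematic of the model-building recipe described (principal components of simulated temperature maps as basis functions, with amplitudes regressed on solar-geophysical drivers) might look like this; the grid, the number of retained components, and the stand-in driver data are assumptions.

        import numpy as np

        maps = np.random.randn(1000, 72 * 36)        # stand-in T_exo snapshots
        mean = maps.mean(axis=0)
        _, _, Vt = np.linalg.svd(maps - mean, full_matrices=False)
        basis = Vt[:5]                               # leading spatial EOFs
        amps = (maps - mean) @ basis.T               # amplitude time series
        drivers = np.random.randn(1000, 3)           # stand-ins: F10.7, Kp, season
        coef, *_ = np.linalg.lstsq(drivers, amps, rcond=None)
        recon = mean + (drivers @ coef) @ basis      # ETM-style reconstruction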

  19. [Concepts of rational taxonomy].

    PubMed

    Pavlinov, I Ia

    2011-01-01

The problems related to the development of concepts of rational taxonomy and rational classifications (taxonomic systems) in biology are discussed. Rational taxonomy is based on the assumption that the key characteristic of rationality is deductive inference of certain partial judgments about the reality under study from other judgments taken as more general and a priori true. Accordingly, two forms of rationality are distinguished--ontological and epistemological. The former implies inference of classification properties from the general (essential) properties of the reality being investigated. The latter implies inference of the partial rules of judgments about classifications from more general (formal) rules. The following principal concepts of ontologically rational biological taxonomy are considered: the "crystallographic" approach, inference of the orderliness of organismal diversity from general laws of Nature, inference of the above orderliness from the orderliness of ontogenetic development programs, concepts based on the notion of natural kind and Cassirer's series theory, concepts based on the systemic concept, and concepts based on the idea of periodic systems. The various concepts of ontologically rational taxonomy can be generalized by the idea of causal taxonomy, according to which any biologically sound classification is founded on a content-wise model of biological diversity that includes explicit indication of the general causes responsible for that diversity. It is asserted that each category of general causation and the respective background model may serve as a basis for a particular ontologically rational taxonomy as a distinctive research program. Concepts of epistemologically rational taxonomy and classifications (taxonomic systems) can be interpreted in terms of the application of certain epistemological criteria for substantiating the scientific status of taxonomy in general and of taxonomic systems in particular. These concepts include consideration of the consistency of taxonomy from the standpoint of inductive and hypothetico-deductive argumentation schemes and such fundamental criteria of the naturalness of classifications as their prognostic capabilities, as well as the foundation of a theory of "general taxonomy" as a "general logic", including elements of the axiomatic method. The latter concept constitutes the core of the program of general classiology; it is inconsistent due to the absence of anything like a "general logic". It is asserted that elaboration of a theory of taxonomy as a biological discipline based on the formal principles of epistemological rationality is not feasible. Instead, it is to be elaborated as an ontologically rational one, based on biologically sound metatheories about the causes of biological diversity.

  20. Continuum-mechanics-based rheological formulation for debris flow

    USGS Publications Warehouse

    Chen, Cheng-lung; Ling, Chi-Hai; ,

    1993-01-01

    This paper aims to assess the validity of the generalized viscoplastic fluid (GVF) model in the light of both the classical relative-viscosity versus concentration relation and the dimensionless stress versus shear-rate squared relations based on kinetic theory, thereby addressing how to evaluate the rheological parameters of the GVF model using Bagnold's data.

  1. Model-assisted estimation of forest resources with generalized additive models

    Treesearch

    Jean D. Opsomer; F. Jay Breidt; Gretchen G. Moisen; Goran Kauermann

    2007-01-01

    Multiphase surveys are often conducted in forest inventories, with the goal of estimating forested area and tree characteristics over large regions. This article describes how design-based estimation of such quantities, based on information gathered during ground visits of sampled plots, can be made more precise by incorporating auxiliary information available from...

  2. Cogenerating a Competency-based HRM Degree: A Model and Some Lessons from Experience.

    ERIC Educational Resources Information Center

    Wooten, Kevin C.; Elden, Max

    2001-01-01

    A competency-based degree program in human resource management was co-generated by six groups of stakeholders who synthesized competency models using group decision support software. The program focuses on core human resource processes, general business management, strategic decision making and problem solving, change management, and personal…

  3. Movement behavior explains genetic differentiation in American black bears

    Treesearch

    Samuel A Cushman; Jesse S. Lewis

    2010-01-01

    Individual-based landscape genetic analyses provide empirically based models of gene flow. It would be valuable to verify the predictions of these models using independent data of a different type. Analyses using different data sources that produce consistent results provide strong support for the generality of the findings. Mating and dispersal movements are the...

  4. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
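
    In the same spirit, a parametric (Gaussian) simulation-based likelihood placed inside a random-walk Metropolis sampler can be sketched in a few lines; the stand-in simulator, summary statistic, and tuning constants below are illustrative, not the FORMIND setup.

        import numpy as np

        def simulate(theta, n_rep=20):
            """Stand-in stochastic simulator: one summary per replicate."""
            return np.array([np.random.gamma(theta, 2.0, size=200).mean()
                             for _ in range(n_rep)])

        def log_synth_lik(theta, obs):
            sims = simulate(theta)
            mu, sd = sims.mean(), sims.std() + 1e-9
            return -0.5 * ((obs - mu) / sd) ** 2 - np.log(sd)

        obs = 6.0                                  # "observed" summary statistic
        theta, ll = 2.0, log_synth_lik(2.0, obs)
        chain = []
        for _ in range(500):                       # random-walk Metropolis
            prop = theta + 0.1 * np.random.randn()
            if prop > 0:
                ll_prop = log_synth_lik(prop, obs)
                if np.log(np.random.rand()) < ll_prop - ll:
                    theta, ll = prop, ll_prop
            chain.append(theta)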

  5. Solitons, τ-functions and hamiltonian reduction for non-Abelian conformal affine Toda theories

    NASA Astrophysics Data System (ADS)

    Ferreira, L. A.; Miramontes, J. Luis; Guillén, Joaquín Sánchez

    1995-02-01

    We consider the Hamiltonian reduction of the "two-loop" Wess-Zumino-Novikov-Witten model (WZNW) based on an untwisted affine Kac-Moody algebra G. The resulting reduced models, called Generalized Non-Abelian Conformal Affine Toda (G-CAT), are conformally invariant and a wide class of them possesses soliton solutions; these models constitute non-Abelian generalizations of the conformal affine Toda models. Their general solution is constructed by the Leznov-Saveliev method. Moreover, the dressing transformations leading to the solutions in the orbit of the vacuum are considered in detail, as well as the τ-functions, which are defined for any integrable highest weight representation of G, irrespectively of its particular realization. When the conformal symmetry is spontaneously broken, the G-CAT model becomes a generalized affine Toda model, whose soliton solutions are constructed. Their masses are obtained exploring the spontaneous breakdown of the conformal symmetry, and their relation to the fundamental particle masses is discussed. We also introduce what we call the two-loop Virasoro algebra, describing extended symmetries of the two-loop WZNW models.

  6. Generalizing on Multiple Grounds: Performance Learning in Model-Based Troubleshooting

    DTIC Science & Technology

    1989-02-01

  7. The Venus nitric oxide night airglow - Model calculations based on the Venus Thermospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.

    1990-01-01

    The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) x 10 to the 9th/sq cm sec, corresponding to the dayside net production of N atoms needed for transport.

  8. Particle Transport through Scattering Regions with Clear Layers and Inclusions

    NASA Astrophysics Data System (ADS)

    Bal, Guillaume

    2002-08-01

    This paper introduces generalized diffusion models for the transport of particles in scattering media with nonscattering inclusions. Classical diffusion is known as a good approximation of transport only in scattering media. Based on asymptotic expansions and the coupling of transport and diffusion models, generalized diffusion equations with nonlocal interface conditions are proposed which offer a computationally cheap, yet accurate, alternative to solving the full phase-space transport equations. The paper shows which computational model should be used depending on the size and shape of the nonscattering inclusions in the simplified setting of two space dimensions. An important application is the treatment of clear layers in near-infrared (NIR) spectroscopy, an imaging technique based on the propagation of NIR photons in human tissues.

  9. Generalized regression neural network (GRNN)-based approach for colored dissolved organic matter (CDOM) retrieval: case study of Connecticut River at Middle Haddam Station, USA.

    PubMed

    Heddam, Salim

    2014-11-01

The prediction of colored dissolved organic matter (CDOM) using artificial neural network approaches has received little attention in the past few decades. In this study, colored dissolved organic matter (CDOM) was modeled using generalized regression neural network (GRNN) and multiple linear regression (MLR) models as a function of water temperature (TE), pH, specific conductance (SC), and turbidity (TU). Evaluation of the prediction accuracy of the models is based on the root mean square error (RMSE), mean absolute error (MAE), coefficient of correlation (CC), and Willmott's index of agreement (d). The results indicated that GRNN can be applied successfully for the prediction of colored dissolved organic matter (CDOM).
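
    A GRNN is essentially Nadaraya-Watson kernel regression with a single smoothing parameter; a minimal sketch follows, with sigma, the small regularizer, and the synthetic data as illustrative choices.

        import numpy as np

        def grnn_predict(X_train, y_train, X_new, sigma=0.5):
            """Kernel-weighted average of training targets (Gaussian kernel)."""
            pred = np.empty(len(X_new))
            for i, x in enumerate(X_new):
                d2 = ((X_train - x) ** 2).sum(axis=1)
                w = np.exp(-d2 / (2.0 * sigma ** 2))
                pred[i] = w @ y_train / (w.sum() + 1e-12)
            return pred

        X = np.random.rand(200, 4)                 # TE, pH, SC, TU (stand-ins)
        y = X @ np.array([0.5, -0.2, 0.8, 0.3]) + 0.05 * np.random.randn(200)
        print(grnn_predict(X, y, X[:5]))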

  10. Consensus-based training and assessment model for general surgery.

    PubMed

    Szasz, P; Louridas, M; de Montbrun, S; Harris, K A; Grantcharov, T P

    2016-05-01

    Surgical education is becoming competency-based with the implementation of in-training milestones. Training guidelines should reflect these changes and determine the specific procedures for such milestone assessments. This study aimed to develop a consensus view regarding operative procedures and tasks considered appropriate for junior and senior trainees, and the procedures that can be used as technical milestone assessments for trainee progression in general surgery. A Delphi process was followed where questionnaires were distributed to all 17 Canadian general surgery programme directors. Items were ranked on a 5-point Likert scale, with consensus defined as Cronbach's α of at least 0·70. Items rated 4 or above on the 5-point Likert scale by 80 per cent of the programme directors were included in the models. Two Delphi rounds were completed, with 14 programme directors taking part in round one and 11 in round two. The overall consensus was high (Cronbach's α = 0·98). The training model included 101 unique procedures and tasks, 24 specific to junior trainees, 68 specific to senior trainees, and nine appropriate to all. The assessment model included four procedures. A system of operative procedures and tasks for junior- and senior-level trainees has been developed along with an assessment model for trainee progression. These can be used as milestones in competency-based assessments. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
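
    The consensus criterion here is Cronbach's alpha over the programme directors' Likert ratings; the standard computation can be sketched as follows, with random stand-in data in place of the survey responses.

        import numpy as np

        def cronbach_alpha(ratings):
            """ratings: raters x items matrix of Likert scores."""
            k = ratings.shape[1]
            item_var = ratings.var(axis=0, ddof=1).sum()
            total_var = ratings.sum(axis=1).var(ddof=1)
            return k / (k - 1.0) * (1.0 - item_var / total_var)

        ratings = np.random.randint(1, 6, size=(11, 20)).astype(float)
        print(cronbach_alpha(ratings))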

  11. The Factor Structure of ADHD in a General Population of Primary School Children

    ERIC Educational Resources Information Center

    Ullebo, Anne Karin; Breivik, Kyrre; Gillberg, Christopher; Lundervold, Astri J.; Posserud, Maj-Britt

    2012-01-01

    Objective: To examine whether a bifactor model with a general ADHD factor and domain specific factors of inattention, hyperactivity and impulsivity was supported in a large general population sample of children. We also explored the utility of forming subscales based on the domain-specific factors. Methods: Child mental health questionnaires were…

  12. The Role of Inertia in Modeling Decisions from Experience with Instance-Based Learning

    PubMed Central

    Dutt, Varun; Gonzalez, Cleotilde

    2012-01-01

    One form of inertia is the tendency to repeat the last decision irrespective of the obtained outcomes while making decisions from experience (DFE). A number of computational models based upon the Instance-Based Learning Theory, a theory of DFE, have included different inertia implementations and have shown to simultaneously account for both risk-taking and alternations between alternatives. The role that inertia plays in these models, however, is unclear as the same model without inertia is also able to account for observed risk-taking quite well. This paper demonstrates the predictive benefits of incorporating one particular implementation of inertia in an existing IBL model. We use two large datasets, estimation and competition, from the Technion Prediction Tournament involving a repeated binary-choice task to show that incorporating an inertia mechanism in an IBL model enables it to account for the observed average risk-taking and alternations. Including inertia, however, does not help the model to account for the trends in risk-taking and alternations over trials compared to the IBL model without the inertia mechanism. We generalize the two IBL models, with and without inertia, to the competition set by using the parameters determined in the estimation set. The generalization process demonstrates both the advantages and disadvantages of including inertia in an IBL model. PMID:22685443

  13. The role of inertia in modeling decisions from experience with instance-based learning.

    PubMed

    Dutt, Varun; Gonzalez, Cleotilde

    2012-01-01

    One form of inertia is the tendency to repeat the last decision irrespective of the obtained outcomes while making decisions from experience (DFE). A number of computational models based upon the Instance-Based Learning Theory, a theory of DFE, have included different inertia implementations and have shown to simultaneously account for both risk-taking and alternations between alternatives. The role that inertia plays in these models, however, is unclear as the same model without inertia is also able to account for observed risk-taking quite well. This paper demonstrates the predictive benefits of incorporating one particular implementation of inertia in an existing IBL model. We use two large datasets, estimation and competition, from the Technion Prediction Tournament involving a repeated binary-choice task to show that incorporating an inertia mechanism in an IBL model enables it to account for the observed average risk-taking and alternations. Including inertia, however, does not help the model to account for the trends in risk-taking and alternations over trials compared to the IBL model without the inertia mechanism. We generalize the two IBL models, with and without inertia, to the competition set by using the parameters determined in the estimation set. The generalization process demonstrates both the advantages and disadvantages of including inertia in an IBL model.

  14. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
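
    The building block of the composite-likelihood approach is the beta-binomial marginal; a sketch of its log-likelihood for one outcome follows (the bivariate model couples two such margins through a correlation parameter). The data below are illustrative.

        import numpy as np
        from scipy.special import betaln, gammaln

        def betabinom_loglik(y, n, a, b):
            """Sum of log P(y_i | n_i) under a beta-binomial with shapes a, b."""
            log_choose = gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
            return np.sum(log_choose + betaln(y + a, n - y + b) - betaln(a, b))

        y = np.array([12, 30, 7])
        n = np.array([40, 80, 25])
        print(betabinom_loglik(y, n, a=2.0, b=5.0))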

  15. A generalized groundwater fluctuation model based on precipitation for estimating water table levels of deep unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Shik Han, Weon; Kim, Kue-Young; Suk, Heejun; Beom Jo, Si

    2018-07-01

    A generalized water table fluctuation model based on precipitation was developed using a statistical conceptualization of unsaturated infiltration fluxes. A gamma distribution function was adopted as a transfer function due to its versatility in representing recharge rates with temporally dispersed infiltration fluxes, and a Laplace transformation was used to obtain an analytical solution. To prove the general applicability of the model, convergences with previous water table fluctuation models were shown as special cases. For validation, a few hypothetical cases were developed, where the applicability of the model to a wide range of unsaturated zone conditions was confirmed. For further validation, the model was applied to water table level estimations of three monitoring wells with considerably thick unsaturated zones on Jeju Island. The results show that the developed model represented the pattern of hydrographs from the two monitoring wells fairly well. The lag times from precipitation to recharge estimated from the developed system transfer function were found to agree with those from a conventional cross-correlation analysis. The developed model has the potential to be adopted for the hydraulic characterization of both saturated and unsaturated zones by being calibrated to actual data when extraneous and exogenous causes of water table fluctuation are limited. In addition, as it provides reference estimates, the model can be adopted as a tool for surveilling groundwater resources under hydraulically stressed conditions.
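
    The central idea, precipitation convolved with a gamma-distribution transfer function to represent temporally dispersed infiltration fluxes, can be sketched as follows; the kernel parameters and the synthetic precipitation series are illustrative, not calibrated Jeju Island values.

        import numpy as np
        from scipy.stats import gamma

        dt = 1.0                                      # daily step
        kernel = gamma.pdf(np.arange(0, 120, dt), a=3.0, scale=10.0)
        rain = np.random.exponential(2.0, 365) * (np.random.rand(365) < 0.3)
        recharge = np.convolve(rain, kernel)[:rain.size] * dt
        head = np.cumsum(recharge - recharge.mean())  # fluctuation about trend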

  16. Generalized probabilistic scale space for image restoration.

    PubMed

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.

  17. Communication: Introducing prescribed biases in out-of-equilibrium Markov models

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.

    2018-03-01

    Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, many times their predictions do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
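
    One concrete way to realize a minimum-KL update of a transition matrix subject to a prescribed stationary distribution is Sinkhorn-style scaling of the joint flow matrix; the sketch below illustrates the flavor of the framework, not the paper's exact algorithm.

        import numpy as np

        def update_markov(P0, pi, n_iter=200):
            """Scale the joint flow F = diag(pi) P0 until both its row and
            column sums equal pi (Sinkhorn balancing, the minimum-KL
            projection onto those marginals), then recover the updated P."""
            F = pi[:, None] * P0
            for _ in range(n_iter):
                F *= (pi / F.sum(axis=1))[:, None]
                F *= (pi / F.sum(axis=0))[None, :]
            return F / F.sum(axis=1, keepdims=True)

        P0 = np.array([[0.9, 0.1], [0.2, 0.8]])
        print(update_markov(P0, np.array([0.3, 0.7])))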

  18. General method to find the attractors of discrete dynamic models of biological systems.

    PubMed

    Gan, Xiao; Albert, Réka

    2018-04-01

    Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.
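
    For contrast with the stable-motif approach, brute-force attractor detection in a tiny synchronous Boolean network can be written directly; the update rules below are invented for illustration, and this exhaustive search is exactly what the motif-based method avoids for larger, multilevel models.

        from itertools import product

        def update(state):
            a, b, c = state                      # synchronous update rules
            return (int(b and not c), int(a), int(a or b))

        attractors = set()
        for init in product([0, 1], repeat=3):
            seen, s = {}, init
            while s not in seen:                 # walk until a state repeats
                seen[s] = len(seen)
                s = update(s)
            start = seen[s]                      # first state on the cycle
            cycle = tuple(sorted(k for k, i in seen.items() if i >= start))
            attractors.add(cycle)                # fixed point or complex attractor
        print(attractors)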

  19. General method to find the attractors of discrete dynamic models of biological systems

    NASA Astrophysics Data System (ADS)

    Gan, Xiao; Albert, Réka

    2018-04-01

    Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.

  20. A 1D-2D Shallow Water Equations solver for discontinuous porosity field based on a Generalized Riemann Problem

    NASA Astrophysics Data System (ADS)

    Ferrari, Alessia; Vacondio, Renato; Dazzi, Susanna; Mignosa, Paolo

    2017-09-01

    A novel augmented Riemann Solver capable of handling porosity discontinuities in 1D and 2D Shallow Water Equation (SWE) models is presented. With the aim of accurately approximating the porosity source term, a Generalized Riemann Problem is derived by adding an additional fictitious equation to the SWEs system and imposing mass and momentum conservation across the porosity discontinuity. The modified Shallow Water Equations are theoretically investigated, and the implementation of an augmented Roe Solver in a 1D Godunov-type finite volume scheme is presented. Robust treatment of transonic flows is ensured by introducing an entropy fix based on the wave pattern of the Generalized Riemann Problem. An Exact Riemann Solver is also derived in order to validate the numerical model. As an extension of the 1D scheme, an analogous 2D numerical model is also derived and validated through test cases with radial symmetry. The capability of the 1D and 2D numerical models to capture different wave patterns is assessed against several Riemann Problems with different wave patterns.
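
    A full augmented Roe solver is beyond a short sketch, but one explicit Godunov-type finite-volume step for the 1D SWEs with a simple Rusanov flux (a much cruder flux than the paper's, with no porosity term) shows the surrounding machinery; all values are illustrative.

        import numpy as np

        g = 9.81

        def rusanov_step(h, q, dx, dt):
            """One explicit finite-volume update of depth h and discharge q."""
            u = q / h
            U = np.vstack((h, q))
            F = np.vstack((q, q * u + 0.5 * g * h ** 2))   # physical fluxes
            c = np.abs(u) + np.sqrt(g * h)                 # max wave speed
            a = np.maximum(c[:-1], c[1:])
            Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
            U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
            return U[0], U[1]

        h = np.where(np.arange(200) < 100, 2.0, 1.0)       # dam-break state
        q = np.zeros(200)
        for _ in range(100):
            h, q = rusanov_step(h, q, dx=1.0, dt=0.02)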

  1. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure, and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated using a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
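
    For the basic Wiener model X(t) = mu*t + sigma*B(t) that the generalized model extends, the MLEs from one increment path take a closed form; a sketch with synthetic data follows (the drift and diffusion values are arbitrary).

        import numpy as np

        def wiener_mle(t, x):
            """Closed-form MLEs of drift and diffusion from increment data."""
            dt, dx = np.diff(t), np.diff(x)
            mu = dx.sum() / dt.sum()
            sigma2 = np.mean((dx - mu * dt) ** 2 / dt)
            return mu, sigma2

        t = np.linspace(0.0, 100.0, 201)
        dW = np.sqrt(np.diff(t)) * np.random.randn(t.size - 1)
        x = 0.05 * t + 0.2 * np.concatenate(([0.0], np.cumsum(dW)))
        print(wiener_mle(t, x))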

  2. Machine Learning-based discovery of closures for reduced models of dynamical systems

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
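
    The sketch below illustrates the trapezoidal treatment of a memory (convolution) closure term, c(t) ≈ ∫_0^T K(s) z(t−s) ds; the kernel, history, and memory length are illustrative stand-ins, not the paper's learned quantities:

    ```python
    import numpy as np

    def closure_term(z_hist, kernel, dt):
        """Trapezoidal approximation of int_0^T K(s) z(t - s) ds.

        z_hist: resolved-variable history [z(t), z(t-dt), ..., z(t-T)]
        kernel: memory kernel [K(0), K(dt), ..., K(T)]
        """
        w = np.full(len(kernel), dt)
        w[0] = w[-1] = 0.5 * dt          # trapezoidal end-point weights
        return np.sum(w * kernel * z_hist)

    dt, n = 0.01, 200                    # memory length T = n*dt (a hyperparameter)
    s = dt * np.arange(n + 1)
    kernel = np.exp(-5.0 * s)            # assumed exponentially decaying kernel
    z_hist = np.cos(2.0 * np.pi * s)     # assumed resolved-variable history
    print(closure_term(z_hist, kernel, dt))
    ```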

  3. Introducing NMR to a General Chemistry Audience: A Structural-Based Instrumental Laboratory Relating Lewis Structures, Molecular Models, and [superscript 13]C NMR Data

    ERIC Educational Resources Information Center

    Pulliam, Curtis R.; Pfeiffer, William F.; Thomas, Alyssa C.

    2015-01-01

    This paper describes a first-year general chemistry laboratory that uses NMR spectroscopy and model building to emphasize molecular shape and structure. It is appropriate for either a traditional or an atoms-first curriculum. Students learn the basis of structure and the use of NMR data through a cooperative learning hands-on laboratory…

  4. Development of the Internet addiction scale based on the Internet Gaming Disorder criteria suggested in DSM-5.

    PubMed

    Cho, Hyun; Kwon, Min; Choi, Ji-Hye; Lee, Sang-Kyu; Choi, Jung Seok; Choi, Sam-Wook; Kim, Dai-Jin

    2014-09-01

    This study was conducted to develop and validate a standardized self-diagnostic Internet addiction (IA) scale based on the diagnosis criteria for Internet Gaming Disorder (IGD) in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Items based on the IGD diagnosis criteria were developed using items of previous Internet addiction scales. Data were collected from a community sample. The data were divided into two sets, and confirmatory factor analysis (CFA) was performed repeatedly. The model was modified after discussion with professionals based on the first CFA results, after which the second CFA was performed. The internal consistency reliability was generally good. Items that showed significantly low correlation values based on the item-total correlation of each factor were excluded. After the first CFA was performed, some factors and items were excluded. Seven factors and 26 items were retained for the final model. The second CFA results showed good general factor loadings, Squared Multiple Correlation (SMC), and model fit. The model fit of the final model was good, but some factors were very highly correlated. It is recommended that some of the factors be refined through further studies. Copyright © 2014. Published by Elsevier Ltd.

  5. A support vector machine based control application to the experimental three-tank system.

    PubMed

    Iplikci, Serdar

    2010-07-01

    This paper presents a support vector machine (SVM) approach to generalized predictive control (GPC) of multiple-input multiple-output (MIMO) nonlinear systems. The higher generalization potential of SVMs, together with their avoidance of local minima, motivated us to employ SVM algorithms for modeling MIMO systems. Based on the SVM model, detailed and compact formulations for calculating predictions and gradient information, which are used in the computation of the optimal control action, are given in the paper. The proposed MIMO SVM-based GPC method has been verified on an experimental three-tank liquid level control system. Experimental results have shown that the proposed method can handle the control task successfully for different reference trajectories. Moreover, a detailed discussion on data gathering, model selection, and the effects of the control parameters is given in this paper. 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Motivation towards extracurricular activities and motivation at school: A test of the generalization effect hypothesis.

    PubMed

    Denault, Anne-Sophie; Guay, Frédéric

    2017-01-01

    Participation in extracurricular activities is a promising avenue for enhancing students' school motivation. Using self-determination theory (Deci & Ryan, 2000), the goal of this study was to test a serial multiple mediator model. In this model, students' perceptions of autonomy support from their extracurricular activity leader predicted their activity-based intrinsic and identified regulations. In turn, these regulations predicted their school-based intrinsic and identified regulations during the same school year. Finally, these regulations predicted their school-based intrinsic and identified regulations one year later. A total of 276 youths (54% girls) from disadvantaged neighborhoods were surveyed over two waves of data collection. The proposed mediation model was supported for both types of regulation. These results highlight the generalization effects of motivation from the extracurricular activity context to the school context. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  7. From particle systems to learning processes. Comment on "Collective learning modeling based on the kinetic theory of active particles" by Diletta Burini, Silvana De Lillo, and Livio Gibelli

    NASA Astrophysics Data System (ADS)

    Lachowicz, Mirosław

    2016-03-01

    The very stimulating paper [6] discusses an approach to perception and learning in a large population of living agents. The approach is based on a generalization of kinetic theory methods in which the interactions between agents are described in terms of game theory. Such an approach was already discussed in Refs. [2-4] (see also references therein) in various contexts. The processes of perception and learning are based on the interactions between agents, and therefore the general kinetic theory is a suitable tool for modeling them. However, the main question that arises is how the perception and learning processes may be treated in mathematical modeling. How can we construct suitable mathematical structures that are able to capture various aspects of perception and learning?

  8. On the Relation between the Linear Factor Model and the Latent Profile Model

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul

    2011-01-01

    The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…

  9. Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones

    NASA Astrophysics Data System (ADS)

    Painter, S. L.; Coon, E. T.; Brooks, S. C.

    2017-12-01

    Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.

  10. A computational model of the cognitive impact of decorative elements on the perception of suspense

    NASA Astrophysics Data System (ADS)

    Delatorre, Pablo; León, Carlos; Gervás, Pablo; Palomo-Duarte, Manuel

    2017-10-01

    Suspense is a key narrative issue in terms of emotional gratification, influencing the way in which the audience experiences a story. Virtually all narrative media use suspense as a strategy for reader engagement, regardless of the technology or genre. Being such an important narrative component, suspense has been tackled by computational creativity in a number of automatic storytelling systems. These systems are mainly based on narrative theories and generally lack a cognitive approach involving empathy or the emotional impact of the environment. With this idea in mind, this paper reports on a computational model of the influence of decorative elements on suspense. It has been developed as part of a more general proposal for plot generation based on cognitive aspects. In order to test and parameterise the model, an evaluation based on textual stories and an evaluation based on a 3D virtual environment were run. In both cases, results suggest a direct influence of the emotional perception of decorative objects on the suspense of a scene.

  11. The Political Economy of Interlibrary Organizations: Two Case Studies.

    ERIC Educational Resources Information Center

    Townley, Charles T.

    J. Kenneth Benson's political economy model for interlibrary cooperation identifies linkages and describes interactions between the environment, the interlibrary organization, and member libraries. A tentative general model for interlibrary organizations based on the Benson model was developed, and the fit of this adjusted model to the realities…

  12. Validating and Optimizing the Effects of Model Progression in Simulation-Based Inquiry Learning

    ERIC Educational Resources Information Center

    Mulder, Yvonne G.; Lazonder, Ard W.; de Jong, Ton; Anjewierden, Anjo; Bollen, Lars

    2012-01-01

    Model progression denotes the organization of the inquiry learning process in successive phases of increasing complexity. This study investigated the effectiveness of model progression in general, and explored the added value of either broadening or narrowing students' possibilities to change model progression phases. Results showed that…

  13. SIMULATING SUB-DECADAL CHANNEL MORPHOLOGIC CHANGE IN EPHEMERAL STREAM NETWORKS

    EPA Science Inventory

    A distributed watershed model was modified to simulate cumulative channel morphologic
    change from multiple runoff events in ephemeral stream networks. The model incorporates the general design of the event-based Kinematic Runoff and Erosion Model (KINEROS), which describes t...

  14. A general framework of automorphic inflation

    NASA Astrophysics Data System (ADS)

    Schimmrigk, Rolf

    2016-05-01

    Automorphic inflation is an application of the framework of automorphic scalar field theory, based on the theory of automorphic forms and representations. In this paper the general framework of automorphic and modular inflation is described in some detail, with emphasis on the resulting stratification of the space of scalar field theories in terms of the group theoretic data associated to the shift symmetry, as well as the automorphic data that specifies the potential. The class of theories based on Eisenstein series provides a natural generalization of the model of j-inflation considered previously.

  15. A generalized Poisson solver for first-principles device simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
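
    The preconditioning idea can be illustrated on a 1D toy problem (this is not the paper's plane-wave implementation; the grid, dielectric profile, damping factor, and right-hand side are assumptions): the variable-coefficient operator A is inverted by a stationary iteration preconditioned with the standard Laplacian L.

    ```python
    import numpy as np

    n, h = 200, 1.0 / 201
    x = np.linspace(h, 1 - h, n)
    eps = 1.0 + 4.0 * (x > 0.5)              # dielectric jump (assumed profile)
    f = np.sin(2 * np.pi * x)

    def operator(coef):
        """FD discretization of d/dx(coef * d/dx) with zero Dirichlet BCs."""
        mid = 0.5 * (coef[1:] + coef[:-1])
        cm = np.concatenate(([coef[0]], mid))   # c at i-1/2
        cp = np.concatenate((mid, [coef[-1]]))  # c at i+1/2
        return (np.diag(-(cm + cp)) + np.diag(cp[:-1], 1) + np.diag(cm[1:], -1)) / h**2

    A = operator(eps)
    L = operator(np.ones(n))                 # standard Laplacian preconditioner
    phi, omega = np.zeros(n), 0.3            # damping chosen for eps in [1, 5]
    for _ in range(200):                     # stationary preconditioned iteration
        phi += omega * np.linalg.solve(L, f - A @ phi)
    print(np.linalg.norm(f - A @ phi))       # residual should be tiny
    ```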

  16. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. The algorithm modifies and improves upon the earlier study by Mokrane, with a mathematically proven existence of a unique solution and an easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed-luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. The generalized algorithm can be shown to provide better contrast enhancement than histogram equalization; in fact, the histogram equalization technique is a special case of the proposed mapping.

  17. Disease Suppressive Soils: New Insights from the Soil Microbiome

    USDA-ARS?s Scientific Manuscript database

    In this review, we will present three currently-studied model systems with features representative of specific and general suppressiveness. Based on these systems, we will consider hypotheses about the fundamental nature of specific and general disease-suppressive soil microbial communities, explore...

  18. Predicting adsorptive removal of chlorophenol from aqueous solution using artificial intelligence based modeling approaches.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Ojha, Priyanka; Rai, Premanjali

    2013-04-01

    The research aims to develop an artificial intelligence (AI)-based model to predict the adsorptive removal of 2-chlorophenol (CP) in aqueous solution by coconut shell carbon (CSC) using four operational variables (pH of solution, adsorbate concentration, temperature, and contact time), and to investigate their effects on the adsorption process. Accordingly, based on a factorial design, 640 batch experiments were conducted. Nonlinearities in experimental data were checked using Brock-Dechert-Scheinkman (BDS) statistics. Five nonlinear models were constructed to predict the adsorptive removal of CP in aqueous solution by CSC using the four variables as input. Performances of the constructed models were evaluated and compared using statistical criteria. BDS statistics revealed strong nonlinearity in the experimental data. The performance of all the models constructed here was satisfactory. Radial basis function network (RBFN) and multilayer perceptron network (MLPN) models performed better than the generalized regression neural network, support vector machine, and gene expression programming models. Sensitivity analysis revealed that contact time had the highest effect on adsorption, followed by solution pH, temperature, and CP concentration. The study concluded that all the models constructed here were capable of capturing the nonlinearity in the data. The better generalization and predictive performance of the RBFN and MLPN models suggests that these can be used to predict the adsorption of CP in aqueous solution using CSC.

  19. Design and application of a technologically explicit hybrid energy-economy policy model with micro and macro economic dynamics

    NASA Astrophysics Data System (ADS)

    Bataille, Christopher G. F.

    2005-11-01

    Are further energy efficiency gains, or more recently greenhouse gas reductions, expensive or cheap? Analysts provide conflicting advice to policy makers based on divergent modelling perspectives, a 'top-down/bottom-up debate' in which economists use equation-based models that equilibrate markets by maximizing consumer welfare, and technologists use technology simulation models that minimize the financial cost of providing energy services. This thesis summarizes a long-term research project to find a middle ground between these two positions that is more useful to policy makers. Starting with the individual components of a behaviourally realistic and technologically explicit simulation model (ISTUM---Inter Sectoral Technology Use Model), or "hybrid", the individual sectors of the economy are linked using a framework of micro and macro economic feedbacks. These feedbacks are taken from the economic theory that informs the computable general equilibrium (CGE) family of models. Speaking in the languages of both economists and engineers, the resulting "physical" equilibrium model of Canada (CIMS---Canadian Integrated Modeling System) equilibrates energy and end-product markets, including imports and exports, for seven regions and 15 economic sectors, including primary industry, manufacturing, transportation, commerce, residences, governmental infrastructure and the energy supply sectors. Several different policy experiments demonstrate the value-added of the model and how its results compare to top-down and bottom-up practice. In general, the results show that technical adjustments make up about half the response to simulated energy policy, and macroeconomic demand adjustments the other half. Induced technical adjustments predominate with minor policies, while the importance of macroeconomic demand adjustment increases with the strength of the policy. Results are also shown for an experiment to derive estimates of future elasticity of substitution (ESUB) and autonomous energy efficiency indices (AEEI) from the model, parameters that could be used in long-run computable general equilibrium (CGE) analysis. The thesis concludes with a summary of the strengths and weaknesses of the new model as a policy tool, a work plan for its further improvement, and a discussion of the general potential for technologically explicit general equilibrium modelling.

  20. Finite-deformation phase-field chemomechanics for multiphase, multicomponent solids

    NASA Astrophysics Data System (ADS)

    Svendsen, Bob; Shanthraj, Pratheek; Raabe, Dierk

    2018-03-01

    The purpose of this work is the development of a framework for the formulation of geometrically non-linear inelastic chemomechanical models for a mixture of multiple chemical components diffusing among multiple transforming solid phases. The focus here is on general model formulation. No specific model or application is pursued in this work. To this end, basic balance and constitutive relations from non-equilibrium thermodynamics and continuum mixture theory are combined with a phase-field-based description of multicomponent solid phases and their interfaces. Solid phase modeling is based in particular on a chemomechanical free energy and stress relaxation via the evolution of phase-specific concentration fields, order-parameter fields (e.g., related to chemical ordering, structural ordering, or defects), and local internal variables. At the mixture level, differences or contrasts in phase composition and phase local deformation in phase interface regions are treated as mixture internal variables. In this context, various phase interface models are considered. In the equilibrium limit, phase contrasts in composition and local deformation in the phase interface region are determined via bulk energy minimization. On the chemical side, the equilibrium limit of the current model formulation reduces to a multicomponent, multiphase, generalization of existing two-phase binary alloy interface equilibrium conditions (e.g., KKS). On the mechanical side, the equilibrium limit of one interface model considered represents a multiphase generalization of Reuss-Sachs conditions from mechanical homogenization theory. Analogously, other interface models considered represent generalizations of interface equilibrium conditions consistent with laminate and sharp-interface theory. In the last part of the work, selected existing models are formulated within the current framework as special cases and discussed in detail.

  1. A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast) Technology.

    PubMed

    Álvarez de Toledo, Santiago; Anguera, Aurea; Barreiro, José M; Lara, Juan A; Lizcano, David

    2017-01-19

    Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field. This is an obstacle to their generalization and extrapolation to other areas. Besides, neither the reward-punishment (r-p) learning process nor the convergence of results is fast and efficient enough. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results output by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory, and it outperformed traditional methods existing in the literature with respect to learning reliability and efficiency.
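
    A toy sketch of the perception-pattern and reward-punishment bookkeeping described above (the quantization, action set, environment, and scoring rule are all illustrative assumptions, not the paper's system):

    ```python
    import random
    from collections import defaultdict

    def perceive(sensor_reading, resolution=1.0):
        """Quantize raw readings so similar perceptions share one pattern."""
        return tuple(round(v / resolution) for v in sensor_reading)

    scores = defaultdict(float)       # (pattern, action) -> accumulated r-p score
    actions = ["left", "right", "hold"]

    def choose(pattern, eps=0.1):
        if random.random() < eps:     # occasional exploration
            return random.choice(actions)
        return max(actions, key=lambda a: scores[(pattern, a)])

    # toy loop: 'right' is the correct action when the first sensor is positive
    for _ in range(1000):
        reading = (random.uniform(-5, 5), random.uniform(0, 10))
        p = perceive(reading)
        a = choose(p)
        reward = 1.0 if (a == "right") == (reading[0] > 0) else -1.0
        scores[(p, a)] += reward      # positive or negative reinforcement
    ```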

  2. A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast) Technology

    PubMed Central

    Álvarez de Toledo, Santiago; Anguera, Aurea; Barreiro, José M.; Lara, Juan A.; Lizcano, David

    2017-01-01

    Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field. This is an obstacle to their generalization and extrapolation to other areas. Besides, neither the reward-punishment (r-p) learning process nor the convergence of results is fast and efficient enough. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results output by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory, and it outperformed traditional methods existing in the literature with respect to learning reliability and efficiency. PMID:28106849

  3. Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle.

    PubMed

    Nadal, Clement; Pigache, Francois

    2009-11-01

    This work deals with a general energetic approach to establish an accurate electromechanical model of a piezoelectric transformer (PT). Hamilton's principle is used to obtain the equations of motion for free vibrations. The modal characteristics (mass, stiffness, primary and secondary electromechanical conversion factors) are also deduced. Then, to illustrate this general electromechanical method, the variational principle is applied to both homogeneous and nonhomogeneous Rosen-type PT models. A comparison of modal parameters, mechanical displacements, and electrical potentials is presented for both models. Finally, the validity of the electrodynamical model of the nonhomogeneous Rosen-type PT is confirmed by a numerical comparison based on a finite element method and an experimental identification.

  4. Modelling nonlinear viscoelastic behaviours of loudspeaker suspensions-like structures

    NASA Astrophysics Data System (ADS)

    Maillou, Balbine; Lotton, Pierrick; Novak, Antonin; Simon, Laurent

    2018-03-01

    Mechanical properties of an electrodynamic loudspeaker are mainly determined by its suspensions (surround and spider), which behave nonlinearly and typically exhibit frequency-dependent viscoelastic properties such as the creep effect. The paper aims at characterizing the mechanical behaviour of electrodynamic loudspeaker suspensions at low frequencies using nonlinear identification techniques developed in recent years. A Generalized Hammerstein-based model can take into account both frequency dependency and nonlinear properties. As shown in the paper, the model generalizes existing nonlinear or viscoelastic models commonly used for loudspeaker modelling. It is further shown experimentally that an input-dependent law may play a key role in suspension characterization.
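
    A Generalized Hammerstein structure can be sketched as N parallel branches, each a static power nonlinearity followed by its own linear filter; the first-order IIR branch filters below are arbitrary placeholders standing in for the identified frequency-dependent suspension behaviour:

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def generalized_hammerstein(x, branch_filters):
        """y(t) = sum over n of filter_n applied to x(t)**n, n = 1..N."""
        y = np.zeros_like(x)
        for n, (b, a) in enumerate(branch_filters, start=1):
            y += lfilter(b, a, x**n)
        return y

    fs = 8000
    t = np.arange(0, 0.5, 1.0 / fs)
    x = 0.8 * np.sin(2 * np.pi * 20 * t)          # low-frequency drive signal
    branches = [([0.5], [1.0, -0.6]),             # linear branch
                ([0.1], [1.0, -0.3]),             # quadratic branch
                ([0.05], [1.0, -0.1])]            # cubic branch
    print(generalized_hammerstein(x, branches)[:5])
    ```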

  5. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection.

    PubMed

    Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.
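
    A generative sketch of the popularity-productivity idea (the Pareto propensities and the block-rate matrix are illustrative assumptions, not the paper's exact likelihood): the probability of a directed link i -> j scales with i's productivity, j's popularity, and the link rate between their communities, and an off-diagonal-heavy rate matrix yields a bipartite-like general structure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 100, 2
    z = rng.integers(0, k, n)                 # community memberships
    omega = np.array([[0.02, 0.10],           # block link rates; off-diagonal
                      [0.10, 0.02]])          # dominance = bipartite-like
    prod = rng.pareto(2.0, n) + 1.0           # productivity (out-link propensity)
    pop = rng.pareto(2.0, n) + 1.0            # popularity (in-link propensity)

    rate = np.outer(prod, pop) * omega[z][:, z]
    A = (rng.random((n, n)) < np.clip(rate, 0.0, 1.0)).astype(int)
    np.fill_diagonal(A, 0)
    print(A.sum(axis=1)[:10])                 # heavy-tailed out-degrees
    ```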

  6. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection

    NASA Astrophysics Data System (ADS)

    Chai, Bian-fang; Yu, Jian; Jia, Cai-yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.

  7. Bifactor Modeling of the Positive and Negative Syndrome Scale: Generalized Psychosis Spans Schizoaffective, Bipolar, and Schizophrenia Diagnoses.

    PubMed

    Anderson, Ariana E; Marder, Stephen; Reise, Steven P; Savitz, Adam; Salvadore, Giacomo; Fu, Dong Jing; Li, Qingqin; Turkoz, Ibrahim; Han, Carol; Bilder, Robert M

    2018-02-06

    Common genetic variation spans schizophrenia, schizoaffective and bipolar disorders, but historically, these syndromes have been distinguished categorically. A symptom dimension shared across these syndromes, if such a general factor exists, might provide a clearer target for understanding and treating mental illnesses that share core biological bases. We tested the hypothesis that a bifactor model of the Positive and Negative Syndrome Scale (PANSS), containing 1 general factor and 5 specific factors (positive, negative, disorganized, excited, anxiety), explains the cross-diagnostic structure of symptoms better than the traditional 5-factor model, and examined the extent to which a general factor reflects the overall severity of symptoms spanning diagnoses in 5094 total patients with a diagnosis of schizophrenia, schizoaffective, and bipolar disorder. The bifactor model provided superior fit across diagnoses, and was closer to the "true" model, compared to the traditional 5-factor model (Vuong test; P < .001). The general factor included high loadings on 28 of the 30 PANSS items, omitting symptoms associated with the excitement and anxiety/depression domains. The general factor had highest total loadings on symptoms that are often associated with the positive and disorganization syndromes, but there were also substantial loadings on the negative syndrome thus leading to the interpretation of this factor as reflecting generalized psychosis. A bifactor model derived from the PANSS can provide a stronger framework for measuring cross-diagnostic psychopathology than a 5-factor model, and includes a generalized psychosis dimension shared at least across schizophrenia, schizoaffective, and bipolar disorder. © The Author(s) 2018. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com

  8. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
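
    The hybrid idea can be sketched on a single 1D compartment chain (rates, coupling, and species are illustrative assumptions, not the Virtual Cell or Smoldyn API): an abundant species diffuses deterministically while a rare species reacts stochastically via Poisson (tau-leaping) draws.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    nx, dt, D = 50, 0.01, 0.5
    u = np.zeros(nx); u[nx // 2] = 100.0   # deterministic (abundant) species
    m = np.zeros(nx, dtype=int)            # stochastic (rare) species counts

    for _ in range(500):
        # deterministic step: explicit finite-difference diffusion of u
        u[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2])
        # stochastic step: u catalyses births of m; m decays (tau-leaping)
        births = rng.poisson(0.05 * u * dt)
        deaths = rng.binomial(m, 1.0 - np.exp(-0.1 * dt))
        m += births - deaths
    print(u.max(), m.sum())
    ```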

  9. The generalized drift flux approach: Identification of the void-drift closure law

    NASA Technical Reports Server (NTRS)

    Boure, J. A.

    1989-01-01

    The main characteristics and the potential advantages of generalized drift flux models are presented. In particular, it is stressed that the issue of the propagation properties and of the mathematical nature (hyperbolic or not) of the model, and the problem of closure, are easier to tackle than in two-fluid models. The problem of identifying the differential void-drift closure law inherent to generalized drift flux models is then addressed. Such a void-drift closure, based on wave properties, is proposed for bubbly flows. It involves a drift relaxation time which is of the order of 0.25 s. It is observed that, although wave properties provide essential closure validity tests, they do not represent an easily usable source of quantitative information on the closure laws.

  10. Satellite-based terrestrial production efficiency modeling

    PubMed Central

    McCallum, Ian; Wagner, Wolfgang; Schmullius, Christiane; Shvidenko, Anatoly; Obersteiner, Michael; Fritz, Steffen; Nilsson, Sten

    2009-01-01

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE) which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for satellite-based biomass measurements to improve Ra estimation; and satellite-based soil moisture data could improve determination of soil water stress. PMID:19765285
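
    A minimal sketch of the light-use-efficiency kernel shared by these models, GPP = ε_max × temperature scalar × water scalar × fPAR × PAR; the scalar forms and constants below are illustrative assumptions rather than any single model's parameterization:

    ```python
    import numpy as np

    def gpp_lue(par, fpar, tmin, soil_moisture, eps_max=1.8):
        """GPP (g C m-2 d-1) from PAR (MJ m-2 d-1) and fPAR (0-1)."""
        t_scalar = np.clip((tmin + 8.0) / 16.0, 0.0, 1.0)  # cold down-regulation
        w_scalar = np.clip(soil_moisture, 0.0, 1.0)        # water stress
        return eps_max * t_scalar * w_scalar * fpar * par

    print(gpp_lue(par=10.0, fpar=0.7, tmin=5.0, soil_moisture=0.8))
    ```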

  11. Can metric-based approaches really improve multi-model climate projections? A perfect model framework applied to summer temperature change in France.

    NASA Astrophysics Data System (ADS)

    Boé, Julien; Terray, Laurent

    2014-05-01

    Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and, generally, the lack of a rationale for choosing a particular climate model over others, it is widely accepted that future climate change and its impacts should not be estimated based on a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true, and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in the assessment of the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple models' results taking into account their relative credibility as measured by a given metric? How can we be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of an answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members from the multi-model ensemble are used to derive, with a metric-based approach, a posterior estimate of climate change, based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combining multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvements associated with metric-based approaches compared to the MMEM in terms of errors and uncertainties will be quantified.
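
    A compact numerical sketch of the perfect model test (synthetic data; the assumed metric-change relationship and the Gaussian weighting kernel are illustrative choices): each model in turn plays the truth, the others are weighted by metric proximity, and the weighted posterior is compared against the plain MMEM.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_models = 20
    metric = rng.normal(0.0, 1.0, n_models)        # present-day metric values
    change = 2.0 + 0.8 * metric + rng.normal(0.0, 0.3, n_models)  # assumed link

    err_weighted, err_mmem = [], []
    for truth in range(n_models):                  # each model plays 'truth'
        others = np.delete(np.arange(n_models), truth)
        w = np.exp(-0.5 * ((metric[others] - metric[truth]) / 0.5) ** 2)
        posterior = np.sum(w * change[others]) / np.sum(w)
        err_weighted.append(abs(posterior - change[truth]))
        err_mmem.append(abs(change[others].mean() - change[truth]))
    print(np.mean(err_weighted), np.mean(err_mmem))
    ```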

  12. Surprise! Infants consider possible bases of generalization for a single input example.

    PubMed

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization might occur when even a single input example is surprising, given the learner's current model of the domain. To test the possibility that infants are able to generalize based on a single example, we familiarized 9-month-olds with a single three-syllable input example that contained either one surprising feature (syllable repetition, Experiment 1) or two features (repetition and a rare syllable, Experiment 2). In both experiments, infants generalized only to new strings that maintained all of the surprising features from familiarization. This research suggests that surprise can promote very rapid generalization. © 2014 John Wiley & Sons Ltd.

  13. Value generalization in human avoidance learning

    PubMed Central

    Norbury, Agnes; Robbins, Trevor W; Seymour, Ben

    2018-01-01

    Generalization during aversive decision-making allows us to avoid a broad range of potential threats following experience with a limited set of exemplars. However, over-generalization, resulting in excessive and inappropriate avoidance, has been implicated in a variety of psychological disorders. Here, we use reinforcement learning modelling to dissect out different contributions to the generalization of instrumental avoidance in two groups of human volunteers (N = 26, N = 482). We found that generalization of avoidance could be parsed into perceptual and value-based processes, and further, that value-based generalization could be subdivided into that relating to aversive and neutral feedback, with corresponding circuits including primary sensory cortex, anterior insula, amygdala and ventromedial prefrontal cortex. Further, generalization from aversive, but not neutral, feedback was associated with self-reported anxiety and intrusive thoughts. These results reveal a set of distinct mechanisms that mediate generalization in avoidance learning, and show how specific individual differences within them can yield anxiety. PMID:29735014

  14. Value generalization in human avoidance learning.

    PubMed

    Norbury, Agnes; Robbins, Trevor W; Seymour, Ben

    2018-05-08

    Generalization during aversive decision-making allows us to avoid a broad range of potential threats following experience with a limited set of exemplars. However, over-generalization, resulting in excessive and inappropriate avoidance, has been implicated in a variety of psychological disorders. Here, we use reinforcement learning modelling to dissect out different contributions to the generalization of instrumental avoidance in two groups of human volunteers (N = 26, N = 482). We found that generalization of avoidance could be parsed into perceptual and value-based processes, and further, that value-based generalization could be subdivided into that relating to aversive and neutral feedback, with corresponding circuits including primary sensory cortex, anterior insula, amygdala and ventromedial prefrontal cortex. Further, generalization from aversive, but not neutral, feedback was associated with self-reported anxiety and intrusive thoughts. These results reveal a set of distinct mechanisms that mediate generalization in avoidance learning, and show how specific individual differences within them can yield anxiety. © 2018, Norbury et al.

  15. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that the additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
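
    One way to reproduce the idea with scikit-learn is StackingClassifier with passthrough=True, which feeds the combiner the raw input pattern alongside the base classifiers' outputs; the toy data stands in for the ECG beats, and the particular base networks and combiner are illustrative assumptions:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                               n_informative=8, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    base = [("mlp1", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                   random_state=0)),
            ("mlp2", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                   random_state=1))]
    stack = StackingClassifier(estimators=base,
                               final_estimator=LogisticRegression(max_iter=500),
                               passthrough=True)   # append inputs to outputs
    print(stack.fit(Xtr, ytr).score(Xte, yte))
    ```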

  16. A person based formula for allocating commissioning funds to general practices in England: development of a statistical model.

    PubMed

    Dixon, Jennifer; Smith, Peter; Gravelle, Hugh; Martin, Steve; Bardsley, Martin; Rice, Nigel; Georghiou, Theo; Dusheiko, Mark; Billings, John; Lorenzo, Michael De; Sanderson, Colin

    2011-11-22

    The objective was to develop a formula for allocating resources for commissioning hospital care to all general practices in England based on the health needs of the people registered in each practice. Multivariate prospective statistical models were developed in which routinely collected electronic information from 2005-6 and 2006-7 on individuals and the areas in which they lived was used to predict their costs of hospital care in the next year, 2007-8. Data on individuals included all diagnoses recorded at any inpatient admission. Models were developed on a random sample of 5 million people and validated on a second random sample of 5 million people and a third sample of 5 million people drawn from a random sample of practices. The setting was all general practices in England as of 1 April 2007; the data covered all NHS inpatient admissions and outpatient attendances for individuals registered with a general practice on that date, and the study population was all individuals registered with a general practice in England at 1 April 2007. The main outcome measure was the power of the statistical models to predict the costs of the individual patient or each practice's registered population for 2007-8, tested with a range of metrics (R(2) reported here). Comparisons of predicted costs in 2007-8 with actual costs incurred in the same year were calculated by individual and by practice. Models including person level information (age, sex, and ICD-10 diagnostic codes recorded) and a range of area level information (such as socioeconomic deprivation and supply of health facilities) were most predictive of costs. After accounting for person level variables, area level variables added little explanatory power. The best models for resource allocation could predict upwards of 77% of the variation in costs at practice level, and about 12% at the person level. With these models, the predicted costs of about a third of practices would exceed or undershoot the actual costs by 10% or more. Smaller practices were more likely to be in these groups. A model was developed that performed well by international standards, and could be used for allocations to practices for commissioning. The best formulas, however, could predict only about 12% of the variation in next year's costs of most inpatient and outpatient NHS care for each individual. Person-based diagnostic data significantly added to the predictive power of the models.

  17. Detection of abrupt changes in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1984-01-01

    Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple filter-based techniques and residual-based methods, including the multiple model and generalized likelihood ratio methods, are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure, and robustness to model uncertainty, are discussed.
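
    A sketch of the generalized likelihood ratio idea for a jump in the mean of Gaussian residuals (the signal and noise level are illustrative): the unknown onset time is handled by maximizing the likelihood ratio over candidate onsets, which is exactly why onset-time uncertainty drives algorithm complexity.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    r = rng.normal(0.0, 1.0, 200)
    r[120:] += 1.5                        # abrupt mean change at t = 120

    def glr(residuals, sigma=1.0):
        """Return (max log-GLR statistic, estimated onset time)."""
        n = len(residuals)
        best, onset = -np.inf, None
        for k in range(1, n):             # candidate onset times
            m = residuals[k:].mean()      # ML estimate of the jump size
            stat = (n - k) * m**2 / (2.0 * sigma**2)
            if stat > best:
                best, onset = stat, k
        return best, onset

    print(glr(r))                         # onset estimate should be near 120
    ```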

  18. Some Useful Cost-Benefit Criteria for Evaluating Computer-Based Test Delivery Models and Systems

    ERIC Educational Resources Information Center

    Luecht, Richard M.

    2005-01-01

    Computer-based testing (CBT) is typically implemented using one of three general test delivery models: (1) multiple fixed testing (MFT); (2) computer-adaptive testing (CAT); or (3) multistage testing (MST). This article reviews some of the real cost drivers associated with CBT implementation--focusing on item production costs, the costs…

  19. Scintillation index and performance analysis of wireless optical links over non-Kolmogorov weak turbulence based on generalized atmospheric spectral model.

    PubMed

    Cang, Ji; Liu, Xu

    2011-09-26

    Based on the generalized spectral model for non-Kolmogorov atmospheric turbulence, analytic expressions of the scintillation index (SI) are derived for plane and spherical optical waves and a partially coherent Gaussian beam propagating horizontally through non-Kolmogorov turbulence in the weak fluctuation regime. The new expressions relate the SI to the finite turbulence inner and outer scales, the spatial coherence of the source, and the spectral power law, and are then used to analyze the effects of atmospheric conditions and link length on the performance of wireless optical communication links. © 2011 Optical Society of America

  20. Socrates Meets the 21st Century

    ERIC Educational Resources Information Center

    Lege, Jerry

    2005-01-01

    An inquiry-based approach called the "modelling discussion" is introduced for structuring beginning modelling activity, teaching new mathematics through examining its applications in contextual situations, and as a general classroom management technique when students are engaged in mathematical modelling. An example which illustrates the style and…

  1. SHAWNEE LIME/LIMESTONE SCRUBBING COMPUTERIZED DESIGN/COST-ESTIMATE MODEL USERS MANUAL

    EPA Science Inventory

    The manual gives a general description of the Shawnee lime/limestone scrubbing computerized design/cost-estimate model and detailed procedures for using it. It describes all inputs and outputs, along with available options. The model, based on Shawnee Test Facility scrubbing data...

  2. A flexible framework for process-based hydraulic and water quality modeling of stormwater green infrastructure performance

    EPA Science Inventory

    Background: Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. While popular, GI models are generally relatively simplistic. However,...

  3. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were thereby avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
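
    The key log-normal conversion can be sketched directly: a chance constraint P(use ≤ available) ≥ α, with log-normally distributed availability, has the deterministic equivalent use ≤ exp(μ + σ·z_{1−α}). The runoff parameters and satisfaction levels below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    mu, sigma = np.log(100.0), 0.4        # ln(available water) ~ N(mu, sigma^2)

    def max_allowable_use(alpha):
        """Largest use satisfying the chance constraint at level alpha."""
        return float(np.exp(mu + sigma * norm.ppf(1.0 - alpha)))

    for alpha in (0.90, 0.95, 0.99):      # higher reliability -> lower use
        print(alpha, round(max_allowable_use(alpha), 1))
    ```

    Raising α tightens the allowable use, which is the economy-reliability trade-off the interval solutions explore.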

  4. Competing regression models for longitudinal data.

    PubMed

    Alencar, Airlane P; Singer, Julio M; Rocha, Francisco Marcelo M

    2012-03-01

    The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
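
    Two of the competing approaches can be contrasted in a few lines with statsmodels (the simulated pretest-posttest-style data are illustrative): a random-intercept mixed model gives subject-specific inference, while GEE with an exchangeable working correlation targets the marginal expected response.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_subj, n_obs = 60, 4
    df = pd.DataFrame({
        "id": np.repeat(np.arange(n_subj), n_obs),
        "time": np.tile(np.arange(n_obs), n_subj).astype(float),
    })
    u = rng.normal(0.0, 1.0, n_subj)               # subject random intercepts
    df["y"] = 2.0 + 0.5 * df["time"] + u[df["id"]] + rng.normal(0.0, 1.0, len(df))

    lmm = smf.mixedlm("y ~ time", df, groups=df["id"]).fit()
    gee = smf.gee("y ~ time", "id", df, family=sm.families.Gaussian(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(lmm.params["time"], gee.params["time"])  # similar slope estimates
    ```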

  5. Comparison of Models for Ball Bearing Dynamic Capacity and Life

    NASA Technical Reports Server (NTRS)

    Gupta, Pradeep K.; Oswald, Fred B.; Zaretsky, Erwin V.

    2015-01-01

    Generalized formulations for dynamic capacity and life of ball bearings, based on the models introduced by Lundberg and Palmgren and Zaretsky, have been developed and implemented in the bearing dynamics computer code, ADORE. Unlike the original Lundberg-Palmgren dynamic capacity equation, where the elastic properties are part of the life constant, the generalized formulations permit variation of elastic properties of the interacting materials. The newly updated Lundberg-Palmgren model allows prediction of life as a function of elastic properties. For elastic properties similar to those of AISI 52100 bearing steel, both the original and updated Lundberg-Palmgren models provide identical results. A comparison between the Lundberg-Palmgren and the Zaretsky models shows that at relatively light loads the Zaretsky model predicts a much higher life than the Lundberg-Palmgren model. As the load increases, the Zaretsky model provides a much faster drop-off in life. This is because the Zaretsky model is much more sensitive to load than the Lundberg-Palmgren model. The generalized implementation where all model parameters can be varied provides an effective tool for future model validation and enhancement in bearing life prediction capabilities.
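
    The load-life behavior described above can be visualized with the classical load-life relation L10 = (C/P)^p. The sketch below uses p = 3, the standard Lundberg-Palmgren value for ball bearings, and p = 4 merely as a stand-in for a more load-sensitive model such as Zaretsky's; the capacity and load values are illustrative only.

```python
# Illustration: a higher load-life exponent gives longer predicted life at
# light loads and a faster drop-off as load grows, with crossover at P = C.
import numpy as np

C = 10_000.0                        # basic dynamic capacity, N (illustrative)
P = np.linspace(1_000, 10_000, 5)   # equivalent loads, N

for p in (3, 4):
    life = (C / P) ** p             # life in millions of revolutions
    print(f"p = {p}:", np.round(life, 2))
# At P << C the p = 4 curve lies above the p = 3 curve; the ordering
# reverses as P approaches C, mirroring the crossover described above.
```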

  6. Giant pulsar glitches in full general relativity

    NASA Astrophysics Data System (ADS)

    Sourie, A.; Chamel, N.; Novak, J.; Oertel, M.

    2017-12-01

    We present recent numerical simulations of giant pulsar glitches, as observed in the emblematic Vela pulsar, based on a two-fluid model, including for the first time all general-relativistic effects and realistic equations of state. In particular, we focus on modelling the vortex-mediated transfer of angular momentum that takes place during the spin-up stage from the neutron superfluid to the charged particles through dissipative mutual friction forces. Taking general relativity into account not only modifies the structure of the star but also leads to a new coupling between the fluids arising from frame-dragging effects. As a consequence, general relativity can strongly affect the global dynamics of pulsar glitches: the errors on the value of the characteristic rise time incurred by using Newtonian gravity are found to be as large as ~40% for the models considered.

  7. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are qualitatively and quantitatively evaluated to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
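
    The generalized p-shrinkage mapping mentioned above has a simple closed form. The sketch below follows Chartrand's p-shrinkage operator, which reduces to ordinary soft thresholding at p = 1; it is written from that general form, not from the paper's exact notation.

```python
# Generalized p-shrinkage: sign(x) * max(|x| - lam^(2-p) |x|^(p-1), 0).
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage; p = 1 recovers ordinary soft thresholding."""
    ax = np.abs(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        thresh = lam ** (2 - p) * ax ** (p - 1)   # infinite at x = 0 for p < 1
    mag = np.maximum(ax - thresh, 0.0)
    return np.where(ax == 0, 0.0, np.sign(x) * mag)

x = np.linspace(-3, 3, 7)
print(p_shrink(x, lam=1.0, p=1.0))   # soft thresholding
print(p_shrink(x, lam=1.0, p=0.5))   # zeroes small entries more aggressively
```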

  8. General relativity in upper secondary school: Design and evaluation of an online learning environment using the model of educational reconstruction

    NASA Astrophysics Data System (ADS)

    Kersting, Magdalena; Henriksen, Ellen Karoline; Bøe, Maria Vetleseter; Angell, Carl

    2018-06-01

    Because of its abstract nature, Albert Einstein's theory of general relativity is rarely present in school physics curricula. Although the educational community has started to investigate ways of bringing general relativity to classrooms, field-tested educational material is rare. Employing the model of educational reconstruction, we present a collaborative online learning environment that was introduced to final year students (18-19 years old) in six Norwegian upper secondary physics classrooms. Design-based research methods guided the development of the learning resources, which were based on a sociocultural view of learning and a historical-philosophical approach to teaching general relativity. To characterize students' learning from and interaction with the learning environment we analyzed focus group interviews and students' oral and written responses to assigned problems and discussion tasks. Our findings show how design choices on different levels can support or hinder understanding of general relativity, leading to the formulation of design principles that help to foster qualitative understanding and encourage collaborative learning. The results indicate that upper secondary students can obtain a qualitative understanding of general relativity when provided with appropriately designed learning resources and sufficient scaffolding of learning through interaction with teacher and peers.

  9. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate more uncertainty in the fault slip-rate parameters that control earthquake-activity rates than was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  10. Multichannel Speech Enhancement Based on Generalized Gamma Prior Distribution with Its Online Adaptive Estimation

    NASA Astrophysics Data System (ADS)

    Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada

    We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from actual noisy speech in a frame-by-frame manner. The utilization of a more general prior distribution with its online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, the multichannel information, in terms of cross-channel statistics, is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air conditioning, and open windows.

  11. Cancer biomarker discovery is improved by accounting for variability in general levels of drug sensitivity in pre-clinical models.

    PubMed

    Geeleher, Paul; Cox, Nancy J; Huang, R Stephanie

    2016-09-21

    We show that variability in general levels of drug sensitivity in pre-clinical cancer models confounds biomarker discovery. However, using a very large panel of cell lines, each treated with many drugs, we could estimate a general level of sensitivity to all drugs in each cell line. By conditioning on this variable, biomarkers were identified that were more likely to be effective in clinical trials than those identified using a conventional uncorrected approach. We find that differences in general levels of drug sensitivity are driven by biologically relevant processes. We developed a gene-expression-based method that can be used to correct for this confounder in future studies.

  12. Generalization of value in reinforcement learning by humans

    PubMed Central

    Wimmer, G. Elliott; Daw, Nathaniel D.; Shohamy, Daphna

    2012-01-01

    Research in decision making has focused on the role of dopamine and its striatal targets in guiding choices via learned stimulus-reward or stimulus-response associations, behavior that is well-described by reinforcement learning (RL) theories. However, basic RL is relatively limited in scope and does not explain how learning about stimulus regularities or relations may guide decision making. A candidate mechanism for this type of learning comes from the domain of memory, which has highlighted a role for the hippocampus in learning of stimulus-stimulus relations, typically dissociated from the role of the striatum in stimulus-response learning. Here, we used fMRI and computational model-based analyses to examine the joint contributions of these mechanisms to RL. Humans performed an RL task with added relational structure, modeled after tasks used to isolate hippocampal contributions to memory. On each trial participants chose one of four options, but the reward probabilities for pairs of options were correlated across trials. This (uninstructed) relationship between pairs of options potentially enabled an observer to learn about options’ values based on experience with the other options and to generalize across them. We observed BOLD activity related to learning in the striatum and also in the hippocampus. By comparing a basic RL model to one augmented to allow feedback to generalize between correlated options, we tested whether choice behavior and BOLD activity were influenced by the opportunity to generalize across correlated options. Although such generalization goes beyond standard computational accounts of RL and striatal BOLD, both choices and striatal BOLD were better explained by the augmented model. Consistent with the hypothesized role for the hippocampus in this generalization, functional connectivity between the ventral striatum and hippocampus was modulated, across participants, by the ability of the augmented model to capture participants’ choice. Our results thus point toward an interactive model in which striatal RL systems may employ relational representations typically associated with the hippocampus. PMID:22487039
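
    The contrast between the basic and augmented models can be made concrete with a small simulation. In the sketch below, the augmented learner propagates each reward to the correlated partner option with a generalization weight w; the pairing scheme, reward probabilities, and parameters are illustrative stand-ins, not the fitted model from the study.

```python
# Basic RL update vs. an update augmented to generalize feedback across
# correlated options (simulated bandit task, illustrative parameters).
import numpy as np

n_options, alpha, w = 4, 0.3, 0.5   # learning rate; generalization weight
pair = {0: 1, 1: 0, 2: 3, 3: 2}     # options whose reward rates are linked
Q_basic = np.zeros(n_options)
Q_aug = np.zeros(n_options)

def update(Q, choice, reward, generalize):
    Q[choice] += alpha * (reward - Q[choice])      # standard delta rule
    if generalize:                                 # also nudge the value of
        other = pair[choice]                       # the correlated option
        Q[other] += alpha * w * (reward - Q[other])

rng = np.random.default_rng(1)
for _ in range(200):
    choice = int(rng.integers(n_options))
    reward = float(rng.random() < (0.8 if choice < 2 else 0.3))
    update(Q_basic, choice, reward, generalize=False)
    update(Q_aug, choice, reward, generalize=True)
print(Q_basic.round(2), Q_aug.round(2))  # augmented values track the pairs
```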

  13. Why has the Bohr-Sommerfeld model of the atom been ignored by general chemistry textbooks?

    PubMed

    Niaz, Mansoor; Cardellini, Liberato

    2011-12-01

    Bohr's model of the atom is considered to be important by general chemistry textbooks. A major shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. In order to increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study has the following objectives: 1) Formulation of criteria based on a history and philosophy of science framework; and 2) Evaluation, based on the criteria, of university-level general chemistry textbooks published in Italy and the U.S.A. Presentation of a textbook was considered "satisfactory" if it included a description of the Bohr-Sommerfeld model along with diagrams of the elliptical orbits. Of the 28 textbooks published in Italy that were analyzed, only five were classified as "satisfactory". Of the 46 textbooks published in the U.S.A., only three were classified as "satisfactory". This study has the following educational implications: a) Sommerfeld's innovation (auxiliary hypothesis) of introducing elliptical orbits helped to restore the viability of Bohr's model; b) the Bohr-Sommerfeld model went no further than the alkali metals, which led scientists to look for other models; c) this clearly shows that scientific models are tentative in nature; d) textbook authors and chemistry teachers do not consider the tentative nature of scientific knowledge to be important; e) inclusion of the Bohr-Sommerfeld model in textbooks can help our students to understand how science progresses.

  14. Computer Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  15. Structural Behavioral Study on the General Aviation Network Based on Complex Network

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Lu, Na

    2017-12-01

    The general aviation system is an open and dissipative system with complex structures and behavioral features. This paper establishes the system model and network model for general aviation. We analyze integral attributes and individual attributes by applying complex network theory and conclude that the general aviation network has influential enterprise factors and node relations. We check whether the network has the small-world effect, scale-free property, and network centrality that a complex network should have by examining its degree distributions, and prove that the general aviation network system is a complex network. Therefore, we propose to advance the general aviation industrial chain toward a collaborative innovation cluster of advanced-form industries by strengthening the network multiplication effect, stimulating innovation performance, and spanning structural-hole paths.

  16. Control Theory and Statistical Generalizations.

    ERIC Educational Resources Information Center

    Powers, William T.

    1990-01-01

    Contrasts modeling methods in control theory to the methods of statistical generalizations in empirical studies of human or animal behavior. Presents a computer simulation that predicts behavior based on variables (effort and rewards) determined by the invariable (desired reward). Argues that control theory methods better reflect relationships to…

  17. The general deterrence of driving while intoxicated. Volume 1, System analysis and computer-based simulation

    DOT National Transportation Integrated Search

    1978-01-01

    A system analysis was completed of the general deterrence of driving while intoxicated (DWI). Elements which influence DWI decisions were identified and interrelated in a system model; then, potential countermeasures which might be employed in DWI ge...

  18. Linking service quality, customer satisfaction, and behavioral intention.

    PubMed

    Woodside, A G; Frey, L L; Daly, R T

    1989-12-01

    Based on the service quality and script theory literature, a framework of relationships among service quality, customer satisfaction, and behavioral intention for service purchases is proposed. Specific models are developed from the general framework and the models are applied and tested for the highly complex and divergent consumer service of overnight hospital care. Service quality, customer satisfaction, and behavioral intention data were collected from recent patients of two hospitals. The findings support the specific models and general framework. Implications for theory, service marketing, and future research are discussed.

  19. TI-59 Programs for Multiple Regression.

    DTIC Science & Technology

    1980-05-01

    The general linear hypothesis model of full rank [Graybill, 1961] can be written as Y = Xβ + ε, ε ~ N(0, σ²I), where Y is the n×1 vector of observations, X is the n×k design matrix, β is the k×1 vector of coefficients, and ε is the n×1 error vector. A "reduced model" solution and confidence intervals for linear functions of the coefficients can be obtained using (X'X)⁻¹ and σ̂², based on the t distribution. The program calculates these quantities for the general linear hypothesis model Y = Xβ + ε.
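
    For reference, the quantities such a program computes are straightforward to reproduce. The sketch below estimates β and σ², then forms a t-based confidence interval for a linear function a'β of the coefficients; the data are simulated for illustration.

```python
# Least squares for Y = X beta + eps, eps ~ N(0, sigma^2 I), with a
# t-based confidence interval for a linear function a'beta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(0, 1.0, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)                  # unbiased sigma^2 estimate

a = np.array([0.0, 1.0, -1.0])                # linear function a'beta
est = a @ beta_hat
se = np.sqrt(s2 * a @ XtX_inv @ a)
t = stats.t.ppf(0.975, df=n - k)
print(f"a'beta = {est:.3f}, 95% CI = ({est - t*se:.3f}, {est + t*se:.3f})")
```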

  20. AveBoost2: Boosting for Noisy Data

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (the difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data used in our earlier work, both as originally supplied and with added label noise, in which a small fraction of the data has its original label changed. Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than on the original data.
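
    The averaging step can be sketched as follows: after each round, an AdaBoost-style reweighted distribution is computed and folded into a running average, and the next base model is trained on that average. This is one plausible reading of the description above, not a verified reproduction of the published AveBoost2 algorithm; the stump learner and update constants are illustrative.

```python
# Hedged sketch of an averaged-distribution boosting loop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def aveboost_sketch(X, y, rounds=10):
    n = len(y)
    avg = np.full(n, 1.0 / n)             # running average of distributions
    models = []
    for t in range(1, rounds + 1):
        clf = DecisionTreeClassifier(max_depth=1)
        clf.fit(X, y, sample_weight=avg)
        miss = clf.predict(X) != y
        err = max(avg[miss].sum(), 1e-12)
        if err >= 0.5:
            break
        # AdaBoost-style reweighting of the current average distribution
        dist = avg * np.where(miss, 0.5 / err, 0.5 / (1 - err))
        dist /= dist.sum()
        avg = (t * avg + dist) / (t + 1)  # fold into the running average
        models.append(clf)
    return models

X, y = make_classification(n_samples=200, random_state=0)
print(len(aveboost_sketch(X, y)), "stumps trained")
```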

  1. Modelo Pedagogico de Educacion Primaria para Adultos: Guia General de Apoyo para el Estudiante de Primaria (Pedagogical Model for Adult Primary Education: General Guide for the Student).

    ERIC Educational Resources Information Center

    Instituto Nacional para la Educacion de los Adultos, Mexico City (Mexico).

    This book, part of a Mexican series of instructional materials, is directed toward people over the age of 15 who are interested in beginning, continuing or finishing their basic education. It explains the pedagogical model developed for adult education in Mexico based on the following features: (1) the content of the textbooks must be useful for…

  2. Action Recommendation for Cyber Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhury, Sutanay; Rodriguez, Luke R.; Curtis, Darren S.

    2015-09-01

    This paper presents a unifying graph-based model for representing the infrastructure, behavior, and missions of an enterprise. We describe how the model can be used to achieve resiliency against a wide class of failures and attacks. We introduce an algorithm for recommending resilience-establishing actions based on dynamic updates to the models. Without loss of generality, we show the effectiveness of the algorithm for preserving latency-based quality of service (QoS). Our models and the recommendation algorithms are implemented in a software framework that we seek to release as an open-source framework for simulating resilient cyber systems.

  3. A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.

    2008-01-01

    A variable fidelity, multiscale, physics based finite element procedure for predicting progressive damage and failure of laminated continuous fiber reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina-level using thermodynamically based Schapery Theory. Separate failure criteria are applied at either the global-scale or the microscale in two different FEM models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the micro-level. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.

  4. Interpretation of pH-activity profiles for acid-base catalysis from molecular simulations.

    PubMed

    Dissanayake, Thakshila; Swails, Jason M; Harris, Michael E; Roitberg, Adrian E; York, Darrin M

    2015-02-17

    The measurement of reaction rate as a function of pH provides essential information about mechanism. These rates are sensitive to the pKa values of amino acids directly involved in catalysis that are often shifted by the enzyme active site environment. Experimentally observed pH-rate profiles are usually interpreted using simple kinetic models that allow estimation of "apparent pKa" values of presumed general acid and base catalysts. One of the underlying assumptions in these models is that the protonation states are uncorrelated. In this work, we introduce the use of constant pH molecular dynamics simulations in explicit solvent (CpHMD) with replica exchange in the pH-dimension (pH-REMD) as a tool to aid in the interpretation of pH-activity data of enzymes and to test the validity of different kinetic models. We apply the methods to RNase A, a prototype acid-base catalyst, to predict the macroscopic and microscopic pKa values, as well as the shape of the pH-rate profile. Results for apo and cCMP-bound RNase A agree well with available experimental data and suggest that deprotonation of the general acid and protonation of the general base are not strongly coupled in transphosphorylation and hydrolysis steps. Stronger coupling, however, is predicted for the Lys41 and His119 protonation states in apo RNase A, leading to the requirement for a microscopic kinetic model. This type of analysis may be important for other catalytic systems where the active forms of the implicated general acid and base are oppositely charged and more highly correlated. These results suggest a new way for CpHMD/pH-REMD simulations to bridge the gap with experiments to provide a molecular-level interpretation of pH-activity data in studies of enzyme mechanisms.
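
    The simple kinetic model referred to above is typically the bell-shaped diprotic form, in which activity requires a deprotonated general base and a protonated general acid and the two protonation states are assumed independent. A minimal sketch, with illustrative pKa values:

```python
# Bell-shaped pH-rate profile under uncorrelated protonation states:
# rate = k_max / (1 + 10^(pKa_base - pH) + 10^(pH - pKa_acid)).
import numpy as np

def ph_rate(pH, k_max, pKa_base, pKa_acid):
    frac_active = 1.0 / (1.0 + 10 ** (pKa_base - pH) + 10 ** (pH - pKa_acid))
    return k_max * frac_active

pH = np.linspace(2, 12, 11)
print(np.round(ph_rate(pH, k_max=1.0, pKa_base=6.0, pKa_acid=8.0), 3))
# Coupled (correlated) protonation states break this factorized form,
# which is why the microscopic model discussed above becomes necessary.
```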

  5. Interpretation of pH-activity Profiles for Acid-Base Catalysis from Molecular Simulations

    PubMed Central

    Dissanayake, Thakshila; Swails, Jason; Harris, Michael E.; Roitberg, Adrian E.; York, Darrin M.

    2015-01-01

    The measurement of reaction rate as a function of pH provides essential information about mechanism. These rates are sensitive to the pKa values of amino acids directly involved in catalysis that are often shifted by the enzyme active site environment. Experimentally observed pH-rate profiles are usually interpreted using simple kinetic models that allow estimation of “apparent pKa” values of presumed general acid and base catalysts. One of the underlying assumptions in these models is that the protonation states are uncorrelated. In the present work, we introduce the use of constant pH molecular dynamics simulations in explicit solvent (CpHMD) with replica exchange in the pH-dimension (pH-REMD) as a tool to aid in the interpretation of pH-activity data of enzymes, and test the validity of different kinetic models. We apply the methods to RNase A, a prototype acid/base catalyst, to predict the macroscopic and microscopic pKa values, as well as the shape of the pH-rate profile. Results for apo and cCMP-bound RNase A agree well with available experimental data, and suggest that deprotonation of the general acid and protonation of the general base are not strongly coupled in transphosphorylation and hydrolysis steps. Stronger coupling, however, is predicted for the Lys41 and His119 protonation states in apo RNase A, leading to the requirement for a microscopic kinetic model. This type of analysis may be important for other catalytic systems where the active forms of implicated general acid and base are oppositely charged and more highly correlated. These results suggest a new way for CpHMD/pH-REMD simulations to bridge the gap with experiments to provide a molecular-level interpretation of pH-activity data in studies of enzyme mechanisms. PMID:25615525

  6. PDE-based geophysical modelling using finite elements: examples from 3D resistivity and 2D magnetotellurics

    NASA Astrophysics Data System (ADS)

    Schaa, R.; Gross, L.; du Plessis, J.

    2016-04-01

    We present a general finite-element solver, escript, tailored to solve geophysical forward and inverse modeling problems in terms of partial differential equations (PDEs) with suitable boundary conditions. Escript’s abstract interface allows geoscientists to focus on solving the actual problem without being experts in numerical modeling. General-purpose finite element solvers have found wide use especially in engineering fields and find increasing application in the geophysical disciplines as these offer a single interface to tackle different geophysical problems. These solvers are useful for data interpretation and for research, but can also be a useful tool in educational settings. This paper serves as an introduction into PDE-based modeling with escript where we demonstrate in detail how escript is used to solve two different forward modeling problems from applied geophysics (3D DC resistivity and 2D magnetotellurics). Based on these two different cases, other geophysical modeling work can easily be realized. The escript package is implemented as a Python library and allows the solution of coupled, linear or non-linear, time-dependent PDEs. Parallel execution for both shared and distributed memory architectures is supported and can be used without modifications to the scripts.
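
    As a flavor of the abstract interface, the sketch below poses a simple Poisson-type problem in escript's general PDE form -div(A grad u) = Y with a Dirichlet condition on one face. Module paths and coefficient names follow the esys-escript documentation as commonly presented, but versions differ, so treat this as an outline to check against the current API rather than verified code.

```python
# Hedged escript sketch: steady-state diffusion with a unit source.
from esys.escript import kronecker, whereZero
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle   # simple 2D rectangular mesh domain

domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)
pde = LinearPDE(domain)
x = domain.getX()
pde.setValue(A=kronecker(domain),   # identity conductivity tensor
             Y=1.0,                 # unit source term
             q=whereZero(x[0]),     # Dirichlet mask on the x0 = 0 face
             r=0.0)                 # Dirichlet value u = 0 there
u = pde.getSolution()
```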

  7. A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run.

    PubMed

    Armeanu, Daniel; Andrei, Jean Vasile; Lache, Leonard; Panait, Mirela

    2017-01-01

    The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets.
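
    The Stock-Watson benchmark mentioned above amounts to extracting a few principal components from the standardized panel and regressing future GDP growth on them. The sketch below works on simulated data with the same dimensions (86 variables, 3 factors); it is an illustration of the approach, not the paper's estimation.

```python
# Stock-Watson-style factor forecast: PCA factors -> one-step regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
T, N, r = 80, 86, 3                       # quarters, variables, factors
F = rng.normal(size=(T, r))               # latent common factors
panel = F @ rng.normal(size=(r, N)) + rng.normal(scale=0.5, size=(T, N))
gdp = F @ np.array([0.6, -0.3, 0.2]) + rng.normal(scale=0.2, size=T)

Z = StandardScaler().fit_transform(panel)
components = PCA(n_components=r).fit_transform(Z)

# one-step-ahead forecast: regress gdp[t+1] on components[t]
reg = LinearRegression().fit(components[:-1], gdp[1:])
print("forecast for next quarter:", reg.predict(components[[-1]])[0])
```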

  8. A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run

    PubMed Central

    Armeanu, Daniel; Lache, Leonard; Panait, Mirela

    2017-01-01

    The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets. PMID:28742100

  9. A composites-based hyperelastic constitutive model for soft tissue with application to the human annulus fibrosus

    NASA Astrophysics Data System (ADS)

    Guo, Z. Y.; Peng, X. Q.; Moran, B.

    2006-09-01

    This paper presents a composites-based hyperelastic constitutive model for soft tissue. Well organized soft tissue is treated as a composite in which the matrix material is embedded with a single family of aligned fibers. The fiber is modeled as a generalized neo-Hookean material in which the stiffness depends on fiber stretch. The deformation gradient is decomposed multiplicatively into two parts: a uniaxial deformation along the fiber direction and a subsequent shear deformation. This permits the fiber-matrix interaction caused by inhomogeneous deformation to be estimated by using effective properties from conventional composites theory based on small strain linear elasticity and suitably generalized to the present large deformation case. A transversely isotropic hyperelastic model is proposed to describe the mechanical behavior of fiber-reinforced soft tissue. This model is then applied to the human annulus fibrosus. Because of the layered anatomical structure of the annulus fibrosus, an orthotropic hyperelastic model of the annulus fibrosus is developed. Simulations show that the model reproduces the stress-strain response of the human annulus fibrosus accurately. We also show that the expression for the fiber-matrix shear interaction energy used in a previous phenomenological model is compatible with that derived in the present paper.

  10. High-level PC-based laser system modeling

    NASA Astrophysics Data System (ADS)

    Taylor, Michael S.

    1991-05-01

    Since the inception of the Strategic Defense Initiative (SDI), there have been a multitude of comparison studies attempting to evaluate the effectiveness and relative sizes of complementary, and sometimes competitive, laser weapon systems. It became increasingly apparent that what the systems analyst needed was not only a fast but also a cost-effective way to perform high-level trade studies. In the present investigation, a general procedure is presented for the development of PC-based algorithmic systems models for laser systems. This procedure points out all of the major issues that should be addressed in the design and development of such a model. Issues addressed include defining the problem to be modeled, defining a strategy for development, and, finally, effective use of the model once developed. Being a general procedure, it will allow a systems analyst to develop a model to meet specific needs. To illustrate this method of model development, a description of the Strategic Defense Simulation - Design To (SDS-DT) model developed and used by Science Applications International Corporation (SAIC) is presented. SDS-DT is a menu-driven, fast-executing, PC-based program that can be used either to calculate performance, weight, volume, and cost values for a particular design or, alternatively, to run parametrics on particular system parameters to perhaps optimize a design.

  11. Sample sizes and model comparison metrics for species distribution models

    Treesearch

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  12. A Person Fit Test for IRT Models for Polytomous Items

    ERIC Educational Resources Information Center

    Glas, C. A. W.; Dagohoy, Anna Villa T.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier…

  13. Genetic algorithm dynamics on a rugged landscape

    NASA Astrophysics Data System (ADS)

    Bornholdt, Stefan

    1998-04-01

    The genetic algorithm is an optimization procedure motivated by biological evolution and is successfully applied to optimization problems in different areas. A statistical mechanics model for its dynamics is proposed based on the parent-child fitness correlation of the genetic operators, making it applicable to general fitness landscapes. It is compared to a recent model based on a maximum entropy ansatz. Finally it is applied to modeling the dynamics of a genetic algorithm on the rugged fitness landscape of the NK model.
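
    For concreteness, the sketch below implements the kind of genetic algorithm whose dynamics such models describe: fitness-proportionate selection, one-point crossover, and bitwise mutation, run on a maximally rugged random lookup-table landscape standing in for an NK landscape. All parameters are illustrative.

```python
# Minimal genetic algorithm on a rugged random fitness landscape.
import numpy as np

rng = np.random.default_rng(0)
L, pop_size, p_mut = 16, 50, 1.0 / 16
table = rng.random(2 ** L)                 # maximally rugged lookup table

def fitness(pop):
    idx = pop @ (1 << np.arange(L))        # bit string -> integer index
    return table[idx]

pop = rng.integers(0, 2, (pop_size, L))
for gen in range(100):
    f = fitness(pop)
    probs = f / f.sum()                    # fitness-proportionate selection
    parents = pop[rng.choice(pop_size, 2 * pop_size, p=probs)]
    cut = rng.integers(1, L, pop_size)[:, None]
    mask = np.arange(L) < cut              # one-point crossover
    pop = np.where(mask, parents[::2], parents[1::2])
    pop ^= rng.random((pop_size, L)) < p_mut   # bitwise mutation
print("best fitness after 100 generations:", fitness(pop).max())
```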

  14. Estimating rates of local extinction and colonization in colonial species and an extension to the metapopulation and community levels

    USGS Publications Warehouse

    Barbraud, C.; Nichols, J.D.; Hines, J.E.; Hafner, H.

    2003-01-01

    Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence-absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence-absence data arising from Pollock's robust capture-recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence-absence data on two species of herons (Purple Heron, Ardea purpurea and Grey Heron, Ardea cinerea). Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.

  15. Waveform model for an eccentric binary black hole based on the effective-one-body-numerical-relativity formalism

    NASA Astrophysics Data System (ADS)

    Cao, Zhoujian; Han, Wen-Biao

    2017-08-01

    Binary black hole systems are among the most important sources for gravitational wave detection. They are also good objects for theoretical research for general relativity. A gravitational waveform template is important to data analysis. An effective-one-body-numerical-relativity (EOBNR) model has played an essential role in the LIGO data analysis. For future space-based gravitational wave detection, many binary systems will admit a somewhat orbit eccentricity. At the same time, the eccentric binary is also an interesting topic for theoretical study in general relativity. In this paper, we construct the first eccentric binary waveform model based on an effective-one-body-numerical-relativity framework. Our basic assumption in the model construction is that the involved eccentricity is small. We have compared our eccentric EOBNR model to the circular one used in the LIGO data analysis. We have also tested our eccentric EOBNR model against another recently proposed eccentric binary waveform model; against numerical relativity simulation results; and against perturbation approximation results for extreme mass ratio binary systems. Compared to numerical relativity simulations with an eccentricity as large as about 0.2, the overlap factor for our eccentric EOBNR model is better than 0.98 for all tested cases, including spinless binary and spinning binary, equal mass binary, and unequal mass binary. Hopefully, our eccentric model can be the starting point to develop a faithful template for future space-based gravitational wave detectors.

  16. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871

  17. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  18. Particularized trust, generalized trust, and immigrant self-rated health: cross-national analysis of World Values Survey.

    PubMed

    Kim, H H-S

    2018-05-01

    This research examined the associations between two types of trust, generalized and particularized, and self-rated health among immigrants. Data were drawn from the World Values Survey (WVS6), the latest wave of cross-sectional surveys based on face-to-face interviews. The immigrant subsample analyzed herein contains 3108 foreign-born individuals clustered in 51 countries. Given the hierarchically nested data, two-level logistic regression models were estimated using HLM (Hierarchical Linear Modeling) 7.1. At the individual level, net of socio-economic and demographic factors (age, gender, marital status, education, income, neighborhood security, and subjective well-being), particularized trust was positively related to physical health (odds ratio [OR] = 1.11, P < .001). Generalized trust, however, was not a significant predictor. At the country level, based on alternative models, the aggregate measure of particularized trust was negatively associated with subjective health: the odds of being healthy were on average about 30% lower. The interdisciplinary literature on social determinants of health has largely focused on the salubrious impact of trust and other forms of social capital on physical well-being. Many previous studies based on general, not immigrant, populations also did not differentiate between generalized and particularized types of trust. Results from this study suggest that this conceptual distinction is critical in understanding how and to what extent the two are differentially related to immigrant well-being across multiple levels of analysis. Copyright © 2018 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  19. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on persistent storage of an integrated model. The integrated model refers to the MDDT modeling graphics system, which supports multi-angle, multi-level, and multi-stage description of general-purpose aerospace embedded software. Persistent storage refers to converting the in-memory data model into a storage model and converting the storage model back into an in-memory data model, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to multi-person collaboration with separation of roles, up to and including real-time remote synchronized modeling.

  1. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
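
    The mechanics of the equivalence are easy to demonstrate without the frailty term (which additionally requires a random intercept per cluster): each survival time is "exploded" over the pieces of the baseline hazard, and a Poisson model with a log-exposure offset recovers the hazard-ratio estimate. A minimal sketch on simulated data:

```python
# Piecewise-exponential survival as a Poisson GLM with an offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
x = rng.integers(0, 2, n)
time = rng.exponential(1.0 / np.exp(0.5 * x))      # true log-HR = 0.5
event = (time < 2.0).astype(int)                   # administrative censoring
time = np.minimum(time, 2.0)

cuts = [0.0, 0.5, 1.0, 2.0]                        # pieces of the hazard
rows = []
for i in range(n):
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if time[i] <= lo:
            break
        exposure = min(time[i], hi) - lo           # time at risk in piece
        died = int(event[i] and time[i] <= hi)     # event in this piece?
        rows.append((x[i], f"piece{j}", exposure, died))
df = pd.DataFrame(rows, columns=["x", "piece", "exposure", "died"])

fit = smf.glm("died ~ x + C(piece)", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["exposure"])).fit()
print(fit.params["x"])     # approximates the log-hazard ratio (0.5)
```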

  2. A Model of Instructional Supervision That Meets Today's Needs.

    ERIC Educational Resources Information Center

    Beck, John J.; Seifert, Edward H.

    1983-01-01

    The proposed Instructional Technologist Model is based on a closed loop feedback system allowing for continuous monitoring of teachers by expert instructional technologists. Principals are thereby released for instructional evaluation and general educational management. (MJL)

  3. Comparison of prognostic and diagnostic surface flux modeling approaches over the Nile River Basin

    USDA-ARS?s Scientific Manuscript database

    Regional evapotranspiration (ET) can be estimated using diagnostic remote sensing models, generally based on principles of energy balance, or with spatially distributed prognostic models that simultaneously balance both the energy and water budgets over landscapes using predictive equations for land...

  4. Development of the Integrated Communication Model

    ERIC Educational Resources Information Center

    Ho, Hua-Kuo

    2008-01-01

    Human communication is a critical issue in personal life. It also should be the indispensable core element of general education curriculum in universities and colleges. Based on literature analysis and the author's clinical observation, the importance of human communication, functions of model, and often seen human communication models were…

  5. Estimating wildfire behavior and effects

    Treesearch

    Frank A. Albini

    1976-01-01

    This paper presents a brief survey of the research literature on wildfire behavior and effects and assembles formulae and graphical computation aids based on selected theoretical and empirical models. The uses of mathematical fire behavior models are discussed, and the general capabilities and limitations of currently available models are outlined.

  6. Mathematical modeling of spinning elastic bodies for modal analysis.

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Barbera, F. J.; Baddeley, V.

    1973-01-01

    The problem of modal analysis of an elastic appendage on a rotating base is examined to establish the relative advantages of various mathematical models of elastic structures and to extract general inferences concerning the magnitude and character of the influence of spin on the natural frequencies and mode shapes of rotating structures. In realization of the first objective, it is concluded that except for a small class of very special cases the elastic continuum model is devoid of useful results, while for constant nominal spin rate the distributed-mass finite-element model is quite generally tractable, since in the latter case the governing equations are always linear, constant-coefficient, ordinary differential equations. Although with both of these alternatives the details of the formulation generally obscure the essence of the problem and permit very little engineering insight to be gained without extensive computation, this difficulty is not encountered when dealing with simple concentrated mass models.

  7. Modeling individualized coefficient alpha to measure quality of test score data.

    PubMed

    Liu, Molei; Hu, Ming; Zhou, Xiao-Hua

    2018-05-23

    Individualized coefficient alpha is defined. It is item- and subject-specific and is used to measure the quality of test score data with heterogeneity among the subjects and items. A regression model is developed based on 3 sets of generalized estimating equations. The first set of generalized estimating equations models the expectation of the responses, the second set models the response variance, and the third set is proposed to estimate the individualized coefficient alpha, defined and used to measure the individualized internal consistency of the responses. We also use different techniques to extend our method to handle missing data. Asymptotic properties of the estimators are discussed, based on which inference on the coefficient alpha is derived. The performance of our method is evaluated through a simulation study and real data analysis. The real data application is from a health literacy study in Hunan province of China. Copyright © 2018 John Wiley & Sons, Ltd.
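
    A minimal sketch of the classical (non-individualized) coefficient alpha that the model above generalizes: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The data are simulated for illustration.

```python
# Classical Cronbach's alpha on a simulated k-item test.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 8                                    # subjects, items
ability = rng.normal(size=(n, 1))
items = ability + rng.normal(scale=1.0, size=(n, k))

item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(f"coefficient alpha = {alpha:.3f}")
# The individualized version lets this quantity vary by item and subject
# via regression models for the response mean and variance.
```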

  8. Derivation of Hunt equation for suspension distribution using Shannon entropy theory

    NASA Astrophysics Data System (ADS)

    Kundu, Snehasis

    2017-12-01

    In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also helps to derive the Rouse equation. The entropy-based approach allows model parameters to be estimated from measured sediment concentration data, which shows the advantage of using entropy theory. Finally, model parameters in the entropy-based model are also expressed as functions of the Rouse number, establishing a link between the parameters of the deterministic and probabilistic approaches.
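
    For reference, a standard statement of the Rouse profile that the general CDF is said to recover is given below; the notation (reference level a, flow depth h, Rouse number Z) follows common usage rather than the paper's exact symbols.

```latex
% Rouse suspended-sediment concentration profile:
\[
  \frac{c}{c_a} \;=\;
  \left[ \frac{h-y}{y}\,\frac{a}{h-a} \right]^{Z},
  \qquad
  Z = \frac{w_s}{\beta \kappa u_*},
\]
% where c_a is the concentration at the reference level y = a, w_s the
% particle settling velocity, kappa the von Karman constant, and u_* the
% shear velocity.
```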

  9. Decentralised control of continuous Petri nets

    NASA Astrophysics Data System (ADS)

    Wang, Liewei; Wang, Xu

    2017-05-01

    This paper focuses on decentralised control of systems modelled by continuous Petri nets, in which a target marking control problem is discussed. In some previous works, an efficient ON/OFF strategy-based minimum-time controller was developed. Nevertheless, the convergence is only proved for subclasses like Choice-Free nets. For a general net, the pre-conditions of applying the ON/OFF strategy are not given; therefore, the application scope of the method is unclear. In this work, we provide two sufficient conditions of applying the ON/OFF strategy-based controller to general nets. Furthermore, an extended algorithm for general nets is proposed, in which control laws are computed based on some limited information, without knowing the detailed structure of subsystems.
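
    The flavor of an ON/OFF strategy can be illustrated with a tiny timed continuous Petri net in which each transition either fires at its maximal admissible speed or is switched off, and the marking evolves as m' = C f(m). The net, rates, target marking, and the simple switching rule below are illustrative, not the controller or convergence conditions from the paper.

```python
# ON/OFF-style target-marking control of a 2-place continuous Petri net.
import numpy as np

Pre = np.array([[1, 0],        # place-to-transition input arcs
                [0, 1]])
Post = np.array([[0, 1],       # transition-to-place output arcs
                 [1, 0]])
C = Post - Pre                 # token-flow (incidence) matrix
lam = np.array([1.0, 0.5])     # maximal firing rates
m = np.array([3.0, 1.0])       # initial marking
m_target = np.array([1.5, 2.5])

dt = 0.01
for _ in range(2000):
    # enabling degree: min over input places of m[p] / Pre[p, t]
    enab = np.min(np.where(Pre > 0,
                           m[:, None] / np.maximum(Pre, 1e-12),
                           np.inf), axis=0)
    on = (C.T @ np.sign(m_target - m)) > 0   # ON only if firing moves the
    f = lam * enab * on                      # marking toward the target
    m = m + dt * (C @ f)
print(np.round(m, 3))                        # approaches m_target
```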

  10. Application of General Regression Neural Network to the Prediction of LOD Change

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as least-squares and autoregression models. However, the LOD change comprises complicated non-linear factors, and the prediction performance of linear models is often unsatisfactory. Thus, a non-linear neural network, the general regression neural network (GRNN), is applied to predict the LOD change, and the results are compared with predictions obtained from the back-propagation (BP) neural network and other models. The comparison shows that applying the GRNN to the prediction of the LOD change is highly effective and feasible.
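
    A GRNN is essentially Gaussian-kernel (Nadaraya-Watson) regression: each training pattern contributes a radial basis weight, and the prediction is the normalized weighted average of the training targets. A minimal sketch on a simulated LOD-like series; the smoothing parameter sigma and the embedding are illustrative.

```python
# GRNN as Gaussian-kernel regression on a lag-embedded series.
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma):
    # pairwise squared distances between new and training inputs
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian pattern units
    return (w @ y_train) / w.sum(axis=1)         # normalized weighted sum

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 400)
lod = 1e-3 * np.sin(2 * np.pi * t) + 1e-4 * rng.normal(size=t.size)

# embed the series: predict lod[i] from the previous 5 values
k = 5
X = np.stack([lod[i:i + k] for i in range(len(lod) - k)])
y = lod[k:]
pred = grnn_predict(X[:-50], y[:-50], X[-50:], sigma=5e-4)
print("RMSE:", np.sqrt(np.mean((pred - y[-50:]) ** 2)))
```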

  11. A local-circulation model for Darrieus vertical-axis wind turbines

    NASA Astrophysics Data System (ADS)

    Masse, B.

    1986-04-01

    A new computational model for the aerodynamics of the vertical-axis wind turbine is presented. Based on the local-circulation method generalized for curved blades, combined with a wake model for the vertical-axis wind turbine, it differs markedly from current models based on variations in the streamtube momentum and vortex models using the lifting-line theory. A computer code has been developed to calculate the loads and performance of the Darrieus vertical-axis wind turbine. The results show good agreement with experimental data and compare well with other methods.

  12. Generalized plasma skimming model for cells and drug carriers in the microvasculature.

    PubMed

    Lee, Tae-Rin; Yoo, Sung Sic; Yang, Jiho

    2017-04-01

    In microvascular transport, where both blood and drug carriers are involved, plasma skimming plays a key role in changing the hematocrit level and drug carrier concentration in capillary beds after continuous vessel bifurcation in the microvasculature. While there have been numerous studies on modeling the plasma skimming of blood, previous works lacked consideration of its interaction with drug carriers. In this paper, a generalized plasma skimming model is suggested to predict the redistribution of both cells and drug carriers at each bifurcation. In order to examine its applicability, this new model was applied to a single bifurcation system to predict the redistribution of red blood cells and drug carriers. Furthermore, the model was tested at the microvascular network level under different plasma skimming conditions to predict the concentration of drug carriers. Based on these results, the applicability of this generalized plasma skimming model is fully discussed, and future work along with the model's limitations is summarized.

  13. Ground state destabilization from a positioned general base in the ketosteroid isomerase active site.

    PubMed

    Ruben, Eliza A; Schwans, Jason P; Sonnett, Matthew; Natarajan, Aditya; Gonzalez, Ana; Tsai, Yingssu; Herschlag, Daniel

    2013-02-12

    We compared the binding affinities of ground state analogues for bacterial ketosteroid isomerase (KSI) with a wild-type anionic Asp general base and with uncharged Asn and Ala in the general base position to provide a measure of potential ground state destabilization that could arise from the close juxtaposition of the anionic Asp and hydrophobic steroid in the reaction's Michaelis complex. The analogue binding affinity increased ~1 order of magnitude for the Asp38Asn mutation and ~2 orders of magnitude for the Asp38Ala mutation, relative to the affinity with Asp38, for KSI from two sources. The increased level of binding suggests that the abutment of a charged general base and a hydrophobic steroid is modestly destabilizing, relative to a standard state in water, and that this destabilization is relieved in the transition state and intermediate in which the charge on the general base has been neutralized because of proton abstraction. Stronger binding also arose from mutation of Pro39, the residue adjacent to the Asp general base, consistent with an ability of the Asp general base to now reorient to avoid the destabilizing interaction. Consistent with this model, the Pro mutants reduced or eliminated the increased level of binding upon replacement of Asp38 with Asn or Ala. These results, supported by additional structural observations, suggest that ground state destabilization from the negatively charged Asp38 general base provides a modest contribution to KSI catalysis. They also provide a clear illustration of the well-recognized concept that enzymes evolve for catalytic function and not, in general, to maximize ground state binding. This ground state destabilization mechanism may be common to the many enzymes with anionic side chains that deprotonate carbon acids.

  14. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, applicable to Jacobi- and conjugate gradient-based iterative methods that use iteration on data, is presented. In the new technique, the multiplication of a vector by a matrix is reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than with the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. Our findings support the use of preconditioned conjugate gradient-based methods for solving large breeding value problems.
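
    For context, the core iteration is standard preconditioned conjugate gradient; below is a minimal dense sketch with a Jacobi (diagonal) preconditioner. In an iteration-on-data implementation, the product A @ p would instead be accumulated from the data records in the reordered steps the abstract describes (the toy system here is invented).

        import numpy as np

        def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=1000):
            """Preconditioned conjugate gradient for SPD A x = b with a
            diagonal (Jacobi) preconditioner."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv_diag * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        rng = np.random.default_rng(1)
        G = rng.standard_normal((50, 20))
        A = G.T @ G + 20 * np.eye(20)   # toy SPD mixed-model-like system
        b = rng.standard_normal(20)
        x = pcg(A, b, 1.0 / np.diag(A))
        print(np.linalg.norm(A @ x - b))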

  15. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
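
    One generic flavor of graph-based fault diagnosis can be sketched as follows (an illustrative sketch, not the authors' method): represent component dependencies as a directed graph and take as root-cause candidates those components from which every observed failure symptom is reachable.

        from collections import defaultdict

        # Hypothetical dependency graph for a small telescope data system
        edges = [("power", "controller"), ("controller", "motor"),
                 ("controller", "encoder"), ("motor", "telescope_drive")]
        children = defaultdict(list)
        for u, v in edges:
            children[u].append(v)

        def reachable(node):
            """All components affected by a fault at `node` (including itself)."""
            seen, stack = {node}, [node]
            while stack:
                for c in children[stack.pop()]:
                    if c not in seen:
                        seen.add(c)
                        stack.append(c)
            return seen

        symptoms = {"motor", "encoder"}   # observed misbehaving components
        nodes = {u for u, v in edges} | {v for u, v in edges}
        print(sorted(n for n in nodes if symptoms <= reachable(n)))
        # -> ['controller', 'power']: candidate root causes to inspect first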

  16. Generalized Beer-Lambert model for near-infrared light propagation in thick biological tissues

    NASA Astrophysics Data System (ADS)

    Bhatt, Manish; Ayyalasomayajula, Kalyan R.; Yalavarthy, Phaneendra K.

    2016-07-01

    The attenuation of near-infrared (NIR) light intensity as it propagates in a turbid medium like biological tissue is described by the modified Beer-Lambert law (MBLL). The MBLL is generally used to quantify changes in tissue chromophore concentrations in NIR spectroscopic data analysis. Even though the MBLL is effective for qualitative comparison, it suffers from limited applicability across tissue types and tissue dimensions. In this work, we introduce Lambert-W function-based modeling of light propagation in biological tissues, a generalized version of the Beer-Lambert model. The proposed model parametrizes tissue properties through two attenuation coefficients, μ0 and η. We validated the model against Monte Carlo simulation, the gold standard for modeling NIR light propagation in biological tissue, using numerous human and animal tissues, including an inhomogeneous adult human head model. The proposed model, which has a closed (analytical) form, is the first of its kind to provide accurate modeling of NIR light propagation in biological tissues.
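
    The paper's closed form is not reproduced here, but as an indicative sketch, a Lambert-W attenuation law arises if, for example, the local attenuation saturates with intensity, dI/dz = -mu0 * I / (1 + eta * I); this integrates to I(z) = W(eta * I0 * exp(eta * I0 - mu0 * z)) / eta and reduces to Beer-Lambert decay as eta tends to zero. A sketch under that assumed form:

        import numpy as np
        from scipy.special import lambertw

        def intensity(z, I0, mu0, eta):
            """Lambert-W attenuation law for the assumed saturating model
            dI/dz = -mu0 * I / (1 + eta * I)."""
            arg = eta * I0 * np.exp(eta * I0 - mu0 * z)
            return np.real(lambertw(arg)) / eta   # principal branch is real here

        z = np.linspace(0.0, 5.0, 6)
        print(intensity(z, I0=1.0, mu0=1.0, eta=0.5))
        print(np.exp(-1.0 * z))   # classical Beer-Lambert decay for comparison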

  17. Generalized Models for Rock Joint Surface Shapes

    PubMed Central

    Du, Shigui; Hu, Yunjin; Hu, Xiaofei

    2014-01-01

    Generalized models of joint surface shapes are the foundation for mechanistic studies of the mechanical effects of rock joint surface shapes. Based on extensive field investigations of rock joint surfaces, generalized models for three levels of shape, namely the macroscopic outline, the surface undulating shape, and the microcosmic roughness, were established through statistical analyses of 20,078 rock joint surface profiles. The relative amplitude of the profile curves was used as the borderline for dividing the different levels of shape. The results show that the macroscopic outline has three basic forms (planar, arc-shaped, and stepped); the surface undulating shape has three basic forms (planar, undulating, and stepped); and the microcosmic roughness has two basic forms (smooth and rough). PMID:25152901

  18. Generalized viscothermoelasticity theory of dual-phase-lagging model for damping analysis in circular micro-plate resonators

    NASA Astrophysics Data System (ADS)

    Grover, D.; Seth, R. K.

    2018-05-01

    Analysis and numerical results are presented for the thermoelastic dissipation of a homogeneous, isotropic, thermally conducting, Kelvin-Voigt-type circular micro-plate based on Kirchhoff's Love plate theory, utilizing the generalized viscothermoelasticity theory of the dual-phase-lagging model. Analytical expressions for the thermoelastic damping of vibration and the frequency shift are obtained for the generalized dual-phase-lagging model and coupled viscothermoelastic plates. The scaled thermoelastic damping is illustrated for a circular plate and an axisymmetric circular plate at fixed aspect ratio under clamped and simply supported boundary conditions. It is observed that the damping of vibrations depends significantly on the time delay and mechanical relaxation times, in addition to the thermo-mechanical coupling, for a circular plate under resonance conditions, as well as on the plate dimensions.

  19. Framework for event-based semidistributed modeling that unifies the SCS-CN method, VIC, PDM, and TOPMODEL

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2016-09-01

    Hydrologists and engineers may choose from a range of semidistributed rainfall-runoff models such as VIC, PDM, and TOPMODEL, all of which predict runoff from a distribution of watershed properties. However, these models are not easily compared to event-based data and are missing ready-to-use analytical expressions that are analogous to the SCS-CN method. The SCS-CN method is an event-based model that describes the runoff response with a rainfall-runoff curve that is a function of the cumulative storm rainfall and antecedent wetness condition. Here we develop an event-based probabilistic storage framework and distill semidistributed models into analytical, event-based expressions for describing the rainfall-runoff response. The event-based versions, called VICx, PDMx, and TOPMODELx, are also extended with a spatial description of the runoff concepts of "prethreshold" and "threshold-excess" runoff, which occur, respectively, before and after infiltration exceeds a storage capacity threshold. For total storm rainfall and antecedent wetness conditions, the resulting ready-to-use analytical expressions define the source areas (fraction of the watershed) that produce runoff by each mechanism. They also define the probability density function (PDF) representing the spatial variability of runoff depths that are cumulative values for the storm duration, and the average unit area runoff, which describes the so-called runoff curve. These new event-based semidistributed models and the traditional SCS-CN method are unified by the same general expression for the runoff curve. Since the general runoff curve may incorporate different model distributions, it may ease the way for relating such distributions to land use, climate, topography, ecology, geology, and other characteristics.
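
    For reference, the SCS-CN runoff curve that the framework recovers as a special case is Q = (P - Ia)^2 / (P - Ia + S) for P > Ia and Q = 0 otherwise; a minimal sketch using the common Ia = 0.2 S convention, with depths in millimetres:

        def scs_cn_runoff(P, CN, ia_ratio=0.2):
            """Event runoff depth Q (mm) from storm rainfall P (mm) via the
            SCS-CN method; S is the potential maximum retention for curve
            number CN and Ia the initial abstraction."""
            S = 25400.0 / CN - 254.0
            Ia = ia_ratio * S
            return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

        print(scs_cn_runoff(P=80.0, CN=75))   # runoff (mm) for an 80 mm storm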

  20. Uncertainty quantification for environmental models

    USGS Publications Warehouse

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods. Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear models.

  1. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  2. Pros, Cons, and Alternatives to Weight Based Cost Estimating

    NASA Technical Reports Server (NTRS)

    Joyner, Claude R.; Lauriem, Jonathan R.; Levack, Daniel H.; Zapata, Edgar

    2011-01-01

    Many cost estimating tools use weight as a major parameter in projecting the cost. This is often combined with modifying factors such as complexity, technical maturity of design, environment of operation, etc. to increase the fidelity of the estimate. For a set of conceptual designs, all meeting the same requirements, increased weight can be a major driver in increased cost. However, once a design is fixed, increased weight generally decreases cost, while decreased weight generally increases cost - and the relationship is not linear. Alternative approaches to estimating cost without using weight (except perhaps for materials costs) have been attempted to try to produce a tool usable throughout the design process - from concept studies through development. This paper will address the pros and cons of using weight-based models for cost estimating, using liquid rocket engines as the example. It will then examine approaches that minimize the impact of weight-based cost estimating. The Rocket Engine Cost Model (RECM) is an attribute-based model developed internally by Pratt & Whitney Rocketdyne for NASA. RECM will be presented primarily to show a successful method to use design and programmatic parameters instead of weight to estimate both design and development costs and production costs. An operations model developed by KSC, the Launch and Landing Effects Ground Operations model (LLEGO), will also be discussed.
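
    Weight-based cost estimating relationships of the kind being critiqued typically take a power-law form scaled by modifying factors; a schematic sketch (all coefficient values are placeholders, not RECM or LLEGO parameters):

        def weight_based_cer(weight_kg, a=1.2e4, b=0.65, complexity=1.0,
                             maturity=1.0, environment=1.0):
            """Schematic CER: cost = a * W^b, scaled by modifying factors.
            Every coefficient here is a placeholder for illustration."""
            return a * weight_kg ** b * complexity * maturity * environment

        print(weight_based_cer(2500.0, complexity=1.3, maturity=0.9))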

  3. Utilisation of home-based physician, nurse and personal support worker services within a palliative care programme in Ontario, Canada: trends over 2005-2015.

    PubMed

    Sun, Zhuolu; Laporte, Audrey; Guerriere, Denise N; Coyte, Peter C

    2017-05-01

    With health system restructuring in Canada and a general preference by care recipients and their families to receive palliative care at home, attention to home-based palliative care continues to increase. A multidisciplinary team of health professionals is the most common delivery model for home-based palliative care in Canada. However, little is known about the changing temporal trends in the propensity and intensity of home-based palliative care. The purpose of this study was to assess the propensity to use home-based palliative care services, and once used, the intensity of that use for three main service categories: physician visits, nurse visits and care by personal support workers (PSWs) over the last decade. Three prospective cohort data sets were used to track changes in service use over the period 2005 to 2015. Service use for each category was assessed using a two-part model, and a Heckit regression was performed to assess the presence of selectivity bias. Service propensity was modelled using multivariate logistic regression analysis and service intensity was modelled using log-transformed ordinary least squares regression analysis. Both the propensity and intensity to use home-based physician visits and PSWs increased over the last decade, while service propensity and the intensity of nurse visits decreased. Meanwhile, there was a general tendency for service propensity and intensity to increase as the end of life approached. These findings demonstrate temporal changes towards increased use of home-based palliative care, and a shift to substitute care away from nursing to less expensive forms of care, specifically PSWs. These findings may provide a general idea of the types of services that are used more intensely and require more resources from multidisciplinary teams, as increased use of home-based palliative care has placed dramatic pressures on the budgets of local home and community care organisations. © 2016 John Wiley & Sons Ltd.
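
    The two-part structure described here can be sketched as a logistic model for the propensity of any service use plus a log-scale least-squares model for intensity among users; a minimal sketch on simulated data (variable names and coefficients are hypothetical):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 500
        X = sm.add_constant(pd.DataFrame({"months_to_death": rng.integers(1, 12, n)}))

        # Simulated outcome: nurse-visit hours, zero for non-users
        p_use = 1.0 / (1.0 + np.exp(-(0.5 - 0.1 * X["months_to_death"])))
        use = rng.random(n) < p_use
        hours = np.where(use, np.exp(1.0 + 0.05 * X["months_to_death"]
                                     + 0.3 * rng.standard_normal(n)), 0.0)

        part1 = sm.Logit(use.astype(float), X).fit(disp=0)    # propensity
        users = hours > 0
        part2 = sm.OLS(np.log(hours[users]), X[users]).fit()  # log intensity
        print(part1.params, part2.params, sep="\n")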

  4. CLIMATIC EFFECTS ON TUNDRA CARBON STORAGE INFERRED FROM EXPERIMENTAL DATA AND A MODEL

    EPA Science Inventory

    We used a process-based model of ecosystem carbon (C) and nitrogen (N)dynamics, MBL-GEM (Marine Biological Laboratory General Ecosystem Model), to integrated and analyze the results of several experiments that examined the response of arctic tussock tundra to manipulations of CO2...

  5. Design 2000: Theory-Based Design Models of the Future.

    ERIC Educational Resources Information Center

    Richey, Rita C.

    The influence of theory on instructional-design models of the future is explored on the basis of the theoretical developments of today. Anticipated model changes are expected to result from disparate theoretical thinking in areas such as chaos theory, constructivism, situated learning, cognitive-learning theory, and general systems theory.…

  6. A Systems Approach to the Estimation of Ecosystem and Human Health Stressors in Air, Land and Water

    EPA Science Inventory

    A model linkage paradigm, based on the nitrogen cascade, is introduced. This general paradigm is then adapted to specific multi-media nitrogen issues and specific models to be linked. An example linked modeling system addressing potential nitrogen responses to biofuel-driven co...

  7. A MODE-OF-ACTION-BASED QSAR APPROACH TO IMPROVE UNDERSTANDING OF DEVELOPMENTAL TOXICITY

    EPA Science Inventory

    QSAR models of developmental toxicity (devtox) have met with limited regulatory acceptance due to the use of ill-defined endpoints, lack of biological interpretability, and poor model performance. More generally, the lack of biological inference of many QSAR models is often due t...

  8. A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods

    ERIC Educational Resources Information Center

    Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan

    2008-01-01

    This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…

  9. Sexual, marital, and general life functioning in couples coping with colorectal cancer: a dyadic study across time.

    PubMed

    Traa, Marjan J; Braeken, Johan; De Vries, Jolanda; Roukema, Jan A; Slooter, Gerrit D; Crolla, Rogier M P H; Borremans, Monique P M; Den Oudsten, Brenda L

    2015-09-01

    This study evaluated the following: (a) levels of sexual, marital, and general life functioning for both patients and partners; (b) interdependence between both members of the couple; and (c) longitudinal change in sexual, marital, and general life functioning and longitudinal stress-spillover effects in these three domains from a dyadic perspective. Couples (n = 102) completed the Maudsley Marital Questionnaire preoperatively and 3 and 6 months postoperatively. Mean scores were compared with norm scores. A multivariate general linear model and a multivariate latent difference score structural equation model (LDS-SEM), which took into account actor and partner effects, were evaluated. Patients and partners reported lower sexual, mostly similar marital, and higher general life functioning compared with norm scores. Moderate to high within-dyad associations were found. The LDS-SEM model mostly showed actor effects. Yet the longitudinal change in the partners' sexual functioning was determined not only by their own preoperative sexual functioning but also by that of the patient. Preoperative sexual functioning did not spill over to the other two domains for patients and partners, whereas the patients' preoperative general life functioning influenced postoperative change in marital and sexual functioning. Health care professionals should examine potential sexual problems but have to be aware that these problems may not spill over to the marital and general life domains. In contrast, low functioning in the general life domain may spill over to the marital and sexual domains. The interdependence between patients and partners implies that a couple-based perspective on coping with cancer (e.g., couple-based interventions/therapies) is needed. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Regional climate model downscaling may improve the prediction of alien plant species distributions

    NASA Astrophysics Data System (ADS)

    Liu, Shuyan; Liang, Xin-Zhong; Gao, Wei; Stohlgren, Thomas J.

    2014-12-01

    Distributions of invasive species are commonly predicted with species distribution models that build upon the statistical relationships between observed species presence data and climate data. We used field observations, climate station data, and Maximum Entropy species distribution models for 13 invasive plant species in the United States, and then compared the models with inputs from a General Circulation Model (hereafter GCM-based models) and a downscaled Regional Climate Model (hereafter RCM-based models). We also compared species distributions based on either GCM-based or RCM-based models for the present (1990-1999) to the future (2046-2055). RCM-based species distribution models replicated observed distributions remarkably better than GCM-based models for all invasive species under the current climate. This was shown for the presence locations of the species, and by using four common statistical metrics to compare modeled distributions. For two widespread invasive taxa (Bromus tectorum, or cheatgrass, and Tamarix spp., or tamarisk), GCM-based models failed miserably to reproduce observed species distributions. In contrast, RCM-based species distribution models closely matched observations. Future species distributions may be significantly affected by using GCM-based inputs. Because invasive plant species often show high resilience and low rates of local extinction, RCM-based species distribution models may perform better than GCM-based species distribution models for planning containment programs for invasive species.

  11. Types and Characteristics of Data for Geomagnetic Field Modeling

    NASA Technical Reports Server (NTRS)

    Langel, R. A. (Editor); Baldwin, R. T. (Editor)

    1992-01-01

    Given here is material submitted at a symposium convened on Friday, August 23, 1991, at the General Assembly of the International Union of Geodesy and Geophysics (IUGG) held in Vienna, Austria. Models of the geomagnetic field are only as good as the data upon which they are based, and depend upon correct understanding of data characteristics such as accuracy, correlations, systematic errors, and general statistical properties. This symposium was intended to expose and illuminate these data characteristics.

  12. Automated real time constant-specificity surveillance for disease outbreaks.

    PubMed

    Wieland, Shannon C; Brownstein, John S; Berger, Bonnie; Mandl, Kenneth D

    2007-06-13

    For real time surveillance, detection of abnormal disease patterns is based on a difference between patterns observed, and those predicted by models of historical data. The usefulness of outbreak detection strategies depends on their specificity; the false alarm rate affects the interpretation of alarms. We evaluate the specificity of five traditional models: autoregressive, Serfling, trimmed seasonal, wavelet-based, and generalized linear. We apply each to 12 years of emergency department visits for respiratory infection syndromes at a pediatric hospital, finding that the specificity of the five models was almost always a non-constant function of the day of the week, month, and year of the study (p < 0.05). We develop an outbreak detection method, called the expectation-variance model, based on generalized additive modeling to achieve a constant specificity by accounting for not only the expected number of visits, but also the variance of the number of visits. The expectation-variance model achieves constant specificity on all three time scales, as well as earlier detection and improved sensitivity compared to traditional methods in most circumstances. Modeling the variance of visit patterns enables real-time detection with known, constant specificity at all times. With constant specificity, public health practitioners can better interpret the alarms and better evaluate the cost-effectiveness of surveillance systems.
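
    The expectation-variance idea can be sketched as thresholding a standardized residual: model both the expected visit count and its variance for each day, then alarm at the same upper quantile everywhere so that the false-alarm rate stays constant. In this sketch the mean and variance arrays are placeholders for the generalized-additive-model fits:

        import numpy as np
        from scipy.stats import norm

        def alarm(observed, expected, variance, specificity=0.99):
            """Flag days whose observed count exceeds the model's upper
            quantile; with correct mean AND variance models the false-alarm
            rate is (1 - specificity) on every day."""
            z = norm.ppf(specificity)
            return observed > expected + z * np.sqrt(variance)

        obs = np.array([42, 55, 90, 61])   # observed daily visit counts
        mu = np.array([40, 50, 60, 58])    # modeled expected counts
        var = np.array([36, 49, 64, 60])   # modeled count variances
        print(alarm(obs, mu, var))         # only the 90-visit day alarms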

  13. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    PubMed

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  14. The impact of covariance misspecification in group-based trajectory models for longitudinal data with non-stationary covariance structure.

    PubMed

    Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C

    2017-08-01

    One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.

  15. A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein

    2018-02-01

    The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes mathematically based on the Kac stochastic model has been put forward, including the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT), and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in these schemes is N(l)(N(l) - 1)/2 in BT, 1 in BB, and (N(l) - 1) in SBT or ISBT, where N(l) is the instantaneous number of particles in the l-th cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) in which the number of selected pairs is any desired value smaller than (N(l) - 1), i.e., Nsel < (N(l) - 1), while keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, both of which recover the BB and SBT limits when Nsel is set to 1 and (N(l) - 1), respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of BT-based collision models compared to the standard no-time-counter (NTC) and nearest-neighbor (NN) collision models.
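
    Schematically, Bernoulli-trial-type schemes examine only a subset of candidate pairs in a cell and rescale the acceptance probability so that the expected collision count is preserved; the sketch below illustrates that bookkeeping with a placeholder pair-collision probability and is not the paper's GBT derivation.

        import math
        import random

        def collide(v, i, j):
            """Hard-sphere DSMC collision: resample the relative-velocity
            direction isotropically, conserving momentum and energy."""
            g = [v[i][k] - v[j][k] for k in range(3)]
            gmag = math.sqrt(sum(c * c for c in g))
            cth = 2.0 * random.random() - 1.0
            sth = math.sqrt(1.0 - cth * cth)
            phi = 2.0 * math.pi * random.random()
            gn = [gmag * sth * math.cos(phi), gmag * sth * math.sin(phi), gmag * cth]
            cm = [(v[i][k] + v[j][k]) / 2.0 for k in range(3)]
            v[i] = [cm[k] + 0.5 * gn[k] for k in range(3)]
            v[j] = [cm[k] - 0.5 * gn[k] for k in range(3)]

        def bt_family_step(v, n_sel, p_pair):
            """Examine n_sel random pairs out of N(N-1)/2, rescaling the
            acceptance probability to keep the collision frequency fixed."""
            N = len(v)
            scale = N * (N - 1) / 2.0 / n_sel
            for _ in range(n_sel):
                i, j = random.sample(range(N), 2)
                if random.random() < min(1.0, scale * p_pair(v, i, j)):
                    collide(v, i, j)

        random.seed(0)
        vel = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(20)]
        bt_family_step(vel, n_sel=10, p_pair=lambda v, i, j: 1e-3)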

  16. A generalized interval fuzzy mixed integer programming model for a multimodal transportation problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Tian, Wenli; Cao, Chengxuan

    2017-03-01

    A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.

  17. A Large Scale, High Resolution Agent-Based Insurgency Model

    DTIC Science & Technology

    2013-09-30

    Compute Unified Device Architecture (CUDA) is NVIDIA Corporation's software development model for General Purpose Programming on Graphics Processing Units (GPGPU); see the NVIDIA CUDA Programming Guide 2.0.

  18. DNA Microarray-based Ecotoxicological Biomarker Discovery in a Small Fish Model Species

    EPA Science Inventory

    This paper addresses several issues critical to use of zebrafish oligonucleotide microarrays for computational toxicology research on endocrine disrupting chemicals using small fish models, and more generally, the use of microarrays in aquatic toxicology.

  19. Interplay between inhibited transport and reaction in nanoporous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackerman, David Michael

    2013-01-01

    This work presents a detailed formulation of the reaction and diffusion dynamics of molecules in confined pores such as mesoporous silica and zeolites. A general reaction-diffusion model and discrete Monte Carlo simulations are presented, covering both transient and steady-state behavior. The failure of previous mean-field models for these systems is explained and discussed. A coarse-grained, generalized hydrodynamic model is developed that accurately captures the interplay between reaction and restricted transport in these systems. This method incorporates the non-uniform chemical diffusion behavior present in finite pores with multi-component diffusion. Two methods of calculating these diffusion values are developed: a random-walk-based approach and a driven diffusion model based on an extension of Fick's law. The effects of reaction, diffusion, pore length, and catalytic site distribution are investigated. In addition to strictly single-file motion, quasi-single-file diffusion is incorporated into the model to match a range of experimental systems. The connection between these experimental systems and the model parameters is made through Langevin dynamics modeling of particles in confined pores.

  20. Detecting and modelling delayed density-dependence in abundance time series of a small mammal (Didelphis aurita)

    NASA Astrophysics Data System (ADS)

    Brigatti, E.; Vieira, M. V.; Kajin, M.; Almeida, P. J. A. L.; de Menezes, M. A.; Cerqueira, R.

    2016-02-01

    We study the population size time series of a Neotropical small mammal with the intent of detecting and modelling population regulation processes generated by density-dependent factors and their possible delayed effects. The application of analysis tools based on principles of statistical generality is nowadays common practice for describing these phenomena, but such tools are generally better at producing clear diagnoses than at yielding useful models. For this reason, in our approach we detect the principal temporal structures on the basis of different correlation measures, and from these results we build an ad hoc minimalist autoregressive model that incorporates the main drivers of the dynamics. Surprisingly, our model reproduces the temporal patterns of the empirical series very well and, for the first time, clearly outlines the importance of the time to sexual maturity as a central temporal scale for the dynamics of this species. In fact, an important advantage of this analysis scheme is that all the model parameters are directly biologically interpretable and potentially measurable, allowing a consistency check between model outputs and independent measurements.
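
    A minimalist autoregressive model of the kind described can be sketched as a Gompertz-type regression of log abundance on a direct lag and a delayed lag, the latter standing in for the time to sexual maturity (the lag, coefficients, and data below are simulated, not the paper's estimates):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        d, n = 4, 300     # delayed lag d ~ age at sexual maturity (assumed)
        x = np.zeros(n)   # log abundance
        for t in range(d, n):
            x[t] = 0.8 * x[t - 1] - 0.3 * x[t - d] + 0.2 * rng.standard_normal()

        # Fit x_t = a + b1 * x_{t-1} + bd * x_{t-d} + noise
        X = sm.add_constant(np.column_stack([x[d - 1:n - 1], x[:n - d]]))
        print(sm.OLS(x[d:], X).fit().params)   # recovers roughly (0, 0.8, -0.3)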

  1. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, a property often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify the existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and in Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  2. Using avian radar to examine relationships among avian activity, bird strikes, and meteorological factors

    USGS Publications Warehouse

    Coates, Peter S.; Casazza, Michael L.; Halstead, Brian J.; Fleskes, Joseph P.; Laughlin, James A.

    2011-01-01

    Radar systems designed to detect avian activity at airfields are useful in understanding factors that influence the risk of bird and aircraft collisions (bird strikes). We used an avian radar system to measure avian activity at Beale Air Force Base, California, USA, during 2008 and 2009, and conducted a two-part analysis to examine relationships among avian activity, bird strikes, and meteorological and time-dependent factors. First, using a permutation resampling technique, we found that avian activity around the airfield was greater at times when bird strikes occurred than on average. Second, we developed generalized linear mixed models of an avian activity index (AAI). Variation in AAI was first explained by seasons based on the average migration dates of birds at the study area. We then modeled AAI within those seasons to further explain variation by meteorological factors and daily light levels within a 24-hour period. In general, avian activity increased with decreased temperature, wind, visibility, and precipitation, and with increased humidity and cloud cover. These effects differed by season. For example, during the spring bird migration period, most avian activity occurred before sunrise at twilight hours on clear days with low winds, whereas during fall migration, substantial activity occurred after sunrise, and birds generally were more active at lower temperatures. We report parameter estimates (i.e., constants and coefficients) averaged across models, and a relatively simple calculation that safety officers and wildlife managers can use to predict AAI and the relative risk of bird strike from time, date, and meteorological values. We validated model predictability and assessed model fit. These analyses will be useful for general inference of avian activity and for risk assessment efforts. Further investigation and ongoing data collection will refine these inference models and improve our understanding of the factors that influence avian activity, which is necessary to inform management decisions aimed at reducing the risk of bird strikes.

  3. Musite, a tool for global prediction of general and kinase-specific phosphorylation sites.

    PubMed

    Gao, Jianjiong; Thelen, Jay J; Dunker, A Keith; Xu, Dong

    2010-12-01

    Reversible protein phosphorylation is one of the most pervasive post-translational modifications, regulating diverse cellular processes in various organisms. High throughput experimental studies using mass spectrometry have identified many phosphorylation sites, primarily from eukaryotes. However, the vast majority of phosphorylation sites remain undiscovered, even in well studied systems. Because mass spectrometry-based experimental approaches for identifying phosphorylation events are costly, time-consuming, and biased toward abundant proteins and proteotypic peptides, in silico prediction of phosphorylation sites is potentially a useful alternative strategy for whole proteome annotation. Because of various limitations, current phosphorylation site prediction tools were not well designed for comprehensive assessment of proteomes. Here, we present a novel software tool, Musite, specifically designed for large scale predictions of both general and kinase-specific phosphorylation sites. We collected phosphoproteomics data in multiple organisms from several reliable sources and used them to train prediction models by a comprehensive machine-learning approach that integrates local sequence similarities to known phosphorylation sites, protein disorder scores, and amino acid frequencies. Application of Musite on several proteomes yielded tens of thousands of phosphorylation site predictions at a high stringency level. Cross-validation tests show that Musite achieves some improvement over existing tools in predicting general phosphorylation sites, and it is at least comparable with those for predicting kinase-specific phosphorylation sites. In Musite V1.0, we have trained general prediction models for six organisms and kinase-specific prediction models for 13 kinases or kinase families. Although the current pretrained models were not correlated with any particular cellular conditions, Musite provides a unique functionality for training customized prediction models (including condition-specific models) from users' own data. In addition, with its easily extensible open source application programming interface, Musite is aimed at being an open platform for community-based development of machine learning-based phosphorylation site prediction applications. Musite is available at http://musite.sourceforge.net/.

  4. Assessment of corneal properties based on statistical modeling of OCT speckle.

    PubMed

    Jesus, Danilo A; Iskander, D Robert

    2017-01-01

    A new approach to assessing the properties of the corneal micro-structure in vivo, based on statistical modeling of the speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution was shown to be the best model, in terms of Akaike's Information Criterion, for fitting OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for both short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be used to model corneal speckle in OCT in vivo, providing complementary quantitative information where the micro-structure of corneal tissue is of essence.
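
    The model-selection step can be sketched with scipy: fit candidate speckle distributions by maximum likelihood and rank them by Akaike's Information Criterion (the data here are simulated stand-ins for OCT speckle amplitudes):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        speckle = stats.gengamma.rvs(a=2.0, c=1.5, scale=0.8, size=2000,
                                     random_state=rng)

        candidates = {"gengamma": stats.gengamma, "gamma": stats.gamma,
                      "rayleigh": stats.rayleigh, "lognorm": stats.lognorm}
        for name, dist in candidates.items():
            params = dist.fit(speckle, floc=0)   # location fixed at zero
            ll = np.sum(dist.logpdf(speckle, *params))
            # the fixed loc adds a constant 2 to every AIC, leaving ranks intact
            aic = 2 * len(params) - 2 * ll
            print(f"{name:9s} AIC = {aic:.1f}")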

  5. A general computation model based on inverse analysis principle used for rheological analysis of W/O rapeseed and soybean oil emulsions

    NASA Astrophysics Data System (ADS)

    Vintila, Iuliana; Gavrus, Adinel

    2017-10-01

    The present research paper proposes the validation of a rigorous computation model used as a numerical tool to identify the rheological behavior of complex W/O emulsions. Starting from a three-dimensional description of a general viscoplastic flow, the thermo-mechanical equations used to identify the rheological laws of fluids or soft materials from global experimental measurements are detailed. Analyses are conducted for complex W/O emulsions, which generally exhibit Bingham behavior, using a power-law shear stress versus strain rate dependency and an improved analytical model. Experimental results are investigated for the rheological behavior of crude and refined rapeseed/soybean oils and four types of corresponding W/O emulsions with different physical-chemical compositions. The rheological behavior model was correlated with the thermo-mechanical analysis of a plane-plane rheometer, oil content, chemical composition, particle size, and emulsifier concentration. The parameters of the rheological laws describing the behavior of the industrial oils and the concentrated W/O emulsions were computed from estimated shear stresses using a non-linear regression technique, and from experimental torques using the inverse analysis tool designed by A. Gavrus (1992-2000).
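
    The Bingham-with-power-law dependency described is commonly written in Herschel-Bulkley form, tau = tau0 + K * gamma_dot^n; a sketch of fitting it by non-linear regression (the data and starting values are invented, and this is not the authors' inverse-analysis tool):

        import numpy as np
        from scipy.optimize import curve_fit

        def herschel_bulkley(gamma_dot, tau0, K, n):
            """Shear stress of a yield-stress fluid: tau0 = 0 gives a power
            law; n = 1 gives the Bingham model."""
            return tau0 + K * gamma_dot ** n

        gdot = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0, 100.0])  # 1/s
        tau = (2.0 + 1.5 * gdot ** 0.8
               + 0.1 * np.random.default_rng(5).standard_normal(gdot.size))  # Pa

        popt, _ = curve_fit(herschel_bulkley, gdot, tau, p0=[1.0, 1.0, 1.0])
        print(dict(zip(["tau0", "K", "n"], popt)))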

  6. Cross-site comparison of land-use decision-making and its consequences across land systems with a generalized agent-based model.

    PubMed

    Magliocca, Nicholas R; Brown, Daniel G; Ellis, Erle C

    2014-01-01

    Local changes in land use result from the decisions and actions of land-users within land systems, which are structured by local and global environmental, economic, political, and cultural contexts. Such cross-scale causation presents a major challenge for developing a general understanding of how local decision-making shapes land-use changes at the global scale. This paper implements a generalized agent-based model (ABM) as a virtual laboratory to explore how global and local processes influence the land-use and livelihood decisions of local land-users, operationalized as settlement-level agents, across the landscapes of six real-world test sites. Test sites were chosen in USA, Laos, and China to capture globally-significant variation in population density, market influence, and environmental conditions, with land systems ranging from swidden to commercial agriculture. Publicly available global data were integrated into the ABM to model cross-scale effects of economic globalization on local land-use decisions. A suite of statistics was developed to assess the accuracy of model-predicted land-use outcomes relative to observed and random (i.e. null model) landscapes. At four of six sites, where environmental and demographic forces were important constraints on land-use choices, modeled land-use outcomes were more similar to those observed across sites than the null model. At the two sites in which market forces significantly influenced land-use and livelihood decisions, the model was a poorer predictor of land-use outcomes than the null model. Model successes and failures in simulating real-world land-use patterns enabled the testing of hypotheses on land-use decision-making and yielded insights on the importance of missing mechanisms. The virtual laboratory approach provides a practical framework for systematic improvement of both theory and predictive skill in land change science based on a continual process of experimentation and model enhancement.

  7. Cross-Site Comparison of Land-Use Decision-Making and Its Consequences across Land Systems with a Generalized Agent-Based Model

    PubMed Central

    Magliocca, Nicholas R.; Brown, Daniel G.; Ellis, Erle C.

    2014-01-01

    Local changes in land use result from the decisions and actions of land-users within land systems, which are structured by local and global environmental, economic, political, and cultural contexts. Such cross-scale causation presents a major challenge for developing a general understanding of how local decision-making shapes land-use changes at the global scale. This paper implements a generalized agent-based model (ABM) as a virtual laboratory to explore how global and local processes influence the land-use and livelihood decisions of local land-users, operationalized as settlement-level agents, across the landscapes of six real-world test sites. Test sites were chosen in USA, Laos, and China to capture globally-significant variation in population density, market influence, and environmental conditions, with land systems ranging from swidden to commercial agriculture. Publicly available global data were integrated into the ABM to model cross-scale effects of economic globalization on local land-use decisions. A suite of statistics was developed to assess the accuracy of model-predicted land-use outcomes relative to observed and random (i.e. null model) landscapes. At four of six sites, where environmental and demographic forces were important constraints on land-use choices, modeled land-use outcomes were more similar to those observed across sites than the null model. At the two sites in which market forces significantly influenced land-use and livelihood decisions, the model was a poorer predictor of land-use outcomes than the null model. Model successes and failures in simulating real-world land-use patterns enabled the testing of hypotheses on land-use decision-making and yielded insights on the importance of missing mechanisms. The virtual laboratory approach provides a practical framework for systematic improvement of both theory and predictive skill in land change science based on a continual process of experimentation and model enhancement. PMID:24489696

  8. Generalized Structured Component Analysis with Latent Interactions

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan

    2010-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…

  9. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  10. Estimating plant available water for general crop simulations in ALMANAC/APEX/EPIC/SWAT

    USDA-ARS?s Scientific Manuscript database

    Process-based simulation models ALMANAC/APEX/EPIC/SWAT contain generalized plant growth subroutines to predict biomass and crop yield. Environmental constraints typically restrict plant growth and yield. Water stress is often an important limiting factor; it is calculated as the sum of water use f...

  11. A general psychopathology factor in early adolescence.

    PubMed

    Patalay, Praveetha; Fonagy, Peter; Deighton, Jessica; Belsky, Jay; Vostanis, Panos; Wolpert, Miranda

    2015-07-01

    Recently, a general psychopathology dimension reflecting common aspects among disorders has been identified in adults. This has not yet been considered in children and adolescents, where the focus has been on externalising and internalising dimensions. To examine the existence, correlates and predictive value of a general psychopathology dimension in young people. Alternative factor models were estimated using self-reports of symptoms in a large community-based sample aged 11-13.5 years (N = 23 477), and resulting dimensions were assessed in terms of associations with external correlates and future functioning. Both a traditional two-factor model and a bi-factor model with a general psychopathology bi-factor fitted the data well. The general psychopathology bi-factor best predicted future psychopathology and academic attainment. Associations with correlates and factor loadings are discussed. A general psychopathology factor, which is equal across genders, can be identified in young people. Its associations with correlates and future functioning indicate that investigating this factor can increase our understanding of the aetiology, risk and correlates of psychopathology. © The Royal College of Psychiatrists 2015.

  12. Molecular Dynamics based on a Generalized Born solvation model: application to protein folding

    NASA Astrophysics Data System (ADS)

    Onufriev, Alexey

    2004-03-01

    An accurate description of the aqueous environment is essential for realistic biomolecular simulations, but may become very expensive computationally. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly as a continuum with the dielectric properties of water, and includes the charge-screening effects of salt. The computational cost associated with the use of this model in Molecular Dynamics simulations is generally considerably smaller than the cost of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on an explicit water representation, conformational changes occur much faster in an implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time-scales. We apply the model to the folding of a 46-residue three-helix-bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest-energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 Å (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
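
    For reference, the pairwise Generalized Born polarization energy in the widely used Still et al. functional form can be sketched as follows (the effective Born radii, which a full implementation computes from the molecular geometry, are supplied here as inputs):

        import numpy as np

        def gb_energy(q, pos, R_eff, eps_w=78.5):
            """Still-type GB energy: -0.5 * (1 - 1/eps_w) * sum_ij qi qj / f_GB,
            with f_GB = sqrt(r^2 + Ri Rj exp(-r^2 / (4 Ri Rj))); q in e,
            pos in Angstrom, energy in kcal/mol."""
            kcal = 332.06   # Coulomb constant in kcal*Angstrom/(mol*e^2)
            r2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
            RiRj = R_eff[:, None] * R_eff[None, :]
            f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
            qq = q[:, None] * q[None, :]
            return -0.5 * kcal * (1.0 - 1.0 / eps_w) * np.sum(qq / f_gb)

        # Two opposite unit charges 3 Angstrom apart with 1.5 Angstrom radii
        q = np.array([1.0, -1.0])
        pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
        print(gb_energy(q, pos, np.array([1.5, 1.5])))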

  13. General predictive model of friction behavior regimes for metal contacts based on the formation stability and evolution of nanocrystalline surface films.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Argibay, Nicolas; Cheng, Shengfeng; Sawyer, W. G.

    2015-09-01

    The prediction of macro-scale friction and wear behavior based on first principles and material properties has remained an elusive but highly desirable target for tribologists and material scientists alike. Stochastic processes (e.g. wear), statistically described parameters (e.g. surface topography) and their evolution tend to defeat attempts to establish practical general correlations between fundamental nanoscale processes and macro-scale behaviors. We present a model based on microstructural stability and evolution for the prediction of metal friction regimes, founded on recently established microstructural deformation mechanisms of nanocrystalline metals, that relies exclusively on material properties and contact stress models. We show through complementary experimental and simulation results that this model overcomes longstanding practical challenges and successfully makes accurate and consistent predictions of friction transitions for a wide range of contact conditions. This framework not only challenges the assumptions of conventional causal relationships between hardness and friction, and between friction and wear, but also suggests a pathway for the design of higher performance metal alloys.

  14. An Ensemble System Based on Hybrid EGARCH-ANN with Different Distributional Assumptions to Predict S&P 500 Intraday Volatility

    NASA Astrophysics Data System (ADS)

    Lahmiri, S.; Boukadoum, M.

    2015-10-01

    Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under the normal, Student-t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, so that the models complement one another. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on mean absolute error and mean squared error, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intraday volatility at one- and five-minute horizons.
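
    As a hedged illustration of the EGARCH stage (the neural network stage is not shown), the sketch below fits one EGARCH(1,1) model per distributional assumption using the Python arch package; the return series is synthetic.

    ```python
    # pip install arch
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(0)
    returns = 0.5 * rng.standard_t(df=5, size=2000)   # placeholder percent returns

    # One component per distributional assumption, mirroring the ensemble design
    for dist in ("normal", "t", "ged"):
        am = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1, dist=dist)
        res = am.fit(disp="off")
        # res.conditional_volatility would feed the BPNN stage of each hybrid model
        print(dist, round(res.aic, 1))
    ```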

  15. Generalizing Gillespie’s Direct Method to Enable Network-Free Simulations

    DOE PAGES

    Suderman, Ryan T.; Mitra, Eshan David; Lin, Yen Ting; ...

    2018-03-28

    Gillespie’s direct method for stochastic simulation of chemical kinetics is a staple of computational systems biology research. However, the algorithm requires explicit enumeration of all reactions and all chemical species that may arise in the system. In many cases, this is not feasible due to the combinatorial explosion of reactions and species in biological networks. Rule-based modeling frameworks provide a way to exactly represent networks containing such combinatorial complexity, and generalizations of Gillespie’s direct method have been developed as simulation engines for rule-based modeling languages. Here, we provide a high-level description of the algorithms underlying the simulation engines, termed network-free simulation algorithms, and discuss how they have been applied in systems biology research. We also define a generic rule-based modeling framework and describe a number of technical details required for adapting Gillespie’s direct method for network-free simulation. Lastly, we briefly discuss potential avenues for advancing network-free simulation and the role they continue to play in modeling dynamical systems in biology.
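
    For readers unfamiliar with the baseline algorithm being generalized, here is a minimal Python sketch of Gillespie's direct method for an explicitly enumerated network (the network-free variants discussed in the record avoid this enumeration); the example reaction system is invented.

    ```python
    import numpy as np

    def gillespie_direct(x0, stoich, propensity, t_max, seed=0):
        """Gillespie's direct method: draw the waiting time to the next reaction
        from Exp(a0), then choose which reaction fires with probability a_j / a0."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, np.asarray(x0, dtype=float)
        while t < t_max:
            a = propensity(x)                 # per-reaction propensities
            a0 = a.sum()
            if a0 == 0.0:                     # nothing can fire; system is stuck
                break
            t += rng.exponential(1.0 / a0)
            j = rng.choice(len(a), p=a / a0)
            x += stoich[j]                    # apply the stoichiometric update
        return t, x

    # Toy network: A + B -> C (k1 = 0.001), C -> A + B (k2 = 0.05); species (A, B, C)
    stoich = np.array([[-1, -1, 1], [1, 1, -1]])
    prop = lambda x: np.array([0.001 * x[0] * x[1], 0.05 * x[2]])
    print(gillespie_direct([100, 80, 0], stoich, prop, t_max=50.0))
    ```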

  16. Retrievals of aerosol microphysics from simulations of spaceborne multiwavelength lidar measurements

    NASA Astrophysics Data System (ADS)

    Whiteman, David N.; Pérez-Ramírez, Daniel; Veselovskii, Igor; Colarco, Peter; Buchard, Virginie

    2018-01-01

    In support of the Aerosol, Clouds, Ecosystems mission, simulations of a spaceborne multiwavelength lidar are performed based on global model simulations of the atmosphere along a satellite orbit track. The yield for aerosol microphysical inversions is quantified, and comparisons are made between the aerosol microphysics inherent in the global model and those inverted both from the model's optical data and from the simulated three-backscatter, two-extinction lidar measurements, which are based on the model's optical data. We find that the yield can be increased significantly if inversions based on a reduced optical dataset of three backscatters and one extinction are acceptable. In general, retrieval performance is better for cases where the aerosol fine mode dominates, although a lack of sensitivity to particles with sizes less than 0.1 μm is found. A lack of sensitivity to coarse-mode cases is also found, in agreement with earlier studies. Surface area is generally the most robustly retrieved quantity. The work here points toward the need for ancillary data to help constrain the lidar inversions, and also for joint inversions involving lidar and polarimeter measurements.

  17. Retrievals of Aerosol Microphysics from Simulations of Spaceborne Multiwavelength Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Perez-Ramírez, Daniel; Veselovskii, Igor; Colarco, Peter; Buchard, Virginie

    2017-01-01

    In support of the Aerosol, Clouds, Ecosystems mission, simulations of a spaceborne multiwavelength lidar are performed based on global model simulations of the atmosphere along a satellite orbit track. The yield for aerosol microphysical inversions is quantified, and comparisons are made between the aerosol microphysics inherent in the global model and those inverted both from the model's optical data and from the simulated three-backscatter, two-extinction lidar measurements, which are based on the model's optical data. We find that the yield can be increased significantly if inversions based on a reduced optical dataset of three backscatters and one extinction are acceptable. In general, retrieval performance is better for cases where the aerosol fine mode dominates, although a lack of sensitivity to particles with sizes less than 0.1 microns is found. A lack of sensitivity to coarse-mode cases is also found, in agreement with earlier studies. Surface area is generally the most robustly retrieved quantity. The work here points toward the need for ancillary data to help constrain the lidar inversions, and also for joint inversions involving lidar and polarimeter measurements.

  18. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and with the prediction of single-neuron spiking. A complementary approach to capturing collective dynamics, based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations, is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
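
    To make the discrete-time nonlinear Hawkes construction concrete, the following Python sketch forward-simulates a small ensemble with an exponential link, lambda_i[t] = exp(mu_i + sum_jk w_ij[k] s_j[t-k]); the sizes and filter values are invented, and PP-GLM fitting (likelihood maximization) is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, T, K, dt = 5, 5000, 20, 0.001           # neurons, bins, history length, bin width (s)
    mu = np.log(np.full(N, 5.0))               # baseline log-rates (about 5 Hz)
    W = 0.2 * rng.standard_normal((N, N, K))   # coupling filters w_ij[k]
    W *= np.exp(-np.arange(K) / 5.0)           # decaying history dependence

    S = np.zeros((N, T))                       # spike indicator array
    for t in range(K, T):
        hist = S[:, t - K:t][:, ::-1]                      # s_j[t-1], ..., s_j[t-K]
        drive = np.einsum("ijk,jk->i", W, hist)            # summed history drive
        lam = np.exp(mu + drive)                           # conditional intensity (Hz)
        S[:, t] = rng.random(N) < 1.0 - np.exp(-lam * dt)  # Bernoulli bin approximation
    print("empirical rates (Hz):", S.mean(axis=1) / dt)
    ```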

  19. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and with the prediction of single-neuron spiking. A complementary approach to capturing collective dynamics, based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations, is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles.

  20. Linear mixed model for heritability estimation that explicitly addresses environmental variation.

    PubMed

    Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S

    2016-07-05

    The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability", in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
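
    The environmental covariance construction lends itself to a few lines of code. The sketch below builds the Gaussian radial basis function kernel from spatial locations; the length scale, locations, and exact scaling convention are assumptions (the study's FaST-LMM implementation is linked in the record).

    ```python
    import numpy as np

    def rbf_covariance(loc, length_scale):
        """K[i, j] = exp(-||loc_i - loc_j||^2 / length_scale^2); conventions differ
        by a factor of 2 in the denominator, so treat this as one possible choice."""
        d2 = ((loc[:, None, :] - loc[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / length_scale**2)

    # Two-random-effect LMM: y = Xb + g + c + noise, with cov(g) proportional to a
    # genomic kinship matrix and cov(c) proportional to the spatial kernel below.
    loc = np.random.default_rng(2).uniform(0.0, 10.0, size=(100, 2))  # placeholder sites
    K_env = rbf_covariance(loc, length_scale=2.0)
    print(K_env.shape, bool(np.allclose(np.diag(K_env), 1.0)))
    ```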

  1. Generalizing Gillespie’s Direct Method to Enable Network-Free Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suderman, Ryan T.; Mitra, Eshan David; Lin, Yen Ting

    Gillespie’s direct method for stochastic simulation of chemical kinetics is a staple of computational systems biology research. However, the algorithm requires explicit enumeration of all reactions and all chemical species that may arise in the system. In many cases, this is not feasible due to the combinatorial explosion of reactions and species in biological networks. Rule-based modeling frameworks provide a way to exactly represent networks containing such combinatorial complexity, and generalizations of Gillespie’s direct method have been developed as simulation engines for rule-based modeling languages. Here, we provide a high-level description of the algorithms underlying the simulation engines, termed network-free simulation algorithms, and discuss how they have been applied in systems biology research. We also define a generic rule-based modeling framework and describe a number of technical details required for adapting Gillespie’s direct method for network-free simulation. Lastly, we briefly discuss potential avenues for advancing network-free simulation and the role they continue to play in modeling dynamical systems in biology.

  2. A General Architecture for Intelligent Tutoring of Diagnostic Classification Problem Solving

    PubMed Central

    Crowley, Rebecca S.; Medvedeva, Olga

    2003-01-01

    We report on a general architecture for creating knowledge-based medical training systems that teach diagnostic classification problem solving. The approach is informed by our previous work describing the development of expertise in classification problem solving in Pathology. The architecture envelops the traditional Intelligent Tutoring System design within the Unified Problem-solving Method description Language (UPML) architecture, supporting component modularity and reuse. Based on the domain ontology, domain task ontology and case data, the abstract problem-solving methods of the expert model create a dynamic solution graph. Student interaction with the solution graph is filtered through an instructional layer, which is created by a second set of abstract problem-solving methods and pedagogic ontologies, in response to the current state of the student model. We outline the advantages and limitations of this general approach, and describe its implementation in SlideTutor, a developing Intelligent Tutoring System in Dermatopathology. PMID:14728159

  3. On Two-Scale Modelling of Heat and Mass Transfer

    NASA Astrophysics Data System (ADS)

    Vala, J.; Št'astník, S.

    2008-09-01

    Modelling the macroscopic behaviour of materials consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as the (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both a deeper understanding of physical principles and the development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to the conservation principles of thermodynamics at both the micro- and the macroscopic level, and suggests an algorithm for scale-bridging based on robust finite element techniques. Some existence and convergence questions, namely those based on the construction of Rothe sequences and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations required by new technologies.

  4. Flavoured tobacco products and the public's health: lessons from the TPSAC menthol report.

    PubMed

    Samet, Jonathan M; Pentz, Mary Ann; Unger, Jennifer B

    2016-11-01

    The menthol report developed by the Tobacco Products Scientific Advisory Committee (TPSAC) of the Center for Tobacco Products elaborated a methodology for considering the public health impact of menthol in cigarettes that has relevance to flavourings generally. The TPSAC report was based on a conceptual framework for how menthol in cigarettes affects public health, on evidence from related systematic reviews, and on an evidence-based statistical model. In extending this approach to flavourings generally, consideration will need to be given to the existence of multiple flavourings, a very dynamic marketplace, and regulatory interventions and industry activities. Now is the time to begin developing the research strategies and models needed to extend the TPSAC approach to flavoured tobacco products generally.

  5. Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning.

    PubMed

    Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei

    2016-10-01

    Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and they provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. In terms of principles, the main contribution is that the framework achieves unsupervised learning of episodic features (key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. In terms of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations and cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class, mainly comprising the key components, their spatial relations and average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can produce a classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.

  6. Comparison of statistical models for writer verification

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Ball, Gregory R.

    2009-01-01

    A novel statistical model for determining whether a pair of documents, a known and a questioned one, were written by the same individual is proposed. The goal of this formulation is to learn the specific uniqueness of style in a particular author's writing, given the known document. Since there are often insufficient samples to extrapolate a generalized model of a writer's handwriting based solely on the document, we instead generalize over the differences between the author and a large population of known different writers. This contrasts with an earlier model in which probability distributions were specified a priori rather than learned. We show the performance of the model along with a comparison to the older, non-learning model, over which it shows significant improvement.

  7. Guide on Data Models in the Selection and Use of Database Management Systems. Final Report.

    ERIC Educational Resources Information Center

    Gallagher, Leonard J.; Draper, Jesse M.

    A tutorial introduction to data models in general is provided, with particular emphasis on the relational and network models defined by the two proposed ANSI (American National Standards Institute) database language standards. Examples based on the network and relational models include specific syntax and semantics, while examples from the other…

  8. Deformed exponentials and portfolio selection

    NASA Astrophysics Data System (ADS)

    Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro

    In this paper, we present a method for portfolio selection based on the consideration of deformed exponentials, in order to generalize methods based on the Gaussianity of portfolio returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows more accurate behavior in situations where heavy-tailed distributions are necessary to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show the proposed method outperforms the Markowitz portfolio in cumulative returns, with a good convergence rate for the asset weights, which are found by means of a natural gradient algorithm.
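
    As a concrete anchor for the deformed-exponential idea, the sketch below implements the Tsallis q-exponential, one standard deformation that recovers exp(x) as q approaches 1 and yields power-law tails for q > 1; whether this is the precise deformation used by the authors is an assumption.

    ```python
    import numpy as np

    def q_exp(x, q):
        """Tsallis q-exponential: [1 + (1 - q) x]_+^(1 / (1 - q)); reduces to exp(x)
        as q -> 1. Densities like p(x) ~ q_exp(-beta * x**2, q) have heavy tails."""
        x = np.asarray(x, dtype=float)
        if np.isclose(q, 1.0):
            return np.exp(x)
        base = 1.0 + (1.0 - q) * x
        safe = np.where(base > 0.0, base, 1.0)          # avoid invalid power operations
        return np.where(base > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

    x = np.linspace(-3.0, 1.5, 7)
    print(q_exp(x, 1.0))   # ordinary exponential
    print(q_exp(x, 1.5))   # heavy-tailed deformation
    ```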

  9. Freezing Transition Studies Through Constrained Cell Model Simulation

    NASA Astrophysics Data System (ADS)

    Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.

    2014-10-01

    In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pair potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle to a single cell. This model is a special case of a more general cell model that is formed by introducing an additional field variable controlling the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm corresponding to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is then determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.
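
    The histogram reweighting machinery used throughout the study can be illustrated in its simplest, single-histogram canonical form; the constant-pressure and tempering extensions used in the paper add bookkeeping but not new ideas. The energies below are synthetic.

    ```python
    import numpy as np

    def reweight_mean(E, beta_sim, beta_new, obs=None):
        """Single-histogram reweighting: samples E collected at inverse temperature
        beta_sim estimate an average at beta_new via weights exp(-(dbeta) * E)."""
        logw = -(beta_new - beta_sim) * E
        logw -= logw.max()                    # stabilize the exponentials
        w = np.exp(logw)
        O = E if obs is None else obs
        return float(np.sum(O * w) / np.sum(w))

    E = np.random.default_rng(3).normal(-500.0, 20.0, size=100_000)  # placeholder energies
    print(reweight_mean(E, beta_sim=1.00, beta_new=1.02))  # <E> at a nearby temperature
    ```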

  10. Learning abstract visual concepts via probabilistic program induction in a Language of Thought.

    PubMed

    Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T

    2017-11-01

    The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to people's hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization.

  11. Development of a transformation model to derive general population-based utility: Mapping the pruritus-visual analog scale (VAS) to the EQ-5D utility.

    PubMed

    Park, Sun-Young; Park, Eun-Ja; Suh, Hae Sun; Ha, Dongmun; Lee, Eui-Kyung

    2017-08-01

    Although nonpreference-based disease-specific measures are widely used in clinical studies, they cannot generate utilities for economic evaluation. A solution to this problem is to estimate utilities from disease-specific instruments using a mapping function. This study aimed to develop a transformation model for mapping the pruritus-visual analog scale (VAS) to the EuroQol 5-Dimension 3-Level (EQ-5D-3L) utility index in pruritus. A cross-sectional survey was conducted with a sample (n = 268) drawn from the general population of South Korea. Data were randomly divided into 2 groups, one for estimating and the other for validating the mapping models. To select the best model, we developed and compared 3 separate models using demographic information and the pruritus-VAS as independent variables. Predictive performance was assessed using the mean absolute deviation and root mean square error in a separate dataset. Among the 3 models, model 2, using age, age squared, sex, and the pruritus-VAS as independent variables, had the best performance based on goodness of fit and model simplicity, with a log likelihood of 187.13. The 3 models had similar precision errors based on mean absolute deviation and root mean square error in the validation dataset. No statistically significant difference was observed between the mean observed and predicted values in any model. In conclusion, model 2 was chosen as the preferred mapping model. Outcomes measured on the pruritus-VAS can be transformed into the EQ-5D-3L utility index using this mapping model, which makes an economic evaluation possible when only pruritus-VAS data are available.
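
    As a sketch of what such a mapping model looks like in code, the snippet below fits the covariate set of the preferred model (age, age squared, sex, pruritus-VAS) by ordinary least squares on synthetic data; the published coefficients and estimation details are not reproduced here.

    ```python
    import numpy as np

    def design(age, sex, vas):
        return np.column_stack([np.ones_like(age), age, age**2, sex, vas])

    rng = np.random.default_rng(4)
    n = 268                                            # matches the reported sample size
    age = rng.uniform(20.0, 70.0, n)
    sex = rng.integers(0, 2, n).astype(float)
    vas = rng.uniform(0.0, 10.0, n)
    eq5d = 0.95 - 0.015 * vas - 0.0005 * age + rng.normal(0.0, 0.05, n)  # synthetic

    beta, *_ = np.linalg.lstsq(design(age, sex, vas), eq5d, rcond=None)
    # Map a new pruritus-VAS score (with age and sex) to a predicted utility:
    print(design(np.array([50.0]), np.array([1.0]), np.array([6.0])) @ beta)
    ```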

  12. Theme-Based Tests: Teaching in Context

    ERIC Educational Resources Information Center

    Anderson, Gretchen L.; Heck, Marsha L.

    2005-01-01

    Theme-based tests provide an assessment tool that instructs as well as provides a single general context for a broad set of biochemical concepts. A single story line connects the questions on the tests and models applications of scientific principles and biochemical knowledge in an extended scenario. Theme-based tests are based on a set of…

  13. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.

  14. The Applicability of the Generalized Method of Cells for Analyzing Discontinuously Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Pahr, D. H.; Arnold, S. M.

    2001-01-01

    The paper begins with a short overview of recent work in the field of discontinuously reinforced composites, focusing on the different parameters that influence the material behavior of such composites, as well as the various analysis approaches undertaken. Based on this overview it became evident that, in order to investigate the enumerated effects in an efficient and comprehensive manner, an alternative to the computationally intensive finite-element-based micromechanics approach is required. Therefore, an investigation is conducted to demonstrate the utility of the generalized method of cells (GMC), a semi-analytical micromechanics-based approach, for simulating the elastic and elastoplastic material behavior of aligned short-fiber composites. The results are compared with (1) simulations using other micromechanics-based mean field models and finite element (FE) unit cell models found in the literature for elastic material behavior, as well as (2) finite element unit cell and a new semianalytical elastoplastic shear lag model in the inelastic range. GMC is shown to have a definite window of applicability when simulating discontinuously reinforced composite material behavior.

  15. The Applicability of the Generalized Method of Cells for Analyzing Discontinuously Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Pahr, D. H.; Arnold, S. M.

    2001-01-01

    The paper begins with a short overview of recent work in the field of discontinuously reinforced composites, focusing on the different parameters that influence the material behavior of such composites, as well as the various analysis approaches undertaken. Based on this overview it became evident that, in order to investigate the enumerated effects in an efficient and comprehensive manner, an alternative to the computationally intensive finite-element-based micromechanics approach is required. Therefore, an investigation is conducted to demonstrate the utility of the generalized method of cells (GMC), a semi-analytical micromechanics-based approach, for simulating the elastic and elastoplastic material behavior of aligned short-fiber composites. The results are compared with simulations using other micromechanics-based mean field models and finite element (FE) unit cell models found in the literature for elastic material behavior, as well as finite element unit cell and a new semianalytical elastoplastic shear lag model in the inelastic range. GMC is shown to have a definite window of applicability when simulating discontinuously reinforced composite material behavior.

  16. Pattern recognition tool based on complex network-based approach

    NASA Astrophysics Data System (ADS)

    Casanova, Dalcimar; Backes, André Ricardo; Martinez Bruno, Odemir

    2013-02-01

    This work proposes a generalization of the method introduced by the authors in 'A complex network-based approach for boundary shape analysis'. Instead of modelling a contour as a graph and using complex-network rules to characterize it, we generalize the technique: the work proposes a mathematical tool for characterizing signals, curves and sets of points. To evaluate the descriptive power of the proposal, an experiment on plant identification based on leaf-vein images is conducted. Leaf venation is a taxonomic characteristic used for plant identification, and these structures are complex and difficult to represent as signals or curves, and hence to analyze with classical pattern recognition approaches. Here, we model the veins as a set of points and represent them as graphs. As features, we use degree and joint-degree measurements over a dynamic evolution of the network. The results demonstrate that the technique has good discriminative power and can be used for plant identification, as well as for other complex pattern recognition tasks.
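
    A minimal version of the point-set-to-network idea can be sketched as follows: connect two points whenever their distance falls below a threshold, and track degree statistics as the threshold evolves. The joint-degree feature and the leaf-vein extraction step are omitted, and all data are synthetic.

    ```python
    import numpy as np

    def degree_evolution(points, thresholds):
        """Mean network degree as a function of the distance threshold used to
        connect points; the resulting curve serves as a shape/texture signature."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        n = len(points)
        feats = []
        for t in thresholds:
            A = (d < t) & ~np.eye(n, dtype=bool)   # adjacency at scale t, no self-loops
            feats.append(A.sum(axis=1).mean())     # mean degree at this scale
        return np.array(feats)

    pts = np.random.default_rng(5).random((200, 2))        # stand-in for vein points
    print(degree_evolution(pts, np.linspace(0.05, 0.5, 10)))
    ```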

  17. Transformation Model Choice in Nonlinear Regression Analysis of Fluorescence-based Serial Dilution Assays

    PubMed Central

    Fong, Youyi; Yu, Xuesong

    2016-01-01

    Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five-parameter logistic (5PL) curves to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood-type algorithm for fitting heteroscedastic logistic models. Next, we show that the 5PL and log-5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our finding is that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
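
    For reference, the 5PL curve and a basic homoscedastic fit are sketched below with scipy; the paper's contribution, the generalized least squares-pseudolikelihood handling of heteroscedastic FI noise and the transformation choice, is not reproduced. All data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fivepl(x, a, b, c, d, g):
        """Five-parameter logistic: a = response as x -> 0, d = response as x -> inf,
        c = midpoint-related location, b = slope, g = asymmetry (g = 1 gives 4PL)."""
        return d + (a - d) / (1.0 + (x / c) ** b) ** g

    conc = np.logspace(-2, 3, 8)                           # serial dilution series
    rng = np.random.default_rng(6)
    fi = fivepl(conc, 30000.0, 1.2, 15.0, 80.0, 0.8)
    fi *= np.exp(rng.normal(0.0, 0.05, fi.size))           # multiplicative (CV-type) noise

    p0 = [fi.max(), 1.0, np.median(conc), fi.min(), 1.0]
    popt, _ = curve_fit(fivepl, conc, fi, p0=p0, maxfev=20000)
    print(np.round(popt, 2))
    ```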

  18. A generalized mixed effects model of abundance for mark-resight data when sampling is without replacement

    USGS Publications Warehouse

    McClintock, B.T.; White, Gary C.; Burnham, K.P.; Pryde, M.A.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    In recent years, the mark-resight method for estimating abundance when the number of marked individuals is known has become increasingly popular. By using field-readable bands that may be resighted from a distance, these techniques can be applied to many species, and are particularly useful for relatively small, closed populations. However, due to the different assumptions and general rigidity of the available estimators, researchers must often commit to a particular model without rigorous quantitative justification for model selection based on the data. Here we introduce a nonlinear logit-normal mixed effects model addressing this need for a more generalized framework. Similar to models available for mark-recapture studies, the estimator allows a wide variety of sampling conditions to be parameterized efficiently under a robust sampling design. Resighting rates may be modeled simply or with more complexity by including fixed temporal and random individual heterogeneity effects. Using information theory, the model(s) best supported by the data may be selected from the candidate models proposed. Under this generalized framework, we hope the uncertainty associated with mark-resight model selection will be reduced substantially. We compare our model to other mark-resight abundance estimators when applied to mainland New Zealand robin (Petroica australis) data recently collected in Eglinton Valley, Fiordland National Park and summarize its performance in simulation experiments.

  19. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by gradient descent. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, approximating a Gaussian distribution whose shape is high and narrow.
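
    A one-dimensional version of the PDF-shaping criterion is easy to sketch: estimate the error PDF with a kernel density estimator and penalize its squared deviation from a narrow Gaussian target. The paper's criterion is two-dimensional (time and space) and is minimized with respect to the WNN parameters; the grid, target width, and residuals below are assumptions.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def pdf_shaping_loss(errors, sigma_target=0.1, grid=None):
        """Integrated squared deviation between the KDE of the modeling errors and
        a zero-mean Gaussian target PDF, the quantity driving PDF shaping."""
        if grid is None:
            grid = np.linspace(-1.0, 1.0, 200)
        p_err = gaussian_kde(errors)(grid)                       # data-driven error PDF
        p_tgt = np.exp(-grid**2 / (2 * sigma_target**2)) / (sigma_target * np.sqrt(2 * np.pi))
        return float(((p_err - p_tgt) ** 2).sum() * (grid[1] - grid[0]))

    e = np.random.default_rng(7).normal(0.0, 0.3, size=500)      # placeholder residuals
    print(pdf_shaping_loss(e))   # gradient descent on the WNN weights would lower this
    ```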

  20. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by gradient descent. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, approximating a Gaussian distribution whose shape is high and narrow.

  1. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by gradient descent. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, approximating a Gaussian distribution whose shape is high and narrow.

  2. Specialty hospitals emulating focused factories: a case study.

    PubMed

    Kumar, Sameer

    2010-01-01

    For 15 years, general hospital managers have faced new competition from for-profit specialty hospitals that operate on a "focused factory" model and threaten to siphon off the most profitable patients. This paper discusses North American specialty hospitals and reviews the impact of rising costs on general hospital operations. The focus is on discovering whether specialty hospitals are more efficient than general hospitals; if so, how significant the difference is; and what general hospitals can do in light of the rise of specialty hospitals. The case study involves stochastic frontier regression analysis using Cobb-Douglas and translog cost functions to compare the efficiency of Minnesota general and specialty hospitals, based on data from 117 general and 19 specialty hospitals. The results suggest that specialty hospitals are significantly more efficient than general hospitals; overall, general hospitals were found to be more than twice as inefficient as the specialty hospitals in the sample. Some of the cost-cutting factors highlighted can be implemented to trim rising costs. The case study identifies managerial levers that general hospital operations managers might use to control rising costs and to compete with specialty hospitals by reducing overheads and other major costs. The study is based on empirical modeling of an important healthcare operational challenge and provides in-depth information that has health policy implications. The analysis and findings enable healthcare managers to guide their institutions in a new direction during a time of change within the industry.

  3. Diagnostic layer integration in FPGA-based pipeline measurement systems for HEP experiments

    NASA Astrophysics Data System (ADS)

    Pozniak, Krzysztof T.

    2007-08-01

    Integrated triggering and data acquisition systems for high energy physics experiments may be considered fast, multichannel, synchronous, distributed, pipeline measurement systems. A considerable extension of the functional, technological and monitoring demands recently imposed on them has forced the common use of large field-programmable gate array (FPGA) matrices enhanced with digital signal processing, and of fast optical transmission, for their realization. This paper discusses the modelling, design, realization and testing of pipeline measurement systems. The distribution of synchronous data stream flows in the network is considered, and a general functional structure of a single network node is presented. The suggested, novel block structure of the node model facilitates full implementation in an FPGA chip, circuit standardization and parametrization, as well as integration of the functional and diagnostic layers. A general method for pipeline system design is derived, based on a unified model of the synchronous data network node. A few examples of practically realized, FPGA-based pipeline measurement systems are presented; the described systems were applied in ZEUS and CMS.

  4. Making Predictions in a Changing World: The Benefits of Individual-Based Ecology

    PubMed Central

    Stillman, Richard A.; Railsback, Steven F.; Giske, Jarl; Berger, Uta; Grimm, Volker

    2014-01-01

    Ecologists urgently need a better ability to predict how environmental change affects biodiversity. We examine individual-based ecology (IBE), a research paradigm that promises a better predictive ability by using individual-based models (IBMs) to represent ecological dynamics as arising from how individuals interact with their environment and with each other. A key advantage of IBMs is that the basis for predictions (fitness maximization by individual organisms) is more general and reliable than the empirical relationships that other models depend on. Case studies illustrate the usefulness and predictive success of long-term IBE programs. The pioneering programs had three phases: conceptualization, implementation, and diversification, with continued validation of models running throughout. The breakthroughs that make IBE more productive include standards for describing and validating IBMs, improved and standardized theory for individual traits and behavior, software tools, and generalized instead of system-specific IBMs. We provide guidelines for pursuing IBE and a vision for future IBE research. PMID:26955076

  5. A CONSISTENT APPROACH FOR THE APPLICATION OF PHARMACOKINETIC MODELING IN CANCER RISK ASSESSMENT

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) modeling provides important capabilities for improving the reliability of the extrapolations across dose, species, and exposure route that are generally required in chemical risk assessment regardless of the toxic endpoint being consid...

  6. Moving Base Simulation of an ASTOVL Lift-Fan Aircraft

    DOT National Transportation Integrated Search

    1995-08-01

    Using a generalized simulation model, a moving-base simulation of a lift-fan short takeoff/vertical landing fighter aircraft was conducted on the Vertical Motion Simulator at Ames Research Center. Objectives of the experiment were to (1) assess ...

  7. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    USGS Publications Warehouse

    Moisen, Gretchen G.; Edwards, Thomas C.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
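
    As a toy illustration of the modeling strategy (not the authors' actual FIA workflow), a binomial GLM for forest type as a function of terrain and location predictors could be fit as below with the Python statsmodels package; all data are synthetic.

    ```python
    # pip install statsmodels
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 500
    elev = rng.uniform(1500.0, 3200.0, n)          # elevation (m)
    slope = rng.uniform(0.0, 40.0, n)              # slope (degrees)
    x, y = rng.uniform(0.0, 100.0, (2, n))         # geographic coordinates (km)
    X = sm.add_constant(np.column_stack([elev, slope, x, y]))

    # Synthetic presence/absence of a forest type driven by elevation and slope
    p = 1.0 / (1.0 + np.exp(-(0.002 * (elev - 2300.0) + 0.02 * (slope - 20.0))))
    forested = (rng.random(n) < p).astype(float)

    fit = sm.GLM(forested, X, family=sm.families.Binomial()).fit()
    print(fit.params)   # volume models would swap in e.g. a Gaussian or Gamma family
    ```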

  8. Physics-based statistical learning approach to mesoscopic model selection.

    PubMed

    Taverniers, Søren; Haut, Terry S; Barros, Kipton; Alexander, Francis J; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various model complexities is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.

  10. A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Shengzhi; Ming, Bo; Huang, Qiang

    It is critically meaningful to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient of each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths of the individual models while reducing their weaknesses; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
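
    The entropy weight step is compact enough to show directly. The sketch below computes combination weights from a matrix of benefit-type accuracy scores, one column per individual model; the score definition and values are invented.

    ```python
    import numpy as np

    def entropy_weights(perf):
        """Entropy weight method: perf[i, j] > 0 scores model j in period i.
        Columns with more informative variation get lower entropy, hence more weight."""
        p = perf / perf.sum(axis=0, keepdims=True)      # column-wise shares
        n = perf.shape[0]
        with np.errstate(divide="ignore", invalid="ignore"):
            plogp = np.where(p > 0.0, p * np.log(p), 0.0)
        e = -plogp.sum(axis=0) / np.log(n)              # normalized entropy per model
        d = 1.0 - e                                     # degree of divergence
        return d / d.sum()

    # e.g. inverse absolute errors of MLR, ANN, SVM over six validation periods
    scores = 1.0 / np.abs(np.random.default_rng(9).normal(0.10, 0.03, size=(6, 3)))
    w = entropy_weights(scores)
    print(w, w.sum())   # CFM forecast: sum_j w[j] * yhat_j
    ```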

  11. On Mathematical Anti-Evolutionism

    NASA Astrophysics Data System (ADS)

    Rosenhouse, Jason

    2016-03-01

    The teaching of evolution in American high schools has long been a source of controversy. The past decade has seen an important shift in the rhetoric of anti-evolutionists, toward arguments of a strongly mathematical character. These mathematical arguments, while different in their specifics, follow the same general program and rely on the same underlying model of evolution. We shall discuss the nature and history of this program and model and describe general reasons for skepticism with regard to any anti-evolutionary arguments based upon them. We shall then survey the major arguments used by anti-evolutionists, to show how our general considerations make it possible to quickly identify their weakest points.

  12. The shadow map: a general contact definition for capturing the dynamics of biomolecular folding and function.

    PubMed

    Noel, Jeffrey K; Whitford, Paul C; Onuchic, José N

    2012-07-26

    Structure-based models (SBMs) are simplified models of the biomolecular dynamics that arise from funneled energy landscapes. We recently introduced an all-atom SBM that explicitly represents the atomic geometry of a biomolecule. While this initial study showed the robustness of the all-atom SBM Hamiltonian to changes in many of the energetic parameters, an important aspect that has not been explored previously is the definition of native interactions. In this study, we propose a general definition for generating atomically grained contact maps, called "Shadow". The Shadow algorithm initially considers all atoms within a cutoff distance and then, controlled by a screening parameter, discards the occluded contacts. We show that this choice of contact map is not only well behaved for protein folding, since it produces consistently cooperative folding behavior in SBMs, but also desirable for exploring the dynamics of macromolecular assemblies, since it distributes energy similarly between RNAs and proteins despite their disparate internal packing. All-atom structure-based models employing Shadow contact maps provide a general framework for exploring the geometrical features of biomolecules, especially the connections between folding and function.
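
    A heavily simplified sketch of the cutoff-plus-occlusion idea follows: a candidate pair within the cutoff is discarded when a third atom lies within a screening radius of the line segment joining the pair. The published Shadow algorithm's occlusion geometry is more careful than this point-to-segment test, so treat the snippet as illustrative only.

    ```python
    import numpy as np

    def shadow_like_contacts(xyz, cutoff=6.0, screen=1.0, min_sep=4):
        """Toy Shadow-style contact map: keep pair (i, j) within `cutoff` unless some
        atom k comes within `screen` of the i-j segment (a crude occlusion test)."""
        n, contacts = len(xyz), []
        for i in range(n):
            for j in range(i + min_sep, n):          # skip near-in-sequence pairs
                rij = xyz[j] - xyz[i]
                if np.linalg.norm(rij) > cutoff:
                    continue
                occluded = False
                for k in range(n):
                    if k in (i, j):
                        continue
                    t = np.clip(np.dot(xyz[k] - xyz[i], rij) / np.dot(rij, rij), 0.0, 1.0)
                    if np.linalg.norm(xyz[k] - (xyz[i] + t * rij)) < screen:
                        occluded = True
                        break
                if not occluded:
                    contacts.append((i, j))
        return contacts

    xyz = np.random.default_rng(10).random((40, 3)) * 15.0   # toy coordinates (Angstrom)
    print(len(shadow_like_contacts(xyz)))
    ```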

  13. Characterization of exchange rate regimes based on scaling and correlation properties of volatility for ASEAN-5 countries

    NASA Astrophysics Data System (ADS)

    Muniandy, Sithi V.; Uning, Rosemary

    2006-11-01

    Foreign currency exchange rate policies of ASEAN member countries have undergone tremendous changes following the 1997 Asian financial crisis. In this paper, we study the fractal and long-memory characteristics in the volatility of the exchange rates of the five ASEAN founding members with respect to the US dollar. The impact of the exchange rate policies implemented by the ASEAN-5 countries on currency fluctuations during the pre-, mid- and post-crisis periods is briefly discussed. The time series considered are daily price returns, absolute returns and aggregated absolute returns, each partitioned into three segments based on the crisis regimes. These time series are then modeled using fractional Gaussian noise, the fractionally integrated ARFIMA(0,d,0) process and the generalized Cauchy process. The first two stationary models describe long-range dependence through the Hurst and fractional differencing parameters, respectively, while the generalized Cauchy process offers independent estimation of the fractal dimension and the long-memory exponent. Comparing the three models, we found that the generalized Cauchy process showed the greatest sensitivity to the transitions between exchange rate regimes implemented by the ASEAN-5 countries.
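
    Of the three models, the long-memory parameter is the easiest to illustrate: the aggregated-variance method estimates the Hurst exponent from how block-mean variances scale with block size. The sketch below applies it to synthetic absolute returns; the scale choices are arbitrary.

    ```python
    import numpy as np

    def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
        """Aggregated-variance Hurst estimate: for fractional Gaussian noise the
        variance of the scale-m block means behaves like m^(2H - 2)."""
        v = []
        for m in scales:
            k = len(x) // m
            v.append(x[:k * m].reshape(k, m).mean(axis=1).var())
        slope, _ = np.polyfit(np.log(scales), np.log(v), 1)
        return 1.0 + slope / 2.0

    r = np.random.default_rng(11).standard_normal(4096)   # i.i.d. returns -> H near 0.5
    print(hurst_aggvar(np.abs(r)))                        # volatility proxy, as in the study
    ```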

  14. [Impact analysis of shuxuetong injection on abnormal changes of ALT based on generalized boosted models propensity score weighting].

    PubMed

    Yang, Wei; Yi, Dan-Hui; Xie, Yan-Ming; Yang, Wei; Dai, Yi; Zhi, Ying-Jie; Zhuang, Yan; Yang, Hu

    2013-09-01

    To estimate the treatment effects of Shuxuetong injection on abnormal changes in the ALT index, that is, to explore whether Shuxuetong injection harms liver function in clinical settings, and to provide clinical guidance for its safe application. Clinical information on traditional Chinese medicine (TCM) injections was gathered from the hospital information systems (HIS) of eighteen general hospitals. This is a retrospective cohort study using abnormal change in the ALT index as the outcome. A large number of confounding biases are taken into account through generalized boosted models (GBM) and a multiple logistic regression model (MLRM) to estimate the treatment effects of Shuxuetong injection on abnormal changes in the ALT index and to explore possible influencing factors. The advantages and application process of GBM are demonstrated with examples; the method eliminates the biases from most confounding variables between groups, which makes the estimated treatment effects of Shuxuetong injection on the ALT index more reliable. Based on large-scale clinical observational data from the HIS database, no significant effect of Shuxuetong injection on abnormal changes in ALT was found.
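
    A hypothetical sketch of the GBM propensity-weighting step with scikit-learn follows; the confounders, sample size, and tuning values are invented, and the paper's subsequent outcome model (the MLRM on ALT changes) is not shown.

    ```python
    # pip install scikit-learn
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(12)
    n = 5000
    X = rng.normal(size=(n, 6))                                # stand-in HIS confounders
    treated = rng.random(n) < 1.0 / (1.0 + np.exp(-X[:, 0]))   # confounded exposure

    # GBM propensity scores, then inverse-probability-of-treatment weights (ATE form)
    gbm = GradientBoostingClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
    ps = gbm.fit(X, treated).predict_proba(X)[:, 1]
    w = np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
    print(round(w[treated].mean(), 2), round(w[~treated].mean(), 2))
    ```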

  15. Robust current control-based generalized predictive control with sliding mode disturbance compensation for PMSM drives.

    PubMed

    Liu, Xudong; Zhang, Chenghui; Li, Ke; Zhang, Qi

    2017-11-01

    This paper addresses the current control of permanent magnet synchronous motors (PMSM) for electric drives with model uncertainties and disturbances. A generalized predictive current control method combined with sliding mode disturbance compensation is proposed to satisfy the requirements of fast response and strong robustness. First, according to generalized predictive control (GPC) theory based on the continuous-time model, a predictive current control method is presented without considering the disturbance; it is convenient to realize in a digital controller. In practice, however, it is difficult to derive the exact motor model and parameters. Thus, a sliding mode disturbance compensation controller is designed to improve the adaptiveness and robustness of the control system. The designed controller combines the merits of both predictive control and sliding mode control, and its parameters are easy to adjust. Finally, the proposed controller is tested on an interior PMSM by simulation and experiment, and the results indicate good performance in both current tracking and disturbance rejection. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
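
    A toy single-axis version of such a combined scheme is sketched below: a model-based predictive term drives the current toward its reference while a sliding-mode term absorbs the lumped disturbance. All parameter values are invented, and the control law is a simplification for illustration, not the paper's derivation.

    ```python
    import numpy as np

    # Invented single-axis motor parameters (true plant vs. controller model).
    R_true, L_true = 0.6, 2.0e-3        # actual resistance / inductance
    R_hat, L_hat = 0.5, 1.8e-3          # mismatched model used by the controller
    dt, T = 1e-5, 0.02                  # step size and horizon [s]
    k_p, k_s = 3000.0, 8.0              # predictive gain, sliding-mode gain

    i, i_ref = 0.0, 5.0                 # current [A] and its reference
    for step in range(int(T / dt)):
        t = step * dt
        e_dist = 2.0 * np.sin(2 * np.pi * 50 * t)   # unmodeled disturbance [V]
        err = i_ref - i
        # Model-based predictive term: cancel R*i and push the error to zero.
        u_pred = R_hat * i + L_hat * k_p * err
        # Sliding-mode compensation: bounded switching term on the error surface
        # (smoothed sign() to limit chattering).
        u_smc = k_s * np.tanh(err / 0.05)
        u = u_pred + u_smc
        # True plant: L di/dt = u - R*i - e_dist (Euler integration).
        i += dt * (u - R_true * i - e_dist) / L_true

    print(f"final current: {i:.3f} A (reference {i_ref} A)")
    ```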

  16. Extracting business vocabularies from business process models: SBVR and BPMN standards-based approach

    NASA Astrophysics Data System (ADS)

    Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis

    2013-10-01

    Approaches for the analysis and specification of business vocabularies and rules are highly relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in common Information Systems Development practice, business modeling activities are still mostly empirical in nature. In this paper, basic aspects of an approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business-modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics of Business Vocabulary and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.
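
    A minimal flavor of the extraction step: the sketch below pulls candidate vocabulary terms (names of tasks, data objects, and lanes) from a BPMN 2.0 XML file. The input file name is hypothetical, and real SBVR vocabulary generation involves far more linguistic processing.

    ```python
    import xml.etree.ElementTree as ET

    # BPMN 2.0 model namespace defined by the OMG specification.
    BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

    def candidate_terms(bpmn_file):
        """Collect named model elements as raw candidates for a business vocabulary."""
        root = ET.parse(bpmn_file).getroot()
        terms = set()
        for tag in ("task", "userTask", "serviceTask", "dataObject", "lane"):
            for el in root.iter(f"{{{BPMN_NS}}}{tag}"):
                name = el.get("name")
                if name:
                    terms.add(name.strip())
        return sorted(terms)

    # Hypothetical input file:
    # print(candidate_terms("order_process.bpmn"))
    ```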

  17. Monitoring and modeling as a continuing learning process: the use of hydrological models in a general probabilistic framework.

    NASA Astrophysics Data System (ADS)

    Baroni, G.; Gräff, T.; Reinstorf, F.; Oswald, S. E.

    2012-04-01

    Nowadays, uncertainty and sensitivity analyses are considered basic tools for the assessment of hydrological models and for the evaluation of the most important sources of uncertainty. In recent decades, several methods have been developed and applied under different hydrological conditions. In most cases, however, the studies have mainly investigated the influence of parameter uncertainty on the simulated outputs, and few approaches have also considered other sources of uncertainty, i.e., input data and model structure. Moreover, several constraints arise when spatially distributed parameters are involved. To overcome these limitations, a general probabilistic framework based on Monte Carlo simulations and the Sobol method has been proposed. In this study, the general probabilistic framework was applied at the field scale using a 1D physically based hydrological model (SWAP). Furthermore, the framework was extended to the catchment scale in combination with a spatially distributed hydrological model (SHETRAN). The models were applied at two experimental sites in Germany: a relatively flat cropped field close to Potsdam (Brandenburg) and a small mountainous catchment with agricultural land use (Schaefertal, Harz Mountains). For both cases, input and parameters are considered the major sources of uncertainty. Evaluation of the models was based on soil moisture measured at the plot scale at different depths and, for the catchment site, also on daily discharge values. The study shows how the framework can take into account all the various sources of uncertainty, i.e., input data, parameters (either in scalar or spatially distributed form), and model structure. The framework can be used in a loop to optimize further monitoring activities aimed at improving model performance. In these particular applications, the results show that the sources of uncertainty are specific to each process considered. The influence of the input data, as well as the presence of compensating errors, becomes clear from the different processes simulated.
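
    The Monte Carlo/Sobol machinery described here is widely available; below is a generic sketch using the SALib package (assuming its conventional saltelli/sobol API) on a toy stand-in for the hydrological model.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Toy stand-in for a hydrological model: three uncertain inputs, one output.
    problem = {
        "num_vars": 3,
        "names": ["precip_scale", "Ksat", "rooting_depth"],
        "bounds": [[0.8, 1.2], [1e-6, 1e-4], [0.2, 1.5]],
    }

    def toy_model(x):
        precip, ksat, depth = x
        return precip * np.log10(ksat * 1e6 + 1.0) + 0.5 * depth

    X = saltelli.sample(problem, 1024)   # Saltelli's extension of Sobol sampling
    Y = np.apply_along_axis(toy_model, 1, X)
    Si = sobol.analyze(problem, Y)       # first-order and total-order indices
    print(dict(zip(problem["names"], Si["S1"])))
    print(dict(zip(problem["names"], Si["ST"])))
    ```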

  18. Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds

    NASA Astrophysics Data System (ADS)

    Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.

    2015-12-01

    An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.
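
    For context, a classical Gaussian subgrid condensation closure (in the Sommeria-Deardorff tradition, formulated for conventional moisture variables rather than the entropy-based set discussed here) diagnoses cloud fraction from the mean and standard deviation of the saturation deficit:

    ```python
    import math

    def gaussian_cloud_fraction(qt_mean, qs, sigma_qt):
        """Cloud fraction for a Gaussian subgrid distribution of total water:
        C = 0.5 * (1 + erf(Q1 / sqrt(2))), with Q1 the normalized saturation deficit."""
        q1 = (qt_mean - qs) / sigma_qt
        return 0.5 * (1.0 + math.erf(q1 / math.sqrt(2.0)))

    # Grid box just below mean saturation, but with subgrid variability:
    print(gaussian_cloud_fraction(qt_mean=7.9e-3, qs=8.0e-3, sigma_qt=2e-4))  # ~0.31
    ```

    A scheme of this type produces partial cloudiness even when the grid mean is unsaturated, which is exactly the small-scale variability effect the abstract warns is lost when subgrid fluctuations are neglected.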

  19. General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Spirkovska, Lilly; Schwabacher, Mark

    2010-01-01

    Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking or model-based or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general-purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next-generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.
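
    IMS characterizes nominal behavior by clustering archived operations data and scores incoming vectors by their distance to the nearest nominal cluster. The sketch below illustrates that general idea with scikit-learn; it is not NASA's IMS code, and all sensor values are invented.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Nominal training data: rows are sensor vectors from normal operation.
    rng = np.random.default_rng(42)
    nominal = rng.normal(loc=[3.0, 50.0, 0.7], scale=[0.1, 2.0, 0.05], size=(1000, 3))

    # Characterize normal behavior as a set of clusters (the "knowledge base").
    kb = KMeans(n_clusters=8, n_init=10, random_state=0).fit(nominal)

    def anomaly_score(x):
        """Distance from a real-time sensor vector to the nearest nominal cluster."""
        return np.min(np.linalg.norm(kb.cluster_centers_ - x, axis=1))

    threshold = np.quantile([anomaly_score(v) for v in nominal], 0.999)
    print(anomaly_score(np.array([3.0, 51.0, 0.7])) > threshold)  # nominal -> expect False
    print(anomaly_score(np.array([3.0, 80.0, 0.7])) > threshold)  # off-nominal -> expect True
    ```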

  20. Shape-based approach for the estimation of individual facial mimics in craniofacial surgery planning

    NASA Astrophysics Data System (ADS)

    Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian

    2002-05-01

    Besides static soft tissue prediction, the estimation of basic facial emotion expressions is another important criterion for the evaluation of craniofacial surgery planning. A realistic simulation of facial mimics requires an adequate biomechanical model of soft tissue that includes the mimic musculature. In this work, we present an approach for modeling arbitrarily shaped muscles and estimating basic individual facial mimics, based on a geometrical model derived from individual tomographic data and on general finite element modeling of soft tissue biomechanics.

  1. Next Generation Transport Phenomenology Model

    NASA Technical Reports Server (NTRS)

    Strickland, Douglas J.; Knight, Harold; Evans, J. Scott

    2004-01-01

    This report describes the progress made in Quarter 3 of Contract Year 3 on the development of the Aeronomy Phenomenology Modeling Tool (APMT), an open-source, component-based, client-server architecture for distributed modeling, analysis, and simulation activities focused on electron and photon transport for general atmospheres. In the past quarter, column emission rate computations were implemented in Java, preexisting Fortran programs for computing synthetic spectra were embedded into APMT through Java wrappers, and work began on a web-based user interface for setting input parameters and running the photoelectron and auroral electron transport models.

  2. Predicting tree species presence and basal area in Utah: A comparison of stochastic gradient boosting, generalized additive models, and tree-based methods

    Treesearch

    Gretchen G. Moisen; Elizabeth A. Freeman; Jock A. Blackard; Tracey S. Frescino; Niklaus E. Zimmermann; Thomas C. Edwards

    2006-01-01

    Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in Rulequest's© See5 and Cubist (for binary and continuous responses,...

  3. Method of conditional moments (MCM) for the Chemical Master Equation: a unified framework for the method of moments and hybrid stochastic-deterministic models.

    PubMed

    Hasenauer, J; Wolf, V; Kazeroonian, A; Theis, F J

    2014-09-01

    The time-evolution of continuous-time discrete-state biochemical processes is governed by the Chemical Master Equation (CME), which describes the probability of the molecular counts of each chemical species. As the corresponding number of discrete states is, for most processes, large, a direct numerical simulation of the CME is in general infeasible. In this paper we introduce the method of conditional moments (MCM), a novel approximation method for the solution of the CME. The MCM employs a discrete stochastic description for low-copy number species and a moment-based description for medium/high-copy number species. The moments of the medium/high-copy number species are conditioned on the state of the low abundance species, which allows us to capture complex correlation structures arising, e.g., for multi-attractor and oscillatory systems. We prove that the MCM provides a generalization of previous approximations of the CME based on hybrid modeling and moment-based methods. Furthermore, it improves upon these existing methods, as we illustrate using a model for the dynamics of stochastic single-gene expression. This application example shows that due to the more general structure, the MCM allows for the approximation of multi-modal distributions.
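
    For reference, the CME discussed here has the standard form, for states x and reactions r with propensities a_r and stoichiometric change vectors ν_r:

    ```latex
    \frac{\partial p(x,t)}{\partial t}
      = \sum_{r=1}^{R} \Big[ a_r(x-\nu_r)\, p(x-\nu_r,t) - a_r(x)\, p(x,t) \Big]
    ```

    The MCM then replaces the full distribution p(x, t) by the marginal distribution of the low-copy species together with moments of the high-copy species conditioned on each low-copy state.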

  4. Comparisons of calculated respiratory tract deposition of particles based on the NCRP/ITRI model and the new ICRP66 model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, Hsu-Chi; Phalen, R.F.; Chang, I.

    1995-12-01

    The National Council on Radiation Protection and Measurements (NCRP) in the United States and the International Commission on Radiological Protection (ICRP) have been independently reviewing and revising respiratory tract dosimetry models for inhaled radioactive aerosols. The newly proposed NCRP respiratory tract dosimetry model represents a significant change in philosophy from the old ICRP Task Group model. The proposed NCRP model describes respiratory tract deposition, clearance, and dosimetry for radioactive substances inhaled by workers and the general public and is expected to be published soon. In support of the proposed NCRP model, ITRI staff members have been developing computer software. Although this software is still incomplete, the deposition portion has been completed and can be used to calculate inhaled particle deposition within the respiratory tract for particle sizes from as small as radon and radon progeny (~1 nm) to particles larger than 100 μm. Recently, ICRP published their new dosimetric model for the respiratory tract, ICRP66. Based on ICRP66, the National Radiological Protection Board of the UK developed PC-based software, LUDEP, for calculating particle deposition and internal doses. The purpose of this report is to compare the calculated respiratory tract deposition of particles using the NCRP/ITRI model and the ICRP66 model under the same particle size distributions and breathing conditions. In summary, the general trends of the deposition curves for the two models were similar.

  5. Theoretical Development of an Orthotropic Elasto-Plastic Generalized Composite Material Model

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Harrington, Joseph; Subramanian, Rajan; Blankenhorn, Gunther

    2014-01-01

    The need for accurate material models to simulate the deformation, damage, and failure of polymer matrix composites is becoming critical as these materials gain increased usage in the aerospace and automotive industries. While several composite material models are currently available within LS-DYNA (Registered), a number of features have been identified that could improve the predictive capability of a composite model. To address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed and implemented into LS-DYNA as MAT_213. A key feature of the improved material model is the use of tabulated stress-strain data in a variety of coordinate directions to fully define the stress-strain response of the material. To date, the model development efforts have focused on the plasticity portion of the model. The Tsai-Wu composite failure model has been generalized and extended to a strain-hardening-based orthotropic material model with a non-associative flow rule. The coefficients of the yield function, and the stresses to be used in both the yield function and the flow rule, are computed from the input stress-strain curves using the effective plastic strain as the tracking variable. The coefficients in the flow rule are likewise computed from the obtained stress-strain data. The developed material model is suitable for implementation within LS-DYNA for analyzing the nonlinear response of polymer composites.
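
    To illustrate the kind of quadratic orthotropic criterion being generalized, here is the classical plane-stress Tsai-Wu index in a short sketch; the strength values are invented, and MAT_213's tabulated, hardening-based formulation is considerably more elaborate.

    ```python
    import math

    def tsai_wu_plane_stress(s1, s2, t12, Xt, Xc, Yt, Yc, S):
        """Classical Tsai-Wu index: failure (or yield onset) when f >= 1.
        Xt/Xc and Yt/Yc are tensile/compressive strength magnitudes; S is shear strength."""
        F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
        F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
        F12 = -0.5 * math.sqrt(F11 * F22)   # common estimate of the interaction term
        return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
                + F66*t12**2 + 2*F12*s1*s2)

    # Invented unidirectional-composite strengths [MPa]:
    print(tsai_wu_plane_stress(s1=900.0, s2=20.0, t12=30.0,
                               Xt=1500.0, Xc=1200.0, Yt=40.0, Yc=200.0, S=70.0))
    ```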

  6. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    PubMed Central

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
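
    The core idea, a single rule implicitly generating many concrete reactions that all inherit one rate law, can be shown with a deliberately tiny sketch (hypothetical species and rate constant, not a real rule-based language such as BNGL or Kappa):

    ```python
    from itertools import combinations

    # Species are frozensets of bound monomers; one binding rule
    # ("any two complexes may join") implicitly defines every dimerization reaction.
    monomers = ["R1", "R2", "L1"]
    species = [frozenset([m]) for m in monomers]

    BIND_RATE = 1.0e6  # one rate law, inherited by every reaction the rule implies

    def apply_binding_rule(species_list):
        """Expand the single rule into its full set of concrete reactions."""
        reactions = []
        for a, b in combinations(species_list, 2):
            reactions.append((a, b, a | b, BIND_RATE))
        return reactions

    for a, b, prod, k in apply_binding_rule(species):
        print(f"{set(a)} + {set(b)} -> {set(prod)}  (k = {k:.1e})")
    ```

    Even this toy rule turns 3 species into 3 reactions without listing them by hand; with multi-site molecules the implicit network grows combinatorially, which is the motivation given above.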

  7. Efficient Information Access for Location-Based Services in Mobile Environments

    ERIC Educational Resources Information Center

    Lee, Chi Keung

    2009-01-01

    The demand for pervasive access of location-related information (e.g., local traffic, restaurant locations, navigation maps, weather conditions, pollution index, etc.) fosters a tremendous application base of "Location Based Services (LBSs)". Without loss of generality, we model location-related information as "spatial objects" and the accesses…

  8. A path-integral approach to the problem of time

    NASA Astrophysics Data System (ADS)

    Amaral, M. M.; Bojowald, Martin

    2018-01-01

    Quantum transition amplitudes are formulated for model systems with local internal time, using intuition from path integrals. The amplitudes are shown to be more regular near a turning point of internal time than could be expected based on existing canonical treatments. In particular, a successful transition through a turning point is provided in the model systems, together with a new definition of such a transition in general terms. Some of the results rely on a fruitful relation between the problem of time and general Gribov problems.

  9. Model-based metabolism design: constraints for kinetic and stoichiometric models

    PubMed Central

    Stalidzans, Egils; Seiman, Andrus; Peebo, Karl; Komasilovs, Vitalijs; Pentjuss, Agris

    2018-01-01

    The implementation of model-based designs in metabolic engineering and synthetic biology may fail. One of the reasons for this failure is that only a part of real-world complexity is included in models. Still, some knowledge can be simplified and taken into account in the form of optimization constraints to improve the feasibility of model-based designs of metabolic pathways in organisms. Some constraints (mass balance, energy balance, and the steady-state assumption) serve as a basis for many modelling approaches. Others (the total enzyme activity constraint and the homeostatic constraint) were proposed decades ago but are frequently ignored in design development. Several new approaches to cellular analysis have made possible the application of constraints such as cell size, surface area, and resource balance. Constraints for kinetic and stoichiometric models are grouped according to their applicability preconditions into (1) general constraints, (2) organism-level constraints, and (3) experiment-level constraints. General constraints are universal and applicable to any system. Organism-level constraints are applicable to biological systems and usually are organism-specific, but they can be applied without information about experimental conditions. To apply experiment-level constraints, the peculiarities of the organism and of the experimental set-up have to be taken into account to calculate the values of the constraints. The limitations on the applicability of particular constraints for kinetic and stoichiometric models are addressed. PMID:29472367
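
    The most basic of these constraints, steady-state mass balance S·v = 0 with flux bounds, underlies flux balance analysis; below is a minimal sketch on an invented toy network, with a capacity-style inequality standing in for an organism-level constraint.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: metabolites A, B; reactions
    # R1: -> A,  R2: A -> B,  R3: B -> (biomass proxy)
    S = np.array([[ 1, -1,  0],    # A balance
                  [ 0,  1, -1]])   # B balance

    bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds (uptake/enzyme capacity)
    c = [0, 0, -1]                          # maximize v3 (linprog minimizes)

    # General constraints only: steady-state mass balance plus bounds.
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal fluxes:", res.x)         # mass balance forces v1 = v2 = v3 = 10

    # An organism-level "total capacity" constraint layered on as an inequality.
    res2 = linprog(c, A_eq=S, b_eq=np.zeros(2), A_ub=[[1, 1, 1]], b_ub=[15],
                   bounds=bounds, method="highs")
    print("capacity-limited fluxes:", res2.x)  # v1 = v2 = v3 = 5
    ```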

  10. A Harris-Todaro Agent-Based Model to Rural-Urban Migration

    NASA Astrophysics Data System (ADS)

    Espíndola, Aquino L.; Silveira, Jaylson J.; Penna, T. J. P.

    2006-09-01

    The Harris-Todaro model of the rural-urban migration process is revisited under an agent-based approach. The migration of workers is interpreted as a process of social learning by imitation, formalized in a computational model. By simulating this model, we observe transitional dynamics with continuous growth of the urban fraction of the overall population toward an equilibrium. Such an equilibrium is characterized by stabilization of the rural-urban expected wage differential (the generalized Harris-Todaro equilibrium condition), urban concentration, and urban unemployment. These classic results, obtained originally by Harris and Todaro, are emergent properties of our model.
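
    A compact version of such an imitation-driven migration model can be sketched as follows; parameter values and functional forms are invented for illustration, and the original formalization is richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 10_000                        # workers; True = urban, False = rural
    urban = rng.random(N) < 0.2

    W_U = 2.0                         # fixed urban formal-sector wage
    JOBS_U = 3_000                    # fixed number of urban jobs
    for step in range(200):
        n_u = urban.sum()
        employment_rate = min(1.0, JOBS_U / max(n_u, 1))
        expected_urban_wage = W_U * employment_rate         # Harris-Todaro expectation
        rural_wage = (max(N - n_u, 1) / N) ** -0.3          # diminishing rural returns
        gap = expected_urban_wage - rural_wage
        p_urban = 1.0 / (1.0 + np.exp(-5.0 * gap))          # attractiveness of the city
        sampled = rng.random(N) < 0.05                      # workers who imitate/compare
        switch = sampled & np.where(urban,
                                    rng.random(N) < 1.0 - p_urban,
                                    rng.random(N) < p_urban)
        urban[switch] = ~urban[switch]

    print(f"urban share: {urban.mean():.2f}, "
          f"wage gap: {expected_urban_wage - rural_wage:+.3f}")
    ```

    In runs of this sketch the expected wage differential stabilizes near zero while a positive urban unemployment rate persists, matching the equilibrium characterization given above.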

  11. Economic communication model set

    NASA Astrophysics Data System (ADS)

    Zvereva, Olga M.; Berg, Dmitry B.

    2017-06-01

    This paper details findings from research aimed at investigating economic communications with agent-based models. The agent-based model set was engineered to simulate economic communications. Money, in the form of internal and external currencies, was introduced into the models to support exchanges in communications. Every model is based on the same general concept but has its own peculiarities in algorithm and input data, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and a real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in its dynamics, and system macroparameters were estimated. This research confirmed that combining an agent-based model with a mathematical model can produce a synergetic effect.
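
    The "theoretical" data sets mentioned above derive from the static Leontief equilibrium x = Ax + d; below is a minimal sketch (made-up coefficients) of computing the equilibrium output vector.

    ```python
    import numpy as np

    # Made-up input-output (technical coefficient) matrix and final demand.
    A = np.array([[0.2, 0.3],
                  [0.1, 0.4]])
    d = np.array([10.0, 20.0])

    # Static Leontief equilibrium: x = A x + d  =>  x = (I - A)^(-1) d
    x = np.linalg.solve(np.eye(2) - A, d)
    print(x)   # gross outputs consistent with the inter-industry exchanges
    ```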

  12. Upscaling from research watersheds: an essential stage of trustworthy general-purpose hydrologic model building

    NASA Astrophysics Data System (ADS)

    McNamara, J. P.; Semenova, O.; Restrepo, P. J.

    2011-12-01

    Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed are too specific to that watershed and not representative even of the larger watershed that contains it. Models developed from such partial observations may therefore not be suitable for general hydrologic use. Demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is thus essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose, process-based, distributed hydrologic system. In its applications and further development, we evaluate the scaling of model concepts and parameters across a wide range of hydrologic landscapes. All models, whether lumped or distributed, are based on a discretization concept. It is common practice to discretize watersheds into so-called hydrologic units or hydrologic landscapes assumed to have homogeneous hydrologic functioning. If a model structure is fixed, differences in hydrologic functioning (differences in hydrologic landscapes) should be reflected by specific sets of model parameters. Research watersheds make it possible to combine processes in reasonable detail into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. Here, by upscaling we mean not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km2) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results in terms of runoff and the variable states of soil and snow over the simulation period 2000-2009. The parameters of the model were hand-adjusted based on rational judgment, observational data, and the available understanding of the underlying processes. For the first run, some processes, such as the impact of riparian vegetation on runoff and streamflow/groundwater interaction, were handled in a conceptual way. It was shown that the Hydrograph model, which requires only a modest amount of parameter calibration, may also serve as a quality control for observations. Based on the parameter values obtained and the process understanding gained at the research watershed, the model was applied to larger watersheds located in a similar environment: the Boise River at South Fork (1660 km2) and at Twin Springs (2155 km2). The evaluation of the results of this upscaling will be presented.

  13. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Gotseff, Peter

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance models against measured data, as well as a comparison of satellite-based model clear sky results with measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that higher-time-resolution input parameters, rather than climatological, annual, or monthly mean input data, improve general clear sky model performance.
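
    For a sense of what a broadband clear sky model computes, here is the much simpler Haurwitz model (global horizontal irradiance from solar zenith angle alone); the Bird model evaluated in the report additionally accounts for aerosols, precipitable water, and ozone, and implementations are available in, e.g., the pvlib package.

    ```python
    import numpy as np

    def haurwitz_ghi(zenith_deg):
        """Haurwitz clear-sky global horizontal irradiance [W/m^2]:
        GHI = 1098 * cos(z) * exp(-0.059 / cos(z)), for the sun above the horizon."""
        cosz = np.cos(np.radians(zenith_deg))
        ghi = 1098.0 * cosz * np.exp(-0.059 / cosz)
        return np.where(cosz > 0, ghi, 0.0)

    # Clear-sky GHI at a few solar zenith angles [degrees]:
    print(haurwitz_ghi(np.array([0.0, 30.0, 60.0, 85.0])))
    ```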

  14. 40 CFR 93.158 - Criteria for determining conformity of general Federal actions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... requirements: (i) Specified in paragraph (b) of this section, based on areawide air quality modeling analysis and local air quality modeling analysis; or (ii) Meet the requirements of paragraph (a)(5) of this section and, for local air quality modeling analysis, the requirement of paragraph (b) of this section; (4...

  15. 40 CFR 93.158 - Criteria for determining conformity of general Federal actions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... requirements: (i) Specified in paragraph (b) of this section, based on areawide air quality modeling analysis and local air quality modeling analysis; or (ii) Meet the requirements of paragraph (a)(5) of this section and, for local air quality modeling analysis, the requirement of paragraph (b) of this section; (4...

  16. 40 CFR 93.158 - Criteria for determining conformity of general Federal actions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... requirements: (i) Specified in paragraph (b) of this section, based on areawide air quality modeling analysis and local air quality modeling analysis; or (ii) Meet the requirements of paragraph (a)(5) of this section and, for local air quality modeling analysis, the requirement of paragraph (b) of this section; (4...

  17. A General Multidimensional Model for the Measurement of Cultural Differences.

    ERIC Educational Resources Information Center

    Olmedo, Esteban L.; Martinez, Sergio R.

    A multidimensional model for measuring cultural differences (MCD) based on factor analytic theory and techniques is proposed. The model assumes that a cultural space may be defined by means of a relatively small number of orthogonal dimensions which are linear combinations of a much larger number of cultural variables. Once a suitable,…

  18. Mathematical model of water transport in Bacon and alkaline matrix-type hydrogen-oxygen fuel cells

    NASA Technical Reports Server (NTRS)

    Prokopius, P. R.; Easter, R. W.

    1972-01-01

    Based on general mass continuity and diffusive transport equations, a mathematical model was developed that simulates the transport of water in Bacon and alkaline-matrix fuel cells. The derived model was validated by using it to analytically reproduce various Bacon and matrix-cell experimental water transport transients.
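
    The diffusive-transport backbone of such a model can be illustrated with a generic explicit finite-difference solver for ∂c/∂t = D ∂²c/∂x²; the values below are illustrative only, not the cell geometry or coefficients of this work.

    ```python
    import numpy as np

    D = 1.0e-9            # diffusivity [m^2/s], illustrative
    L, nx = 1.0e-3, 51    # domain thickness [m] and grid points
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D   # explicit stability requires dt <= dx^2 / (2D)

    c = np.zeros(nx)
    c[0] = 1.0               # fixed concentration at one boundary

    for _ in range(20_000):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[-1] = c[-2]        # zero-flux condition at the other boundary

    print(c[::10])           # profile relaxing toward the boundary value
    ```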

  19. A Generalized Approach to Defining Item Discrimination for DCMs

    ERIC Educational Resources Information Center

    Henson, Robert; DiBello, Lou; Stout, Bill

    2018-01-01

    Diagnostic classification models (DCMs, also known as cognitive diagnosis models) hold the promise of providing detailed classroom information about the skills a student has or has not mastered. Specifically, DCMs are special cases of constrained latent class models where classes are defined based on mastery/nonmastery of a set of attributes (or…

  20. Assessment of the scale effect on statistical downscaling quality at a station scale using a weather generator-based model

    USDA-ARS?s Scientific Manuscript database

    The resolution of General Circulation Models (GCMs) is too coarse to assess the fine scale or site-specific impacts of climate change. Downscaling approaches including dynamical and statistical downscaling have been developed to meet this requirement. As the resolution of climate model increases, it...
