Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results under varied settings are presented, and our method is applied to the KIRBY21 test-retest dataset.
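For orientation, the scalar analogue of the GICC is the classical intra-class correlation computed from variance components. A minimal sketch on synthetic test-retest data, assuming a simple one-way random-effects model (this is the textbook ANOVA estimator, not the authors' MCMC-EM procedure for the GICC):

```python
# Classical one-way ICC from test-retest data; a univariate stand-in
# for the GICC idea, not the paper's multivariate probit-linear model.
import numpy as np

def icc_oneway(scores):
    """scores: (n_subjects, k_sessions) array of repeated measurements."""
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)               # between-subject mean square
    msw = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))   # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
subject_effect = rng.normal(0, 1.0, size=(20, 1))          # between-subject variance 1.0
data = subject_effect + rng.normal(0, 0.5, size=(20, 2))   # measurement noise variance 0.25
print(icc_oneway(data))   # should land near 1.0 / (1.0 + 0.25) = 0.8
```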
Magezi, David A
2015-01-01
Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
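As a rough Python analogue of the lme4 workflow the article reviews, statsmodels' MixedLM fits the same kind of random-intercept model; the data and column names (rt, condition, participant) below are illustrative, not taken from the article:

```python
# A within-participant random-intercept LMM fit with statsmodels,
# playing the role of lme4::lmer on simulated reaction-time data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 30, 40
subj = np.repeat(np.arange(n_subj), n_trials)
cond = np.tile(np.r_[np.zeros(n_trials // 2), np.ones(n_trials // 2)], n_subj)
intercepts = rng.normal(0, 50, n_subj)                      # random intercept per participant
rt = 500 + intercepts[subj] + 30 * cond + rng.normal(0, 40, subj.size)
df = pd.DataFrame({"rt": rt, "condition": cond, "participant": subj})

# rt ~ condition + (1 | participant) in lme4 notation
fit = smf.mixedlm("rt ~ condition", df, groups=df["participant"]).fit()
print(fit.summary())
```

In lme4 the equivalent call would be lmer(rt ~ condition + (1 | participant), data = df).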
Representing Learning With Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence, for instance, in diagnosis and expert systems, as a unified qualitative and quantitative framework for representing and reasoning with probabilities and independencies. Their development and use spans several fields including artificial intelligence, decision theory and statistics, and provides an important bridge between these communities. This paper shows by way of example that these models can be extended to machine learning, neural networks and knowledge discovery by representing the notion of a sample on the graphical model. Not only does this allow a flexible variety of learning problems to be represented, it also provides the means for representing the goal of learning and opens the way for the automatic development of learning algorithms from specifications.
Graphical Models for Ordinal Data
Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji
2014-01-01
A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
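The generative mechanism the paper assumes is easy to sketch: threshold a latent Gaussian whose concentration matrix carries the graph. The GraphicalLassoCV fit below on the raw ordinal scores is a crude stand-in for the paper's approximate EM-like algorithm, shown only to make the setup concrete:

```python
# Ordinal variables generated by discretizing a latent multivariate
# Gaussian with a chain-structured concentration matrix; a graphical
# lasso on the ordinal scores as a rough (not the paper's) estimator.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
# Tridiagonal concentration matrix -> chain graph on 5 nodes
K = np.eye(5) + np.diag(np.full(4, 0.4), 1) + np.diag(np.full(4, 0.4), -1)
latent = rng.multivariate_normal(np.zeros(5), np.linalg.inv(K), size=2000)
ordinal = np.digitize(latent, bins=[-0.5, 0.5])   # 3 ordered categories per variable

model = GraphicalLassoCV().fit(ordinal.astype(float))
print(np.round(model.precision_, 2))   # nonzeros should concentrate on the chain
```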
Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics
Nguyen, THT; Mouksassi, M-S; Holford, N; Al-Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E; Mentré, F
2017-01-01
This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052
Graphical Modeling Meets Systems Pharmacology
Lombardo, Rosario; Priami, Corrado
2017-01-01
A main source of failures in systems projects (including systems pharmacology) is poor communication and mismatched expectations among the stakeholders. A common, unambiguous language that is naturally comprehensible by all the players involved is a boost to success. We present bStyle, a modeling tool that adopts a graphical language close enough to cartoons to serve as a common medium for exchanging ideas and data, yet formal enough to enable modeling, analysis, and dynamic simulation of a system. Data analysis and simulation integrated in the same application are fundamental to understanding the mechanisms of action of drugs: a core aspect of systems pharmacology. PMID:28469411
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
Graphical Models via Univariate Exponential Family Distributions
Yang, Eunho; Ravikumar, Pradeep; Allen, Genevera I.; Liu, Zhandong
2016-01-01
Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general subclass of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions and a rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions. PMID:27570498
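A minimal sketch of the node-wise estimation idea, assuming synthetic Poisson counts and using an l1-penalized Poisson GLM per node as a simplified stand-in for the paper's M-estimators:

```python
# Node-wise neighborhood selection for a Poisson graphical model:
# regress each node on the others with an l1-penalized Poisson GLM
# and read edges off the nonzero coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n, p = 500, 5
z = rng.poisson(2.0, size=(n, 1))   # shared factor couples nodes 0 and 1
X = rng.poisson(1.0, size=(n, p)) + np.hstack([z, z, np.zeros((n, p - 2), int)])

edges = set()
for j in range(p):
    others = np.delete(np.arange(p), j)
    design = sm.add_constant(X[:, others])
    fit = sm.GLM(X[:, j], design, family=sm.families.Poisson()
                 ).fit_regularized(alpha=0.05, L1_wt=1.0)   # pure l1 penalty
    for k, coef in zip(others, fit.params[1:]):
        if abs(coef) > 1e-6:
            edges.add(tuple(sorted((j, k))))
print(edges)   # expect the (0, 1) edge to appear
```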
Graphical models for optimal power flow
Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; Vuffray, Marc; Misra, Sidhant
2016-09-13
Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
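A toy version of the tree-structured dynamic program is easy to write down; the node and edge costs below are placeholders for the actual OPF objective and power-flow coupling, and the grid discretizes a single nodal variable per bus:

```python
# Min-sum dynamic programming (message passing) over a tree with an
# interval-discretized nodal variable, in the spirit of the paper's
# approximation algorithm. Costs here are toy stand-ins.
import numpy as np

grid = np.linspace(0.9, 1.1, 21)                   # discretized nodal variable (e.g. voltage)
children = {0: [1, 2], 1: [], 2: []}               # tree rooted at node 0
node_cost = lambda i, v: (v - 1.0) ** 2            # toy local objective
edge_cost = lambda vp, vc: 10.0 * (vp - vc) ** 2   # toy coupling penalty

def message(i):
    """For each candidate parent value, the optimal cost of node i's subtree."""
    total = node_cost(i, grid) + sum(message(c) for c in children[i])
    # minimize over node i's value for every candidate parent value
    return np.min(total[None, :] + edge_cost(grid[:, None], grid[None, :]), axis=1)

subtree = node_cost(0, grid) + sum(message(c) for c in children[0])
print("optimal cost:", subtree.min(), "at root value:", grid[subtree.argmin()])
```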
Biochemical modeling with Systems Biology Graphical Notation.
Jansson, Andreas; Jirstrand, Mats
2010-05-01
The Systems Biology Graphical Notation (SBGN) is an emerging standard for graphical notation developed by an international systems biology community. Standardized graphical notation is crucial for efficient and accurate communication of biological knowledge between researchers with various backgrounds in the expanding field of systems biology. Here, we highlight SBGN from a practical point of view and describe how the user can build and simulate SBGN models from a simple drag-and-drop graphical user interface in PathwayLab.
Understanding human functioning using graphical models
2010-01-01
Background: Functioning and disability are universal human experiences. However, our current understanding of functioning from a comprehensive perspective is limited. The development of the International Classification of Functioning, Disability and Health (ICF) on the one hand and recent developments in graphical modeling on the other might be combined and open the door to a more comprehensive understanding of human functioning. The objective of our paper therefore is to explore how graphical models can be used in the study of ICF data for a range of applications. Methods: We show the applicability of graphical models on ICF data for different tasks: visualization of the dependence structure of the data set, dimension reduction, and comparison of subpopulations. Moreover, we further developed and applied recent findings in causal inference using graphical models to estimate bounds on intervention effects in an observational study with many variables and without knowing the underlying causal structure. Results: In each field, graphical models could be applied, giving results of high face validity. In particular, graphical models could be used for visualization of functioning in patients with spinal cord injury. The resulting graph consisted of several connected components which can be used for dimension reduction. Moreover, we found that the differences in the dependence structures between subpopulations were relevant and could be systematically analyzed using graphical models. Finally, when estimating bounds on causal effects of ICF categories on general health perceptions among patients with chronic health conditions, we found that the five ICF categories that showed the strongest effect were plausible. Conclusions: Graphical models are a flexible tool and lend themselves to a wide range of applications. In particular, studies involving ICF data seem to be suited for analysis using graphical models. PMID:20149230
Operations for Learning with Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1994-01-01
This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. These operations adapt existing techniques from statistics and automatic differentiation to graphs. Two standard algorithm schemes for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Some algorithms are developed in this graphical framework including a generalized version of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing some popular algorithms that fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms.
Probabilistic Graphical Model Representation in Phylogenetics
Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.
2014-01-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
Graphical Models and Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Almond, Russell G.; Mislevy, Robert J.
1999-01-01
Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)
Modelling structured data with Probabilistic Graphical Models
NASA Astrophysics Data System (ADS)
Forbes, F.
2016-05-01
Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for spatial localisation. These spatial interactions can be naturally encoded via a graph, which need not be as regular as a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modelling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations are given, associated with practical work.
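A minimal hidden-MRF sketch in this spirit: segment a noisy binary image with iterated conditional modes (ICM), a simple deterministic alternative to the fuller inference schemes the chapter covers. The observation model and smoothness weight are illustrative choices:

```python
# Hidden-MRF image segmentation by ICM: each pixel label trades off a
# Gaussian data term against agreement with its 4 grid neighbours.
import numpy as np

rng = np.random.default_rng(3)
true = np.zeros((40, 40), dtype=int)
true[10:30, 10:30] = 1                            # square foreground
obs = true + rng.normal(0, 0.8, true.shape)       # noisy observations

beta = 1.5                                        # smoothness weight
labels = (obs > 0.5).astype(int)                  # threshold initialisation
for _ in range(10):
    for i in range(1, 39):                        # borders left fixed for brevity
        for j in range(1, 39):
            neighbors = (labels[i-1, j], labels[i+1, j], labels[i, j-1], labels[i, j+1])
            costs = []
            for k in (0, 1):
                data_term = (obs[i, j] - k) ** 2 / (2 * 0.8 ** 2)
                smooth = beta * sum(k != n for n in neighbors)
                costs.append(data_term + smooth)
            labels[i, j] = int(np.argmin(costs))
print("pixel accuracy:", (labels == true).mean())
```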
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
Graphical Model Theory for Wireless Sensor Networks
Davis, William B.
2002-12-08
Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
Faculjak, D.A.
1988-03-01
Graphics Manager (GFXMGR) is menu-driven, user-friendly software designed to interactively create, edit, and delete graphics displays on the Advanced Electronics Design (AED) graphics controller, Model 767. The software runs on the VAX family of computers and has been used successfully in security applications to create and change site layouts (maps) of specific facilities. GFXMGR greatly benefits graphics development by minimizing display-development time, reducing tedium on the part of the user, and improving system performance. It is anticipated that GFXMGR can be used to create graphics displays for many types of applications. 8 figs., 2 tabs.
Data Analysis with Graphical Models: Software Tools
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1994-01-01
Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
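The exact Bayes factors mentioned here have closed forms for conjugate exponential-family models; a worked binomial example, comparing a point null theta = 0.5 against a Beta(a, b) alternative:

```python
# Exact Bayes factor for binomial data: Beta(a, b) alternative vs the
# point null theta = 0.5. The binomial coefficient cancels in the ratio.
from scipy.special import betaln
import numpy as np

def log_bf10(k, n, a=1.0, b=1.0):
    """log Bayes factor of the Beta(a, b) alternative over theta = 0.5."""
    log_marg_alt = betaln(k + a, n - k + b) - betaln(a, b)   # marginal likelihood under H1
    log_marg_null = n * np.log(0.5)                           # likelihood under H0
    return log_marg_alt - log_marg_null

print(np.exp(log_bf10(k=62, n=100)))   # ~2: mild evidence against theta = 0.5
```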
Planar graphical models which are easy
Chertkov, Michael; Chernyak, Vladimir
2009-01-01
We describe a rich family of binary-variable statistical mechanics models on planar graphs which are equivalent to Gaussian Grassmann graphical models (free fermions). Calculation of the partition function (weighted counting) in these models is easy (of polynomial complexity), as it reduces to evaluating determinants of matrices whose size is linear in the number of variables. In particular, this family of models covers the Holographic Algorithms of Valiant and extends the Gauge Transformations discussed in our previous works.
Software for Data Analysis with Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Roy, H. Scott
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
Item Screening in Graphical Loglinear Rasch Models
ERIC Educational Resources Information Center
Kreiner, Svend; Christensen, Karl Bang
2011-01-01
In behavioural sciences, local dependence and DIF are common, and purification procedures that eliminate items with these weaknesses often result in short scales with poor reliability. Graphical loglinear Rasch models (Kreiner & Christensen, in "Statistical Methods for Quality of Life Studies," ed. by M. Mesbah, F.C. Cole & M.T.…
Image segmentation with a unified graphical model.
Zhang, Lei; Ji, Qiang
2010-08-01
We propose a unified graphical model that can represent both the causal and noncausal relationships among random variables and apply it to the image segmentation problem. Specifically, we first propose to employ Conditional Random Field (CRF) to model the spatial relationships among image superpixel regions and their measurements. We then introduce a multilayer Bayesian Network (BN) to model the causal dependencies that naturally exist among different image entities, including image regions, edges, and vertices. The CRF model and the BN model are then systematically and seamlessly combined through the theories of Factor Graph to form a unified probabilistic graphical model that captures the complex relationships among different image entities. Using the unified graphical model, image segmentation can be performed through a principled probabilistic inference. Experimental results on the Weizmann horse data set, on the VOC2006 cow data set, and on the MSRC2 multiclass data set demonstrate that our approach achieves favorable results compared to state-of-the-art approaches as well as those that use either the BN model or CRF model alone.
Graphical models and automatic speech recognition
NASA Astrophysics Data System (ADS)
Bilmes, Jeff A.
2002-11-01
Graphical models (GMs) are a flexible statistical abstraction that has been successfully used to describe problems in a variety of different domains. Commonly used for ASR, hidden Markov models are only one example of the large space of models constituting GMs. Therefore, GMs are useful to understand existing ASR approaches and also offer a promising path towards novel techniques. In this work, several such ways are described, including (1) using both directed and undirected GMs to represent sparse Gaussian and conditional Gaussian distributions, (2) GMs for representing information fusion and classifier combination, (3) GMs for representing hidden articulatory information in a speech signal, (4) structural discriminability, where the graph structure itself is discriminative, and the difficulties that arise when learning discriminative structure, (5) switching graph structures, where the graph may change dynamically, and (6) language modeling. The graphical model toolkit (GMTK), a software system for general graphical-model based speech recognition and time series analysis, will also be described, including a number of GMTK's features that are specifically geared to ASR.
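Since HMMs are the GM instance singled out for ASR, a minimal forward-algorithm sketch for a discrete-observation HMM (toy transition and emission matrices):

```python
# Forward algorithm for a 2-state discrete-observation HMM: the
# sequence likelihood computed by recursive predict-and-weight steps.
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # state transition matrix
B = np.array([[0.7, 0.3], [0.1, 0.9]])   # emission probs, B[state, symbol]
pi = np.array([0.5, 0.5])                # initial state distribution
obs = [0, 0, 1, 1, 1]

alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]        # predict, then weight by likelihood
print("sequence likelihood:", alpha.sum())
```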
Graphical models for inferring single molecule dynamics
2010-01-01
Background The recent explosion of experimental techniques in single molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. This article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational Bayesian expectation maximization algorithm (VBEM). The discussion is illustrated by the example of single-molecule fluorescence resonance energy transfer (smFRET) versus time data, where the smFRET time series is modeled as a hidden Markov model (HMM) with Gaussian observables. A detailed description of smFRET is provided as well. Results The VBEM algorithm returns the model’s evidence and an approximating posterior parameter distribution given the data. The former provides a metric for model selection via maximum evidence (ME), and the latter a description of the model’s parameters learned from the data. ME/VBEM provide several advantages over the more commonly used approach of maximum likelihood (ML) optimized by the expectation maximization (EM) algorithm, the most important being a natural form of model selection and a well-posed (non-divergent) optimization problem. Conclusions The results demonstrate the utility of graphical modeling for inference of dynamic processes in single molecule biophysics. PMID:21034427
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; Carin, Lawrence; Cevher, Volkan
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
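A schematic of the spectral step this line of work builds on: by norm duality, the steepest-descent direction of a matrix gradient G = U S V^T under the Schatten-∞ norm is U V^T scaled by the nuclear norm of G. The quadratic loss and fixed step size below are toy choices, not the paper's majorization bounds:

```python
# Spectral (non-Euclidean) gradient step on a toy matrix least-squares
# problem: replace the raw gradient by its Schatten-infinity "sharp"
# direction, ||G||_nuclear * U V^T.
import numpy as np

rng = np.random.default_rng(4)
target = rng.normal(size=(8, 8))
W = np.zeros((8, 8))

for t in range(200):
    G = (W - target) + 0.1 * rng.normal(size=W.shape)   # noisy gradient of 0.5||W - T||^2
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    W -= 0.05 * s.sum() * (U @ Vt)                      # spectral step direction
print("final error:", np.linalg.norm(W - target))
```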
ERIC Educational Resources Information Center
Post, Susan
1975-01-01
An art teacher described an elective course in graphics which was designed to enlarge a student's knowledge of value, color, shape within a shape, transparency, line and texture. This course utilized the technique of working a multi-colored print from a single block that was first introduced by Picasso. (Author/RK)
On the graphical extraction of multipole mixing ratios of nuclear transitions
NASA Astrophysics Data System (ADS)
Rezynkina, K.; Lopez-Martens, A.; Hauschild, K.
2017-02-01
We propose a novel graphical method for determining the mixing ratios δ and their associated uncertainties for mixed nuclear transitions. It incorporates the uncertainties on both the measured and the theoretical conversion coefficients. The accuracy of the method has been studied by deriving the corresponding probability density function. The domains of applicability of the method are carefully defined.
Graphical model checking with correlated response data.
Pan, W; Connett, J E; Porzio, G C; Weisberg, S
2001-10-15
Correlated response data arise often in biomedical studies. The generalized estimating equation (GEE) approach is widely used in regression analysis for such data. However, there are few methods available to check the adequacy of regression models in GEE. In this paper, a graphical method is proposed based on Cook and Weisberg's marginal model plot. A bootstrap method is applied to obtain the reference band to assess statistical uncertainties in comparing two marginal mean functions. We also propose using the generalized additive model (GAM) in a similar fashion. The proposed two methods are easy to implement by taking advantage of existing smoothing and GAM software for independent data. The usefulness of the methodology is demonstrated through application to a correlated binary data set drawn from a clinical trial, the Lung Health Study.
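A minimal version of the proposed check, assuming synthetic clustered binary data: fit a GEE, then compare a lowess smooth of the observed responses with a smooth of the fitted means, both against the linear predictor (the bootstrap reference band is omitted for brevity):

```python
# Marginal-model-plot style check for a GEE fit on clustered binary data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(5)
n_clusters, m = 100, 5
x = rng.normal(size=n_clusters * m)
cluster = np.repeat(np.arange(n_clusters), m)
u = rng.normal(0, 0.7, n_clusters)[cluster]                 # within-cluster correlation
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x + u))))
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

fit = sm.GEE.from_formula("y ~ x", "cluster", df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()
eta = fit.params["Intercept"] + fit.params["x"] * df["x"]   # linear predictor
smooth_obs = lowess(df["y"], eta, frac=0.5)                 # smoothed observed responses
smooth_fit = lowess(fit.fittedvalues, eta, frac=0.5)        # smoothed fitted means
print(np.abs(smooth_obs[:, 1] - smooth_fit[:, 1]).max())    # small => adequate mean model
```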
The cluster graphical lasso for improved estimation of Gaussian graphical models
Tan, Kean Ming; Witten, Daniela; Shojaie, Ali
2015-01-01
The task of estimating a Gaussian graphical model in the high-dimensional setting is considered. The graphical lasso, which involves maximizing the Gaussian log likelihood subject to a lasso penalty, is a well-studied approach for this task. A surprising connection between the graphical lasso and hierarchical clustering is introduced: the graphical lasso in effect performs a two-step procedure, in which (1) single linkage hierarchical clustering is performed on the variables in order to identify connected components, and then (2) a penalized log likelihood is maximized on the subset of variables within each connected component. Thus, the graphical lasso determines the connected components of the estimated network via single linkage clustering. The single linkage clustering is known to perform poorly in certain finite-sample settings. Therefore, the cluster graphical lasso, which involves clustering the features using an alternative to single linkage clustering, and then performing the graphical lasso on the subset of variables within each cluster, is proposed. Model selection consistency for this technique is established, and its improved performance relative to the graphical lasso is demonstrated in a simulation study, as well as in applications to university webpage and gene expression data sets. PMID:25642008
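A minimal sketch of the two-step procedure, assuming average-linkage clustering as the alternative to single linkage and synthetic data with two independent blocks of variables:

```python
# Cluster graphical lasso, schematically: cluster variables by
# correlation distance, then run the graphical lasso within each cluster
# to build a block-diagonal precision estimate.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(6)
# two independent blocks of 4 correlated variables each
block = lambda: rng.normal(size=(200, 1)) + 0.5 * rng.normal(size=(200, 4))
X = np.hstack([block(), block()])
S = np.corrcoef(X, rowvar=False)

# step 1: average-linkage clustering on distance = 1 - |correlation|
Z = linkage(squareform(1 - np.abs(S), checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

# step 2: graphical lasso within each cluster
precision = np.zeros_like(S)
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    _, prec = graphical_lasso(S[np.ix_(idx, idx)], alpha=0.1)
    precision[np.ix_(idx, idx)] = prec
print(np.round(precision, 2))
```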
Connections between Graphical Gaussian Models and Factor Analysis
ERIC Educational Resources Information Center
Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.
2010-01-01
Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to detect whether any of the underlying assumptions are violated.
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
A Guide to the Literature on Learning Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Friedland, Peter (Technical Monitor)
1994-01-01
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and more generally, learning probabilistic graphical models. Because many problems in artificial intelligence, statistics and neural networks can be represented as a probabilistic graphical model, this area provides a unifying perspective on learning. This paper organizes the research in this area along methodological lines of increasing complexity.
Graphical tools for model selection in generalized linear models.
Murray, K; Heritier, S; Müller, S
2013-11-10
Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and comparison of many different selection criteria. Specifically, we describe for logistic regression, how to visualize measures of description loss and of model complexity to facilitate the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable plots for the model building process. We show with two case studies how these proposed tools are useful to learn more about important variables in the data and how these tools can assist the understanding of the model building process.
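The ingredients of a variable inclusion plot can be sketched directly: refit a penalized logistic regression on bootstrap resamples and record how often each variable enters. The article traces these proportions across penalty values; a single penalty is shown here, and all names are illustrative:

```python
# Bootstrap variable-inclusion proportions for l1-penalized logistic
# regression; the raw material of a variable inclusion plot.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 300, 8
X = rng.normal(size=(n, p))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]              # only the first two variables matter
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

inclusion = np.zeros(p)
for _ in range(200):
    idx = rng.integers(0, n, n)                    # bootstrap resample
    fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
    fit.fit(X[idx], y[idx])
    inclusion += (fit.coef_.ravel() != 0)
print(np.round(inclusion / 200, 2))                # high for the two true signals
```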
Interactive graphical model building using telepresence and virtual reality
Cooke, C.; Stansfield, S.
1993-10-01
This paper presents a prototype system developed at Sandia National Laboratories to create and verify computer-generated graphical models of remote physical environments. The goal of the system is to create an interface between an operator and a computer vision system so that graphical models can be created interactively. Virtual reality and telepresence are used to allow interaction between the operator, computer, and remote environment. A stereo view of the remote environment is produced by two CCD cameras. The cameras are mounted on a three degree-of-freedom platform which is slaved to a mechanically-tracked, stereoscopic viewing device. This gives the operator a sense of immersion in the physical environment. The stereo video is enhanced by overlaying the graphical model onto it. Overlay of the graphical model onto the stereo video allows visual verification of graphical models. Creation of a graphical model is accomplished by allowing the operator to assist the computer in modeling. The operator controls a 3-D cursor to mark objects to be modeled. The computer then automatically extracts positional and geometric information about the object and creates the graphical model.
Retrospective Study on Mathematical Modeling Based on Computer Graphic Processing
NASA Astrophysics Data System (ADS)
Zhang, Kai Li
Graphics and image making is an important field of computer application in which visualization software has been widely used because it is convenient and fast. However, modeling designers have found such software limited in function and flexibility because it provides no mathematical modeling platform. Non-visualization graphics software has since given graphics and image design a sound mathematical modeling platform. In this paper, a polished pyramid is constructed with a multivariate spline function algorithm, validating that non-visualization software performs well in mathematical modeling.
A general graphical user interface for automatic reliability modeling
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1991-01-01
Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
Multibody dynamics model building using graphical interfaces
NASA Technical Reports Server (NTRS)
Macala, Glenn A.
1989-01-01
In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer codes. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress on a more natural method of data input for dynamics programs, the graphical interface, is described.
A probabilistic graphical model based stochastic input model construction
Wan, Jiang; Zabaras, Nicholas
2014-09-01
Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models.
Highlights:
• Data-driven stochastic input models without the assumption of independence of the reduced random variables.
• The problem is transformed to a Bayesian network structure learning problem.
• Examples are given in flows in random media.
Accelerating molecular modeling applications with graphics processors.
Stone, John E; Phillips, James C; Freddolino, Peter L; Hardy, David J; Trabuco, Leonardo G; Schulten, Klaus
2007-12-01
Molecular mechanics simulations offer a computational approach to study the behavior of biomolecules at atomic detail, but such simulations are limited in size and timescale by the available computing resources. State-of-the-art graphics processing units (GPUs) can perform over 500 billion arithmetic operations per second, a tremendous computational resource that can now be utilized for general purpose computing as a result of recent advances in GPU hardware and software architecture. In this article, an overview of recent advances in programmable GPUs is presented, with an emphasis on their application to molecular mechanics simulations and the programming techniques required to obtain optimal performance in these cases. We demonstrate the use of GPUs for the calculation of long-range electrostatics and nonbonded forces for molecular dynamics simulations, where GPU-based calculations are typically 10-100 times faster than heavily optimized CPU-based implementations. The application of GPU acceleration to biomolecular simulation is also demonstrated through the use of GPU-accelerated Coulomb-based ion placement and calculation of time-averaged potentials from molecular dynamics trajectories. A novel approximation to Coulomb potential calculation, the multilevel summation method, is introduced and compared with direct Coulomb summation. In light of the performance obtained for this set of calculations, future applications of graphics processors to molecular dynamics simulations are discussed.
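The core kernel being accelerated, direct Coulomb summation on a lattice, is a plain all-pairs sum; a NumPy reference version follows (the GPU implementation evaluates the same sum with one grid point per thread), with toy atom positions and unit charges:

```python
# Direct Coulomb summation: potential V_j = sum_i q_i / |r_j - a_i|
# on a 3D lattice, vectorized over all grid-point/atom pairs.
import numpy as np

rng = np.random.default_rng(8)
atoms = rng.uniform(0, 10, size=(100, 3))          # toy atom positions
charges = rng.choice([-1.0, 1.0], size=100)        # toy unit charges

g = np.linspace(0, 10, 16)
grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
r = np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=-1)   # (4096, 100) distances
V = (charges[None, :] / np.maximum(r, 1e-6)).sum(axis=1)            # guard against r = 0
print(V.shape, V.min(), V.max())
```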
Lee, S; Richard Dimenna, R; David Tamburello, D
2008-11-13
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times and high maintenance and repair costs for the same reason. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and
Lee, S; Dimenna, R; Tamburello, D
2011-02-14
Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (e.g. typically {approx}13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators to answer the questions concerning sludge mixing stated above in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?
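The records above do not fix a single mixing indicator, but one simple candidate answering question (3) is the relative standard deviation (RSD) of solids concentration across monitoring points, with the tank declared mixed once the RSD falls below a tolerance. The sketch below is a hypothetical illustration of that criterion, not the SRS analysis code; the 5% tolerance is an assumed value.

```python
import numpy as np

def time_to_uniformity(conc_history, times, tolerance=0.05):
    """First time at which the solids concentration field is 'uniform'.

    conc_history : (T, P) concentrations at P probe locations over T steps
    times        : (T,) corresponding times
    tolerance    : RSD threshold defining a mixed condition (assumed 5%)
    """
    # Coefficient of variation across probes at each time step
    rsd = conc_history.std(axis=1) / conc_history.mean(axis=1)
    mixed = np.nonzero(rsd < tolerance)[0]
    return times[mixed[0]] if mixed.size else None  # None: never reaches uniformity
```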
Integrating Surface Modeling into the Engineering Design Graphics Curriculum
ERIC Educational Resources Information Center
Hartman, Nathan W.
2006-01-01
It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…
A Thermal Model Preprocessor For Graphics And Material Database Generation
NASA Astrophysics Data System (ADS)
Jones, Jack C.; Gonda, Teresa G.
1989-08-01
The process of developing a physical description of a target for thermal models is a time consuming and tedious task. The problem is one of data collection, data manipulation, and data storage. Information on targets can come from many sources and therefore could be in any form (2-D drawings, 3-D wireframe or solid model representations, etc.). TACOM has developed a preprocessor that decreases the time involved in creating a faceted target representation. This program allows the user to create the graphics for the vehicle and to assign the material properties to the graphics. The vehicle description file is then automatically generated by the preprocessor. By containing all the information in one database, the modeling process is made more accurate and data tracing can be done easily. A bridge to convert other graphics packages (such as BRL-CAD) to a faceted representation is being developed. When the bridge is finished, this preprocessor will be used to manipulate the converted data.
Teaching Geometry through Dynamic Modeling in Introductory Engineering Graphics.
ERIC Educational Resources Information Center
Wiebe, Eric N.; Branoff, Ted J.; Hartman, Nathan W.
2003-01-01
Examines how constraint-based 3D modeling can be used as a vehicle for rethinking instructional approaches to engineering design graphics. Focuses on moving from a mode of instruction based on the crafting by students and assessment by instructors of static 2D drawings and 3D models. Suggests that the new approach is better aligned with…
Transient thermoregulatory model with graphics output
NASA Technical Reports Server (NTRS)
Grounds, D. J.
1974-01-01
A user's guide is presented for the transient version of the thermoregulatory model. The model is designed to simulate the transient response of the human thermoregulatory system to thermal inputs. The model consists of 41 compartments over which the terms of the heat balance are computed. The control mechanisms identified are sweating, vasoconstriction, and vasodilation.
VR Lab ISS Graphics Models Data Package
NASA Technical Reports Server (NTRS)
Paddock, Eddie; Homan, Dave; Bell, Brad; Miralles, Evely; Hoblit, Jeff
2016-01-01
All the ISS models are saved in the AC3D model format, a text-based format that can be loaded into Blender and exported from there to other formats, including FBX. The models are saved in two different levels of detail, one labeled "LOWRES" and the other labeled "HIRES". Two ".str" files (HIRES_scene_load.str and LOWRES_scene_load.str) give the hierarchical relationship of the different nodes and the models associated with each node for both the "HIRES" and "LOWRES" model sets. All the images used for texturing are stored in Windows ".bmp" format for easy importing.
Greedy Learning of Graphical Models with Small Girth
2013-01-01
Graphical Models for Causation, and the Identification Problem
ERIC Educational Resources Information Center
Freedman, David A.
2004-01-01
This article (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional…
Learning Design Based on Graphical Knowledge-Modelling
ERIC Educational Resources Information Center
Paquette, Gilbert; Leonard, Michel; Lundgren-Cayrol, Karin; Mihaila, Stefan; Gareau, Denis
2006-01-01
This chapter states and explains that a Learning Design is the result of a knowledge engineering process where knowledge and competencies, learning design and delivery models are constructed in an integrated framework. We present a general graphical language and a knowledge editor that has been adapted to support the construction of learning…
MAGIC: Model and Graphic Information Converter
NASA Technical Reports Server (NTRS)
Herbert, W. C.
2009-01-01
MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.
Workflow modeling in the graphic arts and printing industry
NASA Astrophysics Data System (ADS)
Tuijn, Chris
2003-12-01
Over the last few years, a lot of effort has been spent on the standardization of the workflow in the graphic arts and printing industry. The main reasons for this standardization are two-fold: first of all, the need to represent all aspects of products, processes and resources in a uniform, digital framework and, secondly, the need to have different systems communicate with each other without having to implement dedicated drivers or protocols. For many years, a number of organizations in the IT sector have been busy developing models and languages on the topic of workflow modeling. In addition to the more formal methods (such as, e.g., extended finite state machines, Petri Nets, Markov Chains etc.) introduced a number of decades ago, more pragmatic methods have been proposed quite recently. We think here in particular of the activities of the Workflow Management Coalition that resulted in an XML-based Process Definition Language. Although one might be tempted to use the already established standards in the graphic environment, one should be well aware of the complexity and uniqueness of the graphic arts workflow. In this paper, we will show that it is quite hard, though not impossible, to model the graphic arts workflow using the already established workflow systems. After a brief summary of the graphic arts workflow requirements, we will show why the traditional models are less suitable to use. It will turn out that one of the main reasons for the incompatibility is that the graphic arts workflow is primarily resource driven; this means that the activation of processes depends on the status of different incoming resources. The fact that processes can start running with a partial availability of the input resources is a further complication that asks for additional knowledge at the process level. In the second part of this paper, we will discuss in more detail the different software components that are available in any graphic enterprise. In the last part, we will
GENI: A graphical environment for model-based control
NASA Astrophysics Data System (ADS)
Kleban, Stephen; Lee, Martin; Zambre, Yadunath
1990-08-01
A new method of operating machine-modeling and beam-simulation programs for accelerator control has been developed. Existing methods, although cumbersome, have been used in control systems for commissioning and operation of many machines. We developed GENI, a generalized graphical interface to these programs for model-based control. This "object-oriented"-like environment is described and some typical applications are presented.
Graphical Models as Surrogates for Complex Ground Motion Models
NASA Astrophysics Data System (ADS)
Vogel, K.; Riggelsen, C.; Kuehn, N.; Scherbaum, F.
2012-04-01
An essential part of the probabilistic seismic hazard analysis (PSHA) is the ground motion model, which estimates the conditional probability of a ground motion parameter, such as (horizontal) peak ground acceleration or spectral acceleration, given earthquake and site related predictor variables. For a reliable seismic hazard estimation the ground motion model has to keep the epistemic uncertainty small, while the aleatory uncertainty of the ground motion is covered by the model. In regions of well recorded seismicity the most popular modeling approach is to fit a regression function to the observed data, where the functional form is determined by expert knowledge. In regions where we lack a sufficient amount of data, it is popular to fit the regression function to a data set generated by a so-called stochastic model, which distorts the shape of a random time series according to physical principles to obtain a time series with properties that match ground-motion characteristics. The stochastic model does not have nice analytical properties, nor does it come in a form amenable to easy analytical handling and evaluation as needed for PSHA. Therefore a surrogate model, which describes the stochastic model in a more abstract sense (e.g. regression), is often used instead. We show how Directed Graphical Models (DGM) may be seen as a viable alternative to the classical regression approach. They describe a joint probability distribution of a set of variables, decomposing it into a product of (local) conditional probability distributions according to a directed acyclic graph. Graphical models have proven to be an all-round predictive/descriptive probabilistic framework for many problems. Their transparent nature is attractive from a domain perspective, allowing for a better understanding, and gives direct insight into the relationships and workings of a system. DGMs learn the dependency structure of the parameters from the data and do not need, but can include prior expert
Conditional graphical models for protein structural motif recognition.
Liu, Yan; Carbonell, Jaime; Gopalakrishnan, Vanathi; Weigele, Peter
2009-05-01
Determining protein structures is crucial to understanding the mechanisms of infection and designing drugs. However, the elucidation of protein folds by crystallographic experiments can be a bottleneck in the development process. In this article, we present a probabilistic graphical model framework, conditional graphical models, for predicting protein structural motifs. It represents the structure characteristics of a structural motif using a graph, where the nodes denote the secondary structure elements, and the edges indicate the side-chain interactions between the components either within one protein chain or between chains. The model then defines the optimal segmentation of a protein sequence against the graph by maximizing its "conditional" probability, so that it can take advantage of the discriminative training approach. Efficient approximate inference algorithms using a reversible jump Markov chain Monte Carlo (MCMC) algorithm are developed to handle the resulting complex graphical models. We test our algorithm on four important structural motifs, and our method outperforms other state-of-the-art algorithms for motif recognition. We also hypothesize potential membership proteins of target folds from Swiss-Prot, which further supports the evolutionary hypothesis about viral folds.
Detecting relationships between physiological variables using graphical models.
Imhoff, Michael; Fried, Ronald; Gather, Ursula
2002-01-01
In intensive care, physiological variables of the critically ill are measured and recorded in short time intervals. The proper extraction and interpretation of the information contained in this flood of data can hardly be done by experience alone. Intelligent alarm systems are needed to provide suitable bedside decision support. So far there is no commonly accepted standard for detecting the actual clinical state from the patient record. We use the statistical methodology of graphical models based on partial correlations for detecting time-varying relationships between physiological variables. Graphical models provide information on the relationships among physiological variables that is helpful, e.g., for variable selection. Separate analyses for different pathophysiological states show that distinct clinical states are characterized by distinct partial correlation structures. Hence, this technique can provide new insights into physiological mechanisms. PMID:12463843
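As a minimal illustration of the partial-correlation machinery such analyses rest on, the sketch below computes the partial correlation matrix by inverting the sample covariance, using the standard identity rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj) for the precision matrix Omega. The function and variable names are hypothetical, not the authors' code.

```python
import numpy as np

def partial_correlations(data):
    """Partial correlations of multivariate data (rows = observations).

    Entry (i, j) is the correlation of variables i and j after removing the
    linear effect of all remaining variables; edges in a graphical model of
    this kind correspond to entries judged nonzero.
    """
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)  # standard sign convention
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```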
GENI: A graphical environment for model-based control
Kleban, S.; Lee, M.; Zambre, Y.
1989-10-01
A new method to operate machine and beam simulation programs for accelerator control has been developed. Existing methods, although cumbersome, have been used in control systems for commissioning and operation of many machines. We developed GENI, a generalized graphical interface to these programs for model-based control. This object-oriented''-like environment is described and some typical applications are presented. 4 refs., 5 figs.
[Dynamic biliary manometry: display modelling and graphic interpretation].
Tkachuk, O L; Shevchuk, I M
2003-10-01
Trends in the development of biliary manometry are analyzed, and the key advantages and problems of manometric investigation of the biliary tract are summarized. A new method of graphic registration and monitoring of pressure in the biliary tract, termed dynamic biliary manometry, is proposed. Characteristic types of manometric curves were determined using test-stand modelling, their physical and mathematical analysis was conducted, and clinical analogues were suggested. The emphasis is placed on the expediency of further elaboration and clinical application of the method.
Probabilistic graphic models applied to identification of diseases.
Sato, Renato Cesar; Sato, Graziela Tiemy Kajita
2015-01-01
Decision-making is fundamental when making a diagnosis or choosing a treatment. The broad dissemination of computer systems and databases allows part of these decisions to be systematized through artificial intelligence. In this text, we present the basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used to make diagnoses of Alzheimer's disease, sleep apnea and heart diseases.
Identifying gene regulatory network rewiring using latent differential graphical models
Tian, Dechao; Gu, Quanquan; Ma, Jian
2016-01-01
Gene regulatory networks (GRNs) are highly dynamic among different tissue types. Identifying tissue-specific gene regulation is critically important to understand gene function in a particular cellular context. Graphical models have been used to estimate GRNs from gene expression data to distinguish direct interactions from indirect associations. However, most existing methods estimate GRNs for a specific cell/tissue type or in a tissue-naive way, or do not specifically focus on network rewiring between different tissues. Here, we describe a new method called Latent Differential Graphical Model (LDGM). The motivation of our method is to estimate the differential network between two tissue types directly without inferring the network for individual tissues, which has the advantage of utilizing a much smaller sample size to achieve reliable differential network estimation. Our simulation results demonstrated that LDGM consistently outperforms other Gaussian graphical model based methods. We further evaluated LDGM by applying it to brain and blood gene expression data from the GTEx consortium. We also applied LDGM to identify network rewiring between cancer subtypes using the TCGA breast cancer samples. Our results suggest that LDGM is an effective method to infer differential networks using high-throughput gene expression data to identify GRN dynamics among different cellular conditions. PMID:27378774
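For contrast with LDGM's direct estimation, the naive two-step alternative mentioned above (fit one network per tissue, then difference them) can be sketched in a few lines with scikit-learn's graphical lasso. This is the baseline approach the abstract argues against, not the LDGM algorithm, and the regularization and edge threshold are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def naive_differential_network(expr_a, expr_b, alpha=0.05, edge_tol=1e-3):
    """Two-step differential network: sparse precision matrix per tissue,
    then a difference. expr_a, expr_b: (samples, genes) expression matrices."""
    prec_a = GraphicalLasso(alpha=alpha).fit(expr_a).precision_
    prec_b = GraphicalLasso(alpha=alpha).fit(expr_b).precision_
    delta = prec_a - prec_b              # candidate rewired interactions
    return np.abs(delta) > edge_tol      # boolean adjacency of the rewiring
```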
Protein design by sampling an undirected graphical model of residue constraints.
Thomas, John; Ramakrishnan, Naren; Bailey-Kellogg, Chris
2009-01-01
This paper develops an approach for designing protein variants by sampling sequences that satisfy residue constraints encoded in an undirected probabilistic graphical model. Due to evolutionary pressures on proteins to maintain structure and function, the sequence record of a protein family contains valuable information regarding position-specific residue conservation and coupling (or covariation) constraints. Representing these constraints with a graphical model provides two key benefits for protein design: a probabilistic semantics enabling evaluation of possible sequences for consistency with the constraints, and an explicit factorization of residue dependence and independence supporting efficient exploration of the constrained sequence space. We leverage these benefits in developing two complementary MCMC algorithms for protein design: constrained shuffling mixes wild-type sequences positionwise and evaluates graphical model likelihood, while component sampling directly generates sequences by sampling clique values and propagating to other cliques. We apply our methods to design WW domains. We demonstrate that likelihood under a model of wild-type WWs is highly predictive of foldedness of new WWs. We then show both theoretical and rapid empirical convergence of our algorithms in generating high-likelihood, diverse new sequences. We further show that these sequences capture the original sequence constraints, yielding a model as predictive of foldedness as the original one.
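A position-wise Gibbs sampler over a pairwise sequence model gives the flavor of such sampling, though the paper's component sampler works on cliques of the graphical model rather than single positions. The score tables h and J below are hypothetical inputs, not parameters from the paper.

```python
import numpy as np

def gibbs_sample_sequence(h, J, n_sweeps=100, seed=0):
    """Sample a length-L sequence over q residue types from
    P(s) proportional to exp(sum_i h[i, s_i] + sum_{i<j} J[i, j, s_i, s_j]).

    h : (L, q) single-site scores; J : (L, L, q, q) symmetric couplings.
    """
    rng = np.random.default_rng(seed)
    L, q = h.shape
    seq = rng.integers(q, size=L)
    for _ in range(n_sweeps):
        for i in range(L):
            logp = h[i].copy()             # conditional score of each residue type
            for j in range(L):
                if j != i:
                    logp += J[i, j, :, seq[j]]
            p = np.exp(logp - logp.max())  # numerically stabilized softmax
            seq[i] = rng.choice(q, p=p / p.sum())
    return seq
```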
SN_GUI: a graphical user interface for snowpack modeling
NASA Astrophysics Data System (ADS)
Spreitzhofer, G.; Fierz, C.; Lehning, M.
2004-10-01
SNOWPACK is a physical snow cover model. The model not only serves as a valuable research tool, but also runs operationally on a network of high Alpine automatic weather and snow measurement sites. In order to facilitate the operation of SNOWPACK and the interpretation of the results obtained by this model, a user-friendly graphical user interface for snowpack modeling, named SN_GUI, was created. This Java-based and thus platform-independent tool can be operated in two modes, one designed to fulfill the requirements of avalanche warning services (e.g. by providing information about critical layers within the snowpack that are closely related to the avalanche activity), and the other one offering a variety of additional options satisfying the needs of researchers. The user of SN_GUI is graphically guided through the entire process of creating snow cover simulations. The starting point is the efficient creation of input parameter files for SNOWPACK, followed by the launching of SNOWPACK with a variety of parameter settings. Finally, after the successful termination of the run, a number of interactive display options may be used to visualize the model output. Among these are vertical profiles and time profiles for many parameters. Besides other features, SN_GUI allows the use of various color, time and coordinate scales, and the comparison of measured and observed parameters.
Implementing the lattice Boltzmann model on commodity graphics hardware
NASA Astrophysics Data System (ADS)
Kaufman, Arie; Fan, Zhe; Petkov, Kaloian
2009-06-01
Modern graphics processing units (GPUs) can perform general-purpose computations in addition to the native specialized graphics operations. Due to the highly parallel nature of graphics processing, the GPU has evolved into a many-core coprocessor that supports high data parallelism. Its performance has been growing at a rate of squared Moore's law, and its peak floating point performance exceeds that of the CPU by an order of magnitude. Therefore, it is a viable platform for time-sensitive and computationally intensive applications. The lattice Boltzmann model (LBM) computations are carried out via linear operations at discrete lattice sites, which can be implemented efficiently using a GPU-based architecture. Our simulations produce results comparable to the CPU version while improving performance by an order of magnitude. We have demonstrated that the GPU is well suited for interactive simulations in many applications, including simulating fire, smoke, lightweight objects in wind, jellyfish swimming in water, and heat shimmering and mirage (using the hybrid thermal LBM). We further advocate the use of a GPU cluster for large scale LBM simulations and for high performance computing. The Stony Brook Visual Computing Cluster has been the platform for several applications, including simulations of real-time plume dispersion in complex urban environments and thermal fluid dynamics in a pressurized water reactor. Major GPU vendors have been targeting the high performance computing market with GPU hardware implementations. Software toolkits such as NVIDIA CUDA provide a convenient development platform that abstracts the GPU and allows access to its underlying stream computing architecture. However, software programming for a GPU cluster remains a challenging task. We have therefore developed the Zippy framework to simplify GPU cluster programming. Zippy is based on global arrays combined with the stream programming model and it hides the low-level details of the
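The local, per-site structure that makes the LBM such a good GPU fit is visible in a CPU reference of a single D2Q9 BGK update. The sketch below uses NumPy with periodic boundaries and is an illustrative reduction, not the Zippy or CUDA code described above.

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities and their weights
E = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def lbm_step(f, tau=0.6):
    """One collide-and-stream BGK update of distributions f, shape (9, ny, nx)."""
    rho = f.sum(axis=0)                               # macroscopic density
    u = np.einsum('qi,qyx->iyx', E, f) / rho          # macroscopic velocity
    eu = np.einsum('qi,iyx->qyx', E, u)
    usq = (u ** 2).sum(axis=0)
    feq = W[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    f = f - (f - feq) / tau                           # BGK collision
    for q in range(9):                                # streaming, periodic wrap
        f[q] = np.roll(f[q], shift=(E[q, 1], E[q, 0]), axis=(0, 1))
    return f
```

Every lattice site updates from purely local data, which is why a one-thread-per-site GPU mapping yields the order-of-magnitude speedups reported.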
Dimension reduction for physiological variables using graphical modeling.
Imhoff, Michael; Fried, Roland; Gather, Ursula; Lanius, Vivian
2003-01-01
In intensive care, physiological variables of the critically ill are measured and recorded in short time intervals. The proper extraction and interpretation of the essential information contained in this flood of data can hardly be done by experience alone. Typically, decision making in intensive care is based on only a few selected variables. Alternatively, for a dimension reduction statistical latent variable techniques like principal component analysis or factor analysis can be applied. However, the interpretation of latent components extracted by these methods may be difficult. A more refined analysis is needed to provide suitable bedside decision support. Graphical models based on partial correlations provide information on the relationships among physiological variables that is helpful for variable selection and for identifying interpretable latent components. In a comparative study we investigate how much of the variability of the observed multivariate physiological time series can be explained by variable selection, by standard principal component analysis and by extracting latent components from groups of variables identified in a graphical model.
Ice-sheet modelling accelerated by graphics cards
NASA Astrophysics Data System (ADS)
Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek
2014-11-01
Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a guarantee not established in previous work. Our theoretical results are backed by thorough numerical studies.
Brain graphs: graphical models of the human brain connectome.
Bullmore, Edward T; Bassett, Danielle S
2011-01-01
Brain graphs provide a relatively simple and increasingly popular way of modeling the human brain connectome, using graph theory to abstractly define a nervous system as a set of nodes (denoting anatomical regions or recording electrodes) and interconnecting edges (denoting structural or functional connections). Topological and geometrical properties of these graphs can be measured and compared to random graphs and to graphs derived from other neuroscience data or other (nonneural) complex systems. Both structural and functional human brain graphs have consistently demonstrated key topological properties such as small-worldness, modularity, and heterogeneous degree distributions. Brain graphs are also physically embedded so as to nearly minimize wiring cost, a key geometric property. Here we offer a conceptual review and methodological guide to graphical analysis of human neuroimaging data, with an emphasis on some of the key assumptions, issues, and trade-offs facing the investigator.
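With networkx, the topological measures named above can be read off a thresholded connectivity matrix in a few lines. The fixed-density thresholding rule is one common convention, chosen here for illustration; all names are hypothetical.

```python
import numpy as np
import networkx as nx

def brain_graph_metrics(connectivity, density=0.1):
    """Keep the strongest edges of a (regions x regions) connectivity matrix
    up to the given edge density and report summary topology statistics."""
    n = connectivity.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    weights = connectivity[rows, cols]
    k = max(1, int(density * weights.size))
    keep = weights >= np.sort(weights)[-k]        # k strongest connections
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from(zip(rows[keep], cols[keep]))
    return {
        'mean_degree': 2 * G.number_of_edges() / n,
        'clustering': nx.average_clustering(G),
        'path_length': nx.average_shortest_path_length(G),  # assumes G connected
    }
```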
De novo protein conformational sampling using a probabilistic graphical model
NASA Astrophysics Data System (ADS)
Bhattacharya, Debswapna; Cheng, Jianlin
2015-11-01
Efficient exploration of protein conformational space remains challenging, especially for large proteins, when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.
MixSIAR: advanced stable isotope mixing models in R
Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...
Handling geophysical flows: Numerical modelling using Graphical Processing Units
NASA Astrophysics Data System (ADS)
Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario
2016-04-01
Computational tools may help engineers in the assessment of sediment transport during the decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The resulting numerical model accuracy was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the predictions of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for decreasing significantly the simulation time [3, 4]. The numerical scheme implemented in GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the Graphical Hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources. 71 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013). 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics. 225 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes Advances in Engineering Software. 78 1-15. [Lacasta
Modeling Mix in ICF Implosions
NASA Astrophysics Data System (ADS)
Weber, C. R.; Clark, D. S.; Chang, B.; Eder, D. C.; Haan, S. W.; Jones, O. S.; Marinak, M. M.; Peterson, J. L.; Robey, H. F.
2014-10-01
The observation of ablator material mixing into the hot spot of ICF implosions correlates with reduced yield in National Ignition Campaign (NIC) experiments. Higher Z ablator material radiatively cools the central hot spot, inhibiting thermonuclear burn. This talk focuses on modeling a "high-mix" implosion from the NIC, where greater than 1000 ng of ablator material was inferred to have mixed into the hot spot. Standard post-shot modeling of this implosion does not predict the large amounts of ablator mix necessary to explain the data. Other issues are explored in this talk and sensitivity to the method of radiation transport is found. Compared with radiation diffusion, Sn transport can increase ablation front growth and alter the blow-off dynamics of capsule dust. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Dynamics of Mental Model Construction from Text and Graphics
ERIC Educational Resources Information Center
Hochpöchler, Ulrike; Schnotz, Wolfgang; Rasch, Thorsten; Ullrich, Mark; Horz, Holger; McElvany, Nele; Baumert, Jürgen
2013-01-01
When students read for learning, they frequently are required to integrate text and graphics information into coherent knowledge structures. The following study aimed at analyzing how students deal with texts and how they deal with graphics when they try to integrate the two sources of information. Furthermore, the study investigated differences…
Graphic-based musculoskeletal model for biomechanical analyses and animation.
Chao, Edmund Y S
2003-04-01
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.
Effects of Alternative Chromatic Mixed Graphics Displays in Decision Support Systems.
1984-05-01
experiment. The key factors of the experimental design were determined by the components of Mason and Mitroff's (1973) definition of a DSS. ... and efficiency for the system designed (Dickson, Senn and Chervany, 1977, p. 913). The technical development of the color graphical displays is ahead
ACE: adaptive cluster expansion for maximum entropy graphical model inference.
Barton, J P; De Leonardis, E; Coucke, A; Cocco, S
2016-10-15
Graphical models are often employed to interpret patterns of correlations observed in data through a network of interactions between the variables. Recently, Ising/Potts models, also known as Markov random fields, have been productively applied to diverse problems in biology, including the prediction of structural contacts from protein sequence data and the description of neural activity patterns. However, inference of such models is a challenging computational problem that cannot be solved exactly. Here, we describe the adaptive cluster expansion (ACE) method to quickly and accurately infer Ising or Potts models based on correlation data. ACE avoids overfitting by constructing a sparse network of interactions sufficient to reproduce the observed correlation data within the statistical error expected due to finite sampling. When convergence of the ACE algorithm is slow, we combine it with a Boltzmann Machine Learning algorithm (BML). We illustrate this method on a variety of biological and artificial datasets and compare it to state-of-the-art approximate methods such as Gaussian and pseudo-likelihood inference. We show that ACE accurately reproduces the true parameters of the underlying model when they are known, and yields accurate statistical descriptions of both biological and artificial data. Models inferred by ACE more accurately describe the statistics of the data, including both the constrained low-order correlations and unconstrained higher-order correlations, compared to those obtained by faster Gaussian and pseudo-likelihood methods. These alternative approaches can recover the structure of the interaction network but typically not the correct strength of interactions, resulting in less accurate generative models. The ACE source code, user manual and tutorials with the example data and filtered correlations described herein are freely available on GitHub at https://github.com/johnbarton/ACE.
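As a point of reference for the fast approximations ACE is compared against, the naive mean-field inversion of the correlation matrix recovers Ising couplings in closed form. The sketch below shows that baseline, not the ACE cluster expansion itself; names and the clipping bound are illustrative.

```python
import numpy as np

def mean_field_ising(samples):
    """Naive mean-field inference for an Ising model from +/-1 spin samples
    (rows = configurations): couplings J ~ -C^{-1}, with C the connected
    correlation matrix, and fields from the mean-field consistency equation."""
    m = samples.mean(axis=0)                         # magnetizations
    C = np.cov(samples, rowvar=False)                # connected correlations
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)                         # no self-couplings
    h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m
    return J, h
```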
A Graphical Method for Assessing the Identification of Linear Structural Equation Models
ERIC Educational Resources Information Center
Eusebi, Paolo
2008-01-01
A graphical method is presented for assessing the state of identifiability of the parameters in a linear structural equation model based on the associated directed graph. We do not restrict attention to recursive models. In the recent literature, methods based on graphical models have been presented as a useful tool for assessing the state of…
Graphical User Interface for Simulink Integrated Performance Analysis Model
NASA Technical Reports Server (NTRS)
Durham, R. Caitlyn
2009-01-01
The J-2X engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and get the output from the Simulink model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.
Lartillot, Nicolas; Phillips, Matthew J; Ronquist, Fredrik
2016-07-19
Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall in two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions for its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue 'Dating species divergences using rocks and clocks'.
Overview of Neutrino Mixing Models and Their Mixing Angle Predictions
Albright, Carl H.
2009-11-01
An overview of neutrino-mixing models is presented with emphasis on the types of horizontal flavor and vertical family symmetries that have been invoked. Distributions for the mixing angles of many models are displayed. Ways to differentiate among the models and to narrow the list of viable models are discussed.
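For concreteness, the three mixing angles such models predict assemble into the lepton mixing (PMNS) matrix. The sketch below uses the standard parametrization with the CP phase set to zero (an assumed simplification) and evaluates the tribimaximal benchmark pattern that many flavor-symmetry models reproduce.

```python
import numpy as np

def pmns_matrix(theta12, theta13, theta23):
    """PMNS matrix U = R23 @ R13 @ R12 in the standard parametrization,
    with the Dirac CP phase set to zero. Angles in radians."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    r23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
    r13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
    r12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
    return r23 @ r13 @ r12

# Tribimaximal mixing: sin^2(theta12) = 1/3, theta13 = 0, theta23 = 45 degrees
print(np.round(pmns_matrix(np.arcsin(np.sqrt(1/3)), 0.0, np.pi / 4), 3))
```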
Understanding of Relation Structures of Graphical Models by Lower Secondary Students
ERIC Educational Resources Information Center
van Buuren, Onne; Heck, André; Ellermeijer, Ton
2016-01-01
A learning path has been developed on system dynamical graphical modelling, integrated into the Dutch lower secondary physics curriculum. As part of the developmental research for this learning path, students' understanding of the relation structures shown in the diagrams of graphical system dynamics based models has been investigated. One of our…
JACK - ANTHROPOMETRIC MODELING SYSTEM FOR SILICON GRAPHICS WORKSTATIONS
NASA Technical Reports Server (NTRS)
Smith, B.
1994-01-01
JACK is an interactive graphics program developed at the University of Pennsylvania that displays and manipulates articulated geometric figures. JACK is typically used to observe how a human mannequin interacts with its environment and what effects body types will have upon the performance of a task in a simulated environment. Any environment can be created, and any number of mannequins can be placed anywhere in that environment. JACK includes facilities to construct limited geometric objects, position figures, perform a variety of analyses on the figures, describe the motion of the figures and specify lighting and surface property information for rendering high quality images. JACK is supplied with a variety of body types pre-defined and known to the system. There are both male and female bodies, ranging from the 5th to the 95th percentile, based on NASA Standard 3000. Each mannequin is fully articulated and reflects the joint limitations of a normal human. JACK is an editor for manipulating previously defined objects known as "Peabody" objects. Used to describe the figures as well as the internal data structure for representing them, Peabody is a language with a powerful and flexible mechanism for representing connectivity between objects, both the joints between individual segments within a figure and arbitrary connections between different figures. Peabody objects are generally comprised of several individual figures, each one a collection of segments. Each segment has a geometry represented by PSURF files that consist of polygons or curved surface patches. Although JACK does not have the capability to create new objects, objects may be created by other geometric modeling programs and then translated into the PSURF format. Environment files are a collection of figures and attributes that may be dynamically moved under the control of an animation file. The animation facilities allow the user to create a sequence of commands that duplicate the movements of a
Alternating direction methods for latent variable gaussian graphical model selection.
Ma, Shiqian; Xue, Lingzhou; Zou, Hui
2013-08-01
Chandrasekaran, Parrilo, and Willsky (2012) proposed a convex optimization problem for graphical model selection in the presence of unobserved variables. This convex optimization problem aims to estimate an inverse covariance matrix that can be decomposed into a sparse matrix minus a low-rank matrix from sample data. Solving this convex optimization problem is very challenging, especially for large problems. In this letter, we propose two alternating direction methods for solving this problem. The first method is to apply the classic alternating direction method of multipliers to solve the problem as a consensus problem. The second method is a proximal gradient-based alternating-direction method of multipliers. Our methods take advantage of the special structure of the problem and thus can solve large problems very efficiently. A global convergence result is established for the proposed methods. Numerical results on both synthetic data and gene expression data show that our methods usually solve problems with 1 million variables in 1 to 2 minutes and are usually 5 to 35 times faster than a state-of-the-art Newton-CG proximal point algorithm.
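The two building blocks of such sparse-plus-low-rank splitting methods are the proximal maps applied at every iteration: entrywise soft-thresholding for the sparse part and singular value shrinkage for the low-rank part. The sketch below shows only these operators, not the full alternating direction scheme from the letter.

```python
import numpy as np

def soft_threshold(A, t):
    """Prox of t*||.||_1: entrywise shrinkage, promoting sparsity."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def singular_value_shrink(A, t):
    """Prox of t*||.||_* (nuclear norm): shrink singular values toward zero,
    promoting low rank."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```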
Graphical modeling for item difficulty in medical faculty exams.
Tomak, L; Bek, Y; Cengiz, M A
2016-01-01
There are different indexes used in the evaluation of exam results. One important index is the difficulty level of the item, which is also used in this study to obtain control charts. This article offers some suggestions for the improvement of multiple-choice tests using item analysis statistics. Graphical modeling is important for the rapid and comparative evaluation of test results. The control chart is a tool that can be used to sharpen our teaching and testing skills by inspecting the weaknesses of measurements and producing reliable items. The research data for the application of control charts were obtained from the results of the fourth- and fifth-year students' exams at Ondokuz Mayis University, Faculty of Medicine. An individuals chart (I-chart) or moving range chart (MR-chart) is preferred for individual variable measurements. All observations are within the control limits on the I-chart, but three points on the MR-chart lie on the LCL. Using an X-bar chart with subgroups, it was determined that control measurements were within the upper and lower limits in both charts. The difficulty levels of items were examined by obtaining different variable control charts. The difficulty level of two items exceeded the upper control limit in the R- and S-charts.
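The I- and MR-chart limits used in such an analysis follow from the mean moving range with the standard n = 2 constants (2.66 and 3.267). A short sketch of the computation over a vector of item difficulty indices, with hypothetical variable names:

```python
import numpy as np

def i_mr_limits(difficulty):
    """Control limits for an individuals (I) chart and a moving range (MR)
    chart over per-item difficulty indices."""
    x = np.asarray(difficulty, dtype=float)
    mr = np.abs(np.diff(x))                 # moving ranges of successive items
    mr_bar = mr.mean()
    return {
        'I':  (x.mean() - 2.66 * mr_bar, x.mean() + 2.66 * mr_bar),
        'MR': (0.0, 3.267 * mr_bar),        # LCL of an n = 2 MR chart is zero
    }
```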
A Gaussian graphical model approach to climate networks
Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus
2014-06-15
Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
Bayesian stable isotope mixing models
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
Viscoelastic Finite Difference Modeling Using Graphics Processing Units
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.
2014-12-01
Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, such as full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-domain finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely available 2D software of Bohlen (2002), provided under the GNU General Public License. The implementation uses a second-order centred-difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size
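For intuition, a minimal 1D velocity-stress staggered-grid update of order 2 is sketched below; the paper's actual scheme is 2D, viscoelastic (with memory variables), higher order, and GPU-parallel, so all values and names here are hypothetical:

```python
import numpy as np

# Minimal 1D elastic velocity-stress staggered-grid update (order 2 in space/time)
nx, nt = 200, 500
dx, dt = 5.0, 5e-4
rho, mu = 2000.0, 9e9                 # hypothetical density and shear modulus
v = np.zeros(nx)                      # particle velocity at integer grid points
s = np.zeros(nx)                      # stress at half-shifted grid points
src = nx // 2

for it in range(nt):
    # update velocity from the stress gradient (staggered in space and time)
    v[1:] += (dt / rho) * (s[1:] - s[:-1]) / dx
    v[src] += dt * np.exp(-((it * dt - 0.05) / 0.01) ** 2)   # Gaussian source
    # update stress from the velocity gradient
    s[:-1] += dt * mu * (v[1:] - v[:-1]) / dx
```

With these values the CFL number c*dt/dx is about 0.21 (c = sqrt(mu/rho) ≈ 2121 m/s), safely inside the stability limit of the scheme.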
Graphics development of DCOR: Deterministic combat model of Oak Ridge
Hunt, G.; Azmy, Y.Y.
1992-10-01
DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixelated) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
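A minimal sketch of the first step, greedy unit placement, under one reasonable reading of the description (units are assigned one by one to the gridpoint with the largest remaining expected number of units); the density values and names are hypothetical:

```python
import numpy as np

def distribute_units(density, n_units):
    """Assign units one at a time to the gridpoint with the largest
    remaining (not-yet-represented) expected number of units."""
    density = np.asarray(density, dtype=float)
    counts = np.zeros(density.shape, dtype=int)
    remaining = density * (n_units / density.sum())   # expected units per point
    for _ in range(n_units):
        i = int(np.argmax(remaining))
        counts[i] += 1
        remaining[i] -= 1.0                           # one unit now placed there
    return counts

print(distribute_units([0.1, 3.2, 1.7, 0.5], n_units=6))   # e.g. [0 3 2 1]
```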
A Graphical Analysis of the Cournot-Nash and Stackelberg Models.
ERIC Educational Resources Information Center
Fulton, Murray
1997-01-01
Shows how the Cournot-Nash and Stackelberg equilibria can be represented in the familiar supply-demand graphical framework, allowing a direct comparison with the monopoly, competitive, and industrial organization models. This graphical analysis is represented throughout the article. (MJP)
An Item Response Unfolding Model for Graphic Rating Scales
ERIC Educational Resources Information Center
Liu, Ying
2009-01-01
The graphic rating scale, a measurement tool used in many areas of psychology, usually takes a form of a fixed-length line segment, with both ends bounded and labeled as extreme responses. The raters mark somewhere on the line, and the length of the line segment from one endpoint to the mark is taken as the measure. An item response unfolding…
A Graphical Approach to the Standard Principal-Agent Model.
ERIC Educational Resources Information Center
Zhou, Xianming
2002-01-01
States the principal-agent theory is difficult to teach because of its technical complexity and intractability. Indicates the equilibrium in the contract space is defined by the incentive parameter and insurance component of pay under a linear contract. Describes a graphical approach that students with basic knowledge of algebra and…
Graphics modelling of non-contact thickness measuring robotics work cell
NASA Technical Reports Server (NTRS)
Warren, Charles W.
1990-01-01
A system was developed for measuring, in real time, the thickness of a sprayable insulation during its application. The system was graphically modelled, off-line, using a state-of-the-art graphics workstation and associated software. The model contains a 3D color representation of a workcell containing a robot and an air-bearing turntable. A communication link was established between the graphics workstation and the robot's controller. Sequences of robot motion generated by the computer simulation are transmitted to the robot for execution.
Top View of a Computer Graphic Model of the Opportunity Lander and Rover
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site] PIA05265
A computer graphics model of the Opportunity lander and rover is superimposed on the martian terrain where Opportunity landed.
Graphics-based intelligent search and abstracting using Data Modeling
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.
2002-11-01
This paper presents an autonomous text and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprised of the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.
A computer graphical user interface for survival mixture modelling of recurrent infections.
Lee, Andy H; Zhao, Yun; Yau, Kelvin K W; Ng, S K
2009-03-01
Recurrent infections data are commonly encountered in medical research, where the recurrent events are characterised by an acute phase followed by a stable phase after the index episode. Two-component survival mixture models, in both proportional hazards and accelerated failure time settings, are presented as a flexible method of analysing such data. To account for the inherent dependency of the recurrent observations, random effects are incorporated within the conditional hazard function, in the manner of generalised linear mixed models. Assuming a Weibull or log-logistic baseline hazard in both mixture components of the survival mixture model, an EM algorithm is developed for the residual maximum quasi-likelihood estimation of fixed effect and variance component parameters. The methodology is implemented as a graphical user interface coded in Microsoft Visual C++. Application to modelling recurrent urinary tract infections in elderly women is illustrated, where significant individual variations are evident at both acute and stable phases. The survival mixture methodology developed enables practitioners to identify pertinent risk factors affecting the recurrent times and to draw valid conclusions from these correlated and heterogeneous survival data.
NASA Technical Reports Server (NTRS)
Bidasaria, Hari
1989-01-01
The Ultra Network is recently installed, very high speed graphics hardware at NASA Langley Research Center. The Ultra Network, interfaced to Voyager through its HSX channel, is capable of transmitting up to 800 million bits of information per second. It can display fifteen to twenty frames per second of precomputed 1024 x 2368 images with 24 bits of color information per pixel. Modeling and rendering techniques are being developed in computer graphics and implemented on the Ultra hardware. A ray tracer is being developed for use at the Flight Software and Graphics Branch. Changes were made to make the ray tracer compatible with Voyager.
NASA Technical Reports Server (NTRS)
Perucchio, R.; Ingraffea, A. R.
1984-01-01
The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.
Höhna, Sebastian; Landis, Michael J.
2016-01-01
Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com. [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.] PMID:27235697
Model Verification of Mixed Dynamic Systems
NASA Technical Reports Server (NTRS)
Evensen, D. A.; Chrostowski, J. D.; Hasselman, T. K.
1982-01-01
MOVER uses experimental data to verify mathematical models of "mixed" dynamic systems. The term "mixed" refers to interactive mechanical, hydraulic, electrical, and other components. Program compares analytical transfer functions with experiment.
Brundage, Michael D; Smith, Katherine C; Little, Emily A; Bantug, Elissa T; Snyder, Claire F
2015-10-01
Patient-reported outcomes (PROs) promote patient-centered care by using PRO research results ("group-level data") to inform decision making and by monitoring individual patient's PROs ("individual-level data") to inform care. We investigated the interpretability of current PRO data presentation formats. This cross-sectional mixed-methods study randomized purposively sampled cancer patients and clinicians to evaluate six group-data or four individual-data formats. A self-directed exercise assessed participants' interpretation accuracy and ratings of ease-of-understanding and usefulness (0 = least to 10 = most) of each format. Semi-structured qualitative interviews explored helpful and confusing format attributes. We reached thematic saturation with 50 patients (44 % < college graduate) and 20 clinicians. For group-level data, patients rated simple line graphs highest for ease-of-understanding and usefulness (median 8.0; 33 % selected for easiest to understand/most useful) and clinicians rated simple line graphs highest for ease-of-understanding and usefulness (median 9.0, 8.5) but most often selected line graphs with confidence limits or norms (30 % for each format for easiest to understand/most useful). Qualitative results support that clinicians value confidence intervals, norms, and p values, but patients find them confusing. For individual-level data, both patients and clinicians rated line graphs highest for ease-of-understanding (median 8.0 patients, 8.5 clinicians) and usefulness (median 8.0, 9.0) and selected them as easiest to understand (50, 70 %) and most useful (62, 80 %). The qualitative interviews supported highlighting scores requiring clinical attention and providing reference values. This study has identified preferences and opportunities for improving on current formats for PRO presentation and will inform development of best practices for PRO presentation. Both patients and clinicians prefer line graphs across group-level data and individual
Graphical modeling of binary data using the LASSO: a simulation study
2012-01-01
Background Graphical models were identified as a promising new approach to modeling high-dimensional clinical data. They provide a probabilistic tool to display, analyze and visualize net-like dependence structures by drawing a graph describing the conditional dependencies between the variables. Until now, the main focus of research was on building Gaussian graphical models for continuous multivariate data following a multivariate normal distribution. Satisfactory solutions for binary data were missing. We adapted the method of Meinshausen and Bühlmann to binary data and used the LASSO for logistic regression. The objective of this paper was to examine the performance of the Bolasso for the development of graphical models for high-dimensional binary data. We hypothesized that the performance of the Bolasso is superior to competing LASSO methods for identifying graphical models. Methods We analyzed the Bolasso for deriving graphical models in comparison with other LASSO-based methods. Model performance was assessed in a simulation study with random data generated via symmetric local logistic regression models and Gibbs sampling. Main outcome variables were the Structural Hamming Distance and the Youden Index. We applied the results of the simulation study to real-life data on the functioning of patients with head and neck cancer. Results Bootstrap aggregating as incorporated in the Bolasso algorithm greatly improved performance at larger sample sizes. The number of bootstraps had minimal impact on performance. The Bolasso performed reasonably well with a cutpoint of 0.90 and a small penalty term. Optimal prediction for the Bolasso leads to very conservative models in comparison with AIC, BIC or cross-validated optimal penalty terms. Conclusions Bootstrap aggregating may improve variable selection if the underlying selection process is not too unstable due to small sample size and if one is mainly interested in reducing the false discovery rate. We propose using the
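A minimal sketch of Bolasso-style neighbourhood selection for one node, using bootstrap-aggregated ℓ1-penalized logistic regressions in scikit-learn; the cutpoint, penalty and data are hypothetical, not the paper's tuned values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bolasso_neighbourhood(X, node, n_boot=32, cutpoint=0.9, C=1.0, seed=0):
    """Keep as neighbours of `node` the variables selected by an L1 logistic
    regression in at least `cutpoint` of the bootstrap resamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    others = [j for j in range(p) if j != node]
    freq = np.zeros(len(others))
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)              # bootstrap resample
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X[idx][:, others], X[idx, node])
        freq += (np.abs(clf.coef_[0]) > 1e-8)
    return [others[k] for k in range(len(others)) if freq[k] / n_boot >= cutpoint]

# Hypothetical binary data: 300 observations of 6 binary variables
rng = np.random.default_rng(2)
X = (rng.random((300, 6)) > 0.5).astype(int)
print(bolasso_neighbourhood(X, node=0))
```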
Understanding of Relation Structures of Graphical Models by Lower Secondary Students
NASA Astrophysics Data System (ADS)
van Buuren, Onne; Heck, André; Ellermeijer, Ton
2016-10-01
A learning path on graphical modelling based on system dynamics has been developed and integrated into the Dutch lower secondary physics curriculum. As part of the developmental research for this learning path, students' understanding of the relation structures shown in the diagrams of graphical system-dynamics-based models has been investigated. One of our main findings is that only some students understand these structures correctly. Reality-based interpretation of the diagrams can conceal an incorrect understanding of diagram structures. As a result, students seemingly have no problems interpreting the diagrams until they are asked to construct a graphical model. Misconceptions have been identified that arise because the equations are not clearly communicated by the diagrams or because the icons used in the diagrams mislead novice modellers. Suggestions are made for improvements.
Word-level language modeling for P300 spellers based on discriminative graphical models.
Saa, Jaime F Delgado; Pesters, Adriana de; McFarland, Dennis; Çetin, Müjdat
2015-04-01
In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
Word-level language modeling for P300 spellers based on discriminative graphical models
NASA Astrophysics Data System (ADS)
Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat
2015-04-01
Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
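A minimal sketch of the word-level idea, reduced to a naive Bayes combination of a word prior with per-position letter likelihoods (the paper's discriminative graphical model is richer and corrects earlier letters jointly); the vocabulary and classifier scores are hypothetical:

```python
# Hypothetical vocabulary with prior word probabilities (the language model)
vocab = {"yes": 0.5, "yet": 0.3, "you": 0.2}

def word_posterior(letter_scores, vocab):
    """p(word | data) proportional to p(word) * prod_i p(data_i | letter_i)."""
    post = {}
    for word, prior in vocab.items():
        lik = 1.0
        for pos, letter in enumerate(word):
            lik *= letter_scores[pos].get(letter, 1e-6)   # unseen letters get tiny mass
        post[word] = prior * lik
    z = sum(post.values())
    return {w: p / z for w, p in post.items()}

# Hypothetical classifier outputs p(data | letter) for each of three positions
scores = [{"y": 0.9}, {"e": 0.6, "o": 0.4}, {"s": 0.5, "t": 0.4, "u": 0.1}]
print(word_posterior(scores, vocab))   # evidence for later letters reweights words
```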
Use and abuse of mixing models (MixSIAR)
Background/Question/MethodsCharacterizing trophic links in food webs is a fundamental ecological question. In our efforts to quantify energy flow through food webs, ecologists have increasingly used mixing models to analyze biological tracer data, often from stable isotopes. Whil...
Graphic model of the processes involved in the production of casegood furniture
Kristen G. Hoff; Subhash C. Sarin; R. Bruce Anderson
1992-01-01
Imports from foreign furniture manufacturers are on the rise, and American manufacturers must take advantage of recent technological advances to regain their lost market share. To facilitate the implementation of these technologies for improving productivity and quality, a graphic model of the wood furniture production process is presented using the IDEF modeling...
Three-dimensional interactive graphics for displaying and modelling microscopic data.
Basinski, M; Deatherage, J F
1990-09-01
EUCLID is a three-dimensional (3D) general purpose graphics display package for interactive manipulation of vector, surface and solid drawings on Evans and Sutherland PS300 series graphics processors. It is useful for displaying, comparing, measuring and modelling 3D microscopic images in real time. EUCLID can assemble groups of drawings into a composite drawing, while retaining the ability to operate upon the individual drawings within the composite drawing separately. EUCLID is capable of real time geometrical transformations (scaling, translation and rotation in two coordinate frames) and stereo and perspective viewing transformations. Because of its flexibility, EUCLID is especially useful for fitting models into 3D microscopic images.
PRay - A graphical user interface for interactive visualization and modification of rayinvr models
NASA Astrophysics Data System (ADS)
Fromm, T.
2016-01-01
PRay is a graphical user interface for interactively displaying and editing velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray-tracing results from other software. The main features are graphical editing of nodes and fast adjustment of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase-picking software. PRay is open-source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X, and provides a version-controlled source code repository for community development.
Graphical modelling of carbon nanotube field effect transistor
NASA Astrophysics Data System (ADS)
Sahoo, R.; Mishra, R. R.
2015-02-01
Carbon nanotube field effect transistors (CNTFETs) are found to be one of the most promising successors to the conventional Si-MOSFET. This paper presents a novel model for planar CNTFETs based on a curve-fitting method. The results obtained from the model are compared with simulated results obtained using the nanoHUB simulator. Finally, the accuracy of the model is discussed by calculating the normalized root mean square difference between the nanoHUB simulation results and those obtained from the proposed model.
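A minimal sketch of the general recipe, fitting a hypothetical I-V curve shape with scipy's curve_fit and reporting the normalized root mean square difference; the functional form and parameter values below are illustrative, not the paper's model:

```python
import numpy as np
from scipy.optimize import curve_fit

def ids_model(vds, a, b):
    """Hypothetical saturating drain-current shape used only for illustration."""
    return a * (1.0 - np.exp(-b * vds))

# Stand-in for simulator output: noisy samples of a saturating curve
vds = np.linspace(0.0, 1.0, 50)
ids_sim = 1e-5 * (1 - np.exp(-8 * vds)) + np.random.default_rng(3).normal(0, 2e-7, 50)

popt, _ = curve_fit(ids_model, vds, ids_sim, p0=[1e-5, 5.0])
resid = ids_sim - ids_model(vds, *popt)
nrmsd = np.sqrt(np.mean(resid**2)) / (ids_sim.max() - ids_sim.min())
print(popt, nrmsd)
```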
Graphical Means for Inspecting Qualitative Models of System Behaviour
ERIC Educational Resources Information Center
Bouwer, Anders; Bredeweg, Bert
2010-01-01
This article presents the design and evaluation of a tool for inspecting conceptual models of system behaviour. The basis for this research is the Garp framework for qualitative simulation. This framework includes modelling primitives, such as entities, quantities and causal dependencies, which are combined into model fragments and scenarios.…
A Graphical Model for Risk Analysis and Management
NASA Astrophysics Data System (ADS)
Wang, Xun; Williams, Mary-Anne
Risk analysis and management are important capabilities in intelligent information and knowledge systems. We present a new approach using directed-graph-based models for risk analysis and management. Our modelling approach is inspired by and builds on the two-level approach of the Transferable Belief Model: a credal level for risk analysis and model construction, which uses beliefs in causal inference relations among the variables within a domain, and a pignistic (betting) level for decision making. The risk model at the credal level can be transformed into a probabilistic model through a pignistic transformation function. This paper focuses on model construction at the credal level. Our modelling approach captures expert knowledge in a formal and iterative fashion based on the Open World Assumption (OWA), in contrast to Bayesian-network-based approaches for managing uncertainty associated with risks, which assume all the domain knowledge and data have been captured beforehand. As a result, our approach does not require complete knowledge and is well suited to modelling risk in dynamic, changing environments where information and knowledge are gathered over time as decisions need to be taken. Its performance is related to the quality of the knowledge at hand at any given time.
Greiner, Matthias; Smid, Joost; Havelaar, Arie H; Müller-Graf, Christine
2013-05-15
Quantitative microbiological risk assessment (QMRA) models are used to reflect knowledge about complex real-world scenarios for the propagation of microbiological hazards along the feed and food chain. The aim is to provide insight into interdependencies among model parameters, typically with an interest in characterising the effect of risk mitigation measures. A particular requirement is to achieve clarity about the reliability of conclusions from the model in the presence of uncertainty. To this end, Monte Carlo (MC) simulation modelling has become a standard in so-called probabilistic risk assessment. In this paper, we elaborate on the application of Bayesian computational statistics in the context of QMRA. It is useful to explore the analogy between MC modelling and Bayesian inference (BI). This pertains in particular to the procedures for deriving prior distributions for model parameters. We illustrate using a simple example that the inability to cope with feedback among model parameters is a major limitation of MC modelling. However, BI models can be easily integrated into MC modelling to overcome this limitation. We refer to a BI submodel integrated into an MC model as a "Bayes domain". We also demonstrate that an entire QMRA model can be formulated as a Bayesian graphical model (BGM) and discuss the advantages of this approach. Finally, we show example graphs of MC, BI and BGM models, highlighting the similarities among the three approaches.
A graphical vector autoregressive modelling approach to the analysis of electronic diary data
2010-01-01
Background In recent years, electronic diaries have increasingly been used in medical research and practice to investigate patients' processes and fluctuations in symptoms over time. To model dynamic dependence structures and feedback mechanisms between symptom-relevant variables, a multivariate time series method has to be applied. Methods We propose to analyse the temporal interrelationships among the variables by a structural modelling approach based on graphical vector autoregressive (VAR) models. We give a comprehensive description of the underlying concepts and explain how the dependence structure can be recovered from electronic diary data by a search over suitable constrained (graphical) VAR models. Results The graphical VAR approach is applied to the electronic diary data of 35 obese patients with and without binge eating disorder (BED). The dynamic relationships for the two subgroups between eating behaviour, depression, anxiety and eating control are visualized in two path diagrams. Results show that the two subgroups of obese patients with and without BED are distinguishable by the temporal patterns which influence their respective eating behaviours. Conclusion The use of the graphical VAR approach for the analysis of electronic diary data leads to a deeper insight into patients' dynamics and dependence structures. An increasing use of this modelling approach could lead to a better understanding of complex psychological and physiological mechanisms in different areas of medical care and research. PMID:20359333
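A minimal sketch of the building block, a least-squares VAR(1) fit whose near-zero lagged coefficients suggest absent directed edges; a real graphical VAR search additionally compares constrained models, and the data and threshold here are hypothetical:

```python
import numpy as np

def fit_var1(Y):
    """Least-squares fit of Y_t = A @ Y_{t-1} + e_t for a (T, p) series Y.
    Returns A with A[i, j] = effect of variable j at t-1 on variable i at t."""
    past, present = Y[:-1], Y[1:]
    A, *_ = np.linalg.lstsq(past, present, rcond=None)
    return A.T

# Hypothetical diary series: 60 days x 4 variables
# (eating behaviour, depression, anxiety, eating control)
rng = np.random.default_rng(4)
Y = rng.normal(size=(60, 4))
A = fit_var1(Y)
edges = np.abs(A) > 0.3   # hypothetical threshold for drawing a path diagram
```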
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated, moderate-dimensional data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Structural identifiability of cyclic graphical models of biological networks with latent variables.
Wang, Yulin; Lu, Na; Miao, Hongyu
2016-06-13
Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of the graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and
Automatic Construction of Anomaly Detectors from Graphical Models
Ferragut, Erik M; Darmon, David M; Shue, Craig A; Kelley, Stephen
2011-01-01
Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.
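A minimal sketch of the core idea, deriving anomaly detectors from a single trained probabilistic model by thresholding per-observation likelihoods; a Gaussian mixture stands in here for the paper's LDA model of TCP connections, and all data are hypothetical:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Train ONE probabilistic model of the data, then derive detectors from it.
rng = np.random.default_rng(9)
train = rng.normal(size=(5000, 4))            # hypothetical feature vectors
model = GaussianMixture(n_components=5, random_state=0).fit(train)

def make_detector(model, train, quantile=0.001):
    """Flag observations whose log-likelihood under the model is unusually low."""
    threshold = np.quantile(model.score_samples(train), quantile)
    return lambda X: model.score_samples(X) < threshold

detector = make_detector(model, train)
print(detector(np.array([[0, 0, 0, 0], [9, 9, 9, 9]])))   # expect [False, True]
```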
[The graphic model of algorithm for the abdominal part of the thoracic duct].
Zykov, D S
2001-01-01
Fifty-seven cadavers of adults were studied with a complex of anatomic techniques. The syntopy of the abdominal part of the thoracic duct was explored. Landmarks for reconstruction of the graphic model, allowing preoperative and intraoperative determination of an optimal access to the abdominal part of the thoracic duct, were suggested.
Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units
USDA-ARS?s Scientific Manuscript database
This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
A Monthly Water-Balance Model Driven By a Graphical User Interface
McCabe, Gregory J.; Markstrom, Steven L.
2007-01-01
This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
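A minimal sketch of the Thornthwaite potential evapotranspiration computation at the core of such a monthly water balance, omitting the day-length correction factors the full program applies; the monthly temperatures are hypothetical:

```python
import numpy as np

def thornthwaite_pet(monthly_temp_c):
    """Unadjusted Thornthwaite potential evapotranspiration (mm/month)."""
    T = np.maximum(np.asarray(monthly_temp_c, dtype=float), 0.0)
    I = np.sum((T / 5.0) ** 1.514)                        # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return 16.0 * (10.0 * T / I) ** a

temps = [1, 3, 7, 12, 17, 21, 24, 23, 19, 13, 7, 2]       # hypothetical site
print(np.round(thornthwaite_pet(temps), 1))
```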
Probabilistic assessment of agricultural droughts using graphical models
NASA Astrophysics Data System (ADS)
Ramadas, Meenu; Govindaraju, Rao S.
2015-07-01
Agricultural droughts are often characterized by soil moisture in the root zone of the soil, but crop needs are rarely factored into the analysis. Since water needs vary with crops, agricultural drought incidences in a region can be characterized better if crop responses to soil water deficits are also accounted for in the drought index. This study investigates agricultural droughts driven by plant stress due to soil moisture deficits using crop stress functions available in the literature. Crop water stress is assumed to begin at the soil moisture level corresponding to incipient stomatal closure, and reaches its maximum at the crop's wilting point. Using available location-specific crop acreage data, a weighted crop water stress function is computed. A new probabilistic agricultural drought index is then developed within a hidden Markov model (HMM) framework that provides model uncertainty in drought classification and accounts for time dependence between drought states. The proposed index allows probabilistic classification of the drought states and takes due cognizance of the stress experienced by the crop due to soil moisture deficit. The capabilities of HMM model formulations for assessing agricultural droughts are compared to those of current drought indices such as standardized precipitation evapotranspiration index (SPEI) and self-calibrating Palmer drought severity index (SC-PDSI). The HMM model identified critical drought events and several drought occurrences that are not detected by either SPEI or SC-PDSI, and shows promise as a tool for agricultural drought studies.
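A minimal sketch of fitting a hidden Markov model to a crop-water-stress series and reading off hard and probabilistic drought-state classifications, assuming the third-party hmmlearn package; the three states and the synthetic series are hypothetical:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumes the hmmlearn package is installed

# Hypothetical weekly crop-water-stress index (0 = no stress, 1 = wilting)
rng = np.random.default_rng(5)
stress = np.clip(rng.beta(2, 5, size=300).cumsum() % 1.0, 0, 1).reshape(-1, 1)

# Three hidden states, e.g. no drought / moderate / severe
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
model.fit(stress)
states = model.predict(stress)        # hard drought-state classification
probs = model.predict_proba(stress)   # probabilistic classification per week
```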
Graphical Modeling of Shipboard Electric Power Distribution Systems
1993-12-01
specified reference voltage. As can be seen in the Simulink model, this comparison is done in the qd0 reference frame. The reference voltage is [...] abc variables in order to eliminate the need for additional transformations (abc → qd0). The simulation uses voltage as the input and currents
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; ...
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
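For intuition, the deterministic core of an isotope mixing model with two tracers and three sources reduces to a small linear system; probabilistic methods such as SIRS add uncertainty around this. All compositions below are hypothetical:

```python
import numpy as np

# Hypothetical end-member compositions (rows: d15N, d18O; columns: sources A, B, C)
sources = np.array([[ 2.0, 10.0, 18.0],    # d15N of sources A, B, C
                    [-4.0,  3.0, 15.0]])   # d18O of sources A, B, C
mixture = np.array([9.0, 4.0])             # measured d15N, d18O of the sample

# Solve: sources @ f = mixture, subject to sum(f) = 1 (exactly determined here)
A = np.vstack([sources, np.ones(3)])
b = np.append(mixture, 1.0)
fractions = np.linalg.solve(A, b)
print(fractions)   # mixing fractions f_A, f_B, f_C = [0.5, 0.125, 0.375]
```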
Graphic modeling of epithelial transport system: causality of dissipation.
Imai, Yusuke
2003-06-01
The epithelial transport system is a thermodynamic system which is composed of membranes and fluid compartments. The membranes are assumed to be dissipative subsystems in which power dissipates, and fluid compartments are capacitive subsystems in which power is stored. Each subsystem can be subdivided into elementary thermodynamic processes, and can be represented by generalized capacitors, power transducers and resistors in a bond graph. In the modeling of the dissipative subsystem, the causality of the dissipative process was taken into consideration and the representation of power coupling was developed. The dissipative subsystem can be represented by a combination of coupling modules and conductors. Phenomenological equations with parameters from the model were derived. This study shows that the behavior of transport systems can be simulated using these equations.
A Practical Probabilistic Graphical Modeling Tool for Weighing ...
Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for ecological risk determinations. Probabilistic approaches can provide both a quantitative weighing of lines of evidence and methods for evaluating risk and uncertainty. The current modeling structure was developed for propagating uncertainties in measured endpoints and their influence on the plausibility of adverse effects. To illustrate the approach, we apply the model framework to the sediment quality triad using example lines of evidence for sediment chemistry measurements, bioassay results, and in situ infauna diversity of benthic communities using a simplified hypothetical case study. We then combine the three lines of evidence and evaluate sensitivity to the input parameters, and show how uncertainties are propagated and how additional information can be incorporated to rapidly update the probability of impacts. The developed network model can be expanded to accommodate additional lines of evidence, variables and states of importance, and different types of uncertainties in the lines of evidence including spatial and temporal as well as measurement errors. We provide a flexible Bayesian network structure for weighing and integrating lines of evidence for ecological risk determinations
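A minimal sketch of the quantitative weighing idea, reduced to combining independent likelihood ratios on the odds scale; the full approach uses a Bayesian network rather than this naive independence assumption, and all numbers are hypothetical:

```python
import numpy as np

def posterior_impact(prior, likelihood_ratios):
    """Posterior odds = prior odds * product of likelihood ratios,
    assuming conditionally independent lines of evidence."""
    odds = prior / (1.0 - prior)
    odds *= np.prod(likelihood_ratios)
    return odds / (1.0 + odds)

# Hypothetical likelihood ratios P(evidence | impact) / P(evidence | no impact)
# for sediment chemistry, bioassay, and benthic community lines of evidence
lrs = [4.0, 2.5, 0.8]
print(posterior_impact(prior=0.2, likelihood_ratios=lrs))
```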
Transition mixing study empirical model report
NASA Technical Reports Server (NTRS)
Srinivasan, R.; White, C.
1988-01-01
The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected into rectangular ducts.
Experiments with a low-cost system for computer graphics material model acquisition
NASA Astrophysics Data System (ADS)
Rushmeier, Holly; Lockerman, Yitzhak; Cartwright, Luke; Pitera, David
2015-03-01
We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.
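A minimal sketch of the high-frequency-pattern separation this system builds on, for the idealized case where each shifted pattern lights half the projector pixels (so the per-pixel maximum is direct plus half the global light, and the minimum is half the global light); the captures are hypothetical:

```python
import numpy as np

def separate_direct_global(images):
    """Per-pixel max/min over captures under shifted high-frequency patterns:
    direct ~= Lmax - Lmin, global ~= 2 * Lmin (half the pattern pixels lit)."""
    stack = np.stack(images)                 # shape (n_patterns, H, W)
    l_max, l_min = stack.max(axis=0), stack.min(axis=0)
    return l_max - l_min, 2.0 * l_min

# Hypothetical captures under six shifted checkerboard illumination patterns
rng = np.random.default_rng(8)
captures = [rng.random((4, 4)) for _ in range(6)]
direct, global_ = separate_direct_global(captures)
```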
Austin, Peter C; Steyerberg, Ewout W
2014-02-10
Predicting the probability of the occurrence of a binary outcome or condition is important in biomedical research. While assessing discrimination is an essential issue in developing and validating binary prediction models, less attention has been paid to methods for assessing model calibration. Calibration refers to the degree of agreement between observed and predicted probabilities and is often assessed by testing for lack-of-fit. The objective of our study was to examine the ability of graphical methods to assess the calibration of logistic regression models. We examined lack of internal calibration, which was related to misspecification of the logistic regression model, and external calibration, which was related to an overfit model or to shrinkage of the linear predictor. We conducted an extensive set of Monte Carlo simulations with a locally weighted least squares regression smoother (i.e., the loess algorithm) to examine the ability of graphical methods to assess model calibration. We found that loess-based methods were able to provide evidence of moderate departures from linearity and indicate omission of a moderately strong interaction. Misspecification of the link function was harder to detect. Visual patterns were clearer with higher sample sizes, higher incidence of the outcome, or higher discrimination. Loess-based methods were also able to identify the lack of calibration in external validation samples when an overfit regression model had been used. In conclusion, loess-based smoothing methods are adequate tools to graphically assess calibration and merit wider application.
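A minimal sketch of a loess-based calibration curve using the lowess smoother from statsmodels; a well calibrated model tracks the 45-degree line, and the simulated miscalibration below is hypothetical:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def calibration_curve(y_true, p_hat, frac=0.5):
    """Loess-smoothed observed event rate as a function of predicted probability."""
    sm = lowess(y_true, p_hat, frac=frac, return_sorted=True)
    return sm[:, 0], sm[:, 1]   # predicted probability, smoothed observed rate

# Hypothetical validation data with mild miscalibration (overfit-style flattening)
rng = np.random.default_rng(6)
p_hat = rng.uniform(0.02, 0.98, 2000)
y = rng.binomial(1, 0.5 + 0.8 * (p_hat - 0.5))   # observed rates flatter than ideal
x_cal, y_cal = calibration_curve(y, p_hat)       # plot against the diagonal to inspect
```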
VISUAL PLUMES MIXING ZONE MODELING SOFTWARE
The U.S. Environmental Protection Agency has a long history of both supporting plume model development and providing mixing zone modeling software. The Visual Plumes model is the most recent addition to the suite of public-domain models available through the EPA-Athens Center f...
Extended Generalized Linear Latent and Mixed Model
ERIC Educational Resources Information Center
Segawa, Eisuke; Emery, Sherry; Curry, Susan J.
2008-01-01
The generalized linear latent and mixed modeling (GLLAMM) framework includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…
Inouye, David I; Ravikumar, Pradeep; Dhillon, Inderjit S
2016-06-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York, modeled as an exponential distribution, is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix, a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times.
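A minimal sketch of the node-wise neighborhood-selection idea the estimation method rests on; for simplicity it uses a Gaussian lasso per node (scikit-learn) rather than the exponential/Poisson node conditionals of SQR models, and all names are ours:

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_graph(X, alpha=0.1):
    """Recover a graphical-model structure by l1-regularized regression
    of each variable on all the others (neighborhood selection). X: n x p."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        adj[j, others] = coef != 0
    return adj | adj.T  # symmetrize with the OR rule
```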
A new test and graphical tool to assess the goodness of fit of logistic regression models.
Nattino, Giovanni; Finazzi, Stefano; Bertolini, Guido
2016-02-28
A prognostic model is well calibrated when it accurately predicts event rates. This is first determined by testing for goodness of fit with the development dataset. All existing tests and graphical tools designed for this purpose suffer from several drawbacks, related mainly to the subgrouping of observations or to heavy dependence on arbitrary parameters. We propose a statistical test and a graphical method to assess the goodness of fit of logistic regression models, obtained through an extension of similar techniques developed for external validation. We analytically computed and numerically verified the distribution of the underlying statistic. Simulations on a set of realistic scenarios show that this test and the well-known Hosmer-Lemeshow approach have similar type I error rates. The main advantage of this new approach is that the relationship between model predictions and outcome rates across the range of probabilities can be represented in the calibration belt plot, together with its statistical confidence. By readily spotting any deviations from the perfect fit, this new graphical tool is designed to identify, during the process of model development, poorly modeled variables that call for further investigation. This is illustrated through an example based on real data. Copyright © 2015 John Wiley & Sons, Ltd.
GRAPHICAL USER INTERFACE WITH APPLICATIONS IN SUSCEPTIBLE-INFECTIOUS-SUSCEPTIBLE MODELS.
Ilea, M; Turnea, M; Arotăriţei, D; Rotariu, Mariana; Popescu, Marilena
2015-01-01
Practical significance of understanding the dynamics and evolution of infectious diseases increases continuously in the contemporary world. The mathematical study of the dynamics of infectious diseases has a long history. By incorporating statistical methods and computer-based simulations in dynamic epidemiological models, modeling methods and theoretical analyses can be made more realistic and reliable, allowing a more detailed understanding of the rules governing epidemic spreading. To provide the basis for disease transmission, the population of a region is often divided into various compartments, and the model governing their relation is called the compartmental model. To present all of the information available, a graphical user interface provides icons and visual indicators. The graphical interface shown in this paper is implemented in MATLAB ver. 7.6.0. MATLAB offers a wide range of techniques by which data can be displayed graphically. The process of data viewing involves a series of operations; to achieve it, three separate files were made, one defining the mathematical model and two for the interface itself. Considering a fixed population, it is observed that the number of susceptible individuals diminishes as the number of infectious individuals increases, so that in about ten days the numbers of infected and susceptible individuals reach the same value. If the epidemic is not controlled, it will continue for an indefinite period of time. By changing the global parameters specific to the SIS model, a more rapid increase of infectious individuals is noted. Using the graphical user interface shown in this paper helps achieve a much easier interaction with the computer, simplifying the structure of complex instructions by using icons and menus; in particular, programs and files are much easier to organize. Some numerical simulations are presented to illustrate the theoretical results.
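The dynamics the abstract describes (susceptibles falling as infectious individuals rise, then an uncontrolled epidemic persisting indefinitely) follow from the standard SIS equations. A minimal sketch, in Python rather than the paper's MATLAB, with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import odeint

def sis(y, t, beta, gamma):
    """SIS compartmental model: susceptibles are infected at rate
    beta*S*I/N and infectious individuals recover back to susceptible
    at rate gamma."""
    S, I = y
    N = S + I
    dS = -beta * S * I / N + gamma * I
    dI = beta * S * I / N - gamma * I
    return [dS, dI]

t = np.linspace(0.0, 60.0, 600)                 # days
sol = odeint(sis, [990.0, 10.0], t, args=(0.5, 0.1))
# With beta > gamma the infection does not die out: I(t) approaches the
# endemic equilibrium I* = N * (1 - gamma/beta), matching the abstract's
# "indefinite" epidemic when left uncontrolled.
```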
Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong
2017-08-15
Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge less likely exists between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. The source code is at https://github.com/Zhangxf-ccnu/pDNA. szuouyl@gmail.com. Supplementary data are available at Bioinformatics online.
ERIC Educational Resources Information Center
Crow, Wendell C.
This paper suggests ways in which manifest, physical attributes of graphic elements can be described and measured. It also proposes a preliminary conceptual model that accounts for the readily apparent, measurable variables in a visual message. The graphic elements that are described include format, typeface, and photographs/artwork. The…
Vinciotti, Veronica; Augugliaro, Luigi; Abbruzzo, Antonino; Wit, Ernst C
2016-06-01
Factorial Gaussian graphical models (fGGMs) have recently been proposed for inferring dynamic gene regulatory networks from genomic high-throughput data. In the search for true regulatory relationships amongst the vast space of possible networks, these models allow the imposition of certain restrictions on the dynamic nature of these relationships, such as Markov dependencies of low order (some entries of the precision matrix are a priori zero) or equal dependency strengths across time lags (some entries of the precision matrix are assumed to be equal). The precision matrix is then estimated by l1-penalized maximum likelihood, imposing a further constraint on the absolute value of its entries, which results in sparse networks. Selecting the optimal sparsity level is a major challenge for this type of approach. In this paper, we evaluate the performance of a number of model selection criteria for fGGMs by means of two simulated regulatory networks from realistic biological processes. The analysis reveals a good performance of fGGMs in comparison with other methods for inferring dynamic networks, and of the KLCV criterion in particular for model selection. Finally, we present an application to high-resolution time-course microarray data from the Neisseria meningitidis bacterium, a causative agent of life-threatening infections such as meningitis. The methodology described in this paper is implemented in the R package sglasso, freely available at CRAN, http://CRAN.R-project.org/package=sglasso.
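The core estimation step, l1-penalized maximum likelihood for a precision matrix, can be sketched without the factorial zero/equality constraints (those need the sglasso package itself). A plain graphical-lasso version in Python, on synthetic data, with the sparsity level chosen by cross-validation:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 15))      # n samples x p genes (synthetic)

model = GraphicalLassoCV().fit(X)       # cross-validated penalty level
precision = model.precision_            # sparse precision (concentration) matrix
edges = np.argwhere(np.triu(precision != 0, k=1))  # inferred network edges
```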
A stacked graphical model for associating sub-images with sub-captions.
Kou, Zhenzhen; Cohen, William W; Murphy, Robert F
2007-01-01
There is extensive interest in mining data from full text. We have built a system called SLIF (for Subcellular Location Image Finder), which extracts information on one particular aspect of biology from a combination of text and images in journal articles. Associating the information from the text and image requires matching sub-figures with the sentences in the text. We introduce a stacked graphical model, a meta-learning scheme to augment a base learner by expanding features based on related instances, to match the labels of sub-figures with labels of sentences. The experimental results show a significant improvement in the matching accuracy of the stacked graphical model (81.3%) as compared with a relational dependency network (70.8%) or the current algorithm in SLIF (64.3%).
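A toy sketch of the stacking idea: a base learner's predictions on related instances are folded back in as expanded features and the learner is retrained. The relational structure, names, and single stacking round are our assumptions; a faithful implementation would use cross-validated predictions to avoid training-label leakage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stacked_fit(X, y, neighbors):
    """One stacking round: fit a base learner, expand each instance's
    features with the mean base prediction over its related instances
    (e.g., the sentences linked to a sub-figure), then refit."""
    base = LogisticRegression().fit(X, y)
    p = base.predict_proba(X)[:, 1]
    rel = np.array([p[list(idx)].mean() if len(idx) else 0.5
                    for idx in neighbors])
    X_exp = np.column_stack([X, rel])   # expanded feature set
    return LogisticRegression().fit(X_exp, y)
```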
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-12-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
From least squares to multilevel modeling: A graphical introduction to Bayesian inference
NASA Astrophysics Data System (ADS)
Loredo, Thomas J.
2016-01-01
This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
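In the spirit of the promised Python examples, a grid-approximation posterior for a Poisson rate shows Bayes's theorem, the law of total probability, and marginalization in a few lines; the counts and prior here are purely illustrative, not material from the talk:

```python
import numpy as np

counts = np.array([3, 7, 4, 6, 5])        # hypothetical photon counts
rate = np.linspace(0.01, 20.0, 2000)      # grid over the Poisson rate

log_like = counts.sum() * np.log(rate) - len(counts) * rate  # Poisson log-likelihood
prior = 1.0 / rate                        # illustrative scale prior
post = np.exp(log_like - log_like.max()) * prior
post /= np.trapz(post, rate)              # normalize: law of total probability

mean = np.trapz(rate * post, rate)        # posterior summary via marginalization
```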
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
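Under this model, unmixing is a projection onto the convex hull of the endmembers. A small sketch of abundance estimation with nonnegativity and a softly enforced sum-to-one constraint, assuming an endmember matrix of shape (bands, m); the function name and weighting trick are ours, not the report's:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, w=1e3):
    """Estimate abundances a with pixel ~ endmembers @ a, a >= 0 and
    sum(a) = 1, so the pixel lies in the convex hull of the endmember
    spectra. The sum-to-one constraint is enforced softly by appending
    a heavily weighted row of ones."""
    A = np.vstack([endmembers, w * np.ones(endmembers.shape[1])])
    b = np.append(pixel, w)
    abundances, _ = nnls(A, b)
    return abundances
```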
Oishi, Makoto; Fukuda, Masafumi; Hiraishi, Tetsuya; Yajima, Naoki; Sato, Yosuke; Fujii, Yukihiko
2012-09-01
The purpose of this paper is to report on the authors' advanced presurgical interactive virtual simulation technique using a 3D computer graphics model for microvascular decompression (MVD) surgery. The authors performed interactive virtual simulation prior to surgery in 26 patients with trigeminal neuralgia or hemifacial spasm. The 3D computer graphics models for interactive virtual simulation were composed of the brainstem, cerebellum, cranial nerves, vessels, and skull, individually created by image analysis, including segmentation, surface rendering, and data fusion, for data collected by 3-T MRI and 64-row multidetector CT systems. Interactive virtual simulation was performed by employing novel computer-aided design software with manipulation of a haptic device to imitate the surgical procedures of bone drilling and retraction of the cerebellum. The findings were compared with intraoperative findings. In all patients, interactive virtual simulation provided detailed and realistic surgical perspectives, of sufficient quality, representing the lateral suboccipital route. The causes of trigeminal neuralgia or hemifacial spasm determined by observing 3D computer graphics models were concordant with those identified intraoperatively in 25 (96%) of 26 patients, which was a significantly higher rate than the 73% concordance rate (concordance in 19 of 26 patients) obtained by review of 2D images only (p < 0.05). Surgeons evaluated interactive virtual simulation as having "prominent" utility for carrying out the entire surgical procedure in 50% of cases. It was evaluated as moderately useful or "supportive" in the other 50% of cases. There were no cases in which it was evaluated as having no utility. The utilities of interactive virtual simulation were associated with atypical or complex forms of neurovascular compression and structural restrictions in the surgical window. Finally, MVD procedures were performed as simulated in 23 (88%) of the 26 patients.
Learned graphical models for probabilistic planning provide a new class of movement primitives
Rückert, Elmar A.; Neumann, Gerhard; Toussaint, Marc; Maass, Wolfgang
2013-01-01
Biological movement generation combines three interesting aspects: its modular organization in movement primitives (MPs), its characteristics of stochastic optimality under perturbations, and its efficiency in terms of learning. A common approach to motor skill learning is to endow the primitives with dynamical systems. Here, the parameters of the primitive indirectly define the shape of a reference trajectory. We propose an alternative MP representation based on probabilistic inference in learned graphical models; this representation has new and interesting properties and complies with salient features of biological movement control. Instead of endowing the primitives with dynamical systems, we propose to endow MPs with an intrinsic probabilistic planning system, integrating the power of stochastic optimal control (SOC) methods within an MP. The parameterization of the primitive is a graphical model that represents the dynamics and intrinsic cost function such that inference in this graphical model yields the control policy. We parameterize the intrinsic cost function using task-relevant features, such as the importance of passing through certain via-points. The system dynamics as well as the intrinsic cost function parameters are learned in a reinforcement learning (RL) setting. We evaluate our approach on a complex 4-link balancing task. Our experiments show that our movement representation facilitates learning significantly and leads to better generalization to new task settings without re-learning. PMID:23293598
Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P
1994-02-01
We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and
Cylindrical Mixing Layer Model in Stellar Jet
NASA Astrophysics Data System (ADS)
Choe, Seung-Urn; Yu, Kyoung Hee
1994-12-01
We have developed a cylindrical mixing layer model of a stellar jet, including the cooling effect, in order to understand the optical emission mechanism along collimated high-velocity stellar jets associated with young stellar objects. The cylindrical results are the same as the 2D ones presented by Canto & Raga (1991) because the entrainment efficiency in our cylindrical model takes the same value as in the 2D model. We discuss the morphological and physical characteristics of the mixing layers under the cooling effect. As the jet Mach number increases, the initial temperature of the mixing layer rises because the kinetic energy of the jet partly converts to the thermal energy of the mixing layer. The initial cooling of the mixing layer is very severe, changing its outer boundary radius; a subsequent change becomes adiabatic. The number of Mach disks in the stellar jet and the total radiative luminosity of the mixing layer, based on our cylindrical calculation, agree well with the observations.
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
The effects of a dynamic graphical model during simulation-based training of console operation skill
NASA Technical Reports Server (NTRS)
Farquhar, John D.; Regian, J. Wesley
1993-01-01
LOADER is a Windows-based simulation of a complex procedural task. The task requires subjects to execute long sequences of console-operation actions (e.g., button presses, switch actuations, dial rotations) to accomplish specific goals. The LOADER interface is a graphical computer-simulated console which controls railroad cars, tracks, and cranes in a fictitious railroad yard. We hypothesized that acquisition of LOADER performance skill would be supported by the representation of a dynamic graphical model linking console actions to goals and goal states in the 'railroad yard'. Twenty-nine subjects were randomly assigned to one of two treatments (i.e., dynamic model or no model). During training, both groups received identical text-based instruction in an instructional window above the LOADER interface. One group, however, additionally saw a dynamic version of the bird's-eye view of the railroad yard. After training, both groups were tested under identical conditions. They were asked to perform the complete procedure without guidance and without access to either type of railroad-yard representation. Results indicate that rather than becoming dependent on the animated rail-yard model, subjects in the dynamic model condition apparently internalized the model, as evidenced by their performance after the model was removed.
Predictive models of large neutrino mixing angles
Barr, S.M.
1997-02-01
Several experimental results could be interpreted as evidence that certain neutrino mixing angles are large, of order unity. However, in the context of grand unified models the neutrino angles come out characteristically to be small, like the KM angles. It is shown how to construct simple grand-unified models in which neutrino angles are not only large but completely predicted with some precision. Six models are presented for illustration. © 1997 The American Physical Society
A Module for Graphical Display of Model Results with the CBP Toolbox
Smith, F.
2015-04-21
This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to add enhanced graphical capabilities to display model results in the Cementitious Barriers Project (CBP) Toolbox. Because Version 2.0 of the CBP Toolbox has just been released, the graphing enhancements described in this report have not yet been integrated into a new version of the Toolbox. Instead they have been tested using a standalone GoldSim model and, while they are substantially complete, may undergo further refinement before full implementation. Nevertheless, this report is issued to document the FY14 development efforts which will provide a basis for further development of the CBP Toolbox.
Node-Structured Integrative Gaussian Graphical Model Guided by Pathway Information
Kim, SungHwan; Jhong, Jae-Hwan
2017-01-01
To date, many biological pathways related to cancer have come into wide use thanks to the output of burgeoning biomedical research. This leads to a new technical challenge: exploring and validating biological pathways that can characterize transcriptomic mechanisms across different disease subtypes. In pursuit of accommodating multiple studies, the joint Gaussian graphical model was previously proposed to incorporate nonzero edge effects. However, this model is inevitably dependent on post hoc analysis to confirm biological significance. To circumvent this drawback, we attempt not only to combine transcriptomic data but also to embed pathway information, well-ascertained biological evidence as such, into the model. To this end, we propose a novel statistical framework for fitting a joint Gaussian graphical model simultaneously with informative pathways consistently expressed across multiple studies. In theory, structured nodes can be prespecified with multiple genes. The optimization rule employs the structured input-output lasso model in order to estimate a sparse precision matrix constructed by simultaneous effects of multiple studies and structured nodes. With an application to breast cancer data sets, we found that the proposed model is superior in efficiently capturing structures of biological evidence (e.g., pathways). An R software package nsiGGM is publicly available at the author's webpage. PMID:28487748
Bondarenko, Irina; Raghunathan, Trivellore
2016-07-30
Multiple imputation has become a popular approach for analyzing incomplete data. Many software packages are available to multiply impute the missing values and to analyze the resulting completed data sets. However, diagnostic tools to check the validity of the imputations are limited, and the majority of the currently available methods need considerable knowledge of the imputation model. In many practical settings, however, the imputer and the analyst may be different individuals or from different organizations, and the analyst model may or may not be congenial to the model used by the imputer. This article develops and evaluates a set of graphical and numerical diagnostic tools for two practical purposes: (i) for an analyst to determine whether the imputations are reasonable under his/her model assumptions without actually knowing the imputation model assumptions; and (ii) for an imputer to fine tune the imputation model by checking the key characteristics of the observed and imputed values. The tools are based on the numerical and graphical comparisons of the distributions of the observed and imputed values conditional on the propensity of response. The methodology is illustrated using simulated data sets created under a variety of scenarios. The examples focus on continuous and binary variables, but the principles can be used to extend methods for other types of variables. Copyright © 2016 John Wiley & Sons, Ltd.
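The propensity-conditional comparison the article builds on is easy to prototype: bin units by estimated response propensity and compare observed versus imputed summaries within bins. A minimal numpy sketch; the variable names and the use of bin means are our assumptions, not the article's tools:

```python
import numpy as np

def diagnose_imputations(x, observed, propensity, n_bins=5):
    """Compare observed vs imputed values of a variable x within bins of
    the estimated response propensity; large within-bin gaps flag
    imputations worth a closer look. `observed` is a boolean mask and
    `propensity` the estimated probability of being observed."""
    edges = np.quantile(propensity, np.linspace(0, 1, n_bins + 1))
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (propensity >= lo) & (propensity <= hi)
        obs = x[in_bin & observed]
        imp = x[in_bin & ~observed]
        if len(obs) and len(imp):
            print(f"propensity [{lo:.2f},{hi:.2f}]: "
                  f"observed mean {obs.mean():.2f}, "
                  f"imputed mean {imp.mean():.2f}")
```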
Mixed deterministic and probabilistic networks
Dechter, Rina
2010-01-01
The paper introduces mixed networks, a new graphical model framework for expressing and reasoning with probabilistic and deterministic information. The motivation to develop mixed networks stems from the desire to fully exploit the deterministic information (constraints) that is often present in graphical models. Several concepts and algorithms specific to belief networks and constraint networks are combined, achieving computational efficiency, semantic coherence and user-interface convenience. We define the semantics and graphical representation of mixed networks, and discuss the two main types of algorithms for processing them: inference-based and search-based. A preliminary experimental evaluation shows the benefits of the new model. PMID:20981243
A Development Method for Multiagent Simulators Using a Graphical Model Editor
NASA Astrophysics Data System (ADS)
Murakami, Masatoshi; Maruo, Tomoaki; Matsumoto, Keinosuke; Mori, Naoki
A multiagent simulator (MAS) has attracted attention in recent years as an approach to analyzing social phenomena and complex systems, and many frameworks for developing MAS have been proposed. These frameworks reduce the amount of development work, but the models required to develop a simulator must still be built from scratch, which becomes a burden on developers; moreover, such models are specialized to their framework and lack reusability. To solve these problems, this paper proposes a graphical model editor that can diagrammatically build models and a simulator development method using the editor. By saving models in a general-purpose form, these models become applicable to various frameworks. Numerical experiments show that the proposed method is effective in MAS development.
The Mixed Effects Trend Vector Model
ERIC Educational Resources Information Center
de Rooij, Mark; Schouteden, Martijn
2012-01-01
Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…
Wavelet-based functional mixed models
Morris, Jeffrey S.; Carroll, Raymond J.
2009-01-01
Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects' wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. PMID:19759841
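The wavelet-domain regularization at the heart of the method can be illustrated with a transform-shrink-invert round trip. This sketch uses simple soft thresholding (via PyWavelets) where the paper uses Bayesian nonlinear shrinkage priors; the wavelet choice and threshold are arbitrary:

```python
import pywt

def wavelet_shrink(curve, wavelet="db4", level=4, thresh=0.5):
    """Denoise/regularize a sampled curve in the wavelet domain:
    decompose, shrink small detail coefficients toward zero, reconstruct.
    Soft thresholding is the simplest stand-in for the paper's adaptive
    Bayesian shrinkage."""
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```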
Simplified models of mixed dark matter
Cheung, Clifford; Sanford, David
2014-02-01
We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify "blind spots" at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors.
2008-04-07
entitled "A Computational Model of the Cerebral Cortex" [5]. The paper (he scribed a graphical mo(lcl of the visual cortex inspired bY David Niumford’s...Proceedings of the ninth IEEE International Conference on Computer Vision, volume 1, pages 432-439, 2003. 117] Tai Sing Lee and David ’Mumford...Hierarchical Bayesian inference in the visual cortex. Journal of the Opthcal Socicty of America, 2(7):1434-1,148, July 2003. 16~] David Lowe. Distitict ivi
Galkin, A A
2012-01-01
On the basis of graphic models of the human response to environmental factors, two main types of complex quantitative influence were revealed, as well as an interrelation between deterministic effects at the level of the individual and stochastic effects at the level of the population. It is suggested that two main kinds of factors be distinguished: essential factors, which are intrinsic to the environment, and accidental factors, which are foreign to it. The two kinds call for different approaches to hygienic standardization: accidental factors need a point-based approach, whereas a two-level range approach is suitable for the essential factors.
NASA Astrophysics Data System (ADS)
Sastry, G. P.; Ravuri, Tushar R.
1990-11-01
This paper describes several relativistic phenomena in two spatial dimensions that can be modeled using the collision program of Spacetime Software. These include the familiar aberration, the Doppler effect, the headlight effect, and the invariance of the speed of light in vacuum, in addition to rather unfamiliar effects like the dragging of light in a moving medium, reflection at moving mirrors, Wigner rotation of noncommuting boosts, and relativistic rotation of shrinking and expanding rods. All these phenomena are exhibited by tracings of composite computer printouts of the collision movie. It is concluded that interactive educational graphics software with pleasing visuals can pack considerable investigative power.
NASA Technical Reports Server (NTRS)
Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.
1991-01-01
A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
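The paper's main lesson, choosing the parallelization point so that ODE state stays on the device and memory transfers are minimized, can be illustrated with a population-batched integration. Plain numpy stands in for the GPU here (the same array pattern maps to CUDA/CuPy), and the one-gate model and parameters are ours, not the paper's channel kinetics:

```python
import numpy as np

rng = np.random.default_rng(1)
# One candidate parameter set per GA population member:
pop = np.column_stack([rng.random(512),               # steady-state m_inf
                       1.0 + 9.0 * rng.random(512)])  # time constant tau

m = np.zeros(512)                # gating state for all candidates at once
dt, steps = 0.01, 5000
for _ in range(steps):           # loop over time only; the population
    m += dt * (pop[:, 0] - m) / pop[:, 1]  # dimension is fully vectorized

target = 0.7                     # stand-in for recorded current data
fitness = -(m - target) ** 2     # only scores leave the "device"
```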
Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei
2016-02-01
Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to estimate a GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al. (2015). Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle
2009-01-01
Potential sanitary effects related to electromagnetic fields exposure raise public concerns, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians, for anatomical accuracy and representativeness.
Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing.
Yoshida, Ryo; West, Mike
2010-05-01
We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loadings matrices, that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate "artificial" posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data.
Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler
NASA Astrophysics Data System (ADS)
Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah
2014-07-01
Circulating Fluidized Bed Boiler (CFB) is a device for generating steam by burning fossil fuels in a furnace operating under a special hydrodynamic condition. An autocatalytic set has provided a graphical model of the chemical reactions that occur during the combustion process in a CFB. Eight important chemical substances, known as species, were represented as nodes, and catalytic relationships between nodes are represented by the edges of the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also exist during the process. Catalytic relationships among the species in the model are discussed. The result reveals that the modified model better explains the relationships among the species during the process at initial time t.
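The graph structure itself is simple to encode: species as nodes, with an edge from u to v when u catalyzes a reaction producing v. A minimal sketch with placeholder species names (the paper's eight CFB species are not reproduced here):

```python
# Adjacency-list encoding of an autocatalytic-set graph:
# an edge u -> v means species u catalyzes production of species v.
catalysis = {
    "s1": ["s2", "s3"],
    "s2": ["s4"],
    "s3": ["s4", "s5"],
    "s4": ["s1"],   # closes a catalytic cycle, the set's defining feature
    "s5": [],
}

def catalysts_of(graph, target):
    """Return the species that catalyze production of `target`."""
    return [u for u, vs in graph.items() if target in vs]
```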
2011-01-01
Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In keeping with the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of
ERIC Educational Resources Information Center
Halpern, Jeanne W.
1970-01-01
Computer graphics have been called the most exciting development in computer technology. At the University of Michigan, three kinds of graphics output equipment are now being used: symbolic printers, line plotters or drafting devices, and cathode-ray tubes (CRT). Six examples are given that demonstrate the range of graphics use at the University.…
Jackson, M E; Gnadt, J W
1999-03-01
The object-oriented graphical programming language LabView was used to implement the numerical solution to a computational model of saccade generation in primates. The computational model simulates the activity and connectivity of anatomical structures known to be involved in saccadic eye movements. The LabView program provides a graphical user interface to the model that makes it easy to observe and modify the behavior of each element of the model. Essential elements of the source code of the LabView program are presented and explained. A copy of the model is available for download from the internet.
Comfort constrains graphic workspace: test results of a 3D forearm model.
Schillings, J J; Thomassen, A J; Meulenbroek, R G
2000-01-01
Human movement performance is subject to many physical and psychological constraints. Analyses of these constraints may not only improve our understanding of the performance aspects that subjects need to keep under continuous control, but may also shed light on the possible origins of specific behavioral preferences that people display in motor tasks. The goal of the present paper is to make an empirical contribution here. In a recent simulation study, we reported effects of pen-grip and forearm-posture constraints on the spatial characteristics of the pen tip's workspace in drawing. The effects concerned changes in the location, size, and orientation of the reachable part of the writing plane, as well as variations in the computed degree of comfort in the hand and finger postures required to reach the various parts of this area. The present study is aimed at empirically evaluating to what extent these effects influence subjects' graphic behavior in a simple, free line-drawing task. The task involved the production of small back-and-forth drawing movements in various directions, to be chosen randomly under three forearm-posture and five pen-grip conditions. The observed variations in the subjects' choice of starting positions showed a high level of agreement with those of the simulated graphic-area locations, showing that biomechanically defined comfort of starting postures is indeed a determinant of the selection of starting points. Furthermore, between-condition rotations in the frequency distributions of the realized stroke directions corresponded to the simulation results, which again confirms the importance of comfort in directional preferences. It is concluded that postural rather than spatial constraints primarily affect subjects' preferences for starting positions and stroke directions in graphic motor performance. The relevance of the present modelling approach and its results for the broader field of complex motor behavior, including the manipulation of
Raster graphics extensions to the core system
NASA Technical Reports Server (NTRS)
Foley, J. D.
1984-01-01
A conceptual model of raster graphics systems was developed. The model integrates core-like graphics package concepts with contemporary raster display architectures. The conceptual model of raster graphics introduces multiple pixel matrices with associated index tables.
Configuration mixing calculations in soluble models
NASA Astrophysics Data System (ADS)
Cambiaggio, M. C.; Plastino, A.; Szybisz, L.; Miller, H. G.
1983-07-01
Configuration mixing calculations have been performed in two quasi-spin models using basis states which are solutions of a particular set of Hartree-Fock equations. Each of these solutions, even those which do not correspond to the global minimum, is found to contain interesting physical information. Relatively good agreement with the exact lowest-lying states has been obtained. In particular, one obtains a better approximation to the ground state than that provided by Hartree-Fock.
A random distribution reacting mixing layer model
NASA Technical Reports Server (NTRS)
Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.
1994-01-01
A methodology for simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.
CFD Modeling of Mixed-Phase Icing
NASA Astrophysics Data System (ADS)
Zhang, Lifen; Liu, Zhenxia; Zhang, Fei
2016-12-01
Ice crystal ingestion at high altitude has recently been reported to be a threat to the safe operation of aero-engines. Ice crystals do not accrete on external surfaces because of the cold environment, but when they enter the core flow of the aero-engine they partially melt into droplets owing to the higher temperature. The resulting air-droplet-ice crystal mixture is a mixed phase, which can give rise to ice accretion on static and rotating components in the compressor; compressor surge and engine shutdowns may then occur. To provide a numerical tool for analyzing this in detail, a numerical method was developed in this study. The mixed-phase flow was solved using an Eulerian-Lagrangian method, with the dispersed phase represented by one-way coupling. A thermodynamic model that considers mass and energy balance for ice crystals and droplets is presented as well. The icing code was implemented through the user-defined functions of Fluent. The method for predicting ice accretion under mixed-phase conditions was validated by comparing results simulated on a cylinder with experimental data from the literature. The predicted ice shape and mass agree with these data, confirming the validity of the numerical method developed in this research for mixed-phase conditions.
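The Eulerian-Lagrangian, one-way-coupled treatment of the dispersed phase described above reduces to a particle update in which droplets and ice crystals feel the gas but not vice versa. A minimal sketch, assuming Stokes drag and an arbitrary response time (neither is specified in the abstract):

```python
# Minimal sketch of one-way Eulerian-Lagrangian coupling: the carrier air
# field drives the dispersed droplets/ice crystals, with no feedback on the
# gas. The drag response time tau_p is an assumption (Stokes drag).
import numpy as np

def lagrangian_step(xp, vp, air_velocity, dt, tau_p=1e-3):
    """Advance particle position xp and velocity vp by one time step.

    air_velocity : callable returning the local gas velocity at xp
    tau_p        : particle aerodynamic response time (assumed)
    """
    u = air_velocity(xp)             # sample the Eulerian gas field
    vp = vp + dt * (u - vp) / tau_p  # drag relaxation toward gas velocity
    xp = xp + dt * vp                # advect the particle
    return xp, vp

# toy usage: uniform gas stream at 100 m/s
xp, vp = np.zeros(3), np.zeros(3)
for _ in range(100):
    xp, vp = lagrangian_step(xp, vp, lambda x: np.array([100.0, 0.0, 0.0]),
                             dt=1e-4)
print(xp, vp)
```

The paper's thermodynamic melting model (mass and energy balance along each trajectory) would be evaluated inside this loop; it is not reproduced here.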
ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST
Winston, Richard B.
2009-01-01
ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.
Mencarini, Letizia; Vignoli, Daniele; Gottard, Anna
2015-03-01
This paper studies fertility intentions and their outcomes, analyzing the complete path leading to fertility behavior according to the social-psychological model of the Theory of Planned Behavior (TPB). We move beyond existing research by using graphical models to obtain a precise understanding, and a formal description, of the developmental fertility decision-making process. Our findings yield new results for the Italian case which are empirically robust and theoretically coherent, adding important insights into the effectiveness of the TPB for fertility research. In line with the TPB, all of the primary antecedents of intentions are found to be determinants of the level of fertility intentions, but they do not affect fertility outcomes, being pre-filtered by fertility intentions. Nevertheless, in contrast with the TPB, background factors are not fully mediated by the primary antecedents of intentions, directly influencing fertility intentions and even fertility behaviors. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fuzzy Edge Connectivity of Graphical Fuzzy State Space Model in Multi-connected System
NASA Astrophysics Data System (ADS)
Harish, Noor Ainy; Ismail, Razidah; Ahmad, Tahir
2010-11-01
Structured networks of interacting components illustrate complex structure in a direct or intuitive way. Graph theory provides a mathematical modeling framework for studying interconnection among elements in natural and man-made systems. In particular, a directed graph is useful for defining and interpreting the interconnection structure underlying the dynamics of the interacting subsystems. Fuzzy theory provides important tools for dealing with various aspects of the complexity, imprecision and fuzziness of the network structure of a multi-connected system. Initial development of the Fuzzy State Space Model (FSSM) and a fuzzy algorithm approach were introduced with the purpose of solving inverse problems in multivariable systems. In this paper, the fuzzy algorithm is adapted in order to determine the fuzzy edge connectivity between subsystems, in particular for the interconnected system of the Graphical Representation of FSSM. This new approach simplifies the schematic diagram of the interconnection of subsystems in a multi-connected system.
ASAMgpu V1.0 - a moist fully compressible atmospheric model using graphics processing units (GPUs)
NASA Astrophysics Data System (ADS)
Horn, S.
2012-03-01
In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence OpenGL and GLSL are used, so that the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold bubble induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.
ASAMgpu V1.0 - a moist fully compressible atmospheric model using graphics processing units (GPUs)
NASA Astrophysics Data System (ADS)
Horn, S.
2011-10-01
In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence OpenGL and GLSL are used, so that the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold bubble induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.
p-Value combiners for graphical modelling of EEG data in the frequency domain.
Schneider-Luftman, Deborah
2016-09-15
In the graphical modelling of brain data, we are interested in estimating connectivity between various regions of interest, and evaluating statistical significance in order to derive a network model. This process involves aggregating results across frequency ranges and several patients in order to obtain an overall result that can serve to construct a graph. In this paper, we propose a method based on p-value combiners, which have never been used in applications to EEG data analysis. This new method is split into two aspects: frequency-wide tests and group-wide tests. The first step can be effectively adjusted to control the false detection rate. This two-step protocol is applied to EEG data collected from distinct groups of mental health patients, in order to draw graphical models for each group and highlight structural connectivity differences. Using the proposed method, we show that it is possible to achieve this reliably while effectively controlling the detection of false connections. Conventionally, Holm's step-down procedure is used for this type of problem, as it is robust to type I errors. However, it is known to be conservative and prone to false negatives. Furthermore, unlike the proposed methods, it does not directly output a decision rule on whether to accept or reject a statement. The proposed methodology offers significant improvements over the step-down procedure in terms of error rate and false-negative rate across the network models, as well as in terms of applicability. Copyright © 2016 The Author. Published by Elsevier B.V. All rights reserved.
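The two-step protocol (frequency-wide, then group-wide) can be illustrated with generic p-value combiners. The sketch below uses Fisher's method at both steps and a Benjamini-Hochberg step for FDR control; these specific choices are assumptions for illustration, not necessarily the combiners the paper selects.

```python
# Hedged sketch of a two-step p-value combination: combine per-frequency
# p-values for each connection (Fisher's method), then combine the
# per-patient results within a group, and finally control the FDR across
# connections with Benjamini-Hochberg.
import numpy as np
from scipy.stats import combine_pvalues

def combine_connection(p_freq_by_patient):
    """p_freq_by_patient: (n_patients, n_freqs) array of edge p-values."""
    per_patient = [combine_pvalues(row, method='fisher')[1]
                   for row in p_freq_by_patient]             # frequency-wide
    return combine_pvalues(per_patient, method='fisher')[1]  # group-wide

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of hypotheses rejected at FDR level alpha (step-up)."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, p.size + 1) / p.size
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(p.size, dtype=bool)
    reject[order[:k]] = True
    return reject

# toy usage: 50 candidate connections, 8 patients, 20 frequency bins
rng = np.random.default_rng(1)
p_edges = [combine_connection(rng.uniform(size=(8, 20))) for _ in range(50)]
print("edges kept:", benjamini_hochberg(p_edges).sum())
```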
Flow modeling in turbofan mixing duct
Tsui, Y.Y.; Wu, P.W.; Liao, C.W.
1994-08-01
A computational procedure is described to study the mixing flow in a multilobe turbofan mixer. The predictions have been obtained using a finite volume method to solve the density-weighted time-averaged Navier-Stokes equations. Turbulence is characterized by the κ-ε eddy viscosity model. To fit the irregular boundaries of the flow field, curvilinear nonorthogonal coordinates are employed. The robustness of the computational procedure is enhanced by making use of nonstaggered grids. Results show that the streamwise vortex generated at the exit of the lobes dominates the performance of the mixing process. Comparison with experimental data indicates that good predictions can be obtained provided that sufficient inlet conditions are given.
BDA special care case mix model.
Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L
2010-04-10
Routine dental care provided in special care dentistry is complicated by patient-specific factors which increase the time taken and costs of treatment. The BDA have developed and conducted a field trial of a case mix tool to measure this complexity. For each episode of care, the case mix tool assesses the following on a four-point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple-to-use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.
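For illustration only, a toy version of such a case mix score, assuming the four-point scales run 0-3 and that the six criteria are simply summed (the real tool's scoring rules and banding are not given in the abstract):

```python
# Toy illustration: six criteria scored on an assumed 0-3 scale and summed
# to a complexity score. The real BDA tool's weights and bands are not
# specified in the abstract, so this is purely a sketch of the idea.
CRITERIA = ("communication", "cooperation", "medical status",
            "oral risk factors", "access to oral care",
            "legal/ethical barriers")

def case_mix_score(scores: dict) -> int:
    assert set(scores) == set(CRITERIA)
    assert all(0 <= s <= 3 for s in scores.values())
    return sum(scores.values())

episode = dict.fromkeys(CRITERIA, 1) | {"cooperation": 3}
print(case_mix_score(episode))  # 8 on a 0-18 scale
```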
A new multifluid turbulent-mix model
Cranfill, C.W.
1997-03-01
Equations are proposed for a new multifluid turbulent-mix model intended to simulate fluid flows near unstable material interfaces. The model is based on the usual decomposition of the fluid properties into mean and fluctuating parts whose evolution equations are obtained from the Navier-Stokes equations. Correlations among the fluctuating parts produce turbulent contributions to the bulk fluid properties. The innovation is to divide the turbulent contributions into ordered and disordered parts, where the ordered parts are obtained from the average drift motions produced by a set of multifluid interpenetration equations, while the disordered parts are obtained from a set of single-fluid turbulence equations. The problem of closing the multifluid and single-fluid sets of equations is solved by coupling them together in such a way that they close each other. The resulting energy cascade is from bulk kinetic to ordered drift kinetic to disordered turbulent kinetic to thermal internal energy. The new model exhibits both the early-time convective and the late-time diffusive drift motions seen in numerical and experimental investigations of the evolution of interfacial instabilities. The division of the turbulent contributions into ordered and disordered parts provides a more natural formalism for deriving the equations than has been given for similar mix models that have been proposed. The new model incorporates several simplifying assumptions designed to minimize the extra computational work required, so it is suitable for implementation in multidimensional hydrodynamics codes.
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks
Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L.; Sweet, Robert A.; Wang, Jieru; Chen, Wei
2016-01-01
Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing a Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named “FastGGM”. PMID:26872036
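As a rough illustration of the object FastGGM estimates, the sketch below fits a sparse concentration matrix with the graphical lasso and reads edges off its nonzero off-diagonal entries. This is a generic GGM fit via scikit-learn, not the FastGGM algorithm, and it does not produce the per-edge p-values and confidence intervals that are the paper's contribution.

```python
# Generic GGM estimation sketch (not the FastGGM algorithm): fit a sparse
# precision (concentration) matrix with the graphical lasso; nonzero
# off-diagonal entries correspond to edges of the conditional-dependence
# graph. Placeholder data are used so the example runs.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 15))   # placeholder data: samples x variables

model = GraphicalLassoCV().fit(X)
Omega = model.precision_         # estimated concentration matrix
edges = [(i, j) for i in range(15) for j in range(i + 1, 15)
         if abs(Omega[i, j]) > 1e-8]
print(f"{len(edges)} edges recovered")
```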
A graphical model approach to systematically missing data in meta-analysis of observational studies.
Kovačić, Jelena; Varnai, Veda Marija
2016-10-30
When studies in meta-analysis include different sets of confounders, simple analyses can cause a bias (omitting confounders that are missing in certain studies) or precision loss (omitting studies with incomplete confounders, i.e. a complete-case meta-analysis). To overcome these types of issues, a previous study proposed modelling the high correlation between partially and fully adjusted regression coefficient estimates in a bivariate meta-analysis. When multiple differently adjusted regression coefficient estimates are available, we propose exploiting such correlations in a graphical model. Compared with a previously suggested bivariate meta-analysis method, such a graphical model approach is likely to reduce the number of parameters in complex missing data settings by omitting the direct relationships between some of the estimates. We propose a structure-learning rule whose justification relies on the missingness pattern being monotone. This rule was tested using epidemiological data from a multi-centre survey. In the analysis of risk factors for early retirement, the method showed a smaller difference from a complete data odds ratio and greater precision than a commonly used complete-case meta-analysis. Three real-world applications with monotone missing patterns are provided, namely, the association between (1) the fibrinogen level and coronary heart disease, (2) the intima media thickness and vascular risk and (3) allergic asthma and depressive episodes. The proposed method allows for the inclusion of published summary data, which makes it particularly suitable for applications involving both microdata and summary data. Copyright © 2016 John Wiley & Sons, Ltd.
Gray component replacement using color mixing models
NASA Astrophysics Data System (ADS)
Kang, Henry R.
1994-05-01
A new approach to gray component replacement (GCR) has been developed. It employs color mixing theory for modeling the spectral fit between 3-color and 4-color prints. To achieve this goal, we first examine the accuracy of the models with respect to experimental results by applying them to prints made by a Canon Color Laser Copier-500 (CLC-500). An empirical halftone correction factor is used to improve the data fitting. Among the models tested, the halftone-corrected Kubelka-Munk theory gives the closest fit, followed by the halftone-corrected Beer-Bouguer law and the Yule-Nielsen approach. We then apply the halftone-corrected Beer-Bouguer law to GCR. The main feature of this GCR approach is that it is based on spectral measurements of the primary color step wedges and a software package implementing the color mixing model. The software determines the amount of the gray component to be removed, then adjusts each primary color until a good match of the peak wavelengths between the 3-color and 4-color spectra is obtained. Results indicate that the average ΔEab between the cmy and cmyk renditions of 64 color patches is 3.11. Eighty-seven percent of the patches have ΔEab less than 5 units. The advantage of this approach is its simplicity; there is no need for the black printer and under-color addition. Because this approach is based on spectral reproduction, it minimizes metamerism.
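The gray-component idea at the core of GCR can be stated in a few lines. The sketch below shows only the classical baseline operation (remove a fraction of min(C, M, Y) and replace it with black); the paper's contribution is the spectral adjustment layered on top of this, which is not reproduced here, and the GCR fraction is a free (assumed) parameter.

```python
# Baseline gray-component-replacement step, for illustration only. The
# paper refines the amounts spectrally via the halftone-corrected
# Beer-Bouguer model; this shows just the classical starting point.
def gcr(c, m, y, fraction=1.0):
    """Replace `fraction` of the gray component min(c, m, y) with black."""
    k = fraction * min(c, m, y)
    return c - k, m - k, y - k, k

print(gcr(0.80, 0.65, 0.70))  # full replacement -> (0.15, 0.0, 0.05, 0.65)
```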
Toward Better Modeling of Supercritical Turbulent Mixing
NASA Technical Reports Server (NTRS)
Selle, Laurent; Okongo'o, Nora; Bellan, Josette; Harstad, Kenneth
2008-01-01
This study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamically supercritical (here, high-pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation-of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric-pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of regions of high density-gradient magnitude found in the DNS and observed experimentally under fully turbulent flow conditions. A model has been derived for the new term in the momentum equation and was found to perform well at small filter size but to deteriorate with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation; further investigation would be needed to determine whether they are too computationally intensive for LES.
Generalized linear mixed models can detect unimodal species-environment relationships.
Jamil, Tahira; Ter Braak, Cajo J F
2013-01-01
Niche theory predicts that species occurrence and abundance show non-linear, unimodal relationships with respect to environmental gradients. Unimodal models, such as the Gaussian (logistic) model, are however more difficult to fit to data than linear ones, particularly in a multi-species context in ordination, with trait-modulated response, and when species phylogeny and species traits must be taken into account. Adding squared terms to a linear model is a possibility but gives uninterpretable parameters. This paper explains why and when generalized linear mixed models, even without squared terms, can effectively analyse unimodal data, and also presents a graphical tool and a statistical test for unimodal response that can be applied while fitting just the generalized linear mixed model. The R code for this is supplied in Supplemental Information 1.
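For context, the Gaussian (logistic) response model referred to above is, on the logit scale, a downward-opening quadratic in the gradient value x:

\[
\operatorname{logit} p(x) = b_0 + b_1 x + b_2 x^2, \qquad b_2 < 0,
\]

with species optimum \(u = -b_1/(2 b_2)\) and tolerance \(t = 1/\sqrt{-2 b_2}\). The raw coefficients are hard to interpret, which is the drawback the abstract notes; the transformed quantities u and t are the ecologically meaningful ones, and the paper's point is that GLMM random effects can capture such curvature without an explicit squared term.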
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format (PDF) file.
Netto, Kevin J; Burnett, Angus F; Green, Jonathon P; Rodrigues, Julian P
2008-06-01
EMG-driven musculoskeletal modeling is a method in which loading on the active and passive structures of the cervical spine may be investigated. A model of the cervical spine exists; however, it has yet to be criterion validated. Furthermore, neck muscle morphometry in this model was derived from elderly cadavers, threatening model validity. Therefore, the overall aim of this study was to modify and criterion validate this preexisting graphically based musculoskeletal model of the cervical spine. Five male subjects with no neck pain participated in this study. The study consisted of three parts. First, subject-specific neck muscle morphometry data were derived by using magnetic resonance imaging. Second, EMG drive for the model was generated from both surface (Drive 1: N=5) and surface and deep muscles (Drive 2: N=3). Finally, to criterion validate the modified model, net moments predicted by the model were compared against net moments measured by an isokinetic dynamometer in both maximal and submaximal isometric contractions with the head in the neutral posture, 20 deg of flexion, and 35 deg of extension. Neck muscle physiological cross sectional area values were greater in this study when compared to previously reported data. Predictions of neck torque by the model were better in flexion (18.2% coefficient of variation (CV)) when compared to extension (28.5% CV) and using indwelling EMG did not enhance model predictions. There were, however, large variations in predictions when all the contractions were compared. It is our belief that further work needs to be done to improve the validity of the modified EMG-driven neck model examined in this study. A number of factors could potentially improve the model with the most promising probably being optimizing various modeling parameters by using methods established by previous researchers investigating other joints of the body.
Asymptotically Normal and Efficient Estimation of Covariate-Adjusted Gaussian Graphical Model
Chen, Mengjie; Ren, Zhao; Zhao, Hongyu; Zhou, Harrison
2015-01-01
A tuning-free procedure is proposed to estimate the covariate-adjusted Gaussian graphical model. For each finite subgraph, this estimator is asymptotically normal and efficient. As a consequence, a confidence interval can be obtained for each edge. The procedure enjoys easy implementation and efficient computation through parallel estimation on subgraphs or edges. We further apply the asymptotic normality result to perform support recovery through edge-wise adaptive thresholding. This support recovery procedure is called ANTAC, standing for Asymptotically Normal estimation with Thresholding after Adjusting Covariates. ANTAC outperforms other methodologies in the literature in a range of simulation studies. We apply ANTAC to identify gene-gene interactions using an eQTL dataset. Our result achieves better interpretability and accuracy in comparison with CAMPE. PMID:27499564
uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications
Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.
2015-01-01
In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987
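Concretely, a uPy plugin is ordinary Python that talks to whichever host is running it through a helper object. The sketch below is hypothetical: the general helper-class pattern follows the paper's description of a uniform abstraction over hosts, but the exact method names and signatures used here (Sphere and its arguments) are illustrative assumptions, not documented uPy API.

```python
# Hypothetical uPy plugin sketch. The helper-object pattern follows the
# paper's description of one code base running in Blender, Maya, Cinema4D,
# or DejaVu; the method names/signatures are assumptions for illustration.
import upy  # assumed import name

helper = upy.getHelperClass()()  # instantiate the host-specific helper

def make_sphere_grid(n=5, spacing=2.0):
    # create an n x n grid of spheres, identically in any supported host
    for i in range(n):
        for j in range(n):
            helper.Sphere(f"s_{i}_{j}", radius=0.5,
                          pos=(i * spacing, j * spacing, 0.0))

make_sphere_grid()
```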
Graphical representation of life paths to better convey results of decision models to patients.
Rubrichi, Stefania; Rognoni, Carla; Sacchi, Lucia; Parimbelli, Enea; Napolitano, Carlo; Mazzanti, Andrea; Quaglini, Silvana
2015-04-01
The inclusion of patients' perspectives in clinical practice has become an important matter for health professionals, in view of the increasing attention to patient-centered care. In this regard, this report illustrates a method for developing a visual aid that supports the physician in the process of informing patients about a critical decisional problem. In particular, we focused on interpretation of the results of decision trees embedding Markov models implemented with the commercial tool TreeAge Pro. Starting from patient-level simulations and exploiting some advanced functionalities of TreeAge Pro, we combined results to produce a novel graphical output that represents the distributions of outcomes over the lifetime for the different decision options, thus becoming a more informative decision support in a context of shared decision making. The training example used to illustrate the method is a decision tree for thromboembolism risk prevention in patients with nonvalvular atrial fibrillation.
Glossiness of Colored Papers based on Computer Graphics Model and Its Measuring Method
NASA Astrophysics Data System (ADS)
Aida, Teizo
In the case of colored papers, the color of the surface strongly affects the gloss of the paper. A new glossiness measure for such colored papers is suggested in this paper. First, using achromatic and chromatic Munsell colored chips, the author obtained experimental equations which represent the relation between lightness V (or V and saturation C) and psychological glossiness Gph of these chips. Then, the author defined a new glossiness G for the colored papers, based on the above-mentioned experimental equations for Gph and Cook-Torrance's reflection model, which is widely used in the field of Computer Graphics. This new glossiness is shown to be nearly proportional to the psychological glossiness Gph. The measuring system for the new glossiness G is furthermore described. The measuring time for one specimen is within 1 minute.
Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models
Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir
2016-01-01
Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570
Inference of ICF implosion core mix using experimental data and theoretical mix modeling
Sherrill, Leslie Welser; Haynes, Donald A; Cooley, James H; Sherrill, Manolo E; Mancini, Roberto C; Tommasini, Riccardo; Golovkin, Igor E; Haan, Steven W
2009-01-01
The mixing between fuel and shell materials in Inertial Confinement Fusion (lCF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model predicted trends in the width of the mix layer as a function of initial shell thickness. These results contribute to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increasing confidence in the methods used to extract mixing information from experimental data.
Numerical modelling of mixed-sediment consolidation
NASA Astrophysics Data System (ADS)
Grasso, Florent; Le Hir, Pierre; Bassoullet, Philippe
2015-04-01
Sediment transport modelling in estuarine environments, characterised by cohesive and non-cohesive sediment mixtures, has to consider a time variation of erodibility due to consolidation. Generally, validated by settling column experiments, mud consolidation is now fairly well simulated; however, numerical models still have difficulty to simulate accurately the sedimentation and consolidation of mixed sediments for a wide range of initial conditions. This is partly due to the difficulty to formulate the contribution of sand in the hindered settling regime when segregation does not clearly occur. Based on extensive settling experiments with mud-sand mixtures, the objective of this study was to improve the numerical modelling of mixed-sediment consolidation by focusing on segregation processes. We used constitutive relationships following the fractal theory associated with a new segregation formulation based on the relative mud concentration. Using specific sets of parameters calibrated for each test—with different initial sediment concentration and sand content—the model achieved excellent prediction skills for simulating sediment height evolutions and concentration vertical profiles. It highlighted the model capacity to simulate properly the segregation occurrence for mud-sand mixtures characterised by a wide range of initial conditions. Nevertheless, calibration parameters varied significantly, as the fractal number ranged from 2.64 to 2.77. This study investigated the relevance of using a common set of parameters, which is generally required for 3D sediment transport modelling. Simulations were less accurate but remained satisfactory in an operational approach. Finally, a specific formulation for natural estuarine environments was proposed, simulating correctly the sedimentation-consolidation processes of mud-sand mixtures through 3D sediment transport modelling.
Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model
NASA Technical Reports Server (NTRS)
Putnam, Williama
2011-01-01
The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Van der Tol, C.; Berry, J. A.
2015-12-01
Recent advances in optical remote sensing of photosynthesis offer great promise for estimating gross primary productivity (GPP) at leaf, canopy and even global scale. These methods - including solar-induced chlorophyll fluorescence (SIF) emission, fluorescence spectra, and hyperspectral features such as the red edge and the photochemical reflectance index (PRI) - can be used to greatly enhance the predictive power of global circulation models (GCMs) by providing better constraints on GPP. The way to use measured optical data to parameterize existing models such as SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) is not trivial, however. We have therefore extended a biochemical model to include fluorescence and other parameters in a coupled treatment. To help parameterize the model, we then use nonlinear curve-fitting routines to determine the parameter set that enables model results to best fit leaf-level gas exchange and optical data measurements. To make the tool more accessible to all practitioners, we have further designed a graphical user interface (GUI)-based front end to allow researchers to analyze data with a minimum of effort while, at the same time, allowing them to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. Here we discuss the tool and its effectiveness, using recently gathered leaf-level data.
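The nonlinear curve-fitting step described above can be sketched generically. The model function below is a toy light-response curve standing in for the coupled photosynthesis-fluorescence model, and the parameter names are assumptions; only the fitting machinery (least squares on residuals) reflects the described workflow.

```python
# Sketch of the parameter-fitting step: nonlinear least squares against
# leaf-level measurements. The model function is a placeholder rectangular
# hyperbola, not the actual coupled SCOPE-style model.
import numpy as np
from scipy.optimize import least_squares

def model(params, light):
    vmax, alpha = params  # assumed names: capacity and initial slope
    return vmax * light / (vmax / alpha + light)

def residuals(params, light, a_measured):
    return model(params, light) - a_measured

light = np.linspace(50, 2000, 30)
a_obs = model([60.0, 0.05], light) + np.random.default_rng(3).normal(0, 0.5, 30)
fit = least_squares(residuals, x0=[30.0, 0.02], args=(light, a_obs))
print("fitted Vmax, alpha:", fit.x)
```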
NASA Technical Reports Server (NTRS)
Jones, R. H.
1984-01-01
The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.
Gaussian Graphical Models Identify Networks of Dietary Intake in a German Adult Population.
Iqbal, Khalid; Buijsse, Brian; Wirth, Janine; Schulze, Matthias B; Floegel, Anna; Boeing, Heiner
2016-03-01
Data-reduction methods such as principal component analysis are often used to derive dietary patterns. However, such methods do not assess how foods are consumed in relation to each other. Gaussian graphical models (GGMs) are a set of novel methods that can address this issue. We sought to apply GGMs to derive sex-specific dietary intake networks representing consumption patterns in a German adult population. Dietary intake data from 10,780 men and 16,340 women of the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam cohort were cross-sectionally analyzed to construct dietary intake networks. Food intake for each participant was estimated using a 148-item food-frequency questionnaire that captured the intake of 49 food groups. GGMs were applied to log-transformed intakes (grams per day) of 49 food groups to construct sex-specific food networks. Semiparametric Gaussian copula graphical models (SGCGMs) were used to confirm GGM results. In men, GGMs identified 1 major dietary network that consisted of intakes of red meat, processed meat, cooked vegetables, sauces, potatoes, cabbage, poultry, legumes, mushrooms, soup, and whole-grain and refined breads. For women, a similar network was identified with the addition of fried potatoes. Other identified networks consisted of dairy products and sweet food groups. SGCGMs yielded results comparable to those of GGMs. GGMs are a powerful exploratory method that can be used to construct dietary networks representing dietary intake patterns that reveal how foods are consumed in relation to each other. GGMs indicated an apparent major role of red meat intake in a consumption pattern in the studied population. In the future, identified networks might be transformed into pattern scores for investigating their associations with health outcomes. © 2016 American Society for Nutrition.
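For reference, the conditional-dependence reading of a GGM is what separates it from marginal-correlation or data-reduction approaches: with concentration matrix \(\Omega = \Sigma^{-1}\), two food groups i and j share an edge exactly when their partial correlation given all other intakes,

\[
\rho_{ij \mid \mathrm{rest}} = -\frac{\omega_{ij}}{\sqrt{\omega_{ii}\,\omega_{jj}}},
\]

is nonzero. This standard identity is stated here for context; it is not specific to this paper.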
Mixing parametrizations for ocean climate modelling
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir
2016-04-01
An algorithm is presented for splitting the total evolutionary equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into the stages of transport-diffusion and generation-dissipation. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solutions. The experiments were performed with different mixing parameterizations for modelling Arctic and Atlantic climate decadal variability with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar to contemporary differential turbulence models in its physical formulation, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales, compared to the simpler Pacanowski-Philander (PP) turbulence parameterization. Parameterizations using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model lead to a better representation of ocean climate than the faster parameterization using the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Usage of the PP parameterization in the circulation model leads to realistic simulation of density and circulation but violates T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model.
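As a rough illustration of the generation-dissipation stage, the sketch below advances TKE and TDF at a single grid point with production treated explicitly and dissipation implicitly, one of the three options the abstract lists. The k-omega-like source terms and the constants are assumptions, not the INMOM formulation.

```python
# Hedged sketch of the generation-dissipation stage as a pointwise ODE
# step (explicit production, implicit dissipation, so the update stays
# positive). Source terms and constants are assumed, for illustration.
def gen_diss_step(k, omega, P, dt, c1=0.55, c2=0.83):
    """Advance TKE k and dissipation frequency omega at one grid point."""
    k_new = (k + dt * P) / (1.0 + dt * omega)
    omega_new = (omega + dt * c1 * (P / max(k, 1e-12)) * omega) \
                / (1.0 + dt * c2 * omega)
    return k_new, omega_new

k, w = 1e-4, 1e-2
for _ in range(10):
    k, w = gen_diss_step(k, w, P=1e-6, dt=60.0)
print(k, w)
```

The transport-diffusion stage would be handled separately by the circulation model's advection-diffusion machinery, which is the point of the splitting.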
Modeling populations of rotationally mixed massive stars
NASA Astrophysics Data System (ADS)
Brott, I.
2011-02-01
Massive stars can be considered cosmic engines. With their high luminosities, strong stellar winds and violent deaths they drive the evolution of galaxies throughout the history of the universe. Despite the importance of massive stars, their evolution is still poorly understood. Two major issues have plagued evolutionary models of massive stars until today: mixing and mass loss. As the effects of mass loss remain limited on the main sequence in the considered mass and metallicity range, this thesis concentrates on the role of mixing in massive stars. The thesis approaches this problem at the crossroads between observations and simulations. The main question is whether evolutionary models of single stars, accounting for the effects of rotation, reproduce the observed properties of real stars; in particular, whether the evolutionary models can reproduce the surface abundance changes during the main-sequence phase. To constrain our models we build a population synthesis model for the sample of the VLT-FLAMES Survey of Massive Stars, for which the star-formation history and rotational velocity distribution are well constrained. We consider the four main regions of the Hunter diagram: nitrogen-unenriched slow rotators and nitrogen-enriched fast rotators, which are predicted by theory, and nitrogen-enriched slow rotators and nitrogen-unenriched fast rotators, which are not predicted by our model. We conclude that these comparisons are currently not sufficient to verify the theory of rotational mixing; physical processes in addition to rotational mixing appear necessary to explain the stars in the latter two regions. The chapters of this thesis have been published in the following journals: Ch. 2: ``Rotating Massive Main-Sequence Stars I: Grids of Evolutionary Models and Isochrones'', I. Brott, S. E. de Mink, M. Cantiello, N. Langer, A. de Koter, C. J. Evans, I. Hunter, C. Trundle, J. S. Vink, submitted to Astronomy & Astrophysics. Ch. 3: ``The VLT-FLAMES Survey of Massive
Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage
ERIC Educational Resources Information Center
Galyardt, April
2012-01-01
This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…
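The general model class the dissertation examines can be sketched generatively. The toy below is a generic mixed-membership process for strategy usage: each student carries a Dirichlet-distributed membership vector over strategies, and each problem is answered with a strategy drawn from that vector. All numerical values are assumptions; this mirrors the model class discussed, not the dissertation's specific model.

```python
# Generic mixed-membership generative sketch for multiple strategy usage:
# per-student membership over strategies (Dirichlet), per-problem strategy
# draw, then a response that depends on the chosen strategy.
import numpy as np

rng = np.random.default_rng(4)
n_students, n_problems, n_strategies = 100, 20, 3
success_rate = np.array([0.9, 0.6, 0.3])  # assumed accuracy per strategy

theta = rng.dirichlet(alpha=[2.0, 1.0, 0.5], size=n_students)  # memberships
strategy = np.array([rng.choice(n_strategies, size=n_problems, p=th)
                     for th in theta])
correct = rng.random((n_students, n_problems)) < success_rate[strategy]
print("mean accuracy:", correct.mean())
```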
Integrating diagnostic data analysis for W7-AS using Bayesian graphical models
Svensson, J.; Dinklage, A.; Geiger, J.; Werner, A.; Fischer, R.
2004-10-01
Analysis of diagnostic data in fusion experiments is usually dealt with separately for each diagnostic, in spite of the existence of a large number of interdependencies between global physics parameters and measurements from different diagnostics. In this article, we demonstrate an integrated data analysis model, applied to the W7-AS stellarator, where diagnostic interdependencies have been modeled in a novel way by using so-called Bayesian graphical models. A Thomson scattering system, interferometer, diamagnetic loop, and neutral particle analyzer are combined with an equilibrium reconstruction, forming together one single model for the determination of quantities such as density and temperature profiles, directly in magnetic coordinates. The magnetic coordinate transformation is itself inferred from the measurements. The influence of both statistical and systematic uncertainties on quantities from equilibrium calculations, such as the position of flux surfaces, can therefore be readily estimated together with the uncertainties of profile estimates. The model allows for modular addition of further diagnostics. A software architecture for such integrated analysis, where a possibly large number of diagnostic and theoretical codes need to be combined, is also discussed.
Developing a multiscale, multi-resolution agent-based brain tumor model by graphics processing units
Zhang, Le; Jiang, Beini; Wu, Yukun; Strouthos, Costas; Sun, Phillip Zhe; Su, Jing; Zhou, Xiaobo
2011-12-16
Multiscale agent-based modeling (MABM) has been widely used to simulate Glioblastoma Multiforme (GBM) and its progression. At the intracellular level, the MABM approach employs a system of ordinary differential equations to describe quantitatively specific intracellular molecular pathways that determine phenotypic switches among cells (e.g. from migration to proliferation and vice versa). At the intercellular level, MABM describes cell-cell interactions by a discrete module. At the tissue level, partial differential equations are employed to model the diffusion of chemoattractants, which are the input factors of the intracellular molecular pathway. Moreover, multiscale analysis makes it possible to explore the molecules that play important roles in determining the cellular phenotypic switches that in turn drive the whole GBM expansion. However, owing to limited computational resources, MABM is currently a theoretical biological model that uses relatively coarse grids to simulate a few cancer cells in a small slice of brain cancer tissue. In order to improve this theoretical model to simulate and predict actual GBM cancer progression in real time, a graphics processing unit (GPU)-based parallel computing algorithm was developed and combined with the multi-resolution design to speed up the MABM. The simulated results demonstrated that the GPU-based, multi-resolution and multiscale approach can accelerate the previous MABM around 30-fold with relatively fine grids in a large extracellular matrix. Therefore, the new model has great potential for simulating and predicting real-time GBM progression, if real experimental data are incorporated. PMID:22176732
Higher-order ice-sheet modelling accelerated by multigrid on graphics cards
NASA Astrophysics Data System (ADS)
Brædstrup, Christian; Egholm, David
2013-04-01
Higher-order ice flow modelling is a very computationally intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must consider very long time series, while both high temporal and spatial resolution are needed to resolve small effects. Higher-order and full-Stokes models have therefore seen very limited usage in this field. However, recent advances in graphics card (GPU) technology for high-performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general-purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this extent we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
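The red-black smoother named above is what exposes the massive parallelism the GPU exploits: points of one color depend only on points of the other color, so each color can be updated simultaneously. A minimal NumPy sketch of one sweep on a 2-D Poisson-type problem (an illustration only, not the nonlinear iSOSIA solver or its FAS multigrid wrapper):

```python
# One red-black Gauss-Seidel sweep for a 2-D Poisson-type problem.
# Within each color, every update is independent, which is exactly the
# structure a GPU thread grid can exploit.
import numpy as np

def rb_gauss_seidel_sweep(u, f, h):
    for color in (0, 1):  # red points, then black points
        i, j = np.meshgrid(np.arange(1, u.shape[0] - 1),
                           np.arange(1, u.shape[1] - 1), indexing="ij")
        mask = ((i + j) % 2) == color
        ii, jj = i[mask], j[mask]
        u[ii, jj] = 0.25 * (u[ii - 1, jj] + u[ii + 1, jj] +
                            u[ii, jj - 1] + u[ii, jj + 1] - h * h * f[ii, jj])
    return u

u = np.zeros((65, 65)); f = np.ones((65, 65))
for _ in range(100):
    u = rb_gauss_seidel_sweep(u, f, h=1.0 / 64)
print("center value after 100 sweeps:", u[32, 32])
```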
Model Selection and Accounting for Model Uncertainty in Graphical Models Using OCCAM’s Window
1991-07-22
[Abstract garbled in extraction; only fragments of a risk-factor table and figure survive.] The recoverable content lists the risk factors for coronary heart disease used in the worked example: A, smoking; B, strenuous mental work; C, strenuous physical work; D, systolic blood pressure; E, ratio of beta and alpha lipoproteins; F, family anamnesis of coronary heart disease. The surviving text also notes a link from smoking (A) to systolic blood pressure (D), and decisive evidence in favour of the marginal independence of family anamnesis.
Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M
2010-01-01
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
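For reference, the Logan graphical method the abstract analyzes estimates the total distribution volume V_T as the late-time slope of normalized integrated tissue activity against normalized integrated plasma activity. A minimal sketch with synthetic one-tissue data (all kinetic values assumed, purely to make it runnable):

```python
# Standard Logan plot computation: V_T is the late-time slope of
#   y(t) = int_0^t C_T dtau / C_T(t)  vs  x(t) = int_0^t C_p dtau / C_T(t).
# Synthetic one-tissue data are generated so the example runs end to end.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.1, 70, 200)         # minutes
Cp = 10 * np.exp(-0.1 * t)            # toy plasma input function
K1, k2 = 0.3, 0.1                     # assumed one-tissue parameters
dt = t[1] - t[0]
# tissue TAC as a discrete convolution of Cp with K1*exp(-k2*t)
Ct = K1 * dt * np.convolve(Cp, np.exp(-k2 * t))[:t.size]

x = cumulative_trapezoid(Cp, t, initial=0) / Ct
y = cumulative_trapezoid(Ct, t, initial=0) / Ct
late = t > 30                         # assumed linear region of the plot
slope = np.polyfit(x[late], y[late], 1)[0]
print("estimated V_T:", slope, "  true K1/k2 =", K1 / k2)
```

The paper's point is that for two-tissue kinetics this slope is biased even without noise; the sketch shows only the estimator itself.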
Kizilkaya, Kadir; Tempelman, Robert J
2005-01-01
We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data were generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first-parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567
Repositioning the knee joint in human body FE models using a graphics-based technique.
Jani, Dhaval; Chawla, Anoop; Mukherjee, Sudipto; Goyal, Rahul; Vusirikala, Nataraju; Jayaraman, Suresh
2012-01-01
Human body finite element models (FE-HBMs) are available in standard occupant or pedestrian postures. There is a need to have FE-HBMs in the same posture as a crash victim or to be configured in varying postures. Developing FE models for all possible positions is not practically viable. The current work aims at obtaining a posture-specific human lower extremity model by reconfiguring an existing one. A graphics-based technique was developed to reposition the lower extremity of an FE-HBM by specifying the flexion-extension angle. Elements of the model were segregated into rigid (bones) and deformable components (soft tissues). The bones were rotated about the flexion-extension axis followed by rotation about the longitudinal axis to capture the twisting of the tibia. The desired knee joint movement was thus achieved. Geometric heuristics were then used to reposition the skin. A mapping defined over the space between bones and the skin was used to regenerate the soft tissues. Mesh smoothing was then done to augment mesh quality. The developed method permits control over the kinematics of the joint and maintains the initial mesh quality of the model. For some critical areas (in the joint vicinity) where element distortion is large, mesh smoothing is done to improve mesh quality. A method to reposition the knee joint of a human body FE model was thus developed. Repositioning of a model from 9 degrees of flexion to 90 degrees of flexion in just a few seconds, without subjective intervention, was demonstrated. Because the mesh quality of the repositioned model was maintained to a predefined level (typically to the level of a well-made model in the initial configuration), the model was suitable for subsequent simulations.
NASA Astrophysics Data System (ADS)
Le Grand, Scott; Götz, Andreas W.; Walker, Ross C.
2013-02-01
A new precision model is proposed for the acceleration of all-atom classical molecular dynamics (MD) simulations on graphics processing units (GPUs). This precision model replaces double precision arithmetic with fixed point integer arithmetic for the accumulation of force components as compared to a previously introduced model that uses mixed single/double precision arithmetic. This significantly boosts performance on modern GPU hardware without sacrificing numerical accuracy. We present an implementation for NVIDIA GPUs of both generalized Born implicit solvent simulations as well as explicit solvent simulations using the particle mesh Ewald (PME) algorithm for long-range electrostatics using this precision model. Tests demonstrate both the performance of this implementation as well as its numerical stability for constant energy and constant temperature biomolecular MD as compared to a double precision CPU implementation and double and mixed single/double precision GPU implementations.
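A small demonstration of why fixed-point accumulation is attractive on massively parallel hardware: integer addition is associative, so the nondeterministic summation order across GPU threads cannot change the result, while floating-point sums are order-dependent. The 2**32 scale factor below is an assumption for illustration; the paper's precision model defines its own carefully chosen fixed-point format.

```python
# Order-independence of fixed-point accumulation vs. float accumulation.
# Integer sums of the same addends are bit-identical in any order; float
# sums generally are not, which is why force accumulation across GPU
# threads benefits from a fixed-point representation.
import random

random.seed(5)
SCALE = 2 ** 32  # assumed scale factor, for illustration only

def to_fixed(x):   return int(round(x * SCALE))
def from_fixed(n): return n / SCALE

forces = [random.uniform(-1, 1) for _ in range(100_000)]
orders = [list(forces), list(forces)]
random.shuffle(orders[1])

float_sums = [sum(o) for o in orders]                  # order-dependent
fixed_sums = [sum(map(to_fixed, o)) for o in orders]   # order-independent
print("float sums equal:", float_sums[0] == float_sums[1])
print("fixed sums equal:", fixed_sums[0] == fixed_sums[1],
      "->", from_fixed(fixed_sums[0]))
```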
Cluster analysis using multivariate mixed effects models.
Villarroel, Luis; Marshall, Guillermo; Barón, Anna E
2009-09-10
A common situation in the biological and social sciences is to have data on one or more variables measured longitudinally on a sample of individuals. A problem of growing interest in these areas is the grouping of individuals into one of two or more clusters according to their longitudinal behavior. Recently, methods have been proposed for cases where individuals are classified into clusters through a univariate linear mixed effects model for a single longitudinally measured variable. The method proposed in the current work deals with clustering and subsequent classification based on two or more variables measured longitudinally, through the fitting of non-linear multivariate mixed effects models, with parameter estimation for balanced and unbalanced data carried out using an EM algorithm. The application of the method is illustrated with an example in which the clusters are identified and the classification into clusters is compared with the true membership of individuals in one of two groups, which is known at the end of the follow-up period.
ERIC Educational Resources Information Center
Meznarich, R. A.; Shava, R. C.; Lightner, S. L.
2009-01-01
Engineering design graphics courses taught in colleges or universities should provide and equip students preparing for employment with the basic occupational graphics skill competences required by engineering and technology disciplines. Academic institutions should introduce and include topics that cover the newer and more efficient graphics…
A Comparison of Learning Style Models and Assessment Instruments for University Graphics Educators
ERIC Educational Resources Information Center
Harris, La Verne Abe; Sadowski, Mary S.; Birchman, Judy A.
2006-01-01
Kolb (2004) and others have defined learning style as a preference by which students learn and remember what they have learned. This presentation will include a summary of learning style research published in the "Engineering Design Graphics Journal" over the past 15 years on the topic of learning styles and graphics education. The…
Structural and Functional Model of Organization of Geometric and Graphic Training of the Students
ERIC Educational Resources Information Center
Poluyanov, Valery B.; Pyankova, Zhanna A.; Chukalkina, Marina I.; Smolina, Ekaterina S.
2016-01-01
The topicality of the investigated problem is stipulated by the social need for training competitive engineers with a high level of graphical literacy; especially geometric and graphic training of students and its projected results in a competence-based approach; individual characteristics and interests of the students, as well as methodological…
NASA Astrophysics Data System (ADS)
Stork, David G.; Nagy, Gabor
2010-02-01
We explored the working methods of the Italian Baroque master Caravaggio through computer graphics reconstruction of his studio, with special focus on his use of lighting and illumination in The calling of St. Matthew. Although he surely took artistic liberties while constructing this and other works and did not strive to provide a "photographic" rendering of the tableau before him, there are nevertheless numerous visual clues to the likely studio conditions and working methods within the painting: the falloff of brightness along the rear wall, the relative brightness of the faces of figures, and the variation in sharpness of cast shadows (i.e., umbrae and penumbrae). We explored two studio lighting hypotheses: that the primary illumination was local (and hence artificial) and that it was distant solar. We find that the visual evidence can be consistent with local (artificial) illumination if Caravaggio painted his figures separately, adjusting the brightness on each to compensate for the falloff in illumination. Alternatively, the evidence is consistent with solar illumination only if the rear wall had particular reflectance properties, as described by a bi-directional reflectance distribution function, BRDF. (Ours is the first research applying computer graphics to the understanding of artists' praxis that models subtle reflectance properties of surfaces through BRDFs, a technique that may find use in studies of other artists.) A somewhat puzzling visual feature, unnoted in the scholarly literature, is the upward-slanting cast shadow in the upper-right corner of the painting. We found this shadow is naturally consistent with a local illuminant passing through a small window perpendicular to the viewer's line of sight, but could also be consistent with solar illumination if the shadow was due to a slanted, overhanging section of a roof outside the artist's studio. Our results place likely conditions upon any hypotheses concerning Caravaggio's working methods and
Stojnic, Robert; Fu, Audrey Qiuyan; Adryan, Boris
2012-01-01
Inferring the combinatorial regulatory code of transcription factors (TFs) from genome-wide TF binding profiles is challenging. A major reason is that TF binding profiles significantly overlap and are therefore highly correlated. Clustered occurrence of multiple TFs at genomic sites may arise from chromatin accessibility and local cooperation between TFs, or binding sites may simply appear clustered if the profiles are generated from diverse cell populations. Overlaps in TF binding profiles may also result from measurements taken at closely related time intervals. It is thus of great interest to distinguish TFs that directly regulate gene expression from those that are indirectly associated with gene expression. Graphical models, in particular Bayesian networks, provide a powerful mathematical framework to infer different types of dependencies. However, existing methods do not perform well when the features (here: TF binding profiles) are highly correlated, when their association with the biological outcome is weak, and when the sample size is small. Here, we develop a novel computational method, the Neighbourhood Consistent PC (NCPC) algorithms, which deal with these scenarios much more effectively than existing methods do. We further present a novel graphical representation, the Direct Dependence Graph (DDGraph), to better display the complex interactions among variables. NCPC and DDGraph can also be applied to other problems involving highly correlated biological features. Both methods are implemented in the R package ddgraph, available as part of Bioconductor (http://bioconductor.org/packages/2.11/bioc/html/ddgraph.html). Applied to real data, our method identified TFs that specify different classes of cis-regulatory modules (CRMs) in Drosophila mesoderm differentiation. Our analysis also found depletion of the early transcription factor Twist binding at the CRMs regulating expression in visceral and somatic muscle cells at later stages, which suggests a CRM
ERIC Educational Resources Information Center
Thompson, John
2009-01-01
Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…
NASA Technical Reports Server (NTRS)
1987-01-01
Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high-resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional-level graphics for business communications and presentations. The products are no longer being manufactured.
Joint conditional Gaussian graphical models with multiple sources of genomic data
Chun, Hyonho; Chen, Min; Li, Bing; Zhao, Hongyu
2013-01-01
It is challenging to identify meaningful gene networks because biological interactions are often condition-specific and confounded with external factors. It is necessary to integrate multiple sources of genomic data to facilitate network inference. For example, one can jointly model expression datasets measured from multiple tissues with molecular marker data in so-called genetical genomic studies. In this paper, we propose a joint conditional Gaussian graphical model (JCGGM) that aims for modeling biological processes based on multiple sources of data. This approach is able to integrate multiple sources of information by adopting conditional models combined with joint sparsity regularization. We apply our approach to a real dataset measuring gene expression in four tissues (kidney, liver, heart, and fat) from recombinant inbred rats. Our approach reveals that the liver tissue has the highest level of tissue-specific gene regulations among genes involved in insulin responsive facilitative sugar transporter mediated glucose transport pathway, followed by heart and fat tissues, and this finding can only be attained from our JCGGM approach. PMID:24381584
Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M.; Ramírez, Javier
2015-01-01
Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in the brain activation patterns along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models allow revealing conditional independence between regions by estimating the covariance between two variables given the rest as constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits perfectly here as brain regions usually only interact with a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to improve the classification accuracy. The results obtained in this work show the
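For readers who want to experiment with a comparable analysis, sparse inverse covariance estimation is available off the shelf. The sketch below uses scikit-learn's GraphicalLassoCV on stand-in random data; the real inputs would be per-region PET or GM features, which are not reproduced here:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Rows = subjects, columns = per-region features (e.g., mean FDG-PET uptake
# or GM density); random data stands in for the real ADNI-derived inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))

model = GraphicalLassoCV().fit(X)
precision = model.precision_                 # sparse inverse covariance
edges = np.abs(precision) > 1e-8             # nonzeros = conditional dependence
np.fill_diagonal(edges, False)
print("number of edges:", int(edges.sum()) // 2)
```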
Heinzer, T.; Hansen, D.T.; Greer, W.; Sebhat, M.
1996-12-31
A geographic information system (GIS) was used in developing a graphical user interface (GUI) for use with the US Geological Survey's finite difference ground-water flow model, MODFLOW. The GUI permits the construction of a MODFLOW-based ground-water flow model from scratch in a GIS environment. The model grid, input data and output are stored as separate raster data sets which may be viewed, edited, and manipulated in a graphic environment. Other GIS data sets can be displayed with the model data sets for reference and evaluation. The GUI sets up a directory structure for storage of the files associated with the ground-water model and the raster data sets created by the interface. The GUI stores model coefficients and model output as raster values. Values stored by these raster data sets are formatted for use with the ground-water flow model code.
Stricker, C.; Fernando, R.L.; Elston, R.C.
1995-12-01
This paper presents an extension of the finite polygenic mixed model of Fernando et al. to linkage analysis. The finite polygenic mixed model, extended for linkage analysis, leads to a likelihood that can be calculated using efficient algorithms developed for oligogenic models. For comparison, linkage analysis of 5 simulated 4021-member pedigrees was performed using the usual mixed model of inheritance, approximated by Hasstedt, and the finite polygenic mixed model extended for linkage analysis presented here. Maximum likelihood estimates of the finite polygenic mixed model could be inferred to be closer to the simulated values in these pedigrees. 31 refs., 2 tabs.
Stricker, C.; Fernando, R. L.; Elston, R. C.
1995-01-01
This paper presents an extension of the finite polygenic mixed model of Fernando et al. (1994) to linkage analysis. The finite polygenic mixed model, extended for linkage analysis, leads to a likelihood that can be calculated using efficient algorithms developed for oligogenic models. For comparison, linkage analysis of 5 simulated 4021-member pedigrees was performed using the usual mixed model of inheritance, approximated by Hasstedt (1982), and the finite polygenic mixed model extended for linkage analysis presented here. Maximum likelihood estimates of the finite polygenic mixed model could be inferred to be closer to the simulated values in these pedigrees. PMID:8601502
Nonequilibrium antiferromagnetic mixed-spin Ising model.
Godoy, Mauricio; Figueiredo, Wagner
2002-09-01
We studied an antiferromagnetic mixed-spin Ising model on the square lattice subject to two competing stochastic processes. The model system consists of two interpenetrating sublattices of spins sigma=1/2 and S=1, and we take only nearest-neighbor interactions between pairs of spins. The system is in contact with a heat bath at temperature T, and the exchange of energy with the heat bath occurs via one-spin flips (Glauber dynamics). In addition, the system interacts with an external source of energy, which supplies energy to it whenever two nearest-neighboring spins are flipped simultaneously. By employing Monte Carlo simulations and a dynamical pair approximation, we found the phase diagram for the stationary states of the model in the plane of temperature T versus the competition parameter p between one- and two-spin flips. We observed the appearance of three distinct phases, which are separated by continuous transition lines. We also determined the static critical exponents along these lines and we showed that this nonequilibrium model belongs to the universality class of the two-dimensional equilibrium Ising model.
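A toy Monte Carlo sketch of the competing dynamics just described is given below. The lattice size, coupling, temperature, the value of p, and the exact form of the two-spin flip are all illustrative assumptions rather than the paper's specification:

```python
import numpy as np

L, T, J, p = 32, 1.0, -1.0, 0.9        # lattice size, temperature, coupling, p
rng = np.random.default_rng(1)
parity = np.add.outer(np.arange(L), np.arange(L)) % 2
spin = np.where(parity == 0,
                rng.choice([-0.5, 0.5], size=(L, L)),       # sigma sublattice
                rng.choice([-1.0, 0.0, 1.0], size=(L, L)))  # S sublattice

def neighbor_sum(s, i, j):
    """Sum of the four nearest neighbors with periodic boundaries."""
    return s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]

for step in range(100_000):
    i, j = rng.integers(L, size=2)
    if rng.random() < p:               # one-spin flip in contact with the bath
        new = -spin[i, j] if parity[i, j] == 0 else rng.choice([-1.0, 0.0, 1.0])
        dE = -J * (new - spin[i, j]) * neighbor_sum(spin, i, j)   # H = -J*sum(s*s')
        if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):           # Glauber rate
            spin[i, j] = new
    else:                              # two-spin flip fed by the external source
        k = (i + 1) % L                # flip the site and its right neighbor
        spin[i, j], spin[k, j] = -spin[i, j], -spin[k, j]

print("sublattice magnetizations:",
      spin[parity == 0].mean(), spin[parity == 1].mean())
```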
Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models.
Liu, Han; Roeder, Kathryn; Wasserman, Larry
2010-12-31
A challenging problem in estimating high-dimensional graphical models is to choose the regularization parameter in a data-dependent way. The standard techniques include K-fold cross-validation (K-CV), Akaike information criterion (AIC), and Bayesian information criterion (BIC). Though these methods work well for low-dimensional problems, they are not suitable in high dimensional settings. In this paper, we present StARS: a new stability-based method for choosing the regularization parameter in high dimensional inference for undirected graphs. The method has a clear interpretation: we use the least amount of regularization that simultaneously makes a graph sparse and replicable under random sampling. This interpretation requires essentially no conditions. Under mild conditions, we show that StARS is partially sparsistent in terms of graph estimation: i.e. with high probability, all the true edges will be included in the selected model even when the graph size diverges with the sample size. Empirically, the performance of StARS is compared with the state-of-the-art model selection procedures, including K-CV, AIC, and BIC, on both synthetic data and a real microarray dataset. StARS outperforms all these competing procedures.
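The selection rule has a compact algorithmic form. The following is a rough Python sketch of subsampled edge instability using scikit-learn's GraphicalLasso; the function name, thresholds, and convergence details are simplified assumptions, not the authors' code:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def stars_select(X, alphas, n_subsamples=20, beta=0.05, seed=0):
    """Pick the least regularization whose graph stays stable under subsampling."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    b = min(n, int(10 * np.sqrt(n)))           # subsample size suggested in the paper
    chosen = max(alphas)
    for alpha in sorted(alphas, reverse=True): # strongest regularization first
        freq = np.zeros((d, d))
        for _ in range(n_subsamples):
            idx = rng.choice(n, size=b, replace=False)
            prec = GraphicalLasso(alpha=alpha).fit(X[idx]).precision_
            freq += np.abs(prec) > 1e-8
        theta = freq / n_subsamples            # edge selection frequencies
        xi = 2.0 * theta * (1.0 - theta)       # per-edge instability
        instability = xi[np.triu_indices(d, k=1)].mean()
        if instability <= beta:
            chosen = alpha                     # still stable: allow less shrinkage
        else:
            break                              # instability exceeded the threshold
    return chosen
```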
Donkin, S.G.
1997-09-01
A new method of performing soil toxicity tests with free-living nematodes exposed to several metals and soil types has been adapted to the Langmuir sorption model in an attempt at bridging the gap between physico-chemical and biological data gathered in the complex soil matrix. Pseudo-Langmuir sorption isotherms have been developed using nematode toxic responses (lethality, in this case) in place of measured solvated metal, in order to more accurately model bioavailability. This method allows the graphical determination of Langmuir coefficients describing maximum sorption capacities and sorption affinities of various metal-soil combinations in the context of real biological responses of indigenous organisms. Results from nematode mortality tests with zinc, cadmium, copper, and lead in four soil types and water were used for isotherm construction. The level of agreement between these results and available literature data on metal sorption behavior in soils suggests that biologically relevant data may be successfully fitted to sorption models such as the Langmuir. This would allow for accurate prediction of soil contaminant concentrations which have minimal effect on indigenous invertebrates.
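As an illustration of the "graphical determination" step, the classical linearized Langmuir form C/q = 1/(K·q_max) + C/q_max turns the two coefficients into the intercept and slope of a straight line. A sketch with made-up numbers (not the paper's data), with the biological response standing in for sorbed metal as the abstract describes:

```python
import numpy as np

C = np.array([5.0, 10.0, 25.0, 50.0, 100.0])  # metal added to soil (mg/kg)
q = np.array([0.8, 1.4, 2.4, 3.1, 3.6])       # response-based "sorbed" measure

# Linearized Langmuir: C/q = 1/(K*q_max) + C/q_max, so a straight-line fit
# of C/q against C yields both coefficients graphically.
slope, intercept = np.polyfit(C, C / q, 1)
q_max = 1.0 / slope                           # maximum sorption capacity
K = 1.0 / (intercept * q_max)                 # sorption affinity
print(f"q_max = {q_max:.2f}, K = {K:.3f}")
```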
Pairwise graphical models for structural health monitoring with dense sensor arrays
NASA Astrophysics Data System (ADS)
Mohammadi Ghazi, Reza; Chen, Justin G.; Büyüköztürk, Oral
2017-09-01
Through advances in sensor technology and the development of camera-based measurement techniques, it has become affordable to obtain high spatial resolution data from structures. Although measured datasets become more informative as the number of sensors increases, the spatial dependencies between sensor data increase at the same time. Therefore, appropriate data analysis techniques are needed to handle the inference problem in the presence of these dependencies. In this paper, we propose a novel approach that uses graphical models (GM) to account for the spatial dependencies between sensor measurements in dense sensor networks or arrays, in order to improve damage localization accuracy in structural health monitoring (SHM) applications. Because there are always unobserved damaged states in this application, the available information is insufficient for learning the GMs. To overcome this challenge, we propose an approximated model that uses the mutual information between sensor measurements to learn the GMs. The study is backed by experimental validation of the method on two test structures. The first is a three-story two-bay steel model structure instrumented with MEMS accelerometers. The second experimental setup consists of a plate structure and a video camera to measure the displacement field of the plate. Our results show that considering the spatial dependencies via the proposed algorithm can significantly improve damage localization accuracy.
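Since the approximate model is learned from mutual information between sensor channels, a pairwise MI matrix is the natural starting object. A minimal sketch using scikit-learn's k-NN based estimator follows; the data and names are placeholders, not the authors' pipeline:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def pairwise_mi(X):
    """Mutual information between every pair of sensor channels.

    X: (n_samples, n_sensors). The MI matrix can serve as edge weights for
    learning an approximate graphical model over the sensor array.
    """
    n_sensors = X.shape[1]
    mi = np.zeros((n_sensors, n_sensors))
    for j in range(n_sensors):
        mi[:, j] = mutual_info_regression(X, X[:, j], random_state=0)
    mi = 0.5 * (mi + mi.T)        # symmetrize the k-NN based estimates
    np.fill_diagonal(mi, 0.0)     # drop self-information
    return mi

X = np.random.default_rng(0).normal(size=(500, 6))
print(pairwise_mi(X).round(3))
```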
Extended model for Richtmyer-Meshkov mix
Mikaelian, K O
2009-11-18
We examine four Richtmyer-Meshkov (RM) experiments on shock-generated turbulent mix and find them to be in good agreement with our earlier simple model, in which the growth rate dh/dt of the mixing layer following a shock or reshock is constant and given by 2αAΔv, independent of initial conditions h_0. Here A is the Atwood number (ρ_B − ρ_A)/(ρ_B + ρ_A), ρ_A and ρ_B are the densities of the two fluids, Δv is the jump in velocity induced by the shock or reshock, and α is the constant measured in Rayleigh-Taylor (RT) experiments: α^bubble ≈ 0.05-0.07, α^spike ≈ (1.8-2.5)α^bubble for A ≈ 0.7-1.0. In the extended model the growth rate begins to decay after a time t*, when h = h*, slowing down from h = h_0 + 2αAΔv·t to h ∝ t^θ behavior, with θ^bubble ≈ 0.25 and θ^spike ≈ 0.36 for A ≈ 0.7. We ascribe this change-over to loss of memory of the direction of the shock or reshock, signaling the transition from highly directional to isotropic turbulence. In the simplest extension of the model, h*/h_0 is independent of Δv and depends only on A. We find that h*/h_0 ≈ 2.5-3.5 for A ≈ 0.7-1.0.
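The extended model is easy to evaluate numerically. The sketch below implements the piecewise growth law with continuous matching at t*; the matching choice and all parameter values are illustrative assumptions, not values from the report:

```python
import numpy as np

def mix_width(t, h0, alpha, A, dv, t_star, theta):
    """Piecewise growth law: linear growth h = h0 + 2*alpha*A*dv*t up to t*,
    then power-law h ~ t**theta, matched continuously at t*."""
    h_star = h0 + 2 * alpha * A * dv * t_star
    t = np.asarray(t, dtype=float)
    return np.where(t <= t_star,
                    h0 + 2 * alpha * A * dv * t,
                    h_star * (t / t_star) ** theta)

t = np.linspace(0.0, 5e-3, 6)                    # seconds (illustrative)
print(mix_width(t, h0=0.002, alpha=0.06, A=0.7, dv=100.0,
                t_star=1e-3, theta=0.25))
```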
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
A graphical method for reducing and relating models in systems biology
Gay, Steven; Soliman, Sylvain; Fages, François
2010-01-01
Motivation: In Systems Biology, an increasing collection of models of various biological processes is currently developed and made available in publicly accessible repositories, such as biomodels.net for instance, through common exchange formats such as SBML. To date, however, there is no general method to relate different models to each other by abstraction or reduction relationships, and this task is left to the modeler for re-using and coupling models. In mathematical biology, model reduction techniques have been studied for a long time, mainly in the case where a model exhibits different time scales, or different spatial phases, which can be analyzed separately. These techniques are however far too restrictive to be applied on a large scale in systems biology, and do not take into account abstractions other than time or phase decompositions. Our purpose here is to propose a general computational method for relating models together, by considering primarily the structure of the interactions and abstracting from their dynamics in a first step. Results: We present a graph-theoretic formalism with node merge and delete operations, in which model reductions can be studied as graph matching problems. From this setting, we derive an algorithm for deciding whether there exists a reduction from one model to another, and evaluate it on the computation of the reduction relations between all SBML models of the biomodels.net repository. In particular, in the case of the numerous models of MAPK signalling, and of the circadian clock, biologically meaningful mappings between models of each class are automatically inferred from the structure of the interactions. We conclude on the generality of our graphical method, on its limits with respect to the representation of the structure of the interactions in SBML, and on some perspectives for dealing with the dynamics. Availability: The algorithms described in this article are implemented in the open-source software modeling
ERIC Educational Resources Information Center
Langan, Jean
1996-01-01
Provides an overview of the research literature concerning the development of graphic art skills, relevant instructional methods, and the specific use of visual models as an instructional method. Pertinent findings, relevant methodological issues, and major conclusions are all discussed. Briefly addresses the viewpoints of gestalt and learning…
ERIC Educational Resources Information Center
de Rooij, Mark; Heiser, Willem J.
2005-01-01
Although RC(M)-association models have become a generally useful tool for the analysis of cross-classified data, the graphical representation resulting from such an analysis can at times be misleading. The relationships present between row category points and column category points cannot be interpreted by inter point distances but only through…
Full Stokes finite-element modeling of ice sheets using a graphics processing unit
NASA Astrophysics Data System (ADS)
Seddik, H.; Greve, R.
2016-12-01
Thermo-mechanical simulation of ice sheets is an important approach to understand and predict their evolution in a changing climate. For that purpose, higher-order (e.g., ISSM, BISICLES) and full Stokes (e.g., Elmer/Ice, http://elmerice.elmerfem.org) models are increasingly used to model the flow of entire ice sheets more accurately. In parallel to this development, the rapidly improving performance and capabilities of Graphics Processing Units (GPUs) make it possible to offload more of the calculations of complex and computationally demanding problems onto those devices efficiently. Thus, to continue the trend toward full Stokes models at greater resolutions, the use of GPUs should be considered in the implementation of ice sheet models. We developed the GPU-accelerated ice-sheet model Sainō. Sainō is an Elmer (http://www.csc.fi/english/pages/elmer) derivative implemented in Objective-C which solves the full Stokes equations with the finite element method. It uses the standard OpenCL language (http://www.khronos.org/opencl/) to offload the assembly of the finite element matrix onto the GPU. A mesh-coloring scheme is used so that elements of the same color (sharing no nodes) are assembled in parallel on the GPU without the need for synchronization primitives. The current implementation shows that, for the ISMIP-HOM experiment A, during the matrix assembly in double precision with 8000, 87,500 and 252,000 brick elements, Sainō is respectively 2x, 10x and 14x faster than Elmer/Ice (when both models are run on a single processing unit). In single precision, Sainō is even 3x, 20x and 25x faster than Elmer/Ice. A detailed description of the comparative results between Sainō and Elmer/Ice will be presented, along with further perspectives on optimization and the limitations of the current implementation.
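The mesh-coloring idea is simple to prototype: color elements so that no two elements sharing a node share a color, then assemble each color class in parallel. A greedy sketch follows (illustrative only; Sainō's actual scheme is not reproduced here):

```python
def color_elements(elements):
    """Greedy coloring so that elements sharing a node get different colors.

    elements: list of node-index tuples, one per finite element. Elements
    of the same color can then be assembled concurrently on the GPU
    without atomic operations or other synchronization.
    """
    node_colors = {}                      # node id -> set of colors touching it
    colors = []
    for elem in elements:
        forbidden = set()
        for node in elem:
            forbidden |= node_colors.get(node, set())
        c = 0
        while c in forbidden:             # smallest color unused by any neighbor
            c += 1
        colors.append(c)
        for node in elem:
            node_colors.setdefault(node, set()).add(c)
    return colors

# Two quads sharing an edge must differ; the disjoint third quad reuses color 0.
print(color_elements([(0, 1, 5, 4), (1, 2, 6, 5), (8, 9, 13, 12)]))  # [0, 1, 0]
```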
Donovan, Courtney
2014-09-01
This paper focuses on the graphic pathogeographies in David B.'s Epileptic and David Small's Stitches: A Memoir to highlight the significance of geographic concepts in graphic novels of health and disease. Despite its importance in such works, few scholars have examined the role of geography in their narrative and structure. I examine the role of place in Epileptic and Stitches to extend the academic discussion on graphic novels of health and disease and identify how such works bring attention to the role of geography in the individual's engagement with health, disease, and related settings.
Liu, Quan
2016-01-01
Learning a Gaussian graphical model with latent variables is ill-posed when the sample complexity is insufficient, and it must therefore be appropriately regularized. A common choice is a convex ℓ1 plus nuclear norm to regularize the searching process. However, the best estimator performance is not always achieved with these additive convex regularizations, especially when the sample complexity is low. In this paper, we consider a concave additive regularization which does not require the strong irrepresentable condition. We use concave regularization to correct the intrinsic estimation biases from the Lasso and nuclear penalties as well. We establish the proximity operators for our concave regularizations, which induce sparsity and low-rankness, respectively. In addition, we extend our method to also allow the decomposition of fused structure-sparsity plus low-rankness, providing a powerful tool for models with temporal information. Specifically, we develop a nontrivial modified alternating direction method of multipliers with at least local convergence. Finally, we use both synthetic and real data to validate the excellence of our method. In the application of reconstructing two-stage cancer networks, "the Warburg effect" can be revealed directly. PMID:27843485
Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS
NASA Technical Reports Server (NTRS)
Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.
2009-01-01
Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with desired multiple requests submitted in sequence, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
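For orientation, one common Gompertz parameterization and a plain fixed-effects fit are sketched below. The parameter values and data are synthetic; a true mixed-model fit would add bird-specific random effects (e.g., on the asymptote) estimated by ML/REML, which is beyond this sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, w_max, b, c):
    """One common Gompertz parameterization (assumed, not necessarily the paper's)."""
    return w_max * np.exp(-b * np.exp(-c * t))

t = np.arange(0, 57, 7, dtype=float)                 # age in days
rng = np.random.default_rng(2)
bw = gompertz(t, 4000.0, 4.0, 0.06) * rng.normal(1.0, 0.02, t.size)

# Fixed-effects fit; the mixed model described above would also partition
# between-bird from within-bird variation.
params, _ = curve_fit(gompertz, t, bw, p0=[3500.0, 3.0, 0.05])
print(params)
```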
MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation
Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...
Computer graphics in aerodynamic analysis
NASA Technical Reports Server (NTRS)
Cozzolongo, J. V.
1984-01-01
The use of computer graphics and its application to aerodynamic analyses on a routine basis is outlined. The mathematical modelling of the aircraft geometries and the shading technique implemented are discussed. Examples of computer graphics used to display aerodynamic flow field data and aircraft geometries are shown. A future need in computer graphics for aerodynamic analyses is addressed.
NASA Astrophysics Data System (ADS)
Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.
2017-06-01
An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
On Local Homogeneity and Stochastically Ordered Mixed Rasch Models
ERIC Educational Resources Information Center
Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg
2006-01-01
Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…
Kim, Won Hwa; Kim, Hyunwoo J; Adluru, Nagesh; Singh, Vikas
2016-06-01
A major goal of imaging studies such as the (ongoing) Human Connectome Project (HCP) is to characterize the structural network map of the human brain and identify its associations with covariates such as genotype, risk factors, and so on that correspond to an individual. But the set of image-derived measures and the set of covariates are both large, so we must first estimate a 'parsimonious' set of relations between the measurements. For instance, a Gaussian graphical model will show conditional independences between the random variables, which can then be used to set up specific downstream analyses. But most such data involve a large list of 'latent' variables that remain unobserved, yet affect the 'observed' variables substantially. Accounting for such latent variables is not directly addressed by standard precision matrix estimation, and is tackled via highly specialized optimization methods. This paper offers a unique harmonic analysis view of this problem. By casting the estimation of the precision matrix in terms of a composition of low-frequency latent variables and high-frequency sparse terms, we show how the problem can be formulated using a new wavelet-type expansion in non-Euclidean spaces. Our formulation poses the estimation problem in the frequency space and shows how it can be solved by a simple sub-gradient scheme. We provide a set of scientific results on ~500 scans from the recently released HCP data where our algorithm recovers highly interpretable and sparse conditional dependencies between brain connectivity pathways and well-known covariates.
Analysis of impact of general-purpose graphics processor units in supersonic flow modeling
NASA Astrophysics Data System (ADS)
Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.
2017-06-01
Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of the use of GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. Speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50× speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
Qian, Yuntao; Murphy, Robert F
2008-02-15
There is extensive interest in automating the collection, organization and analysis of biological data. Data in the form of images in online literature present special challenges for such efforts. The first steps in understanding the contents of a figure are decomposing it into panels and determining the type of each panel. In biological literature, panel types include many kinds of images collected by different techniques, such as photographs of gels or images from microscopes. We have previously described the SLIF system (http://slif.cbi.cmu.edu) that identifies panels containing fluorescence microscope images among figures in online journal articles as a prelude to further analysis of the subcellular patterns in such images. This system contains a pretrained classifier that uses image features to assign a type (class) to each separate panel. However, the types of panels in a figure are often correlated, so that we can consider the class of a panel to be dependent not only on its own features but also on the types of the other panels in a figure. In this article, we introduce the use of a type of probabilistic graphical model, a factor graph, to represent the structured information about the images in a figure, and permit more robust and accurate inference about their types. We obtain significant improvement over results for considering panels separately. The code and data used for the experiments described here are available from http://murphylab.web.cmu.edu/software.
Multi-domain Hierarchical Free-Sketch Recognition Using Graphical Models
NASA Astrophysics Data System (ADS)
Alvarado, Christine
In recent years there has been an increasing interest in sketch-based user interfaces, but the problem of robust free-sketch recognition remains largely unsolved. This chapter presents a graphical-model-based approach to free-sketch recognition that uses context to improve recognition accuracy without placing unnatural constraints on the way the user draws. Our approach uses context to guide the search for possible interpretations and uses a novel form of dynamically constructed Bayesian networks to evaluate these interpretations. An evaluation of this approach on two domains—family trees and circuit diagrams—reveals that in both domains the use of context to reclassify low-level shapes significantly reduces recognition error over a baseline system that does not reinterpret low-level classifications. Finally, we discuss an emerging technique to solve a major remaining challenge for multi-domain sketch recognition revealed by our evaluation: the problem of grouping strokes into individual symbols reliably and efficiently, without placing unnatural constraints on the user's drawing style.
Extremely large-scale simulation of a Kardar-Parisi-Zhang model using graphics cards.
Kelling, Jeffrey; Ódo, Géza
2011-12-01
The octahedron model introduced recently has been implemented on graphics cards, which permits extremely large-scale simulations via binary lattice gases and bit-coded algorithms. We confirm scaling behavior belonging to the two-dimensional Kardar-Parisi-Zhang universality class and find a surface growth exponent β = 0.2415(15) on 2^17 × 2^17 systems, ruling out β = 1/4 suggested by field theory. The maximum speedup with respect to a single CPU is 240. The steady state has been analyzed by finite-size scaling and a roughness exponent α = 0.393(4) is found. Corrections to scaling are computed, and the power-spectrum density of the steady state is determined. We calculate the universal scaling functions and cumulants and show that the limit distribution can be obtained for the sizes considered. We provide numerical fits for the small- and large-tail behavior of the steady-state scaling function of the interface width.
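Operationally, the growth exponent quoted above is a log-log slope of interface width against time before saturation. A toy estimation sketch on synthetic data (not the paper's simulation output) follows:

```python
import numpy as np

# Synthetic width data obeying W(t) ~ t**beta with beta = 0.2415, plus noise;
# the estimate is the slope of log W against log t in the growth regime.
rng = np.random.default_rng(3)
t = np.logspace(0, 4, 30)
W = 0.5 * t ** 0.2415 * np.exp(rng.normal(0.0, 0.01, t.size))
beta, _ = np.polyfit(np.log(t), np.log(W), 1)
print(f"estimated beta = {beta:.4f}")
```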
NASA Astrophysics Data System (ADS)
Secretan, Y.
A discussion of the modular program Mikado is presented. Mikado was developed with the goal of creating a flexible graphic tool to display and help analyze the results of finite element fluid flow computations. Mikado works on unstructured meshes, with elements of mixed geometric type, but also offers the possibility of using structured meshes. The program can be operated by both menu and mouse (interactive), or by command file (batch). Mikado is written in FORTRAN, except for a few system dependent subroutines which are in C. It runs presently on Silicon Graphics' workstations and could be easily ported to the IBM-RISC System/6000 family of workstations.
Lagrangian Mixing in an Axisymmetric Hurricane Model
2010-07-23
Measures of mixing are obtained by particle integration and are computed for nonlocal regions. The global measures of mixing derived from finite-time Lyapunov exponents, relative dispersion, and a measured mixing rate are applied to distinct regions representing the flow, where the field varies slowly both in space and time. Some of the local techniques currently in use are finite-time Lyapunov exponents (Haller, 2002; Haller
NASA Technical Reports Server (NTRS)
1990-01-01
A mathematician, David R. Hedgley, Jr., developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in the production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with the DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.
Liu, Fang; Luehr, Nathan; Kulik, Heather J; Martínez, Todd J
2015-07-14
The conductor-like polarization model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by computational expense of both the gas phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementation using over 20 small proteins in solvent environment. Using a single GPU, our method evaluates the C-PCM related integrals and their derivatives more than 10× faster than that with a conventional CPU-based implementation. Our improvements to the linear solver provide a further 3× acceleration. The overall calculations including C-PCM solvation require, typically, 20-40% more effort than that for their gas phase counterparts for a moderate basis set and molecule surface discretization level. The relative cost of the C-PCM solvation correction decreases as the basis sets and/or cavity radii increase. Therefore, description of solvation with this model should be routine. We also discuss applications to the study of the conformational landscape of an amyloid fibril.
New applications of a simple mixed-layer model
NASA Astrophysics Data System (ADS)
Fitzjarrald, David R.
1982-04-01
Model formulation of the balance between surface heat and moisture fluxes and subsidence that determines the state of the mixed layer is used to estimate cooling and drying rates in the mixed layer above the tropical ocean based on GATE observations. Estimated cooling rates are comparable to observed radiative cooling rates for thick mixed layers characteristic of undisturbed conditions but are up to five times larger for shallow mixed layers observed during disturbed periods. The additional cooling and drying in the mixed layer needed to maintain shallow, cool mixed layers is hypothesized to be the net result of an assemblage of downdrafts. A new scaling scheme for non-dimensionalizing the mixed-layer thermodynamic budget equations is introduced. The ratio of subsidence at the top of the mixed layer to the product of the entrainment coefficient, a bulk aerodynamic transfer coefficient, and the surface-layer wind speed is shown theoretically to be a fundamental descriptor of the mixed-layer environment.
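The non-dimensional descriptor defined in the last sentence is a one-line computation. The sketch below evaluates it for illustrative values (assumed, not GATE observations):

```python
# Mixed-layer descriptor: subsidence at the mixed-layer top divided by
# (entrainment coefficient x bulk transfer coefficient x surface wind speed).
w_sub = 0.006     # subsidence at mixed-layer top (m/s)
A_e = 0.2         # entrainment coefficient (assumed)
C_T = 1.3e-3      # bulk aerodynamic transfer coefficient
U = 7.0           # surface-layer wind speed (m/s)
print("mixed-layer descriptor:", w_sub / (A_e * C_T * U))
```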
Stanley, Elise F
2015-01-01
At fast-transmitting presynaptic terminals, Ca2+ enters through voltage-gated calcium channels (CaVs) and binds to a synaptic vesicle (SV)-associated calcium sensor (SV-sensor) to gate fusion and discharge. An open CaV generates a high-concentration plume, or nanodomain, of Ca2+ that dissipates precipitously with distance from the pore. At most fast synapses, such as the frog neuromuscular junction (NMJ), the SV sensors are located sufficiently close to individual CaVs to be gated by single nanodomains. However, at others, such as the mature rodent calyx of Held, the physiology is more complex, with evidence for CaVs both close to and distant from the SV sensor, and it is argued that release is gated primarily by the overlapping Ca2+ nanodomains from many CaVs. We devised a 'graphic modeling' method to sum the Ca2+ from individual CaVs located at varying distances from the SV-sensor to determine the SV release probability, and also the fraction of that probability that can be attributed to single-domain gating. This method was applied first to simplified, low- and high-CaV-density model release sites, and then to published data on the contrasting frog NMJ and rodent calyx of Held native synapses. We report 3 main predictions: the SV-sensor is positioned very close to the point at which the SV fuses with the membrane; single-domain release gating predominates even at synapses where the SV abuts a large cluster of CaVs; and even relatively remote CaVs can contribute significantly to single-domain-based gating. PMID:26457441
Lee, Mikyung; Huang, Ruili; Tong, Weida
2016-01-01
Nuclear receptors (NRs) are ligand-activated transcriptional regulators that play vital roles in key biological processes such as growth, differentiation, metabolism, reproduction, and morphogenesis. Disruption of NRs can result in adverse health effects such as NR-mediated endocrine disruption. A comprehensive understanding of the core transcriptional targets regulated by NRs helps to elucidate their key biological processes in both toxicological and therapeutic aspects. In this study, we applied a probabilistic graphical model to identify the transcriptional targets of NRs and the biological processes they govern. The Tox21 program profiled a collection of approximately 10,000 environmental chemicals and drugs against a panel of human NRs in a quantitative high-throughput screening format for their NR disruption potential. The Japanese Toxicogenomics Project, one of the most comprehensive efforts in the field of toxicogenomics, generated large-scale gene expression profiles on the effects of 131 compounds (in its first phase of study) at various doses and durations, and their combinations. We applied an author-topic model to these 2 toxicological datasets, which consist of 11 NRs run in either agonist and/or antagonist mode (18 assays total) and 203 in vitro human gene expression profiles connected by 52 shared drugs. As a result, a set of clusters (topics), each consisting of a set of NRs and their associated target genes, was determined. Various transcriptional targets of the NRs were identified by assays run in either agonist or antagonist mode. Our results were validated by functional analysis and compared with TRANSFAC data. In summary, our approach resulted in effective identification of associated/affected NRs and their target genes, providing biologically meaningful hypotheses embedded in their relationships. PMID:26643261
CONVECTIVE OVERSHOOT MIXING IN MODELS OF THE STELLAR INTERIOR
Zhang, Q. S.
2013-04-01
Convective overshoot mixing plays an important role in stellar structure and evolution. However, overshoot mixing is also a long-standing problem; it is one of the most uncertain factors in stellar physics. As is well known, convective overshoot mixing is determined by the radial turbulent flux of the chemical component. In this paper, a local model of the radial turbulent flux of the chemical component is established based on hydrodynamic equations and some model assumptions and is tested in stellar models. The main conclusions are as follows. (1) The local model shows that convective overshoot mixing could be regarded as a diffusion process, and the diffusion coefficient for different chemical elements is the same. However, if the non-local terms, i.e., the gradient of the third-order moments, are taken into account, the diffusion coefficient for each chemical element should in general be different. (2) The diffusion coefficient of convective/overshoot mixing shows different behaviors in the convection zone and in the overshoot region because the characteristic length scale of the mixing is large in the convection zone and small in the overshoot region. Overshoot mixing should be regarded as a weak mixing process. (3) The diffusion coefficient of mixing is tested in stellar models, and it is found that a single choice of our central mixing parameter leads to consistent results for a solar convective envelope model as well as for core convection models of stars with masses from 2 M⊙ to 10 M⊙.
Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.
2009-01-01
The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.
Robot graphic simulation testbed
NASA Technical Reports Server (NTRS)
Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.
1991-01-01
The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.
Javan, Ramin; Zeman, Merissa N
2017-08-14
In the context of medical three-dimensional (3D) printing, in addition to 3D reconstruction from cross-sectional imaging, graphic design plays a role in developing and/or enhancing 3D-printed models. A custom prototype modular 3D model of the liver was graphically designed depicting segmental anatomy of the parenchyma containing color-coded hepatic vasculature and biliary tree. Subsequently, 3D printing was performed using transparent resin for the surface of the liver and polyamide material to develop hollow internal structures that allow for passage of catheters and wires. A number of concepts were incorporated into the model. A representative mass with surrounding feeding arterial supply was embedded to demonstrate tumor embolization. A straight narrow hollow tract connecting the mass to the surface of the liver, displaying the path of a biopsy device's needle, and the concept of needle "throw" length was designed. A connection between the middle hepatic and right portal veins was created to demonstrate transjugular intrahepatic portosystemic shunt (TIPS) placement. A hollow amorphous structure representing an abscess was created to allow the demonstration of drainage catheter placement with the formation of pigtail tip. Percutaneous biliary drain and cholecystostomy tube placement were also represented. The skills of graphic designers may be utilized in creating highly customized 3D-printed models. A model was developed for the demonstration and simulation of multiple hepatobiliary interventions, for training purposes, patient counseling and consenting, and as a prototype for future development of a functioning interventional phantom.
Kim, Won Hwa; Kim, Hyunwoo J.; Adluru, Nagesh; Singh, Vikas
2016-01-01
A major goal of imaging studies such as the (ongoing) Human Connectome Project (HCP) is to characterize the structural network map of the human brain and identify its associations with covariates such as genotype, risk factors, and so on that correspond to an individual. But the set of image derived measures and the set of covariates are both large, so we must first estimate a 'parsimonious' set of relations between the measurements. For instance, a Gaussian graphical model will show conditional independences between the random variables, which can then be used to set up specific downstream analyses. But most such data involve a large list of 'latent' variables that remain unobserved, yet affect the 'observed' variables substantially. Accounting for such latent variables is not directly addressed by standard precision matrix estimation, and is tackled via highly specialized optimization methods. This paper offers a unique harmonic analysis view of this problem. By casting the estimation of the precision matrix in terms of a composition of low-frequency latent variables and high-frequency sparse terms, we show how the problem can be formulated using a new wavelet-type expansion in non-Euclidean spaces. Our formulation poses the estimation problem in the frequency space and shows how it can be solved by a simple sub-gradient scheme. We provide a set of scientific results on ~500 scans from the recently released HCP data where our algorithm recovers highly interpretable and sparse conditional dependencies between brain connectivity pathways and well-known covariates. PMID:28255221
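For orientation only, the sketch below estimates a sparse precision matrix with the ordinary graphical lasso from scikit-learn on simulated data; it is a standard baseline and deliberately does not implement the paper's latent low-frequency/high-frequency sparse decomposition or its sub-gradient solver.

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 20))      # stand-in for image-derived measures
    X[:, 1] += 0.8 * X[:, 0]                # induce one conditional dependence

    model = GraphicalLasso(alpha=0.1).fit(X)
    P = model.precision_                    # sparse estimated precision matrix
    print(np.round(P[:3, :3], 2))           # nonzero off-diagonals = graph edges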
The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education
Battulga, Bayanmunkh; Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki
2012-01-01
Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759
Estimation of the linear mixed integrated Ornstein-Uhlenbeck model.
Hughes, Rachael A; Kenward, Michael G; Sterne, Jonathan A C; Tilling, Kate
2017-05-24
The linear mixed model with an added integrated Ornstein-Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance).
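To make the serial-correlation structure concrete, this sketch simulates an IOU deviation path by Euler-Maruyama: an Ornstein-Uhlenbeck "velocity" whose time integral gives the within-subject deviation, with alpha controlling the degree of derivative tracking. Parameter values are illustrative, and this is the stochastic process only, not the Stata implementation described above.

    import numpy as np

    # Euler-Maruyama simulation of an integrated Ornstein-Uhlenbeck process:
    # v is an OU "velocity"; its integral u is the serially correlated deviation.
    rng = np.random.default_rng(1)
    alpha, sigma, dt, n = 0.5, 1.0, 0.01, 5000   # illustrative parameters
    v, u = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        v[t] = v[t-1] - alpha * v[t-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        u[t] = u[t-1] + v[t-1] * dt              # small alpha -> smooth, derivative-tracking paths
    print(u[-5:])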
Perception in statistical graphics
NASA Astrophysics Data System (ADS)
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Random vs. nonrandom mixing in network epidemic models.
Zaric, Gregory S
2002-04-01
In this paper we compare random and nonrandom mixing patterns for network epidemic models. Several studies have examined the impact of different mixing patterns using compartmental epidemic models. We extend the work on compartmental models to the case of network epidemic models. We define two nonrandom mixing patterns for a network epidemic model and investigate the impact that these mixing patterns have on a number of epidemic outcomes when compared to random mixing. We find that different mixing assumptions lead to small but statistically significant differences in disease prevalence, cumulative number of new infections, final population size, and network structure. Significant differences in outcomes were more likely to be observed for larger populations and longer time horizons. Sensitivity analysis revealed that greater differences in outcomes between random and nonrandom mixing were associated with a larger incremental mortality rate among infected individuals, a larger average number of partners, and a greater probability of forming new partnerships. When adjusted for the initial population size, differences between random and nonrandom mixing models were approximately constant across all population sizes considered. We also considered the impact that differences between mixing models might have on the cost effectiveness ratio for epidemic control interventions.
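The flavor of such comparisons can be reproduced in a toy discrete-time SIR simulation on two networkx graphs, one random and one clustered as a crude stand-in for nonrandom mixing; the paper's partnership-formation dynamics, mortality, and cost-effectiveness outcomes are not modeled here.

    import numpy as np
    import networkx as nx

    def sir_final_size(G, beta=0.3, gamma=0.1, seed=0, steps=200):
        # synchronous-update SIR; returns the number ever infected
        rng = np.random.default_rng(seed)
        state = {v: "S" for v in G}
        state[next(iter(G))] = "I"
        for _ in range(steps):
            new = dict(state)
            for v in G:
                if state[v] == "I":
                    if rng.random() < gamma:
                        new[v] = "R"
                    for u in G[v]:
                        if state[u] == "S" and rng.random() < beta:
                            new[u] = "I"
            state = new
        return sum(s != "S" for s in state.values())

    n, k = 500, 6
    random_mix = nx.gnm_random_graph(n, n * k // 2, seed=2)    # random mixing
    clustered = nx.watts_strogatz_graph(n, k, 0.05, seed=2)    # nonrandom (clustered) mixing
    print(sir_final_size(random_mix), sir_final_size(clustered))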
Shin, Woong-Hee; Kang, Xuejiao; Zhang, Jian; Kihara, Daisuke
2017-01-01
Protein tertiary structure prediction methods have matured in recent years. However, some proteins defy accurate prediction due to factors such as inadequate template structures. While existing model quality assessment methods predict global model quality relatively well, there is substantial room for improvement in local quality assessment, i.e. assessment of the error at each residue position in a model. Local quality is very important information for practical applications of structure models such as interpreting/designing site-directed mutagenesis of proteins. We have developed a novel local quality assessment method for protein tertiary structure models. The method, named Graph-based Model Quality assessment method (GMQ), explicitly considers the predicted quality of spatially neighboring residues using a graph representation of a query protein structure model. GMQ uses a conditional random field as the core of its algorithm, and performs a binary prediction of the quality of each residue in a model, indicating if a residue position is likely to be within an error cutoff or not. The accuracy of GMQ was improved by considering larger graphs to include quality information of more surrounding residues. Moreover, we found that using different edge weights in graphs reflecting different secondary structures further improves the accuracy. GMQ showed competitive performance on a benchmark for quality assessment of structure models from the Critical Assessment of Techniques for Protein Structure Prediction (CASP). PMID:28074879
Lagrangian mixed layer modeling of the western equatorial Pacific
NASA Technical Reports Server (NTRS)
Shinoda, Toshiaki; Lukas, Roger
1995-01-01
Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs there due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.
Radiolysis Model Formulation for Integration with the Mixed Potential Model
Buck, Edgar C.; Wittman, Richard S.
2014-07-10
The U.S. Department of Energy Office of Nuclear Energy (DOE-NE), Office of Fuel Cycle Technology has established the Used Fuel Disposition Campaign (UFDC) to conduct the research and development activities related to storage, transportation, and disposal of used nuclear fuel (UNF) and high-level radioactive waste. Within the UFDC, the components for a general system model of the degradation and subsequent transport of UNF are being developed to analyze the performance of disposal options [Sassani et al., 2012]. Two model components of the near-field part of the problem are the ANL Mixed Potential Model and the PNNL Radiolysis Model. This report is in response to the desire to integrate the two models as outlined in [Buck, E.C., J.L. Jerden, W.L. Ebert, R.S. Wittman, (2013) "Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation," FCRD-UFD-2013-000290, M3FT-PN0806058
Reacting to Graphic Horror: A Model of Empathy and Emotional Behavior.
ERIC Educational Resources Information Center
Tamborini, Ron; And Others
1990-01-01
Studies viewer response to graphic horror films. Reports that undergraduate mass communication students viewed clips from two horror films and a scientific television program. Concludes that people who score high on measures for wandering imagination, fictional involvement, humanistic orientation, and emotional contagion tend to find horror films…
Implementing a Multiple Criteria Model Base in Co-Op with a Graphical User Interface Generator
1993-09-23
Decision Support System (Co-op) for Windows. The algorithms and the graphical user interfaces for these modules are implemented using Microsoft Visual Basic under the Windows-based environment operating on an IBM-compatible microcomputer. Design of the MCDM programs' interface is based on general interface design principles of user control, screen design, and layout.
[Linear mixed modeling of branch biomass for Korean pine plantation].
Dong, Li-Hu; Li, Feng-Ri; Jia, Wei-Wei
2013-12-01
Based on the measurement of 3643 branch biomass samples of 60 Korean pine (Pinus koraiensis) trees from Mengjiagang Forest Farm, Heilongjiang Province, all-subset regression techniques were used to develop the branch biomass models (branch, foliage, and total biomass models). The optimal base model of branch biomass was developed as ln w = k1 + k2 ln Lb + k3 ln Db. Then, linear mixed models were developed based on PROC MIXED of SAS 9.3 software, and evaluated with AIC, BIC, log-likelihood, and likelihood ratio tests. The results showed that the foliage and total biomass models with parameters k1, k2, and k3 as mixed effects showed the best performance, while the branch biomass model performed best with two parameters (k5 and k2 in the original notation) as mixed effects. Finally, we evaluated the optimal base model and the mixed model of branch biomass. Model validation confirmed that the mixed model was better than the optimal base model. The mixed model with random parameters could not only provide more accurate and precise predictions, but also capture individual differences through the variance-covariance structure.
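A hedged sketch of this model class using statsmodels MixedLM on simulated data: the base form ln w = k1 + k2 ln Lb + k3 ln Db with tree-level random effects on the intercept and the ln Lb slope, mirroring a two-parameter mixed-effects variant. All variable names and values are invented, not the Mengjiagang measurements.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    tree = np.repeat(np.arange(30), 20)                   # 30 trees, 20 branches each
    lnLb = rng.normal(1.0, 0.3, tree.size)                # log branch length (toy)
    lnDb = rng.normal(0.5, 0.3, tree.size)                # log branch diameter (toy)
    b0 = rng.normal(0, 0.10, 30)[tree]                    # random intercept per tree
    b1 = rng.normal(0, 0.05, 30)[tree]                    # random lnLb slope per tree
    lnw = 0.2 + b0 + (1.1 + b1) * lnLb + 0.9 * lnDb + rng.normal(0, 0.05, tree.size)
    df = pd.DataFrame(dict(tree=tree, lnLb=lnLb, lnDb=lnDb, lnw=lnw))

    # Random effects on the intercept and the lnLb slope, by tree.
    fit = smf.mixedlm("lnw ~ lnLb + lnDb", df, groups=df["tree"],
                      re_formula="~lnLb").fit()
    print(fit.summary())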
ERIC Educational Resources Information Center
Towler, Alan L.
This guide to teaching graphic arts, one in a series of instructional materials for junior high industrial arts education, is designed to assist teachers as they plan and implement new courses of study and as they make revisions and improvements in existing courses in order to integrate classroom learning with real-life experiences. This graphic…
Quantifying spatial distribution of spurious mixing in ocean models.
Ilıcak, Mehmet
2016-12-01
Numerical mixing is inevitable for ocean models due to tracer advection schemes. Until now, there has been no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of the spurious diapycnal mixing in an ocean model. This new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic eddies test cases. We can quantify the amount and the location of numerical mixing. We find that high-shear areas are the main regions susceptible to numerical truncation errors. We also test the new method to quantify the numerical mixing under different horizontal momentum closures. We conclude that the Smagorinsky viscosity produces less numerical mixing than the Leith viscosity using the same non-dimensional constant.
2012-01-01
Background Starch serves as a temporal storage of carbohydrates in plant leaves during day/night cycles. To study transcriptional regulatory modules of this dynamic metabolic process, we conducted gene regulation network analysis based on small-sample inference of a graphical Gaussian model (GGM). Results Time-series significance analysis was applied to Arabidopsis leaf transcriptome data to obtain a set of genes that are highly regulated under a diurnal cycle. A total of 1,480 diurnally regulated genes included 21 starch metabolic enzymes, 6 clock-associated genes, and 106 transcription factors (TFs). A starch-clock-TF gene regulation network comprising 117 nodes and 266 edges was constructed by GGM from these 133 significant genes that are potentially related to the diurnal control of starch metabolism. From this network, we found that β-amylase 3 (b-amy3: At4g17090), which participates in starch degradation in the chloroplast, is the most frequently connected gene (a hub gene). The robustness of the gene-to-gene regulatory network was further analyzed by TF binding site prediction and by evaluating global co-expression of TFs and target starch metabolic enzymes. As a result, two TFs, indeterminate domain 5 (AtIDD5: At2g02070) and constans-like (COL: At2g21320), were identified as positive regulators of starch synthase 4 (SS4: At4g18240). The inference model of AtIDD5-dependent positive regulation of SS4 gene expression was experimentally supported by decreased SS4 mRNA accumulation in Atidd5 mutant plants during the light period of both short and long day conditions. COL was also shown to positively control SS4 mRNA accumulation. Furthermore, the knockout of AtIDD5 and COL led to deformation of chloroplasts and their contained starch granules. This deformity also affected the number of starch granules per chloroplast, which increased significantly in both knockout mutant lines. Conclusions In this study, we utilized a systematic approach of microarray analysis to discover
Models of neutrino mass, mixing and CP violation
NASA Astrophysics Data System (ADS)
King, Stephen F.
2015-12-01
In this topical review we argue that neutrino mass and mixing data motivates extending the Standard Model (SM) to include a non-Abelian discrete flavour symmetry in order to accurately predict the large leptonic mixing angles and CP violation. We begin with an overview of the SM puzzles, followed by a description of some classic lepton mixing patterns. Lepton mixing may be regarded as a deviation from tri-bimaximal mixing, with charged lepton corrections leading to solar mixing sum rules, or tri-maximal lepton mixing leading to atmospheric mixing sum rules. We survey neutrino mass models, using a roadmap based on the open questions in neutrino physics. We then focus on the seesaw mechanism with right-handed neutrinos, where sequential dominance (SD) can account for large lepton mixing angles and CP violation, with precise predictions emerging from constrained SD (CSD). We define the flavour problem and discuss progress towards a theory of flavour using GUTs and discrete family symmetry. We classify models as direct, semidirect or indirect, according to the relation between the Klein symmetry of the mass matrices and the discrete family symmetry, in all cases focussing on spontaneous CP violation. Finally we give two examples of realistic and highly predictive indirect models with CSD, namely an A to Z of flavour with Pati-Salam and a fairly complete A4 × SU(5) SUSY GUT of flavour, where both models have interesting implications for leptogenesis.
ModelMuse: A U.S. Geological Survey Open-Source, Graphical User Interface for Groundwater Models
NASA Astrophysics Data System (ADS)
Winston, R. B.
2013-12-01
ModelMuse is a free publicly-available graphical preprocessor used to generate the input and display the output for several groundwater models. It is written in Object Pascal and the source code is available on the USGS software web site. Supported models include the MODFLOW family of models, PHAST (version 1), and SUTRA version 2.2. With MODFLOW and PHAST, the user generates a grid and uses 'objects' (points, lines, and polygons) to define boundary conditions and the spatial variation in aquifer properties. Because the objects define the spatial variation, the grid can be changed without the user needing to re-enter spatial data. The same paradigm is used with SUTRA except that the user generates a quadrilateral finite-element mesh instead of a rectangular grid. The user interacts with the model in a top view and in a vertical cross section. The cross section can be at any angle or location. There is also a three-dimensional view of the model. For SUTRA, a new method of visualizing the permeability and related properties has been introduced. In three dimensional SUTRA models, the user specifies the permeability tensor by specifying permeability in three mutually orthogonal directions that can be oriented in space in any direction. Because it is important for the user to be able to check both the magnitudes and directions of the permeabilities, ModelMuse displays the permeabilities as either a two-dimensional or a three-dimensional vector plot. Color is used to differentiate the maximum, middle, and minimum permeability vectors. The magnitude of the permeability is shown by the vector length. The vector angle shows the direction of the maximum, middle, or minimum permeability. Contour and color plots can also be used to display model input and output data.
Nigam, Ravi; Schlosser, Ralf W; Lloyd, Lyle L
2006-09-01
Matrix strategies employing parts of speech arranged in systematic language matrices and milieu language teaching strategies have been successfully used to teach word combining skills to children who have cognitive disabilities and some functional speech. The present study investigated the acquisition and generalized production of two-term semantic relationships in a new population using new types of symbols. Three children with cognitive disabilities and little or no functional speech were taught to combine graphic symbols. The matrix strategy and the mand-model procedure were used concomitantly as intervention procedures. A multiple probe design across sets of action-object combinations with generalization probes of untrained combinations was used to teach the production of graphic symbol combinations. Results indicated that two of the three children learned the early syntactic-semantic rule of combining action-object symbols and demonstrated generalization to untrained action-object combinations and generalization across trainers. The results and future directions for research are discussed.
A multifluid mix model with material strength effects
Chang, C. H.; Scannapieco, A. J.
2012-04-23
We present a new multifluid mix model. Its features include material strength effects and pressure and temperature nonequilibrium between mixing materials. It is applicable to both interpenetration and demixing of immiscible fluids and diffusion of miscible fluids. The presented model exhibits the appropriate smooth transition in mathematical form as the mixture evolves from multiphase to molecular mixing, extending its applicability to the intermediate stages in which both types of mixing are present. Virtual mass force and momentum exchange have been generalized for heterogeneous multimaterial mixtures. The compression work has been extended so that the resulting species energy equations are consistent with the pressure force and material strength.
Computer graphics and the graphic artist
NASA Technical Reports Server (NTRS)
Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.
1985-01-01
A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.
ERIC Educational Resources Information Center
Holladay, Jennifer
2009-01-01
Since 2002, Teaching Tolerance's Mix It Up at Lunch Day program has helped millions of students cross social boundaries and create more inclusive school communities. Its goal is to create a safe, purposeful opportunity for students to break down the patterns of social self-segregation that too often plague schools. Research conducted in 2006 by…
Modeling a Rain-Induced Mixed Layer
1990-06-01
[Equation (7) and its trigonometric expansion are garbled in the source; only fragments such as (1 - cos 2ikΔz) and (1 - cos ikΔz) survive.] ...completely unknown because there are no prior studies which predict what portion of total energy may go into subsurface mixing. The biggest obstacle
Imaging and quantifying mixing in a model droplet micromixer
NASA Astrophysics Data System (ADS)
Stone, Z. B.; Stone, H. A.
2005-06-01
Rapid mixing is essential in a variety of microfluidic applications but is often difficult to achieve at low Reynolds numbers. Inspired by a recently developed microdevice that mixes reagents in droplets, which simply flow along a periodic serpentine channel [H. Song, J. D. Tice, and R. F. Ismagilov, "A microfluidic system for controlling reaction networks in time," Angew. Chem. Int. Ed. 42, 767 (2003)], we investigate a model "droplet mixer." The model consists of a spherical droplet immersed in a periodic sequence of distinct external flows, which are superpositions of uniform and shear flows. We label the fluid inside the droplet with two colors and visualize mixing with a method we call "backtrace imaging," which allows us to render cross sections of the droplet at arbitrary times during the mixing cycle. To analyze our results, we present a novel scalar measure of mixing that permits us to locate sets of parameters that optimize mixing over a small number of flow cycles.
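A much-simplified sketch of the backtrace idea: for each pixel at the final time, integrate a known, time-periodic velocity field backward to its initial position and sample the initial two-color pattern there. The alternating-shear flow below is a stand-in, not the paper's uniform-plus-shear droplet flows.

    import numpy as np

    def velocity(x, y, t):
        # alternating shear: a stand-in for the periodic sequence of external flows
        if int(t) % 2 == 0:
            return np.sin(np.pi * y), np.zeros_like(x)
        return np.zeros_like(x), np.sin(np.pi * x)

    n, T, dt = 128, 4.0, 0.01
    g = np.linspace(0.0, 2.0, n)
    X, Y = np.meshgrid(g, g, indexing="ij")
    x, y = X.copy(), Y.copy()
    for step in range(int(T / dt)):                # trace each pixel backward in time
        t = T - step * dt
        u, v = velocity(x, y, t)
        x -= u * dt
        y -= v * dt
    image = (np.mod(x, 2.0) < 1.0).astype(float)   # sample the initial two-color pattern
    print(image.mean())                            # ~0.5: colors conserved, now interleaved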
2013-01-01
Background Until now, no kinetic model has been described for the oncologic tracer [18F]fluoromethylcholine ([18F]FCho), so we aimed to validate a proper model that is easy to implement and allows tracer quantification in tissues. Methods Based on the metabolic profile, two types of compartmental models were evaluated. One is a 3C2i model, which contains three tissue compartments and two input functions and corrects for possible [18F]fluorobetaine ([18F]FBet) uptake by the tissues. On the other hand, a two-tissue-compartment model (2C1i) was evaluated. Moreover, a comparison, based on intra-observer variability, was made between kinetic modelling and graphical analysis. Results Determination of the [18F]FCho-to-[18F]FBet uptake ratios in tissues and evaluation of the fitting of both kinetic models indicated that corrections for [18F]FBet uptake are not mandatory. In addition, [18F]FCho uptake is well described by the 2C1i model and by graphical analysis by means of the Patlak plot. Conclusions The Patlak plot is a reliable, precise, and robust method to quantify [18F]FCho uptake independent of scan time or plasma clearance. In addition, it is easily implemented, even under non-equilibrium conditions and without creating additional errors. PMID:24034278
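The Patlak construction itself is compact enough to sketch: after an equilibration time, the tissue-to-plasma ratio Ct/Cp is plotted against the "stretched time" integral of Cp divided by Cp, and the slope of the late linear portion estimates the irreversible uptake constant Ki. The curves below are synthetic toy inputs, not [18F]FCho data.

    import numpy as np

    t = np.linspace(0.1, 40.0, 200)                  # minutes (toy time grid)
    Cp = 5.0 * np.exp(-0.3 * t) + 0.5                # toy plasma input function
    Ki_true, V0 = 0.05, 0.4
    intCp = np.cumsum(Cp) * (t[1] - t[0])            # integral of Cp dt
    Ct = Ki_true * intCp + V0 * Cp                   # irreversible-uptake tissue curve

    x, y = intCp / Cp, Ct / Cp                       # Patlak coordinates
    slope, intercept = np.polyfit(x[100:], y[100:], 1)   # fit the late, linear portion
    print(slope)                                     # recovers Ki_true = 0.05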
Souza, W.R.
1987-01-01
This report documents a graphical display program for the U.S. Geological Survey finite-element groundwater flow and solute transport model. Graphic features of the program, SUTRA-PLOT (SUTRA = saturated/unsaturated transport), include: (1) plots of the finite-element mesh, (2) velocity vector plots, (3) contour plots of pressure, solute concentration, temperature, or saturation, and (4) a finite-element interpolator for gridding data prior to contouring. SUTRA-PLOT is written in FORTRAN 77 on a PRIME 750 computer system, and requires Version 9.0 or higher of the DISSPLA graphics library. The program requires two input files: the SUTRA input data list and the SUTRA simulation output listing. The program is menu driven, and specifications for individual types of plots are entered and may be edited interactively. Installation instructions, a source code listing, and a description of the computer code are given. Six examples of plotting applications are used to demonstrate various features of the plotting program. (Author's abstract)
An Investigation of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee
2009-01-01
The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…
On the coalescence-dispersion modeling of turbulent molecular mixing
NASA Technical Reports Server (NTRS)
Givi, Peyman; Kosaly, George
1987-01-01
The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain by various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. The infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite rate chemistry calculations are compared to study further a recent result of Hsieh and O'Brien who found that higher concentration moments are not sensitive to chemistry.
Simulation model for urban ternary mix-traffic flow
NASA Astrophysics Data System (ADS)
Deo, Lalit; Akkawi, Faisal; Deo, Puspita
2007-12-01
A two-lane two-way traffic-light-controlled X-intersection for ternary mixed traffic (cars + buses (equivalent vehicles) + very large trucks/buses) is developed based on a cellular automata model. This model can provide different metrics such as throughput, queue length, and delay time. This paper describes how the model works and how the composition of the traffic mix affects the throughput (number of vehicles navigating through the intersection per unit of time (vph)), and also compares the results with the homogeneous counterpart.
Development of a Medicaid Behavioral Health Case-Mix Model
ERIC Educational Resources Information Center
Robst, John
2009-01-01
Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…
Diagnostic tools for mixing models of stream water chemistry
Hooper, R.P.
2003-01-01
Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end-members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end-members, an extension of the mathematics of mixing models is presented that assesses the "fit" of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end-members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end-members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
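A minimal sketch of the diagnostic idea on simulated data: generate two-end-member mixtures, use SVD/PCA to estimate the rank of the mixing subspace, and inspect the residuals from the low-dimensional projection; structured or large residuals would signal non-conservative processes. The end-member compositions here are invented.

    import numpy as np

    rng = np.random.default_rng(4)
    em = np.array([[1.0, 0.2, 5.0],                  # invented end-member 1
                   [0.1, 1.5, 2.0]])                 # invented end-member 2
    w = rng.dirichlet([2.0, 2.0], size=300)          # conservative mixing fractions
    X = w @ em + rng.normal(0, 0.02, (300, 3))       # observed solute concentrations

    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    print(s**2 / (s**2).sum())                       # ~rank 1: m end-members -> (m-1)-dim subspace
    k = 1
    resid = Xc - (Xc @ Vt[:k].T) @ Vt[:k]            # residuals off the mixing subspace
    print(np.abs(resid).max())                       # small, unstructured residuals = good fit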
NASA Astrophysics Data System (ADS)
Hitt, O.; Hutchins, M.
2016-12-01
UK river waters face considerable future pressures, primarily from population growth and climate change. In understanding controls on river water quality, experimental studies have successfully identified responses to single or paired stressors under controlled conditions. Generalised Linear Model (GLM) approaches are commonly used to quantify stressor-response relationships. To explore a wider variety of stressors, physics-based models are used. Our objective is to evaluate how five different types of stressor influence the severity of river eutrophication and its impact on Dissolved Oxygen (DO), an integrated measure of river ecological health. This is done by applying a physics-based river quality model for 4 years at daily time step to a 92 km stretch in the 3445 km2 Thames (UK) catchment. To understand the impact of model structural uncertainty we present results from two alternative formulations of the biological response. Sensitivity analysis carried out using the QUESTOR model (QUality Evaluation and Simulation TOol for River systems) considered gradients of various stressors: river flow, water temperature, urbanisation (abstractions and sewage/industrial effluents), phosphate concentrations in effluents and tributaries, and riparian tree shading (modifying the light input). Scalar modifiers applied to the 2009-12 time-series inputs define the gradients. The model has been run for each combination of the values of these 5 variables. Results are analysed using graphical methods in order to identify variation in the type of relationship between different pairs of stressors on the system response. The method allows all outputs from each combination of stressors to be displayed in one graphic, thus showing the results of hundreds of model runs simultaneously. This approach can be carried out for all stressor pairs, and many locations/determinands. Supporting statistical analysis (GLM) reinforces the findings from the graphical analysis. Analysis suggests that
Discrete flavor symmetries and models of neutrino mixing
Altarelli, Guido; Feruglio, Ferruccio
2010-07-15
Application of non-Abelian finite groups to the theory of neutrino masses and mixing is reviewed, which is strongly suggested by the agreement of the tribimaximal (TB) mixing pattern with experiment. After summarizing the motivation and the formalism, concrete models based on A4, S4, and other finite groups, and their phenomenological implications are discussed, including lepton flavor violating processes, leptogenesis, and the extension to quarks. As an alternative to TB mixing, application of discrete flavor symmetries to quark-lepton complementarity and bimaximal mixing is also considered.
Edwards, Lloyd J.; Simpson, Sean L.
2014-01-01
Background The use of 24-hour ambulatory blood pressure monitoring (ABPM) in clinical practice and observational epidemiological studies has grown considerably in the past 25 years. ABPM is a very effective technique for assessing biological, environmental, and drug effects on blood pressure. Objectives In order to enhance the effectiveness of ABPM for clinical and observational research studies via analytical and graphical results, it is important to develop alternative data analysis approaches using modern statistical techniques. Methods The linear mixed model for the analysis of longitudinal data is particularly well-suited for the estimation of, inference about, and interpretation of both population (mean) and subject-specific trajectories for ABPM data. We propose using a linear mixed model with orthonormal polynomials across time in both the fixed and random effects to analyze ABPM data. Results We demonstrate the proposed analysis technique using data from the Dietary Approaches to Stop Hypertension (DASH) study, a multicenter, randomized, parallel arm feeding study that tested the effects of dietary patterns on blood pressure. Conclusions The linear mixed model is relatively easy to implement (given the complexity of the technique) using available software, allows for straightforward testing of multiple hypotheses, and the results can be presented to research clinicians using both graphical and tabular displays. Using orthonormal polynomials provides the ability to model the nonlinear trajectories of each subject with the same complexity as the mean model (fixed effects). PMID:24667908
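The basis construction is easy to sketch: orthonormal polynomials in time can be obtained by QR-orthonormalizing a Vandermonde matrix, and the resulting columns would enter both the fixed and random effects of a linear mixed model (e.g., statsmodels MixedLM). The hourly grid and cubic degree below are arbitrary choices, not the DASH analysis settings.

    import numpy as np

    hours = np.arange(24.0)                    # one reading per hour (toy grid)
    degree = 3
    V = np.vander(hours, degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)                     # columns: orthonormal polynomials in time
    print(np.round(Q.T @ Q, 10)[:2, :2])       # identity block confirms orthonormality
    # Q[:, 1:] would serve as the time covariates in both fixed and random effects.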
Shell model of optimal passive-scalar mixing
NASA Astrophysics Data System (ADS)
Miles, Christopher; Doering, Charles
2015-11-01
Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H^{-1} mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfectly mixed state for the enstrophy-constrained case. Although we only enforce that the time-average energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.
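For reference, the H^{-1} mix-norm on a periodic domain weights each Fourier mode of the scalar by 1/|k|, so unmixedness at large scales dominates; a minimal FFT-based implementation (on a 2D grid rather than the shell model) might look like this:

    import numpy as np

    def mix_norm(theta, L=2.0 * np.pi):
        # H^{-1} mix-norm: Fourier coefficients weighted by 1/|k|
        n = theta.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx**2 + ky**2
        k2[0, 0] = np.inf                      # the mean mode does not count
        that = np.fft.fft2(theta) / n**2
        return np.sqrt(np.sum(np.abs(that)**2 / k2))

    x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    theta = np.sign(np.sin(x))[:, None] * np.ones(128)   # unmixed stripe pattern
    print(mix_norm(theta))   # decreases as stirring moves the scalar to small scales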
Graphics processing unit implementation of lattice Boltzmann models for flowing soft systems.
Bernaschi, Massimo; Rossi, Ludovico; Benzi, Roberto; Sbragaglia, Mauro; Succi, Sauro
2009-12-01
A graphics processing unit (GPU) implementation of the multicomponent lattice Boltzmann equation with multirange interactions for soft-glassy materials ["glassy" lattice Boltzmann (LB)] is presented. Performance measurements for flows under shear indicate a GPU/CPU speed-up in excess of 10 for 1024^2 grids. Such a significant speed-up permits carrying out multimillion-time-step simulations of 1024^2 grids within tens of hours of GPU time, thereby considerably expanding the scope of the glassy LB toward the investigation of long-time relaxation properties of soft-flowing glassy materials.
On the uniqueness of quantitative DNA difference descriptors in 2D graphical representation models
NASA Astrophysics Data System (ADS)
Nandy, A.; Nandy, P.
2003-01-01
The rapid growth in additions to databases of DNA primary sequence data has led to searches for methods to numerically characterize these data and help in fast identification and retrieval of relevant sequences. The DNA descriptors derived from the 2D graphical representation technique have already been proposed to index chemical toxicity and single nucleotide polymorphic (SNP) genes, but the inherent degeneracies in this representation have given rise to doubts about their suitability. We prove in this paper that such degeneracies will exist only in very restricted cases and that the method can be relied upon to provide unique descriptors for, in particular, the SNP genes and several other classes of DNA sequences.
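A small sketch of the 2D graphical representation and its difference descriptors, assuming one common axis convention (A and G step along -x/+x, C and T along +y/-y; conventions vary across the literature): the descriptors are the means of the cumulative walk coordinates and the resulting "graph radius".

    import numpy as np

    # One common convention: A/G step along -x/+x, C/T along +y/-y.
    STEP = {"A": (-1, 0), "G": (1, 0), "C": (0, 1), "T": (0, -1)}

    def descriptors(seq):
        xy = np.cumsum([STEP[b] for b in seq], axis=0)   # the 2D walk
        mu_x, mu_y = xy.mean(axis=0)                     # first-moment descriptors
        return mu_x, mu_y, np.hypot(mu_x, mu_y)          # "graph radius" gR

    print(descriptors("ATGCCGTAAGGCT"))
    print(descriptors("ATGCCGTAAGGCA"))                  # an SNP shifts the descriptors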
Emergent shapes in graphical design
NASA Astrophysics Data System (ADS)
Grabska, Ewa
2001-06-01
This paper deals with graphical design and extracting emergent shapes, i.e., shapes that are not consciously constructed by designers. Graphical design is discussed in the framework of operational constraints. Defining the system of constraints makes it possible to describe the notion of emergence in a formal way. The formal model of the diagrammatic reasoning system serves as a base for our considerations. The proposed approach to graphical design is illustrated by several examples related to decorative art.
NASA Astrophysics Data System (ADS)
Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.
2014-12-01
Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphics user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university level. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.
VISUAL PLUMES MIXING ZONE MODELING SOFTWARE
The US Environmental Protection Agency has a history of developing plume models and providing technical assistance. The Visual Plumes model (VP) is a recent addition to the public-domain models available on the EPA Center for Exposure Assessment Modeling (CEAM) web page. The Wind...
Multivariate multilevel nonlinear mixed effects models for timber yield predictions.
Hall, Daniel B; Clutter, Michael
2004-03-01
Nonlinear mixed effects models have become important tools for growth and yield modeling in forestry. To date, applications have concentrated on modeling single growth variables such as tree height or bole volume. Here, we propose multivariate multilevel nonlinear mixed effects models for describing several plot-level timber quantity characteristics simultaneously. We describe how such models can be used to produce future predictions of timber volume (yield). The class of models and methods of estimation and prediction are developed and then illustrated on data from a University of Georgia study of the effects of various site preparation methods on the growth of slash pine (Pinus elliottii Engelm.).
Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing
Watanabe, T.; Nagata, K.
2016-08-15
We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large enough for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important to the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated with the LES.
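A hedged sketch of the multi-particle interaction at the core of the MVM: particles grouped into a mixing volume relax their scalar toward the in-volume mean, which conserves the mean while decaying the variance. The grouping rule and rate constant below are stand-ins, not the paper's formulation.

    import numpy as np

    rng = np.random.default_rng(5)
    phi = rng.standard_normal(1000)        # scalar carried by Lagrangian particles
    dt, c_mix, n_p = 0.01, 2.0, 8          # time step, mixing rate, particles per volume

    for _ in range(500):
        idx = rng.permutation(phi.size).reshape(-1, n_p)  # group particles into mixing volumes
        mean = phi[idx].mean(axis=1, keepdims=True)
        phi[idx] += c_mix * dt * (mean - phi[idx])        # relax toward the in-volume mean
    print(phi.var())                       # scalar variance decays; the global mean is conserved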
Graphical programming of telerobotic tasks
Small, D.E.; McDonald, M.J.
1996-11-01
With a goal of producing faster, safer, and cheaper technologies for nuclear waste cleanup, Sandia is actively developing and extending intelligent systems technologies through the US Department of Energy Office of Technology Development (DOE OTD) Robotic Technology Development Program (RTDP). Graphical programming is a key technology for robotic waste cleanup that Sandia is developing toward this goal. Graphical programming uses simulation such as TELEGRIP 'on-line' to program and control robots. Characterized by its model-based control architecture, integrated simulation, 'point-and-click' graphical user interfaces, task and path planning software, and network communications, Sandia's Graphical Programming systems allow operators to focus on high-level robotic tasks rather than the low-level details. Use of scripted tasks, rather than customized programs, minimizes the necessity of recompiling supervisory control systems and enhances flexibility. Rapid world-modelling technologies allow Graphical Programming to be used in dynamic and unpredictable environments, including digging and pipe-cutting. This paper describes Sancho, Sandia's most advanced graphical programming supervisory software. Sancho, now operational on several robot systems, incorporates all of Sandia's recent advances in supervisory control. Graphical programming uses 3-D graphics models as intuitive operator interfaces to program and control complex robotic systems. The goal of the paper is to help the reader understand how Sandia implements graphical programming systems and which key features in Sancho have proven to be most effective.
Agility and mixed-model furniture production
NASA Astrophysics Data System (ADS)
Yao, Andrew C.
2000-10-01
The manufacture of upholstered furniture provides an excellent opportunity to analyze the effect of a comprehensive communication system on classical production management functions. The objective of the research is to study scheduling heuristics that embrace the concepts inherent in MRP, JIT, and TQM while recognizing the need for agility in a somewhat complex and demanding environment. An on-line, real-time data capture system provides the status and location of production lots, components, and subassemblies for schedule control. Current inventory status of raw material and purchased items is required in order to develop and adhere to schedules. For the large variety of styles and fabrics customers may order, the communication system must provide timely, accurate, and comprehensive information for intelligent decisions with respect to the product mix and production resources.
Scaled tests and modeling of effluent stack sampling location mixing.
Recknagle, Kurtis P; Yokuda, Satoru T; Ballinger, Marcel Y; Barnett, J Matthew
2009-02-01
A three-dimensional computational fluid dynamics computer model was used to evaluate the mixing at a sampling system for radioactive air emissions. Researchers sought to determine whether the location would meet the criteria for uniform air velocity and contaminant concentration as prescribed in the American National Standards Institute standard, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stacks and Ducts of Nuclear Facilities. This standard requires that the sampling location be well-mixed and stipulates specific tests to verify the extent of mixing. The exhaust system for the Radiochemical Processing Laboratory was modeled with a computational fluid dynamics code to better understand the flow and contaminant mixing and to predict mixing test results. The modeled results were compared to actual measurements made at a scale-model stack and to the limited data set for the full-scale facility stack. Results indicated that the computational fluid dynamics code provides reasonable predictions for velocity, cyclonic flow, gas, and aerosol uniformity, although the code predicts greater improvement in mixing as the injection point is moved farther away from the sampling location than is actually observed by measurements. In expanding from small to full scale, the modeled predictions for full-scale measurements show similar uniformity values as in the scale model. This work indicated that a computational fluid dynamics code can be a cost-effective aid in designing or retrofitting a facility's stack sampling location that will be required to meet standard ANSI/HPS N13.1-1999.
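The well-mixed criteria in ANSI/HPS N13.1-1999 are commonly checked via the coefficient of variation (COV) of velocity and tracer concentration over a traverse grid. The sketch below is our own illustration of that metric; the measurement values are invented, and the 20% acceptance value is quoted from common qualification practice rather than from this paper:

```python
import numpy as np

def cov_percent(samples):
    """Coefficient of variation (%) across traverse-grid measurements."""
    samples = np.asarray(samples, dtype=float)
    return 100.0 * samples.std(ddof=1) / samples.mean()

velocity = [12.1, 11.8, 12.5, 13.0, 11.6, 12.2]  # m/s, hypothetical grid points
tracer   = [4.9, 5.2, 5.1, 4.8, 5.0, 5.3]        # ppm, hypothetical

for name, data in [("velocity", velocity), ("tracer gas", tracer)]:
    c = cov_percent(data)
    print(f"{name}: COV = {c:.1f}% ->", "pass" if c <= 20.0 else "fail")
```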
Sensitivity of the urban airshed model to mixing height profiles
Rao, S.T.; Sistla, G.; Ku, J.Y.; Zhou, N.; Hao, W.
1994-12-31
The United States Environmental Protection Agency (USEPA) has recommended the use of the Urban Airshed Model (UAM), a grid-based photochemical model, for regulatory applications. One of the important parameters in applications of the UAM is the height of the mixed layer or the diffusion break. In this study, we examine the sensitivity of the UAM-predicted ozone concentrations to (a) a spatially invariant diurnal mixing height profile, and (b) a spatially varying diurnal mixing height profile for a high ozone episode of July 1988 for the New York Airshed. The 1985/88 emissions inventory used in the EPA's Regional Oxidant Modeling simulations has been regridded for this study. Preliminary results suggest that the spatially varying case yields higher peak ozone concentrations than the spatially invariant mixing height simulation, with differences in the peak ozone ranging from a few ppb to about 40 ppb for the days simulated. These differences are attributed to differences in the shape of the mixing height profiles and their rate of growth during the morning hours, when peak emissions are injected into the atmosphere. Examination of the impact of emissions reductions associated with these two mixing height profiles indicates that NOx-focused controls produce a greater change in the predicted ozone peak under spatially invariant mixing heights than under the spatially varying mixing height profile. On the other hand, VOC-focused controls produce a greater change in the predicted peak ozone levels under spatially varying mixing heights than under the spatially invariant mixing height profile.
Souza, W.R.
1999-01-01
This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.
Mixing Model Performance in Non-Premixed Turbulent Combustion
NASA Astrophysics Data System (ADS)
Pope, Stephen B.; Ren, Zhuyin
2002-11-01
In order to shed light on their qualitative and quantitative performance, three different turbulent mixing models are studied in application to non-premixed turbulent combustion. In previous works, PDF model calculations with detailed kinetics have been shown to agree well with experimental data for non-premixed piloted jet flames. The calculations from two different groups, using different descriptions of the chemistry and turbulent mixing, are capable of producing the correct levels of local extinction and reignition. The success of these calculations raises several questions, since it is not clear that the mixing models used contain an adequate description of the processes involved. To address these questions, three mixing models (IEM, modified Curl, and EMST) are applied to a partially-stirred reactor burning hydrogen in air. The parameters varied are the residence time and the mixing time scale. For small relative values of the mixing time scale (approaching the perfectly-stirred limit) the models yield the same extinction behavior. But for larger values, the behavior is distinctly different, with EMST being the most resistant to extinction.
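For reference, the IEM and modified Curl models named here have simple particle-level forms; the sketch below is a generic textbook-style illustration applied to an ensemble of scalar particles (not the authors' code; EMST is omitted because it is substantially more involved, and the pair-selection rate used for Curl is an assumption):

```python
import numpy as np

def iem_step(phi, dt, tau):
    """IEM: every particle relaxes toward the ensemble mean."""
    return phi - 0.5 * (dt / tau) * (phi - phi.mean())

def modified_curl_step(phi, dt, tau, rng):
    """Modified Curl: random pairs partially mix by a uniform random amount."""
    phi = phi.copy()
    n_pairs = rng.poisson(phi.size * dt / tau)  # assumed pair-selection rate
    for _ in range(n_pairs):
        i, j = rng.integers(phi.size, size=2)
        a = rng.uniform()                       # extent of mixing for this pair
        m = 0.5 * (phi[i] + phi[j])
        phi[i] += a * (m - phi[i])
        phi[j] += a * (m - phi[j])
    return phi

rng = np.random.default_rng(1)
phi = np.concatenate([np.zeros(500), np.ones(500)])  # bimodal initial scalar
print("IEM variance: ", iem_step(phi, 0.01, 0.1).var())
print("Curl variance:", modified_curl_step(phi, 0.01, 0.1, rng).var())
```

Both steps conserve the scalar mean while decaying variance; the qualitative differences the abstract describes arise from how each model reshapes the scalar PDF, not from the variance decay rate itself.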
User Manual for Graphical User Interface Version 2.10 with Fire and Smoke Simulation Model (FSSIM) Version 1.2
Haupt, Tomasz A.; Gregory J...
2010-05-10
Naval Research Laboratory, Washington, DC 20375-5320, report NRL/MR/6180--10-9244. ...runtime environment for a third-party simulation package, Fire and Smoke Simulation (FSSIM), developed by HAI. This updated user's manual for the...
Weakly nonlinear models for turbulent mixing in a plane mixing layer
NASA Technical Reports Server (NTRS)
Liou, William W.; Morris, Philip J.
1992-01-01
New closure models for turbulent free shear flows are presented in this paper. They are based on a weakly nonlinear theory with a description of the dominant large-scale structures as instability waves. Two models are presented that describe the evolution of the free shear flows in terms of the time-averaged mean flow and the dominant large-scale turbulent structure. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models have been applied to the study of an incompressible mixing layer. For both models, predictions of the mean flow developed are made. In the second model, predictions of the time-dependent motion of the large-scale structures in the mixing layer are made. The predictions show good agreement with experimental observations.
A Comparison of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.
2010-01-01
In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
Regional Conference on the Analysis of the Unbalanced Mixed Model.
1987-12-31
this complicated problem. Paper titles: The Present Status of Confidence Interval Estimation on Variance Components in Balanced and Unbalanced Random...Models; Prediction-Interval Procedures and (Fixed Effects) Confidence-Interval Procedures for Mixed Linear Models; The Use of Equivalent Linear Models
NASA Astrophysics Data System (ADS)
Hashmi, M. S.; Khan, N.; Ullah Khan, Sami; Rashidi, M. M.
In this study, we have constructed a mathematical model to investigate the heat source/sink effects in mixed convection axisymmetric flow of an incompressible, electrically conducting Oldroyd-B fluid between two infinite isothermal stretching disks. The effects of viscous dissipation and Joule heating are also considered in the heat equation. The governing partial differential equations are converted into ordinary differential equations by using appropriate similarity variables. The series solution of these dimensionless equations is constructed by using the homotopy analysis method. The convergence of the obtained solution is carefully examined. The effects of various involved parameters on pressure, velocity and temperature profiles are comprehensively studied. A graphical analysis has been presented for various values of problem parameters. The numerical values of wall shear stress and Nusselt number are computed at both the upper and lower disks. Moreover, a graphical and tabular explanation is presented for the critical values of the Frank-Kamenetskii parameter with respect to the other flow parameters.
Zierer, Jonas; Pallister, Tess; Tsai, Pei-Chien; Krumsiek, Jan; Bell, Jordana T.; Lauc, Gordan; Spector, Tim D; Menni, Cristina; Kastenmüller, Gabi
2016-01-01
Although association studies have unveiled numerous correlations of biochemical markers with age and age-related diseases, we still lack an understanding of their mutual dependencies. To find molecular pathways that underlie age-related diseases as well as their comorbidities, we integrated aging markers from four different high-throughput omics datasets, namely epigenomics, transcriptomics, glycomics and metabolomics, with a comprehensive set of disease phenotypes from 510 participants of the TwinsUK cohort. We used graphical random forests to assess conditional dependencies between omics markers and phenotypes while eliminating mediated associations. Applying this novel approach for multi-omics data integration yields a model consisting of seven modules that represent distinct aspects of aging. These modules are connected by hubs that potentially trigger comorbidities of age-related diseases. As an example, we identified urate as one of these key players mediating the comorbidity of renal disease with body composition and obesity. Body composition variables are in turn associated with inflammatory IgG markers, mediated by the expression of the hormone oxytocin. Thus, oxytocin potentially contributes to the development of chronic low-grade inflammation, which often accompanies obesity. Our multi-omics graphical model demonstrates the interconnectivity of age-related diseases and highlights molecular markers of the aging process that might drive disease comorbidities. PMID:27886242
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
ABSTRACT The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
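For readers unfamiliar with the IOU process, one commonly used parameterization of its covariance (our addition, following the form usually attributed to Taylor et al.; the paper's own parameterizations may differ) is, for an IOU process W(t) with parameters α and σ²:

```latex
\operatorname{Cov}\{W(s),\,W(t)\}
  = \frac{\sigma^{2}}{2\alpha^{3}}
    \left( 2\alpha \min(s,t) + e^{-\alpha s} + e^{-\alpha t}
           - 1 - e^{-\alpha |t-s|} \right)
```

Here α governs the degree of derivative tracking: large α (with σ²/α² held fixed) approaches random-walk-like behavior, while small α yields smooth, strongly tracked trajectories.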
Hunt, G.; Azmy, Y.Y.
1992-10-01
DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
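The first allocation step described here (handing out a fixed number of unit icons among gridpoints, one at a time, to the point with the largest remaining density) can be sketched as follows; this is our own minimal reconstruction of the stated rule, not DCOR code:

```python
import numpy as np

def allocate_units(density, n_units, unit_mass=1.0):
    """Greedy allocation: give each unit to the gridpoint whose remaining
    (density minus already-allocated mass) value is currently largest."""
    remaining = np.asarray(density, dtype=float).copy()
    counts = np.zeros(remaining.shape, dtype=int)
    for _ in range(n_units):
        idx = np.unravel_index(remaining.argmax(), remaining.shape)
        counts[idx] += 1
        remaining[idx] -= unit_mass
    return counts

density = np.array([[0.2, 1.7, 0.4],
                    [2.9, 0.8, 0.1]])   # hypothetical force density at gridpoints
print(allocate_units(density, n_units=6))
```

The subsequent steps in the abstract (spreading each gridpoint's units over its cell by interpolation, and interpolating frames in time) refine this discrete allocation into a smooth animation.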
Mixing by barotropic instability in a nonlinear model
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Chen, Ping
1994-01-01
A global, nonlinear, equivalent barotropic model is used to study the isentropic mixing of passive tracers by barotropic instability. Basic states are analytical zonal-mean jets representative of the zonal-mean flow in the upper stratosphere, where the observed 4-day wave is thought to be a result of barotropic, and possibly baroclinic, instability. As is known from previous studies, the phase speed and growth rate of the unstable waves is fairly sensitive to the shape of the zonal-mean jet; and the dominant wave mode at saturation is not necessarily the fastest growing mode; but the unstable modes share many features of the observed 4-day wave. Lagrangian trajectories computed from model winds are used to characterize the mixing by the flow. For profiles with both midlatitude and polar modes, mixing is stronger in midlatitude than inside the vortex; but there is little exchange of air across the vortex boundary. There is a minimum in the Lyapunov exponents of the flow and the particle dispersion at the jet maximum. For profiles with only polar unstable modes, there is weak mixing inside the vortex, no mixing outside the vortex, and no exchange of air across the vortex boundary. These results support the theoretical arguments that, whether wave disturbances are generated by local instability or propagate from other regions, the mixing properties of the total flow are determined by the locations of the wave critical lines and that strong gradients of potential vorticity are very resistant to mixing.
New mixing angles in the left-right symmetric model
NASA Astrophysics Data System (ADS)
Kokado, Akira; Saito, Takesi
2015-12-01
In the left-right symmetric model, neutral gauge fields are characterized by three mixing angles θ_12, θ_23, θ_13 between the three gauge fields B_μ, W³_Lμ, W³_Rμ, which produce the mass eigenstates A_μ, Z_μ, Z'_μ when G = SU(2)_L × SU(2)_R × U(1)_{B-L} × D is spontaneously broken down to U(1)_em. We find a new mixing angle θ', which corresponds to the Weinberg angle θ_W in the standard model with the SU(2)_L × U(1)_Y gauge symmetry, from these mixing angles. It is then shown that any mixing angle θ_ij can be expressed by ε and θ', where ε = g_L/g_R is the ratio of the running left-right gauge coupling strengths. We observe that the light gauge bosons are described by θ' only, whereas the heavy gauge bosons are described by the two parameters ε and θ'.
NASA Astrophysics Data System (ADS)
Fourtakas, G.; Rogers, B. D.
2016-06-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment induced under rapid flows undergoes several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to predict accurately the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-threaded serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
A Mixed Effects Randomized Item Response Model
ERIC Educational Resources Information Center
Fox, J.-P.; Wyrick, Cheryl
2008-01-01
The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…
Generalized Dynamic Factor Models for Mixed-Measurement Time Series
Cui, Kai; Dunson, David B.
2013-01-01
In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody’s rated firms from 1982–2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133
Regression models for mixed Poisson and continuous longitudinal data.
Yang, Ying; Kang, Jian; Mao, Kai; Zhang, Jie
2007-09-10
In this article we develop flexible regression models in two respects: to evaluate the influence of covariates on mixed Poisson and continuous responses, and to evaluate how the correlation between the Poisson response and the continuous response changes over time. A scenario is proposed for dealing with regression models of mixed continuous and Poisson responses when heterogeneous variance and correlation changing over time exist. Our general approach is first to build a joint marginal model and to check whether the variance and correlation change over time via a likelihood ratio test. If the variance and correlation change over time, we apply a suitable data transformation to properly evaluate the influence of the covariates on the mixed responses. The proposed methods are applied to the interstitial cystitis database (ICDB) cohort study, where we find that the positive correlations significantly change over time, which suggests heterogeneous variances should not be ignored in modelling and inference.
Graphic engine resource management
NASA Astrophysics Data System (ADS)
Bautin, Mikhail; Dwarakinath, Ashok; Chiueh, Tzi-cker
2008-01-01
Modern consumer-grade 3D graphic cards boast a computation/memory resource that can easily rival or even exceed that of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their enormous computational power has attracted developers to port an increasing number of scientific computation programs to these cards, including matrix computation, collision detection, cryptography, database sorting, etc. As more and more applications run on 3D graphic cards, there is a need to allocate the computation/memory resource on these cards among the sharing applications more fairly and efficiently. In this paper, we describe the design, implementation and evaluation of a Graphic Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of their demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on dynamically collected statistics, and controls each process's GPU command production rate through its CPU scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal GPU time consumption difference among concurrent GPU processes consistently below 5% for a variety of application mixes.
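Deficit Round Robin, the policy GERM builds on, can be illustrated in a few lines. This is a generic DRR sketch with made-up command-group names and cost estimates, not the GERM implementation:

```python
from collections import deque

def drr(queues, quantum, rounds):
    """Deficit Round Robin: each non-empty queue accrues 'quantum' units of
    GPU-time credit per round and dispatches command groups while its
    accumulated credit covers the next group's estimated cost."""
    deficits = [0.0] * len(queues)
    schedule = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                continue                     # empty queues accrue no credit
            deficits[i] += quantum
            while q and q[0][1] <= deficits[i]:
                name, cost = q.popleft()
                deficits[i] -= cost
                schedule.append(name)
    return schedule

# (name, estimated GPU time) per command group -- hypothetical estimates
p0 = deque([("p0-g0", 3.0), ("p0-g1", 9.0)])
p1 = deque([("p1-g0", 2.0), ("p1-g1", 2.0), ("p1-g2", 2.0)])
print(drr([p0, p1], quantum=4.0, rounds=3))
```

The carried-over deficit is what lets a process with one expensive command group eventually run it without starving cheaper competitors, which is the fairness property the paper's measurements quantify.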
Mixed-membership models of scientific publications
Erosheva, Elena; Fienberg, Stephen; Lafferty, John
2004-01-01
PNAS is one of world's most cited multidisciplinary scientific journals. The PNAS official classification structure of subjects is reflected in topic labels submitted by the authors of articles, largely related to traditionally established disciplines. These include broad field classifications into physical sciences, biological sciences, social sciences, and further subtopic classifications within the fields. Focusing on biological sciences, we explore an internal soft-classification structure of articles based only on semantic decompositions of abstracts and bibliographies and compare it with the formal discipline classifications. Our model assumes that there is a fixed number of internal categories, each characterized by multinomial distributions over words (in abstracts) and references (in bibliographies). Soft classification for each article is based on proportions of the article's content coming from each category. We discuss the appropriateness of the model for the PNAS database as well as other features of the data relevant to soft classification. PMID:15020766
Qureshi, T.M.; Khan, K.A.
1996-08-01
Modelling stratigraphic sequences using a seismo-geologic approach, integrated with cyclic transgressive-regressive deposits, helps to identify a number of non-structural subtle traps. Most of the hydrocarbons found in the Early Cretaceous of the Central Indus Basin pertain to structural entrapments of the upper transgressive sands. A few wells are producing from the middle and basal regressive sands, but the massive regressive sands have not been tested so far. The possibility of stratigraphic traps such as wedging or pinch-out, lateral gradation, uplift, truncation, and overlapping of reservoir rocks is quite promising. The natural basin physiography has at times been modified by extensional episodic events into tectono-morphic terrain. Thus, seismo-scanning of tectonically controlled sedimentation might delineate some subtle stratigraphic traps. Amplitude maps representing stratigraphic sequences are generated to identify the traps. Seismic expressions indicate reservoir quality in terms of amplitude increase or decrease. The data are modelled on a computer using graphics simulation techniques.
Teaching the Mixed Model Design: A Flowchart to Facilitate Understanding.
ERIC Educational Resources Information Center
Mills, Jamie D.
2005-01-01
The Mixed Model (MM) design, sometimes known as a Split-Plot design, is very popular in educational research. This model can be used to examine the effects of several independent variables on a dependent variable and it offers a more powerful alternative to the completely randomized design. The MM design considers both a between-subjects factor,…
Nonlinear mixed modeling of basal area growth for shortleaf pine
Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin
2008-01-01
Mixed model estimation methods were used to fit individual-tree basal area growth models to tree and stand-level measurements available from permanent plots established in naturally regenerated shortleaf pine (Pinus echinata Mill.) even-aged stands in western Arkansas and eastern Oklahoma in the USA. As a part of the development of a comprehensive...
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
Teaching Service Modelling to a Mixed Class: An Integrated Approach
ERIC Educational Resources Information Center
Deng, Jeremiah D.; Purvis, Martin K.
2015-01-01
Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…
Stochastic transport models for mixing in variable-density turbulence
NASA Astrophysics Data System (ADS)
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
Temperature Chaos in Some Spherical Mixed p-Spin Models
NASA Astrophysics Data System (ADS)
Chen, Wei-Kuo; Panchenko, Dmitry
2017-03-01
We give two types of examples of the spherical mixed even p-spin models for which chaos in temperature holds. These complement some known results for the spherical pure p-spin models and for models with Ising spins. For example, in contrast to a recent result of Subag, who showed absence of chaos in temperature in the spherical pure p-spin models for p ≥ 3, we show that even a smaller order perturbation induces temperature chaos.
Hybrid configuration mixing model for odd nuclei
NASA Astrophysics Data System (ADS)
Colò, G.; Bortignon, P. F.; Bocchi, G.
2017-03-01
In this work, we introduce a new approach which is meant to be a first step towards complete self-consistent low-lying spectroscopy of odd nuclei. So far, we essentially limit ourselves to the description of a double-magic core plus an extra nucleon. The model does not contain any free adjustable parameter and is instead based on a Hartree-Fock (HF) description of the particle states in the core, together with self-consistent random-phase approximation (RPA) calculations for the core excitations. We include both collective and noncollective excitations, with proper care of the corrections due to the overlap between them (i.e., due to the nonorthonormality of the basis). As a consequence, with respect to traditional particle-vibration coupling calculations in which one can only address single-nucleon states and particle-vibration multiplets, we can also describe states of shell-model type, such as 2-particle-1-hole states. We will report results for 49Ca and 133Sb and discuss future perspectives.
Mix Model Comparison of Low Feed-Through Implosions
NASA Astrophysics Data System (ADS)
Pino, Jesse; MacLaren, S.; Greenough, J.; Casey, D.; Dewald, E.; Dittrich, T.; Khan, S.; Ma, T.; Sacks, R.; Salmonson, J.; Smalyuk, V.; Tipton, R.; Kyrala, G.
2016-10-01
The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the NIF. Recently, the separated reactants technique has been applied to the Two Shock (TS) implosion platform, which is designed to minimize this feed-through, isolate local mix at the gas-ablator interface, and produce core yields in good agreement with 1D clean simulations. The effects of both inner surface roughness and convergence ratio have been probed. The TT, DT, and DD neutron signals respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations. Various methods of interfacial mix are considered, including the Reynolds-Averaged Navier-Stokes (RANS) KL method as well as a multicomponent enhanced diffusivity model with species, thermal, and pressure gradient terms. We also give predictions for an upcoming campaign to investigate mid-Z mixing by adding a Ge dopant to the CD layer. LLNL-ABS-697251. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The salinity effect in a mixed layer ocean model
NASA Technical Reports Server (NTRS)
Miller, J. R.
1976-01-01
A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.
A 3D Bubble Merger Model for RTI Mixing
NASA Astrophysics Data System (ADS)
Cheng, Baolian
2015-11-01
In this work we present a model for the merger processes of bubbles at the edge of an unstable acceleration driven mixing layer. Steady acceleration defines a self-similar mixing process, with a time-dependent inverse cascade of structures of increasing size. The time evolution is itself a renormalization group evolution. The model predicts the growth rate of a Rayleigh-Taylor chaotic fluid-mixing layer. The 3-D model differs from the 2-D merger model in several important ways. Beyond the extension of the model to three dimensions, the model contains one phenomenological parameter, the variance of the bubble radii at fixed time. The model also predicts several experimental numbers: the bubble mixing rate, the mean bubble radius, and the bubble height separation at the time of merger. From these we also obtain the bubble height to the radius aspect ratio, which is in good agreement with experiments. Applications to recent NIF and Omega experiments will be discussed. This work was performed under the auspices of the U.S. Department of Energy by the Los Alamos National Laboratory under Contract No. W-7405-ENG-36.
Spectral mixing models of S-Type asteroids
NASA Technical Reports Server (NTRS)
Clark, Beth E.; Lucey, Paul G.; Bell, Jeffrey F.; Fanale, Fraser P.
1993-01-01
This paper presents the results of an attempt to determine S-Type asteroid mineralogies with the use of Hapke theory spectral mixing modelling. Previous attempts to understand the spectral variations present in this single class of asteroids have concentrated on spectral parameters such as absorption band center wavelengths, band area ratios, and geometric albedos. The procedure taken here is to utilize the Hapke spectral reflectance model to calculate single scatter albedo as a function of wavelength for a suite of candidate end-member materials. These materials are then mixed linearly in single scatter albedo space, and the mixture is converted, assuming intimate particle mixing, back to reflectance for the spectrum matching routine. A total of 39 S-Type asteroids selected from the Bell et al. survey have been matched with mixture model spectra.
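The mixing procedure described here (convert endmember reflectances to single-scatter albedo, mix linearly in albedo space, convert back to reflectance) can be sketched with a simplified isotropic Hapke model. This is our own simplification that ignores phase-function and opposition-effect terms, with invented endmember values and an assumed viewing geometry, not the authors' exact procedure:

```python
import numpy as np

MU0, MU = np.cos(np.radians(30)), np.cos(np.radians(0))  # assumed geometry

def hapke_refl(w):
    """Simplified Hapke reflectance for isotropic scatterers,
    using the standard two-parameter H-function approximation."""
    H = lambda x: (1 + 2 * x) / (1 + 2 * x * np.sqrt(1 - w))
    return (w / 4.0) / (MU0 + MU) * H(MU0) * H(MU)

def invert_to_albedo(r, tol=1e-10):
    """Bisection for single-scatter albedo w given reflectance r
    (valid because reflectance increases monotonically with w)."""
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if hapke_refl(mid) < r else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical endmember reflectances at one wavelength
refl = np.array([0.45, 0.30, 0.10])
w = np.array([invert_to_albedo(r) for r in refl])
f = np.array([0.5, 0.3, 0.2])      # intimate-mixture mixing fractions
w_mix = np.dot(f, w)               # linear mixing in albedo space
print("mixture reflectance:", hapke_refl(w_mix))
```

The key point the abstract relies on is that intimate mixtures are approximately linear in single-scatter albedo but strongly nonlinear in reflectance, which is why the conversion step matters.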
Modeling and Analysis of Mixed Synchronous/Asynchronous Systems
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan
2012-01-01
Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted to capture mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.
Computer modeling of ORNL storage tank sludge mobilization and mixing
Terrones, G.; Eyler, L.L.
1993-09-01
This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate the mixing times required to approach homogeneity of the contents of the tanks.
Mixed waste treatment model: Basis and analysis
Palmer, B.A.
1995-09-01
The Department of Energy's Programmatic Environmental Impact Statement (PEIS) required treatment system capacities for risk and cost calculation. Los Alamos was tasked with providing these capacities to the PEIS team. This involved understanding the Department of Energy (DOE) Complex waste, making the necessary changes to correct for problems, categorizing the waste for treatment, and determining the treatment system requirements. The treatment system requirements depended on the incoming waste, which varied for each PEIS case. The treatment system requirements also depended on the type of treatment that was desired. Because different groups contributing to the PEIS needed specific types of results, we provided the treatment system requirements in a variety of forms. In total, some 40 data files were created for the TRU cases, and for the MLLW case, there were 105 separate data files. Each data file represents one treatment case consisting of the selected waste from various sites, a selected treatment system, and the reporting requirements for such a case. The treatment system requirements in their most basic form are the treatment process rates for unit operations in the desired treatment system, based on a 10-year working life and 20-year accumulation of the waste. These results were reported in cubic meters and, for the MLLW case, in kilograms as well. The treatment system model consisted of unit operations that are linked together. Each unit operation's function depended on the input waste streams, waste matrix, and contaminants. Each unit operation outputs one or more waste streams whose matrix, contaminants, and volume/mass may have changed as a result of the treatment. These output streams are then routed to the appropriate unit operation for additional treatment until the output waste stream meets the treatment requirements for disposal. The total waste for each unit operation was calculated, as well as the waste for each matrix treated by the unit.
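The linked unit-operation structure described above can be sketched as a simple routing loop. Everything below is an illustrative toy with made-up operations, routing rules, and a hypothetical disposal limit, not the Los Alamos model:

```python
# Toy treatment-train router: streams move between unit operations until
# they meet a (hypothetical) disposal requirement on contaminant level.
def route(streams, ops, max_hops=10):
    totals = {name: 0.0 for name in ops}       # volume handled per unit op
    done = []
    for stream in streams:
        for _ in range(max_hops):
            if stream["contaminant"] <= 0.01:   # assumed disposal limit
                done.append(stream)
                break
            op = "incinerate" if stream["matrix"] == "organic" else "stabilize"
            totals[op] += stream["volume"]
            stream = ops[op](stream)            # treatment transforms the stream
    return totals, done

ops = {
    "incinerate": lambda s: {**s, "matrix": "ash", "volume": 0.2 * s["volume"],
                             "contaminant": 0.5 * s["contaminant"]},
    "stabilize":  lambda s: {**s, "matrix": "grout", "volume": 1.5 * s["volume"],
                             "contaminant": 0.001},
}
streams = [{"matrix": "organic", "volume": 10.0, "contaminant": 0.3},
           {"matrix": "sludge",  "volume": 4.0,  "contaminant": 0.2}]
print(route(streams, ops))
```

The per-operation totals accumulated this way are the analogue of the treatment process rates the report describes, before scaling by working life and waste accumulation period.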
Villeneuve, Daniel L.; Larkin, Patrick; Knoebl, Iris; Miracle, Ann L.; Kahl, Michael D.; Jensen, Kathleen M.; Makynen, Elizabeth A.; Durhan, Elizabeth J.; Carter, Barbara J.; Denslow, Nancy D.; Ankley, Gerald T.
2007-01-01
Conceptual or graphical systems models are powerful tools that can help facilitate hypothesis-based ecotoxicogenomic research and aid mechanistic interpretation of toxicogenomic results. This paper presents a novel conceptual model of the teleost brain-pituitary-gonadal (BPG) axis designed to aid ecotoxicogenomics research on endocrine-disrupting chemicals using small fish models. Application of the model to toxicogenomics research was illustrated in the context of a recent study that examined the effects of the competitive aromatase inhibitor, fadrozole, on mRNA transcript abundance in gonad, brain, and liver tissue of exposed fathead minnows using a novel fathead minnow oligonucleotide microarray and quantitative real-time polymerase chain reaction. Changes in transcript abundance observed in the ovaries of females exposed to 6.3 µg fadrozole/L for 7 d were functionally consistent with fadrozole's mechanism of action and the expected compensatory responses of the BPG axis to fadrozole's effects. Furthermore, array results helped identify additional elements (genes/proteins) that could be included in the model to potentially increase its predictive capacity. However, model-based predictions did not readily explain the lack of differential mRNA expression (relative to controls) observed in the ovary of females exposed to 60 µg fadrozole/L for 7 d. Both the utility and limitations of conceptual systems models as tools for hypothesis-driven ecotoxicogenomics research are discussed.
Statistical basis and outputs of stable isotope mixing models: Comment on Fry (2013)
A recent article by Fry (2013; Mar Ecol Prog Ser 472:1−13) reviewed approaches to solving underdetermined stable isotope mixing systems, and presented a new graphical approach and set of summary statistics for the analysis of such systems. In his review, Fry (2013) mis-characteri...
Graphics Career Ladder AFSC 231X1
1992-01-01
Excerpted task statements include: F191 Clean airbrush parts; F194 Clean paint brushes; F223 Mix paints other than watercolor, casein, or tempera paints; F224 Mix watercolor, casein, or tempera paints; F235 Produce preliminary color schemes for graphics.
Spread in model climate sensitivity traced to atmospheric convective mixing.
Sherwood, Steven C; Bony, Sandrine; Dufresne, Jean-Louis
2014-01-02
Equilibrium climate sensitivity refers to the ultimate change in global mean temperature in response to a change in external forcing. Despite decades of research attempting to narrow uncertainties, equilibrium climate sensitivity estimates from climate models still span roughly 1.5 to 5 degrees Celsius for a doubling of atmospheric carbon dioxide concentration, precluding accurate projections of future climate. The spread arises largely from differences in the feedback from low clouds, for reasons not yet understood. Here we show that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half of the variance in climate sensitivity estimated by 43 climate models. The apparent mechanism is that such mixing dehydrates the low-cloud layer at a rate that increases as the climate warms, and this rate of increase depends on the initial mixing strength, linking the mixing to cloud feedback. The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide. This is significantly higher than the currently accepted lower bound of 1.5 degrees, thereby constraining model projections towards relatively severe future warming.
Sensitivity of fine sediment source apportionment to mixing model assumptions
NASA Astrophysics Data System (ADS)
Cooper, Richard; Krueger, Tobias; Hiscock, Kevin; Rawlins, Barry
2015-04-01
Mixing models have become increasingly common tools for quantifying fine sediment redistribution in river catchments. The associated uncertainties may be modelled coherently and flexibly within a Bayesian statistical framework (Cooper et al., 2015). However, there is more than one way to represent these uncertainties because the modeller has considerable leeway in making error assumptions and model structural choices. In this presentation, we demonstrate how different mixing model setups can impact upon fine sediment source apportionment estimates via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges and subsurface material) under base flow conditions between August 2012 and August 2013 (Cooper et al., 2014). Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ~76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing prior parameter distributions, inclusion of covariance terms, incorporation of time-variant distributions and methods of proportion characterisation. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup and between a Bayesian and a popular Least Squares optimisation approach. Our OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon fine sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model setup prior to conducting fine sediment source apportionment investigations.
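A minimal version of the underlying mixing model, in the constrained least-squares form mentioned at the end of the abstract rather than the full Bayesian setup, looks like the sketch below; the tracer values and source signatures are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# rows: sources (arable topsoil, road verge, subsurface); cols: tracers
S = np.array([[12.0, 3.1, 0.8],
              [ 9.5, 4.0, 1.9],
              [ 4.2, 1.2, 0.3]])          # hypothetical source geochemistry
mix = np.array([6.1, 1.9, 0.55])          # measured SPM composition (hypothetical)

loss = lambda p: np.sum((p @ S - mix) ** 2)   # misfit of the predicted mixture
res = minimize(loss, x0=np.full(3, 1 / 3),
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
print("source proportions:", res.x.round(3))
```

The Bayesian versions compared in the presentation differ from this sketch precisely in the choices it hides: how measurement error is distributed, whether tracers covary, and how the proportions are given prior distributions.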
Was the Hadean Earth stagnant? Constraints from dynamic mixing models
NASA Astrophysics Data System (ADS)
O'Neill, C.; Debaille, V.; Griffin, W. L.
2013-12-01
As a result of high internal heat production, high rates of impact bombardment, and primordial heat from accretion, a strong case can be made for extremely high internal temperatures, low internal viscosities, and extremely vigorous mantle convection in the Hadean mantle. Previous studies of mixing in high-Rayleigh-number convection indicate that chemically heterogeneous mantle anomalies should have efficiently remixed into the mantle on timescales of less than 100 Myr. However, 142Nd and 182W isotope studies indicate that heterogeneous mantle domains survived, without mixing, for over 2 Gyr - at odds with the expected mixing rates. Similarly, platinum group element concentrations in Archaean komatiites, purportedly due to the late veneer of meteoritic addition to the Earth, only reach current levels at 2.7 Ga - indicating a time lag of almost 1-2 Gyr in mixing this material thoroughly into the mantle. Whilst previous studies have sought to explain slow Archaean mantle mixing via mantle layering due to endothermic phase changes, or anomalously viscous blobs of material, these have demonstrated limited efficacy. Here we pursue another explanation for inefficient mantle mixing in the Hadean: tectonic regime. A number of lines of evidence suggest resurfacing in the Archaean was episodic, and extending these models to Hadean times implies the Hadean was characterized by long periods of tectonic quiescence. We explore mixing times in 3D spherical-cap models of mantle convection, which incorporate vertically stratified and temperature-dependent viscosities. At an extreme, we show that mixing in stagnant-lid regimes is over an order of magnitude less efficient than mobile-lid mixing, and for plausible Rayleigh numbers and internal heat production, the lag in Hadean convective recycling can be explained. The attractiveness of this explanation is that it not only explains the long-lived 142Nd and 182W mantle anomalies, but also 1) posits an explanation for the delay
MIX DESIGN FOR SMALL-SCALE MODELS OF CONCRETE STRUCTURES
An easily applied method of mix design was developed for concretes suitable for use in small-scale models of concrete structures. By use of the...properties were collected for model concretes with portland cement and gypsum cement bases. These concretes had maximum aggregate sizes of No. 4...strength, the model concretes using approximately scaled aggregate were found to have about the same splitting-tensile strength and flexural strength, a
NASA Astrophysics Data System (ADS)
Runge, J.; Petoukhov, V.; Kurths, J.
2013-12-01
The analysis of time delays using lagged cross-correlations is commonly used to gain insights into interaction mechanisms between climatological processes, and also to quantify the strength of a mechanism. Especially ENSO's teleconnections have been investigated with this approach. Here we critically evaluate how justified this method is, i.e., what aspect of a climatic mechanism such an inferred time lag actually measures. We find a strong dependence on serial dependencies, or autocorrelation, which can lead to misleading conclusions about the time delays and also obscures a quantification of the interaction mechanism. To overcome these possible artifacts, we propose a two-step procedure based on the concept of graphical models recently introduced to climate research. In the first step, graphical models are used to detect the existence of (Granger-) causal interactions, which determines the time delays of a mechanism. In the second step, a certain partial correlation is introduced that allows us to specifically quantify the strength of an interaction mechanism in a well-interpretable way and to exclude misleading effects of serial correlation as well as more general dependencies. With this approach we find novel interpretations of the time delays and strengths of ENSO's teleconnections. The potential of the approach to quantify interactions between more than two variables is demonstrated by investigating the mechanism of the Walker circulation. (Figure caption: Overview of important teleconnections. The black dashed lines denote the regions used in the bivariate analyses, while the gray boxes show the three regions analyzed to study the Walker circulation (see the inset). The arrows indicate the direction, with the gray shading roughly corresponding to the strength of the novel partial correlation measure. The labels give the value and the time lag in months in brackets.)
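The core problem the abstract raises (autocorrelation distorting lagged cross-correlations) can be illustrated by AR(1) prewhitening followed by lagged correlation. This is a simplified stand-in for the full graphical-model procedure, on synthetic data where the true coupling lag is known:

```python
import numpy as np

def ar1_residuals(x):
    """Regress x_t on x_{t-1} and return residuals (AR(1) prewhitening)."""
    xc = x - x.mean()
    a = np.dot(xc[1:], xc[:-1]) / np.sum(xc[:-1] ** 2)
    return xc[1:] - a * xc[:-1]

def lagged_corr(x, y, lag):
    """Correlation of x_t with y_{t+lag} (lag > 0: x leads y)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(2)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):                  # y is driven by x two steps earlier
    x[t] = 0.8 * x[t - 1] + rng.normal()
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t - 2] + rng.normal()

rx, ry = ar1_residuals(x), ar1_residuals(y)
print({lag: round(lagged_corr(rx, ry, lag), 2) for lag in range(-5, 6)})
```

On the raw series the autocorrelation smears the correlation across many lags; after prewhitening, the peak collapses onto the true lag of +2, which is the kind of artifact-removal the paper's first step formalizes with graphical models.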
Modeling of three-dimensional mixing and reacting ducted flows
NASA Technical Reports Server (NTRS)
Zelazny, S. W.; Baker, A. J.; Rushmore, W. L.
1976-01-01
A computer code, based upon a finite element solution algorithm, was developed to solve the governing equations for three-dimensional, reacting boundary region, and constant area ducted flow fields. Effective diffusion coefficients are employed to allow analyses of turbulent, transitional or laminar flows. The code was used to investigate mixing and reacting hydrogen jets injected from multiple orifices, transverse and parallel to a supersonic air stream. Computational results provide a three-dimensional description of velocity, temperature, and species-concentration fields downstream of injection. Experimental data for eight cases covering different injection conditions and geometries were modeled using mixing length theory (MLT). These results were used as a baseline for examining the relative merits of other mixing models. Calculations were made using a two-equation turbulence model (k+d) and comparisons were made between experiment and mixing length theory predictions. The k+d model shows only a slight improvement in predictive capability over MLT. Results of an examination of the effect of tensorial transport coefficients on mass and momentum field distribution are also presented. Solutions demonstrating the ability of the code to model ducted flows and parallel strut injection are presented and discussed.
Multikernel linear mixed models for complex phenotype prediction
Weissbrod, Omer; Geiger, Dan; Rosset, Saharon
2016-01-01
Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
Rana, V; Bednarek, D; Wu, J; Rudin, S
2012-06-01
To develop a library of graphic human models that closely match patients undergoing interventional fluoroscopic procedures in order to obtain an accurate estimate of their skin dose. A dose tracking system (DTS) has been developed that calculates the dose to the patient's skin in real time during fluoroscopic procedures based on a graphical simulation of the x-ray system and the patient. The calculation is performed using a lookup table containing values of mGy per mAs at a reference point and inverse-square correction using the distance from the source to individual points on the skin. For proper inverse-square correction, the external shape of the graphic should closely match that of the patient. We are in the process of developing a library of 3D human graphic models categorized as a function of basic body type, sex, height and weight. Two different open-source software applications are being used to develop graphic models with varying weights and heights, to 'morph' the shapes for body type and to 'pose' them for proper positioning on the table. The DTS software is being designed such that the most appropriate body graphic can be automatically selected based on input of several basic patient dimensional metrics. A series of male and female body graphic models has been developed which vary in weight and height. Matching pairs have been constructed with arms at the side and over the head to simulate the usual placement in cardiac procedures. The error in skin dose calculation due to inverse-square correction is expected to be below 5% if the graphic can match the position of the patient's skin surface within 1 cm. A library of categorized body shapes should allow close matching of the graphic to the patient shape, allowing more accurate determination of skin dose with the DTS. Support for this work was provided in part by NIH grants R43FD0158401, R44FD0158402, R01EB002873 and R01EB008425, and by Toshiba Medical Systems Corporation. © 2012 American Association
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
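For readers unfamiliar with fitting such models, here is a minimal sketch using statsmodels' MixedLM in Python (one of several equivalent tools; the simulated data and parameter values are invented for illustration). Because the likelihood is evaluated per observation, subjects need not share time points, and MAR-missing rows can simply be dropped.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for s in range(30):                      # 30 subjects, 5 occasions each
    u0 = rng.normal(0, 1.0)              # subject-specific random intercept
    u1 = rng.normal(0, 0.3)              # subject-specific random slope
    for t in range(5):
        y = 2.0 + u0 + (0.5 + u1) * t + rng.normal(0, 0.5)
        rows.append({"subject": s, "time": t, "y": y})
df = pd.DataFrame(rows)

# Random intercept and slope by subject; unbalanced or incomplete data are
# handled observation-by-observation.
model = smf.mixedlm("y ~ time", df, groups=df["subject"], re_formula="~time")
fit = model.fit()
print(fit.params)
```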
Development of stable isotope mixing models in ecology - Fremantle
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
A Nonlinear Mixed Effects Model for Latent Variables
ERIC Educational Resources Information Center
Harring, Jeffrey R.
2009-01-01
The nonlinear mixed effects model for continuous repeated measures data has become an increasingly popular and versatile tool for investigating nonlinear longitudinal change in observed variables. In practice, for each individual subject, multiple measurements are obtained on a single response variable over time or condition. This structure can be…
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Dynamics and Modeling of Turbulent Mixing in Oceanic Flows
2010-09-30
channel flow (for a nice theoretical discussion, see Armenio and Sarkar 2002), the mixing properties of each of the Prt formulations might not be... to incorporate effects of inhomogeneity into turbulence models. REFERENCES: Armenio, V. and Sarkar, S. 2002. An investigation of stably stratified
Historical development of stable isotope mixing models in ecology
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
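The standard dual-isotope, three-source linear mixing model mentioned above reduces to a 3x3 linear system: two isotope mass-balance equations plus the constraint that the source fractions sum to one. A worked numeric sketch with invented source signatures:

```python
import numpy as np

# Source signatures (delta values, per mil) for three food sources
d13C = np.array([-26.0, -20.0, -12.0])
d15N = np.array([  4.0,   9.0,   6.0])
mix  = np.array([-19.0,   7.0])        # measured mixture signatures (C, N)

# Standard dual-isotope, three-source linear mixing model:
#   sum(f) = 1,  f . d13C = mix_C,  f . d15N = mix_N
A = np.vstack([np.ones(3), d13C, d15N])
b = np.concatenate([[1.0], mix])
f = np.linalg.solve(A, b)
print("source fractions:", f.round(3))  # ~[0.224, 0.483, 0.293], sums to 1
```

With more sources than equations the system is underdetermined, which is exactly the "too many sources" problem the combining-sources abstract below addresses.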
COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
The Worm Process for the Ising Model is Rapidly Mixing
NASA Astrophysics Data System (ADS)
Collevecchio, Andrea; Garoni, Timothy M.; Hyndman, Timothy; Tokarev, Daniel
2016-09-01
We prove rapid mixing of the worm process for the zero-field ferromagnetic Ising model, on all finite connected graphs, and at all temperatures. As a corollary, we obtain a fully-polynomial randomized approximation scheme for the Ising susceptibility, and for a certain restriction of the two-point correlation function.
Development of stable isotope mixing models in ecology - Sydney
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Perth
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Dublin
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young
2017-03-01
Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, each patient generated 3 sets of 3D-printed bone models: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.
Zhou, C; Shen, G; Zhu, H; Yang, J; Zhang, Y; Feng, J; Shen, B
2000-01-01
A three-dimensional (3D) graphic model of a single-chain Fv (scFv) derived from an anti-human placental acidic isoferritin (PAF) monoclonal antibody (MAb) was constructed by a homologous protein-structure-prediction computer algorithm on a Silicon Graphics workstation. The structure, surface static electricity and hydrophobicity of the scFv were investigated. Computer graphic modelling indicated that all regions of the scFv, including the linker and the variable regions of the heavy (VH) and light (VL) chains, were suitable. The VH and VL regions were involved in composing the "hydrophobic pocket". The linker drifted away from the VH and VL regions. The complementarity determining regions (CDRs) of the VH and VL regions surrounded the "hydrophobic pocket". This study provides a theoretical basis for improving antibody affinity, investigating antibody structure and analyzing the functions of the VH and VL regions in antibody activity.
Cross-Validation for Nonlinear Mixed Effects Models
Colby, Emily; Bair, Eric
2013-01-01
Cross-validation is frequently used for model selection in a variety of applications. However, it is difficult to apply cross-validation to mixed effects models (including nonlinear mixed effects models or NLME models) due to the fact that cross-validation requires “out-of-sample” predictions of the outcome variable, which cannot be easily calculated when random effects are present. We describe two novel variants of cross-validation that can be applied to nonlinear mixed effects models. One variant, where out-of-sample predictions are based on post hoc estimates of the random effects, can be used to select the overall structural model. Another variant, where cross-validation seeks to minimize the estimated random effects rather than the estimated residuals, can be used to select covariates to include in the model. We show that these methods produce accurate results in a variety of simulated data sets and apply them to two publicly available population pharmacokinetic data sets. PMID:23532511
An epidemic model to evaluate the homogeneous mixing assumption
NASA Astrophysics Data System (ADS)
Turnes, P. P.; Monteiro, L. H. A.
2014-11-01
Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and therefore, recommend use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
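As a rough sketch of the logit-normal mixed model structure described above (hypothetical parameter values, not the paper's fitted estimates): the logit of the daily rain probability is Gaussian because a normal station-level random intercept enters the linear predictor.

```python
import numpy as np

rng = np.random.default_rng(2)
stations, days = 50, 200
beta0, beta1 = -1.0, 0.8       # fixed effects (invented values)
sigma_u = 0.6                  # station-level random-intercept SD

u = rng.normal(0, sigma_u, stations)      # random intercepts by station
x = rng.normal(size=(stations, days))     # a standardized covariate
eta = beta0 + beta1 * x + u[:, None]      # linear predictor: logit(p) is normal
p = 1 / (1 + np.exp(-eta))
rain = rng.random((stations, days)) < p   # daily rainfall-occurrence indicator

# The marginal occurrence rate differs from logistic(beta0) because the
# random effect enters nonlinearly -- a core GLMM subtlety.
print(rain.mean(), 1 / (1 + np.exp(-beta0)))
```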
An explicit mixed numerical method for mesoscale model
NASA Technical Reports Server (NTRS)
Hsu, H.-M.
1981-01-01
A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either shallow-water equations in one dimension or primitive equations in three dimensions. Since the technique is explicit and two time level, it conserves computer and programming resources.
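A minimal sketch of such a mixed scheme on a 1-D advection-diffusion problem (forward difference in time, upstream difference for advection, central difference for diffusion); the grid and coefficient values are illustrative, not from the paper.

```python
import numpy as np

nx, nt = 200, 400
dx, dt = 1.0 / nx, 0.5e-3
a, D = 1.0, 2e-3                      # advection speed (a > 0), diffusivity
# Conditional stability, as the abstract notes (CFL-type bounds):
assert a * dt / dx <= 1 and 2 * D * dt / dx**2 <= 1

x = np.linspace(0, 1, nx)
u = np.exp(-200 * (x - 0.3) ** 2)     # initial Gaussian pulse, periodic BCs
for _ in range(nt):
    adv = a * (u - np.roll(u, 1)) / dx                      # upstream scheme
    dif = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central scheme
    u = u + dt * (dif - adv)                                # forward in time
print("mass after integration:", u.sum() * dx)  # conserved with periodic BCs
```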
Low-order models of biogenic ocean mixing
NASA Astrophysics Data System (ADS)
Dabiri, J. O.; Rosinelli, D.; Koumoutsakos, P.
2009-12-01
Biogenic ocean mixing, the process whereby swimming animals may affect ocean circulation, has primarily been studied using order-of-magnitude theoretical estimates and a small number of field observations. We describe numerical simulations of arrays of simplified animal shapes migrating in inviscid fluid and at finite Reynolds numbers. The effect of density stratification is modeled in the fluid dynamic equations of motion by a buoyancy acceleration term, which arises due to perturbations to the density field by the migrating bodies. The effects of fluid viscosity, body spacing, and array configuration are investigated to identify scenarios in which a meaningful contribution to ocean mixing by swimming animals is plausible.
Arthur, Evan J.; Brooks, Charles L.
2016-01-01
Two fundamental challenges of simulating biologically relevant systems are the rapid calculation of the energy of solvation, and the trajectory length of a given simulation. The Generalized Born model with a Simple sWitching function (GBSW) addresses these issues by using an efficient approximation of Poisson–Boltzmann (PB) theory to calculate each solute atom's free energy of solvation, the gradient of this potential, and the subsequent forces of solvation without the need for explicit solvent molecules. This study presents a parallel refactoring of the original GBSW algorithm and its implementation on newly available, low cost graphics chips with thousands of processing cores. Depending on the system size and nonbonded force cutoffs, the new GBSW algorithm offers speed increases of between one and two orders of magnitude over previous implementations while maintaining similar levels of accuracy. We find that much of the algorithm scales linearly with an increase of system size, which makes this water model cost effective for solvating large systems. Additionally, we utilize our GPU-accelerated GBSW model to fold the model system chignolin, and in doing so we demonstrate that these speed enhancements now make accessible folding studies of peptides and potentially small proteins. PMID:26786647
Kuchinke, Wolfgang; Ohmann, Christian; Verheij, Robert A; van Veen, Evert-Ben; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C
2014-12-01
To develop a model describing core concepts and principles of data flow, data privacy and confidentiality, in a simple and flexible way, using concise process descriptions and a diagrammatic notation applied to research workflow processes. The model should help to generate robust data privacy frameworks for research done with patient data. Based on an exploration of EU legal requirements for data protection and privacy, data access policies, and existing privacy frameworks of research projects, basic concepts and common processes were extracted, described and incorporated into a model with a formal graphical representation and a standardised notation. The Unified Modelling Language (UML) notation was enriched with workflow and custom symbols to enable the representation of extended data flow requirements, data privacy and data security requirements, and privacy enhancing techniques (PET), and to allow privacy threat analysis for research scenarios. Our model is built upon the concept of three privacy zones (Care Zone, Non-care Zone and Research Zone) containing databases and data transformation operators, such as data linkers and privacy filters. Using these model components, a risk gradient for moving data from a zone of high risk for patient identification to a zone of low risk can be described. The model was applied to the analysis of data flows in several general clinical research use cases and two research scenarios from the TRANSFoRm project (e.g., finding patients for clinical research and linkage of databases). The model was validated by representing research done with the NIVEL Primary Care Database in the Netherlands. The model allows analysis of data privacy and confidentiality issues for research with patient data in a structured way and provides a framework to specify a privacy compliant data flow, to communicate privacy requirements and to identify weak points for an adequate implementation of data privacy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Quasi 1D Modeling of Mixed Compression Supersonic Inlets
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.
2012-01-01
The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.
An un-mixing model to study watershed erosion processes
NASA Astrophysics Data System (ADS)
Fox, J. F.; Papanicolaou, A. N.
2008-01-01
An un-mixing model is formulated within a Bayesian Markov Chain Monte Carlo framework for use within land-use fingerprinting to study watershed erosion processes. The model has two new components: (1) An equation and erosion process parameter are used to weight tracer signatures from each erosion process within a land-use. (2) An extra tracer distribution and episodic erosion parameter are used to represent soil eroded throughout the sampling duration and thus include the episodic nature of erosion. To test specification of these new parameters, the un-mixing model is applied in the 15 km² Jerome Creek Watershed in the Palouse Region of Northwestern Idaho. Erosion processes include surface erosion upon mountain slopes due to logging in the forest land-use and rill/interrill erosion on cultivated slopes and headcut erosion in riparian floodplains of the agricultural land-use (winter wheat/peas rotation and hay pasture). Episodic erosion occurs for the event where the model is applied. A sensitivity analysis shows that the smallest Bayesian credible set results when the new parameters are specified using hydrologic data and process-based models. The un-mixing model predicts that 90% of the eroded-soil originated from the agricultural land-use and 10% originated from the forest land-use. A comparative study is performed that estimates 90.5% and 9.5% of eroded-soil originated from the agricultural and forest land-uses. Successful performance of the un-mixing model highlights future application as a standalone probabilistic tool to monitor watershed erosion processes that exhibit non-equilibrium conditions and provide calibration data for process-based watershed models.
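A stripped-down sketch of the Bayesian MCMC un-mixing idea: a plain Metropolis walk over a single mixing fraction, with invented tracer signatures. The actual model has more parameters (erosion-process weights, the episodic-erosion term), so treat this only as the skeleton of the approach.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical tracer signatures (mean, sd) for two land-uses, plus a sample
ag, forest = (12.0, 1.5), (5.0, 1.0)
sample, sample_sd = 11.3, 0.8

def loglik(f):
    """Log-likelihood of the sample given mixing fraction f (agricultural)."""
    if not 0.0 <= f <= 1.0:
        return -np.inf                       # flat prior on [0, 1]
    mu = f * ag[0] + (1 - f) * forest[0]
    sd = np.sqrt((f * ag[1])**2 + ((1 - f) * forest[1])**2 + sample_sd**2)
    return -0.5 * ((sample - mu) / sd) ** 2 - np.log(sd)

f, chain = 0.5, []
for _ in range(20000):                       # Metropolis random walk
    prop = f + rng.normal(0, 0.1)
    if np.log(rng.random()) < loglik(prop) - loglik(f):
        f = prop
    chain.append(f)
post = np.array(chain[5000:])                # drop burn-in
print(f"ag fraction: {post.mean():.2f} "
      f"(95% credible {np.percentile(post, 2.5):.2f}-{np.percentile(post, 97.5):.2f})")
```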
Application of large eddy interaction model to a mixing layer
NASA Technical Reports Server (NTRS)
Murthy, S. N. B.
1989-01-01
The large eddy interaction model (LEIM) is a statistical model of turbulence based on the interaction of selected eddies with the mean flow and all of the eddies in a turbulent shear flow. It can be utilized as the starting point for obtaining physical structures in the flow. The possible application of the LEIM to a mixing layer formed between two parallel, incompressible flows with a small temperature difference is developed by invoking a detailed similarity between the spectra of velocity and temperature.
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking. PMID:27853440
Seismic tests for solar models with tachocline mixing
NASA Astrophysics Data System (ADS)
Brun, A. S.; Antia, H. M.; Chitre, S. M.; Zahn, J.-P.
2002-08-01
We have computed accurate 1-D solar models including both a macroscopic mixing process in the solar tachocline as well as up-to-date microscopic physical ingredients. Using sound speed and density profiles inferred through primary inversion of the solar oscillation frequencies coupled with the equation of thermal equilibrium, we have extracted the temperature and hydrogen abundance profiles. These inferred quantities place strong constraints on our theoretical models in terms of the extent and strength of our macroscopic mixing, on the photospheric heavy element abundance, on nuclear reaction rates such as S11 and S34, and on the efficiency of microscopic diffusion. We find a good overall agreement between the seismic Sun and our models if we introduce a macroscopic mixing in the tachocline and allow for variation within their uncertainties of the main physical ingredients. From our study we deduce that the solar hydrogen abundance at the solar age is Xinv = 0.732 ± 0.001 and that, based on the 9Be photospheric depletion, the maximum extent of mixing in the tachocline is 5% of the solar radius. The nuclear reaction rate for the fundamental pp reaction is found to be S11(0) = (4.06 ± 0.07) × 10^-25 MeV barns, i.e., 1.5% higher than the present theoretical determination. The predicted solar neutrino fluxes are discussed in the light of the new SNO/SuperKamiokande results.
Plasma interfacial mixing layers: Comparisons of fluid and kinetic models
NASA Astrophysics Data System (ADS)
Vold, Erik; Yin, Lin; Taitano, William; Albright, B. J.; Chacon, Luis; Simakov, Andrei; Molvig, Kim
2016-10-01
We examine plasma transport across an initial discontinuity between two species by comparing fluid and kinetic models. The fluid model employs a kinetic theory approximation for plasma transport in the limit of small Knudsen number. The kinetic simulations include explicit particle-in-cell simulations (VPIC) and a new implicit Vlasov-Fokker-Planck code, iFP. The two kinetic methods are shown to be in close agreement for many aspects of the mixing dynamics at early times (to several hundred collision times). The fluid model captures some of the earliest time dynamic behavior seen in the kinetic results, and also generally agrees with iFP at late times when the total pressure gradient relaxes and the species transport is dominated by slow diffusive processes. The results show three distinct phases of the mixing: a pressure discontinuity forms across the initial interface (on times of a few collisions), the pressure perturbations propagate away from the interfacial mixing region (on time scales of an acoustic transit) and at late times the pressure relaxes in the mix region leaving a non-zero center of mass flow velocity. The center of mass velocity associated with the outward propagating pressure waves is required to conserve momentum in the rest frame. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Funding provided by the Advanced Simulation and Computing (ASC) Program.
Upscaling of Mixing Processes using a Spatial Markov Model
NASA Astrophysics Data System (ADS)
Bolster, Diogo; Sund, Nicole; Porta, Giovanni
2016-11-01
The Spatial Markov model is a model that has been used to successfully upscale transport behavior across a broad range of spatially heterogeneous flows, with most examples to date coming from applications relating to porous media. In its most common current forms the model predicts spatially averaged concentrations. However, many processes, including for example chemical reactions, require an adequate understanding of mixing below the averaging scale, which means that knowledge of subscale fluctuations, or closures that adequately describe them, are needed. Here we present a framework, consistent with the Spatial Markov modeling framework, that enables us to do this. We apply and present it as applied to a simple example, a spatially periodic flow at low Reynolds number. We demonstrate that our upscaled model can successfully predict mixing by comparing results from direct numerical simulations to predictions with our upscaled model. To this end we focus on predicting two common metrics of mixing: the dilution index and the scalar dissipation. For both metrics our upscaled predictions very closely match observed values from the DNS. This material is based upon work supported by NSF Grants EAR-1351625 and EAR-1417264.
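Of the two mixing metrics mentioned, the dilution index is straightforward to compute from a concentration field. A sketch using Kitanidis' exponential-entropy definition, with a spreading Gaussian plume as a stand-in for the upscaled prediction:

```python
import numpy as np

def dilution_index(c, dx):
    """Dilution index: exp(-integral p ln p dx), with p = c / integral c."""
    p = c / (c.sum() * dx)
    mask = p > 0
    return np.exp(-np.sum(p[mask] * np.log(p[mask])) * dx)

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
for sigma in (0.5, 1.0, 2.0):          # plume spreading over time
    c = np.exp(-x**2 / (2 * sigma**2))
    print(sigma, dilution_index(c, dx))
# The index grows as the plume dilutes; for a 1-D Gaussian it equals
# sigma * sqrt(2*pi*e), a handy analytical check (~4.13 * sigma).
```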
NASA Astrophysics Data System (ADS)
Alexander, K.; Easterbrook, S. M.
2015-01-01
We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.
NASA Astrophysics Data System (ADS)
Alexander, K.; Easterbrook, S. M.
2015-04-01
We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.
Gill, Peter; Curran, James; Elliot, Keith
2005-01-01
The use of expert systems to interpret short tandem repeat DNA profiles in forensic, medical and ancient DNA applications is becoming increasingly prevalent as high-throughput analytical systems generate large amounts of data that are time-consuming to process. With special reference to low copy number (LCN) applications, we use a graphical model to simulate stochastic variation associated with the entire DNA process starting with extraction of sample, followed by the processing associated with the preparation of a PCR reaction mixture and PCR itself. Each part of the process is modelled with input efficiency parameters. Then, the key output parameters that define the characteristics of a DNA profile are derived, namely heterozygote balance (Hb) and the probability of allelic drop-out p(D). The model can be used to estimate the unknown efficiency parameters, such as πextraction. ‘What-if’ scenarios can be used to improve and optimize the entire process, e.g. by increasing the aliquot forwarded to PCR, the improvement expected to a given DNA profile can be reliably predicted. We demonstrate that Hb and drop-out are mainly a function of stochastic effect of pre-PCR molecular selection. Whole genome amplification is unlikely to give any benefit over conventional PCR for LCN. PMID:15681615
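A toy version of this kind of stochastic model can be written as a chain of binomial thinnings (extraction, aliquot selection) followed by a binomial branching process for PCR. The efficiency parameters and detection threshold below are invented, but the sketch reproduces the qualitative link between pre-PCR molecular selection, heterozygote balance (Hb) and drop-out p(D) at low copy number:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_profiles(n_cells=10, pi_extract=0.6, pi_aliquot=0.3,
                      pi_pcr=0.8, cycles=28, threshold=2e7, reps=5000):
    """Binomial thinning for extraction and aliquot, then per-cycle
    binomial amplification, for the two alleles of a heterozygote."""
    hb, dropouts = [], 0
    for _ in range(reps):
        counts = []
        for _ in range(2):                          # two alleles
            n = rng.binomial(n_cells, pi_extract)   # DNA extraction
            n = rng.binomial(n, pi_aliquot)         # aliquot forwarded to PCR
            for _ in range(cycles):
                n += rng.binomial(n, pi_pcr)        # stochastic amplification
            counts.append(n)
        lo, hi = min(counts), max(counts)
        if lo < threshold:
            dropouts += 1                           # an allele went undetected
        else:
            hb.append(lo / hi)                      # heterozygote balance
    return np.mean(hb), dropouts / reps

hb, p_drop = simulate_profiles()
print(f"mean Hb = {hb:.2f}, p(D) = {p_drop:.3f}")
# Raising pi_aliquot (forwarding a larger aliquot to PCR) raises Hb and
# lowers p(D) -- the kind of 'what-if' scenario the model supports.
```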
Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models
NASA Astrophysics Data System (ADS)
Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto
In this paper, we propose a set of diagrams to visualize software process reference models (PRM). The diagrams, called dimods, are a combination of visual and process-modeling techniques such as rich pictures, mind maps, and IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.
Linear mixing model applied to AVHRR LAC data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National park region was used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse resolution data for global studies.
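A constrained-least-squares unmixing step like the one described can be sketched in a few lines. Here the sum-to-one constraint is imposed as a heavily weighted extra row on top of a nonnegative least-squares solve; the endmember reflectances are invented, not AVHRR measurements:

```python
import numpy as np
from scipy.optimize import nnls

# Endmember reflectances: rows = three channels, columns = vegetation,
# soil, shade (illustrative values only).
E = np.array([[0.05, 0.15, 0.02],    # 0.58-0.68 um
              [0.45, 0.25, 0.03],    # 0.725-1.1 um
              [0.03, 0.10, 0.01]])   # 3.55-3.93 um (reflective part)
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f + 0.002 * np.random.default_rng(5).normal(size=3)

# Constrained least squares: nonnegative fractions summing to one,
# with the sum-to-one constraint as a heavily weighted augmented row.
w = 100.0
A = np.vstack([E, w * np.ones(3)])
b = np.concatenate([pixel, [w]])
f, _ = nnls(A, b)
print("fractions:", f.round(3), "sum:", round(f.sum(), 3))
```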
The Use of the Linear Mixed Model in Human Genetics.
Dandine-Roulland, Claire; Perdry, Hervé
2015-01-01
We give a short but detailed review of the methods used to deal with linear mixed models (restricted likelihood, AIREML algorithm, best linear unbiased predictors, etc.), with a few original points. Then we describe three common applications of the linear mixed model in contemporary human genetics: association testing (pathways analysis or rare variants association tests), genomic heritability estimates, and correction for population stratification in genome-wide association studies. We also consider the performance of best linear unbiased predictors for prediction in this context, through a simulation study for rare variants in a short genomic region, and through a short theoretical development for genome-wide data. For each of these applications, we discuss the relevance and the impact of modeling genetic effects as random effects. © 2016 S. Karger AG, Basel.
Logit-normal mixed model for Indian Monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-03-01
Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio
NASA Astrophysics Data System (ADS)
Hoffmann, Matthew Douglas
Content-based Music Information Retrieval (MIR) systems seek to automatically extract meaningful information from musical audio signals. This thesis applies new and existing generative probabilistic models to several content-based MIR tasks: timbral similarity estimation, semantic annotation and retrieval, and latent source discovery and separation. In order to estimate how similar two songs sound to one another, we employ a Hierarchical Dirichlet Process (HDP) mixture model to discover a shared representation of the distribution of timbres in each song. Comparing songs under this shared representation yields better query-by-example retrieval quality and scalability than previous approaches. To predict what tags are likely to apply to a song (e.g., "rap," "happy," or "driving music"), we develop the Codeword Bernoulli Average (CBA) model, a simple and fast mixture-of-experts model. Despite its simplicity, CBA performs at least as well as state-of-the-art approaches at automatically annotating songs and finding to what songs in a database a given tag most applies. Finally, we address the problem of latent source discovery and separation by developing two Bayesian nonparametric models, the Shift-Invariant HDP and Gamma Process NMF. These models allow us to discover what sounds (e.g. bass drums, guitar chords, etc.) are present in a song or set of songs and to isolate or suppress individual sources. These models' ability to decide how many latent sources are necessary to model the data is particularly valuable in this application, since it is impossible to guess a priori how many sounds will appear in a given song or set of songs. Once they have been fit to data, probabilistic models can also be used to drive the synthesis of new musical audio, both for creative purposes and to qualitatively diagnose what information a model does and does not capture. We also adapt the SIHDP model to create new versions of input audio with arbitrary sample sets, for example, to create
Graphic Enhancement of the Aircraft Penetration Model for Use as an Analytic Tool.
1983-03-01
NAVAL POSTGRADUATE SCHOOL, Monterey, California. THESIS: GRAPHIC ...increase the understanding by both the designer and the user of the particular process or interaction that is being modelled. It should provide...the Naval Postgraduate School. The elements of the model evolved from a class project which was designed to demonstrate some of the basic techniques
Numerical investigation of algebraic oceanic turbulent mixing-layer models
NASA Astrophysics Data System (ADS)
Chacón-Rebollo, T.; Gómez-Mármol, M.; Rubino, S.
2013-11-01
In this paper we investigate the finite-time and asymptotic behaviour of algebraic turbulent mixing-layer models by numerical simulation. We compare the performances given by three different settings of the eddy viscosity. We consider Richardson number-based vertical eddy viscosity models. Two of these are classical algebraic turbulence models usually used in numerical simulations of global oceanic circulation, i.e. the Pacanowski-Philander and the Gent models, while the other one is a more recent model (Bennis et al., 2010) proposed to prevent numerical instabilities generated by physically unstable configurations. The numerical schemes are based on the standard finite element method. We perform some numerical tests for relatively large deviations of realistic initial conditions provided by the Tropical Atmosphere Ocean (TAO) array. These initial conditions correspond to states close to mixing-layer profiles, measured on the Equatorial Pacific region called the West-Pacific Warm Pool. We conclude that mixing-layer profiles could be considered as kinds of "absorbing configurations" in finite time that asymptotically evolve to steady states under the application of negative surface energy fluxes.
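Of the eddy-viscosity settings compared, the Pacanowski-Philander form is compact enough to state directly. A sketch with the commonly cited coefficient values, which should be treated as assumptions here rather than the paper's exact configuration:

```python
def pacanowski_philander(Ri, nu0=1e-2, nu_b=1e-4, kappa_b=1e-5, alpha=5.0):
    """Richardson-number-based eddy coefficients in the Pacanowski-Philander
    form. Coefficient values are the commonly cited ones (assumptions here)."""
    Ri = max(Ri, 0.0)                        # crude guard for unstable profiles
    nu = nu0 / (1 + alpha * Ri) ** 2 + nu_b  # vertical eddy viscosity, m^2/s
    kappa = nu / (1 + alpha * Ri) + kappa_b  # vertical eddy diffusivity, m^2/s
    return nu, kappa

for Ri in (0.0, 0.1, 0.25, 1.0):
    nu, kappa = pacanowski_philander(Ri)
    print(f"Ri={Ri:4.2f}  nu={nu:.2e}  kappa={kappa:.2e}")
# Mixing shuts down rapidly as stratification (Ri) grows -- the behavior
# that makes such algebraic closures sensitive near mixing-layer profiles.
```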
Kierzek, Andrzej M; Zhou, Lu; Wanner, Barry L
2010-03-01
Two-component systems (TCSs) are prevalent signal transduction systems in bacteria that control innumerable adaptive responses to environmental cues and host-pathogen interactions. We constructed a detailed stochastic kinetic model of two component signalling based on published data. Our model has been validated with flow cytometry data and used to examine reporter gene expression in response to extracellular signal strength. The model shows that, depending on the actual kinetic parameters, TCSs exhibit all-or-none, graded or mixed mode responses. In accordance with other studies, positively autoregulated TCSs exhibit all-or-none responses. Unexpectedly, our model revealed that TCSs lacking a positive feedback loop exhibit not only graded but also mixed mode responses, in which variation of the signal strength alters the level of gene expression in induced cells while the regulated gene continues to be expressed at the basal level in a substantial fraction of cells. The graded response of the TCS changes to mixed mode response by an increase of the translation initiation rate of the histidine kinase. Thus, a TCS is an evolvable design pattern capable of implementing deterministic regulation and stochastic switches associated with both graded and threshold responses. This has implications for understanding the emergence of population diversity in pathogenic bacteria and the design of genetic circuits in synthetic biology applications. The model is available in systems biology markup language (SBML) and systems biology graphical notation (SBGN) formats and can be used as a component of large-scale biochemical reaction network models.
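A stochastic kinetic model of this kind is typically simulated with Gillespie's algorithm. The following is a minimal sketch of a TCS-like motif, far simpler than the published SBML model and with invented rate constants, that lets one inspect whether the reporter distribution across simulated cells looks graded or bimodal:

```python
import numpy as np

rng = np.random.default_rng(6)

def gillespie_tcs(signal, t_end=500.0):
    """Gillespie simulation of a toy TCS motif: response regulator (RR)
    is phosphorylated at a signal-dependent rate, and RR-P drives
    reporter synthesis. Not the paper's model; rates are invented."""
    RR, RRp, rep = 50, 0, 0
    k_phos, k_dephos, k_syn, k_deg = 0.01, 0.05, 0.2, 0.002
    t = 0.0
    while t < t_end:
        rates = np.array([k_phos * signal * RR,   # phosphorylation
                          k_dephos * RRp,         # dephosphorylation
                          k_syn * RRp,            # reporter synthesis
                          k_deg * rep])           # reporter decay
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1 / total)           # time to next reaction
        r = rng.choice(4, p=rates / total)        # which reaction fires
        if r == 0:
            RR -= 1; RRp += 1
        elif r == 1:
            RRp -= 1; RR += 1
        elif r == 2:
            rep += 1
        else:
            rep -= 1
    return rep

# Reporter levels across "cells" at two signal strengths: a unimodal shift
# suggests a graded response; bimodality would suggest all-or-none.
for s in (0.2, 1.0):
    levels = [gillespie_tcs(s) for _ in range(50)]
    print(s, np.mean(levels), np.std(levels))
```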
NASA Technical Reports Server (NTRS)
Nelson, D. P.
1981-01-01
A graphical presentation of the aerodynamic data acquired during coannular nozzle performance wind tunnel tests is given. The graphical data consist of plots of nozzle gross thrust coefficient, fan nozzle discharge coefficient, and primary nozzle discharge coefficient. Normalized model component static pressure distributions are presented as a function of primary total pressure, fan total pressure, and ambient static pressure for selected operating conditions. In addition, the supersonic cruise configuration data include plots of nozzle efficiency and secondary-to-fan total pressure pumping characteristics. Supersonic and subsonic cruise data are given.
Graphic comparison of reserve-growth models for conventional oil and accumulation
Klett, T.R.
2003-01-01
The U.S. Geological Survey (USGS) periodically assesses crude oil, natural gas, and natural gas liquids resources of the world. The assessment procedure requires estimated recoverable oil and natural gas volumes (field size, cumulative production plus remaining reserves) in discovered fields. Because initial reserves are typically conservative, subsequent estimates increase through time as these fields are developed and produced. The USGS assessment of petroleum resources makes estimates, or forecasts, of the potential additions to reserves in discovered oil and gas fields resulting from field development, and it also estimates the potential fully developed sizes of undiscovered fields. The term "reserve growth" refers to the commonly observed upward adjustment of reserve estimates. Because such additions are related to increases in the total size of a field, the USGS uses field sizes to model reserve growth. Future reserve growth in existing fields is a major component of remaining U.S. oil and natural gas resources and has therefore become a necessary element of U.S. petroleum resource assessments. Past and currently proposed reserve-growth models compared herein aid in the selection of a suitable set of forecast functions to provide an estimate of potential additions to reserves from reserve growth in the ongoing National Oil and Gas Assessment Project (NOGA). Reserve growth is modeled by construction of a curve that represents annual fractional changes of recoverable oil and natural gas volumes (for fields and reservoirs), which provides growth factors. Growth factors are used to calculate forecast functions, which are sets of field- or reservoir-size multipliers. Comparisons of forecast functions were made based on datasets used to construct the models, field type, modeling method, and length of forecast span. Comparisons were also made between forecast functions based on field-level and reservoir-level growth, and between forecast functions based on older
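The growth-factor construction described amounts to cumulative multiplication of annual fractional changes; one plausible reading of the resulting forecast function is as a size multiplier conditional on field age. A small numeric sketch with invented values (not USGS data):

```python
import numpy as np

# Annual fractional changes in estimated field size by year since
# discovery (illustrative values only).
annual_change = np.array([0.20, 0.12, 0.08, 0.05, 0.03, 0.02, 0.01])

# Growth factors: cumulative size multiplier relative to the initial
# estimate, G(t) = prod_{i <= t} (1 + c_i).
growth_factor = np.cumprod(1 + annual_change)

def forecast_multiplier(age_years):
    """Multiplier projecting a field already age_years old to its fully
    developed size (a hypothetical form of the forecast function)."""
    done = growth_factor[age_years - 1] if age_years > 0 else 1.0
    return growth_factor[-1] / done

for age in (0, 3, 7):
    print(age, round(forecast_multiplier(age), 3))   # older fields grow less
```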
Quantifying Mixing and Scales of Heterogeneity in 2-D Numerical Models of Chaotic Mantle Mixing
NASA Astrophysics Data System (ADS)
Harris, A. C.; Naliboff, J.; Prytulak, J.; Vanacore, E.; Cooper, K. M.; Hart, S.; Kellogg, L. H.
2006-12-01
Fundamental to our understanding of geochemical reservoirs within the Earth's mantle is the concept of the scale and distribution of heterogeneity. Although many studies approach this concept qualitatively few have attempted a quantitative assessment. Through a collaborative effort at the CIDER (Cooperative Institute for Deep Earth Research) 2006 summer workshop, we applied a 2-D/1-D power spectral and statistical analysis, respectively, to the temperature field and passive tracer distribution within a 2-D numerical model of mantle convection. The resultant data provides a means to objectively describe the scales of mixing and heterogeneity within various model scenarios. The dynamic models used had a 1x10 aspect ratio, included temperature- and pressure-dependent viscosity, had a Rayleigh number of 10^7, and had both internal and basal heating. One end member case includes a layered structure for viscosity and thermal conductivity, with a sharp increase in the mid-mantle. Spectral analysis of the temperature fields indicates that power near the upper and lower boundary layers is concentrated in long-wavelength structures while in the mid-mantle the spectrum is broader. Layering the viscosity structure enhances this dichotomy, but does not isolate the upper from the lower mantle and does not necessarily lead to decreased mixing rates or efficiency. Preliminary results demonstrate that the overall particle distribution, measured as a function of the distance between particles, is not necessarily unimodal. Furthermore, at a given time step this distribution may become multimodal.
Chen, Hsiang-Chun; Wehrly, Thomas E
2015-02-20
The classic concordance correlation coefficient measures the agreement between two variables. In recent studies, concordance correlation coefficients have been generalized to deal with responses from a distribution from the exponential family using the univariate generalized linear mixed model. Multivariate data arise when responses on the same unit are measured repeatedly by several methods. The relationship among these responses is often of interest. In clustered mixed data, the correlation could be present between repeated measurements either within the same observer or between different methods on the same subjects. Indices for measuring such association are needed. This study proposes a series of indices, namely, intra-correlation, inter-correlation, and total correlation coefficients to measure the correlation under various circumstances in a multivariate generalized linear model, especially for joint modeling of clustered count and continuous outcomes. The proposed indices are natural extensions of the concordance correlation coefficient. We demonstrate the methodology with simulation studies. A case example of osteoarthritis study is provided to illustrate the use of these proposed indices. Copyright © 2014 John Wiley & Sons, Ltd.
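For reference, the classic concordance correlation coefficient that these indices extend can be computed directly from its moment definition. A short sketch on simulated data (invented values), showing how location/scale bias lowers agreement even when plain correlation stays high:

```python
import numpy as np

def ccc(x, y):
    """Classic concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(7)
truth = rng.normal(10, 2, 200)
method_a = truth + rng.normal(0, 0.5, 200)             # unbiased but noisy
method_b = 1.1 * truth + 1 + rng.normal(0, 0.5, 200)   # location/scale bias
print(ccc(method_a, truth), ccc(method_b, truth))
# Bias lowers the CCC even though the Pearson correlation with the truth
# remains high -- the distinction between agreement and correlation.
```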
Modeling of Low Feed-Through CD Mix Implosions
NASA Astrophysics Data System (ADS)
Pino, Jesse; MacLaren, Steven; Greenough, Jeff; Casey, Daniel; Dittrich, Tom; Kahn, Shahab; Kyrala, George; Ma, Tammy; Salmonson, Jay; Smalyuk, Vladimir; Tipton, Robert
2015-11-01
The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the National Ignition Facility. However, the previous implosions suffered from large instability growth seeded from perturbations on the outside of the capsule. Recently, the separated reactants technique has been applied to two platforms designed to minimize this feed-through and isolate local mix at the gas-ablator interface: the Two Shock (TS) and Adiabat-Shaped (AS) Platforms. Additionally, the background contamination of Deuterium in the gas has been greatly reduced, allowing for simultaneous observation of TT, DT, and DD neutrons, which respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations with both a Reynolds-Averaged Navier Stokes method and an enhanced diffusivity model. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-674867.
Uncertainty in mixing models: a blessing in disguise?
NASA Astrophysics Data System (ADS)
Delsman, J. R.; Oude Essink, G. H. P.
2012-04-01
Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few studies have addressed the uncertainty associated with these studies in much detail. This uncertainty stems from analytical error, spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing model analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km² agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice is aimed at improving water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. The use of a GLUE-like framework for applying the end-member mixing analysis not only quantified the uncertainty associated with the analysis, the analysis of the posterior parameter set also identified the existence of catchment processes otherwise overlooked.
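A compact sketch of the GLUE-like end-member mixing analysis described: sample mixing fractions uniformly on the simplex, keep only those that reproduce the sample tracer concentrations within a tolerance, and read the uncertainty off the retained (behavioural) set. The end-member values are invented stand-ins, not the polder tracers:

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical end-member tracer concentrations (e.g. Cl, EC) and a sample
members = np.array([[150.0, 2500.0],   # brackish seepage
                    [ 10.0,  600.0],   # rainfall/runoff
                    [ 90.0, 1200.0]])  # Rhine flushing water
sample = np.array([70.0, 1100.0])
tol = 0.10                             # accept mixes within 10% of the sample

f = rng.dirichlet(np.ones(3), size=100_000)   # random fractions, rows sum to 1
pred = f @ members                            # predicted mixture concentrations
ok = np.all(np.abs(pred - sample) / sample < tol, axis=1)
kept = f[ok]                                  # the 'behavioural' set
print("accepted:", len(kept))
print("fraction ranges:", kept.min(0).round(2), "to", kept.max(0).round(2))
# The spread of each retained fraction is the uncertainty estimate that a
# single best-fit EMMA solution would hide.
```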
ERIC Educational Resources Information Center
Kriston, Levente; Melchior, Hanne; Hergert, Anika; Bergelt, Corinna; Watzke, Birgit; Schulz, Holger; von Wolff, Alessa
2011-01-01
The aim of our study was to develop a graphical tool that can be used in addition to standard statistical criteria to support decisions on the number of classes in explorative categorical latent variable modeling for rehabilitation research. Data from two rehabilitation research projects were used. In the first study, a latent profile analysis was…
ERIC Educational Resources Information Center
Whitman, David L.; Terry, Ronald E.
1985-01-01
Demonstrating petroleum engineering concepts in undergraduate laboratories often requires expensive and time-consuming experiments. To eliminate these problems, a graphical simulation technique that illustrates vapor-liquid equilibrium and the use of mathematical modeling was developed for junior-level laboratories. A description of this…
A graphical interface based model for wind turbine drive train dynamics
Manwell, J.F.; McGowan, J.G.; Abdulwahid, U.; Rogers, A.; McNiff, B.
1996-12-31
This paper presents a summary of a wind turbine drive train dynamics code that has been under development at the University of Massachusetts, under National Renewable Energy Laboratory (NREL) support. The code is intended to be used to assist in the proper design and selection of drive train components. This work summarizes the development of the equations of motion for the model, and discusses the method of solution. In addition, a number of comparisons with analytical solutions and experimental field data are given. The summary includes conclusions and suggestions for future work on the model. 13 refs., 10 figs.
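The code's actual equations of motion are not reproduced in the abstract; a generic two-mass drivetrain sketch (rotor and generator inertias coupled through a flexible shaft and an ideal gearbox), with made-up parameter values, illustrates the kind of system such a code integrates:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical parameters: inertias (kg m^2), shaft stiffness (N m/rad),
    # damping (N m s/rad), torques (N m), gearbox ratio.
    J_r, J_g = 5.0e5, 60.0
    k, c = 8.0e6, 2.0e4
    T_aero, T_gen = 1.0e5, 1.1e3
    NG = 90.0

    def rhs(t, y):
        th_r, w_r, th_g, w_g = y
        twist = th_r - th_g / NG        # shaft twist on the low-speed side
        twist_rate = w_r - w_g / NG
        T_shaft = k * twist + c * twist_rate
        return [w_r, (T_aero - T_shaft) / J_r,
                w_g, (T_shaft / NG - T_gen) / J_g]

    sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 1.2, 0.0, 1.2 * NG], max_step=0.01)
    print("final rotor speed (rad/s):", round(sol.y[1, -1], 3))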
Perry, Nicholas S; Baucom, Katherine J W; Bourne, Stacia; Butner, Jonathan; Crenshaw, Alexander O; Hogan, Jasara N; Imel, Zac E; Wiltshire, Travis J; Baucom, Brian R W
2017-08-01
Researchers commonly use repeated-measures actor-partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal patterns estimated in RM-APIM. Interpretation of results from these models largely focuses on the meaning of single-parameter estimates in isolation from all the others. However, considering individual coefficients separately impedes the understanding of how these associations combine to produce an interdependent pattern that emerges over time. Additionally, positive within-person, or actor, effects are commonly misinterpreted as indicating growth from one time point to the next when they actually represent decline. We suggest that change-as-outcome RM-APIMs and vector field diagrams (VFDs) can be used to improve the understanding and presentation of dyadic patterns of association described by standard RM-APIMs. The current article briefly reviews the conceptual foundations of RM-APIMs, demonstrates how change-as-outcome RM-APIMs and VFDs can aid interpretation of standard RM-APIMs, and provides a tutorial in making VFDs using multilevel modeling. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
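The article's own tutorial uses multilevel modeling; as a minimal sketch of a vector field diagram for a change-as-outcome model, with hypothetical actor (a) and partner (p) coefficients, assuming each partner's change is a linear function of both partners' current scores:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical change-as-outcome coefficients:
    # dx = a1*x + p12*y ; dy = p21*x + a2*y (scores centered at zero)
    a1, a2 = -0.4, -0.3    # negative actor effects: regression to the mean
    p12, p21 = 0.2, 0.25   # positive partner effects: coupling

    x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
    dx = a1 * x + p12 * y
    dy = p21 * x + a2 * y

    plt.quiver(x, y, dx, dy)
    plt.xlabel("partner 1 score")
    plt.ylabel("partner 2 score")
    plt.title("Vector field diagram of dyadic change (hypothetical values)")
    plt.show()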
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
An Adaptive Mixing Depth Model for an Industrialized Shoreline Area.
NASA Astrophysics Data System (ADS)
Dunk, Richard H.
1993-01-01
Internal boundary layer characteristics are often overlooked in atmospheric diffusion modeling applications but are essential for accurate air quality assessment. This study focuses on a unique air pollution problem that is partially resolved by representative internal boundary layer description and prediction. Emissions from a secondary non-ferrous smelter located adjacent to a large waterway, which is situated near a major coastal zone, became suspect in causing adverse air quality. In an effort to prove or disprove this allegation, "accepted" air quality modeling was performed. Predicted downwind concentrations indicated that the smelter plume was not responsible for causing regulatory standards to be exceeded. However, chronic community complaints continued to be directed toward the smelter facility. Further investigation into the problem revealed that complaint occurrences coincided with onshore southeasterly flows. Internal boundary layer development during onshore flow was assumed to produce a mixing depth conducive to plume trapping or fumigation. The preceding premise led to the utilization of estimated internal boundary layer depths for dispersion model input in an attempt to improve prediction accuracy. Monitored downwind ambient air concentrations showed that model predictions were still substantially lower than actual values. After analyzing the monitored values and comparing them with actual plume observations conducted during several onshore flow occurrences, the author hypothesized that the waterway could cause a damping effect on internal boundary layer development. This effective decrease in mixing depths would explain the abnormally high ambient air concentrations experienced during onshore flows. Therefore, a full-scale field study was designed and implemented to study the waterway's influence on mixing depth characteristics. The resultant data were compiled and formulated into an area-specific mixing depth model that can be adapted to
Fermion masses and mixing in general warped extra dimensional models
NASA Astrophysics Data System (ADS)
Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel
2015-06-01
We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution differs from five-dimensional anti-de Sitter (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector become available.
Water diffusion in bicelles and the mixed bicelle model.
Soong, Ronald; Macdonald, Peter M
2009-01-06
To test a prediction of the mixed bicelle model, stimulated echo (STE) pulsed field gradient (PFG) (1)H nuclear magnetic resonance (NMR) measurements of water diffusion between and across bicellar lamellae were performed in positively and negatively magnetically aligned bicelles, composed of mixtures of DHPC (1,2-dihexanoyl-sn-glycero-3-phosphocholine) and DMPC (1,2-dimyristoyl-sn-glycero-3-phosphocholine), as a function of temperature and of the proportion of added short-chain lipid DHPC. (31)P NMR spectra obtained for each situation confirmed that the DHPC undergoes fast exchange between curved and planar regions as per the mixed bicelle model and permitted an estimate of the proportion of the two DHPC populations. Water diffusion across the bicellar lamellae was shown to scale directly with q*, the fraction of edge versus planar phospholipid, rather than simply the ratio q, the global fraction of long-chain to short-chain phospholipid. Geometric modeling of the dependence of water diffusion on q* suggested an upper limit of 400 Å for the size of DHPC-rich toroidal perforations within the bicelle lamellae. These findings constitute an independent confirmation of the mixed bicelle model in which DHPC is not confined to edge regions but enjoys, instead, a finite miscibility with DMPC.
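A small sketch of the q versus q* distinction the study tests, assuming one common form of the corrected ratio in which DHPC that has migrated into planar regions is counted as planar lipid (the exact definition used in the paper may differ):

    def q_global(dmpc, dhpc):
        """Global long-chain to short-chain molar ratio q = [DMPC]/[DHPC]."""
        return dmpc / dhpc

    def q_star(dmpc, dhpc, f_planar):
        """A corrected ratio q*: only rim DHPC counts as 'edge' lipid.
        f_planar is the planar DHPC fraction (e.g. from 31P NMR populations)."""
        dhpc_rim = dhpc * (1.0 - f_planar)
        return (dmpc + dhpc * f_planar) / dhpc_rim

    # Example: nominal q = 3 bicelles with 25% of the DHPC in planar regions
    print(q_global(3.0, 1.0))                # 3.0
    print(round(q_star(3.0, 1.0, 0.25), 2))  # 4.33 -> fewer edges than q implies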
A process-based model for bioturbation-induced mixing
NASA Astrophysics Data System (ADS)
Aquino, T.; Roche, K. R.; Aubeneau, A. F.; Packman, A. I.; Bolster, D.
2016-12-01
Bioturbation, the redistribution of sediment by living organisms, represents the dominant mode of sediment transport in many freshwater systems. In the absence of high shear events, such as storms, this process is often the main driver of vertical mixing of substances in the upper layers below the sediment-water interface, where chemical conditions may vary substantially. Thus, modeling the fate and transport of deposited or adsorbed substances such as organic matter, contaminants and nutrients in the subsurface often requires adequate understanding and parameterization of bioturbation mechanisms. Although the potential of bioturbation for mixing of substances in the subsurface is widely recognized, a modeling framework that ties the physics of bioturbation to mixing is largely missing. In this talk, we present a process-based model built on simple assumptions about organism burrowing behavior, and discuss the impact of two different transport mechanisms. We compare the results of our model to experimental data from a recent ex situ study of bioturbation in a freshwater system by the worm Lumbriculus variegatus.
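The authors' burrowing assumptions are not detailed in the abstract; a minimal Monte Carlo sketch of one simple process-based assumption (random reworking events that relocate tracer particles within a biologically active layer), with invented layer depth and event rate:

    import numpy as np

    rng = np.random.default_rng(2)

    n_particles, n_steps = 5000, 1000
    L_bio = 0.10    # depth of biologically active layer (m), assumed
    p_event = 0.01  # per-step probability a particle is reworked, assumed
    z = np.zeros(n_particles)  # tracer starts at the sediment-water interface

    for _ in range(n_steps):
        hit = (rng.random(n_particles) < p_event) & (z < L_bio)
        # A reworking event relocates the particle uniformly within the layer
        z[hit] = rng.uniform(0.0, L_bio, hit.sum())

    print("mean tracer depth (m):", z.mean().round(4))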
Comparing turbulent mixing of biogenic VOC across model scale
NASA Astrophysics Data System (ADS)
Li, Y.; Barth, M. C.; Steiner, A. L.
2016-12-01
Vertical mixing of biogenic volatile organic compounds (BVOC) in the planetary boundary layer (PBL) is very important in simulating the formation of ozone, secondary organic aerosols (SOA), and climate feedbacks. To assess the representation of vertical mixing in the atmosphere for the Baltimore-Washington DISCOVER-AQ 2011 campaign, we use two models of different scale and turbulence representation: (1) the National Center for Atmospheric Research's Large Eddy Simulation (LES), and (2) the Weather Research and Forecasting-Chemistry (WRF-Chem) model to simulate regional meteorology and chemistry. For WRF-Chem, we evaluate the boundary layer schemes in the model at convection-permitting scales (4 km). WRF-Chem simulated vertical profiles are compared with the results from the turbulence-resolving LES model under a similar meteorological and chemical environment. The influence of clouds on gas and aqueous species and the impact of cloud processing at both scales are evaluated. The temporal evolution of a surface-to-cloud concentration ratio is calculated to assess how well WRF-Chem captures the vertical mixing of BVOC.
Measurements and Models for Hazardous Chemical and Mixed Wastes
Laurel A. Watts; Cynthia D. Holcomb; Stephanie L. Outcalt; Beverly Louie; Michael E. Mullins; Tony N. Rogers
2002-08-21
Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the DOE sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from available data or data that are easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.
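The salt-adapted model itself is not specified in the abstract; a minimal sketch of the underlying Peng-Robinson equation of state for a pure component (real roots of the cubic in the compressibility factor Z), using the standard published constants:

    import numpy as np

    R = 8.314  # J/(mol K)

    def pr_z_factors(T, P, Tc, Pc, omega):
        """Real roots of the Peng-Robinson cubic in compressibility factor Z."""
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
        a = 0.45724 * R**2 * Tc**2 / Pc * alpha
        b = 0.07780 * R * Tc / Pc
        A = a * P / (R * T) ** 2
        B = b * P / (R * T)
        coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B,
                  -(A * B - B**2 - B**3)]
        roots = np.roots(coeffs)
        return np.sort(roots[np.isreal(roots)].real)

    # Water at 373.15 K and 1 atm (Tc = 647.1 K, Pc = 22.064 MPa, omega = 0.3443)
    print(pr_z_factors(373.15, 101325.0, 647.1, 22.064e6, 0.3443))

The mixture model in the paper additionally requires mixing rules and the non-volatile/salt adaptations, which are not reproduced here.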
Wilderjans, Tom F; Ceulemans, Eva; Van Mechelen, Iven; Depril, Dirk
2011-03-01
In many areas of psychology, one is interested in disclosing the underlying structural mechanisms that generated an object-by-variable data set. Often, based on theoretical or empirical arguments, it may be expected that these underlying mechanisms imply that the objects are grouped into clusters that are allowed to overlap (i.e., an object may belong to more than one cluster). In such cases, analyzing the data with Mirkin's additive profile clustering model may be appropriate. In this model: (1) each object may belong to none, one, or several clusters; (2) there is a specific variable profile associated with each cluster; and (3) the scores of the objects on the variables can be reconstructed by adding the cluster-specific variable profiles of the clusters the object in question belongs to. Until now, however, no software program has been publicly available to perform an additive profile clustering analysis. For this purpose, in this article, the ADPROCLUS program, steered by a graphical user interface, is presented. We further illustrate its use by means of the analysis of a patient-by-symptom data matrix.
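ADPROCLUS itself is a dedicated program; a minimal numpy sketch of the model structure it fits (overlapping binary memberships A plus cluster profiles P reconstructing the data as A·P), with a toy membership matrix. Given A, the optimal profiles follow by least squares; the full method also searches over A, which is omitted here:

    import numpy as np

    rng = np.random.default_rng(3)

    # Assumed overlapping binary membership matrix A (6 objects x 2 clusters);
    # object 3 belongs to both clusters, object 6 to none.
    A = np.array([[1, 0],
                  [1, 0],
                  [1, 1],
                  [0, 1],
                  [0, 1],
                  [0, 0]], float)
    P_true = np.array([[2.0, 0.0, 1.0, 0.0],
                       [0.0, 3.0, 0.0, 1.0]])
    X = A @ P_true + rng.normal(0, 0.1, (6, 4))  # noisy object-by-variable data

    P_hat, *_ = np.linalg.lstsq(A, X, rcond=None)  # profiles given memberships
    loss = np.sum((X - A @ P_hat) ** 2)
    print(P_hat.round(2), "SSE:", round(loss, 3))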
Fierro, Andrew; Dickens, James; Neuber, Andreas
2014-12-15
A 3-dimensional particle-in-cell/Monte Carlo collision simulation that is fully implemented on a graphics processing unit (GPU) is described and used to determine low-temperature plasma characteristics at high reduced electric field, E/n, in nitrogen gas. Details of implementation on the GPU using the NVIDIA Compute Unified Device Architecture framework are discussed with respect to efficient code execution. The software is capable of tracking around 10 × 10^6 particles with dynamic weighting and a total mesh size larger than 10^8 cells. Verification of the simulation is performed by comparing the electron energy distribution function and plasma transport parameters to known Boltzmann equation (BE) solvers. Under the assumption of a uniform electric field and neglecting the build-up of positive ion space charge, the simulation agrees well with the BE solvers. The model is utilized to calculate plasma characteristics of a pulsed, parallel plate discharge. A photoionization model provides the simulation with additional electrons after the initial seeded electron density has drifted towards the anode. Comparison of the performance benefits between the GPU implementation and a CPU implementation is considered, and a speed-up factor of 13 for a 3D relaxation Poisson solver is obtained. Furthermore, a factor of 60 speed-up is realized for parallelization of the electron processes.
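The GPU kernels are described in the paper itself; as a minimal CPU sketch of the relaxation-type Poisson solve the speed-up figure refers to, here is a 2D Jacobi iteration on a uniform grid with a hypothetical charge blob. This stencil update is exactly the kind of loop that parallelizes well on a GPU:

    import numpy as np

    n, h = 128, 1.0 / 128
    rho = np.zeros((n, n))
    rho[60:68, 60:68] = 1.0   # hypothetical charge density blob
    phi = np.zeros((n, n))    # potential, Dirichlet phi = 0 on the boundary

    # Jacobi relaxation of  laplacian(phi) = -rho
    for it in range(5000):
        phi_new = phi.copy()
        phi_new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                      phi[1:-1, 2:] + phi[1:-1, :-2] +
                                      h * h * rho[1:-1, 1:-1])
        if np.max(np.abs(phi_new - phi)) < 1e-8:
            break
        phi = phi_new

    print("iterations:", it, "max potential:", phi.max().round(6))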
A mixed model reduction method for preserving selected physical information
NASA Astrophysics Data System (ADS)
Zhang, Jing; Zheng, Gangtie
2017-03-01
A new model reduction method in the frequency domain is presented. By combining model reduction techniques from the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of effective modal mass of virtually constrained modes. The reduced model preserves the physical information related to the selected physical coordinates, such as physical parameters and the physical space positions of the corresponding structure components. For cases of non-classical damping, the method is extended to model reduction in the state space, still containing only the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.
Effects of Radion Mixing on the Standard Model Higgs Boson
Rizzo, Thomas G.
2002-09-09
We discuss how mixing between the Standard Model Higgs boson and the radion of the Randall-Sundrum model can lead to significant shifts in the expected properties of the Higgs boson. In particular we show that the total and partial decay widths of the Higgs, as well as the h → gg branching fraction, can be substantially altered from their SM expectations, while the remaining branching fractions are modified by less than about 5% for most of the parameter space volume. Precision measurements of Higgs boson properties at a Linear Collider are shown to probe a large region of the Randall-Sundrum model parameter space.
Extension of the stochastic mixing model to cumulonimbus clouds
Raymond, D.J.; Blyth, A.M.
1992-11-01
The stochastic mixing model of cumulus clouds is extended to the case in which ice and precipitation form. A simple cloud microphysical model is adopted in which ice crystals and aggregates are carried along with the updraft, whereas raindrops, graupel, and hail are assumed to fall out immediately. The model is then applied to the 2 August 1984 case study of convection over the Magdalena Mountains of central New Mexico, with excellent results. The formation of ice and precipitation can explain the transition of this system from a cumulus congestus cloud to a thunderstorm. 28 refs.
Learning a Tracking and Estimation Integrated Graphical Model for Human Pose Tracking.
Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2015-12-01
We investigate the tracking of 2-D human poses in a video stream to determine the spatial configuration of body parts in each frame, but this is not a trivial task because people may wear different kinds of clothing and may move very quickly and unpredictably. The technology of pose estimation is typically applied, but it ignores the temporal context and cannot provide smooth, reliable tracking results. Therefore, we develop a tracking and estimation integrated model (TEIM) to fully exploit temporal information by integrating pose estimation with visual tracking. However, joint parsing of multiple articulated parts over time is difficult, because a full model with edges capturing all pairwise relationships within and between frames is loopy and intractable. In previous models, approximate inference was usually resorted to, but it cannot promise good results and the computational cost is large. We overcome these problems by exploring the idea of divide and conquer, which decomposes the full model into two much simpler tractable submodels. In addition, a novel two-step iteration strategy is proposed to efficiently conquer the joint parsing problem. Algorithmically, we design TEIM very carefully so that: 1) it enables pose estimation and visual tracking to compensate for each other to achieve desirable tracking results; 2) it is able to deal with the problem of tracking loss; and 3) it only needs past information and is capable of tracking online. Experiments are conducted on two public data sets in the wild with ground truth layout annotations, and the experimental results indicate the effectiveness of the proposed TEIM framework.
Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio
2010-11-01
…demonstrated the effectiveness of the GaP-NMF model on several problems in analyzing and processing recorded music.
Graphical Models for Recovering Probabilistic and Causal Queries from Missing Data
2014-11-01
…employ a formal representation called 'Missingness Graphs' (m-graphs, for short) to explicitly portray the missingness process as well as the … exists any theoretical impediment to estimability of queries of interest, m-graphs can also provide a means for communication and refinement of assumptions about the missingness process. Furthermore, m-graphs permit us to detect violations in modeling assumptions even when the dataset is…
A multivariate probabilistic graphical model for real-time volcano monitoring on Mount Etna
NASA Astrophysics Data System (ADS)
Cannavò, Flavio; Cannata, Andrea; Cassisi, Carmelo; Di Grazia, Giuseppe; Montalto, Placido; Prestifilippo, Michele; Privitera, Eugenio; Coltelli, Mauro; Gambino, Salvatore
2017-05-01
Real-time assessment of the state of a volcano plays a key role for civil protection purposes. Unfortunately, because of the coupling of highly nonlinear and partially known complex volcanic processes, and the intrinsic uncertainties in measured parameters, the state of a volcano needs to be expressed in probabilistic terms, thus making any rapid assessment sometimes impractical. With the aim of aiding on-duty personnel in volcano-monitoring roles, we present an expert system approach to automatically estimate the ongoing state of a volcano from all available measurements. The system consists of a probabilistic model that encodes the conditional dependencies between measurements and volcanic states in a directed acyclic graph and renders an estimation of the probability distribution of the feasible volcanic states. We test the model with Mount Etna (Italy) as a case study by considering a long record of multivariate data. Results indicate that the proposed model is effective for early warning and has considerable potential for decision-making purposes.
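The Etna system's graph structure and probability tables are not given in the abstract; a minimal sketch of the inference pattern (a discrete DAG with volcanic state as parent of conditionally independent monitoring signals), with invented probabilities:

    import numpy as np

    states = ["quiet", "unrest", "eruption"]
    prior = np.array([0.80, 0.15, 0.05])  # invented prior

    # P(observation level | state); columns: low, medium, high (invented)
    p_tremor = np.array([[0.85, 0.13, 0.02],
                         [0.30, 0.50, 0.20],
                         [0.05, 0.25, 0.70]])
    p_deform = np.array([[0.90, 0.08, 0.02],
                         [0.40, 0.45, 0.15],
                         [0.10, 0.30, 0.60]])

    def posterior(tremor_level, deform_level):
        # Bayes rule, measurements conditionally independent given the state
        like = p_tremor[:, tremor_level] * p_deform[:, deform_level]
        post = prior * like
        return post / post.sum()

    # Observed: high tremor (index 2), medium deformation (index 1)
    for s, p in zip(states, posterior(2, 1)):
        print(f"P({s} | data) = {p:.3f}")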
PVA in Igneous Petrology: The Rosetta Stone for Testing Mixing and Fractionation Models
NASA Astrophysics Data System (ADS)
Vogel, T. A.; Ehrlich, R.
2006-05-01
One of the major goals of igneous petrology is to evaluate the relative contributions of fractional crystallization and magma mixing (or assimilation) that produce the chemical variations within related igneous units (plutons, sills and dikes, ash-flow tuffs, lavas, etc.). Mixing and fractional crystallization have often been evaluated by selecting a few variables (major elements, trace elements, isotopes) and modeling the trends. EC-AFC models have been developed to include energy constraints along with selected trace elements and isotopes. Polytopic Vector Analysis (PVA) is a technique that uses all of the chemical variations (major elements and trace elements) in all the samples to determine: (1) the number of end-member compositions present in the system, (2) the chemical composition of each end member, and (3) the relative contribution of each end member in each sample from the igneous unit. Each sample in the dataset is described as the sum of some fraction of each end member, so each sample is uniquely described by a specific amount of each end member. Each end member is defined in the same non-negative units as the sample values. Graphical analysis of the output allows the recognition of trends due either to crystal fractionation or to mixing of separate magma batches (assimilation), as samples form discrete clusters or trends with different variations in end-member proportions. Mixing of discrete magma batches is immediately apparent, as samples representing mixed magmas plot between the parent magmas. PVA has been used successfully to identify end members in aqueous geochemistry and petroleum. However, even though it was originally developed in part by igneous petrologists, it has not been thoroughly tested on petrologic problems. In order to evaluate PVA, we selected three igneous units in which fractionation and mixing processes had been identified: (1) glasses from Kilauea Iki drilling, which are unquestionably due to crystal fractionation; (2
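PVA software is not shown in the abstract; a minimal sketch of the core unmixing step it relies on (estimating non-negative end-member proportions that sum to one), assuming end-member compositions are already known, via an augmented non-negative least squares:

    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical end-member compositions (rows: oxides, columns: end members)
    E = np.array([[50.0, 70.0, 60.0],    # SiO2
                  [10.0,  3.0,  6.0],    # MgO
                  [ 8.0,  1.0,  4.0]])   # FeO
    sample = np.array([61.0, 5.4, 3.4])  # one analyzed sample

    # Enforce sum(fractions) = 1 softly via a heavily weighted extra row
    w = 1e3
    A = np.vstack([E, w * np.ones(3)])
    b = np.append(sample, w)
    frac, resid = nnls(A, b)
    print("end-member proportions:", frac.round(3), "residual:", round(resid, 4))

Full PVA additionally estimates the number and compositions of the end members from the data themselves, which this sketch does not attempt.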
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
The traditional methods of mechanical gear-drive simulation are the gear pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Currently, most research focuses on the description of geometric models and the definition of boundary conditions, neither of which solves these problems fundamentally. To improve simulation efficiency while ensuring high accuracy, a mixed model method is presented that uses gear tooth profiles in place of the solid gear to simulate gear movement. The modeling process is as follows: first, build the solid models of the mechanism in SolidWorks; then collect the point coordinates of the gear outline curves using the SolidWorks API and create fitted curves in Adams based on those coordinates; next, adjust the position of the fitted curves according to the position of the contact area; finally, define the loading conditions, boundary conditions, and simulation parameters. The method provides gear shape information through tooth profile curves, simulates the meshing process through curve-to-curve contact, and obtains mass and inertia data from the solid gear models. This simulation process combines the two models to complete the gear-drive analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed model method has high application value for the study of the dynamics of gear mechanisms.
Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model)
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee; Cucinotta, Francis A.
2010-01-01
The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear-energy transfer (LET), range (R), and absorption in tissue equivalent material for a given Charge (Z), Mass Number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties are evaluated such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their
Shell Model Depiction of Isospin Mixing in sd Shell
Lam, Yi Hua; Smirnova, Nadya A.; Caurier, Etienne
2011-11-30
We constructed a new empirical isospin-symmetry breaking (ISB) Hamiltonian in the sd (1s1/2, 0d5/2, and 0d3/2) shell-model space. In this contribution, we present its application to two important case studies: (i) β-delayed proton emission from ²²Al and (ii) the isospin-mixing correction to superallowed 0⁺ → 0⁺ β-decay ft-values.
Continuum Modeling of Mixed Conductors: a Study of Ceria
NASA Astrophysics Data System (ADS)
Ciucci, Francesco
In this thesis we derive a new way to analyze the impedance response of mixed conducting materials for use in solid oxide fuel cells (SOFCs), with the main focus on anodic materials, in particular cerium oxides. First we analyze the impact of mixed conductivity coupled to electrocatalytic behavior in the linear time-independent domain for a thick ceria sample. We find that, for Samarium Doped Ceria, a promising fuel cell material, chemical reactions are the determining component of the polarization resistance. As a second step we extend the previous model to the time-dependent case, focusing on single-harmonic excitation, i.e., impedance spectroscopy conditions. We also extend the model to cases where some input diffusivities are spatially nonuniform, for instance where diffusivities change significantly in the vicinity of the electrocatalytic region. As a third and final step we use the model to capture the two-dimensional behavior of mixed conducting thin films, where the electronic motion from one side of the sample to the other is impeded. Such conditions are similar to those encountered in fuel cells where an electrolyte conducting exclusively oxygen ions is placed between the anode and the cathode. The framework developed was also extended to study a popular cathodic material, Lanthanum Manganite. The model is used to give unprecedented insight into SOFC polarization resistance analysis of mixed conductors: it helps rigorously elucidate rate-determining steps and address the interplay of diffusion with surface losses. Electrochemical surface losses dominate for most experimental conditions of Samarium Doped Ceria and are shown to be strongly dependent on geometry.
Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models
Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.
2013-01-01
Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. Spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of both combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincide with, and confirms the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921
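The paper's exact model specification is not in the abstract; a minimal sketch of a statistical mixed model of the same flavor (hydraulic response versus flow rate with a random intercept per monitoring location), fitted on synthetic data with statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)

    # Synthetic data: 8 monitoring ports x 30 observations each (all invented)
    ports = np.repeat(np.arange(8), 30)
    flow = rng.uniform(0.5, 5.0, size=ports.size)    # flow rate, assumed units
    port_effect = rng.normal(0, 1.0, size=8)[ports]  # preferential-flow ports
    head = 2.0 + 0.8 * flow + port_effect + rng.normal(0, 0.3, ports.size)

    df = pd.DataFrame({"head": head, "flow": flow, "port": ports})

    # Random-intercept linear mixed model: head ~ flow, grouped by port
    model = smf.mixedlm("head ~ flow", df, groups=df["port"]).fit()
    print(model.summary())

Large estimated random-intercept variance relative to the residual variance is the kind of signature that would flag location-dependent preferential flow.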
Mixed-gas model for predicting decompression sickness in rats.
Lillo, R S; Parker, E C
2000-12-01
A mixed-gas model for rats was developed to further explore the role of different gases in decompression and to provide a global model for possible future evaluation of its usefulness for human prediction. A Hill-equation dose-response model was fitted to over 5,000 rat dives by using the technique of maximum likelihood. These dives used various mixtures of He, N(2), Ar, and O(2) and had times at depth up to 2 h and varied decompression profiles. Results supported past findings, including 1) differences among the gases in decompression risk (He < N(2) < Ar) and exchange rate (He > Ar approximately N(2)), 2) significant decompression risk of O(2), and 3) increased risk of decompression sickness with heavier animals. New findings included asymmetrical gas exchange with gas washout often unexpectedly faster than uptake. Model success was demonstrated by the relatively small errors (and their random scatter) between model predictions and actual incidences. This mixed-gas model for prediction of decompression sickness in rats is the first such model for any animal species that covers such a broad range of gas mixtures and dive profiles.
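The paper's dose definition and risk function details are not reproduced in the abstract; a minimal sketch of fitting a Hill-equation dose-response to binary DCS outcomes by maximum likelihood, on synthetic data with an invented supersaturation-style dose:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)

    def p_dcs(dose, ed50, n):
        """Hill-equation probability of decompression sickness."""
        return dose**n / (ed50**n + dose**n)

    # Synthetic 'dives': dose is a hypothetical risk index, outcome is 0/1
    dose = rng.uniform(0.2, 3.0, size=2000)
    outcome = rng.random(2000) < p_dcs(dose, ed50=1.5, n=4.0)

    def nll(params):
        ed50, n = params
        p = np.clip(p_dcs(dose, ed50, n), 1e-9, 1 - 1e-9)
        return -np.sum(outcome * np.log(p) + (~outcome) * np.log(1 - p))

    fit = minimize(nll, x0=[1.0, 2.0], method="Nelder-Mead")
    print("ED50, Hill exponent:", fit.x.round(3))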
Effects of mixing in threshold models of social behavior
NASA Astrophysics Data System (ADS)
Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan
2013-07-01
We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
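The paper's full model has further details, but its update rule can be sketched directly: asynchronous threshold updates plus random exchange (mixing) moves on a small square lattice, with invented parameter values:

    import numpy as np

    rng = np.random.default_rng(6)

    n = 50                                   # lattice side
    state = (rng.random((n, n)) < 0.3).astype(int)
    thresh = rng.uniform(0.2, 0.8, (n, n))   # fixed individual thresholds
    p_mix = 0.1                              # probability a step is a swap

    for _ in range(200000):
        i, j = rng.integers(n, size=2)
        if rng.random() < p_mix:
            # mixing move: exchange positions with another random individual
            k, l = rng.integers(n, size=2)
            state[i, j], state[k, l] = state[k, l], state[i, j]
            thresh[i, j], thresh[k, l] = thresh[k, l], thresh[i, j]
        else:
            # threshold update from the four lattice neighbors (periodic)
            nb = (state[(i + 1) % n, j] + state[i - 1, j] +
                  state[i, (j + 1) % n] + state[i, j - 1]) / 4.0
            state[i, j] = int(nb >= thresh[i, j])

    print("final adoption fraction:", state.mean().round(3))

Raising p_mix pushes the dynamics toward the fast-mixing (mean-field) limit the authors analyze analytically.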
Modeling and diagnosing interface mix in layered ICF implosions
NASA Astrophysics Data System (ADS)
Weber, C. R.; Berzak Hopkins, L. F.; Clark, D. S.; Haan, S. W.; Ho, D. D.; Meezan, N. B.; Milovich, J. L.; Robey, H. F.; Smalyuk, V. A.; Thomas, C. A.
2015-11-01
Mixing at the fuel-ablator interface of an inertial confinement fusion (ICF) implosion can arise from an unfavorable in-flight Atwood number between the cryogenic DT fuel and the ablator. High-Z dopant is typically added to the ablator to control the Atwood number, but recent high-density carbon (HDC) capsules have been shot at the National Ignition Facility (NIF) without this added dopant. Highly resolved post-shot modeling of these implosions shows that there was significant mixing of ablator material into the dense DT fuel. This mix lowers the fuel density and results in less overall compression, helping to explain the measured ratio of downscattered-to-primary neutrons. Future experimental designs will seek to address this issue by adding dopant and changing the x-ray spectra with a different hohlraum wall material. To test these changes, we are designing an experimental platform to look at the growth of this mixing layer. This technique uses side-on radiography to measure the spatial extent of an embedded high-Z tracer layer near the interface. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
Study on system dynamics of evolutionary mix-game models
NASA Astrophysics Data System (ADS)
Gou, Chengling; Guo, Xiaoqian; Chen, Fang
2008-11-01
The mix-game model is adapted from the agent-based minority game (MG) model, which is used to simulate real financial markets. Unlike MG, mix-game contains two groups of agents: Group 1 plays a majority game and Group 2 plays a minority game. The two groups have different bounded abilities to process historical information and to track their own performance. In this paper, we modify the mix-game model by giving agents the ability to evolve: if an agent's winning rate falls below a threshold, it copies the best strategies held by another agent, and this evolution is repeated at fixed time intervals. Through simulations this paper finds: (1) the average winning rate of agents in Group 1 and the mean volatility increase as the threshold of Group 1 increases; (2) the average winning rates of both groups decrease, but the mean volatility of the system increases, as the threshold of Group 2 increases; (3) the threshold of Group 2 has a greater impact on system dynamics than the threshold of Group 1; (4) the system dynamics under different time intervals of strategy change are qualitatively similar but quantitatively different; and (5) as the time interval of strategy change increases from 1 to 20, the system becomes more and more stable and the performance of agents in both groups also improves.
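The paper's exact specification (including different memory lengths per group) is not reproduced in the abstract; a simplified, hypothetical sketch of a two-group majority/minority game with threshold-triggered strategy copying, all parameter values invented:

    import numpy as np

    rng = np.random.default_rng(7)

    N1, N2 = 100, 100        # Group 1 plays majority, Group 2 plays minority
    m, s = 3, 2              # memory bits and strategies per agent (invented)
    n_hist = 2 ** m
    N = N1 + N2
    strat = rng.integers(0, 2, (N, s, n_hist))  # strategy: history -> action
    scores = np.zeros((N, s))                   # virtual strategy scores
    wins = np.zeros(N)
    thr = np.r_[np.full(N1, 0.45), np.full(N2, 0.40)]  # invented thresholds
    interval, T = 50, 5000

    hist = int(rng.integers(n_hist))
    for t in range(1, T + 1):
        best = scores.argmax(axis=1)
        acts = strat[np.arange(N), best, hist]
        minority = int(2 * acts.sum() < N)      # action chosen by fewer agents
        target = np.r_[np.full(N1, 1 - minority), np.full(N2, minority)]
        scores += strat[:, :, hist] == target[:, None]
        wins += acts == target
        if t % interval == 0:                   # evolution step
            rate = wins / t
            losers = rate < thr
            strat[losers] = strat[rate.argmax()]  # copy best agent's strategies
            scores[losers] = scores[rate.argmax()]
        hist = ((hist << 1) | minority) % n_hist

    print("Group 1 mean win rate:", (wins[:N1] / T).mean().round(3))
    print("Group 2 mean win rate:", (wins[N1:] / T).mean().round(3))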
Miranda, Diogo Julien; Wen, Chao Lung
2017-07-18
Preliminary studies suggest the need for a global vision in academic reform, leading to a re-invention of education. This would include problem-based education using transversal topics and the development of thinking skills, social interaction, and information-processing skills. We aimed to develop a new educational model in health with modular components that could be broadcast and applied as a tele-education course. We developed a systematic model based on a "Skills and Goals Matrix" to adapt scientific content into fictional screenplays, three-dimensional (3D) computer graphics of the human body, and interactive documentaries. We selected 13 topics based on youth vulnerabilities in Brazil to be disseminated through a television show with 15 episodes. We developed scientific content for each theme, inserting it naturally into screenplays, together with 3D sequences and interactive documentaries. The modular structure was then adapted into a distance-learning course. The television show was broadcast on national television for two consecutive years to an estimated audience of 30 million homes, and ever since on an Internet Protocol Television (IPTV) channel. It was also reorganized as a tele-education course for 2 years, drawing 1,180 subscriptions from all 27 Brazilian states and resulting in 240 graduates. These positive results indicate the feasibility, acceptability, and effectiveness of a model of modular entertainment audio-visual productions integrating health and education concepts. This structure also allowed the model to be interconnected with other sources and applied as a tele-education course, educating, informing, and stimulating behavior change. Future work should reinforce this joint structure of telehealth, communication, and education.
LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data.
Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A
2011-01-01
Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses.
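LIMO EEG itself is a Matlab toolbox; as a minimal Python sketch of the hierarchical (two-level) linear-model idea it implements, on synthetic single-trial data: a first-level regression per subject, then a second-level test on the betas across subjects:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)

    n_subj, n_trials = 12, 80
    betas = []
    for _ in range(n_subj):
        # Level 1: single-trial amplitudes regressed on a trial predictor
        x = rng.normal(size=n_trials)          # e.g. stimulus intensity
        subj_slope = 0.5 + rng.normal(0, 0.2)  # true subject-level effect
        y = subj_slope * x + rng.normal(0, 1.0, n_trials)  # single-trial EEG
        X = np.column_stack([np.ones(n_trials), x])
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        betas.append(b[1])                     # keep the slope

    # Level 2: one-sample t-test on subject slopes against zero
    t, p = stats.ttest_1samp(betas, 0.0)
    print(f"group-level effect: t({n_subj - 1}) = {t:.2f}, p = {p:.4f}")

In the real toolbox this second level is run at every channel and time point, with robust statistics and multiple-comparison control.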
Juan, H F; Hung, C C; Wang, K T; Chiou, S H
1999-04-13
We present a systematic structure comparison of three major classes of postsynaptic snake toxins, comprising short- and long-chain alpha-type neurotoxins plus one angusticeps-type toxin of the black mamba snake family. Two novel alpha-type neurotoxins isolated from the Taiwan cobra (Naja naja atra), possessing distinct primary sequences and different postsynaptic neurotoxicities, were taken as exemplars of short- and long-chain neurotoxins and compared with the major lethal short-chain neurotoxin in the same venom, cobrotoxin, based on the three-dimensional solution structure of this toxin derived by NMR spectroscopy. A structure comparison among the two alpha-neurotoxins and the angusticeps-type toxin (denoted FS2) was carried out by secondary-structure prediction together with computer homology modeling based on multiple sequence alignment of their primary sequences and the established NMR structures of cobrotoxin and FS2. Interestingly, upon pairwise superposition of the modeled three-dimensional polypeptide chains, distinct differences in overall peptide flexibility and interior microenvironment between these toxins can be detected along the three constituting polypeptide loops, which may reflect intrinsic differences in the surface hydrophobicity of several hydrophobic peptide segments present on the surface loops of these toxin molecules, as revealed by hydropathy profiles. Construction of a phylogenetic tree for these structurally related and functionally distinct toxins corroborates that the long and short toxins present in diverse snake families are evolutionarily related, presumably derived from an ancestral polypeptide by gene duplication and subsequent mutational substitutions leading to the divergence of multiple three-loop toxin peptides.