Science.gov

Sample records for mixed graphical models

  1. Learning the Structure of Mixed Graphical Models

    PubMed Central

    Lee, Jason D.; Hastie, Trevor J.

    2014-01-01

    We consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, authors have considered structure learning of Gaussian graphical models and structure learning of discrete models. Our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parametrization of the model. Supplementary materials for this paper are available online. PMID:26085782

  2. Selection and estimation for mixed graphical models

    PubMed Central

Chen, Shizhe; Witten, Daniela M.; Shojaie, Ali

    2016-01-01

    Summary We consider the problem of estimating the parameters in a pairwise graphical model in which the distribution of each node, conditioned on the others, may have a different exponential family form. We identify restrictions on the parameter space required for the existence of a well-defined joint density, and establish the consistency of the neighbourhood selection approach for graph reconstruction in high dimensions when the true underlying graph is sparse. Motivated by our theoretical results, we investigate the selection of edges between nodes whose conditional distributions take different parametric forms, and show that efficiency can be gained if edge estimates obtained from the regressions of particular nodes are used to reconstruct the graph. These results are illustrated with examples of Gaussian, Bernoulli, Poisson and exponential distributions. Our theoretical findings are corroborated by evidence from simulation studies.
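
    As a concrete illustration of the neighbourhood selection idea described above, the sketch below runs node-wise least-squares regressions on a synthetic Gaussian chain and keeps an edge only when both endpoints select it (an AND rule). It is a minimal stdlib-Python sketch with hypothetical data and a hypothetical threshold, not the penalized high-dimensional estimator analysed in the paper:

```python
import random

random.seed(0)

# Synthetic chain x1 -> x2 -> x3: true edges are {0,1} and {1,2}, but not {0,2}.
n = 2000
data = []
for _ in range(n):
    x1 = random.gauss(0, 1)
    x2 = x1 + random.gauss(0, 1)
    x3 = x2 + random.gauss(0, 1)
    data.append([x1, x2, x3])

def ols(y_idx, x_idx, rows):
    """Least-squares coefficients of column y_idx on columns x_idx
    (no intercept; the variables are zero-mean by construction)."""
    p = len(x_idx)
    # Normal equations (X'X) b = X'y, solved by Gaussian elimination.
    xtx = [[sum(r[a] * r[b] for r in rows) for b in x_idx] for a in x_idx]
    xty = [sum(r[a] * r[y_idx] for r in rows) for a in x_idx]
    for i in range(p):
        piv = max(range(i, p), key=lambda k: abs(xtx[k][i]))
        xtx[i], xtx[piv] = xtx[piv], xtx[i]
        xty[i], xty[piv] = xty[piv], xty[i]
        for k in range(i + 1, p):
            f = xtx[k][i] / xtx[i][i]
            for j in range(i, p):
                xtx[k][j] -= f * xtx[i][j]
            xty[k] -= f * xty[i]
    b = [0.0] * p
    for i in reversed(range(p)):
        b[i] = (xty[i] - sum(xtx[i][j] * b[j] for j in range(i + 1, p))) / xtx[i][i]
    return b

# Node-wise regressions; an edge survives only if both directions select it.
thresh = 0.1  # hypothetical cutoff standing in for the sparsity penalty
selected = {}
for j in range(3):
    others = [k for k in range(3) if k != j]
    coef = ols(j, others, data)
    selected[j] = {k for k, c in zip(others, coef) if abs(c) > thresh}

edges = {frozenset((a, b)) for a in range(3) for b in selected[a] if a in selected[b]}
print(sorted(tuple(sorted(e)) for e in edges))
```

    With enough samples the node-wise regressions recover the chain edges and reject the spurious {0, 2} edge, which is the graph-reconstruction guarantee the paper establishes in far greater generality.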

  3. Mapping eQTL Networks with Mixed Graphical Markov Models

    PubMed Central

    Tur, Inma; Roverato, Alberto; Castelo, Robert

    2014-01-01

Expression quantitative trait loci (eQTL) mapping constitutes a challenging problem due to, among other reasons, the high-dimensional multivariate nature of gene-expression traits. Beyond the expression heterogeneity produced by confounding factors and other sources of unwanted variation, indirect effects spread throughout genes as a result of genetic, molecular, and environmental perturbations. From a multivariate perspective one would like to adjust for the effect of all of these factors to end up with a network of direct associations connecting the path from genotype to phenotype. In this article we approach this challenge with mixed graphical Markov models, higher-order conditional independences, and q-order correlation graphs. These models show that additive genetic effects propagate through the network as a function of gene–gene correlations. Our estimation of the eQTL network underlying a well-studied yeast data set leads to a sparse structure with more direct genetic and regulatory associations that enable a straightforward comparison of the genetic control of gene expression across chromosomes. Interestingly, it also reveals that eQTLs explain most of the expression variability of network hub genes. PMID:25271303

  4. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui)

    PubMed Central

    Magezi, David A.

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team). PMID:25657634

  5. Learning Graphical Models With Hubs

    PubMed Central

    Tan, Kean Ming; London, Palma; Mohan, Karthik; Lee, Su-In; Fazel, Maryam; Witten, Daniela

    2014-01-01

    We consider the problem of learning a high-dimensional graphical model in which there are a few hub nodes that are densely-connected to many other nodes. Many authors have studied the use of an ℓ1 penalty in order to learn a sparse graph in the high-dimensional setting. However, the ℓ1 penalty implicitly assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely-used probabilistic graphical models: the Gaussian graphical model, the covariance graph model, and the binary Ising model. An alternating direction method of multipliers algorithm is used to solve the corresponding convex optimization problems. On synthetic data, we demonstrate that our proposed framework outperforms competitors that do not explicitly model hub nodes. We illustrate our proposal on a webpage data set and a gene expression data set. PMID:25620891

  6. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267

  7. Representing Learning With Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence, for instance, in diagnosis and expert systems, as a unified qualitative and quantitative framework for representing and reasoning with probabilities and independencies. Their development and use spans several fields including artificial intelligence, decision theory and statistics, and provides an important bridge between these communities. This paper shows by way of example that these models can be extended to machine learning, neural networks and knowledge discovery by representing the notion of a sample on the graphical model. Not only does this allow a flexible variety of learning problems to be represented, it also provides the means for representing the goal of learning and opens the way for the automatic development of learning algorithms from specifications.

  8. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  9. Cavity approximation for graphical models.

    PubMed

    Rizzo, T; Wemmenhove, B; Kappen, H J

    2007-07-01

We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405

  10. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.

  11. Graphical Models via Univariate Exponential Family Distributions

    PubMed Central

    Yang, Eunho; Ravikumar, Pradeep; Allen, Genevera I.; Liu, Zhandong

    2016-01-01

    Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions. PMID:27570498
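
    A tiny Gibbs-sampling sketch makes the node-conditional construction concrete for the Poisson case: each node, given the other, is Poisson with log-rate linear in its neighbour, and the pairwise parameter must be negative for the joint to be normalizable (the same restriction noted in record 2 above). All numeric values are hypothetical:

```python
import math
import random

random.seed(3)

def poisson(lam):
    """Knuth's multiplicative method; fine for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Node-conditional Poisson pair: x_i | x_j ~ Poisson(exp(t_i + t12 * x_j)).
# t12 < 0 is required for a well-defined joint density.
t1, t2, t12 = 0.5, 0.5, -0.4
x1, x2 = 1, 1
draws = []
for sweep in range(20000):          # Gibbs sampling from the node conditionals
    x1 = poisson(math.exp(t1 + t12 * x2))
    x2 = poisson(math.exp(t2 + t12 * x1))
    if sweep > 1000:                # discard burn-in
        draws.append((x1, x2))

mean1 = sum(a for a, _ in draws) / len(draws)
mean2 = sum(b for _, b in draws) / len(draws)
cov = sum((a - mean1) * (b - mean2) for a, b in draws) / len(draws)
print(round(mean1, 2), round(mean2, 2), round(cov, 3))
```

    The sampled covariance comes out negative, reflecting that a Poisson graphical model of this form can only encode negative dependence between counts.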

  12. Detection of text strings from mixed text/graphics images

    NASA Astrophysics Data System (ADS)

    Tsai, Chien-Hua; Papachristou, Christos A.

    2000-12-01

A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region-growing) strategy, the algorithm classifies text apart from graphics and adapts to changes in document type, language category (e.g., English, Chinese, and Japanese), text font style and size, and text-string orientation within digital images. In addition, it tolerates the skew that commonly occurs in documents, without requiring skew correction prior to discrimination, a condition for which methods such as projection profiles or run-length coding are not always suitable. The method has been tested on a variety of printed documents from different origins with one common set of parameters, and experimental results on several test images demonstrate the computational efficiency of the algorithm.
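
    The union-find (region-growing) primitive behind this approach can be sketched directly: connected components are merged into strings when their bounding boxes are close. The boxes and proximity rule below are hypothetical illustrations, not the paper's actual grouping criteria:

```python
class DisjointSet:
    """Union-find with path compression: the grouping primitive of region growing."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

# Hypothetical connected-component boxes (x, y, w, h): three characters of one
# word close together, plus one distant graphic element.
boxes = [(0, 0, 8, 10), (10, 0, 8, 10), (20, 0, 8, 10), (100, 50, 40, 40)]

def close(b1, b2, gap=5):
    """True if the horizontal gap is small and the boxes overlap vertically."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = b1, b2
    hgap = max(x1, x2) - min(x1 + w1, x2 + w2)
    voverlap = min(y1 + h1, y2 + h2) - max(y1, y2)
    return hgap <= gap and voverlap > 0

ds = DisjointSet(len(boxes))
for i in range(len(boxes)):
    for j in range(i + 1, len(boxes)):
        if close(boxes[i], boxes[j]):
            ds.union(i, j)

groups = {}
for i in range(len(boxes)):
    groups.setdefault(ds.find(i), []).append(i)
print(sorted(groups.values()))   # characters cluster into a string; the graphic stays alone
```

    Because the closeness test is local and orientation thresholds can be relaxed, this style of grouping tolerates moderate skew without a global correction step, which is the property the abstract emphasizes.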

  13. Understanding human functioning using graphical models

    PubMed Central

    2010-01-01

Background Functioning and disability are universal human experiences. However, our current understanding of functioning from a comprehensive perspective is limited. The development of the International Classification of Functioning, Disability and Health (ICF) on the one hand and recent developments in graphical modeling on the other hand might be combined and open the door to a more comprehensive understanding of human functioning. The objective of our paper therefore is to explore how graphical models can be used in the study of ICF data for a range of applications. Methods We show the applicability of graphical models on ICF data for different tasks: visualization of the dependence structure of the data set, dimension reduction, and comparison of subpopulations. Moreover, we further developed and applied recent findings in causal inference using graphical models to estimate bounds on intervention effects in an observational study with many variables and without knowing the underlying causal structure. Results In each field, graphical models could be applied, giving results of high face-validity. In particular, graphical models could be used for visualization of functioning in patients with spinal cord injury. The resulting graph consisted of several connected components which can be used for dimension reduction. Moreover, we found that the differences in the dependence structures between subpopulations were relevant and could be systematically analyzed using graphical models. Finally, when estimating bounds on causal effects of ICF categories on general health perceptions among patients with chronic health conditions, we found that the five ICF categories that showed the strongest effect were plausible. Conclusions Graphical models are a flexible tool and lend themselves to a wide range of applications. In particular, studies involving ICF data seem to be suited for analysis using graphical models. PMID:20149230

  14. Operations for Learning with Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. These operations adapt existing techniques from statistics and automatic differentiation to graphs. Two standard algorithm schemes for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Some algorithms are developed in this graphical framework including a generalized version of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing some popular algorithms that fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms.

  15. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559

  16. Modelling structured data with Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Forbes, F.

    2016-05-01

Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph that is not necessarily regular like a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations are given, associated with practical work.
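
    The grid dependence structure described above can be made concrete with a Gibbs sampler on a small Ising-type MRF, where each pixel's label is resampled conditional on its grid neighbours. Grid size, coupling strength, and sweep count are hypothetical illustration values:

```python
import math
import random

random.seed(2)

H, W, beta = 8, 8, 0.9   # grid size and coupling strength (hypothetical)
spins = [[random.choice((-1, 1)) for _ in range(W)] for _ in range(H)]

def neighbours(i, j):
    """Yield the current labels of the 4-connected grid neighbours of (i, j)."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < H and 0 <= j + dj < W:
            yield spins[i + di][j + dj]

for _ in range(200):                                 # Gibbs sweeps
    for i in range(H):
        for j in range(W):
            field = beta * sum(neighbours(i, j))
            p_up = 1 / (1 + math.exp(-2 * field))    # P(spin = +1 | neighbours)
            spins[i][j] = 1 if random.random() < p_up else -1

# With strong coupling, neighbouring labels agree far more often than chance:
pairs = [(spins[i][j], s) for i in range(H) for j in range(W) for s in neighbours(i, j)]
agreement = sum(a == b for a, b in pairs) / len(pairs)
print(round(agreement, 2))
```

    In a hidden MRF used for segmentation, the same neighbour-dependent prior on labels is combined with a per-pixel likelihood for the observed intensities, so the smoothing behaviour demonstrated here acts as spatial regularization.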

  17. Graphical Models and Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Almond, Russell G.; Mislevy, Robert J.

    1999-01-01

    Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)

  18. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.

  19. Probabilistic graphical model representation in phylogenetics.

    PubMed

    Höhna, Sebastian; Heath, Tracy A; Boussau, Bastien; Landis, Michael J; Ronquist, Fredrik; Huelsenbeck, John P

    2014-09-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis-Hastings or Gibbs sampling of the posterior distribution. PMID:24951559

  20. Light reflection models for computer graphics.

    PubMed

    Greenberg, D P

    1989-04-14

    During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future. PMID:17835348

  1. Graphical Model Theory for Wireless Sensor Networks

    SciTech Connect

    Davis, William B.

    2002-12-08

Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including sensor validation and fusion; data compression and channel coding; expert systems with decentralized data structures and efficient local queries; and pattern classification and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
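
    The decentralization argument can be illustrated on the simplest junction tree, a chain: sum-product message passing computes exact marginals from purely local computations, matching brute-force enumeration. The potential tables below are hypothetical numbers chosen for illustration:

```python
from itertools import product

n = 4
node = [[1.0, 2.0], [2.0, 1.0], [1.5, 0.5], [1.0, 1.0]]   # psi_i(x_i), hypothetical
edge = [[1.0, 0.3], [0.3, 1.0]]                           # psi(x_i, x_{i+1}), shared

def forward_backward():
    """Exact marginals on a chain via local sum-product messages."""
    fwd = [[1.0, 1.0] for _ in range(n)]
    for i in range(1, n):
        for b in range(2):
            fwd[i][b] = sum(fwd[i-1][a] * node[i-1][a] * edge[a][b] for a in range(2))
    bwd = [[1.0, 1.0] for _ in range(n)]
    for i in range(n - 2, -1, -1):
        for a in range(2):
            bwd[i][a] = sum(edge[a][b] * node[i+1][b] * bwd[i+1][b] for b in range(2))
    marg = []
    for i in range(n):
        un = [fwd[i][x] * node[i][x] * bwd[i][x] for x in range(2)]
        z = sum(un)
        marg.append([u / z for u in un])
    return marg

def brute_force():
    """Reference marginals by summing over all joint configurations."""
    un = [[0.0, 0.0] for _ in range(n)]
    for xs in product((0, 1), repeat=n):
        w = 1.0
        for i in range(n):
            w *= node[i][xs[i]]
        for i in range(n - 1):
            w *= edge[xs[i]][xs[i + 1]]
        for i in range(n):
            un[i][xs[i]] += w
    return [[u / sum(row) for u in row] for row in un]

mp, bf = forward_backward(), brute_force()
ok = all(abs(a - b) < 1e-9 for r1, r2 in zip(mp, bf) for a, b in zip(r1, r2))
print(ok)
```

    Each message depends only on a node's immediate neighbour, which is what lets inference of this kind be distributed across sensors instead of centralized.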

  2. Operations on Graphical Models with Plates

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper explains how graphical models, for instance Bayesian or Markov networks, can be extended to model problems in data analysis and learning. This provides a unified framework that combines lessons learned from the artificial intelligence, statistical and connectionist communities. This also offers a set of principles for developing a software generator for data analysis, whereby a learning or discovery system can be compiled from specifications. Many of the popular learning algorithms can be compiled in this way from graphical specifications. While in a sense this paper is a multidisciplinary review of learning, the main contribution here is the presentation of the material within the unifying framework of graphical models, and the observation that, as a result, the process of developing learning algorithms can be partly automated.

  3. Data Analysis with Graphical Models: Software Tools

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  4. Joint estimation of multiple graphical models

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2011-01-01

    Summary Gaussian graphical models explore dependence relationships between random variables, through the estimation of the corresponding inverse covariance matrices. In this paper we develop an estimator for such models appropriate for data from several graphical models that share the same variables and some of the dependence structure. In this setting, estimating a single graphical model would mask the underlying heterogeneity, while estimating separate models for each category does not take advantage of the common structure. We propose a method that jointly estimates the graphical models corresponding to the different categories present in the data, aiming to preserve the common structure, while allowing for differences between the categories. This is achieved through a hierarchical penalty that targets the removal of common zeros in the inverse covariance matrices across categories. We establish the asymptotic consistency and sparsity of the proposed estimator in the high-dimensional case, and illustrate its performance on a number of simulated networks. An application to learning semantic connections between terms from webpages collected from computer science departments is included. PMID:23049124

  5. Planar graphical models which are easy

    SciTech Connect

    Chertkov, Michael; Chernyak, Vladimir

    2009-01-01

We describe a rich family of binary-variable statistical mechanics models on planar graphs which are equivalent to Gaussian Grassmann graphical models (free fermions). Calculation of the partition function (weighted counting) in these models is easy (of polynomial complexity), as it reduces to evaluating determinants of matrices linear in the number of variables. In particular, this family of models covers the holographic algorithms of Valiant and extends the gauge transformations discussed in our previous works.

  6. Software for Data Analysis with Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Roy, H. Scott

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  7. Item Screening in Graphical Loglinear Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Christensen, Karl Bang

    2011-01-01

    In behavioural sciences, local dependence and DIF are common, and purification procedures that eliminate items with these weaknesses often result in short scales with poor reliability. Graphical loglinear Rasch models (Kreiner & Christensen, in "Statistical Methods for Quality of Life Studies," ed. by M. Mesbah, F.C. Cole & M.T. Lee, Kluwer…

  8. Research on graphical workflow modeling tool

    NASA Astrophysics Data System (ADS)

    Gu, Hongjiu

    2013-07-01

Through a technical analysis of existing modeling tools, combined with Web technology, this paper presents the design of a graphical workflow modeling tool with which designers can draw processes directly in the browser; the tool automatically transforms the drawn process description into an XML description file, facilitating analysis by the workflow engine and barrier-free sharing of workflow data in a networked environment. The program offers software reusability, cross-platform support, scalability, and strong practicality.

  9. Planar graphical models which are easy

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Chertkov, Michael

    2010-11-01

We describe a rich family of binary-variable statistical mechanics models on a given planar graph which are equivalent to Gaussian Grassmann graphical models (free fermions) defined on the same graph. Calculation of the partition function (weighted counting) for such a model is easy (of polynomial complexity) as it is reducible to evaluation of a Pfaffian of a matrix of size equal to twice the number of edges in the graph. In particular, this approach touches upon holographic algorithms of Valiant and utilizes the gauge transformations discussed in our previous works.

  10. Graphics

    ERIC Educational Resources Information Center

    Post, Susan

    1975-01-01

    An art teacher described an elective course in graphics which was designed to enlarge a student's knowledge of value, color, shape within a shape, transparency, line and texture. This course utilized the technique of working a multi-colored print from a single block that was first introduced by Picasso. (Author/RK)

  11. The cluster graphical lasso for improved estimation of Gaussian graphical models

    PubMed Central

    Tan, Kean Ming; Witten, Daniela; Shojaie, Ali

    2015-01-01

    The task of estimating a Gaussian graphical model in the high-dimensional setting is considered. The graphical lasso, which involves maximizing the Gaussian log likelihood subject to a lasso penalty, is a well-studied approach for this task. A surprising connection between the graphical lasso and hierarchical clustering is introduced: the graphical lasso in effect performs a two-step procedure, in which (1) single linkage hierarchical clustering is performed on the variables in order to identify connected components, and then (2) a penalized log likelihood is maximized on the subset of variables within each connected component. Thus, the graphical lasso determines the connected components of the estimated network via single linkage clustering. Single linkage clustering is known to perform poorly in certain finite-sample settings. Therefore, the cluster graphical lasso, which involves clustering the features using an alternative to single linkage clustering, and then performing the graphical lasso on the subset of variables within each cluster, is proposed. Model selection consistency for this technique is established, and its improved performance relative to the graphical lasso is demonstrated in a simulation study, as well as in applications to university webpage and gene expression data sets. PMID:25642008
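The screening step behind this connection can be sketched as follows: thresholding the absolute sample covariance at the penalty level and taking connected components recovers exactly the blocks on which the (penalized) likelihood problem decouples. The toy data and the helper below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical toy data: two independent blocks of three correlated features.
rng = np.random.default_rng(1)
z1 = rng.standard_normal((200, 1))
z2 = rng.standard_normal((200, 1))
X = np.hstack([z1 + 0.1 * rng.standard_normal((200, 3)),
               z2 + 0.1 * rng.standard_normal((200, 3))])
S = np.cov(X, rowvar=False)

def screen_components(S, lam):
    """Connected components of the graph with an edge wherever |S_ij| > lam.
    These are the connected components of the graphical lasso solution at
    penalty lam, and they coincide with cutting a single-linkage dendrogram
    built on the similarities |S_ij| at the corresponding height."""
    p = S.shape[0]
    adj = (np.abs(S) > lam) & ~np.eye(p, dtype=bool)
    labels = [-1] * p
    comp = 0
    for start in range(p):
        if labels[start] != -1:
            continue
        stack = [start]
        while stack:  # depth-first search over the thresholded graph
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = comp
            stack.extend(np.flatnonzero(adj[i]))
        comp += 1
    return comp, labels

n_comp, labels = screen_components(S, lam=0.5)
# The graphical lasso (or, per the paper, an alternative clustering) can now
# be applied separately within each block instead of to all features jointly.
```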

  12. Connections between Graphical Gaussian Models and Factor Analysis

    ERIC Educational Resources Information Center

    Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.

    2010-01-01

    Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…

  13. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
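One common way to provide such "instant experience" is a lineup: hide the observed residual plot among panels simulated from the fitted model, so the reader learns what plots from a correct model actually look like. The sketch below assumes this lineup-style protocol and invents the data; it is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, 50)  # data that satisfy the model

# Fit y = b0 + b1 * x by least squares and extract residuals.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma = resid.std(ddof=2)

# Bundle the observed residuals with null residual sets simulated from the
# fitted model; if a reader cannot pick out the real panel, the plot carries
# no evidence against the model assumptions.
panels = [rng.normal(0.0, sigma, x.size) for _ in range(8)] + [resid]
order = rng.permutation(len(panels))  # display panels in random order
```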

  14. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  15. Mixed Methods Analysis and Information Visualization: Graphical Display for Effective Communication of Research Results

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Dickinson, Wendy B.

    2008-01-01

    In this paper, we introduce various graphical methods that can be used to represent data in mixed research. First, we present a broad taxonomy of visual representation. Next, we use this taxonomy to provide an overview of visual techniques for quantitative data display and qualitative data display. Then, we propose what we call "crossover" visual…

  16. A Guide to the Literature on Learning Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Friedland, Peter (Technical Monitor)

    1994-01-01

    This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and more generally, learning probabilistic graphical models. Because many problems in artificial intelligence, statistics and neural networks can be represented as a probabilistic graphical model, this area provides a unifying perspective on learning. This paper organizes the research in this area along methodological lines of increasing complexity.

  17. Interactive graphical model building using telepresence and virtual reality

    SciTech Connect

    Cooke, C.; Stansfield, S.

    1993-10-01

    This paper presents a prototype system developed at Sandia National Laboratories to create and verify computer-generated graphical models of remote physical environments. The goal of the system is to create an interface between an operator and a computer vision system so that graphical models can be created interactively. Virtual reality and telepresence are used to allow interaction between the operator, computer, and remote environment. A stereo view of the remote environment is produced by two CCD cameras. The cameras are mounted on a three degree-of-freedom platform which is slaved to a mechanically-tracked, stereoscopic viewing device. This gives the operator a sense of immersion in the physical environment. The stereo video is enhanced by overlaying the graphical model onto it. Overlay of the graphical model onto the stereo video allows visual verification of graphical models. Creation of a graphical model is accomplished by allowing the operator to assist the computer in modeling. The operator controls a 3-D cursor to mark objects to be modeled. The computer then automatically extracts positional and geometric information about the object and creates the graphical model.

  18. Retrospective Study on Mathematical Modeling Based on Computer Graphic Processing

    NASA Astrophysics Data System (ADS)

    Zhang, Kai Li

    Graphics and image making is an important field of computer application, in which visualization software has been widely used for its convenience and speed. However, modeling designers have regarded such software as limited in function and flexibility because it provides no mathematical modeling platform. Non-visualization graphics software gives graphics and image design a good mathematical modeling platform. In this paper, a polished pyramid is constructed with a multivariate spline function algorithm, validating that non-visualization software is well suited to mathematical modeling.

  19. Robust Gaussian Graphical Modeling via l1 Penalization

    PubMed Central

    Sun, Hokeun; Li, Hongzhe

    2012-01-01

    Summary Gaussian graphical models have been widely used as an effective method for studying the conditional independency structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1-penalized estimation procedure for sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how far each observation deviates, where the deviation is measured by the observation's own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified likelihood, where nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical lasso for Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical lasso. PMID:23020775

  20. A probabilistic graphical model based stochastic input model construction

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2014-09-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media.

  1. Progress in mix modeling

    SciTech Connect

    Harrison, A.K.

    1997-03-14

    We have identified the Cranfill multifluid turbulence model (Cranfill, 1992) as a starting point for development of subgrid models of instability, turbulent and mixing processes. We have differenced the closed system of equations in conservation form, and coded them in the object-oriented hydrodynamics code FLAG, which is to be used as a testbed for such models.

  2. Mixed Markov models

    PubMed Central

    Fridman, Arthur

    2003-01-01

    Markov random fields can encode complex probabilistic relationships involving multiple variables and admit efficient procedures for probabilistic inference. However, from a knowledge engineering point of view, these models suffer from a serious limitation. The graph of a Markov field must connect all pairs of variables that are conditionally dependent even for a single choice of values of the other variables. This makes it hard to encode interactions that occur only in a certain context and are absent in all others. Furthermore, the requirement that two variables be connected unless always conditionally independent may lead to excessively dense graphs, obscuring the independencies present among the variables and leading to computationally prohibitive inference algorithms. Mumford [Mumford, D. (1996) in ICIAM 95, eds. Kirchgassner, K., Marenholtz, O. & Mennicken, R. (Akademie Verlag, Berlin), pp. 233–256] proposed an alternative modeling framework where the graph need not be rigid and completely determined a priori. Mixed Markov models contain node-valued random variables that, when instantiated, augment the graph by a set of transient edges. A single joint probability distribution relates the values of regular and node-valued variables. In this article, we study the analytical and computational properties of mixed Markov models. In particular, we show that positive mixed models have a local Markov property that is equivalent to their global factorization. We also describe a computationally efficient procedure for answering probabilistic queries in mixed Markov models. PMID:12829802

  3. ADVANCED MIXING MODELS

    SciTech Connect

    Lee, S; Dimenna, R; Tamburello, D

    2011-02-14

    Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (e.g. typically approximately 13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators to answer the questions concerning sludge mixing stated above in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?

  4. Integrating Surface Modeling into the Engineering Design Graphics Curriculum

    ERIC Educational Resources Information Center

    Hartman, Nathan W.

    2006-01-01

    It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…

  5. Transient thermoregulatory model with graphics output

    NASA Technical Reports Server (NTRS)

    Grounds, D. J.

    1974-01-01

    A user's guide is presented for the transient version of the thermoregulatory model. The model is designed to simulate the transient response of the human thermoregulatory system to thermal inputs. The model consists of 41 compartments over which the terms of the heat balance are computed. The control mechanisms which are identified are sweating, vaso-constriction and vasodilation.

  6. Mining functional modules in genetic networks with decomposable graphical models.

    PubMed

    Dejori, Mathäus; Schwaighofer, Anton; Tresp, Volker; Stetter, Martin

    2004-01-01

    In recent years, graphical models have become an increasingly important tool for the structural analysis of genome-wide expression profiles at the systems level. Here we present a new graphical modelling technique, which is based on decomposable graphical models, and apply it to a set of gene expression profiles from acute lymphoblastic leukemia (ALL). The new method explains probabilistic dependencies of expression levels in terms of the concerted action of underlying genetic functional modules, which are represented as so-called "cliques" in the graph. In addition, the method uses continuous-valued (instead of discretized) expression levels, and makes no particular assumption about their probability distribution. We show that the method successfully groups members of known functional modules to cliques. Our method allows the evaluation of the importance of genes for global cellular functions based on both link count and the clique membership count. PMID:15268775

  7. Teaching Geometry through Dynamic Modeling in Introductory Engineering Graphics.

    ERIC Educational Resources Information Center

    Wiebe, Eric N.; Branoff, Ted J.; Hartman, Nathan W.

    2003-01-01

    Examines how constraint-based 3D modeling can be used as a vehicle for rethinking instructional approaches to engineering design graphics. Focuses on moving from a mode of instruction based on the crafting by students and assessment by instructors of static 2D drawings and 3D models. Suggests that the new approach is better aligned with…

  8. ADVANCED MIXING MODELS

    SciTech Connect

    Lee, S; Richard Dimenna, R; David Tamburello, D

    2008-11-13

    The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times and high maintenance and repair costs for the same reason. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. 
If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and…

  9. Color mixing models

    NASA Astrophysics Data System (ADS)

    Harrington, Steven J.

    1992-05-01

    In black-and-white printing the page image can be represented within a computer as an array of binary values indicating whether or not pixels should be inked. The Boolean operators of AND, OR, and EXCLUSIVE-OR are often used when adding new objects to the image array. For color printing the page may be represented as an array of continuous tone color values, and the generalization of these logic functions to gray-scale or full-color images is, in general, not defined or understood. When incrementally composing a page image new colors can replace old in an image buffer, or new colors and old can be combined according to some mixing function to form a composite color which is stored. This paper examines the properties of the Boolean operations and suggests full-color mixing functions which preserve the desired properties. These functions can be used to combine colored images, giving various transparency effects. The relationships between the mixing functions and physical models of color mixing are also discussed.
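One family of continuous mixing functions with the properties the abstract describes (reducing to the Boolean operators on the values 0 and 1, staying within range, and applying per channel) can be sketched as follows; these particular formulas are a common choice in imaging and are not necessarily the ones the paper proposes:

```python
def mix_and(a, b):
    """Continuous AND: multiplication; agrees with Boolean AND on {0, 1}."""
    return a * b

def mix_or(a, b):
    """Continuous OR ("screen" in imaging): a + b - a*b; Boolean OR on {0, 1}."""
    return a + b - a * b

def mix_xor(a, b):
    """Continuous XOR: a + b - 2*a*b; Boolean XOR on {0, 1}."""
    return a + b - 2 * a * b

def mix_rgb(fn, c1, c2):
    """Apply a per-channel mixing function to two RGB colours in [0, 1]^3."""
    return tuple(fn(a, b) for a, b in zip(c1, c2))

# Combining an orange with a light blue using the OR-like screen function.
overlay = mix_rgb(mix_or, (1.0, 0.5, 0.0), (0.0, 0.5, 1.0))
```

Because each function maps [0, 1] x [0, 1] into [0, 1], the composite colour is always printable without clipping.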

  10. MAGIC: Model and Graphic Information Converter

    NASA Technical Reports Server (NTRS)

    Herbert, W. C.

    2009-01-01

    MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.

  11. Graphical Approach to Model Reduction for Nonlinear Biochemical Networks

    PubMed Central

    Holland, David O.; Krainak, Nicholas C.; Saucerman, Jeffrey J.

    2011-01-01

    Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a “concentration-clamp” procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β1-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal “kinetic biomarkers” of the overall β1-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems. PMID:21901136

  12. Workflow modeling in the graphic arts and printing industry

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris

    2003-12-01

    The last few years, a lot of effort has been spent on the standardization of the workflow in the graphic arts and printing industry. The main reasons for this standardization are two-fold: first of all, the need to represent all aspects of products, processes and resources in a uniform, digital framework and, secondly, the need to have different systems communicate with each other without having to implement dedicated drivers or protocols. Since many years, a number of organizations in the IT sector have been quite busy developing models and languages on the topic of workflow modeling. In addition to the more formal methods (such as, e.g., extended finite state machines, Petri Nets, Markov Chains etc.) introduced a number of decades ago, more pragmatic methods have been proposed quite recently. We hereby think in particular of the activities of the Workflow Management Coalition that resulted in an XML based Process Definition Language. Although one might be tempted to use the already established standards in the graphic environment, one should be well aware of the complexity and uniqueness of the graphic arts workflow. In this paper, we will show that it is quite hard though not impossible to model the graphic arts workflow using the already established workflow systems. After a brief summary of the graphic arts workflow requirements, we will show why the traditional models are less suitable to use. It will turn out that one of the main reasons for the incompatibility is that the graphic arts workflow is primarily resource driven; this means that the activation of processes depends on the status of different incoming resources. The fact that processes can start running with a partial availability of the input resources is a further complication that asks for additional knowledge on process level. In the second part of this paper, we will discuss in more detail the different software components that are available in any graphic enterprise. 
In the last part, we will…

  13. A PC-based graphical simulator for physiological pharmacokinetic models.

    PubMed

    Wada, D R; Stanski, D R; Ebling, W F

    1995-04-01

    Since many intravenous anesthetic drugs alter blood flows, physiologically-based pharmacokinetic models describing drug disposition may be time-varying. Using the commercially available programming software MATLAB, a platform to simulate time-varying physiological pharmacokinetic models was developed. The platform is based upon a library of pharmacokinetic blocks which mimic physiological structure. The blocks can be linked together flexibly to form models for different drugs. Because of MATLAB's additional numerical capabilities (e.g. non-linear optimization), the platform provides a complete graphical microcomputer-based tool for physiologic pharmacokinetic modeling. PMID:7656558

  14. Probabilistic graphic models applied to identification of diseases.

    PubMed

    Sato, Renato Cesar; Sato, Graziela Tiemy Kajita

    2015-01-01

    Decision-making is fundamental when making diagnosis or choosing treatment. The broad dissemination of computed systems and databases allows systematization of part of decisions through artificial intelligence. In this text, we present basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used to make diagnosis of Alzheimer's disease, sleep apnea and heart diseases. PMID:26154555

  15. Probabilistic graphic models applied to identification of diseases

    PubMed Central

    Sato, Renato Cesar; Sato, Graziela Tiemy Kajita

    2015-01-01

    ABSTRACT Decision-making is fundamental when making diagnosis or choosing treatment. The broad dissemination of computed systems and databases allows systematization of part of decisions through artificial intelligence. In this text, we present basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used to make diagnosis of Alzheimer's disease, sleep apnea and heart diseases. PMID:26154555

  16. Mixed additive models

    NASA Astrophysics Data System (ADS)

    Carvalho, Francisco; Covas, Ricardo

    2016-06-01

    We consider mixed models $y=\sum_{i=0}^{w}X_i\beta_i$ with $V(y)=\sum_{i=1}^{w}\theta_i M_i$, where $M_i=X_iX_i^\top$, $i=1,\dots,w$, and $\mu=X_0\beta_0$. For these we estimate the variance components $\theta_1,\dots,\theta_w$, as well as estimable vectors, through the decomposition of the initial model into sub-models $y(h)$, $h\in\Gamma$, with $V(y(h))=\gamma(h)I_{g(h)}$, $h\in\Gamma$. Moreover, we consider L extensions of these models, i.e., $\mathring{y}=Ly+\epsilon$, where $L=D(1_{n_1},\dots,1_{n_w})$ and $\epsilon$, independent of $y$, has null mean vector and variance-covariance matrix $\theta_{w+1}I_n$, where $n=\sum_{i=1}^{w}n_i$.

  17. Graphical models of residue coupling in protein families.

    PubMed

    Thomas, John; Ramakrishnan, Naren; Bailey-Kellogg, Chris

    2008-01-01

    Many statistical measures and algorithmic techniques have been proposed for studying residue coupling in protein families. Generally speaking, two residue positions are considered coupled if, in the sequence record, some of their amino acid type combinations are significantly more common than others. While the proposed approaches have proven useful in finding and describing coupling, a significant missing component is a formal probabilistic model that explicates and compactly represents the coupling, integrates information about sequence, structure, and function, and supports inferential procedures for analysis, diagnosis, and prediction. We present an approach to learning and using probabilistic graphical models of residue coupling. These models capture significant conservation and coupling constraints observable in a multiply-aligned set of sequences. Our approach can place a structural prior on considered couplings, so that all identified relationships have direct mechanistic explanations. It can also incorporate information about functional classes, and thereby learn a differential graphical model that distinguishes constraints common to all classes from those unique to individual classes. Such differential models separately account for class-specific conservation and family-wide coupling, two different sources of sequence covariation. They are then able to perform interpretable functional classification of new sequences, explaining classification decisions in terms of the underlying conservation and coupling constraints. We apply our approach in studies of both G protein-coupled receptors and PDZ domains, identifying and analyzing family-wide and class-specific constraints, and performing functional classification. The results demonstrate that graphical models of residue coupling provide a powerful tool for uncovering, representing, and utilizing significant sequence structure-function relationships in protein families. PMID:18451428
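Mutual information between alignment columns is one of the simple statistical coupling measures alluded to above (the paper's graphical-model approach goes well beyond it). A sketch on a hypothetical toy alignment:

```python
from collections import Counter
from math import log2

def column_mi(col_a, col_b):
    """Mutual information (bits) between two alignment columns, each a
    sequence of single-letter residue codes."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum((c / n) * log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

# Hypothetical toy alignment: columns 0 and 1 co-vary perfectly (A pairs
# with L, V pairs with I), while column 2 is fully conserved and therefore
# uncoupled from everything.
alignment = ["ALG", "ALG", "VIG", "VIG", "ALG", "VIG"]
cols = list(zip(*alignment))
mi_coupled = column_mi(cols[0], cols[1])
mi_uncoupled = column_mi(cols[0], cols[2])
```

Note that, unlike the graphical models in the abstract, pairwise MI cannot distinguish direct coupling from coupling transmitted through intermediate positions.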

  18. SN_GUI: a graphical user interface for snowpack modeling

    NASA Astrophysics Data System (ADS)

    Spreitzhofer, G.; Fierz, C.; Lehning, M.

    2004-10-01

    SNOWPACK is a physical snow cover model. The model not only serves as a valuable research tool, but also runs operationally on a network of high Alpine automatic weather and snow measurement sites. In order to facilitate the operation of SNOWPACK and the interpretation of the results obtained by this model, a user-friendly graphical user interface for snowpack modeling, named SN_GUI, was created. This Java-based and thus platform-independent tool can be operated in two modes, one designed to fulfill the requirements of avalanche warning services (e.g. by providing information about critical layers within the snowpack that are closely related to the avalanche activity), and the other one offering a variety of additional options satisfying the needs of researchers. The user of SN_GUI is graphically guided through the entire process of creating snow cover simulations. The starting point is the efficient creation of input parameter files for SNOWPACK, followed by the launching of SNOWPACK with a variety of parameter settings. Finally, after the successful termination of the run, a number of interactive display options may be used to visualize the model output. Among these are vertical profiles and time profiles for many parameters. Besides other features, SN_GUI allows the use of various color, time and coordinate scales, and the comparison of measured and observed parameters.

  19. Collaborative multi organ segmentation by integrating deformable and graphical models.

    PubMed

    Uzunbaş, Mustafa Gökhan; Chen, Chao; Zhang, Shaoting; Poh, Kilian M; Li, Kang; Metaxas, Dimitris

    2013-01-01

    Organ segmentation is a challenging problem on which significant progress has been made. Deformable models (DM) and graphical models (GM) are two important categories of optimization based image segmentation methods. Efforts have been made on integrating two types of models into one framework. However, previous methods are not designed for segmenting multiple organs simultaneously and accurately. In this paper, we propose a hybrid multi organ segmentation approach by integrating DM and GM in a coupled optimization framework. Specifically, we show that region-based deformable models can be integrated with Markov Random Fields (MRF), such that multiple models' evolutions are driven by a maximum a posteriori (MAP) inference. It brings global and local deformation constraints into a unified framework for simultaneous segmentation of multiple objects in an image. We validate this proposed method on two challenging problems of multi organ segmentation, and the results are promising. PMID:24579136

  20. Developing satellite ground control software through graphical models

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney; Henderson, Scott; Paterra, Frank; Truszkowski, Walt

    1992-01-01

    This paper discusses a program of investigation into software development as graphical modeling. The goal of this work is a more efficient development and maintenance process for the ground-based software that controls unmanned scientific satellites launched by NASA. The main hypothesis of the program is that modeling of the spacecraft and its subsystems, and reasoning about such models, can--and should--form the key activities of software development; by using such models as inputs, the generation of code to perform various functions (such as simulation and diagnostics of spacecraft components) can be automated. Moreover, we contend that automation can provide significant support for reasoning about the software system at the diagram level.

  1. Ice-sheet modelling accelerated by graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.

  2. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  3. Groundwater modeling and remedial optimization design using graphical user interfaces

    SciTech Connect

    Deschaine, L.M.

    1997-05-01

    The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to communicate the information effectively to the client and regulators, is what differentiates effective consultants from ineffective ones. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. GUIs facilitate both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.

  4. Mode Estimation for High Dimensional Discrete Tree Graphical Models

    PubMed Central

    Chen, Chao; Liu, Han; Metaxas, Dimitris N.; Zhao, Tianqi

    2014-01-01

    This paper studies the following problem: given samples from a high dimensional discrete distribution, we want to estimate the leading (δ, ρ)-modes of the underlying distribution. A point is defined to be a (δ, ρ)-mode if it is a local optimum of the density within a δ-neighborhood under metric ρ. As we increase the “scale” parameter δ, the neighborhood size increases and the total number of modes monotonically decreases. The sequence of the (δ, ρ)-modes reveals intrinsic topographical information about the underlying distribution. Though the mode finding problem is generally intractable in high dimensions, this paper shows that, if the distribution can be approximated well by a tree graphical model, mode characterization is significantly easier. An efficient algorithm with provable theoretical guarantees is proposed and is applied to tasks such as data analysis and multiple predictions. PMID:25620859
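
    The tractability claim above can be illustrated on the simplest tree, a chain: the exact global mode (MAP configuration) of a chain-structured MRF is recovered by max-product dynamic programming in time linear in the chain length. A minimal sketch, not the paper's algorithm; the function and variable names are ours:

```python
import numpy as np

def chain_mode(unary, pair):
    """Exact mode (MAP configuration) of a chain MRF via max-product
    dynamic programming. `unary[i]` is a length-K array of log-potentials
    for node i; `pair[i]` is a KxK array of log-potentials for edge (i, i+1)."""
    msg = unary[0].copy()
    back = []
    for i in range(1, len(unary)):
        score = msg[:, None] + pair[i - 1]     # score[k_prev, k]
        back.append(np.argmax(score, axis=0))  # best predecessor for each state
        msg = score.max(axis=0) + unary[i]
    states = [int(np.argmax(msg))]
    for bp in reversed(back):                  # backtrack to recover the mode
        states.append(int(bp[states[-1]]))
    return list(reversed(states))
```

    On a general tree the same two-pass message scheme applies with messages sent towards a root, in contrast to the exponential cost of enumerating all configurations.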

  5. De novo protein conformational sampling using a probabilistic graphical model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-11-01

    Efficient exploration of protein conformational space remains challenging especially for large proteins when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.

  6. De novo protein conformational sampling using a probabilistic graphical model

    PubMed Central

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-01-01

    Efficient exploration of protein conformational space remains challenging especially for large proteins when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/. PMID:26541939

  7. Kinematic modelling of disc galaxies using graphics processing units

    NASA Astrophysics Data System (ADS)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
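
    The "naive brute-force approach based on nested grids" mentioned above can be sketched as follows: evaluate the objective on a coarse grid, then repeatedly re-grid a shrinking box around the current best point. A minimal serial illustration under our own naming, not code from GBKFIT:

```python
import numpy as np

def nested_grid_minimize(f, lo, hi, points=11, levels=4):
    """Minimize f over the box [lo, hi]^d by repeatedly refining
    a regular grid around the best point found so far."""
    lo = np.asarray(lo, float)
    hi = np.asarray(hi, float)
    for _ in range(levels):
        axes = [np.linspace(l, h, points) for l, h in zip(lo, hi)]
        grids = np.meshgrid(*axes, indexing="ij")
        pts = np.stack([g.ravel() for g in grids], axis=1)
        vals = np.array([f(p) for p in pts])
        best = pts[np.argmin(vals)]
        span = (hi - lo) / (points - 1)
        lo, hi = best - span, best + span  # shrink the box around the best point
    return best
```

    Each level shrinks the grid spacing by roughly a factor of (points - 1) / 2, and the grid evaluations at a given level are independent, which is what makes the scheme attractive for GPU parallelization.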

  8. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) to decrease the simulation time significantly [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the Graphical Hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References [Juez et al.(2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources. 71 93-109. [Juez et al.(2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013) 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics. 225 166-204. [Lacasta et al.(2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes Advances in Engineering Software. 78 1-15.

  9. Accelerating compartmental modeling on a graphical processing unit.

    PubMed

    Ben-Shalom, Roy; Liberman, Gilad; Korngreen, Alon

    2013-01-01

    Compartmental modeling is a widely used tool in neurophysiology but the detail and scope of such models is frequently limited by lack of computational resources. Here we implement compartmental modeling on low cost Graphical Processing Units (GPUs), which significantly increases simulation speed compared to NEURON. Testing two methods for solving the current diffusion equation system revealed which method is more useful for specific neuron morphologies. Regions of applicability were investigated using a range of simulations from a single membrane potential trace simulated in a simple fork morphology to multiple traces on multiple realistic cells. A peak runtime 150-fold faster than the CPU was achieved. This application can be used for statistical analysis and data fitting optimizations of compartmental models and may be used for simultaneously simulating large populations of neurons. Since GPUs are forging ahead and proving to be more cost-effective than CPUs, this may significantly decrease the cost of computation power and open new computational possibilities for laboratories with limited budgets. PMID:23508232
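
    For readers unfamiliar with the technique, compartmental modeling reduces a neuron to coupled ODEs, one membrane equation per compartment, linked by axial conductances. A minimal passive two-compartment sketch with illustrative parameter values (this is not the paper's GPU solver, which handles full morphologies and implicit schemes):

```python
import numpy as np

# Passive two-compartment neuron:
#   C dV_i/dt = -g_leak*(V_i - E_leak) + g_ax*(V_j - V_i) + I_inj_i
C, g_leak, E_leak, g_ax = 1.0, 0.1, -65.0, 0.5  # nF, uS, mV, uS (illustrative)
dt, steps = 0.025, 8000                          # ms; 200 ms of simulated time
V = np.array([-65.0, -65.0])                     # both compartments start at rest
I_inj = np.array([0.1, 0.0])                     # constant current into compartment 0
for _ in range(steps):
    axial = g_ax * (V[::-1] - V)                 # coupling current between compartments
    dV = (-g_leak * (V - E_leak) + axial + I_inj) / C
    V = V + dt * dV                              # forward-Euler step
```

    With constant input the solution settles to the steady state of the coupled linear system: both compartments end up slightly depolarized, the injected one more so. Real simulators favor implicit methods for stiff branched morphologies; forward Euler is used here only to keep the sketch short.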

  10. Dynamics of Mental Model Construction from Text and Graphics

    ERIC Educational Resources Information Center

    Hochpöchler, Ulrike; Schnotz, Wolfgang; Rasch, Thorsten; Ullrich, Mark; Horz, Holger; McElvany, Nele; Baumert, Jürgen

    2013-01-01

    When students read for learning, they frequently are required to integrate text and graphics information into coherent knowledge structures. The following study aimed at analyzing how students deal with texts and how they deal with graphics when they try to integrate the two sources of information. Furthermore, the study investigated differences…

  11. An Arabidopsis gene network based on the graphical Gaussian model

    PubMed Central

    Ma, Shisong; Gong, Qingqiu; Bohnert, Hans J.

    2007-01-01

    We describe a gene network for the Arabidopsis thaliana transcriptome based on a modified graphical Gaussian model (GGM). Through partial correlation (pcor), GGM infers coregulation patterns between gene pairs conditional on the behavior of other genes. Regularized GGM calculated pcor between gene pairs among ∼2000 input genes at a time. Regularized GGM coupled with iterative random samplings of genes was expanded into a network that covered the Arabidopsis genome (22,266 genes). This resulted in a network of 18,625 interactions (edges) among 6760 genes (nodes) with high confidence and connections representing ∼0.01% of all possible edges. When queried for selected genes, locally coherent subnetworks, mainly related to metabolic functions and stress responses, emerged. Examples of networks for biochemical pathways, cell wall metabolism, and cold responses are presented. GGM displayed known coregulation pathways as subnetworks and added novel components to known edges. Finally, the network reconciled individual subnetworks in a topology joined at the whole-genome level and provided a general framework that can instruct future studies on plant metabolism and stress responses. The network model is included. PMID:17921353
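
    The partial correlations that a GGM relies on can be read directly off the inverse covariance (precision) matrix. A small self-contained sketch of the computation on a toy three-variable chain, not Arabidopsis data:

```python
import numpy as np

def partial_correlations(S):
    """Partial correlation matrix from a covariance matrix S:
    pcor_ij = -P_ij / sqrt(P_ii * P_jj), where P = inv(S) is the precision."""
    P = np.linalg.inv(S)
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Covariance of the chain X -> Y -> Z with unit-variance noise at each step
S = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 2.0],
              [1.0, 2.0, 3.0]])
pc = partial_correlations(S)
```

    Here pc[0, 2] vanishes: X and Z are conditionally independent given Y, so a GGM drops the indirect X-Z edge that a marginal-correlation network would keep.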

  12. Counterfactual graphical models for longitudinal mediation analysis with unobserved confounding.

    PubMed

    Shpitser, Ilya

    2013-08-01

    Questions concerning mediated causal effects are of great interest in psychology, cognitive science, medicine, social science, public health, and many other disciplines. For instance, about 60% of recent papers published in leading journals in social psychology contain at least one mediation test (Rucker, Preacher, Tormala, & Petty, 2011). Standard parametric approaches to mediation analysis employ regression models, and either the "difference method" (Judd & Kenny, 1981), more common in epidemiology, or the "product method" (Baron & Kenny, 1986), more common in the social sciences. In this article, we first discuss a known, but perhaps often unappreciated, fact that these parametric approaches are a special case of a general counterfactual framework for reasoning about causality first described by Neyman (1923) and Rubin (1974) and linked to causal graphical models by Robins (1986) and Pearl (2006). We then show a number of advantages of this framework. First, it makes the strong assumptions underlying mediation analysis explicit. Second, it avoids a number of problems present in the product and difference methods, such as biased estimates of effects in certain cases. Finally, we show the generality of this framework by proving a novel result which allows mediation analysis to be applied to longitudinal settings with unobserved confounders. PMID:23899340
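
    The "product method" mentioned above multiplies two regression coefficients: a (treatment → mediator) and b (mediator → outcome, adjusting for treatment). A hedged numerical illustration on synthetic data, where the true indirect effect is 0.8 × 0.5 = 0.4:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50000
x = rng.normal(size=n)                        # treatment
m = 0.8 * x + rng.normal(size=n)              # mediator: a-path = 0.8
y = 0.5 * m + 0.3 * x + rng.normal(size=n)    # outcome: b-path = 0.5, direct = 0.3

a = np.polyfit(x, m, 1)[0]                    # regress M on X
Xmat = np.column_stack([m, x, np.ones(n)])    # regress Y on M and X jointly
b = np.linalg.lstsq(Xmat, y, rcond=None)[0][0]
indirect = a * b                              # product-method estimate of 0.4
```

    The counterfactual framework discussed in the abstract makes explicit when this product actually identifies the natural indirect effect, e.g. no unmeasured treatment-mediator or mediator-outcome confounding.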

  13. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne) in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible in order to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow for test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI will construct a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.

  14. A Graphical Method for Assessing the Identification of Linear Structural Equation Models

    ERIC Educational Resources Information Center

    Eusebi, Paolo

    2008-01-01

    A graphical method is presented for assessing the state of identifiability of the parameters in a linear structural equation model based on the associated directed graph. We do not restrict attention to recursive models. In the recent literature, methods based on graphical models have been presented as a useful tool for assessing the state of…

  15. JACK - ANTHROPOMETRIC MODELING SYSTEM FOR SILICON GRAPHICS WORKSTATIONS

    NASA Technical Reports Server (NTRS)

    Smith, B.

    1994-01-01

    JACK is an interactive graphics program developed at the University of Pennsylvania that displays and manipulates articulated geometric figures. JACK is typically used to observe how a human mannequin interacts with its environment and what effects body types will have upon the performance of a task in a simulated environment. Any environment can be created, and any number of mannequins can be placed anywhere in that environment. JACK includes facilities to construct limited geometric objects, position figures, perform a variety of analyses on the figures, describe the motion of the figures and specify lighting and surface property information for rendering high quality images. JACK is supplied with a variety of body types pre-defined and known to the system. There are both male and female bodies, ranging from the 5th to the 95th percentile, based on NASA Standard 3000. Each mannequin is fully articulated and reflects the joint limitations of a normal human. JACK is an editor for manipulating previously defined objects known as "Peabody" objects. Used to describe the figures as well as the internal data structure for representing them, Peabody is a language with a powerful and flexible mechanism for representing connectivity between objects, both the joints between individual segments within a figure and arbitrary connections between different figures. Peabody objects are generally comprised of several individual figures, each one a collection of segments. Each segment has a geometry represented by PSURF files that consist of polygons or curved surface patches. Although JACK does not have the capability to create new objects, objects may be created by other geometric modeling programs and then translated into the PSURF format. Environment files are a collection of figures and attributes that may be dynamically moved under the control of an animation file. The animation facilities allow the user to create a sequence of commands that duplicate the movements of a

  16. A Gaussian graphical model approach to climate networks

    SciTech Connect

    Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.

  17. A Gaussian graphical model approach to climate networks

    NASA Astrophysics Data System (ADS)

    Zerenner, Tanja; Friederichs, Petra; Lehnertz, Klaus; Hense, Andreas

    2014-06-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.

  18. A Gaussian graphical model approach to climate networks.

    PubMed

    Zerenner, Tanja; Friederichs, Petra; Lehnertz, Klaus; Hense, Andreas

    2014-06-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches to infer networks from climate data while not regarding any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately. PMID:24985417
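
    The key contrast in these records, marginal (Pearson) versus partial correlation edges, is easy to demonstrate on a three-node chain: the marginal correlation between the chain's endpoints is large, while their partial correlation given the middle node is near zero. An illustrative sketch on synthetic data, not climate fields:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
x = rng.normal(size=n)
y = x + rng.normal(size=n)   # y depends directly on x
z = y + rng.normal(size=n)   # z depends directly on y only

pearson_xz = np.corrcoef(x, z)[0, 1]   # indirect dependence shows up (~1/sqrt(3))

# Partial correlation of x and z given y: correlate the regression residuals
cx = np.polyfit(y, x, 1)
cz = np.polyfit(y, z, 1)
partial_xz = np.corrcoef(x - np.polyval(cx, y), z - np.polyval(cz, y))[0, 1]
```

    A network thresholding pearson_xz would draw a spurious direct x-z edge; one built from partial_xz, as in the GGM approach, would not.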

  19. A mixed relaxed clock model

    PubMed Central

    2016-01-01

    Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall in two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions for its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue ‘Dating species divergences using rocks and clocks’. PMID:27325829

  20. A mixed relaxed clock model.

    PubMed

    Lartillot, Nicolas; Phillips, Matthew J; Ronquist, Fredrik

    2016-07-19

    Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall in two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions for its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue 'Dating species divergences using rocks and clocks'. PMID:27325829
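
    The mixed clock's two components can be sketched generatively: a Brownian trend on the log-rate accumulated along the tree's time axis, plus an independent white-noise deviation on each branch. A toy simulation under our own illustrative parametrization, not the authors' model code:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_mixed_clock(parent, age, sigma_bm=0.3, sigma_wn=0.2):
    """Per-branch rates on a time-calibrated tree. `parent[i]` indexes node i's
    parent (-1 for the root); `age[i]` is node i's age, so the branch above
    node i spans age[parent[i]] - age[i] time units. Assumes parents precede
    children in the arrays (preorder indexing)."""
    n = len(parent)
    log_trend = np.zeros(n)   # autocorrelated (Brownian) component of log-rate
    rates = np.zeros(n)
    for i in range(n):
        if parent[i] >= 0:
            dt = age[parent[i]] - age[i]
            log_trend[i] = log_trend[parent[i]] + rng.normal(0.0, sigma_bm * np.sqrt(dt))
        # independent white-noise deviation, branch by branch
        rates[i] = np.exp(log_trend[i] + rng.normal(0.0, sigma_wn))
    return rates

# Five-node tree: root 0, internal node 1, tips 2-4
rates = simulate_mixed_clock([-1, 0, 0, 1, 1], [3.0, 2.0, 0.0, 0.0, 0.0])
```

    Setting sigma_wn = 0 recovers a pure autocorrelated (Brownian) clock, and sigma_bm = 0 a pure uncorrelated white-noise clock, mirroring the two limiting cases the abstract compares against.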

  1. Overview of Neutrino Mixing Models and Their Mixing Angle Predictions

    SciTech Connect

    Albright, Carl H.

    2009-11-01

    An overview of neutrino-mixing models is presented with emphasis on the types of horizontal flavor and vertical family symmetries that have been invoked. Distributions for the mixing angles of many models are displayed. Ways to differentiate among the models and to narrow the list of viable models are discussed.

  2. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). This implementation is based on a second-order centred difference scheme to approximate time differences and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus.
Those tests indicate that the GPU memory size
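For readers unfamiliar with the scheme, the time stepping described above can be sketched in a few lines. The NumPy code below is an illustrative stand-in for the OpenCL kernels: it is simplified from viscoelastic to acoustic, second order in both time and space, with periodic boundaries and invented grid parameters.

```python
import numpy as np

def step_wave_2d(p_prev, p_curr, c, dt, dx):
    """One second-order centred-difference time step of the 2D acoustic
    wave equation p_tt = c^2 (p_xx + p_yy); periodic boundaries via roll."""
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr) / dx**2
    return 2.0 * p_curr - p_prev + (c * dt) ** 2 * lap

# Homogeneous 200 x 200 model with a point impulse; c*dt/dx = 0.3 keeps the
# scheme within the CFL stability limit (1/sqrt(2) for this stencil).
n, dx, dt, c = 200, 10.0, 1e-3, 3000.0
p_prev = np.zeros((n, n))
p_curr = np.zeros((n, n))
p_curr[n // 2, n // 2] = 1.0
for _ in range(100):
    p_prev, p_curr = p_curr, step_wave_2d(p_prev, p_curr, c, dt, dx)
```

On a GPU, the Laplacian stencil becomes one kernel launch per time step, which is where the reported speed-ups come from.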

  3. Bayesian stable isotope mixing models

    EPA Science Inventory

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...

  4. Graphics development of DCOR: Deterministic combat model of Oak Ridge

    SciTech Connect

    Hunt, G.; Azmy, Y.Y.

    1992-10-01

DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
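The first distribution step described above (units assigned one by one to the gridpoint with the largest remaining calculated count) can be sketched as follows. This is only a plausible reading of the algorithm with invented numbers, not DCOR's actual code.

```python
import numpy as np

def distribute_units(density, total_units):
    """Give one display object at a time to the gridpoint whose calculated
    (fractional) unit count is currently the most under-represented."""
    frac = density.ravel() / density.sum() * total_units
    counts = np.zeros_like(frac, dtype=int)
    for _ in range(total_units):
        i = np.argmax(frac - counts)   # gridpoint with largest remaining count
        counts[i] += 1
    return counts.reshape(density.shape)

density = np.array([[4.0, 1.0],        # force density at four gridpoints
                    [3.0, 2.0]])
alloc = distribute_units(density, 10)  # -> [[4, 1], [3, 2]]
```

This largest-remainder style of rounding guarantees that exactly `total_units` objects are drawn each frame.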

  5. Inference of Mix from Experimental Data and Theoretical Mix Models

    SciTech Connect

    Welser-Sherrill, L.; Haynes, D. A.; Cooley, J. H.; Mancini, R. C.; Haan, S. W.; Golovkin, I. E.

    2007-08-02

    The mixing between fuel and shell materials in Inertial Confinement Fusion implosion cores is a topic of great interest. Mixing due to hydrodynamic instabilities can affect implosion dynamics and could also go so far as to prevent ignition. We have demonstrated that it is possible to extract information on mixing directly from experimental data using spectroscopic arguments. In order to compare this data-driven analysis to a theoretical framework, two independent mix models, Youngs' phenomenological model and the Haan saturation model, have been implemented in conjunction with a series of clean hydrodynamic simulations that model the experiments. The first tests of these methods were carried out based on a set of indirect drive implosions at the OMEGA laser. We now focus on direct drive experiments, and endeavor to approach the problem from another perspective. In the current work, we use Youngs' and Haan's mix models in conjunction with hydrodynamic simulations in order to design experimental platforms that exhibit measurably different levels of mix. Once the experiments are completed based on these designs, the results of a data-driven mix analysis will be compared to the levels of mix predicted by the simulations. In this way, we aim to increase our confidence in the methods used to extract mixing information from the experimental data, as well as to study sensitivities and the range of validity of the mix models.

  6. An Item Response Unfolding Model for Graphic Rating Scales

    ERIC Educational Resources Information Center

    Liu, Ying

    2009-01-01

    The graphic rating scale, a measurement tool used in many areas of psychology, usually takes a form of a fixed-length line segment, with both ends bounded and labeled as extreme responses. The raters mark somewhere on the line, and the length of the line segment from one endpoint to the mark is taken as the measure. An item response unfolding…

  7. A probabilistic graphical model approach to stochastic multiscale partial differential equations

    SciTech Connect

Wan, Jiang; Zabaras, Nicholas

    2013-10-01

    We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.
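As a minimal illustration of the belief-propagation step, the sketch below runs sum-product message passing on a toy three-node chain with invented binary potentials, and checks the resulting marginal against brute-force enumeration; it is not the paper's multiscale model.

```python
import numpy as np

# Toy pairwise graphical model on a chain x1 - x2 - x3, each variable binary;
# the potential tables are invented for illustration.
psi12 = np.array([[2.0, 1.0], [1.0, 1.0]])
psi23 = np.array([[1.0, 3.0], [2.0, 1.0]])

# Sum-product messages into node 2 from its two neighbours.
m1to2 = psi12.sum(axis=0)              # marginalize x1 out of psi12
m3to2 = psi23.sum(axis=1)              # marginalize x3 out of psi23
belief2 = m1to2 * m3to2
belief2 /= belief2.sum()               # normalized marginal p(x2)

# Brute-force check over all 2^3 joint configurations.
joint = psi12[:, :, None] * psi23[None, :, :]
brute = joint.sum(axis=(0, 2))
brute /= brute.sum()
```

On a tree-structured graph this message passing is exact, which is what makes predictions from the factorized joint distribution cheap.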

  8. Graphics modelling of non-contact thickness measuring robotics work cell

    NASA Technical Reports Server (NTRS)

    Warren, Charles W.

    1990-01-01

A system was developed for measuring, in real time, the thickness of a sprayable insulation during its application. The system was graphically modelled, off-line, using a state-of-the-art graphics workstation and associated software. This model was to contain a 3D color model of a workcell containing a robot and an air-bearing turntable. A communication link was established between the graphics workstation and the robot's controller. Sequences of robot motion generated by the computer simulation are transmitted to the robot for execution.

  9. Graphics-based intelligent search and abstracting using Data Modeling

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.

    2002-11-01

    This paper presents an autonomous text and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprised of the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.

  10. Top View of a Computer Graphic Model of the Opportunity Lander and Rover

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] PIA05265

A computer graphics model of the Opportunity lander and rover is superimposed on the Martian terrain where Opportunity landed.

  11. A few modeling and rendering techniques for computer graphics and their implementation on ultra hardware

    NASA Technical Reports Server (NTRS)

    Bidasaria, Hari

    1989-01-01

The Ultra Network is a recently installed very-high-speed graphics system at NASA Langley Research Center. Interfaced to Voyager through its HSX channel, the Ultra Network is capable of transmitting up to 800 million bits of information per second, and of displaying fifteen to twenty frames per second of precomputed images of size 1024 x 2368 with 24 bits of color information per pixel. Modeling and rendering techniques are being developed in computer graphics and implemented on the Ultra hardware. A ray tracer is being developed for use at the Flight Software and Graphics Branch. Changes were made to make the ray tracer compatible with Voyager.

  12. Interactive computer graphic surface modeling of three-dimensional solid domains for boundary element analysis

    NASA Technical Reports Server (NTRS)

    Perucchio, R.; Ingraffea, A. R.

    1984-01-01

    The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.

  13. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language.

    PubMed

    Höhna, Sebastian; Landis, Michael J; Heath, Tracy A; Boussau, Bastien; Lartillot, Nicolas; Moore, Brian R; Huelsenbeck, John P; Ronquist, Fredrik

    2016-07-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.]. PMID:27235697

  14. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language

    PubMed Central

    Höhna, Sebastian; Landis, Michael J.

    2016-01-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com. [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.] PMID:27235697

  15. Graphical Means for Inspecting Qualitative Models of System Behaviour

    ERIC Educational Resources Information Center

    Bouwer, Anders; Bredeweg, Bert

    2010-01-01

    This article presents the design and evaluation of a tool for inspecting conceptual models of system behaviour. The basis for this research is the Garp framework for qualitative simulation. This framework includes modelling primitives, such as entities, quantities and causal dependencies, which are combined into model fragments and scenarios.…

  16. Word-level language modeling for P300 spellers based on discriminative graphical models

    PubMed Central

    Saa, Jaime F Delgado; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2016-01-01

Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications. PMID:25686293
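A toy version of the word-level prior idea can be sketched as follows: given simulated per-letter classifier scores and a small invented vocabulary, the word posterior combines the language prior with the letter evidence and can resolve a locally ambiguous letter. All numbers and the vocabulary here are hypothetical, not the authors' model.

```python
import numpy as np

letters = "abcdefghijklmnopqrstuvwxyz"
vocab = ["cat", "car", "can"]              # hypothetical limited vocabulary
prior = np.array([0.5, 0.3, 0.2])          # word-level language prior

# Simulated classifier scores p(EEG epoch | flashed letter) for the word
# "car"; the third epoch is made ambiguous between 'r' and 't'.
epochs = []
for target in "car":
    e = np.full(26, 0.01)
    e[letters.index(target)] = 1.0
    epochs.append(e)
epochs[2][letters.index("t")] = 0.4

# Word posterior: prior times the product of per-letter evidence.
post = np.array([prior[k] * np.prod([epochs[i][letters.index(w[i])]
                                     for i in range(3)])
                 for k, w in enumerate(vocab)])
post /= post.sum()
best = vocab[int(np.argmax(post))]         # -> "car"
```

Because the posterior is over whole words, evidence from later epochs can revise earlier letter decisions, which is the error-correction capability described above.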

  17. Word-level language modeling for P300 spellers based on discriminative graphical models

    NASA Astrophysics Data System (ADS)

    Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2015-04-01

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.

  18. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for interactive displaying and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are the graphical editing of nodes and fast adjusting of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X and provides a version controlled source code repository for community development.

  19. Automatic Construction of Anomaly Detectors from Graphical Models

    SciTech Connect

    Ferragut, Erik M; Darmon, David M; Shue, Craig A; Kelley, Stephen

    2011-01-01

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.
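The core idea (many complementary detectors derived from one trained model) can be sketched with a simple Gaussian stand-in for the paper's LDA model: each detector thresholds the negative log-likelihood of a different view of the same fitted distribution. Feature names and thresholds below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Training features per connection, e.g. (bytes, duration); a 2D Gaussian
# stands in for the probabilistic traffic model trained once.
X = rng.multivariate_normal([10.0, 2.0], [[4.0, 1.5], [1.5, 1.0]], size=5000)
mu, cov = X.mean(axis=0), np.cov(X.T)

def nll_joint(x):
    d = x - mu
    return 0.5 * d @ np.linalg.solve(cov, d)   # Mahalanobis surprise

def nll_marginal(x, i):
    return 0.5 * (x[i] - mu[i]) ** 2 / cov[i, i]

# Several complementary detectors derived from the single fitted model.
detectors = {
    "joint": nll_joint,
    "bytes_only": lambda x: nll_marginal(x, 0),
    "duration_only": lambda x: nll_marginal(x, 1),
}
thresholds = {name: np.percentile([f(x) for x in X], 99.0)
              for name, f in detectors.items()}

def flag(x):
    return [name for name, f in detectors.items() if f(x) > thresholds[name]]

alerts = flag(np.array([10.0, 8.0]))   # typical bytes, extreme duration
```

Only one model is trained, but each marginal or conditional of it yields an anomaly score with its own calibrated threshold, mirroring the flow/sub-flow/host detectors in the paper.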

  20. Use and abuse of mixing models (MixSIAR)

    EPA Science Inventory

    Background/Question/MethodsCharacterizing trophic links in food webs is a fundamental ecological question. In our efforts to quantify energy flow through food webs, ecologists have increasingly used mixing models to analyze biological tracer data, often from stable isotopes. Whil...

  1. OASIS: A GRAPHICAL DECISION SUPPORT SYSTEM FOR GROUNDWATER CONTAMINANT MODELING

    EPA Science Inventory

Three new software technologies were applied to develop an efficient and easy-to-use decision support system for ground-water contaminant modeling. Graphical interfaces create a more intuitive and effective form of communication with the computer compared to text-based interfaces....

  2. Probabilistic assessment of agricultural droughts using graphical models

    NASA Astrophysics Data System (ADS)

    Ramadas, Meenu; Govindaraju, Rao S.

    2015-07-01

    Agricultural droughts are often characterized by soil moisture in the root zone of the soil, but crop needs are rarely factored into the analysis. Since water needs vary with crops, agricultural drought incidences in a region can be characterized better if crop responses to soil water deficits are also accounted for in the drought index. This study investigates agricultural droughts driven by plant stress due to soil moisture deficits using crop stress functions available in the literature. Crop water stress is assumed to begin at the soil moisture level corresponding to incipient stomatal closure, and reaches its maximum at the crop's wilting point. Using available location-specific crop acreage data, a weighted crop water stress function is computed. A new probabilistic agricultural drought index is then developed within a hidden Markov model (HMM) framework that provides model uncertainty in drought classification and accounts for time dependence between drought states. The proposed index allows probabilistic classification of the drought states and takes due cognizance of the stress experienced by the crop due to soil moisture deficit. The capabilities of HMM model formulations for assessing agricultural droughts are compared to those of current drought indices such as standardized precipitation evapotranspiration index (SPEI) and self-calibrating Palmer drought severity index (SC-PDSI). The HMM model identified critical drought events and several drought occurrences that are not detected by either SPEI or SC-PDSI, and shows promise as a tool for agricultural drought studies.
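A minimal sketch of the HMM machinery described above, with two hidden states (normal/drought) and discretized crop water stress as the observable; all probabilities here are illustrative, not the calibrated values of the proposed index.

```python
import numpy as np

# Hidden states: 0 = normal, 1 = drought. Observations: crop water stress
# discretized into {0: low, 1: moderate, 2: high}. Numbers are invented.
A = np.array([[0.9, 0.1],                # state persistence
              [0.2, 0.8]])               # (time dependence between states)
B = np.array([[0.7, 0.25, 0.05],         # p(stress | normal)
              [0.1, 0.3, 0.6]])          # p(stress | drought)
pi = np.array([0.8, 0.2])

def forward_filter(obs):
    """Filtered p(state_t | obs_1..t): probabilistic drought classification."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

stress = [0, 0, 1, 2, 2, 2, 1, 0]        # a synthetic stress sequence
p_drought = forward_filter(stress)[:, 1]
```

The output is a probability of being in the drought state at each time step, rather than a hard threshold crossing, which is what allows the probabilistic classification discussed above.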

  3. A Local Poisson Graphical Model for inferring networks from sequencing data.

    PubMed

    Allen, Genevera I; Liu, Zhandong

    2013-09-01

    Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research. PMID:23955777
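One neighbourhood regression of such a model can be sketched with a plain proximal-gradient solver for l1-penalized Poisson regression; this is a simplified stand-in for the paper's fitting procedure, run here on synthetic data.

```python
import numpy as np

def l1_poisson_regression(X, y, lam, lr=0.01, n_iter=2000):
    """One neighbourhood fit of a Local Poisson Graphical Model:
    log E[y] = X @ beta with an l1 penalty, via proximal gradient descent."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (np.exp(X @ beta) - y) / n          # Poisson score
        beta = beta - lr * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
    return beta

rng = np.random.default_rng(1)
n, p = 500, 5
X = rng.normal(scale=0.5, size=(n, p))                   # other "genes"
true = np.array([1.0, 0.0, -0.8, 0.0, 0.0])              # sparse neighbourhood
y = rng.poisson(np.exp(X @ true))                        # count data
beta = l1_poisson_regression(X, y, lam=0.05)
```

Repeating this regression for every node and keeping the nonzero coefficients recovers the neighbourhood (edge) structure; the independent per-node fits are what make the approach embarrassingly parallel.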

  4. (Hyper)-graphical models in biomedical image analysis.

    PubMed

    Paragios, Nikos; Ferrante, Enzo; Glocker, Ben; Komodakis, Nikos; Parisot, Sarah; Zacharaki, Evangelia I

    2016-10-01

    Computational vision, visual computing and biomedical image analysis have made tremendous progress over the past two decades. This is mostly due the development of efficient learning and inference algorithms which allow better and richer modeling of image and visual understanding tasks. Hyper-graph representations are among the most prominent tools to address such perception through the casting of perception as a graph optimization problem. In this paper, we briefly introduce the importance of such representations, discuss their strength and limitations, provide appropriate strategies for their inference and present their application to address a variety of problems in biomedical image analysis. PMID:27377331

  5. Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.

    ERIC Educational Resources Information Center

    Buchal, Ralph O.

    2001-01-01

    Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)

  6. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  7. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
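As a sketch of what such a program computes, the snippet below applies the standard Thornthwaite (1948) potential-evapotranspiration formula (day-length correction omitted for brevity) and a minimal bucket-style monthly water balance; the storage capacity and climate inputs are invented, and this is not the USGS program's code.

```python
def thornthwaite_pet(monthly_temp_c):
    """Monthly potential evapotranspiration (mm) from mean monthly
    temperatures (deg C), uncorrected for day length."""
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temp_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0
            for t in monthly_temp_c]

temps = [-2, 0, 4, 9, 14, 19, 22, 21, 16, 10, 4, -1]   # illustrative climate
pet = thornthwaite_pet(temps)

# Minimal monthly balance: precipitation P fills soil storage (capacity
# 150 mm); demand is PET; excess above capacity becomes runoff.
P = [90, 70, 60, 50, 40, 30, 20, 25, 40, 60, 80, 95]
storage, capacity, runoff = 100.0, 150.0, []
for p, e in zip(P, pet):
    storage = storage + p - min(e, storage + p)        # evaporate what exists
    runoff.append(max(storage - capacity, 0.0))
    storage = min(storage, capacity)
```

The graphical user interface described in the report essentially wraps this kind of monthly bookkeeping with plots of storage, PET, and runoff.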

  8. Graphical Modeling: A New Response Type for Measuring the Qualitative Component of Mathematical Reasoning.

    ERIC Educational Resources Information Center

    Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis; Rock, Donald A.

    2000-01-01

    Investigated the functioning of a new computer-delivered graphical modeling (GM) response type for use in a graduate admissions assessment using two GM tests differing in item features randomly spiraled among participants. Results show GM scores to be reliable and moderately related to the quantitative section of the Graduate Record Examinations.…

  9. Quantifying uncertainty in stable isotope mixing models

    NASA Astrophysics Data System (ADS)

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-01

Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated
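The pure Monte Carlo end of the spectrum compared above can be sketched as rejection sampling over the mixing simplex; the source signatures, uncertainties, and acceptance tolerance below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical mean source signatures (delta15N, delta18O) and a common sd.
sources = np.array([[ 2.0,  1.0],    # e.g. fertilizer
                    [10.0,  5.0],    # e.g. manure/septic
                    [ 5.0, 15.0]])   # e.g. atmospheric deposition
source_sd = 1.0
mixture = np.array([6.0, 6.0])       # measured sample

# Pure Monte Carlo: sample mixing fractions on the simplex and perturbed
# source compositions; keep draws whose predicted mixture is close enough.
f = rng.dirichlet(np.ones(3), size=200_000)
s = sources + rng.normal(scale=source_sd, size=(200_000, 3, 2))
pred = np.einsum('nk,nkt->nt', f, s)
keep = f[np.linalg.norm(pred - mixture, axis=1) < 0.5]
est = keep.mean(axis=0)              # mean accepted mixing fractions
```

The spread of the accepted draws, not just their mean, is the uncertainty estimate; the accepted cloud widens as source signatures overlap, which is the central difficulty the study quantifies.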

  10. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the

  11. Quantifying uncertainty in stable isotope mixing models

DOE PAGES Beta

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated

  12. Modeling and diagnosis of structural systems through sparse dynamic graphical models

    NASA Astrophysics Data System (ADS)

    Bornn, Luke; Farrar, Charles R.; Higdon, David; Murphy, Kevin P.

    2016-06-01

    Since their introduction into the structural health monitoring field, time-domain statistical models have been applied with considerable success. Current approaches still have several flaws, however, as they typically ignore the structure of the system, using individual sensor data for modeling and diagnosis. This paper introduces a Bayesian framework that contains much of the previous autoregressive-model work as a special case. In addition, the framework allows for natural inclusion of structural knowledge in the form of prior distributions on the model parameters. Acknowledging the need for computational efficiency, we extend the framework through the use of decomposable graphical models, exploiting sparsity in the system to give models that are simple to fit and understand. This sparsity can be specified from knowledge of the system, from the data itself, or through a combination of the two. Using both simulated and real data, we demonstrate the capability of the model to capture the dynamics of the system and to provide clear indications of structural change and damage. We also demonstrate how learning the sparsity in the system gives insight into the structure's physical properties.
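    The autoregressive special case mentioned above can be sketched without the Bayesian machinery: fit an AR model to baseline sensor data by least squares, then monitor the one-step residuals, which grow when the dynamics change. The series and coefficients below are synthetic, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar_fit(x, p):
    """Least-squares AR(p) coefficients for series x."""
    X = np.array([x[i - p:i][::-1] for i in range(p, len(x))])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def residual_rms(x, coef):
    """RMS one-step prediction error of an AR model on series x."""
    p = len(coef)
    X = np.array([x[i - p:i][::-1] for i in range(p, len(x))])
    return float(np.sqrt(np.mean((x[p:] - X @ coef) ** 2)))

# Synthetic sensor responses: the "damaged" series has altered dynamics.
n = 2000
healthy = np.zeros(n)
damaged = np.zeros(n)
for t in range(2, n):
    healthy[t] = 1.5 * healthy[t - 1] - 0.7 * healthy[t - 2] + rng.normal()
    damaged[t] = 0.8 * damaged[t - 1] + rng.normal()

coef = ar_fit(healthy, 2)            # baseline model from healthy data
rms_h = residual_rms(healthy, coef)
rms_d = residual_rms(damaged, coef)  # elevated value signals change
print(rms_h, rms_d)
```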

  13. Transition mixing study empirical model report

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; White, C.

    1988-01-01

    The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected in rectangular ducts.

  14. Experiments with a low-cost system for computer graphics material model acquisition

    NASA Astrophysics Data System (ADS)

    Rushmeier, Holly; Lockerman, Yitzhak; Cartwright, Luke; Pitera, David

    2015-03-01

    We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.

  15. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The U.S. Environmental Protection Agency has a long history of both supporting plume model development and providing mixing zone modeling software. The Visual Plumes model is the most recent addition to the suite of public-domain models available through the EPA-Athens Center f...

  16. Using graphical models to infer missing streamflow data with its application to the Ohio river basin

    NASA Astrophysics Data System (ADS)

    Villalba, G.; Liang, X.; Salas, D.; Liang, Y.

    2013-12-01

    The spatial relationship among streamflow gauges is an interesting but challenging problem. In this study, we apply a probabilistic graphical modeling approach to explore and represent the spatial relationships among streamflow gauges and then to infer missing data with low uncertainty. The Ohio River Basin is used as a case study to analyze the spatial correlations among the streamflow gauges. An undirected graphical model is used to identify the main spatial correlations among the gauges. Given that model, multivariate linear regressions are used to infer missing data. The accuracy of the method is tested against historical data from 34 daily streamflow gauges over the Ohio River basin, for which 30 years of data are available. Our initial study shows promising results.
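    A minimal sketch of the inference step, assuming the graphical model has already identified gauges 1 and 2 as the neighbours of gauge 0 (all series below are synthetic, not Ohio basin data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "streamflow" at 4 gauges: gauge 0 is driven by gauges 1 and 2
# (its graph neighbours) plus noise; gauge 3 is unrelated.
n = 365
g1 = rng.gamma(2.0, 5.0, n)
g2 = rng.gamma(2.0, 5.0, n)
g0 = 0.6 * g1 + 0.3 * g2 + rng.normal(0.0, 1.0, n)

# Pretend the last 60 days of gauge 0 are missing; fit a linear regression
# on the neighbours from the graph, then infer the gap.
train = slice(0, n - 60)
test = slice(n - 60, n)
X = np.column_stack([g1, g2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X[train], g0[train], rcond=None)
g0_hat = X[test] @ coef

rmse = float(np.sqrt(np.mean((g0_hat - g0[test]) ** 2)))
print(coef, rmse)
```

    Restricting the regression to graph neighbours, rather than all gauges, is what keeps the imputation stable when many gauges are available.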

  17. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times.
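    The node-wise regression idea can be sketched in the Gaussian special case (this is not the SQR model itself): regress one node on all the others with an ℓ1 penalty and read its neighbours off the nonzero coefficients. The chain-graph data and penalty value below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent lasso (objective: MSE/2 + lam * ||beta||_1)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Chain graph 0 -- 1 -- 2, with node 3 independent of the rest.
n = 500
z0 = rng.normal(size=n)
z1 = 0.8 * z0 + 0.6 * rng.normal(size=n)
z2 = 0.8 * z1 + 0.6 * rng.normal(size=n)
z3 = rng.normal(size=n)

# Node-wise regression: regress node 1 on all the others; nonzero
# coefficients are its estimated neighbours in the graph.
X = np.column_stack([z0, z2, z3])
beta = lasso_cd(X, z1, lam=0.15)
print(beta)   # nonzero for z0 and z2, (near) zero for z3
```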

  18. Model selection for factorial Gaussian graphical models with an application to dynamic regulatory networks.

    PubMed

    Vinciotti, Veronica; Augugliaro, Luigi; Abbruzzo, Antonino; Wit, Ernst C

    2016-06-01

    Factorial Gaussian graphical Models (fGGMs) have recently been proposed for inferring dynamic gene regulatory networks from genomic high-throughput data. In the search for true regulatory relationships amongst the vast space of possible networks, these models allow the imposition of certain restrictions on the dynamic nature of these relationships, such as Markov dependencies of low order - some entries of the precision matrix are a priori zeros - or equal dependency strengths across time lags - some entries of the precision matrix are assumed to be equal. The precision matrix is then estimated by l1-penalized maximum likelihood, imposing a further constraint on the absolute value of its entries, which results in sparse networks. Selecting the optimal sparsity level is a major challenge for this type of approach. In this paper, we evaluate the performance of a number of model selection criteria for fGGMs by means of two simulated regulatory networks from realistic biological processes. The analysis reveals a good performance of fGGMs in comparison with other methods for inferring dynamic networks and of the KLCV criterion in particular for model selection. Finally, we present an application to high-resolution time-course microarray data from the bacterium Neisseria meningitidis, a causative agent of life-threatening infections such as meningitis. The methodology described in this paper is implemented in the R package sglasso, freely available at CRAN, http://CRAN.R-project.org/package=sglasso. PMID:27023322

  19. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
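    As a small worked instance of Bayes's theorem for one of the highlighted problem classes, a Poisson process: with a Gamma(a0, b0) shape-rate prior on the rate and observed counts k_1..k_n, the posterior is Gamma(a0 + Σk_i, b0 + n). The counts and prior below are invented for illustration:

```python
import numpy as np

# Conjugate Bayesian update for a Poisson rate.
counts = np.array([3, 7, 4, 6, 5])   # observed event counts per interval
a0, b0 = 1.0, 0.1                    # weak Gamma(shape, rate) prior

a_post = a0 + counts.sum()           # posterior shape
b_post = b0 + len(counts)            # posterior rate
post_mean = a_post / b_post
post_sd = np.sqrt(a_post) / b_post
print(post_mean, post_sd)
```

    The posterior mean sits close to the sample mean because the prior is weak; with few counts, the prior would pull it toward a0/b0, which is the regularizing behaviour the multilevel models in the tutorial exploit.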

  20. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    PubMed Central

    Surles, M. C.; Richardson, J. S.; Richardson, D. C.; Brooks, F. P.

    1994-01-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. 
Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and

  1. Scotogenic model for co-bimaximal mixing

    NASA Astrophysics Data System (ADS)

    Ferreira, P. M.; Grimus, W.; Jurčiukonis, D.; Lavoura, L.

    2016-07-01

    We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet η, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle θ23 = 45° and to a CP-violating phase δ = ±π/2, while the mixing angle θ13 remains arbitrary. The symmetries consist of softly broken lepton numbers Lα (α = e, μ, τ), a non-standard CP symmetry, and three Z2 symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides η, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.

  2. From Nominal to Quantitative Codification of Content-Neutral Variables in Graphics Research: The Beginnings of a Manifest Content Model.

    ERIC Educational Resources Information Center

    Crow, Wendell C.

    This paper suggests ways in which manifest, physical attributes of graphic elements can be described and measured. It also proposes a preliminary conceptual model that accounts for the readily apparent, measurable variables in a visual message. The graphic elements that are described include format, typeface, and photographs/artwork. The…

  3. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    The capability to rapidly produce visual representations of large, complex, multi-dimensional space and earth sciences data sets was developed by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  4. The effects of a dynamic graphical model during simulation-based training of console operation skill

    NASA Technical Reports Server (NTRS)

    Farquhar, John D.; Regian, J. Wesley

    1993-01-01

    LOADER is a Windows-based simulation of a complex procedural task. The task requires subjects to execute long sequences of console-operation actions (e.g., button presses, switch actuations, dial rotations) to accomplish specific goals. The LOADER interface is a graphical computer-simulated console which controls railroad cars, tracks, and cranes in a fictitious railroad yard. We hypothesized that acquisition of LOADER performance skill would be supported by the representation of a dynamic graphical model linking console actions to goal and goal states in the 'railroad yard'. Twenty-nine subjects were randomly assigned to one of two treatments (i.e., dynamic model or no model). During training, both groups received identical text-based instruction in an instructional-window above the LOADER interface. One group, however, additionally saw a dynamic version of the bird's-eye view of the railroad yard. After training, both groups were tested under identical conditions. They were asked to perform the complete procedure without guidance and without access to either type of railroad yard representation. Results indicate that rather than becoming dependent on the animated rail yard model, subjects in the dynamic model condition apparently internalized the model, as evidenced by their performance after the model was removed.

  5. A Module for Graphical Display of Model Results with the CBP Toolbox

    SciTech Connect

    Smith, F.

    2015-04-21

    This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to add enhanced graphical capabilities to display model results in the Cementitious Barriers Project (CBP) Toolbox. Because Version 2.0 of the CBP Toolbox has just been released, the graphing enhancements described in this report have not yet been integrated into a new version of the Toolbox. Instead they have been tested using a standalone GoldSim model and, while they are substantially complete, may undergo further refinement before full implementation. Nevertheless, this report is issued to document the FY14 development efforts which will provide a basis for further development of the CBP Toolbox.

  6. Learning a structured graphical model with boosted top-down features for ultrasound image segmentation.

    PubMed

    Hao, Zhihui; Wang, Qiang; Wang, Xiaotao; Kim, Jung Bae; Hwang, Youngkyoo; Cho, Baek Hwan; Guo, Ping; Lee, Won Ki

    2013-01-01

    A key problem for many medical image segmentation tasks is the combination of different-level knowledge. We propose a novel scheme of embedding detected regions into a superpixel based graphical model, by which we achieve a full leverage on various image cues for ultrasound lesion segmentation. Region features are mapped into a higher-dimensional space via a boosted model to become well controlled. Parameters for regions, superpixels and a new affinity term are learned simultaneously within the framework of structured learning. Experiments on a breast ultrasound image data set confirm the effectiveness of the proposed approach as well as our two novel modules. PMID:24505670

  7. A graphical user interface for numerical modeling of acclimation responses of vegetation to climate change

    NASA Astrophysics Data System (ADS)

    Le, Phong V. V.; Kumar, Praveen; Drewry, Darren T.; Quijano, Juan C.

    2012-12-01

    Ecophysiological models that vertically resolve vegetation canopy states are becoming a powerful tool for studying the exchange of mass, energy, and momentum between the land surface and the atmosphere. A mechanistic multilayer canopy-soil-root system model (MLCan) developed by Drewry et al. (2010a) has been used to capture the emergent vegetation responses to elevated atmospheric CO2 for both C3 and C4 plants under various climate conditions. However, processing input data and setting up such a model can be time-consuming and error-prone. In this paper, a graphical user interface that has been developed for MLCan is presented. The design of this interface aims to provide visualization capabilities and interactive support for processing input meteorological forcing data and vegetation parameter values to facilitate the use of this model. In addition, the interface also provides graphical tools for analyzing the forcing data and simulated numerical results. The model and its interface are both written in the MATLAB programming language. Finally, an application of this model package for capturing the ecohydrological responses of three bioenergy crops (maize, miscanthus, and switchgrass) to local environmental drivers at two different sites in the Midwestern United States is presented.

  8. Model-Independent Bounds on Kinetic Mixing

    DOE PAGESBeta

    Hook, Anson; Izaguirre, Eder; Wacker, Jay G.

    2011-01-01

    New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model-independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e− experiments that have been performed in this energy range and bound the kinetic mixing by ϵ ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.

  9. Model Independent Bounds on Kinetic Mixing

    SciTech Connect

    Hook, Anson; Izaguirre, Eder; Wacker, Jay G.; /SLAC

    2011-08-22

    New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model-independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e− experiments that have been performed in this energy range and bound the kinetic mixing by ε ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.

  10. A Spectral Graphical Model Approach for Learning Brain Connectivity Network of Children's Narrative Comprehension

    PubMed Central

    Meng, Xiangxiang; Karunanayaka, Prasanna; Holland, Scott K.

    2011-01-01

    Narrative comprehension is a fundamental cognitive skill that involves the coordination of different functional brain regions. We develop a spectral graphical model with model averaging to study the connectivity networks underlying these brain regions using fMRI data collected from a story comprehension task. Based on the spectral density matrices in the frequency domain, this model captures the temporal dependency of the entire fMRI time series between brain regions. A Bayesian model averaging procedure is then applied to select the best directional links that constitute the brain network. Using this model, brain networks of three distinct age groups are constructed to assess the dynamic change of network connectivity with respect to age. PMID:22432453
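    A hedged sketch of the frequency-domain ingredient: magnitude-squared coherence, computed from Welch-averaged cross-spectra, flags frequency-specific dependence between two time series. (The spectral graphical model works with full spectral density matrices and model averaging; the signals below are synthetic, not fMRI data.)

```python
import numpy as np

rng = np.random.default_rng(6)

def coherence(x, y, nseg=16):
    """Magnitude-squared coherence via Welch-style averaged periodograms."""
    m = len(x) // nseg
    win = np.hanning(m)
    sxx = syy = 0.0
    sxy = 0.0 + 0.0j
    for i in range(nseg):
        fx = np.fft.rfft(win * x[i * m:(i + 1) * m])
        fy = np.fft.rfft(win * y[i * m:(i + 1) * m])
        sxx = sxx + np.abs(fx) ** 2
        syy = syy + np.abs(fy) ** 2
        sxy = sxy + fx * np.conj(fy)
    return np.abs(sxy) ** 2 / (sxx * syy)

# Two "regions" sharing an oscillation at 0.125 cycles/sample, one unrelated.
t = np.arange(2048)
x = np.sin(2 * np.pi * 0.125 * t) + rng.normal(0, 1, 2048)
y = np.sin(2 * np.pi * 0.125 * t + 1.0) + rng.normal(0, 1, 2048)
z = rng.normal(0, 1, 2048)

coh_xy = coherence(x, y)
coh_xz = coherence(x, z)
k = 16   # 0.125 cycles/sample with 128-sample segments
print(coh_xy[k], coh_xz[k])   # high for the linked pair, low otherwise
```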

  11. A spectral graphical model approach for learning brain connectivity network of children's narrative comprehension.

    PubMed

    Lin, Xiaodong; Meng, Xiangxiang; Karunanayaka, Prasanna; Holland, Scott K

    2011-01-01

    Narrative comprehension is a fundamental cognitive skill that involves the coordination of different functional brain regions. We develop a spectral graphical model with model averaging to study the connectivity networks underlying these brain regions using fMRI data collected from a story comprehension task. Based on the spectral density matrices in the frequency domain, this model captures the temporal dependency of the entire fMRI time series between brain regions. A Bayesian model averaging procedure is then applied to select the best directional links that constitute the brain network. Using this model, brain networks of three distinct age groups are constructed to assess the dynamic change of network connectivity with respect to age. PMID:22432453

  12. ESTIMATING HETEROGENEOUS GRAPHICAL MODELS FOR DISCRETE DATA WITH AN APPLICATION TO ROLL CALL VOTING

    PubMed Central

    Guo, Jian; Cheng, Jie; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2016-01-01

    We consider the problem of jointly estimating a collection of graphical models for discrete data, corresponding to several categories that share some common structure. An example for such a setting is voting records of legislators on different issues, such as defense, energy, and healthcare. We develop a Markov graphical model to characterize the heterogeneous dependence structures arising from such data. The model is fitted via a joint estimation method that preserves the underlying common graph structure, but also allows for differences between the networks. The method employs a group penalty that targets the common zero interaction effects across all the networks. We apply the method to describe the internal networks of the U.S. Senate on several important issues. Our analysis reveals individual structure for each issue, distinct from the underlying well-known bipartisan structure common to all categories which we are able to extract separately. We also establish consistency of the proposed method both for parameter estimation and model selection, and evaluate its numerical performance on a number of simulated examples. PMID:27182289

  13. The Mixed Effects Trend Vector Model

    ERIC Educational Resources Information Center

    de Rooij, Mark; Schouteden, Martijn

    2012-01-01

    Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…

  14. A computer graphics based model for scattering from objects of arbitrary shapes in the optical region

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.

    1991-01-01

    A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.

  15. Simplified models of mixed dark matter

    SciTech Connect

    Cheung, Clifford; Sanford, David E-mail: dsanford@caltech.edu

    2014-02-01

    We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify "blind spots" at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors.

  16. A graphical method to assess distribution assumption in group-based trajectory models.

    PubMed

    Elsensohn, Mad-Hélénie; Klich, Amna; Ecochard, René; Bastard, Mathieu; Genolini, Christophe; Etard, Jean-François; Gustin, Marie-Paule

    2016-04-01

    Group-based trajectory models have developed rapidly in the analysis of longitudinal data in clinical research. In these models, the assumption of homoscedasticity of the residuals is frequently made, but this assumption is not always met. We developed here an easy-to-perform graphical method to assess the assumption of homoscedasticity of the residuals, intended especially for group-based trajectory models. The method is based on drawing an envelope to visualize the local dispersion of the residuals around each typical trajectory. Its efficiency is demonstrated using data on CD4 lymphocyte counts in patients with human immunodeficiency virus put on antiretroviral therapy. Four distinct distributions that take into account increasing parts of the variability of the observed data are presented. Significant differences in group structures and trajectory patterns were found according to the chosen distribution. These differences might have large impacts on the final trajectories and their characteristics, and thus on potential medical decisions. With a single glance, the graphical criterion allows the choice of the distribution that best captures data variability and helps deal with a potential heteroscedasticity problem. PMID:23427224
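    The envelope idea can be sketched numerically: collect residuals around a group's typical trajectory and band them with local quantiles, whose width across time reveals heteroscedasticity. The trajectory and noise model below are invented, not the CD4 data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of the envelope idea: around one group's mean trajectory, summarize
# the local spread of residuals at each time point with quantile bands.
t = np.arange(20)
traj = 500.0 + 20.0 * t                                  # hypothetical mean trajectory
obs = traj + rng.normal(0.0, 30.0 + 2.0 * t, size=(100, 20))  # heteroscedastic noise
resid = obs - traj

lo, hi = np.percentile(resid, [5, 95], axis=0)
width = hi - lo              # envelope width per time point
print(width[0], width[-1])   # a widening band flags heteroscedasticity
```

    In practice the band would be drawn around each estimated group trajectory; here the comparison of widths stands in for the visual check.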

  17. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    PubMed

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036
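    A simplified stand-in for this kind of edge inference (not the FastGGM algorithm, which targets the high-dimensional regime): with n ≫ p one can invert the sample covariance, form partial correlations, and attach a Fisher-z p-value to each edge. The graph and sample size below are illustrative:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Simulate from a known precision matrix with a single true edge (0, 1).
p, n = 5, 2000
prec = np.eye(p)
prec[0, 1] = prec[1, 0] = 0.4
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=n)

# Invert the sample covariance and scale to partial correlations.
omega = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(omega))
pcorr = -omega / np.outer(d, d)   # off-diagonal entries are partial correlations

def edge_pvalue(r, n, p):
    """Two-sided p-value from Fisher's z for a partial correlation."""
    z = 0.5 * np.log((1 + r) / (1 - r)) * sqrt(n - p - 1)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

print(pcorr[0, 1], edge_pvalue(pcorr[0, 1], n, p), edge_pvalue(pcorr[2, 3], n, p))
```

    The true edge yields a vanishing p-value while absent edges do not; FastGGM's contribution is making such per-edge inference statistically valid and fast when p is in the tens of thousands.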

  18. Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing

    PubMed Central

    Yoshida, Ryo; West, Mike

    2010-01-01

    We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loadings matrices, that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate “artificial” posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data. PMID:20890391

  19. Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler

    NASA Astrophysics Data System (ADS)

    Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah

    2014-07-01

    A Circulating Fluidized Bed (CFB) boiler is a device for generating steam by burning fossil fuels in a furnace operating under special hydrodynamic conditions. An autocatalytic set has provided a graphical model of the chemical reactions that occur during the combustion process in a CFB. Eight important chemical substances, known as species, were represented as nodes, and catalytic relationships between nodes are represented by the edges of the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also occur during the process. Catalytic relationships among the species in the model are discussed. The result reveals that the modified model gives a fuller explanation of the relationships among the species at the initial time t.
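    The species-as-nodes representation can be sketched with a toy graph; the species and edges below are invented for illustration and are not the paper's actual CFB reaction set:

```python
# Toy combustion graph: species as nodes, production/catalytic links as
# directed edges (hypothetical, for illustration only).
edges = {
    "C": ["CO"], "CO": ["CO2"], "O2": ["CO", "CO2", "H2O"],
    "H2": ["H2O"], "S": ["SO2"], "CaCO3": ["CaO"],
    "CaO": ["CaSO4"], "SO2": ["CaSO4"],
}

def reachable(start, edges):
    """Set of species reachable from an initial mix by graph traversal."""
    seen, frontier = set(start), list(start)
    while frontier:
        node = frontier.pop()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

products = reachable({"C", "O2", "H2", "S", "CaCO3"}, edges)
print(sorted(products))
```

    Traversal from the initial fuel/air/sorbent mix enumerates every species the reaction network can produce, which is the kind of relationship the graphical model makes explicit.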

  20. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public, data sets is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In upholding the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models, providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient, data-set-specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge, as flexible applications can be created not only at a scripting level but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of

  1. Computer Graphics.

    ERIC Educational Resources Information Center

    Halpern, Jeanne W.

    1970-01-01

    Computer graphics have been called the most exciting development in computer technology. At the University of Michigan, three kinds of graphics output equipment are now being used: symbolic printers, line plotters or drafting devices, and cathode-ray tubes (CRT). Six examples are given that demonstrate the range of graphics use at the University.…

  2. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Evaluation of Uncertainty in Measurement (GUM) [1] recognizes the necessity of analyzing certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues, and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
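The random effects ANOVA model underlying Annex H.5 can be written as y_ij = μ + b_i + ε_ij, with b_i the random group (e.g. day-to-day) effect. As a sketch of how the variance components feed an uncertainty budget, the following numpy example simulates a balanced design and recovers both components with the classical method-of-moments (ANOVA) estimators; it is an illustration, not code from the paper, and all names and values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced one-way random-effects design: y_ij = mu + b_i + e_ij,
# with b_i ~ N(0, sigma_b^2) (group effect) and e_ij ~ N(0, sigma_e^2).
groups, reps = 20, 10
mu, sigma_b, sigma_e = 5.0, 1.0, 0.5
b = rng.normal(0.0, sigma_b, size=groups)
y = mu + b[:, None] + rng.normal(0.0, sigma_e, size=(groups, reps))

# Classical ANOVA (method-of-moments) estimators of the variance components
group_means = y.mean(axis=1)
grand_mean = y.mean()
ms_between = reps * np.sum((group_means - grand_mean) ** 2) / (groups - 1)
ms_within = np.sum((y - group_means[:, None]) ** 2) / (groups * (reps - 1))

sigma_e2_hat = ms_within                                  # repeatability component
sigma_b2_hat = max((ms_between - ms_within) / reps, 0.0)  # between-group component

# Standard uncertainty of the grand mean under the mixed model
u_mean = np.sqrt(ms_between / (groups * reps))
```

The point made in the abstract shows up directly: ignoring the random effect and using `ms_within` alone would understate the uncertainty of the mean.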

  3. Joint sulcal detection on cortical surfaces with graphical models and boosted priors.

    PubMed

    Shi, Yonggang; Tu, Zhuowen; Reiss, Allan L; Dutton, Rebecca A; Lee, Agatha D; Galaburda, Albert M; Dinov, Ivo; Thompson, Paul M; Toga, Arthur W

    2009-03-01

    In this paper, we propose an automated approach for the joint detection of major sulci on cortical surfaces. By representing sulci as nodes in a graphical model, we incorporate Markovian relations between sulci and formulate their detection as a maximum a posteriori (MAP) estimation problem over the joint space of major sulci. To make the inference tractable, a sample space with a finite number of candidate curves is automatically generated at each node based on the Hamilton-Jacobi skeleton of sulcal regions. Using the AdaBoost algorithm, we learn both individual and pairwise shape priors of sulcal curves from training data, which are then used to define potential functions in the graphical model based on the connection between AdaBoost and logistic regression. Finally, belief propagation is used to perform the MAP inference and select the joint detection results from the sample spaces of candidate curves. In our experiments, we quantitatively validate our algorithm with manually traced curves and demonstrate that the automatically detected curves capture the main body of the sulci very accurately. A comparison with independently detected results is also conducted to illustrate the advantage of the joint detection approach. PMID:19244008
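The MAP-selection step can be illustrated on a much simpler model. The sketch below runs max-product (Viterbi-style) belief propagation on a small chain with made-up log-potentials; the paper itself uses loopy belief propagation with AdaBoost-learned potentials on a general graph, so this is only a minimal illustration of the inference principle, with entirely hypothetical numbers.

```python
import numpy as np

def map_chain(unary, pairwise):
    """Max-product (Viterbi) MAP inference on a chain-structured MRF.

    unary:    list of arrays, unary[i][k] = log-potential of state k at node i
    pairwise: list of (n_i x n_{i+1}) arrays of pairwise log-potentials
    Returns the MAP state index for every node.
    """
    n = len(unary)
    msgs, back = [None] * n, [None] * n
    msgs[0] = unary[0]
    for i in range(1, n):
        # score[j, k] = best score ending in state k at node i via state j at i-1
        score = msgs[i - 1][:, None] + pairwise[i - 1] + unary[i][None, :]
        back[i] = np.argmax(score, axis=0)
        msgs[i] = np.max(score, axis=0)
    states = [int(np.argmax(msgs[-1]))]
    for i in range(n - 1, 0, -1):
        states.append(int(back[i][states[-1]]))
    return states[::-1]

# Toy example: 3 nodes with 2 candidate "curves" each; neighbours prefer agreement.
unary = [np.log(np.array(p)) for p in ([0.6, 0.4], [0.3, 0.7], [0.5, 0.5])]
compat = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
map_states = map_chain(unary, [compat, compat])  # -> [1, 1, 1]
```

Note how the pairwise agreement potential overrides the first node's unary preference for state 0: this is precisely the benefit of joint over independent detection that the abstract describes.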

  4. Bayesian Estimation of Latently-grouped Parameters in Undirected Graphical Models

    PubMed Central

    Liu, Jie; Page, David

    2014-01-01

    In large-scale applications of undirected graphical models, such as social networks and biological networks, similar patterns occur frequently and give rise to similar parameters. In this situation, it is beneficial to group the parameters for more efficient learning. We show that even when the grouping is unknown, we can infer these parameter groups during learning via a Bayesian approach. We impose a Dirichlet process prior on the parameters. Posterior inference usually involves calculating intractable terms, and we propose two approximation algorithms, namely a Metropolis-Hastings algorithm with auxiliary variables and a Gibbs sampling algorithm with “stripped” Beta approximation (Gibbs_SBA). Simulations show that both algorithms outperform conventional maximum likelihood estimation (MLE). Gibbs_SBA’s performance is close to Gibbs sampling with exact likelihood calculation. Models learned with Gibbs_SBA also generalize better than the models learned by MLE on real-world Senate voting data. PMID:25404848
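The Dirichlet process prior that induces the parameter grouping can be sketched via its Chinese-restaurant-process representation. The snippet below is illustrative only: `alpha` and the problem size are made up, and the paper's actual algorithms are MCMC samplers over the full posterior; this shows only how such a prior partitions parameters into a data-driven number of groups.

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Sample group assignments for n parameters from a CRP(alpha) prior."""
    assignments, counts = [0], [1]
    for _ in range(1, n):
        # Join an existing group with prob. proportional to its size,
        # or open a new one with prob. proportional to alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

rng = np.random.default_rng(1)
groups = crp_assignments(200, alpha=2.0, rng=rng)
n_groups = len(set(groups))  # typically on the order of alpha * log(n)
```

The "rich get richer" dynamics of the prior are what let similar parameters share a group, and hence share statistical strength, without fixing the number of groups in advance.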

  5. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.

  6. Fertility intentions and outcomes: Implementing the Theory of Planned Behavior with graphical models.

    PubMed

    Mencarini, Letizia; Vignoli, Daniele; Gottard, Anna

    2015-03-01

    This paper studies fertility intentions and their outcomes, analyzing the complete path leading to fertility behavior according to the social psychological model of the Theory of Planned Behavior (TPB). We move beyond existing research by using graphical models to obtain a precise understanding, and a formal description, of the developmental fertility decision-making process. Our findings yield new results for the Italian case that are empirically robust and theoretically coherent, adding important insights into the effectiveness of the TPB for fertility research. In line with the TPB, all of the intentions' primary antecedents are found to be determinants of the level of fertility intentions, but they do not affect fertility outcomes directly, being pre-filtered by fertility intentions. Nevertheless, in contrast with the TPB, background factors are not fully mediated by the intentions' primary antecedents: they directly influence fertility intentions and even fertility behaviors. PMID:26047838

  7. Learning Sequence Determinants of Protein:Protein Interaction Specificity with Sparse Graphical Models

    PubMed Central

    Kamisetty, Hetunandan; Ghosh, Bornika; Langmead, Christopher James; Bailey-Kellogg, Chris

    2015-01-01

    In studying the strength and specificity of interaction between members of two protein families, key questions center on which pairs of possible partners actually interact, how well they interact, and why they interact while others do not. The advent of large-scale experimental studies of interactions between members of a target family and a diverse set of possible interaction partners offers the opportunity to address these questions. We develop here a method, DgSpi (data-driven graphical models of specificity in protein:protein interactions), for learning and using graphical models that explicitly represent the amino acid basis for interaction specificity (why) and extend earlier classification-oriented approaches (which) to predict the ΔG of binding (how well). We demonstrate the effectiveness of our approach in analyzing and predicting interactions between a set of 82 PDZ recognition modules against a panel of 217 possible peptide partners, based on data from MacBeath and colleagues. Our predicted ΔG values are highly predictive of the experimentally measured ones, reaching correlation coefficients of 0.69 in 10-fold cross-validation and 0.63 in leave-one-PDZ-out cross-validation. Furthermore, the model serves as a compact representation of amino acid constraints underlying the interactions, enabling protein-level ΔG predictions to be naturally understood in terms of residue-level constraints. Finally, DgSpi readily enables the design of new interacting partners, and we demonstrate that designed ligands are novel and diverse. PMID:25973864

  8. Molecular Graphics and Chemistry.

    ERIC Educational Resources Information Center

    Weber, Jacques; And Others

    1992-01-01

    Explains molecular graphics, i.e., the application of computer graphics techniques to investigate molecular structure, function, and interaction. Structural models and molecular surfaces are discussed, and a theoretical model that can be used for the evaluation of intermolecular interaction energies for organometallics is described. (45…

  9. Graphical Representations for Ising and Potts Models in General External Fields

    NASA Astrophysics Data System (ADS)

    Cioletti, Leandro; Vila, Roberto

    2016-01-01

    This work is concerned with the theory of graphical representation for the Ising and Potts models over general lattices with non-translation invariant external field. We explicitly describe in terms of the random-cluster representation the distribution function and, consequently, the expected value of a single spin for the Ising and q-state Potts models with general external fields. We also consider the Gibbs states for the Edwards-Sokal representation of the Potts model with non-translation invariant magnetic field and prove a version of the FKG inequality for the so called general random-cluster model (GRC model) with free and wired boundary conditions in the non-translation invariant case. Adding the amenability hypothesis on the lattice, we obtain the uniqueness of the infinite connected component and the almost sure quasilocality of the Gibbs measures for the GRC model with such general magnetic fields. As a final application of the theory developed, we show the uniqueness of the Gibbs measures for the ferromagnetic Ising model with a positive power-law decay magnetic field with small enough power, as conjectured in Bissacot et al. (Commun Math Phys 337: 41-53, 2015).

  10. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks

    PubMed Central

    Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L.; Sweet, Robert A.; Wang, Jieru; Chen, Wei

    2016-01-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of a GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is several orders of magnitude faster than the currently implemented algorithm for the method of Ren et al., without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer’s disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing a Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named “FastGGM”. PMID:26872036

  11. Gaussian graphical modeling reconstructs pathway reactions from high-throughput metabolomics data

    PubMed Central

    2011-01-01

    Background With the advent of high-throughput targeted metabolic profiling techniques, the question of how to interpret and analyze the resulting vast amount of data becomes more and more important. In this work we address the reconstruction of metabolic reactions from cross-sectional metabolomics data, that is without the requirement for time-resolved measurements or specific system perturbations. Previous studies in this area mainly focused on Pearson correlation coefficients, which however are generally incapable of distinguishing between direct and indirect metabolic interactions. Results In our new approach we propose the application of a Gaussian graphical model (GGM), an undirected probabilistic graphical model estimating the conditional dependence between variables. GGMs are based on partial correlation coefficients, that is pairwise Pearson correlation coefficients conditioned against the correlation with all other metabolites. We first demonstrate the general validity of the method and its advantages over regular correlation networks with computer-simulated reaction systems. Then we estimate a GGM on data from a large human population cohort, covering 1020 fasting blood serum samples with 151 quantified metabolites. The GGM is much sparser than the correlation network, shows a modular structure with respect to metabolite classes, and is stable to the choice of samples in the data set. On the example of human fatty acid metabolism, we demonstrate for the first time that high partial correlation coefficients generally correspond to known metabolic reactions. This feature is evaluated both manually by investigating specific pairs of high-scoring metabolites, and then systematically on a literature-curated model of fatty acid synthesis and degradation. Our method detects many known reactions along with possibly novel pathway interactions, representing candidates for further experimental examination. Conclusions In summary, we demonstrate strong signatures of
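The abstract's central point, that partial correlations can distinguish direct from indirect interactions while Pearson correlations cannot, can be sketched with a three-variable chain. In this illustrative numpy example the variable names are hypothetical: A and C are linked only through B, so their marginal correlation is substantial while their partial correlation is near zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Simulate a chain A -> B -> C: A and C interact only through B.
a = rng.normal(size=n)
b = 0.8 * a + rng.normal(size=n)
c = 0.8 * b + rng.normal(size=n)
data = np.column_stack([a, b, c])

# Marginal (Pearson) correlations
corr = np.corrcoef(data, rowvar=False)

# Partial correlations from the precision matrix Theta = Sigma^{-1}:
# rho_ij|rest = -Theta_ij / sqrt(Theta_ii * Theta_jj)
theta = np.linalg.inv(np.cov(data, rowvar=False))
d = np.sqrt(np.diag(theta))
partial = -theta / np.outer(d, d)
np.fill_diagonal(partial, 1.0)

marginal_ac = corr[0, 2]    # large: indirect dependence through B
partial_ac = partial[0, 2]  # near zero: no direct A-C reaction
```

A correlation network would draw an A-C edge here; the GGM correctly leaves it out while keeping the direct A-B and B-C edges.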

  12. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format (PDF) file.

  13. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, William

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude, and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  14. Introduction of a methodology for visualization and graphical interpretation of Bayesian classification models.

    PubMed

    Balfer, Jenny; Bajorath, Jürgen

    2014-09-22

    Supervised machine learning models are widely used in chemoinformatics, especially for the prediction of new active compounds or targets of known actives. Bayesian classification methods are among the most popular machine learning approaches for the prediction of activity from chemical structure. Much work has focused on predicting structure-activity relationships (SARs) on the basis of experimental training data. By contrast, only a few efforts have thus far been made to rationalize the performance of Bayesian or other supervised machine learning models and better understand why they might succeed or fail. In this study, we introduce an intuitive approach for the visualization and graphical interpretation of naïve Bayesian classification models. Parameters derived during supervised learning are visualized and interactively analyzed to gain insights into model performance and identify features that determine predictions. The methodology is introduced in detail and applied to assess Bayesian modeling efforts and predictions on compound data sets of varying structural complexity. Different classification models and features determining their performance are characterized in detail. A prototypic implementation of the approach is provided. PMID:25137527

  15. A Graphical User Interface for Parameterizing Biochemical Models of Photosynthesis and Chlorophyll Fluorescence

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2015-12-01

    Recent advances in optical remote sensing of photosynthesis offer great promise for estimating gross primary productivity (GPP) at leaf, canopy and even global scale. These methods - including solar-induced chlorophyll fluorescence (SIF) emission, fluorescence spectra, and hyperspectral features such as the red edge and the photochemical reflectance index (PRI) - can be used to greatly enhance the predictive power of global circulation models (GCMs) by providing better constraints on GPP. The way to use measured optical data to parameterize existing models such as SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) is not trivial, however. We have therefore extended a biochemical model to include fluorescence and other parameters in a coupled treatment. To help parameterize the model, we then use nonlinear curve-fitting routines to determine the parameter set that enables model results to best fit leaf-level gas exchange and optical data measurements. To make the tool more accessible to all practitioners, we have further designed a graphical user interface (GUI)-based front end to allow researchers to analyze data with a minimum of effort while, at the same time, allowing them to change parameters interactively to visualize how variations in model parameters affect predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. Here we discuss the tool and its effectiveness, using recently gathered leaf-level data.
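The nonlinear curve-fitting step can be sketched with SciPy's `curve_fit`. The light-response function below is a deliberately simplified stand-in for the coupled SCOPE/biochemical model described above, and all parameter names and values are illustrative, not taken from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def light_response(par, a_max, k):
    """Simplified saturating light-response curve (illustrative stand-in)."""
    return a_max * (1.0 - np.exp(-k * par))

rng = np.random.default_rng(3)
par = np.linspace(0, 2000, 40)          # photosynthetically active radiation
true_a_max, true_k = 25.0, 0.002
obs = light_response(par, true_a_max, true_k) + rng.normal(0, 0.5, par.size)

# Least-squares parameter estimation, as a GUI front end might run internally
popt, pcov = curve_fit(light_response, par, obs, p0=[20.0, 0.001])
a_max_hat, k_hat = popt
```

The same pattern scales to richer models: a GUI simply wraps the choice of model function, starting values, and the display of the fitted curve against the measurements.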

  16. uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications

    PubMed Central

    Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.

    2015-01-01

    In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987

  17. Graphical representation of life paths to better convey results of decision models to patients.

    PubMed

    Rubrichi, Stefania; Rognoni, Carla; Sacchi, Lucia; Parimbelli, Enea; Napolitano, Carlo; Mazzanti, Andrea; Quaglini, Silvana

    2015-04-01

    The inclusion of patients' perspectives in clinical practice has become an important matter for health professionals, in view of the increasing attention to patient-centered care. In this regard, this report illustrates a method for developing a visual aid that supports the physician in the process of informing patients about a critical decisional problem. In particular, we focused on interpretation of the results of decision trees embedding Markov models implemented with the commercial tool TreeAge Pro. Starting from patient-level simulations and exploiting some advanced functionalities of TreeAge Pro, we combined results to produce a novel graphical output that represents the distributions of outcomes over the lifetime for the different decision options, thus becoming a more informative decision support in a context of shared decision making. The training example used to illustrate the method is a decision tree for thromboembolism risk prevention in patients with nonvalvular atrial fibrillation. PMID:25589524

  18. Glossiness of Colored Papers based on Computer Graphics Model and Its Measuring Method

    NASA Astrophysics Data System (ADS)

    Aida, Teizo

    In the case of colored papers, the color of the surface strongly affects the gloss of the paper. A new glossiness measure for such colored papers is suggested in this paper. First, using achromatic and chromatic Munsell colored chips, the author obtained experimental equations representing the relation between lightness V (or V and saturation C) and the psychological glossiness Gph of these chips. The author then defined a new glossiness G for colored papers, based on the above-mentioned experimental equations for Gph and Cook-Torrance's reflection model, which is widely used in the field of Computer Graphics. This new glossiness is shown to be nearly proportional to the psychological glossiness Gph. The measuring system for the new glossiness G is furthermore described. The measuring time for one specimen is within 1 minute.

  19. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    NASA Technical Reports Server (NTRS)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the values of these parameters is important for comprehending and checking the data generated, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  20. NATURAL graphics

    NASA Technical Reports Server (NTRS)

    Jones, R. H.

    1984-01-01

    The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.

  1. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  2. Gray component replacement using color mixing models

    NASA Astrophysics Data System (ADS)

    Kang, Henry R.

    1994-05-01

    A new approach to gray component replacement (GCR) has been developed. It employs color mixing theory for modeling the spectral fit between 3-color and 4-color prints. To achieve this goal, we first examine the accuracy of the models with respect to the experimental results by applying them to prints made by a Canon Color Laser Copier-500 (CLC-500). An empirical halftone correction factor is used for improving the data fitting. Among the models tested, the halftone-corrected Kubelka-Munk theory gives the closest fit, followed by the halftone-corrected Beer-Bouguer law and the Yule-Nielsen approach. We then apply the halftone-corrected Beer-Bouguer law to GCR. The main feature of this GCR approach is that it is based on spectral measurements of the primary color step wedges and a software package implementing the color mixing model. The software determines the amount of the gray component to be removed, then adjusts each primary color until a good match of the peak wavelengths between the 3-color and 4-color spectra is obtained. Results indicate that the average ΔEab between the cmy and cmyk renditions of 64 color patches is 3.11; eighty-seven percent of the patches have a ΔEab of less than 5 units. The advantage of this approach is its simplicity: there is no need for the black printer and under-color addition. Because this approach is based on spectral reproduction, it minimizes metamerism.
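The Beer-Bouguer part of the model rests on the fact that ink layers add in optical density D = -log10(R), which is equivalent to multiplying the single-ink reflectances. A minimal numpy sketch with made-up 8-band spectra (the paper's halftone correction factor is omitted, and the values are not measured data):

```python
import numpy as np

# Hypothetical 8-band reflectance spectra for two ink layers, 400-700 nm
wavelengths = np.linspace(400, 700, 8)
r_cyan    = np.array([0.85, 0.80, 0.75, 0.60, 0.35, 0.20, 0.15, 0.12])
r_magenta = np.array([0.70, 0.45, 0.25, 0.20, 0.35, 0.60, 0.80, 0.85])

# Beer-Bouguer: densities of superimposed layers add...
density = -np.log10(r_cyan) - np.log10(r_magenta)

# ...so the predicted overprint reflectance is the product of the
# single-ink reflectances: 10**(-D) == r_cyan * r_magenta
r_overprint = 10.0 ** (-density)
```

A GCR routine in this spirit would compare such predicted 3-color and 4-color spectra while trading cmy amounts against black, which is how the peak-wavelength matching described above can be driven from step-wedge measurements alone.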

  3. Toward Better Modeling of Supercritical Turbulent Mixing

    NASA Technical Reports Server (NTRS)

    Selle, Laurent; Okongo'o, Nora; Bellan, Josette; Harstad, Kenneth

    2008-01-01

    This study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamic supercritical (here, high pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation-of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric-pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of high-density-gradient-magnitude regions found in the DNS and observed experimentally under fully turbulent flow conditions. A model has been derived for the new term in the momentum equation and was found to perform well at small filter size but to deteriorate with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation; further investigation would be needed to determine whether they are too computationally intensive for LES.

  4. Higher-order ice-sheet modelling accelerated by multigrid on graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian; Egholm, David

    2013-04-01

    Higher-order ice flow modelling is a very computationally intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must cover very long time series, while both high temporal and spatial resolution are needed to resolve small effects. The use of higher-order and full-Stokes models has therefore seen very limited usage in this field. However, recent advances in graphics card (GPU) technology for high-performance computing have proven extremely efficient in accelerating many large-scale scientific computations. General-purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
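    The Red-Black Gauss-Seidel iteration at the core of the solver can be sketched on a model problem. The sketch below applies it to a 2D Poisson equation as a stand-in for the nonlinear iSOSIA operator; the checkerboard colouring is what exposes the parallelism a GPU exploits, since all nodes of one colour depend only on nodes of the other colour and can be updated simultaneously.

```python
import numpy as np

def rb_gauss_seidel(u, f, h, sweeps=100):
    """Red-Black Gauss-Seidel for the 2D Poisson equation -lap(u) = f
    with homogeneous Dirichlet boundaries (border of u held fixed).
    Within each colour, every update is independent of the others,
    which is the property a GPU implementation exploits."""
    for _ in range(sweeps):
        for colour in (0, 1):  # 0 = red, 1 = black
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == colour:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                          + u[i, j - 1] + u[i, j + 1]
                                          + h * h * f[i, j])
    return u

n = 17
h = 1.0 / (n - 1)
f = np.ones((n, n))
u = rb_gauss_seidel(np.zeros((n, n)), f, h, sweeps=500)
```

    In the article's solver this smoother is nested inside a FAS multigrid cycle, which handles the long-wavelength error components that plain Gauss-Seidel reduces only slowly.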

  5. Inference of ICF implosion core mix using experimental data and theoretical mix modeling

    SciTech Connect

    Sherrill, Leslie Welser; Haynes, Donald A; Cooley, James H; Sherrill, Manolo E; Mancini, Roberto C; Tommasini, Riccardo; Golovkin, Igor E; Haan, Steven W

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model predicted trends in the width of the mix layer as a function of initial shell thickness. These results contribute to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increasing confidence in the methods used to extract mixing information from experimental data.

  6. Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling

    SciTech Connect

    Welser-Sherrill, L; Haynes, D A; Mancini, R C; Cooley, J H; Tommasini, R; Golovkin, I E; Sherrill, M E; Haan, S W

    2008-04-30

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.

  7. Graphic Storytelling

    ERIC Educational Resources Information Center

    Thompson, John

    2009-01-01

    Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…

  8. Business Graphics

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional level graphics for business communications and presentations. Products are no longer being manufactured.

  9. Mixing parameterizations in ocean climate modeling

    NASA Astrophysics Data System (ADS)

    Moshonkin, S. N.; Gusev, A. V.; Zalesny, V. B.; Byshev, V. I.

    2016-03-01

    Results of numerical experiments with an eddy-permitting ocean circulation model on the simulation of the climatic variability of the North Atlantic and the Arctic Ocean are analyzed. We compare ocean simulation quality using different subgrid mixing parameterizations. The circulation model is found to be sensitive to the mixing parameterization. The computation of viscosity and diffusivity coefficients by an original splitting algorithm of the evolution equations for turbulence characteristics is found to be as efficient as traditional Monin-Obukhov parameterizations; at the same time, the variability of ocean climate characteristics is simulated more adequately. The simulation of salinity fields in the entire study region improves most significantly. Turbulent processes have a large long-term effect on the circulation through changes in the density fields. The velocity fields in the Gulf Stream and in the entire North Atlantic Subpolar Cyclonic Gyre are reproduced more realistically. The surface level height in the Arctic Basin is simulated more faithfully, marking the Beaufort Gyre better. The use of the Prandtl number as a function of the Richardson number improves the quality of ocean modeling.

  10. Inferring Caravaggio's studio lighting and praxis in The calling of St. Matthew by computer graphics modeling

    NASA Astrophysics Data System (ADS)

    Stork, David G.; Nagy, Gabor

    2010-02-01

    We explored the working methods of the Italian Baroque master Caravaggio through computer graphics reconstruction of his studio, with special focus on his use of lighting and illumination in The calling of St. Matthew. Although he surely took artistic liberties while constructing this and other works and did not strive to provide a "photographic" rendering of the tableau before him, there are nevertheless numerous visual clues to the likely studio conditions and working methods within the painting: the falloff of brightness along the rear wall, the relative brightness of the faces of figures, and the variation in sharpness of cast shadows (i.e., umbrae and penumbrae). We explored two studio lighting hypotheses: that the primary illumination was local (and hence artificial) and that it was distant solar. We find that the visual evidence can be consistent with local (artificial) illumination if Caravaggio painted his figures separately, adjusting the brightness on each to compensate for the falloff in illumination. Alternatively, the evidence is consistent with solar illumination only if the rear wall had particular reflectance properties, as described by a bi-directional reflectance distribution function, BRDF. (Ours is the first research applying computer graphics to the understanding of artists' praxis that models subtle reflectance properties of surfaces through BRDFs, a technique that may find use in studies of other artists.) A somewhat puzzling visual feature, unnoted in the scholarly literature, is the upward-slanting cast shadow in the upper-right corner of the painting. We found this shadow is naturally consistent with a local illuminant passing through a small window perpendicular to the viewer's line of sight, but could also be consistent with solar illumination if the shadow was due to a slanted, overhanging section of a roof outside the artist's studio. Our results place likely conditions upon any hypotheses concerning Caravaggio's working methods and

  11. A graphical model method for integrating multiple sources of genome-scale data

    PubMed Central

    Dvorkin, Daniel; Biehs, Brian; Kechris, Katerina

    2016-01-01

    Making effective use of multiple data sources is a major challenge in modern bioinformatics. Genome-wide data such as measures of transcription factor binding, gene expression, and sequence conservation, which are used to identify binding regions and genes that are important to major biological processes such as development and disease, can be difficult to use together due to the different biological meanings and statistical distributions of the heterogeneous data types, but each can provide valuable information for understanding the processes under study. Here we present methods for integrating multiple data sources to gain a more complete picture of gene regulation and expression. Our goal is to identify genes and cis-regulatory regions which play specific biological roles. We describe a graphical mixture model approach for data integration, examine the effect of using different model topologies, and discuss methods for evaluating the effectiveness of the models. Model fitting is computationally efficient and produces results which have clear biological and statistical interpretations. The Hedgehog and Dorsal signaling pathways in Drosophila, which are critical in embryonic development, are used as examples. PMID:23934610

  12. Mixing parametrizations for ocean climate modelling

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    An algorithm is presented for splitting the total evolutionary equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, three schemes are implemented: an explicit-implicit numerical scheme, the analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling Arctic and Atlantic climate decadal variability with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Parameterizations using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model lead to a better representation of ocean climate than the faster parameterization using the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Usage of the PP parameterization in the circulation model leads to realistic simulation of density and circulation but violates T,S-relationships; this error is largely avoided by the proposed parameterizations containing the split turbulence model.
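    The three options for the generation-dissipation stage can be illustrated on a scalar stand-in. After the transport-diffusion step is split off, the TKE equation at a grid point reduces (schematically) to an ODE dk/dt = P - k/T with production P and relaxation time T frozen over the step; all numbers below are hypothetical, chosen only to show that the explicit-implicit scheme, the exact step, and the asymptotic limit share the same equilibrium k = P*T.

```python
import math

# Hypothetical production and turbulence time scale for one grid point
P, T = 2.0e-4, 50.0
k0, dt = 1.0e-3, 10.0   # initial TKE and generation-dissipation step

def step_explicit_implicit(k, dt):
    """Explicit production, implicit dissipation (unconditionally stable)."""
    return (k + dt * P) / (1.0 + dt / T)

def step_analytical(k, dt):
    """Exact solution of dk/dt = P - k/T over one step."""
    k_eq = P * T
    return k_eq + (k - k_eq) * math.exp(-dt / T)

def step_asymptotic(k, dt):
    """Asymptotic (equilibrium) limit of the analytical solution."""
    return P * T

k_ei, k_an = k0, k0
for _ in range(100):
    k_ei = step_explicit_implicit(k_ei, dt)
    k_an = step_analytical(k_an, dt)
```

    All three schemes relax toward the same equilibrium; they differ in accuracy during the transient, which is the trade-off the abstract reports between the asymptotic parameterization and the two more accurate ones.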

  13. Cascade Models of Turbulence and Mixing

    NASA Astrophysics Data System (ADS)

    Kadanoff, Leo P.

    1997-01-01

    This note describes two kinds of work on turbulence. First it describes a simplified model of turbulent energy-cascades called the GOY model. Second it mentions work on a model of mixing in fluids. In addition to a brief historical discussion, I include some mention of our own work carried on at the University of Chicago by Jane Wang, Detlef Lohse, Roberto Benzi, Norbert Schörghofer, Scott Wunsch, Tong Zhou and myself. Our own studies are in large measure the outgrowth of a paper by M. H. Jensen, G. Paladin, and A. Vulpiani [1]. I mention this connection with some sadness because I recall Paladin's recent death in a mountain accident.

  14. Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's Disease diagnosis

    PubMed Central

    Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M.; Ramírez, Javier

    2015-01-01

    Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in the brain activation patterns along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models allow revealing conditional independence between regions by estimating the covariance between two variables given the rest as constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits perfectly here, as brain regions usually interact with only a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to improve the classification accuracy. 
The results obtained in this work show the
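    Sparse inverse covariance estimation of the kind this record describes is commonly done with the graphical lasso. A minimal sketch using scikit-learn's GraphicalLasso on synthetic data follows; the five "regions", the precision entries and the penalty value are all illustrative, not ADNI data. Zero entries in the estimated precision matrix correspond to conditional independence, i.e., missing edges in the graph.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth sparse precision matrix for 5 "regions": only the
# pairs (0,1) and (2,3) interact conditionally on the rest.
prec = np.eye(5)
prec[0, 1] = prec[1, 0] = 0.4
prec[2, 3] = prec[3, 2] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)

# L1-penalized maximum likelihood estimate of the precision matrix
model = GraphicalLasso(alpha=0.05).fit(X)
P = model.precision_

# Recover edges where the off-diagonal precision is non-negligible
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(P[i, j]) > 0.1]
```

    With enough samples the recovered edge set matches the true conditional-dependence structure, which is the sense in which SICE "learns" the connectivity graph.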

  15. A Comparison of Learning Style Models and Assessment Instruments for University Graphics Educators

    ERIC Educational Resources Information Center

    Harris, La Verne Abe; Sadowski, Mary S.; Birchman, Judy A.

    2006-01-01

    Kolb (2004) and others have defined learning style as a preference by which students learn and remember what they have learned. This presentation will include a summary of learning style research published in the "Engineering Design Graphics Journal" over the past 15 years on the topic of learning styles and graphics education. The presenters will…

  16. A Curriculum Model: Engineering Design Graphics Course Updates Based on Industrial and Academic Institution Requirements

    ERIC Educational Resources Information Center

    Meznarich, R. A.; Shava, R. C.; Lightner, S. L.

    2009-01-01

    Engineering design graphics courses taught in colleges or universities should provide and equip students preparing for employment with the basic occupational graphics skill competences required by engineering and technology disciplines. Academic institutions should introduce and include topics that cover the newer and more efficient graphics…

  17. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…

  18. Modeling populations of rotationally mixed massive stars

    NASA Astrophysics Data System (ADS)

    Brott, I.

    2011-02-01

    Massive stars can be considered as cosmic engines. With their high luminosities, strong stellar winds and violent deaths they drive the evolution of galaxies throughout the history of the universe. Despite the importance of massive stars, their evolution is still poorly understood. Two major issues have plagued evolutionary models of massive stars until today: mixing and mass loss. Since the effects of mass loss remain limited on the main sequence in the considered mass and metallicity range, this thesis concentrates on the role of mixing in massive stars, approaching the problem at the crossroads of observations and simulations. The main question: do evolutionary models of single stars, accounting for the effects of rotation, reproduce the observed properties of real stars? In particular, we are interested in whether the evolutionary models can reproduce the surface abundance changes during the main-sequence phase. To constrain our models we build a population synthesis model for the sample of the VLT-FLAMES Survey of Massive Stars, for which star-formation history and rotational velocity distribution are well constrained. We consider the four main regions of the Hunter diagram: nitrogen-unenriched slow rotators and nitrogen-enriched fast rotators, which are predicted by theory; and nitrogen-enriched slow rotators and nitrogen-unenriched fast rotators, which are not predicted by our model. We conclude that currently these comparisons are not sufficient to verify the theory of rotational mixing. Physical processes in addition to rotational mixing appear necessary to explain the stars in the latter two regions. The chapters of this thesis have been published in the following journals: Ch. 2: ``Rotating Massive Main-Sequence Stars I: Grids of Evolutionary Models and Isochrones'', I. Brott, S. E. de Mink, M. Cantiello, N. Langer, A. de Koter, C. J. Evans, I. Hunter, C. Trundle, J. S. Vink, submitted to Astronomy & Astrophysics Ch. 
3: ``The VLT-FLAMES Survey of Massive

  19. Reducing Modeling Error of Graphical Methods for Estimating Volume of Distribution Measurements in PIB-PET study

    PubMed Central

    Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M

    2010-01-01

    Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence, but they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
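    Logan's graphical method referred to above can be illustrated on simulated data. The sketch assumes a one-tissue compartment model with hypothetical rate constants (not fitted to any real PIB data); plotting the running integral of tissue activity against the running integral of the plasma input, both normalized by tissue activity, gives a late-time slope equal to the distribution volume V_T = K1/k2.

```python
import numpy as np

# One-tissue compartment simulation with illustrative kinetic constants:
# K1 = 0.2 /min, k2 = 0.1 /min, so the true distribution volume is 2.0.
K1, k2 = 0.2, 0.1
dt = 0.01
t = np.arange(0.0, 90.0, dt)   # minutes
Cp = np.exp(-0.05 * t)         # assumed plasma input function

# Forward-Euler integration of dC/dt = K1*Cp - k2*C
C = np.zeros_like(t)
for i in range(1, t.size):
    C[i] = C[i - 1] + dt * (K1 * Cp[i - 1] - k2 * C[i - 1])

int_C = np.cumsum(C) * dt      # running integrals (rectangle rule)
int_Cp = np.cumsum(Cp) * dt

# Logan plot: y = int_C/C versus x = int_Cp/C; late-time slope = V_T
late = t > 40.0
x = int_Cp[late] / C[late]
y = int_C[late] / C[late]
slope, intercept = np.polyfit(x, y, 1)
```

    For the one-tissue model the Logan relation is exact at all times, so the fitted slope recovers K1/k2 up to discretization error; the bias the paper analyses arises when the underlying kinetics are two-tissue but the same linear fit is applied.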

  20. Graphical determination of metal bioavailability to soil invertebrates utilizing the Langmuir sorption model

    SciTech Connect

    Donkin, S.G.

    1997-09-01

    A new method of performing soil toxicity tests with free-living nematodes exposed to several metals and soil types has been adapted to the Langmuir sorption model in an attempt at bridging the gap between physico-chemical and biological data gathered in the complex soil matrix. Pseudo-Langmuir sorption isotherms have been developed using nematode toxic responses (lethality, in this case) in place of measured solvated metal, in order to more accurately model bioavailability. This method allows the graphical determination of Langmuir coefficients describing maximum sorption capacities and sorption affinities of various metal-soil combinations in the context of real biological responses of indigenous organisms. Results from nematode mortality tests with zinc, cadmium, copper, and lead in four soil types and water were used for isotherm construction. The level of agreement between these results and available literature data on metal sorption behavior in soils suggests that biologically relevant data may be successfully fitted to sorption models such as the Langmuir. This would allow for accurate prediction of soil contaminant concentrations which have minimal effect on indigenous invertebrates.
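    The graphical determination of Langmuir coefficients described above uses the standard linearization of the isotherm: with sorbed amount q = q_max*K*C/(1 + K*C), plotting C/q against C gives a line with slope 1/q_max and intercept 1/(q_max*K). The sketch below uses hypothetical values of q_max and K (in the paper's setting, q would be the nematode response in place of sorbed metal).

```python
import numpy as np

# Synthetic sorption data from a Langmuir isotherm with hypothetical
# q_max = 5 (maximum sorption capacity) and K = 2 (sorption affinity).
q_max_true, K_true = 5.0, 2.0
C = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])       # solution concentration
q = q_max_true * K_true * C / (1.0 + K_true * C)   # sorbed amount

# Graphical determination from the linearized form
# C/q = C/q_max + 1/(q_max*K): slope gives q_max, slope/intercept gives K.
slope, intercept = np.polyfit(C, C / q, 1)
q_max_est = 1.0 / slope
K_est = slope / intercept
```

    With noisy biological response data the same fit would be performed by least squares on the measured points, and the recovered coefficients compared across metal-soil combinations.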

  1. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    NASA Astrophysics Data System (ADS)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speedup of 19 is achieved compared to ArcGIS and 15 compared to SAGA. We show that on GPUs the topological sort-based flow accumulation algorithm leads on average to a speedup by a factor of 7 over the flow-transfer algorithm. Thus a total speedup of the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km x 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many applications other than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
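    The idea behind topological-sort-based flow accumulation can be shown serially on a toy grid. For a pit-free DEM without flats, processing cells in decreasing order of elevation is one valid topological order, since D8 flow always goes strictly downhill; each cell then passes its accumulated count to its steepest-descent neighbour. The 3x4 DEM below is illustrative only.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Flow accumulation on a D8 grid, processing cells in decreasing
    order of elevation (a topological order for a pit-free DEM with no
    flats). Each cell passes its accumulated cell count, itself
    included, to its steepest-descent neighbour."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=np.int64)
    order = np.argsort(dem, axis=None)[::-1]  # highest cell first
    for idx in order:
        i, j = divmod(idx, cols)
        best, target = 0.0, None
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < rows and 0 <= nj < cols:
                    # drop per unit distance; diagonals are sqrt(2) away
                    drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                    if drop > best:
                        best, target = drop, (ni, nj)
        if target is not None:          # cells with no lower neighbour drain out
            acc[target] += acc[i, j]
    return acc

# A tilted plane draining toward the last column
dem = np.array([[4., 3., 2., 1.],
                [4., 3., 2., 1.],
                [4., 3., 2., 1.]])
acc = d8_flow_accumulation(dem)
```

    The GPU algorithms in the paper parallelize this dependency structure across levels of the topological order rather than looping serially, but the accumulated counts are the same.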

  2. Development of a graphical user interface in GIS raster format for the finite difference ground-water model code, MODFLOW

    SciTech Connect

    Heinzer, T.; Hansen, D.T.; Greer, W.; Sebhat, M.

    1996-12-31

    A geographic information system (GIS) was used in developing a graphical user interface (GUI) for use with the US Geological Survey's finite difference ground-water flow model, MODFLOW. The GUI permits the construction of a MODFLOW based ground-water flow model from scratch in a GIS environment. The model grid, input data and output are stored as separate raster data sets which may be viewed, edited, and manipulated in a graphic environment. Other GIS data sets can be displayed with the model data sets for reference and evaluation. The GUI sets up a directory structure for storage of the files associated with the ground-water model and the raster data sets created by the interface. The GUI stores model coefficients and model output as raster values. Values stored by these raster data sets are formatted for use with the ground-water flow model code.

  3. Linkage Analysis with an Alternative Formulation for the Mixed Model of Inheritance: The Finite Polygenic Mixed Model

    PubMed Central

    Stricker, C.; Fernando, R. L.; Elston, R. C.

    1995-01-01

    This paper presents an extension of the finite polygenic mixed model of Fernando et al. (1994) to linkage analysis. The finite polygenic mixed model, extended for linkage analysis, leads to a likelihood that can be calculated using efficient algorithms developed for oligogenic models. For comparison, linkage analysis of 5 simulated 4021-member pedigrees was performed using the usual mixed model of inheritance, approximated by Hasstedt (1982), and the finite polygenic mixed model extended for linkage analysis presented here. Maximum likelihood estimates under the finite polygenic mixed model were closer to the simulated values in these pedigrees. PMID:8601502

  4. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    NASA Technical Reports Server (NTRS)

    Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

    Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with desired multiple requests submitted in sequence, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT model runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  5. Colocalization Estimation Using Graphical Modeling and Variational Bayesian Expectation Maximization: Towards a Parameter-Free Approach.

    PubMed

    Awate, Suyash P; Radhakrishnan, Thyagarajan

    2015-01-01

    In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually. This leads to loss of reproducibility of the results. This paper proposes a new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art. PMID:26221663

  6. Extended model for Richtmyer-Meshkov mix

    SciTech Connect

    Mikaelian, K O

    2009-11-18

    We examine four Richtmyer-Meshkov (RM) experiments on shock-generated turbulent mix and find them to be in good agreement with our earlier simple model in which the growth rate of the mixing layer width h following a shock or reshock is constant and given by 2αAΔv, independent of initial conditions h_0. Here A is the Atwood number (ρ_B - ρ_A)/(ρ_B + ρ_A), ρ_A,B are the densities of the two fluids, Δv is the jump in velocity induced by the shock or reshock, and α is the constant measured in Rayleigh-Taylor (RT) experiments: α^bubble ≈ 0.05-0.07, α^spike ≈ (1.8-2.5)α^bubble for A ≈ 0.7-1.0. In the extended model the growth rate begins to decay after a time t*, when h = h*, slowing down from h = h_0 + 2αAΔv·t to h ~ t^θ behavior, with θ^bubble ≈ 0.25 and θ^spike ≈ 0.36 for A ≈ 0.7. We ascribe this change-over to loss of memory of the direction of the shock or reshock, signaling transition from highly directional to isotropic turbulence. In the simplest extension of the model h*/h_0 is independent of Δv and depends only on A. We find that h*/h_0 ≈ 2.5-3.5 for A ≈ 0.7-1.0.
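    The piecewise growth law in this record is simple enough to write down directly. The sketch below matches the linear stage continuously to the power-law stage at t*; all parameter values are illustrative placeholders, not the measured coefficients.

```python
def mix_width(t, h0, alpha, A, dv, t_star, theta):
    """Mixing-layer width in the extended RM model: linear growth
    h = h0 + 2*alpha*A*dv*t up to t = t*, then self-similar power-law
    growth h ~ t**theta, matched continuously at t*."""
    h_star = h0 + 2.0 * alpha * A * dv * t_star
    if t <= t_star:
        return h0 + 2.0 * alpha * A * dv * t
    return h_star * (t / t_star) ** theta

# Hypothetical bubble-side parameters, for illustration only:
# alpha = 0.06, A = 0.7, dv = 100, theta = 0.25, change-over at t* = 2.0
h_before = mix_width(1.0, 0.1, 0.06, 0.7, 100.0, 2.0, 0.25)
h_after = mix_width(4.0, 0.1, 0.06, 0.7, 100.0, 2.0, 0.25)
```

    The change-over time t* would be set through the h*/h_0 ratio quoted in the abstract, which depends only on the Atwood number in the simplest extension.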

  7. Graphical Representations and Odds Ratios in a Distance-Association Model for the Analysis of Cross-Classified Data

    ERIC Educational Resources Information Center

    de Rooij, Mark; Heiser, Willem J.

    2005-01-01

    Although RC(M)-association models have become a generally useful tool for the analysis of cross-classified data, the graphical representation resulting from such an analysis can at times be misleading. The relationships present between row category points and column category points cannot be interpreted by inter point distances but only through…

  8. Design Graphics

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A mathematician, David R. Hedgley, Jr. developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.

  9. Hierarchical graphical models for simultaneous tracking and recognition in wide-area scenes.

    PubMed

    Nayak, Nandita M; Zhu, Yingying; Chowdhury, Amit K Roy

    2015-07-01

    We present a unified framework to track multiple people, as well as localize and label their activities, in complex long-duration video sequences. To do this, we focus on two aspects: 1) the influence of tracks on the activities performed by the corresponding actors and 2) the structural relationships across activities. We propose a two-level hierarchical graphical model, which learns the relationships between tracks, between tracks and their corresponding activity segments, as well as the spatiotemporal relationships across activity segments. Such contextual relationships between tracks and activity segments are exploited at both the levels in the hierarchy for increased robustness. An L1-regularized structure learning approach is proposed for this purpose. While it is well known that availability of the labels and locations of activities can help in determining tracks more accurately and vice-versa, most current approaches have dealt with these problems separately. Inspired by research in the area of biological vision, we propose a bidirectional approach that integrates both bottom-up and top-down processing, i.e., bottom-up recognition of activities using computed tracks and top-down computation of tracks using the obtained recognition. We demonstrate our results on the recent and publicly available UCLA and VIRAT data sets consisting of realistic indoor and outdoor surveillance sequences. PMID:25700452

  10. Quantum Chemistry for Solvated Molecules on Graphical Processing Units Using Polarizable Continuum Models.

    PubMed

    Liu, Fang; Luehr, Nathan; Kulik, Heather J; Martínez, Todd J

    2015-07-14

    The conductor-like polarization model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by computational expense of both the gas phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementation using over 20 small proteins in solvent environment. Using a single GPU, our method evaluates the C-PCM related integrals and their derivatives more than 10× faster than that with a conventional CPU-based implementation. Our improvements to the linear solver provide a further 3× acceleration. The overall calculations including C-PCM solvation require, typically, 20-40% more effort than that for their gas phase counterparts for a moderate basis set and molecule surface discretization level. The relative cost of the C-PCM solvation correction decreases as the basis sets and/or cavity radii increase. Therefore, description of solvation with this model should be routine. We also discuss applications to the study of the conformational landscape of an amyloid fibril. PMID:26575750
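
    The block-Jacobi preconditioning idea mentioned above (here deterministic rather than randomized, and as a generic sketch rather than the paper's GPU implementation) can be illustrated with a preconditioned conjugate-gradient solver:

```python
import numpy as np

def block_jacobi_preconditioner(A, block_size):
    """Invert the diagonal blocks of A once; return a function applying M^-1."""
    n = A.shape[0]
    blocks = []
    for i in range(0, n, block_size):
        j = min(i + block_size, n)
        blocks.append((i, j, np.linalg.inv(A[i:j, i:j])))
    def apply(r):
        out = np.empty_like(r)
        for i, j, inv in blocks:
            out[i:j] = inv @ r[i:j]
        return out
    return apply

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD demo system (illustrative)
n = 12
rng = np.random.default_rng(1)
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)
b = rng.normal(size=n)
x = pcg(A, b, block_jacobi_preconditioner(A, block_size=4))
```

    Because each block inverse is computed once and applied independently, the preconditioner maps naturally onto parallel hardware, which is the motivation for its use in the GPU setting described above.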

  11. MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation

    EPA Science Inventory

    Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...

  12. On Local Homogeneity and Stochastically Ordered Mixed Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg

    2006-01-01

    Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…

  13. Single calcium channel domain gating of synaptic vesicle fusion at fast synapses; analysis by graphic modeling

    PubMed Central

    Stanley, Elise F

    2015-01-01

    At fast-transmitting presynaptic terminals Ca2+ ions enter through voltage-gated calcium channels (CaVs) and bind to a synaptic vesicle (SV) -associated calcium sensor (SV-sensor) to gate fusion and discharge. An open CaV generates a high-concentration plume, or nanodomain of Ca2+ that dissipates precipitously with distance from the pore. At most fast synapses, such as the frog neuromuscular junction (NMJ), the SV sensors are located sufficiently close to individual CaVs to be gated by single nanodomains. However, at others, such as the mature rodent calyx of Held, the physiology is more complex, with evidence of CaVs both close to and distant from the SV sensor, and it is argued that release is gated primarily by the overlapping Ca2+ nanodomains from many CaVs. We devised a 'graphic modeling' method to sum Ca2+ from individual CaVs located at varying distances from the SV-sensor to determine the SV release probability and also the fraction of that probability that can be attributed to single domain gating. This method was applied first to simplified, low and high CaV density model release sites and then to published data on the contrasting frog NMJ and the rodent calyx of Held native synapses. We report 3 main predictions: the SV-sensor is positioned very close to the point at which the SV fuses with the membrane; single domain-release gating predominates even at synapses where the SV abuts a large cluster of CaVs; and even relatively remote CaVs can contribute significantly to single domain-based gating. PMID:26457441
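
    The summation idea can be sketched as a toy calculation: add the plume contribution of each channel at its distance from the sensor, then compare the nearest single channel's effect with the total. The 1/r plume profile, Hill-type sensor, and all numbers below are illustrative assumptions, not the paper's calibrated model.

```python
def plume(distance_nm, amplitude=1000.0):
    """Assumed nanodomain profile: Ca2+ concentration falls off as 1/r."""
    return amplitude / distance_nm

def release_probability(ca, kd=50.0, hill=4):
    """Assumed Hill-type response of the SV calcium sensor."""
    return ca**hill / (ca**hill + kd**hill)

distances = [20.0, 60.0, 100.0]            # channel-to-sensor distances, nm
total_ca = sum(plume(d) for d in distances)
p_total = release_probability(total_ca)
p_single = release_probability(plume(min(distances)))
# fraction of release attributable to the single nearest nanodomain
single_domain_fraction = p_single / p_total
```

    The steep (cooperative) sensor is what makes the nearest channel dominate: its plume alone can drive most of the release probability even when distant channels add substantial total Ca2+.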

  14. Computer graphics in aerodynamic analysis

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1984-01-01

    The use of computer graphics and its application to aerodynamic analyses on a routine basis is outlined. The mathematical modelling of the aircraft geometries and the shading technique implemented are discussed. Examples of computer graphics used to display aerodynamic flow field data and aircraft geometries are shown. A future need in computer graphics for aerodynamic analyses is addressed.

  15. The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education

    PubMed Central

    Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki

    2012-01-01

    Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759

  16. Probabilistic graphical models to deal with age estimation of living persons.

    PubMed

    Sironi, Emanuele; Gallidabino, Matteo; Weyermann, Céline; Taroni, Franco

    2016-03-01

    Due to the rise of criminal, civil and administrative judicial situations involving people lacking valid identity documents, age estimation of living persons has become an important operational procedure for numerous forensic and medicolegal services worldwide. The chronological age of a given person is generally estimated from the observed degree of maturity of some selected physical attributes by means of statistical methods. However, their application in the forensic framework suffers from some conceptual and practical drawbacks, as recently claimed in the specialised literature. The aim of this paper is therefore to offer an alternative solution for overcoming these limits, by reiterating the utility of a probabilistic Bayesian approach for age estimation. This approach allows one to deal in a transparent way with the uncertainty surrounding the age estimation process and to produce all the relevant information in the form of posterior probability distribution about the chronological age of the person under investigation. Furthermore, this probability distribution can also be used for evaluating in a coherent way the possibility that the examined individual is younger or older than a given legal age threshold having a particular legal interest. The main novelty introduced by this work is the development of a probabilistic graphical model, i.e. a Bayesian network, for dealing with the problem at hand. The use of this kind of probabilistic tool can significantly facilitate the application of the proposed methodology: examples are presented based on data related to the ossification status of the medial clavicular epiphysis. The reliability and the advantages of this probabilistic tool are presented and discussed. PMID:25794687
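
    The Bayesian reasoning described above can be sketched as a discrete prior-times-likelihood update. The age grid, maturity stages, and likelihood function below are invented for illustration; the paper's Bayesian network is built from real ossification data.

```python
ages = list(range(15, 25))                      # candidate chronological ages
prior = {a: 1.0 / len(ages) for a in ages}      # uniform prior over ages

def likelihood(stage, age):
    """P(observed maturity stage | age): toy model peaked at an assumed
    stage-specific midpoint age."""
    midpoint = {1: 16, 2: 19, 3: 22}[stage]
    return 1.0 / (1.0 + abs(age - midpoint))

def posterior(stage):
    """Posterior over age given one observed stage (Bayes' rule)."""
    unnorm = {a: prior[a] * likelihood(stage, a) for a in ages}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

post = posterior(stage=2)
# probability that the person exceeds a legal age threshold of 18
p_over_18 = sum(p for a, p in post.items() if a >= 18)
```

    The full posterior, rather than a point estimate, is what lets the threshold question ("is the person at least 18?") be answered coherently as a single probability.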

  17. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    USGS Publications Warehouse

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  18. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  19. Mixed Barrier Model for the Mixed Glass Former Effect in Ion Conducting Glasses

    NASA Astrophysics Data System (ADS)

    Schuch, Michael; Müller, Christian R.; Maass, Philipp; Martin, Steve W.

    2009-04-01

    Mixing two types of glass formers in ion conducting glasses can be exploited to lower the conductivity activation energy and thereby increase the ionic conductivity, a phenomenon known as the mixed glass former effect (MGFE). We develop a model for this MGFE, where activation barriers for individual ion jumps get lowered in inhomogeneous environments containing both types of network forming units. Fits of the model to experimental data allow one to estimate the strength of the barrier reduction, and they indicate a spatial clustering of the two types of network formers. The model predicts a time-temperature superposition of conductivity spectra onto a common master curve independent of the mixing ratio.

  20. Inferring transcriptional gene regulation network of starch metabolism in Arabidopsis thaliana leaves using graphical Gaussian model

    PubMed Central

    2012-01-01

    Background Starch serves as a temporal storage of carbohydrates in plant leaves during day/night cycles. To study transcriptional regulatory modules of this dynamic metabolic process, we conducted gene regulation network analysis based on small-sample inference of graphical Gaussian model (GGM). Results Time-series significant analysis was applied for Arabidopsis leaf transcriptome data to obtain a set of genes that are highly regulated under a diurnal cycle. A total of 1,480 diurnally regulated genes included 21 starch metabolic enzymes, 6 clock-associated genes, and 106 transcription factors (TF). A starch-clock-TF gene regulation network comprising 117 nodes and 266 edges was constructed by GGM from these 133 significant genes that are potentially related to the diurnal control of starch metabolism. From this network, we found that β-amylase 3 (b-amy3: At4g17090), which participates in starch degradation in chloroplast, is the most frequently connected gene (a hub gene). The robustness of gene-to-gene regulatory network was further analyzed by TF binding site prediction and by evaluating global co-expression of TFs and target starch metabolic enzymes. As a result, two TFs, indeterminate domain 5 (AtIDD5: At2g02070) and constans-like (COL: At2g21320), were identified as positive regulators of starch synthase 4 (SS4: At4g18240). The inference model of AtIDD5-dependent positive regulation of SS4 gene expression was experimentally supported by decreased SS4 mRNA accumulation in Atidd5 mutant plants during the light period of both short and long day conditions. COL was also shown to positively control SS4 mRNA accumulation. Furthermore, the knockout of AtIDD5 and COL led to deformation of chloroplast and its contained starch granules. This deformity also affected the number of starch granules per chloroplast, which increased significantly in both knockout mutant lines. Conclusions In this study, we utilized a systematic approach of microarray analysis to discover
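
    The GGM principle used in this study can be sketched in a few lines: edges in the network correspond to non-zero partial correlations, which are read off the inverse covariance (precision) matrix. This is a plain-inverse illustration; small-sample GGM inference as applied to the 133-gene set relies on shrinkage estimators instead.

```python
import numpy as np

def partial_correlations(X):
    """X: samples x variables. Partial correlation between i and j given
    all others, from the precision matrix."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

def edges(pcor, threshold=0.2):
    """Variable pairs whose |partial correlation| exceeds the threshold."""
    n = pcor.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(pcor[i, j]) > threshold]

# synthetic demo: variable 1 depends directly on variable 0
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] += 0.8 * X[:, 0]
net = edges(partial_correlations(X))
```

    Conditioning on all other variables is what distinguishes a GGM edge from ordinary co-expression: indirect correlations mediated by a third gene are suppressed.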

  1. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  2. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  3. Radiolysis Model Formulation for Integration with the Mixed Potential Model

    SciTech Connect

    Buck, Edgar C.; Wittman, Richard S.

    2014-07-10

    The U.S. Department of Energy Office of Nuclear Energy (DOE-NE), Office of Fuel Cycle Technology has established the Used Fuel Disposition Campaign (UFDC) to conduct the research and development activities related to storage, transportation, and disposal of used nuclear fuel (UNF) and high-level radioactive waste. Within the UFDC, the components for a general system model of the degradation and subsequent transport of UNF are being developed to analyze the performance of disposal options [Sassani et al., 2012]. Two model components of the near-field part of the problem are the ANL Mixed Potential Model and the PNNL Radiolysis Model. This report is in response to the desire to integrate the two models as outlined in [Buck, E.C, J.L. Jerden, W.L. Ebert, R.S. Wittman, (2013) “Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation,” FCRD-UFD-2013-000290, M3FT-PN0806058

  4. ModelMuse: A U.S. Geological Survey Open-Source, Graphical User Interface for Groundwater Models

    NASA Astrophysics Data System (ADS)

    Winston, R. B.

    2013-12-01

    ModelMuse is a free publicly-available graphical preprocessor used to generate the input and display the output for several groundwater models. It is written in Object Pascal and the source code is available on the USGS software web site. Supported models include the MODFLOW family of models, PHAST (version 1), and SUTRA version 2.2. With MODFLOW and PHAST, the user generates a grid and uses 'objects' (points, lines, and polygons) to define boundary conditions and the spatial variation in aquifer properties. Because the objects define the spatial variation, the grid can be changed without the user needing to re-enter spatial data. The same paradigm is used with SUTRA except that the user generates a quadrilateral finite-element mesh instead of a rectangular grid. The user interacts with the model in a top view and in a vertical cross section. The cross section can be at any angle or location. There is also a three-dimensional view of the model. For SUTRA, a new method of visualizing the permeability and related properties has been introduced. In three dimensional SUTRA models, the user specifies the permeability tensor by specifying permeability in three mutually orthogonal directions that can be oriented in space in any direction. Because it is important for the user to be able to check both the magnitudes and directions of the permeabilities, ModelMuse displays the permeabilities as either a two-dimensional or a three-dimensional vector plot. Color is used to differentiate the maximum, middle, and minimum permeability vectors. The magnitude of the permeability is shown by the vector length. The vector angle shows the direction of the maximum, middle, or minimum permeability. Contour and color plots can also be used to display model input and output data.

  5. Lidar observations of mixed layer dynamics - Tests of parameterized entrainment models of mixed layer growth rate

    NASA Technical Reports Server (NTRS)

    Boers, R.; Eloranta, E. W.; Coulter, R. L.

    1984-01-01

    Ground based lidar measurements of the atmospheric mixed layer depth, the entrainment zone depth and the wind speed and wind direction were used to test various parameterized entrainment models of mixed layer growth rate. Six case studies under clear air convective conditions over flat terrain in central Illinois are presented. It is shown that surface heating alone accounts for a major portion of the rise of the mixed layer on all days. A new set of entrainment model constants was determined which optimized height predictions for the dataset. Under convective conditions, the shape of the mixed layer height prediction curves closely resembled the observed shapes. Under conditions when significant wind shear was present, the shape of the height prediction curve departed from the data suggesting deficiencies in the parameterization of shear production. Development of small cumulus clouds on top of the layer is shown to affect mixed layer depths in the afternoon growth phase.
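
    The surface-heating contribution to mixed-layer growth tested above can be sketched with a standard encroachment-plus-entrainment estimate: for a time-integrated surface heat flux and a linear potential-temperature lapse rate gamma, h grows as the square root of the accumulated heating, augmented by an entrainment coefficient beta. This is a textbook-style sketch, not the parameterized models evaluated in the paper; the numbers are illustrative.

```python
import math

def mixed_layer_depth(heat_flux_integral, gamma, beta=0.2):
    """Mixed-layer depth h = sqrt(2 * (1 + 2*beta) * I / gamma), where
    I is the time-integrated kinematic surface heat flux (K m) and
    gamma is the potential-temperature lapse rate (K/m)."""
    return math.sqrt(2.0 * (1.0 + 2.0 * beta) * heat_flux_integral / gamma)

# e.g. a flux of 0.15 K m/s sustained for 3 hours, gamma = 5 K/km
h = mixed_layer_depth(0.15 * 3 * 3600, gamma=0.005)
```

    With beta = 0 this reduces to pure encroachment (surface heating alone), which is the baseline the lidar observations were compared against before tuning the entrainment constants.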

  6. Interactive computer graphics

    NASA Astrophysics Data System (ADS)

    Purser, K.

    1980-08-01

    Design layouts have traditionally been done on a drafting board by drawing a two-dimensional representation with section cuts and side views to describe the exact three-dimensional model. With the advent of computer graphics, a three-dimensional model can be created directly. The computer stores the exact three-dimensional model, which can be examined from any angle and at any scale. A brief overview of interactive computer graphics, how models are made and some of the benefits/limitations are described.

  7. Models of neutrino mass, mixing and CP violation

    NASA Astrophysics Data System (ADS)

    King, Stephen F.

    2015-12-01

    In this topical review we argue that neutrino mass and mixing data motivates extending the Standard Model (SM) to include a non-Abelian discrete flavour symmetry in order to accurately predict the large leptonic mixing angles and CP violation. We begin with an overview of the SM puzzles, followed by a description of some classic lepton mixing patterns. Lepton mixing may be regarded as a deviation from tri-bimaximal mixing, with charged lepton corrections leading to solar mixing sum rules, or tri-maximal lepton mixing leading to atmospheric mixing sum rules. We survey neutrino mass models, using a roadmap based on the open questions in neutrino physics. We then focus on the seesaw mechanism with right-handed neutrinos, where sequential dominance (SD) can account for large lepton mixing angles and CP violation, with precise predictions emerging from constrained SD (CSD). We define the flavour problem and discuss progress towards a theory of flavour using GUTs and discrete family symmetry. We classify models as direct, semidirect or indirect, according to the relation between the Klein symmetry of the mass matrices and the discrete family symmetry, in all cases focussing on spontaneous CP violation. Finally we give two examples of realistic and highly predictive indirect models with CSD, namely an A to Z of flavour with Pati-Salam and a fairly complete A4 × SU(5) SUSY GUT of flavour, where both models have interesting implications for leptogenesis.

  8. Taxonomy Of Magma Mixing I: Magma Mixing Metrics And The Thermochemistry Of Magma Hybridization Illuminated With A Toy Model

    NASA Astrophysics Data System (ADS)

    Spera, F. J.; Bohrson, W. A.; Schmidt, J.

    2013-12-01

    The rock record preserves abundant evidence of magma mixing in the form of mafic enclaves and mixed pumice in volcanic eruptions, syn-plutonic mafic or silicic dikes and intrusive complexes, replenishment events recorded in cumulates from layered intrusions, and crystal scale heterogeneity in phenocrysts and cumulate minerals. This evidence shows that magma mixing in conjunction with crystallization (perfect fractional or incremental batch) is a first-order petrogenetic process. Magma mixing (sensu lato) occurs across a spectrum of mixed states from magma mingling to complete blending. The degree of mixing is quantified (Oldenburg et al, 1989) using two measures: the statistics of the segregation length scales (scale of segregation, L*) and the spatial contrast in composition (C) relative to the mean C (intensity of segregation, I). Mingling of dissimilar magmas produces a heterogeneous mixture containing discrete regions of end member melts and populations of crystals with L* = finite and I > 0. When L*→∞ and I→0 , the mixing magmas become hybridized and can be studied thermodynamically. Such hybrid magma is a multiphase equilibrium mixture of homogeneous melt, unzoned crystals and possible bubbles of a supercritical fluid. Here, we use a toy model to elucidate the principles of magma hybridization in a binary system (components A and B with pure crystals of α or β phase) with simple thermodynamics to build an outcome taxonomy. This binary system is not unlike the system Anorthite-Diopside, the classic low-pressure model basalt system. In the toy model, there are seven parameters describing the phase equilibria (eutectic T and X, specific heat, melting T and fusion enthalpies of α and β crystals) and five variables describing the magma mixing conditions: end member bulk compositions, temperatures and fraction of resident magma (M) that blends with recharge (R) magma to form a single equilibrium hybrid magma. There are 24 possible initial states when M
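
    The intensity of segregation named above can be illustrated with a Danckwerts-style definition (the exact normalization in Oldenburg et al., 1989 may differ): the variance of local concentration is compared to its maximum possible value for the same mean, so I = 1 for a fully segregated mixture and I = 0 for a fully hybridized one.

```python
def intensity_of_segregation(c):
    """c: local concentrations of component A in [0, 1].
    Returns var(c) / (mean * (1 - mean)); 1 = segregated, 0 = blended."""
    mean = sum(c) / len(c)
    var = sum((x - mean) ** 2 for x in c) / len(c)
    return var / (mean * (1.0 - mean))

unmixed = [0.0, 0.0, 1.0, 1.0]      # fully segregated end-member regions
blended = [0.5, 0.5, 0.5, 0.5]      # a single hybrid composition
```

    The scale of segregation L* would be computed separately, from the spatial autocorrelation of the concentration field; I alone says nothing about how large the unmixed regions are.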

  9. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  10. Comparison between kinetic modelling and graphical analysis for the quantification of [18F]fluoromethylcholine uptake in mice

    PubMed Central

    2013-01-01

    Background Until now, no kinetic model was described for the oncologic tracer [18F]fluoromethylcholine ([18F]FCho), so it was aimed to validate a proper model, which is easy to implement and allows tracer quantification in tissues. Methods Based on the metabolic profile, two types of compartmental models were evaluated. One is a 3C2i model, which contains three tissue compartments and two input functions and corrects for possible [18F]fluorobetaine ([18F]FBet) uptake by the tissues. On the other hand, a two-tissue-compartment model (2C1i) was evaluated. Moreover, a comparison, based on intra-observer variability, was made between kinetic modelling and graphical analysis. Results Determination of the [18F]FCho-to-[18F]FBet uptake ratios in tissues and evaluation of the fitting of both kinetic models indicated that corrections for [18F]FBet uptake are not mandatory. In addition, [18F]FCho uptake is well described by the 2C1i model and by graphical analysis by means of the Patlak plot. Conclusions The Patlak plot is a reliable, precise, and robust method to quantify [18F]FCho uptake independent of scan time or plasma clearance. In addition, it is easily implemented, even under non-equilibrium conditions and without creating additional errors. PMID:24034278
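
    Patlak graphical analysis, the method validated above, can be sketched directly: for an irreversibly trapped tracer, plotting C_tissue/C_plasma against (integral of C_plasma)/C_plasma becomes linear after equilibration, and the slope is the net uptake rate Ki. The curves below are synthetic, constructed so the true Ki is 0.05.

```python
import numpy as np

def patlak_slope(t, c_plasma, c_tissue, t_start=0.0):
    """Fit the late-time linear portion of the Patlak plot; returns Ki."""
    # trapezoidal running integral of the plasma input function
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))))
    x = integral / c_plasma
    y = c_tissue / c_plasma
    mask = t >= t_start
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope

t = np.linspace(0.01, 60.0, 600)
c_p = np.exp(-0.1 * t) + 0.2                 # synthetic plasma curve
integral = np.concatenate(([0.0], np.cumsum(
    0.5 * (c_p[1:] + c_p[:-1]) * np.diff(t))))
c_t = 0.05 * integral + 0.3 * c_p            # Ki = 0.05, blood volume 0.3
ki = patlak_slope(t, c_p, c_t, t_start=10.0)
```

    Because the fit uses only ratios of measured curves, the estimate is insensitive to scan duration and plasma clearance, which is the robustness property the study reports.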

  11. A multifluid mix model with material strength effects

    SciTech Connect

    Chang, C. H.; Scannapieco, A. J.

    2012-04-23

    We present a new multifluid mix model. Its features include material strength effects and pressure and temperature nonequilibrium between mixing materials. It is applicable to both interpenetration and demixing of immiscible fluids and diffusion of miscible fluids. The presented model exhibits the appropriate smooth transition in mathematical form as the mixture evolves from multiphase to molecular mixing, extending its applicability to the intermediate stages in which both types of mixing are present. Virtual mass force and momentum exchange have been generalized for heterogeneous multimaterial mixtures. The compression work has been extended so that the resulting species energy equations are consistent with the pressure force and material strength.

  12. A New Model for Mix It Up

    ERIC Educational Resources Information Center

    Holladay, Jennifer

    2009-01-01

    Since 2002, Teaching Tolerance's Mix It Up at Lunch Day program has helped millions of students cross social boundaries and create more inclusive school communities. Its goal is to create a safe, purposeful opportunity for students to break down the patterns of social self-segregation that too often plague schools. Research conducted in 2006 by…

  13. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, Ye

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  14. Parameter recovery and model selection in mixed Rasch models.

    PubMed

    Preinerstorfer, David; Formann, Anton K

    2012-05-01

    This study examines the precision of conditional maximum likelihood estimates and the quality of model selection methods based on information criteria (AIC and BIC) in mixed Rasch models. The design of the Monte Carlo simulation study included four test lengths (10, 15, 25, 40), three sample sizes (500, 1000, 2500), two simulated mixture conditions (one and two groups), and population homogeneity (equally sized subgroups) or heterogeneity (one subgroup three times larger than the other). The results show that both increasing sample size and increasing number of items lead to higher accuracy; medium-range parameters were estimated more precisely than extreme ones; and the accuracy was higher in homogeneous populations. The minimum-BIC method leads to almost perfect results and is more reliable than AIC-based model selection. The results are compared to findings by Li, Cohen, Kim, and Cho (2009) and practical guidelines are provided. PMID:21675964
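    The information-criterion comparison described above is straightforward to reproduce. A minimal sketch of AIC/BIC-based selection between a one-group and a two-group fit (the log-likelihoods and parameter counts below are hypothetical, not values from the study):

```python
import math

def aic(log_lik, n_params):
    # Akaike information criterion: AIC = 2k - 2 ln L
    return 2 * n_params - 2 * log_lik

def bic(log_lik, n_params, n_obs):
    # Bayesian information criterion: BIC = k ln n - 2 ln L
    return n_params * math.log(n_obs) - 2 * log_lik

# Hypothetical conditional log-likelihoods and parameter counts for a
# one-group versus two-group mixed Rasch fit (n = 1000 examinees)
fits = {"1 group": (-5210.4, 10), "2 groups": (-5150.2, 21)}
n = 1000

# Select the model with the smallest BIC
best = min(fits, key=lambda m: bic(fits[m][0], fits[m][1], n))
```

    Because BIC's k ln n penalty grows with sample size, it picks the extra mixture component only when the likelihood gain is substantial, which is consistent with the study's finding that minimum-BIC selection is more reliable than AIC.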

  15. Documentation of a graphical display program for the saturated- unsaturated transport (SUTRA) finite-element simulation model

    USGS Publications Warehouse

    Souza, W.R.

    1987-01-01

    This report documents a graphical display program for the U. S. Geological Survey finite-element groundwater flow and solute transport model. Graphic features of the program, SUTRA-PLOT (SUTRA-PLOT = saturated/unsaturated transport), include: (1) plots of the finite-element mesh, (2) velocity vector plots, (3) contour plots of pressure, solute concentration, temperature, or saturation, and (4) a finite-element interpolator for gridding data prior to contouring. SUTRA-PLOT is written in FORTRAN 77 on a PRIME 750 computer system, and requires Version 9.0 or higher of the DISSPLA graphics library. The program requires two input files: the SUTRA input data list and the SUTRA simulation output listing. The program is menu driven and specifications for individual types of plots are entered and may be edited interactively. Installation instructions, a source code listing, and a description of the computer code are given. Six examples of plotting applications are used to demonstrate various features of the plotting program. (Author's abstract)

  16. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  17. On the coalescence-dispersion modeling of turbulent molecular mixing

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Kosaly, George

    1987-01-01

    The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain by various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. Infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite rate chemistry calculations are compared to study further a recent result of Hsieh and O'Brien who found that higher concentration moments are not sensitive to chemistry.

  18. Computer modeling of jet mixing in INEL waste tanks

    SciTech Connect

    Meyer, P.A.

    1994-01-01

    The objective of this study is to examine the feasibility of using submerged jet mixing pumps to mobilize and suspend settled sludge materials in INEL High Level Radioactive Waste Tanks. Scenarios include removing the heel (a shallow liquid and sludge layer remaining after tank emptying processes) and mobilizing and suspending solids in full or partially full tanks. The approach used was to (1) briefly review jet mixing theory, (2) review erosion literature in order to identify and estimate important sludge characterization parameters, (3) perform computer modeling of submerged liquid mixing jets in INEL tank geometries, (4) develop analytical models from which pump operating conditions and mixing times can be estimated, and (5) analyze model results to determine overall feasibility of using jet mixing pumps and make design recommendations.

  19. Diagnostic tools for mixing models of stream water chemistry

    USGS Publications Warehouse

    Hooper, R.P.

    2003-01-01

    Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end-members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end-members, an extension of the mathematics of mixing models is presented that assesses the "fit" of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end-members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end-members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
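    The rank diagnostic described above can be illustrated with a small synthetic data set: conservative mixing of m end-members whose fractions sum to one confines the centered data to an (m-1)-dimensional subspace, which appears as a sharp drop in the singular values. A sketch with assumed, hypothetical end-member chemistries (not Panola Mountain data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical end-member concentrations: 3 end-members, 6 solutes
end_members = np.array([[10.0, 1.0, 1.0, 0.0, 0.0, 2.0],
                        [1.0, 10.0, 1.0, 0.0, 5.0, 0.0],
                        [1.0, 1.0, 10.0, 5.0, 0.0, 0.0]])

# 200 stream samples built as conservative mixtures plus small
# analytical noise; mixing fractions sum to one by construction
fractions = rng.dirichlet(np.ones(3), size=200)
X = fractions @ end_members + 0.01 * rng.normal(size=(200, 6))

# After centering, 3-end-member mixing spans only 2 dimensions, so the
# third singular value should drop to the noise level
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
approx_rank = int((s > 0.05 * s[0]).sum())
```

    In practice the lack-of-fit residuals after projecting onto the retained subspace, examined solute by solute, are what flag processes that violate the conservative-mixing assumption.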

  20. Development of a Medicaid Behavioral Health Case-Mix Model

    ERIC Educational Resources Information Center

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  1. Kinetic mixing effect in the 3-3-1-1 model

    NASA Astrophysics Data System (ADS)

    Dong, P. V.; Si, D. T.

    2016-06-01

    We show that the mixing effect of the neutral gauge bosons in the 3-3-1-1 model comes from two sources. The first one is due to the 3-3-1-1 gauge symmetry breaking as usual, whereas the second one results from the kinetic mixing between the gauge bosons of the U(1)_X and U(1)_N groups, which are used to determine the electric charge and baryon-minus-lepton numbers, respectively. Such mixings modify the ρ parameter and the known couplings of Z with fermions. The constraints that arise from flavor-changing neutral currents due to the gauge boson mixings and nonuniversal fermion generations are also given.

  2. Building Models in the Classroom: Taking Advantage of Sophisticated Geomorphic Numerical Tools Using a Simple Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.

    2014-12-01

    Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphics user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university level. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.

  3. Shell model of optimal passive-scalar mixing

    NASA Astrophysics Data System (ADS)

    Miles, Christopher; Doering, Charles

    2015-11-01

    Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H-1 mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfect-mixed state for the enstrophy-constrained case. Although we only enforce that the time-average energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.
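    The H-1 mix-norm used above weights each Fourier mode of a mean-zero scalar by 1/|k|, so moving variance to small scales lowers the norm even though the variance itself is unchanged. A one-dimensional illustration (a sketch of the norm only, not of the shell model):

```python
import numpy as np

def mix_norm(theta, L=2 * np.pi):
    # H^{-1} "mix-norm" of a mean-zero periodic scalar on [0, L):
    # each Fourier amplitude is weighted by 1/|k|, so fine-scale
    # variation contributes little -- well-mixed fields have small norm
    n = theta.size
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # integer wavenumbers for L = 2*pi
    hat = np.fft.fft(theta) / n                   # normalized Fourier coefficients
    mask = k != 0                                 # drop the (zero) mean mode
    return np.sqrt(np.sum(np.abs(hat[mask]) ** 2 / k[mask] ** 2))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
coarse = np.sin(x)        # unmixed: variance at the largest scale
fine = np.sin(8 * x)      # "stirred": same variance at a small scale
```

    Both fields have identical L2 norms, but the fine-scale field has a mix-norm eight times smaller, which is exactly why the H-1 norm is a useful objective for mixing optimization.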

  4. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The US Environmental Protection Agency has a history of developing plume models and providing technical assistance. The Visual Plumes model (VP) is a recent addition to the public-domain models available on the EPA Center for Exposure Assessment Modeling (CEAM) web page. The Wind...

  5. An Analysis of 24-Hour Ambulatory Blood Pressure Monitoring Data using Orthonormal Polynomials in the Linear Mixed Model

    PubMed Central

    Edwards, Lloyd J.; Simpson, Sean L.

    2014-01-01

    Background The use of 24-hour ambulatory blood pressure monitoring (ABPM) in clinical practice and observational epidemiological studies has grown considerably in the past 25 years. ABPM is a very effective technique for assessing biological, environmental, and drug effects on blood pressure. Objectives In order to enhance the effectiveness of ABPM for clinical and observational research studies via analytical and graphical results, it is important to develop alternative data analysis approaches using modern statistical techniques. Methods The linear mixed model for the analysis of longitudinal data is particularly well-suited for the estimation of, inference about, and interpretation of both population (mean) and subject-specific trajectories for ABPM data. We propose using a linear mixed model with orthonormal polynomials across time in both the fixed and random effects to analyze ABPM data. Results We demonstrate the proposed analysis technique using data from the Dietary Approaches to Stop Hypertension (DASH) study, a multicenter, randomized, parallel arm feeding study that tested the effects of dietary patterns on blood pressure. Conclusions The linear mixed model is relatively easy to implement (given the complexity of the technique) using available software, allows for straightforward testing of multiple hypotheses, and the results can be presented to research clinicians using both graphical and tabular displays. Using orthonormal polynomials provides the ability to model the nonlinear trajectories of each subject with the same complexity as the mean model (fixed effects). PMID:24667908
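    The orthonormal-polynomial design idea can be sketched in NumPy: raw powers of time are nearly collinear, and a thin QR factorization of the Vandermonde matrix replaces them with an orthonormal basis suitable for both the fixed- and random-effects design matrices. (Illustrative only; the 24 hourly time points and cubic degree are assumptions, not the DASH specification.)

```python
import numpy as np

# 24 hourly ABPM measurement times, scaled to [0, 1]
t = np.linspace(0.0, 1.0, 24)

# Raw polynomial (Vandermonde) columns 1, t, t^2, t^3 are nearly
# collinear; a thin QR factorization yields an orthonormal basis
V = np.vander(t, N=4, increasing=True)
Q, _ = np.linalg.qr(V)

# Columns are orthonormal: Q.T @ Q should equal the identity
err = np.abs(Q.T @ Q - np.eye(4)).max()
```

    Using the columns of Q as covariates decorrelates the polynomial terms, which stabilizes estimation of the random-effects covariance while still spanning the same cubic trajectory space.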

  6. Agility and mixed-model furniture production

    NASA Astrophysics Data System (ADS)

    Yao, Andrew C.

    2000-10-01

    The manufacture of upholstered furniture provides an excellent opportunity to analyze the effect of a comprehensive communication system on classical production management functions. The objective of the research is to study the scheduling heuristics that embrace the concepts inherent in MRP, JIT and TQM while recognizing the need for agility in a somewhat complex and demanding environment. An on-line, real-time data capture system provides the status and location of production lots, components, subassemblies for schedule control. Current inventory status of raw material and purchased items are required in order to develop and adhere to schedules. For the large variety of styles and fabrics customers may order, the communication system must provide timely, accurate and comprehensive information for intelligent decisions with respect to the product mix and production resources.

  7. Graphical programming of telerobotic tasks

    SciTech Connect

    Small, D.E.; McDonald, M.J.

    1996-11-01

    With a goal of producing faster, safer, and cheaper technologies for nuclear waste cleanup, Sandia is actively developing and extending intelligent systems technologies through the US Department of Energy Office of Technology Development (DOE OTD) Robotic Technology Development Program (RTDP). Graphical programming is a key technology for robotic waste cleanup that Sandia is developing for this goal. Graphical programming uses simulation such as TELEGRIP 'on-line' to program and control robots. Characterized by its model-based control architecture, integrated simulation, 'point-and-click' graphical user interfaces, task and path planning software, and network communications, Sandia's Graphical Programming systems allow operators to focus on high-level robotic tasks rather than the low-level details. Use of scripted tasks, rather than customized programs, minimizes the necessity of recompiling supervisory control systems and enhances flexibility. Rapid world-modelling technologies allow Graphical Programming to be used in dynamic and unpredictable environments including digging and pipe-cutting. This paper describes Sancho, Sandia's most advanced graphical programming supervisory software. Sancho, now operational on several robot systems, incorporates all of Sandia's recent advances in supervisory control. Graphical programming uses 3-D graphics models as intuitive operator interfaces to program and control complex robotic systems. The goal of the paper is to help the reader understand how Sandia implements graphical programming systems and which key features in Sancho have proven to be most effective.

  8. Graphics development of DCOR: Deterministic Combat Model of Oak Ridge (DCOR)

    SciTech Connect

    Hunt, G.; Azmy, Y.Y.

    1992-10-01

    DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
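    The first distribution step, assigning discrete combat units one at a time to the gridpoint with the largest remaining calculated count, amounts to a largest-remainder apportionment. A sketch of that idea with hypothetical density values (not the DCOR code):

```python
import numpy as np

def distribute_units(density, total):
    # Scale gridpoint densities so they sum to `total`, then assign
    # units one at a time to the gridpoint with the largest remaining
    # (scaled) density, decrementing it by one unit each time
    remaining = density / density.sum() * total
    counts = np.zeros(remaining.size, dtype=int)
    for _ in range(total):
        i = int(np.argmax(remaining))
        counts[i] += 1
        remaining[i] -= 1.0
    return counts

# Hypothetical force-density values at four gridpoints, eight units total
density = np.array([0.5, 2.0, 1.0, 0.5])
counts = distribute_units(density, 8)
```

    Every unit is placed exactly once, and the integer counts track the continuous density as closely as possible, which is the property the animation needs before the within-cell spreading and time interpolation steps.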

  9. SutraPlot, a graphical post-processor for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Souza, W.R.

    1999-01-01

    This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.

  10. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  11. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  12. Weakly nonlinear models for turbulent mixing in a plane mixing layer

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Morris, Philip J.

    1992-01-01

    New closure models for turbulent free shear flows are presented in this paper. They are based on a weakly nonlinear theory with a description of the dominant large-scale structures as instability waves. Two models are presented that describe the evolution of the free shear flows in terms of the time-averaged mean flow and the dominant large-scale turbulent structure. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models have been applied to the study of an incompressible mixing layer. For both models, predictions of the development of the mean flow are made. In the second model, predictions of the time-dependent motion of the large-scale structures in the mixing layer are made. The predictions show good agreement with experimental observations.

  13. LIDAR OBSERVATIONS OF MIXED LAYER DYNAMICS: TESTS OF PARAMETERIZED ENTRAINMENT MODELS OF MIXED LAYER GROWTH RATE

    EPA Science Inventory

    Lidar measurements of the atmospheric boundary layer height, the entrainment zone, wind speed and direction, ancillary temperature profiles and surface flux data were used to test current parameterized entrainment models of mixed layer growth rate. Six case studies under clear ai...

  14. A Novel Graphical User Interface for High-Efficacy Modeling of Human Perceptual Similarity Opinions

    SciTech Connect

    Kress, James M; Xu, Songhua; Tourassi, Georgia

    2013-01-01

    We present a novel graphical user interface (GUI) that facilitates high-efficacy collection of perceptual similarity opinions of a user in an effective and intuitive manner. The GUI is based on a hybrid mechanism that combines ranking and rating. Namely, it presents a base image for rating its similarity to seven peripheral images that are displayed simultaneously following a circular layout. The user is asked to report the base image's pairwise similarity to each peripheral image on a fixed scale while preserving the relative ranking among all peripheral images. The collected data are then used to predict the user's subjective opinions regarding the perceptual similarity of images. We tested this new approach against two methods commonly used in perceptual similarity studies: (1) a ranking method that presents triplets of images for selecting the image pair with the highest internal similarity and (2) a rating method that presents pairs of images for rating their relative similarity on a fixed scale. We aimed to determine which data collection method was the most time efficient and effective for predicting a user's perceptual opinions regarding the similarity of mammographic masses. Our study was conducted with eight individuals. By using the proposed GUI, we were able to derive individual user profiles that were 41.4% to 46.9% more accurate than those derived with the other two data collection GUIs. The accuracy improvement was statistically significant.

  15. Mixing by barotropic instability in a nonlinear model

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Chen, Ping

    1994-01-01

    A global, nonlinear, equivalent barotropic model is used to study the isentropic mixing of passive tracers by barotropic instability. Basic states are analytical zonal-mean jets representative of the zonal-mean flow in the upper stratosphere, where the observed 4-day wave is thought to be a result of barotropic, and possibly baroclinic, instability. As is known from previous studies, the phase speed and growth rate of the unstable waves are fairly sensitive to the shape of the zonal-mean jet; and the dominant wave mode at saturation is not necessarily the fastest growing mode; but the unstable modes share many features of the observed 4-day wave. Lagrangian trajectories computed from model winds are used to characterize the mixing by the flow. For profiles with both midlatitude and polar modes, mixing is stronger in midlatitude than inside the vortex; but there is little exchange of air across the vortex boundary. There is a minimum in the Lyapunov exponents of the flow and the particle dispersion at the jet maximum. For profiles with only polar unstable modes, there is weak mixing inside the vortex, no mixing outside the vortex, and no exchange of air across the vortex boundary. These results support the theoretical arguments that, whether wave disturbances are generated by local instability or propagate from other regions, the mixing properties of the total flow are determined by the locations of the wave critical lines and that strong gradients of potential vorticity are very resistant to mixing.

  16. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment induced under rapid flows undergoes several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between the geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layer which are needed to predict accurately the global erosion phenomena, from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and the non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.

  17. New mixing angles in the left-right symmetric model

    NASA Astrophysics Data System (ADS)

    Kokado, Akira; Saito, Takesi

    2015-12-01

    In the left-right symmetric model, neutral gauge fields are characterized by three mixing angles θ12, θ23, θ13 between the three gauge fields B_μ, W^3_Lμ, W^3_Rμ, which produce the mass eigenstates A_μ, Z_μ, Z'_μ when G = SU(2)_L × SU(2)_R × U(1)_{B-L} × D is spontaneously broken down to U(1)_em. We find a new mixing angle θ', which corresponds to the Weinberg angle θ_W in the standard model with the SU(2)_L × U(1)_Y gauge symmetry, from these mixing angles. It is then shown that any mixing angle θ_ij can be expressed by ε and θ', where ε = g_L/g_R is the ratio of running left-right gauge coupling strengths. We observe that light gauge bosons are described by θ' only, whereas heavy gauge bosons are described by the two parameters ε and θ'.

  18. Evaluation of rural-air-quality simulation models. Addendum B: graphical display of model performance using the Clifty Creek data base

    SciTech Connect

    Cox, W.M.; Moss, G.K.; Tikvart, J.A.; Baldridge, E.

    1985-08-01

    The addendum uses a variety of graphical formats to display and compare the performance of four rural models using the Clifty Creek data base. The four models included MPTER (EPA), PPSP (Martin Marietta Corp.), MPSDM (ERT), and TEM-8A (Texas Air Control Board). Graphic displays were developed and used for both operational evaluation and diagnostic evaluation purposes. Plots of bias of the average vs station downwind distance by stability and wind-speed class revealed clear patterns of accentuated underprediction and overprediction for stations closer to the source. PPSP showed a tendency for decreasing overprediction with increasing station distance for all meteorological subsets while the other three models showed varying patterns depending on the meteorological class. Diurnal plots of the bias of the average vs hour of the day revealed a pattern of underestimation during the nocturnal hours and overestimation during hours of strong solar radiation with MPSDM and MPTER showing the least overall bias throughout the day.

  19. Graphic engine resource management

    NASA Astrophysics Data System (ADS)

    Bautin, Mikhail; Dwarakinath, Ashok; Chiueh, Tzi-cker

    2008-01-01

    Modern consumer-grade 3D graphic cards boast a computation/memory resource that can easily rival or even exceed that of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their enormous computational power has attracted developers to port an increasing number of scientific computation programs to these cards, including matrix computation, collision detection, cryptography, database sorting, etc. As more and more applications run on 3D graphic cards, there is a need to allocate the computation/memory resource on these cards among the sharing applications more fairly and efficiently. In this paper, we describe the design, implementation and evaluation of a Graphic Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of their demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on dynamically collected statistics, and controls each process's GPU command production rate through its CPU scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal GPU time consumption difference among concurrent GPU processes consistently below 5% for a variety of application mixes.
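    Deficit Round Robin itself is easy to sketch: each process queue accrues a fixed quantum of credit per round and dispatches queued command groups only while their cost fits within the accumulated deficit. A minimal illustration (the cost units and quantum below are hypothetical, not GERM's actual accounting):

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    # Deficit Round Robin: each non-empty queue gains `quantum` credit
    # per round and dispatches head-of-line items while their cost fits
    # within its accumulated deficit; empty queues forfeit their credit
    deficits = [0] * len(queues)
    dispatched = [[] for _ in queues]
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                cost = q.popleft()
                deficits[i] -= cost
                dispatched[i].append(cost)
    return dispatched

# Two processes submitting GPU command groups of different costs
heavy = deque([300, 300, 300])
light = deque([100, 100, 100])
out = drr_schedule([heavy, light], quantum=200, rounds=3)
```

    Over enough rounds each backlogged queue receives service proportional to its quantum regardless of item size, which is the fairness property GERM relies on when command-group execution times vary widely.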

  20. Computer aided graphics simulation modelling using seismogeologic approach in sequence stratigraphy of Early Cretaceous Punjab platform, Central Indus Basin, Pakistan

    SciTech Connect

    Qureshi, T.M.; Khan, K.A.

    1996-08-01

Modelling stratigraphic sequences using a seismo-geologic approach, integrated with cyclic transgressive-regressive deposits, helps to identify a number of non-structural subtle traps. Most of the hydrocarbons found in the Early Cretaceous of the Central Indus Basin pertain to structural entrapments of upper transgressive sands. A few wells are producing from middle and basal regressive sands, but the massive regressive sands have not been tested so far. The possibility of stratigraphic traps such as wedging or pinch-outs, lateral gradation, uplift, truncation and overlapping of reservoir rocks is quite promising. The natural basin physiography has at times been modified by extensional episodic events into tectono-morphic terrain. Thus, seismic scanning of tectonically controlled sedimentation might delineate some subtle stratigraphic traps. Amplitude maps representing stratigraphic sequences are generated to identify the traps. Seismic expressions indicate reservoir quality in terms of amplitude increase or decrease. The data are modelled on computer using graphics simulation techniques.

  1. Simplified renormalizable T' model for tribimaximal mixing and Cabibbo angle

    NASA Astrophysics Data System (ADS)

    Frampton, Paul H.; Kephart, Thomas W.; Matsuzaki, Shinya

    2008-10-01

In a simplified renormalizable model where the neutrinos have Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixings tan²θ₁₂ = 1/2, θ₁₃ = 0, θ₂₃ = π/4 and with flavor symmetry T′, there is a corresponding prediction that the quarks have Cabibbo-Kobayashi-Maskawa (CKM) mixings tan 2Θ₁₂ = √2/3, Θ₁₃ = 0, Θ₂₃ = 0.

  2. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  3. Generalized Dynamic Factor Models for Mixed-Measurement Time Series

    PubMed Central

    Cui, Kai; Dunson, David B.

    2013-01-01

    In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody’s rated firms from 1982–2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133
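The Metropolis-Hastings machinery with adaptive proposals that the abstract mentions can be illustrated with a minimal random-walk sampler whose proposal scale is tuned toward a target acceptance rate. This is a generic sketch on a stand-in standard-normal target, not the paper's GDKA-based algorithm; all names and constants here are illustrative:

```python
import math, random

random.seed(0)

def log_post(theta):
    # Stand-in log-posterior: a standard normal. In the paper the targets
    # are conditionals of an exponential-family dynamic factor model.
    return -0.5 * theta * theta

def adaptive_mh(n_iter=20000, target_acc=0.44):
    """Random-walk Metropolis-Hastings; the log proposal scale is nudged
    up after acceptances and down after rejections (Robbins-Monro style),
    with a decaying adaptation step so the chain settles down."""
    theta, log_scale = 0.0, 0.0
    samples = []
    for i in range(1, n_iter + 1):
        prop = theta + random.gauss(0.0, math.exp(log_scale))
        accepted = math.log(random.random()) < log_post(prop) - log_post(theta)
        if accepted:
            theta = prop
        log_scale += i ** -0.6 * ((1.0 if accepted else 0.0) - target_acc)
        samples.append(theta)
    return samples

draws = adaptive_mh()[5000:]          # discard burn-in
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # should be near 0 and 1
```

The adaptation step `i ** -0.6` satisfies the usual diminishing-adaptation condition, so the sampler remains valid asymptotically while still finding a reasonable proposal scale automatically.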

  4. Generalized Dynamic Factor Models for Mixed-Measurement Time Series.

    PubMed

    Cui, Kai; Dunson, David B

    2014-02-12

    In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody's rated firms from 1982-2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133

  5. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    ERIC Educational Resources Information Center

    Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…

  6. Analyzing Mixed-Dyadic Data Using Structural Equation Models

    ERIC Educational Resources Information Center

    Peugh, James L.; DiLillo, David; Panuzio, Jillian

    2013-01-01

    Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…

  7. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  8. Generation Mixing in the Sakata-Nagoya Model

    NASA Astrophysics Data System (ADS)

    Nishijima, K.

The Sakata model, combined with SU(3) symmetry, served to introduce the idea of the fundamental triplet into particle physics. In the Nagoya model the correspondence between baryons and leptons was emphasized, and it was later exploited in forming the concepts of generation and generation mixing.

  9. Model for compound formation during ion-beam mixing

    SciTech Connect

Desimoni, J.; Traverse, A.

    1993-11-01

We propose an ion-beam-mixing model that accounts for compound formation at a boundary between two materials during ion irradiation. It is based on Fick's law together with a chemical driving force in order to simulate the chemical reaction at the boundary. The behavior of the squared thickness of the mixed layer, X², with the irradiation fluence, Φ, has been found in several mixing experiments to be either quadratic (X² ∝ Φ²) or linear (X² ∝ Φ), a result which is qualitatively reproduced. Depending on the fluence range, compound formation or diffusion is the limiting process of the mixing kinetics. A criterion is established in terms of the ratio of the diffusion coefficient D due to irradiation to the square of the chemical reaction rate, which allows us to predict quadratic or linear behavior. When diffusion is the limiting process, D is enhanced by a factor which accounts for the formation of a compound in the mixed layer. Good agreement is found between the calculated mixing rates and data taken from mixing experiments in metal/Si bilayers.
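The crossover between reaction-limited growth (mixed-layer thickness X ∝ Φ, hence X² ∝ Φ²) and diffusion-limited growth (X² ∝ Φ) can be illustrated numerically with a minimal growth law in which the two processes act in series, dX/dΦ = 1/(1/k + X/D). This is a generic interface-growth sketch, not the authors' equations, and the values of k (reaction rate) and D (radiation-enhanced diffusion coefficient) are invented:

```python
# Hypothetical parameters, arbitrary units per unit fluence:
# k = interfacial reaction rate, D = radiation-enhanced diffusion coefficient.
k, D = 1.0, 0.05

def grow(phi_max, steps=100000):
    """Integrate dX/dPhi = 1/(1/k + X/D) with forward Euler.
    At small X the 1/k term dominates (reaction-limited, X ~ k*Phi);
    at large X the X/D term dominates (diffusion-limited, X^2 ~ 2*D*Phi)."""
    dphi = phi_max / steps
    X = 0.0
    for _ in range(steps):
        X += dphi / (1.0 / k + X / D)
    return X

small = grow(0.01)   # low fluence: X grows ~ k*Phi, so X^2 is quadratic in Phi
large = grow(50.0)   # high fluence: X^2 grows ~ 2*D*Phi, linear in Phi
print(small / 0.01, large ** 2 / 50.0)
```

Integrating the growth law exactly gives X/k + X²/(2D) = Φ, which makes the abstract's criterion explicit: the crossover fluence is set by the ratio of the two terms, i.e. by D/k² (times a constant), below which growth looks quadratic in Φ and above which it looks linear.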

  10. Modeling the iron cycling in the mixed layer

    NASA Astrophysics Data System (ADS)

    Weber, L.; Voelker, C.; Schartau, M.; Wolf-Gladrow, D.

    2003-04-01

We present a comprehensive model of iron cycling within the mixed layer of the ocean, which predicts the time course of iron concentration and speciation. The speciation of iron within the mixed layer is heavily influenced by photochemistry, organic complexation, colloid formation and aggregation, as well as uptake and release by marine biota. The model is driven by mixed layer dynamics, dust deposition and insolation, coupled to a simple ecosystem model (based on Schartau et al. 2001: Deep-Sea Res. II 48, 1769-1800), and applied to the site of the Bermuda Atlantic Time-series Study (BATS). Parameters in the model were chosen to reproduce the small number of available speciation measurements resolving a daily cycle. The model clearly reproduces the available Fe concentrations at the BATS station, but the annual balance of Fe fluxes at BATS is less well constrained, owing to uncertainties in the model parameters. Hence we discuss the model's sensitivity to parameter uncertainties, which observations might help to better constrain the relevant model parameters, and how the most important parameters are constrained by the data. The mixed layer cycle in the model strongly influences the seasonality of primary production as well as the light dependency of photoreductive processes, and therefore controls iron speciation. Furthermore, short events within a day (e.g. heavy rain, changes of irradiance, intense dust deposition and temporary deepening of the mixed layer) may drive processes such as colloidal aggregation. For this reason we compare two versions of the model: the first is forced by monthly averaged climatological variables, the second by daily climatological variables.