Science.gov

Sample records for mixed graphical models

  1. Learning the Structure of Mixed Graphical Models

    PubMed Central

    Lee, Jason D.; Hastie, Trevor J.

    2014-01-01

    We consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, authors have considered structure learning of Gaussian graphical models and structure learning of discrete models. Our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parametrization of the model. Supplementary materials for this paper are available online. PMID:26085782

  2. Selection and estimation for mixed graphical models

    PubMed Central

    Chen, Shizhe; Witten, Daniela M.; Shojaie, Ali

    2016-01-01

    We consider the problem of estimating the parameters in a pairwise graphical model in which the distribution of each node, conditioned on the others, may have a different exponential family form. We identify restrictions on the parameter space required for the existence of a well-defined joint density, and establish the consistency of the neighbourhood selection approach for graph reconstruction in high dimensions when the true underlying graph is sparse. Motivated by our theoretical results, we investigate the selection of edges between nodes whose conditional distributions take different parametric forms, and show that efficiency can be gained if edge estimates obtained from the regressions of particular nodes are used to reconstruct the graph. These results are illustrated with examples of Gaussian, Bernoulli, Poisson and exponential distributions. Our theoretical findings are corroborated by evidence from simulation studies.
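
    To make the neighbourhood-selection idea concrete, the sketch below regresses each node on all the others with an l1 penalty matched to its conditional family and keeps an edge only when both incident regressions select it. This is a simplified illustration on synthetic data, not the authors' estimator; the penalty levels and the AND combination rule are choices made here.

```python
# Sketch of neighbourhood selection on mixed data (not the paper's exact
# estimator): l1-penalized regressions node by node, with the regression
# family matched to each node's type, combined by the AND rule.
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
gauss = rng.normal(size=(200, 3))                    # three Gaussian nodes
bern = (gauss[:, :2].sum(axis=1) + rng.normal(size=200) > 0).astype(int)
data = np.column_stack([gauss, bern])                # last node is Bernoulli
is_binary = [False, False, False, True]

p = data.shape[1]
selected = np.zeros((p, p), dtype=bool)
for j in range(p):
    y, Z = data[:, j], np.delete(data, j, axis=1)
    if is_binary[j]:
        coef = LogisticRegression(penalty="l1", solver="liblinear",
                                  C=0.5).fit(Z, y).coef_.ravel()
    else:
        coef = Lasso(alpha=0.1).fit(Z, y).coef_
    selected[j, np.delete(np.arange(p), j)] = np.abs(coef) > 1e-8

edges = selected & selected.T                        # AND combination rule
print("estimated edges:", np.argwhere(np.triu(edges, k=1)).tolist())
```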

  3. Mapping eQTL Networks with Mixed Graphical Markov Models

    PubMed Central

    Tur, Inma; Roverato, Alberto; Castelo, Robert

    2014-01-01

    Expression quantitative trait loci (eQTL) mapping constitutes a challenging problem due to, among other reasons, the high-dimensional multivariate nature of gene-expression traits. Next to the expression heterogeneity produced by confounding factors and other sources of unwanted variation, indirect effects spread throughout genes as a result of genetic, molecular, and environmental perturbations. From a multivariate perspective one would like to adjust for the effect of all of these factors to end up with a network of direct associations connecting the path from genotype to phenotype. In this article we approach this challenge with mixed graphical Markov models, higher-order conditional independences, and q-order correlation graphs. These models show that additive genetic effects propagate through the network as a function of gene–gene correlations. Our estimation of the eQTL network underlying a well-studied yeast data set leads to a sparse structure with more direct genetic and regulatory associations that enable a straightforward comparison of the genetic control of gene expression across chromosomes. Interestingly, it also reveals that eQTLs explain most of the expression variability of network hub genes. PMID:25271303

  4. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui)

    PubMed Central

    Magezi, David A.

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team). PMID:25657634
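
    As a rough cross-language illustration of such a within-participant model (LMMgui itself drives lme4 in R), the sketch below fits a random-intercept, random-slope model with Python's statsmodels; the variables (rt, condition, participant) and all numbers are invented.

```python
# A rough statsmodels analogue (not LMMgui or lme4) of a within-participant
# design: random intercept and random condition slope per participant.
# All variable names and numbers are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
participant = np.repeat(np.arange(20), 30)           # 20 subjects x 30 trials
condition = np.tile([0, 1], 300)
subj_shift = rng.normal(0, 50, 20)[participant]      # per-subject intercepts
rt = 500 + subj_shift + 40 * condition + rng.normal(0, 30, 600)
df = pd.DataFrame({"rt": rt, "condition": condition,
                   "participant": participant})

model = smf.mixedlm("rt ~ condition", df, groups=df["participant"],
                    re_formula="~condition")         # random slope + intercept
print(model.fit().summary())
```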

  5. Learning Graphical Models With Hubs

    PubMed Central

    Tan, Kean Ming; London, Palma; Mohan, Karthik; Lee, Su-In; Fazel, Maryam; Witten, Daniela

    2014-01-01

    We consider the problem of learning a high-dimensional graphical model in which there are a few hub nodes that are densely-connected to many other nodes. Many authors have studied the use of an ℓ1 penalty in order to learn a sparse graph in the high-dimensional setting. However, the ℓ1 penalty implicitly assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely-used probabilistic graphical models: the Gaussian graphical model, the covariance graph model, and the binary Ising model. An alternating direction method of multipliers algorithm is used to solve the corresponding convex optimization problems. On synthetic data, we demonstrate that our proposed framework outperforms competitors that do not explicitly model hub nodes. We illustrate our proposal on a webpage data set and a gene expression data set. PMID:25620891

  6. Representing Learning With Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence, for instance, in diagnosis and expert systems, as a unified qualitative and quantitative framework for representing and reasoning with probabilities and independencies. Their development and use spans several fields including artificial intelligence, decision theory and statistics, and provides an important bridge between these communities. This paper shows by way of example that these models can be extended to machine learning, neural networks and knowledge discovery by representing the notion of a sample on the graphical model. Not only does this allow a flexible variety of learning problems to be represented, it also provides the means for representing the goal of learning and opens the way for the automatic development of learning algorithms from specifications.

  7. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267

  8. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  9. Cavity approximation for graphical models.

    PubMed

    Rizzo, T; Wemmenhove, B; Kappen, H J

    2007-07-01

    We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405

  10. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.

  11. Graphical Models via Univariate Exponential Family Distributions

    PubMed Central

    Yang, Eunho; Ravikumar, Pradeep; Allen, Genevera I.; Liu, Zhandong

    2016-01-01

    Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions. PMID:27570498
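
    A minimal sketch of the node-wise construction for the Poisson case follows: each node is fit with an l1-penalized Poisson GLM on the remaining nodes, and nonzero coefficients suggest edges. This illustrates the general recipe on synthetic counts and is not the authors' implementation; the penalty level is arbitrary.

```python
# Node-wise sketch for a Poisson graphical model (an illustration of the
# recipe, not the paper's code): l1-penalized Poisson GLM of each node on
# the rest; nonzero coefficients suggest candidate edges.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p = 300, 5
counts = rng.poisson(lam=3.0, size=(n, p)).astype(float)
counts[:, 1] += rng.poisson(lam=counts[:, 0])        # couple nodes 0 and 1

for j in range(p):
    y = counts[:, j]
    Z = sm.add_constant(np.delete(counts, j, axis=1))
    res = sm.GLM(y, Z, family=sm.families.Poisson()).fit_regularized(
        alpha=0.05, L1_wt=1.0)                       # pure l1 penalty
    nbrs = np.delete(np.arange(p), j)[np.abs(res.params[1:]) > 1e-6]
    print(f"node {j}: candidate neighbours {nbrs.tolist()}")
```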

  12. Detection of text strings from mixed text/graphics images

    NASA Astrophysics Data System (ADS)

    Tsai, Chien-Hua; Papachristou, Christos A.

    2000-12-01

    A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region-growing) strategy, the algorithm classifies text apart from graphics and adapts to changes in document type, language category (e.g., English, Chinese, and Japanese), text font style and size, and text string orientation within digital images. In addition, it tolerates the document skew that commonly occurs in scanned documents without requiring skew correction prior to discrimination, a condition for which methods such as projection profiles or run-length coding are not always suitable. The method has been tested on a variety of printed documents from different origins with one common set of parameters, and the performance of the algorithm in terms of computational efficiency is demonstrated on several test images from the evaluation.
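
    The sketch below illustrates the connected-component flavour of such a system: regions of a binary page image are labeled (scipy's labeling is a union-find-style pass) and small components are called text. The image and the size threshold are invented; a real system would use the richer features described above.

```python
# Connected-component sketch of text/graphics separation: label regions
# of a binary page and call small components text. Image and threshold
# are invented for illustration.
import numpy as np
from scipy import ndimage

page = np.zeros((100, 100), dtype=bool)
page[10:12, 10:30] = True                  # a thin, text-like stroke
page[40:90, 40:90] = True                  # a large graphic region

labels, count = ndimage.label(page)
sizes = ndimage.sum(page, labels, index=np.arange(1, count + 1))
for lab, size in enumerate(sizes, start=1):
    kind = "text" if size < 200 else "graphics"      # crude size heuristic
    print(f"component {lab}: {int(size)} px -> {kind}")
```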

  13. Understanding human functioning using graphical models

    PubMed Central

    2010-01-01

    Background Functioning and disability are universal human experiences. However, our current understanding of functioning from a comprehensive perspective is limited. The development of the International Classification of Functioning, Disability and Health (ICF) on the one hand and recent developments in graphical modeling on the other hand might be combined and open the door to a more comprehensive understanding of human functioning. The objective of our paper therefore is to explore how graphical models can be used in the study of ICF data for a range of applications. Methods We show the applicability of graphical models on ICF data for different tasks: Visualization of the dependence structure of the data set, dimension reduction and comparison of subpopulations. Moreover, we further developed and applied recent findings in causal inference using graphical models to estimate bounds on intervention effects in an observational study with many variables and without knowing the underlying causal structure. Results In each field, graphical models could be applied giving results of high face-validity. In particular, graphical models could be used for visualization of functioning in patients with spinal cord injury. The resulting graph consisted of several connected components which can be used for dimension reduction. Moreover, we found that the differences in the dependence structures between subpopulations were relevant and could be systematically analyzed using graphical models. Finally, when estimating bounds on causal effects of ICF categories on general health perceptions among patients with chronic health conditions, we found that the five ICF categories that showed the strongest effect were plausible. Conclusions Graphical Models are a flexible tool and lend themselves for a wide range of applications. In particular, studies involving ICF data seem to be suited for analysis using graphical models. PMID:20149230

  14. Operations for Learning with Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. These operations adapt existing techniques from statistics and automatic differentiation to graphs. Two standard algorithm schemes for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Some algorithms are developed in this graphical framework including a generalized version of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing some popular algorithms that fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms.
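
    Of the two algorithm schemes reviewed, Gibbs sampling is the easier to show in miniature. The sketch below samples a bivariate Gaussian by alternately drawing each variable from its conditional given the other, the node-by-node pattern that the graphical formulation makes explicit; the correlation value is arbitrary.

```python
# Minimal Gibbs sampler for a standard bivariate Gaussian with
# correlation rho: alternately resample each variable from its
# conditional distribution given the other.
import numpy as np

rng = np.random.default_rng(3)
rho, n_iter = 0.8, 5000
x = y = 0.0
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))     # draw x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))     # draw y | x
    samples[t] = x, y

print("empirical correlation:", np.corrcoef(samples[1000:].T)[0, 1])
```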

  15. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559

  16. Modelling structured data with Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Forbes, F.

    2016-05-01

    Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph, which need not be regular like a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described, and some illustrations are given, associated with practical work.
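
    As a small illustration of the hidden-MRF view of segmentation, the sketch below runs iterated conditional modes (ICM) on a noisy two-class image: each pixel takes the label that balances a Gaussian data term against agreement with its four grid neighbours. ICM is only one simple inference choice, and the class means, noise level, and smoothing weight are invented.

```python
# Hidden-MRF segmentation sketch via iterated conditional modes (ICM):
# each pixel takes the label minimizing a Gaussian data term plus a
# Potts smoothness term over its 4 grid neighbours. Parameters invented.
import numpy as np

rng = np.random.default_rng(4)
truth = np.zeros((40, 40), dtype=int)
truth[:, 20:] = 1
img = rng.normal(truth.astype(float), 0.8)           # noisy two-class image
means, beta = np.array([0.0, 1.0]), 1.5
labels = (img > 0.5).astype(int)                      # initial labelling

for _ in range(5):                                    # ICM sweeps
    for i in range(40):
        for j in range(40):
            nbrs = [labels[a, b] for a, b in
                    ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < 40 and 0 <= b < 40]
            cost = [(img[i, j] - means[k]) ** 2
                    - beta * sum(n == k for n in nbrs) for k in (0, 1)]
            labels[i, j] = int(np.argmin(cost))

print("agreement with ground truth:", (labels == truth).mean())
```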

  17. Graphical Models and Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Almond, Russell G.; Mislevy, Robert J.

    1999-01-01

    Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)

  18. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.

  19. Probabilistic graphical model representation in phylogenetics.

    PubMed

    Höhna, Sebastian; Heath, Tracy A; Boussau, Bastien; Landis, Michael J; Ronquist, Fredrik; Huelsenbeck, John P

    2014-09-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis-Hastings or Gibbs sampling of the posterior distribution. PMID:24951559

  20. Light reflection models for computer graphics.

    PubMed

    Greenberg, D P

    1989-04-14

    During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future. PMID:17835348

  1. Graphical Model Theory for Wireless Sensor Networks

    SciTech Connect

    Davis, William B.

    2002-12-08

    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
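
    On a chain-structured model the junction tree algorithm reduces to a forward and a backward sweep of messages, which is easy to show in full. The toy sketch below computes exact node marginals for a four-node chain of binary variables with random pairwise potentials; the decentralized flavour comes from each node combining only the messages from its neighbours.

```python
# Forward/backward message passing (what the junction tree reduces to on
# a chain): exact node marginals for a 4-node chain of binary variables
# with random pairwise potentials.
import numpy as np

rng = np.random.default_rng(5)
n = 4
psi = [rng.uniform(0.5, 2.0, size=(2, 2)) for _ in range(n - 1)]

fwd = [np.ones(2)]                         # messages flowing left -> right
for k in range(n - 1):
    fwd.append(fwd[-1] @ psi[k])
bwd = [np.ones(2)]                         # messages flowing right -> left
for k in reversed(range(n - 1)):
    bwd.append(psi[k] @ bwd[-1])
bwd = bwd[::-1]

for v in range(n):
    marg = fwd[v] * bwd[v]
    print(f"node {v} marginal: {marg / marg.sum()}")
```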

  2. Operations on Graphical Models with Plates

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper explains how graphical models, for instance Bayesian or Markov networks, can be extended to model problems in data analysis and learning. This provides a unified framework that combines lessons learned from the artificial intelligence, statistical and connectionist communities. This also offers a set of principles for developing a software generator for data analysis, whereby a learning or discovery system can be compiled from specifications. Many of the popular learning algorithms can be compiled in this way from graphical specifications. While in a sense this paper is a multidisciplinary review of learning, the main contribution here is the presentation of the material within the unifying framework of graphical models, and the observation that, as a result, the process of developing learning algorithms can be partly automated.

  3. Data Analysis with Graphical Models: Software Tools

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  4. Joint estimation of multiple graphical models

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2011-01-01

    Gaussian graphical models explore dependence relationships between random variables, through the estimation of the corresponding inverse covariance matrices. In this paper we develop an estimator for such models appropriate for data from several graphical models that share the same variables and some of the dependence structure. In this setting, estimating a single graphical model would mask the underlying heterogeneity, while estimating separate models for each category does not take advantage of the common structure. We propose a method that jointly estimates the graphical models corresponding to the different categories present in the data, aiming to preserve the common structure, while allowing for differences between the categories. This is achieved through a hierarchical penalty that targets the removal of common zeros in the inverse covariance matrices across categories. We establish the asymptotic consistency and sparsity of the proposed estimator in the high-dimensional case, and illustrate its performance on a number of simulated networks. An application to learning semantic connections between terms from webpages collected from computer science departments is included. PMID:23049124

  5. Planar graphical models which are easy

    SciTech Connect

    Chertkov, Michael; Chernyak, Vladimir

    2009-01-01

    We describe a rich family of binary-variable statistical mechanics models on planar graphs which are equivalent to Gaussian Grassmann graphical models (free fermions). Calculation of the partition function (weighted counting) in these models is easy (of polynomial complexity), since it reduces to evaluation of determinants of matrices whose size is linear in the number of variables. In particular, this family of models covers the holographic algorithms of Valiant and extends the gauge transformations discussed in our previous works.

  6. Software for Data Analysis with Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Roy, H. Scott

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  7. Item Screening in Graphical Loglinear Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Christensen, Karl Bang

    2011-01-01

    In behavioural sciences, local dependence and DIF are common, and purification procedures that eliminate items with these weaknesses often result in short scales with poor reliability. Graphical loglinear Rasch models (Kreiner & Christensen, in "Statistical Methods for Quality of Life Studies," ed. by M. Mesbah, F.C. Cole & M.T. Lee, Kluwer…

  8. Research on graphical workflow modeling tool

    NASA Astrophysics Data System (ADS)

    Gu, Hongjiu

    2013-07-01

    Through a technical analysis of existing modeling tools, combined with Web technology, this paper presents the design of a graphical workflow modeling tool with which designers can draw processes directly in the browser; the drawn process is automatically transformed into an XML description file, facilitating analysis by the workflow engine and barrier-free sharing of workflow data in a networked environment. The tool offers software reusability, cross-platform operation, scalability, and strong practicality.

  9. Planar graphical models which are easy

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Chertkov, Michael

    2010-11-01

    We describe a rich family of binary variables statistical mechanics models on a given planar graph which are equivalent to Gaussian Grassmann graphical models (free fermions) defined on the same graph. Calculation of the partition function (weighted counting) for such a model is easy (of polynomial complexity) as it is reducible to evaluation of a Pfaffian of a matrix of size equal to twice the number of edges in the graph. In particular, this approach touches upon holographic algorithms of Valiant and utilizes the gauge transformations discussed in our previous works.

  10. Graphics

    ERIC Educational Resources Information Center

    Post, Susan

    1975-01-01

    An art teacher described an elective course in graphics which was designed to enlarge a student's knowledge of value, color, shape within a shape, transparency, line and texture. This course utilized the technique of working a multi-colored print from a single block that was first introduced by Picasso. (Author/RK)

  11. The cluster graphical lasso for improved estimation of Gaussian graphical models

    PubMed Central

    Tan, Kean Ming; Witten, Daniela; Shojaie, Ali

    2015-01-01

    The task of estimating a Gaussian graphical model in the high-dimensional setting is considered. The graphical lasso, which involves maximizing the Gaussian log likelihood subject to a lasso penalty, is a well-studied approach for this task. A surprising connection between the graphical lasso and hierarchical clustering is introduced: the graphical lasso in effect performs a two-step procedure, in which (1) single linkage hierarchical clustering is performed on the variables in order to identify connected components, and then (2) a penalized log likelihood is maximized on the subset of variables within each connected component. Thus, the graphical lasso determines the connected components of the estimated network via single linkage clustering. Single linkage clustering is known to perform poorly in certain finite-sample settings. Therefore, the cluster graphical lasso, which involves clustering the features using an alternative to single linkage clustering, and then performing the graphical lasso on the subset of variables within each cluster, is proposed. Model selection consistency for this technique is established, and its improved performance relative to the graphical lasso is demonstrated in a simulation study, as well as in applications to university webpage and gene expression data sets. PMID:25642008
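
    The two-step procedure lends itself to a short sketch: cluster the variables with average linkage (one alternative to single linkage), then fit a graphical lasso within each cluster. The dissimilarity, cut height, and penalty below are illustrative choices on synthetic data, not the paper's tuned settings.

```python
# Two-step cluster graphical lasso sketch: average-linkage clustering of
# the variables, then a graphical lasso within each cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 6))
X[:, 1] += X[:, 0]
X[:, 4] += X[:, 3]                                   # two correlated blocks

dist = squareform(1 - np.abs(np.corrcoef(X, rowvar=False)), checks=False)
groups = fcluster(linkage(dist, method="average"), t=0.7,
                  criterion="distance")

for g in np.unique(groups):
    idx = np.where(groups == g)[0]
    if len(idx) < 2:
        continue                                     # skip singleton clusters
    gl = GraphicalLasso(alpha=0.1).fit(X[:, idx])
    print(f"cluster {idx.tolist()}:\n{gl.precision_.round(2)}")
```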

  12. Connections between Graphical Gaussian Models and Factor Analysis

    ERIC Educational Resources Information Center

    Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.

    2010-01-01

    Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…

  13. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
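
    One way to implement the idea is a line-up: hide the observed residual plot among plots simulated from the fitted model, so the reader calibrates on what the plot looks like when the assumptions hold. The sketch below does this for a simple linear fit with deliberately heteroscedastic data; the layout and panel count are arbitrary.

```python
# "Instant experience" line-up sketch: the observed residual plot is
# hidden among panels of residuals simulated from the fitted model.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 80)
y = 2 + 0.5 * x + rng.normal(0, 1, 80) * (1 + 0.2 * x)  # heteroscedastic
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)

fig, axes = plt.subplots(2, 4, figsize=(10, 5))
slot = rng.integers(8)                     # random panel for the real plot
for k, ax in enumerate(axes.ravel()):
    r = resid if k == slot else rng.normal(0, resid.std(), 80)  # null data
    ax.scatter(x, r, s=8)
    ax.axhline(0, color="grey")
fig.suptitle("Which panel is the real residual plot?")
plt.show()
```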

  14. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  15. Mixed Methods Analysis and Information Visualization: Graphical Display for Effective Communication of Research Results

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Dickinson, Wendy B.

    2008-01-01

    In this paper, we introduce various graphical methods that can be used to represent data in mixed research. First, we present a broad taxonomy of visual representation. Next, we use this taxonomy to provide an overview of visual techniques for quantitative data display and qualitative data display. Then, we propose what we call "crossover" visual…

  16. A Guide to the Literature on Learning Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Friedland, Peter (Technical Monitor)

    1994-01-01

    This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and more generally, learning probabilistic graphical models. Because many problems in artificial intelligence, statistics and neural networks can be represented as a probabilistic graphical model, this area provides a unifying perspective on learning. This paper organizes the research in this area along methodological lines of increasing complexity.

  17. Interactive graphical model building using telepresence and virtual reality

    SciTech Connect

    Cooke, C.; Stansfield, S.

    1993-10-01

    This paper presents a prototype system developed at Sandia National Laboratories to create and verify computer-generated graphical models of remote physical environments. The goal of the system is to create an interface between an operator and a computer vision system so that graphical models can be created interactively. Virtual reality and telepresence are used to allow interaction between the operator, computer, and remote environment. A stereo view of the remote environment is produced by two CCD cameras. The cameras are mounted on a three degree-of-freedom platform which is slaved to a mechanically-tracked, stereoscopic viewing device. This gives the operator a sense of immersion in the physical environment. The stereo video is enhanced by overlaying the graphical model onto it. Overlay of the graphical model onto the stereo video allows visual verification of graphical models. Creation of a graphical model is accomplished by allowing the operator to assist the computer in modeling. The operator controls a 3-D cursor to mark objects to be modeled. The computer then automatically extracts positional and geometric information about the object and creates the graphical model.

  18. Retrospective Study on Mathematical Modeling Based on Computer Graphic Processing

    NASA Astrophysics Data System (ADS)

    Zhang, Kai Li

    Graphics and image making is an important field of computer application in which visualization software has been widely used for its convenience and speed. However, modeling designers have found such software limited in function and flexibility because it lacks a mathematical modeling platform. Non-visualization graphics software, which has recently appeared, gives graphics and image design a solid mathematical modeling platform. In this paper, a polished pyramid is constructed using a multivariate spline function algorithm, validating that non-visualization software performs well in mathematical modeling.

  19. Robust Gaussian Graphical Modeling via l1 Penalization

    PubMed Central

    Sun, Hokeun; Li, Hongzhe

    2012-01-01

    Gaussian graphical models have been widely used as an effective method for studying the conditional independency structure among genes and for constructing genetic networks. However, gene expression data typically have heavier tails or more outlying observations than the standard Gaussian distribution. Such outliers in gene expression data can lead to wrong inference on the dependency structure among the genes. We propose an l1-penalized estimation procedure for sparse Gaussian graphical models that is robustified against possible outliers. The likelihood function is weighted according to how much the observation deviates, where the deviation of the observation is measured based on its own likelihood. An efficient computational algorithm based on the coordinate gradient descent method is developed to obtain the minimizer of the negative penalized robustified likelihood, where nonzero elements of the concentration matrix represent the graphical links among the genes. After the graphical structure is obtained, we re-estimate the positive definite concentration matrix using an iterative proportional fitting algorithm. Through simulations, we demonstrate that the proposed robust method performs much better than the graphical lasso for Gaussian graphical models in terms of both graph structure selection and estimation when outliers are present. We apply the robust estimation procedure to an analysis of yeast gene expression data and show that the resulting graph has better biological interpretation than that obtained from the graphical lasso. PMID:23020775
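
    The weighting idea can be sketched compactly (this is an illustration of the principle, not the authors' algorithm): fit once, downweight observations that are unlikely under the fit, recompute a weighted covariance, and refit.

```python
# Robustification sketch (illustration of the weighting principle, not
# the paper's algorithm): downweight observations unlikely under a first
# fit, recompute a weighted covariance, refit the graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLasso, graphical_lasso

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 5))
X[:10] += 8.0                              # a few gross outliers

gl = GraphicalLasso(alpha=0.1).fit(X)
centred = X - X.mean(axis=0)
maha = np.einsum("ij,jk,ik->i", centred, gl.precision_, centred)
w = np.exp(-0.5 * maha)
w /= w.sum()                               # likelihood-based weights

cov_w = (centred * w[:, None]).T @ centred # weighted covariance
_, prec_robust = graphical_lasso(cov_w, alpha=0.1)
print(prec_robust.round(2))
```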

  20. A probabilistic graphical model based stochastic input model construction

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2014-09-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media.

  1. Progress in mix modeling

    SciTech Connect

    Harrison, A.K.

    1997-03-14

    We have identified the Cranfill multifluid turbulence model (Cranfill, 1992) as a starting point for development of subgrid models of instability, turbulent and mixing processes. We have differenced the closed system of equations in conservation form, and coded them in the object-oriented hydrodynamics code FLAG, which is to be used as a testbed for such models.

  2. Mixed Markov models

    PubMed Central

    Fridman, Arthur

    2003-01-01

    Markov random fields can encode complex probabilistic relationships involving multiple variables and admit efficient procedures for probabilistic inference. However, from a knowledge engineering point of view, these models suffer from a serious limitation. The graph of a Markov field must connect all pairs of variables that are conditionally dependent even for a single choice of values of the other variables. This makes it hard to encode interactions that occur only in a certain context and are absent in all others. Furthermore, the requirement that two variables be connected unless always conditionally independent may lead to excessively dense graphs, obscuring the independencies present among the variables and leading to computationally prohibitive inference algorithms. Mumford [Mumford, D. (1996) in ICIAM 95, eds. Kirchgassner, K., Marenholtz, O. & Mennicken, R. (Akademie Verlag, Berlin), pp. 233–256] proposed an alternative modeling framework where the graph need not be rigid and completely determined a priori. Mixed Markov models contain node-valued random variables that, when instantiated, augment the graph by a set of transient edges. A single joint probability distribution relates the values of regular and node-valued variables. In this article, we study the analytical and computational properties of mixed Markov models. In particular, we show that positive mixed models have a local Markov property that is equivalent to their global factorization. We also describe a computationally efficient procedure for answering probabilistic queries in mixed Markov models. PMID:12829802

  3. ADVANCED MIXING MODELS

    SciTech Connect

    Lee, S; Dimenna, R; Tamburello, D

    2011-02-14

    Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (e.g., typically ~13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators to answer the questions concerning sludge mixing stated above in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?

  4. Integrating Surface Modeling into the Engineering Design Graphics Curriculum

    ERIC Educational Resources Information Center

    Hartman, Nathan W.

    2006-01-01

    It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…

  5. Transient thermoregulatory model with graphics output

    NASA Technical Reports Server (NTRS)

    Grounds, D. J.

    1974-01-01

    A user's guide is presented for the transient version of the thermoregulatory model. The model is designed to simulate the transient response of the human thermoregulatory system to thermal inputs. The model consists of 41 compartments over which the terms of the heat balance are computed. The control mechanisms identified are sweating, vasoconstriction, and vasodilation.

  6. Mining functional modules in genetic networks with decomposable graphical models.

    PubMed

    Dejori, Mathäus; Schwaighofer, Anton; Tresp, Volker; Stetter, Martin

    2004-01-01

    In recent years, graphical models have become an increasingly important tool for the structural analysis of genome-wide expression profiles at the systems level. Here we present a new graphical modelling technique, which is based on decomposable graphical models, and apply it to a set of gene expression profiles from acute lymphoblastic leukemia (ALL). The new method explains probabilistic dependencies of expression levels in terms of the concerted action of underlying genetic functional modules, which are represented as so-called "cliques" in the graph. In addition, the method uses continuous-valued (instead of discretized) expression levels, and makes no particular assumption about their probability distribution. We show that the method successfully groups members of known functional modules to cliques. Our method allows the evaluation of the importance of genes for global cellular functions based on both link count and the clique membership count. PMID:15268775
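
    Reading modules off a graph as cliques is easy to demonstrate once a graph is in hand. The sketch below builds a crude correlation-thresholded graph over synthetic "genes" (a stand-in for the paper's learned decomposable model) and lists maximal cliques and per-gene clique-membership counts with networkx; the threshold is arbitrary.

```python
# Clique-reading sketch: a crude correlation-thresholded graph over
# synthetic "genes" stands in for the learned decomposable model; maximal
# cliques and membership counts come from networkx.
import numpy as np
import networkx as nx

rng = np.random.default_rng(9)
expr = rng.normal(size=(100, 6))
expr[:, 1] += expr[:, 0]
expr[:, 2] += expr[:, 0]                   # a co-regulated trio 0-1-2

corr = np.corrcoef(expr, rowvar=False)
G = nx.Graph([(i, j) for i in range(6) for j in range(i + 1, 6)
              if abs(corr[i, j]) > 0.4])

cliques = list(nx.find_cliques(G))
for c in cliques:
    print("module (maximal clique):", sorted(c))
print("membership counts:",
      {g: sum(g in c for c in cliques) for g in G.nodes})
```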

  7. Teaching Geometry through Dynamic Modeling in Introductory Engineering Graphics.

    ERIC Educational Resources Information Center

    Wiebe, Eric N.; Branoff, Ted J.; Hartman, Nathan W.

    2003-01-01

    Examines how constraint-based 3D modeling can be used as a vehicle for rethinking instructional approaches to engineering design graphics. Focuses on moving from a mode of instruction based on the crafting by students and assessment by instructors of static 2D drawings and 3D models. Suggests that the new approach is better aligned with…

  8. ADVANCED MIXING MODELS

    SciTech Connect

    Lee, S; Dimenna, R; Tamburello, D

    2008-11-13

    The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times and high maintenance and repair costs for the same reason. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and

  9. MAGIC: Model and Graphic Information Converter

    NASA Technical Reports Server (NTRS)

    Herbert, W. C.

    2009-01-01

    MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.

  10. Color mixing models

    NASA Astrophysics Data System (ADS)

    Harrington, Steven J.

    1992-05-01

    In black-and-white printing the page image can be represented within a computer as an array of binary values indicating whether or not pixels should be inked. The Boolean operators of AND, OR, and EXCLUSIVE-OR are often used when adding new objects to the image array. For color printing the page may be represented as an array of continuous tone color values, and the generalization of these logic functions to gray-scale or full-color images is, in general, not defined or understood. When incrementally composing a page image new colors can replace old in an image buffer, or new colors and old can be combined according to some mixing function to form a composite color which is stored. This paper examines the properties of the Boolean operations and suggests full-color mixing functions which preserve the desired properties. These functions can be used to combine colored images, giving various transparency effects. The relationships between the mixing functions and physical models of color mixing are also discussed.
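
    One common family of such mixing functions (a plausible choice consistent with the abstract, not necessarily the article's exact functions) treats colour components on [0, 1]: multiplication generalizes AND, the "screen" combination generalizes OR, and absolute difference generalizes EXCLUSIVE-OR, each reducing to the Boolean operator at the extreme values 0 and 1.

```python
# Continuous colour-mixing functions that reduce to the Boolean operators
# at component values 0 and 1 (a common choice, shown for illustration).
import numpy as np

def mix_and(a, b):                          # multiply: restricts, like AND
    return a * b

def mix_or(a, b):                           # "screen": accumulates, like OR
    return a + b - a * b

def mix_xor(a, b):                          # difference: behaves like XOR
    return np.abs(a - b)

old = np.array([0.8, 0.2, 0.1])             # stored RGB pixel
new = np.array([0.3, 0.9, 0.1])             # incoming RGB pixel
for name, f in [("AND", mix_and), ("OR", mix_or), ("XOR", mix_xor)]:
    print(name, f(old, new).round(3))
```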

  11. Graphical Approach to Model Reduction for Nonlinear Biochemical Networks

    PubMed Central

    Holland, David O.; Krainak, Nicholas C.; Saucerman, Jeffrey J.

    2011-01-01

    Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a “concentration-clamp” procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β1-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal “kinetic biomarkers” of the overall β1-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems. PMID:21901136

  12. Workflow modeling in the graphic arts and printing industry

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris

    2003-12-01

    Over the last few years, a lot of effort has been spent on standardization of the workflow in the graphic arts and printing industry. The main reasons for this standardization are two-fold: first of all, the need to represent all aspects of products, processes and resources in a uniform, digital framework and, secondly, the need to have different systems communicate with each other without having to implement dedicated drivers or protocols. For many years, a number of organizations in the IT sector have been developing models and languages on the topic of workflow modeling. In addition to the more formal methods (such as, e.g., extended finite state machines, Petri Nets, Markov Chains etc.) introduced a number of decades ago, more pragmatic methods have been proposed quite recently. We think in particular of the activities of the Workflow Management Coalition, which resulted in an XML-based Process Definition Language. Although one might be tempted to use the already established standards in the graphic environment, one should be well aware of the complexity and uniqueness of the graphic arts workflow. In this paper, we will show that it is quite hard though not impossible to model the graphic arts workflow using the already established workflow systems. After a brief summary of the graphic arts workflow requirements, we will show why the traditional models are less suitable to use. It will turn out that one of the main reasons for the incompatibility is that the graphic arts workflow is primarily resource driven; this means that the activation of processes depends on the status of different incoming resources. The fact that processes can start running with a partial availability of the input resources is a further complication that asks for additional knowledge on process level. In the second part of this paper, we will discuss in more detail the different software components that are available in any graphic enterprise. In the last part, we will

  13. A PC-based graphical simulator for physiological pharmacokinetic models.

    PubMed

    Wada, D R; Stanski, D R; Ebling, W F

    1995-04-01

    Since many intravenous anesthetic drugs alter blood flows, physiologically-based pharmacokinetic models describing drug disposition may be time-varying. Using the commercially available programming software MATLAB, a platform to simulate time-varying physiological pharmacokinetic models was developed. The platform is based upon a library of pharmacokinetic blocks which mimic physiological structure. The blocks can be linked together flexibly to form models for different drugs. Because of MATLAB's additional numerical capabilities (e.g. non-linear optimization), the platform provides a complete graphical microcomputer-based tool for physiologic pharmacokinetic modeling. PMID:7656558
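    The platform itself is MATLAB-based; purely as an illustration of a time-varying physiological pharmacokinetic block, here is a hedged Python sketch with invented parameter values:

        # Illustrative sketch (not the original MATLAB platform): a
        # two-compartment physiological model in which a "blood flow"
        # parameter varies with time, making the system time-varying.
        import numpy as np
        from scipy.integrate import odeint

        def flow(t):
            return 1.0 + 0.5 * np.exp(-t / 10.0)   # hypothetical drug-altered flow

        def model(c, t, v1=10.0, v2=30.0, cl=0.8):
            c1, c2 = c
            q = flow(t)
            dc1 = (q * (c2 - c1) - cl * c1) / v1    # central compartment
            dc2 = (q * (c1 - c2)) / v2              # peripheral tissue
            return [dc1, dc2]

        t = np.linspace(0.0, 60.0, 601)
        conc = odeint(model, [5.0, 0.0], t)        # bolus dose into compartment 1
        print(conc[-1])                            # concentrations at t = 60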

  14. Probabilistic graphic models applied to identification of diseases.

    PubMed

    Sato, Renato Cesar; Sato, Graziela Tiemy Kajita

    2015-01-01

    Decision-making is fundamental when making a diagnosis or choosing a treatment. The broad dissemination of computer systems and databases allows part of these decisions to be systematized through artificial intelligence. In this text, we present the basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used in the diagnosis of Alzheimer's disease, sleep apnea, and heart disease. PMID:26154555

  15. Probabilistic graphic models applied to identification of diseases

    PubMed Central

    Sato, Renato Cesar; Sato, Graziela Tiemy Kajita

    2015-01-01

    Decision-making is fundamental when making a diagnosis or choosing a treatment. The broad dissemination of computer systems and databases allows part of these decisions to be systematized through artificial intelligence. In this text, we present the basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used in the diagnosis of Alzheimer's disease, sleep apnea, and heart disease. PMID:26154555
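    In the smallest case, the diagnostic use of such models reduces to Bayes' rule on a two-node network. A toy example with invented probabilities:

        # A minimal illustration of the idea: a two-node Bayesian network
        # (disease -> test) evaluated with Bayes' rule. Numbers are hypothetical.
        p_disease = 0.01                 # prior P(D)
        p_pos_given_d = 0.90             # sensitivity P(+|D)
        p_pos_given_not_d = 0.05         # false-positive rate P(+|~D)

        p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
        p_d_given_pos = p_pos_given_d * p_disease / p_pos
        print(f"P(disease | positive test) = {p_d_given_pos:.3f}")   # ~0.154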

  16. Mixed additive models

    NASA Astrophysics Data System (ADS)

    Carvalho, Francisco; Covas, Ricardo

    2016-06-01

    We consider mixed models y = ∑_{i=0}^{w} X_i β_i with V(y) = ∑_{i=1}^{w} θ_i M_i, where M_i = X_i X_i⊤, i = 1, ..., w, and μ = X_0 β_0. For these we estimate the variance components θ_1, ..., θ_w, as well as estimable vectors, through the decomposition of the initial model into sub-models y(h), h ∈ Γ, with V(y(h)) = γ(h) I_{g(h)}, h ∈ Γ. Moreover, we consider L extensions of these models, i.e., ẙ = Ly + ε, where L = D(1_{n_1}, ..., 1_{n_w}) and ε, independent of y, has null mean vector and variance-covariance matrix θ_{w+1} I_n, where n = ∑_{i=1}^{w} n_i.
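    For the simplest instance of this model class, a one-way random effects layout, variance components can be illustrated with a method-of-moments (ANOVA-type) estimator. This sketch is not the sub-model decomposition of the paper, just a hedged numerical illustration:

        # Method-of-moments estimation of the two variance components in a
        # one-way random effects model y = X0*b0 + X1*u1 + e (toy data).
        import numpy as np

        rng = np.random.default_rng(0)
        w, n_per = 20, 8                           # 20 groups, 8 obs each
        theta1, theta2 = 2.0, 1.0                  # true group / residual variances
        u = rng.normal(0.0, np.sqrt(theta1), w)
        y = 5.0 + np.repeat(u, n_per) + rng.normal(0.0, np.sqrt(theta2), w * n_per)

        groups = y.reshape(w, n_per)
        msb = n_per * np.var(groups.mean(axis=1), ddof=1)   # between-group mean square
        msw = np.mean(np.var(groups, axis=1, ddof=1))       # within-group mean square
        print("theta2 (residual):", msw)                    # E[MSW] = theta2
        print("theta1 (group):   ", (msb - msw) / n_per)    # E[MSB] = theta2 + n*theta1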

  17. Graphical models of residue coupling in protein families.

    PubMed

    Thomas, John; Ramakrishnan, Naren; Bailey-Kellogg, Chris

    2008-01-01

    Many statistical measures and algorithmic techniques have been proposed for studying residue coupling in protein families. Generally speaking, two residue positions are considered coupled if, in the sequence record, some of their amino acid type combinations are significantly more common than others. While the proposed approaches have proven useful in finding and describing coupling, a significant missing component is a formal probabilistic model that explicates and compactly represents the coupling, integrates information about sequence, structure, and function, and supports inferential procedures for analysis, diagnosis, and prediction. We present an approach to learning and using probabilistic graphical models of residue coupling. These models capture significant conservation and coupling constraints observable in a multiply aligned set of sequences. Our approach can place a structural prior on considered couplings, so that all identified relationships have direct mechanistic explanations. It can also incorporate information about functional classes, and thereby learn a differential graphical model that distinguishes constraints common to all classes from those unique to individual classes. Such differential models separately account for class-specific conservation and family-wide coupling, two different sources of sequence covariation. They are then able to perform interpretable functional classification of new sequences, explaining classification decisions in terms of the underlying conservation and coupling constraints. We apply our approach in studies of both G protein-coupled receptors and PDZ domains, identifying and analyzing family-wide and class-specific constraints, and performing functional classification. The results demonstrate that graphical models of residue coupling provide a powerful tool for uncovering, representing, and utilizing significant sequence-structure-function relationships in protein families. PMID:18451428
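    The coupling notion above can be made concrete with a small statistic; the sketch below computes mutual information between two alignment columns, which is high when some residue combinations are over-represented (the tiny "alignment" is invented):

        # Toy illustration: mutual information between two alignment columns.
        import numpy as np
        from collections import Counter

        col_a = list("AAAALLLL")
        col_b = list("TTTTSSSS")        # perfectly coupled with col_a

        def mutual_information(a, b):
            n = len(a)
            pa, pb = Counter(a), Counter(b)
            pab = Counter(zip(a, b))
            return sum((c / n) * np.log2((c / n) / (pa[x] / n * pb[y] / n))
                       for (x, y), c in pab.items())

        print(f"MI(col_a, col_b) = {mutual_information(col_a, col_b):.2f} bits")  # 1.00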

  18. SN_GUI: a graphical user interface for snowpack modeling

    NASA Astrophysics Data System (ADS)

    Spreitzhofer, G.; Fierz, C.; Lehning, M.

    2004-10-01

    SNOWPACK is a physical snow cover model. The model not only serves as a valuable research tool, but also runs operationally on a network of high Alpine automatic weather and snow measurement sites. In order to facilitate the operation of SNOWPACK and the interpretation of the results obtained by this model, a user-friendly graphical user interface for snowpack modeling, named SN_GUI, was created. This Java-based and thus platform-independent tool can be operated in two modes, one designed to fulfill the requirements of avalanche warning services (e.g. by providing information about critical layers within the snowpack that are closely related to avalanche activity), and the other offering a variety of additional options satisfying the needs of researchers. The user of SN_GUI is graphically guided through the entire process of creating snow cover simulations. The starting point is the efficient creation of input parameter files for SNOWPACK, followed by the launching of SNOWPACK with a variety of parameter settings. Finally, after the successful termination of the run, a number of interactive display options may be used to visualize the model output. Among these are vertical profiles and time profiles for many parameters. Among other features, SN_GUI allows the use of various color, time and coordinate scales, and the comparison of measured and observed parameters.

  19. Collaborative multi organ segmentation by integrating deformable and graphical models.

    PubMed

    Uzunbaş, Mustafa Gökhan; Chen, Chao; Zhang, Shaoting; Poh, Kilian M; Li, Kang; Metaxas, Dimitris

    2013-01-01

    Organ segmentation is a challenging problem on which significant progress has been made. Deformable models (DM) and graphical models (GM) are two important categories of optimization-based image segmentation methods. Efforts have been made to integrate the two types of models into one framework. However, previous methods are not designed for segmenting multiple organs simultaneously and accurately. In this paper, we propose a hybrid multi organ segmentation approach by integrating DM and GM in a coupled optimization framework. Specifically, we show that region-based deformable models can be integrated with Markov Random Fields (MRF), such that the evolution of multiple models is driven by maximum a posteriori (MAP) inference. This brings global and local deformation constraints into a unified framework for simultaneous segmentation of multiple objects in an image. We validate the proposed method on two challenging multi organ segmentation problems, and the results are promising. PMID:24579136

  20. Developing satellite ground control software through graphical models

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney; Henderson, Scott; Paterra, Frank; Truszkowski, Walt

    1992-01-01

    This paper discusses a program of investigation into software development as graphical modeling. The goal of this work is a more efficient development and maintenance process for the ground-based software that controls unmanned scientific satellites launched by NASA. The main hypothesis of the program is that modeling of the spacecraft and its subsystems, and reasoning about such models, can--and should--form the key activities of software development; by using such models as inputs, the generation of code to perform various functions (such as simulation and diagnostics of spacecraft components) can be automated. Moreover, we contend that automation can provide significant support for reasoning about the software system at the diagram level.

  1. Ice-sheet modelling accelerated by graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.

  2. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  3. Groundwater modeling and remedial optimization design using graphical user interfaces

    SciTech Connect

    Deschaine, L.M.

    1997-05-01

    The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to effectively communicate the information to the client and regulators, is what differentiates effective consultants from ineffective consultants. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. GUIs facilitate both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.

  4. De novo protein conformational sampling using a probabilistic graphical model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-11-01

    Efficient exploration of protein conformational space remains challenging, especially for large proteins, when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.

  5. Mode Estimation for High Dimensional Discrete Tree Graphical Models

    PubMed Central

    Chen, Chao; Liu, Han; Metaxas, Dimitris N.; Zhao, Tianqi

    2014-01-01

    This paper studies the following problem: given samples from a high dimensional discrete distribution, we want to estimate the leading (δ, ρ)-modes of the underlying distributions. A point is defined to be a (δ, ρ)-mode if it is a local optimum of the density within a δ-neighborhood under metric ρ. As we increase the “scale” parameter δ, the neighborhood size increases and the total number of modes monotonically decreases. The sequence of the (δ, ρ)-modes reveal intrinsic topographical information of the underlying distributions. Though the mode finding problem is generally intractable in high dimensions, this paper unveils that, if the distribution can be approximated well by a tree graphical model, mode characterization is significantly easier. An efficient algorithm with provable theoretical guarantees is proposed and is applied to applications like data analysis and multiple predictions. PMID:25620859
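    The tractability claim can be illustrated on a chain, the simplest tree: max-product dynamic programming recovers the exact global mode in O(nk²) time. The potentials below are invented, and this sketch covers only the single leading mode, not the full (δ, ρ)-mode sequence of the paper:

        # Exact global mode of a chain-structured discrete model via
        # max-product (Viterbi-style) dynamic programming.
        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 6, 4                                  # 6 variables, 4 states each
        unary = rng.random((n, k))                   # node potentials
        pair = rng.random((n - 1, k, k))             # edge potentials

        msg = np.log(unary[0])
        back = []
        for i in range(1, n):
            scores = msg[:, None] + np.log(pair[i - 1])   # k x k
            back.append(scores.argmax(axis=0))            # best predecessor per state
            msg = scores.max(axis=0) + np.log(unary[i])

        mode = [int(msg.argmax())]
        for bp in reversed(back):                    # backtrack to recover states
            mode.append(int(bp[mode[-1]]))
        mode.reverse()
        print("global mode:", mode)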

  6. De novo protein conformational sampling using a probabilistic graphical model

    PubMed Central

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-01-01

    Efficient exploration of protein conformational space remains challenging, especially for large proteins, when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/. PMID:26541939

  7. Kinematic modelling of disc galaxies using graphics processing units

    NASA Astrophysics Data System (ADS)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, as well as a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
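    As a hedged, CPU-only illustration of the model-fitting loop being accelerated (scipy's Levenberg-Marquardt solver stands in for the GPU code, and the rotation-curve model and data are invented):

        # Least-squares fit of a simple rotation-curve model to synthetic data.
        import numpy as np
        from scipy.optimize import least_squares

        def vrot(r, v_max, r_t):
            return v_max * np.tanh(r / r_t)      # simple saturating curve

        r = np.linspace(0.5, 15.0, 40)
        rng = np.random.default_rng(2)
        v_obs = vrot(r, 220.0, 2.5) + rng.normal(0.0, 5.0, r.size)

        # method="lm" selects the Levenberg-Marquardt algorithm.
        res = least_squares(lambda p: vrot(r, *p) - v_obs,
                            x0=[150.0, 1.0], method="lm")
        print("fitted v_max, r_t:", res.x)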

  8. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during the decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for significantly decreasing the simulation time [3, 4]. The numerical scheme implemented in GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against single-core (sequential) and multi-core (parallel) CPU implementations. References: [1] Juez, C., Murillo, J., & García-Navarro, P. (2014). A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources, 71, 93-109. [2] Juez, C., Murillo, J., & García-Navarro, P. (2013). 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics, 225, 166-204. [3] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014). An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software, 78, 1-15. [4] Lacasta

  9. Accelerating compartmental modeling on a graphical processing unit.

    PubMed

    Ben-Shalom, Roy; Liberman, Gilad; Korngreen, Alon

    2013-01-01

    Compartmental modeling is a widely used tool in neurophysiology, but the detail and scope of such models are frequently limited by a lack of computational resources. Here we implement compartmental modeling on low-cost Graphical Processing Units (GPUs), which significantly increases simulation speed compared to NEURON. Testing two methods for solving the current diffusion equation system revealed which method is more useful for specific neuron morphologies. Regions of applicability were investigated using a range of simulations, from a single membrane potential trace simulated on a simple fork morphology to multiple traces on multiple realistic cells. A peak speedup of 150-fold over the CPU was achieved. This application can be used for statistical analysis and data-fitting optimizations of compartmental models and may be used for simultaneously simulating large populations of neurons. Since GPUs are forging ahead and proving to be more cost-effective than CPUs, this may significantly decrease the cost of computation power and open new computational possibilities for laboratories with limited budgets. PMID:23508232
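    A core kernel in such simulators is the tridiagonal solve arising from implicit integration of the current diffusion (cable) equation. Below is a minimal sketch of the serial Thomas algorithm with illustrative numbers; GPU versions restructure this, e.g. by parallelizing across cells:

        # Thomas algorithm for the tridiagonal system of an implicit
        # cable-equation step (toy 5-compartment system).
        import numpy as np

        def thomas(lower, diag, upper, rhs):
            n = diag.size
            c, d = upper.astype(float).copy(), rhs.astype(float).copy()
            b = diag.astype(float).copy()
            for i in range(1, n):                 # forward elimination
                m = lower[i - 1] / b[i - 1]
                b[i] -= m * c[i - 1]
                d[i] -= m * d[i - 1]
            x = np.empty(n)
            x[-1] = d[-1] / b[-1]
            for i in range(n - 2, -1, -1):        # back substitution
                x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
            return x

        x = thomas(np.full(4, -1.0), np.full(5, 2.5), np.full(4, -1.0), np.ones(5))
        print(x)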

  10. Dynamics of Mental Model Construction from Text and Graphics

    ERIC Educational Resources Information Center

    Hochpöchler, Ulrike; Schnotz, Wolfgang; Rasch, Thorsten; Ullrich, Mark; Horz, Holger; McElvany, Nele; Baumert, Jürgen

    2013-01-01

    When students read for learning, they frequently are required to integrate text and graphics information into coherent knowledge structures. The following study aimed at analyzing how students deal with texts and how they deal with graphics when they try to integrate the two sources of information. Furthermore, the study investigated differences…

  11. An Arabidopsis gene network based on the graphical Gaussian model

    PubMed Central

    Ma, Shisong; Gong, Qingqiu; Bohnert, Hans J.

    2007-01-01

    We describe a gene network for the Arabidopsis thaliana transcriptome based on a modified graphical Gaussian model (GGM). Through partial correlation (pcor), GGM infers coregulation patterns between gene pairs conditional on the behavior of other genes. Regularized GGM calculated pcor between gene pairs among ∼2000 input genes at a time. Regularized GGM, coupled with iterative random samplings of genes, was expanded into a network that covered the Arabidopsis genome (22,266 genes). This resulted in a network of 18,625 high-confidence interactions (edges) among 6760 genes (nodes), with connections representing ∼0.01% of all possible edges. When queried for selected genes, locally coherent subnetworks, mainly related to metabolic functions and stress responses, emerged. Examples of networks for biochemical pathways, cell wall metabolism, and cold responses are presented. GGM displayed known coregulation pathways as subnetworks and added novel components to known edges. Finally, the network reconciled individual subnetworks in a topology joined at the whole-genome level and provided a general framework that can instruct future studies on plant metabolism and stress responses. The network model is included. PMID:17921353
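    The GGM building block can be sketched in a few lines: partial correlations read off the inverse covariance (precision) matrix. A plain inverse on well-conditioned toy data stands in for the paper's regularized estimator:

        # Partial correlations from the precision matrix (toy data).
        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.normal(size=(500, 5))
        x[:, 1] += x[:, 0]                # gene 1 coregulated with gene 0
        x[:, 2] += x[:, 1]                # gene 2 coregulated with gene 1 only

        prec = np.linalg.inv(np.cov(x, rowvar=False))
        d = np.sqrt(np.diag(prec))
        pcor = -prec / np.outer(d, d)     # partial correlation matrix
        np.fill_diagonal(pcor, 1.0)
        print(np.round(pcor, 2))          # pcor[0, 2] ~ 0: no direct 0-2 edge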

  12. Counterfactual graphical models for longitudinal mediation analysis with unobserved confounding.

    PubMed

    Shpitser, Ilya

    2013-08-01

    Questions concerning mediated causal effects are of great interest in psychology, cognitive science, medicine, social science, public health, and many other disciplines. For instance, about 60% of recent papers published in leading journals in social psychology contain at least one mediation test (Rucker, Preacher, Tormala, & Petty, 2011). Standard parametric approaches to mediation analysis employ regression models, and either the "difference method" (Judd & Kenny, 1981), more common in epidemiology, or the "product method" (Baron & Kenny, 1986), more common in the social sciences. In this article, we first discuss a known, but perhaps often unappreciated, fact that these parametric approaches are a special case of a general counterfactual framework for reasoning about causality first described by Neyman (1923) and Rubin (1974) and linked to causal graphical models by Robins (1986) and Pearl (2006). We then show a number of advantages of this framework. First, it makes the strong assumptions underlying mediation analysis explicit. Second, it avoids a number of problems present in the product and difference methods, such as biased estimates of effects in certain cases. Finally, we show the generality of this framework by proving a novel result which allows mediation analysis to be applied to longitudinal settings with unobserved confounders. PMID:23899340
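    For concreteness, the product method the article situates within the counterfactual framework amounts to multiplying two regression slopes. A hedged numeric illustration on simulated, unconfounded data:

        # Product-method indirect effect = a * b from two linear regressions.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 2000
        x = rng.normal(size=n)                       # treatment
        m = 0.5 * x + rng.normal(size=n)             # mediator: a = 0.5
        y = 0.7 * m + 0.2 * x + rng.normal(size=n)   # outcome: b = 0.7, direct = 0.2

        a = np.polyfit(x, m, 1)[0]                   # slope of m ~ x
        X = np.column_stack([np.ones(n), m, x])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        b = coef[1]                                  # slope of y ~ m, adjusting for x
        print("indirect effect (a*b):", a * b)       # ~ 0.35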

  13. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for its Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that under all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and get the output from the Simulink model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI will construct a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.

  14. A Graphical Method for Assessing the Identification of Linear Structural Equation Models

    ERIC Educational Resources Information Center

    Eusebi, Paolo

    2008-01-01

    A graphical method is presented for assessing the state of identifiability of the parameters in a linear structural equation model based on the associated directed graph. We do not restrict attention to recursive models. In the recent literature, methods based on graphical models have been presented as a useful tool for assessing the state of…

  15. JACK - ANTHROPOMETRIC MODELING SYSTEM FOR SILICON GRAPHICS WORKSTATIONS

    NASA Technical Reports Server (NTRS)

    Smith, B.

    1994-01-01

    JACK is an interactive graphics program developed at the University of Pennsylvania that displays and manipulates articulated geometric figures. JACK is typically used to observe how a human mannequin interacts with its environment and what effects body types will have upon the performance of a task in a simulated environment. Any environment can be created, and any number of mannequins can be placed anywhere in that environment. JACK includes facilities to construct limited geometric objects, position figures, perform a variety of analyses on the figures, describe the motion of the figures and specify lighting and surface property information for rendering high quality images. JACK is supplied with a variety of body types pre-defined and known to the system. There are both male and female bodies, ranging from the 5th to the 95th percentile, based on NASA Standard 3000. Each mannequin is fully articulated and reflects the joint limitations of a normal human. JACK is an editor for manipulating previously defined objects known as "Peabody" objects. Used to describe the figures as well as the internal data structure for representing them, Peabody is a language with a powerful and flexible mechanism for representing connectivity between objects, both the joints between individual segments within a figure and arbitrary connections between different figures. Peabody objects are generally comprised of several individual figures, each one a collection of segments. Each segment has a geometry represented by PSURF files that consist of polygons or curved surface patches. Although JACK does not have the capability to create new objects, objects may be created by other geometric modeling programs and then translated into the PSURF format. Environment files are a collection of figures and attributes that may be dynamically moved under the control of an animation file. The animation facilities allow the user to create a sequence of commands that duplicate the movements of a

  16. A Gaussian graphical model approach to climate networks

    NASA Astrophysics Data System (ADS)

    Zerenner, Tanja; Friederichs, Petra; Lehnertz, Klaus; Hense, Andreas

    2014-06-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data, the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.
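    A hedged sketch of the network-construction step, with scikit-learn's graphical lasso standing in for the authors' estimator and synthetic data in place of climate fields:

        # Sparse precision matrix via graphical lasso; edges are the non-zero
        # off-diagonal entries (i.e., direct, partial-correlation links).
        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(5)
        data = rng.normal(size=(300, 8))        # rows: time steps, cols: nodes
        data[:, 1] += 0.8 * data[:, 0]          # one direct dependency

        model = GraphicalLassoCV().fit(data)
        prec = model.precision_
        edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
                 if abs(prec[i, j]) > 1e-6]
        print("direct edges:", edges)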

  17. A Gaussian graphical model approach to climate networks

    SciTech Connect

    Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data, the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately.

  18. A Gaussian graphical model approach to climate networks.

    PubMed

    Zerenner, Tanja; Friederichs, Petra; Lehnertz, Klaus; Hense, Andreas

    2014-06-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data, the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to any physical processes may contain too strong simplifications to describe the dynamics of the climate system appropriately. PMID:24985417

  19. A mixed relaxed clock model

    PubMed Central

    2016-01-01

    Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall into two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions from its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue ‘Dating species divergences using rocks and clocks’. PMID:27325829

  20. A mixed relaxed clock model.

    PubMed

    Lartillot, Nicolas; Phillips, Matthew J; Ronquist, Fredrik

    2016-07-19

    Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall into two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions from its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue 'Dating species divergences using rocks and clocks'. PMID:27325829
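    A minimal sketch of the mixed-clock idea along a single path of branches (all parameter values invented): log-rates are a weighted combination of a Brownian (autocorrelated) trend and independent white noise, with the weight interpolating between the two pure clocks.

        # Simulate branch rates under a toy mixed relaxed clock.
        import numpy as np

        rng = np.random.default_rng(6)
        n_branches = 10
        sigma_bm, sigma_wn, weight = 0.3, 0.3, 0.5    # illustrative values

        trend = np.cumsum(rng.normal(0.0, sigma_bm, n_branches))  # Brownian trend
        noise = rng.normal(0.0, sigma_wn, n_branches)             # white noise
        log_rate = weight * trend + (1.0 - weight) * noise
        print("branch rates:", np.exp(log_rate).round(3))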

  1. Overview of Neutrino Mixing Models and Their Mixing Angle Predictions

    SciTech Connect

    Albright, Carl H.

    2009-11-01

    An overview of neutrino-mixing models is presented with emphasis on the types of horizontal flavor and vertical family symmetries that have been invoked. Distributions for the mixing angles of many models are displayed. Ways to differentiate among the models and to narrow the list of viable models are discussed.

  2. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, such as full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-domain finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). The implementation uses a second-order centred difference scheme to approximate time derivatives, and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. The comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size
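    Reduced to 1D and stripped of the viscoelastic memory variables, the staggered-grid update at the heart of such codes looks as follows (a hedged sketch with invented material values; a GPU port keeps the same loop structure but maps grid chunks to threads):

        # 1D elastic staggered-grid finite differences, second order.
        import numpy as np

        nx, nt = 300, 500
        dx, dt = 1.0, 2e-4
        rho, mu = 1000.0, 9e8                   # density, shear modulus

        v = np.zeros(nx)                        # velocity at integer points
        s = np.zeros(nx - 1)                    # stress at half points
        v[nx // 2] = 1.0                        # initial impulse

        for _ in range(nt):
            s += dt * mu * np.diff(v) / dx          # stress from velocity gradient
            v[1:-1] += dt / rho * np.diff(s) / dx   # velocity from stress gradient
        print("peak |v| after propagation:", np.abs(v).max())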

  3. Bayesian stable isotope mixing models

    EPA Science Inventory

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
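    At its deterministic core, a two-source, one-tracer mixing model is linear mass balance; Bayesian SIMMs wrap priors and likelihoods around this same relation. The values below are hypothetical:

        # Two-source, one-isotope mass balance: solve for the source proportion.
        delta_a, delta_b = -28.0, -12.0     # hypothetical source signatures
        delta_mix = -20.0                   # observed mixture signature

        p_a = (delta_mix - delta_b) / (delta_a - delta_b)
        print(f"proportion from source A: {p_a:.2f}")   # 0.50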

  4. Graphics development of DCOR: Deterministic combat model of Oak Ridge

    SciTech Connect

    Hunt, G.; Azmy, Y.Y.

    1992-10-01

    DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
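    The first of the three solutions, assigning units one by one to the gridpoint with the largest remaining density, can be sketched as follows (grid size and densities invented):

        # Greedy allocation of unit icons to gridpoints by remaining density.
        import numpy as np

        rng = np.random.default_rng(7)
        density = rng.random((4, 4)) * 5.0      # force density at gridpoints
        n_units = 12
        remaining = density.copy()
        counts = np.zeros_like(density, dtype=int)

        for _ in range(n_units):
            idx = np.unravel_index(remaining.argmax(), remaining.shape)
            counts[idx] += 1
            remaining[idx] -= 1.0               # one unit's worth consumed
        print(counts)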

  5. Inference of Mix from Experimental Data and Theoretical Mix Models

    SciTech Connect

    Welser-Sherrill, L.; Haynes, D. A.; Cooley, J. H.; Mancini, R. C.; Haan, S. W.; Golovkin, I. E.

    2007-08-02

    The mixing between fuel and shell materials in Inertial Confinement Fusion implosion cores is a topic of great interest. Mixing due to hydrodynamic instabilities can affect implosion dynamics and could also go so far as to prevent ignition. We have demonstrated that it is possible to extract information on mixing directly from experimental data using spectroscopic arguments. In order to compare this data-driven analysis to a theoretical framework, two independent mix models, Youngs' phenomenological model and the Haan saturation model, have been implemented in conjunction with a series of clean hydrodynamic simulations that model the experiments. The first tests of these methods were carried out based on a set of indirect drive implosions at the OMEGA laser. We now focus on direct drive experiments, and endeavor to approach the problem from another perspective. In the current work, we use Youngs' and Haan's mix models in conjunction with hydrodynamic simulations in order to design experimental platforms that exhibit measurably different levels of mix. Once the experiments are completed based on these designs, the results of a data-driven mix analysis will be compared to the levels of mix predicted by the simulations. In this way, we aim to increase our confidence in the methods used to extract mixing information from the experimental data, as well as to study sensitivities and the range of validity of the mix models.

  6. An Item Response Unfolding Model for Graphic Rating Scales

    ERIC Educational Resources Information Center

    Liu, Ying

    2009-01-01

    The graphic rating scale, a measurement tool used in many areas of psychology, usually takes a form of a fixed-length line segment, with both ends bounded and labeled as extreme responses. The raters mark somewhere on the line, and the length of the line segment from one endpoint to the mark is taken as the measure. An item response unfolding…

  7. A probabilistic graphical model approach to stochastic multiscale partial differential equations

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2013-10-01

    We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.

  8. Graphics modelling of non-contact thickness measuring robotics work cell

    NASA Technical Reports Server (NTRS)

    Warren, Charles W.

    1990-01-01

    A system was developed for measuring, in real time, the thickness of a sprayable insulation during its application. The system was graphically modelled, off-line, using a state-of-the-art graphics workstation and associated software. The model contained a 3D color model of a workcell housing a robot and an air-bearing turntable. A communication link was established between the graphics workstation and the robot's controller. Sequences of robot motion generated by the computer simulation are transmitted to the robot for execution.

  9. Graphics-based intelligent search and abstracting using Data Modeling

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.

    2002-11-01

    This paper presents an autonomous text and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprised of the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.

  10. Top View of a Computer Graphic Model of the Opportunity Lander and Rover

    NASA Technical Reports Server (NTRS)

    2004-01-01

    A computer graphics model of the Opportunity lander and rover is superimposed on the Martian terrain where Opportunity landed.

  11. Interactive computer graphic surface modeling of three-dimensional solid domains for boundary element analysis

    NASA Technical Reports Server (NTRS)

    Perucchio, R.; Ingraffea, A. R.

    1984-01-01

    The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.

  12. A few modeling and rendering techniques for computer graphics and their implementation on ultra hardware

    NASA Technical Reports Server (NTRS)

    Bidasaria, Hari

    1989-01-01

    The Ultra Network is a recently installed, very high speed graphics system at NASA Langley Research Center. The Ultra Network, interfaced to Voyager through its HSX channel, is capable of transmitting up to 800 million bits of information per second. It can display fifteen to twenty frames per second of precomputed images of size 1024 x 2368 with 24 bits of color information per pixel. Modeling and rendering techniques are being developed in computer graphics and implemented on Ultra hardware. A ray tracer is being developed for use at the Flight Software and Graphics Branch. Changes were made to make the ray tracer compatible with Voyager.

  13. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language

    PubMed Central

    Höhna, Sebastian; Landis, Michael J.

    2016-01-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com. [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.] PMID:27235697

  14. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language.

    PubMed

    Höhna, Sebastian; Landis, Michael J; Heath, Tracy A; Boussau, Bastien; Lartillot, Nicolas; Moore, Brian R; Huelsenbeck, John P; Ronquist, Fredrik

    2016-07-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.]. PMID:27235697

  15. Graphical Means for Inspecting Qualitative Models of System Behaviour

    ERIC Educational Resources Information Center

    Bouwer, Anders; Bredeweg, Bert

    2010-01-01

    This article presents the design and evaluation of a tool for inspecting conceptual models of system behaviour. The basis for this research is the Garp framework for qualitative simulation. This framework includes modelling primitives, such as entities, quantities and causal dependencies, which are combined into model fragments and scenarios.…

  16. Word-level language modeling for P300 spellers based on discriminative graphical models

    PubMed Central

    Saa, Jaime F Delgado; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2016-01-01

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications. PMID:25686293
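    A toy version of the word-level idea (vocabulary, letter scores, and probabilities all invented): per-slot classifier log-likelihoods are combined with a word prior, letting confident letters correct an ambiguous one.

        # Word-level decoding: argmax over a small vocabulary of
        # (letter log-likelihoods + word prior).
        import numpy as np

        vocab = {"cat": 0.5, "car": 0.3, "cab": 0.2}          # word prior P(w)
        letters = "abcrt"
        # per-slot classifier log-likelihoods (columns ordered as in `letters`)
        ll = np.log(np.array([
            [0.05, 0.05, 0.80, 0.05, 0.05],    # clearly 'c'
            [0.80, 0.05, 0.05, 0.05, 0.05],    # clearly 'a'
            [0.10, 0.35, 0.05, 0.30, 0.20],    # ambiguous last letter
        ]))

        def word_score(word):
            idx = [letters.index(ch) for ch in word]
            return sum(ll[i, j] for i, j in enumerate(idx)) + np.log(vocab[word])

        best = max(vocab, key=word_score)
        # Letter-by-letter decoding would pick 'b' last; the prior yields 'cat'.
        print("decoded word:", best)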

  17. Word-level language modeling for P300 spellers based on discriminative graphical models

    NASA Astrophysics Data System (ADS)

    Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2015-04-01

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.

  18. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for interactively displaying and editing velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray-tracing results from other software. The main features are the graphical editing of nodes and fast adjustment of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase-picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X, and provides a version-controlled source code repository for community development.

  19. Automatic Construction of Anomaly Detectors from Graphical Models

    SciTech Connect

    Ferragut, Erik M; Darmon, David M; Shue, Craig A; Kelley, Stephen

    2011-01-01

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.
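    The generic recipe in this record (train one probabilistic model, then derive many detectors as likelihood thresholds) can be sketched with any density model. The snippet below uses a Gaussian mixture from scikit-learn as a stand-in for the paper's Latent Dirichlet Allocation model, with made-up traffic features.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Hypothetical "normal" connection features (e.g., duration, bytes in/out).
        X_train = rng.normal(loc=[1.0, 2.0, 0.5], scale=0.3, size=(5000, 3))

        # Train a single probabilistic model of normal behaviour ...
        model = GaussianMixture(n_components=4, random_state=0).fit(X_train)

        # ... then derive a detector: flag records whose log-likelihood falls below
        # a percentile threshold chosen for a target false-alarm rate.
        threshold = np.percentile(model.score_samples(X_train), 0.1)

        X_new = np.vstack([X_train[:3], [[10.0, -5.0, 8.0]]])   # last row is anomalous
        print(model.score_samples(X_new) < threshold)           # typically: F F F T

    Several detectors follow from the same fitted model by thresholding marginal or conditional likelihoods at different levels, which is the scaling advantage the record emphasizes.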

  20. Use and abuse of mixing models (MixSIAR)

    EPA Science Inventory

    Background/Question/MethodsCharacterizing trophic links in food webs is a fundamental ecological question. In our efforts to quantify energy flow through food webs, ecologists have increasingly used mixing models to analyze biological tracer data, often from stable isotopes. Whil...

  1. OASIS: A GRAPHICAL DECISION SUPPORT SYSTEM FOR GROUNDWATER CONTAMINANT MODELING

    EPA Science Inventory

    Three new software technologies were applied to develop an efficient and easy-to-use decision support system for ground-water contaminant modeling. Graphical interfaces create a more intuitive and effective form of communication with the computer compared to text-based interfaces....

  2. Probabilistic assessment of agricultural droughts using graphical models

    NASA Astrophysics Data System (ADS)

    Ramadas, Meenu; Govindaraju, Rao S.

    2015-07-01

    Agricultural droughts are often characterized by soil moisture in the root zone of the soil, but crop needs are rarely factored into the analysis. Since water needs vary with crops, agricultural drought incidences in a region can be characterized better if crop responses to soil water deficits are also accounted for in the drought index. This study investigates agricultural droughts driven by plant stress due to soil moisture deficits using crop stress functions available in the literature. Crop water stress is assumed to begin at the soil moisture level corresponding to incipient stomatal closure, and reaches its maximum at the crop's wilting point. Using available location-specific crop acreage data, a weighted crop water stress function is computed. A new probabilistic agricultural drought index is then developed within a hidden Markov model (HMM) framework that provides model uncertainty in drought classification and accounts for time dependence between drought states. The proposed index allows probabilistic classification of the drought states and takes due cognizance of the stress experienced by the crop due to soil moisture deficit. The capabilities of HMM formulations for assessing agricultural droughts are compared to those of current drought indices such as the standardized precipitation evapotranspiration index (SPEI) and the self-calibrating Palmer drought severity index (SC-PDSI). The HMM identified critical drought events and several drought occurrences that are not detected by either SPEI or SC-PDSI, and shows promise as a tool for agricultural drought studies.
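    A minimal sketch of HMM-based probabilistic drought classification, using the hmmlearn package on a synthetic crop-water-stress series; the three states and all numbers are hypothetical stand-ins, not the paper's calibrated model.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(1)
        # Hypothetical monthly crop-water-stress index (0 = no stress, 1 = wilting),
        # with a simulated drought episode in the middle of the record.
        stress = np.concatenate([rng.normal(0.10, 0.05, 60),
                                 rng.normal(0.70, 0.10, 12),
                                 rng.normal(0.20, 0.05, 48)])
        X = np.clip(stress, 0.0, 1.0).reshape(-1, 1)

        # Three hidden states, interpreted post hoc as no / moderate / severe drought.
        hmm = GaussianHMM(n_components=3, covariance_type="diag", random_state=1).fit(X)

        # Probabilistic classification: posterior probability of each drought state
        # per month, with time dependence handled by the Markov chain.
        posteriors = hmm.predict_proba(X)
        print(posteriors[65].round(3))   # a month inside the simulated episode

    The posterior state probabilities, rather than a hard threshold on the index, are what make the classification probabilistic in the sense the abstract describes.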

  3. A Local Poisson Graphical Model for inferring networks from sequencing data.

    PubMed

    Allen, Genevera I; Liu, Zhandong

    2013-09-01

    Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research. PMID:23955777
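    The neighborhood-selection step described above amounts to one l1-penalized Poisson regression per node followed by a symmetrization rule. A minimal sketch with statsmodels on synthetic counts (the penalty level and threshold are arbitrary choices, not the paper's):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n, p = 200, 5
        counts = rng.poisson(5.0, size=(n, p)).astype(float)   # hypothetical read counts

        # One l1-penalized Poisson (log-linear) regression per node: regress each
        # gene's counts on all other genes and record the nonzero coefficients.
        adjacency = np.zeros((p, p), dtype=bool)
        for j in range(p):
            others = np.delete(np.arange(p), j)
            X = sm.add_constant(counts[:, others])
            fit = sm.GLM(counts[:, j], X,
                         family=sm.families.Poisson()).fit_regularized(
                             alpha=0.1, L1_wt=1.0)             # pure lasso penalty
            adjacency[j, others] = np.abs(fit.params[1:]) > 1e-8   # skip intercept

        # Symmetrize with the "AND" rule: keep an edge only if both regressions agree.
        edges = adjacency & adjacency.T
        print(edges.astype(int))

    Because each node's regression is independent of the others, the loop parallelizes trivially, which is the source of the speed the abstract claims.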

  4. (Hyper)-graphical models in biomedical image analysis.

    PubMed

    Paragios, Nikos; Ferrante, Enzo; Glocker, Ben; Komodakis, Nikos; Parisot, Sarah; Zacharaki, Evangelia I

    2016-10-01

    Computational vision, visual computing and biomedical image analysis have made tremendous progress over the past two decades. This is mostly due to the development of efficient learning and inference algorithms which allow better and richer modeling of image and visual understanding tasks. Hyper-graph representations are among the most prominent tools for addressing such tasks by casting perception as a graph optimization problem. In this paper, we briefly introduce the importance of such representations, discuss their strengths and limitations, provide appropriate strategies for their inference, and present their application to a variety of problems in biomedical image analysis. PMID:27377331

  5. Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.

    ERIC Educational Resources Information Center

    Buchal, Ralph O.

    2001-01-01

    Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)

  6. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
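    For orientation, a minimal sketch of the Thornthwaite (1948) monthly potential-evapotranspiration formula that this kind of water-balance program builds on; the day-length/latitude correction is omitted and the temperatures are hypothetical, so this is a simplification, not the USGS program itself.

        # Thornthwaite (1948) monthly PET, day-length correction omitted for brevity.
        def thornthwaite_pet(monthly_temp_c):
            """Potential evapotranspiration (mm/month) from mean monthly temperatures."""
            I = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_temp_c)  # heat index
            a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
            return [16.0 * (10.0 * max(t, 0.0) / I) ** a for t in monthly_temp_c]

        temps_c = [-2, 0, 4, 9, 15, 20, 23, 22, 17, 11, 5, 0]   # hypothetical climate
        print([round(pet, 1) for pet in thornthwaite_pet(temps_c)])

    Monthly PET combined with precipitation then drives the storage-accounting steps (soil moisture, surplus, deficit) that make up the rest of a Thornthwaite-type water balance.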

  7. Graphical Modeling: A New Response Type for Measuring the Qualitative Component of Mathematical Reasoning.

    ERIC Educational Resources Information Center

    Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis; Rock, Donald A.

    2000-01-01

    Investigated the functioning of a new computer-delivered graphical modeling (GM) response type for use in a graduate admissions assessment using two GM tests differing in item features randomly spiraled among participants. Results show GM scores to be reliable and moderately related to the quantitative section of the Graduate Record Examinations.…

  8. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents the CCHE2D implicit flow model parallelized using the CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using the Parallel Cyclic Reduction (PCR) algorithm on the GPU is developed and tested. This solve...

  9. Quantifying uncertainty in stable isotope mixing models

    NASA Astrophysics Data System (ADS)

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-01

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  10. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  11. Quantifying uncertainty in stable isotope mixing models

    DOE PAGESBeta

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
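    The pure Monte Carlo (PMC) approach compared in the three records above can be sketched in a few lines: draw mixing fractions uniformly over the simplex, forward-model the mixture isotope values, and keep draws consistent with the measured sample. The source signatures, sample values, and tolerance below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        # Hypothetical source signatures: rows = sources, columns = (δ15N, δ18O).
        sources = np.array([[0.0, 2.0], [8.0, 5.0], [20.0, 15.0]])
        sample = np.array([9.0, 6.5])     # measured mixture
        sigma = 0.5                       # assumed measurement uncertainty (per mil)

        # Draw fractions uniformly over the simplex, forward-model the mixture,
        # and keep draws that reproduce the sample within tolerance.
        fractions = rng.dirichlet(np.ones(len(sources)), size=200_000)
        predicted = fractions @ sources
        keep = np.all(np.abs(predicted - sample) < 2 * sigma, axis=1)

        accepted = fractions[keep]
        print(keep.sum(), "accepted draws")
        print("mean mixing fractions:", accepted.mean(axis=0).round(2))

    The spread of the accepted fractions, not just their mean, is the uncertainty estimate; overlapping source signatures widen that spread, which is exactly the difficulty these records analyze.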

  12. Modeling and diagnosis of structural systems through sparse dynamic graphical models

    NASA Astrophysics Data System (ADS)

    Bornn, Luke; Farrar, Charles R.; Higdon, David; Murphy, Kevin P.

    2016-06-01

    Since their introduction into the structural health monitoring field, time-domain statistical models have been applied with considerable success. Current approaches still have several flaws, however, as they typically ignore the structure of the system, using individual sensor data for modeling and diagnosis. This paper introduces a Bayesian framework containing much of the previous work with autoregressive models as a special case. In addition, the framework allows for natural inclusion of structural knowledge through the form of prior distributions on the model parameters. Acknowledging the need for computational efficiency, we extend the framework through the use of decomposable graphical models, exploiting sparsity in the system to give models that are simple to fit and understand. This sparsity can be specified from knowledge of the system, from the data itself, or through a combination of the two. Using both simulated and real data, we demonstrate the capability of the model to capture the dynamics of the system and to provide clear indications of structural change and damage. We also demonstrate how learning the sparsity in the system gives insight into the structure's physical properties.
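    As a minimal illustration of the autoregressive special case mentioned above (without the Bayesian or graphical-model machinery), the sketch below fits an AR model to a synthetic baseline record and uses residual RMS on new data as a change indicator; the model order and signals are arbitrary.

        import numpy as np

        def ar_design(signal, order):
            """Lagged design matrix and targets for a least-squares AR(order) fit."""
            X = np.column_stack([signal[order - k: len(signal) - k]
                                 for k in range(1, order + 1)])
            return X, signal[order:]

        rng = np.random.default_rng(4)
        baseline = rng.normal(size=2000)          # synthetic healthy-state record
        for t in range(2, len(baseline)):         # add mild oscillatory AR(2) dynamics
            baseline[t] += 1.2 * baseline[t - 1] - 0.6 * baseline[t - 2]

        X, y = ar_design(baseline, order=4)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        def damage_indicator(record, coef):
            """Residual RMS under the baseline AR model; growth signals change."""
            X, y = ar_design(record, len(coef))
            return np.sqrt(np.mean((y - X @ coef) ** 2))

        shifted = baseline + rng.normal(scale=0.5, size=len(baseline))  # altered state
        print(damage_indicator(baseline, coef), damage_indicator(shifted, coef))

    The paper's contribution is to couple many such per-sensor models through a sparse graphical structure and priors, rather than treating each channel in isolation as this sketch does.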

  13. Transition mixing study empirical model report

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; White, C.

    1988-01-01

    The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected in rectangular ducts.

  14. Experiments with a low-cost system for computer graphics material model acquisition

    NASA Astrophysics Data System (ADS)

    Rushmeier, Holly; Lockerman, Yitzhak; Cartwright, Luke; Pitera, David

    2015-03-01

    We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.

  15. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The U.S. Environmental Protection Agency has a long history of both supporting plume model development and providing mixing zone modeling software. The Visual Plumes model is the most recent addition to the suite of public-domain models available through the EPA-Athens Center f...

  16. Using graphical models to infer missing streamflow data with its application to the Ohio river basin

    NASA Astrophysics Data System (ADS)

    Villalba, G.; Liang, X.; Salas, D.; Liang, Y.

    2013-12-01

    The spatial relationship among streamflow gauges is an interesting but challenging problem. In this study, we apply a probabilistic graphical modeling approach to explore and represent the spatial relationships among the streamflow gauges and then infer missing data with low uncertainty. The Ohio River Basin is used as a case study to analyze the spatial correlations among the streamflow gauges. An undirected graphical model is used to identify the main spatial correlations among the gauges. Given that model, multivariate linear regressions are used to infer missing data. The accuracy of the method is tested against historical data from 34 daily streamflow gauges over the Ohio River basin for which a period of 30 years' data is available. Our initial study shows promising results.
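    A highly simplified sketch of the two-step idea (select predictor gauges, then regress): here plain marginal correlation stands in for the undirected graphical model, and the flows are synthetic, so this only mirrors the shape of the workflow, not the paper's method.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(5)
        # Synthetic daily flows at four gauges; gauge 0 depends on gauges 1 and 2.
        others = rng.lognormal(mean=3.0, sigma=0.4, size=(3650, 3))
        target = (0.6 * others[:, 0] + 0.3 * others[:, 1]
                  + rng.normal(scale=2.0, size=3650))

        # Step 1 (stand-in for the graphical model): keep only gauges strongly
        # associated with the gauge that has missing records.
        corr = np.array([np.corrcoef(target, others[:, k])[0, 1] for k in range(3)])
        selected = np.where(np.abs(corr) > 0.3)[0]

        # Step 2: multivariate linear regression on the selected gauges, trained
        # on observed days and used to fill the gap.
        observed = np.arange(3650) < 3000     # pretend the last 650 days are missing
        reg = LinearRegression().fit(others[observed][:, selected], target[observed])
        filled = reg.predict(others[~observed][:, selected])
        print(selected, filled[:5].round(1))

    In the paper, the graph encodes conditional (not marginal) dependence, so it can discard gauges that are only correlated through a shared neighbor; the correlation screen above cannot make that distinction.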

  17. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times.

  18. Model selection for factorial Gaussian graphical models with an application to dynamic regulatory networks.

    PubMed

    Vinciotti, Veronica; Augugliaro, Luigi; Abbruzzo, Antonino; Wit, Ernst C

    2016-06-01

    Factorial Gaussian graphical models (fGGMs) have recently been proposed for inferring dynamic gene regulatory networks from genomic high-throughput data. In the search for true regulatory relationships amongst the vast space of possible networks, these models allow the imposition of certain restrictions on the dynamic nature of these relationships, such as Markov dependencies of low order (some entries of the precision matrix are a priori zero) or equal dependency strengths across time lags (some entries of the precision matrix are assumed to be equal). The precision matrix is then estimated by l1-penalized maximum likelihood, imposing a further constraint on the absolute value of its entries, which results in sparse networks. Selecting the optimal sparsity level is a major challenge for this type of approach. In this paper, we evaluate the performance of a number of model selection criteria for fGGMs by means of two simulated regulatory networks from realistic biological processes. The analysis reveals a good performance of fGGMs in comparison with other methods for inferring dynamic networks, and of the KLCV criterion in particular for model selection. Finally, we present an application to high-resolution time-course microarray data from the Neisseria meningitidis bacterium, a causative agent of life-threatening infections such as meningitis. The methodology described in this paper is implemented in the R package sglasso, freely available at CRAN, http://CRAN.R-project.org/package=sglasso. PMID:27023322

  19. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
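    A minimal grid-based example in the spirit of this tutorial: Bayes's theorem and normalization (the law of total probability) applied to a Poisson rate, with hypothetical counts and a flat prior chosen purely for illustration.

        import numpy as np

        counts = np.array([3, 7, 4, 6, 5])            # hypothetical observed counts
        rate = np.linspace(0.01, 20.0, 2000)          # grid over the Poisson rate
        dx = rate[1] - rate[0]

        # Posterior on the grid: flat prior times Poisson likelihood, normalized
        # so that marginalization over the grid supplies the evidence.
        log_post = counts.sum() * np.log(rate) - counts.size * rate
        post = np.exp(log_post - log_post.max())
        post /= post.sum() * dx

        mean = (rate * post).sum() * dx
        cdf = np.cumsum(post) * dx
        lo, hi = rate[np.searchsorted(cdf, 0.025)], rate[np.searchsorted(cdf, 0.975)]
        print(f"posterior mean {mean:.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")

    The same recipe (prior, likelihood, normalize, summarize) carries over to the multilevel models the talk highlights, where the grid is replaced by Monte Carlo sampling.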

  20. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    PubMed Central

    Surles, M. C.; Richardson, J. S.; Richardson, D. C.; Brooks, F. P.

    1994-01-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and

  1. From Nominal to Quantitative Codification of Content-Neutral Variables in Graphics Research: The Beginnings of a Manifest Content Model.

    ERIC Educational Resources Information Center

    Crow, Wendell C.

    This paper suggests ways in which manifest, physical attributes of graphic elements can be described and measured. It also proposes a preliminary conceptual model that accounts for the readily apparent, measurable variables in a visual message. The graphic elements that are described include format, typeface, and photographs/artwork. The…

  2. Scotogenic model for co-bimaximal mixing

    NASA Astrophysics Data System (ADS)

    Ferreira, P. M.; Grimus, W.; Jurčiukonis, D.; Lavoura, L.

    2016-07-01

    We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet η, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle θ23 = 45° and to a CP-violating phase δ = ±π/2, while the mixing angle θ13 remains arbitrary. The symmetries consist of softly broken lepton numbers Lα (α = e, μ, τ), a non-standard CP symmetry, and three Z2 symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides η, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.

  3. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  4. The effects of a dynamic graphical model during simulation-based training of console operation skill

    NASA Technical Reports Server (NTRS)

    Farquhar, John D.; Regian, J. Wesley

    1993-01-01

    LOADER is a Windows-based simulation of a complex procedural task. The task requires subjects to execute long sequences of console-operation actions (e.g., button presses, switch actuations, dial rotations) to accomplish specific goals. The LOADER interface is a graphical computer-simulated console which controls railroad cars, tracks, and cranes in a fictitious railroad yard. We hypothesized that acquisition of LOADER performance skill would be supported by the representation of a dynamic graphical model linking console actions to goals and goal states in the 'railroad yard'. Twenty-nine subjects were randomly assigned to one of two treatments (i.e., dynamic model or no model). During training, both groups received identical text-based instruction in an instructional window above the LOADER interface. One group, however, additionally saw a dynamic version of the bird's-eye view of the railroad yard. After training, both groups were tested under identical conditions. They were asked to perform the complete procedure without guidance and without access to either type of railroad yard representation. Results indicate that rather than becoming dependent on the animated rail yard model, subjects in the dynamic model condition apparently internalized the model, as evidenced by their performance after the model was removed.

  5. Learning a structured graphical model with boosted top-down features for ultrasound image segmentation.

    PubMed

    Hao, Zhihui; Wang, Qiang; Wang, Xiaotao; Kim, Jung Bae; Hwang, Youngkyoo; Cho, Baek Hwan; Guo, Ping; Lee, Won Ki

    2013-01-01

    A key problem for many medical image segmentation tasks is the combination of knowledge at different levels. We propose a novel scheme that embeds detected regions into a superpixel-based graphical model, through which we fully leverage various image cues for ultrasound lesion segmentation. Region features are mapped into a higher-dimensional space via a boosted model so that they are well controlled. Parameters for regions, superpixels and a new affinity term are learned simultaneously within the framework of structured learning. Experiments on a breast ultrasound image data set confirm the effectiveness of the proposed approach as well as our two novel modules. PMID:24505670

  6. A Module for Graphical Display of Model Results with the CBP Toolbox

    SciTech Connect

    Smith, F.

    2015-04-21

    This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to add enhanced graphical capabilities to display model results in the Cementitious Barriers Project (CBP) Toolbox. Because Version 2.0 of the CBP Toolbox has just been released, the graphing enhancements described in this report have not yet been integrated into a new version of the Toolbox. Instead they have been tested using a standalone GoldSim model and, while they are substantially complete, may undergo further refinement before full implementation. Nevertheless, this report is issued to document the FY14 development efforts which will provide a basis for further development of the CBP Toolbox.

  7. A graphical user interface for numerical modeling of acclimation responses of vegetation to climate change

    NASA Astrophysics Data System (ADS)

    Le, Phong V. V.; Kumar, Praveen; Drewry, Darren T.; Quijano, Juan C.

    2012-12-01

    Ecophysiological models that vertically resolve vegetation canopy states are becoming a powerful tool for studying the exchange of mass, energy, and momentum between the land surface and the atmosphere. A mechanistic multilayer canopy-soil-root system model (MLCan) developed by Drewry et al. (2010a) has been used to capture the emergent vegetation responses to elevated atmospheric CO2 for both C3 and C4 plants under various climate conditions. However, processing input data and setting up such a model can be time-consuming and error-prone. In this paper, a graphical user interface that has been developed for MLCan is presented. The design of this interface aims to provide visualization capabilities and interactive support for processing input meteorological forcing data and vegetation parameter values to facilitate the use of this model. In addition, the interface also provides graphical tools for analyzing the forcing data and simulated numerical results. The model and its interface are both written in the MATLAB programming language. Finally, an application of this model package for capturing the ecohydrological responses of three bioenergy crops (maize, miscanthus, and switchgrass) to local environmental drivers at two different sites in the Midwestern United States is presented.

  8. Model-Independent Bounds on Kinetic Mixing

    DOE PAGESBeta

    Hook, Anson; Izaguirre, Eder; Wacker, Jay G.

    2011-01-01

    New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model-independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e− experiments that have been performed in this energy range and bound the kinetic mixing by ϵ ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.

  9. Model Independent Bounds on Kinetic Mixing

    SciTech Connect

    Hook, Anson; Izaguirre, Eder; Wacker, Jay G.; /SLAC

    2011-08-22

    New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model-independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e− experiments that have been performed in this energy range and bound the kinetic mixing by ε ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.

  10. A Spectral Graphical Model Approach for Learning Brain Connectivity Network of Children's Narrative Comprehension

    PubMed Central

    Meng, Xiangxiang; Karunanayaka, Prasanna; Holland, Scott K.

    2011-01-01

    Narrative comprehension is a fundamental cognitive skill that involves the coordination of different functional brain regions. We develop a spectral graphical model with model averaging to study the connectivity networks underlying these brain regions using fMRI data collected from a story comprehension task. Based on the spectral density matrices in the frequency domain, this model captures the temporal dependency of the entire fMRI time series between brain regions. A Bayesian model averaging procedure is then applied to select the best directional links that constitute the brain network. Using this model, brain networks of three distinct age groups are constructed to assess the dynamic change of network connectivity with respect to age. PMID:22432453

  11. A spectral graphical model approach for learning brain connectivity network of children's narrative comprehension.

    PubMed

    Lin, Xiaodong; Meng, Xiangxiang; Karunanayaka, Prasanna; Holland, Scott K

    2011-01-01

    Narrative comprehension is a fundamental cognitive skill that involves the coordination of different functional brain regions. We develop a spectral graphical model with model averaging to study the connectivity networks underlying these brain regions using fMRI data collected from a story comprehension task. Based on the spectral density matrices in the frequency domain, this model captures the temporal dependency of the entire fMRI time series between brain regions. A Bayesian model averaging procedure is then applied to select the best directional links that constitute the brain network. Using this model, brain networks of three distinct age groups are constructed to assess the dynamic change of network connectivity with respect to age. PMID:22432453
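    A rough sketch of the spectral-domain ingredient in the two records above: estimate the spectral density matrix from Welch cross-spectra, then compute partial coherence from its inverse (the frequency-domain analogue of a precision matrix). This yields undirected, frequency-resolved dependence only, not the paper's directional links or Bayesian model averaging; the data and settings are synthetic.

        import numpy as np
        from scipy.signal import csd

        rng = np.random.default_rng(6)
        n = 4096
        driver = rng.normal(size=n)                     # shared latent signal
        X = np.stack([driver + rng.normal(scale=0.5, size=n) for _ in range(3)])

        # Spectral density matrix S(f) from Welch cross-spectra of all pairs.
        freqs, _ = csd(X[0], X[0], fs=2.0, nperseg=256)
        S = np.empty((len(freqs), 3, 3), dtype=complex)
        for i in range(3):
            for j in range(3):
                _, S[:, i, j] = csd(X[i], X[j], fs=2.0, nperseg=256)

        # Partial coherence between regions 0 and 1 given the rest, from the
        # inverse spectral matrix at each frequency.
        G = np.linalg.inv(S)
        pcoh_01 = np.abs(G[:, 0, 1]) ** 2 / (np.abs(G[:, 0, 0]) * np.abs(G[:, 1, 1]))
        print(pcoh_01[:5].round(3))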

  12. ESTIMATING HETEROGENEOUS GRAPHICAL MODELS FOR DISCRETE DATA WITH AN APPLICATION TO ROLL CALL VOTING

    PubMed Central

    Guo, Jian; Cheng, Jie; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2016-01-01

    We consider the problem of jointly estimating a collection of graphical models for discrete data, corresponding to several categories that share some common structure. An example for such a setting is voting records of legislators on different issues, such as defense, energy, and healthcare. We develop a Markov graphical model to characterize the heterogeneous dependence structures arising from such data. The model is fitted via a joint estimation method that preserves the underlying common graph structure, but also allows for differences between the networks. The method employs a group penalty that targets the common zero interaction effects across all the networks. We apply the method to describe the internal networks of the U.S. Senate on several important issues. Our analysis reveals individual structure for each issue, distinct from the underlying well-known bipartisan structure common to all categories which we are able to extract separately. We also establish consistency of the proposed method both for parameter estimation and model selection, and evaluate its numerical performance on a number of simulated examples. PMID:27182289

  13. The Mixed Effects Trend Vector Model

    ERIC Educational Resources Information Center

    de Rooij, Mark; Schouteden, Martijn

    2012-01-01

    Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…

  14. A computer graphics based model for scattering from objects of arbitrary shapes in the optical region

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.

    1991-01-01

    A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.

  15. Simplified models of mixed dark matter

    SciTech Connect

    Cheung, Clifford; Sanford, David E-mail: dsanford@caltech.edu

    2014-02-01

    We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak-charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify "blind spots" at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin-independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors.

  16. A graphical method to assess distribution assumption in group-based trajectory models.

    PubMed

    Elsensohn, Mad-Hélénie; Klich, Amna; Ecochard, René; Bastard, Mathieu; Genolini, Christophe; Etard, Jean-François; Gustin, Marie-Paule

    2016-04-01

    Group-based trajectory models have seen rapid development for the analysis of longitudinal data in clinical research. In these models, the assumption of homoscedasticity of the residuals is frequently made, but this assumption is not always met. We developed an easy-to-perform graphical method for assessing the assumption of homoscedasticity of the residuals, intended especially for group-based trajectory models. The method is based on drawing an envelope to visualize the local dispersion of the residuals around each typical trajectory. Its efficiency is demonstrated using data on CD4 lymphocyte counts in patients with human immunodeficiency virus put on antiretroviral therapy. Four distinct distributions that take into account increasing parts of the variability of the observed data are presented. Significant differences in group structures and trajectory patterns were found according to the chosen distribution. These differences might have large impacts on the final trajectories and their characteristics, and thus on potential medical decisions. At a single glance, the graphical criterion allows choosing the distribution that best captures data variability and helps deal with a potential heteroscedasticity problem. PMID:23427224
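    A minimal sketch of the kind of envelope plot the method proposes: residuals around one group trajectory with a local mean plus/minus 2 SD band, on synthetic heteroscedastic data (the choice of a 2-SD band is illustrative, not the paper's exact criterion).

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(9)
        # Synthetic residuals around one trajectory: 40 subjects, 24 visits,
        # with spread deliberately growing over time (heteroscedastic).
        t = np.tile(np.arange(24), 40)
        resid = rng.normal(scale=0.5 + 0.05 * t)

        # Envelope: local mean +/- 2 local SD of the residuals at each visit.
        visits = np.unique(t)
        mu = np.array([resid[t == v].mean() for v in visits])
        sd = np.array([resid[t == v].std() for v in visits])

        plt.scatter(t, resid, s=4, alpha=0.3)
        plt.plot(visits, mu, color="k")
        plt.fill_between(visits, mu - 2 * sd, mu + 2 * sd, alpha=0.3)
        plt.xlabel("visit"); plt.ylabel("residual")
        plt.title("A widening envelope flags heteroscedastic residuals")
        plt.show()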

  17. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    PubMed

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm of Ren et al., without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing a Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036
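    FastGGM itself is an R package; for readers working in Python, a graphical-lasso point estimate (without FastGGM's per-edge p-values and confidence intervals) can be obtained with scikit-learn as below, on synthetic data with a simple chain structure.

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(7)
        # Synthetic expression matrix: 200 samples x 15 genes with chain dependence.
        n, p = 200, 15
        X = rng.normal(size=(n, p))
        for j in range(1, p):
            X[:, j] += 0.7 * X[:, j - 1]           # neighbouring genes co-vary

        model = GraphicalLassoCV().fit(X)           # cross-validated sparsity level
        precision = model.precision_                # conditional dependence structure
        edges = (np.abs(precision) > 1e-4) & ~np.eye(p, dtype=bool)
        print(edges.sum() // 2, "edges in the estimated graph")

    Zeros in the estimated precision matrix correspond to conditional independence, which is the graph the abstract refers to; what FastGGM adds is statistically rigorous inference on each edge.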

  18. Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler

    NASA Astrophysics Data System (ADS)

    Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah

    2014-07-01

    Circulating Fluidized Bed Boiler (CFB) is a device for generating steam by burning fossil fuels in a furnace operating under a special hydrodynamic condition. The autocatalytic set approach has provided a graphical model of the chemical reactions that occur during the combustion process in a CFB. Eight important chemical substances, known as species, are represented as nodes, and catalytic relationships between nodes are represented by the edges of the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also occur during the process. Catalytic relationships among the species in the model are discussed. The result reveals that the modified model gives a fuller account of the relationships among the species at the initial time t.

  19. Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing

    PubMed Central

    Yoshida, Ryo; West, Mike

    2010-01-01

    We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loadings matrices, that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate “artificial” posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data. PMID:20890391

  20. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure-activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds, and the size of proprietary, as well as public, data sets is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In support of the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of QSAR models of regulatory quality.

  1. Computer Graphics.

    ERIC Educational Resources Information Center

    Halpern, Jeanne W.

    1970-01-01

    Computer graphics have been called the most exciting development in computer technology. At the University of Michigan, three kinds of graphics output equipment are now being used: symbolic printers, line plotters or drafting devices, and cathode-ray tubes (CRT). Six examples are given that demonstrate the range of graphics use at the University.…

  2. Linear Mixed Models: Gum and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Expression of Uncertainty in Measurement (GUM) [1] recognizes the necessity of analyzing certain types of experiments by applying random-effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means for gaining more insight into the measurement process. We also comment on computational issues, and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
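    A minimal sketch of the random-effects case referred to above, using statsmodels (a software choice assumed here, not prescribed by the paper): a random-intercept model splits a hypothetical calibration campaign into between-day and within-day variance components, the kind of decomposition an uncertainty budget needs. All numbers are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(8)
        # Hypothetical calibration campaign: 10 days, 5 repeat measurements per day.
        day = np.repeat(np.arange(10), 5)
        between = rng.normal(scale=0.05, size=10)[day]        # day-to-day effect
        y = 9.81 + between + rng.normal(scale=0.02, size=50)  # repeatability noise
        data = pd.DataFrame({"y": y, "day": day})

        # Random-intercept model: y_ij = mu + b_i + e_ij with b_i ~ N(0, sigma_b^2).
        fit = smf.mixedlm("y ~ 1", data, groups=data["day"]).fit()
        print("grand mean:      ", float(fit.params["Intercept"]))
        print("between-day var: ", float(fit.cov_re.iloc[0, 0]))
        print("within-day var:  ", fit.scale)

    The two estimated variance components map directly onto separate lines of an uncertainty budget, which is the practical benefit the abstract points out.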

  3. Joint sulcal detection on cortical surfaces with graphical models and boosted priors.

    PubMed

    Shi, Yonggang; Tu, Zhuowen; Reiss, Allan L; Dutton, Rebecca A; Lee, Agatha D; Galaburda, Albert M; Dinov, Ivo; Thompson, Paul M; Toga, Arthur W

    2009-03-01

    In this paper, we propose an automated approach for the joint detection of major sulci on cortical surfaces. By representing sulci as nodes in a graphical model, we incorporate Markovian relations between sulci and formulate their detection as a maximum a posteriori (MAP) estimation problem over the joint space of major sulci. To make the inference tractable, a sample space with a finite number of candidate curves is automatically generated at each node based on the Hamilton-Jacobi skeleton of sulcal regions. Using the AdaBoost algorithm, we learn both individual and pairwise shape priors of sulcal curves from training data, which are then used to define potential functions in the graphical model based on the connection between AdaBoost and logistic regression. Finally, belief propagation is used to perform the MAP inference and select the joint detection results from the sample spaces of candidate curves. In our experiments, we quantitatively validate our algorithm with manually traced curves and demonstrate that the automatically detected curves capture the main body of sulci very accurately. A comparison with independently detected results is also conducted to illustrate the advantage of the joint detection approach. PMID:19244008

  4. Bayesian Estimation of Latently-grouped Parameters in Undirected Graphical Models

    PubMed Central

    Liu, Jie; Page, David

    2014-01-01

    In large-scale applications of undirected graphical models, such as social networks and biological networks, similar patterns occur frequently and give rise to similar parameters. In this situation, it is beneficial to group the parameters for more efficient learning. We show that even when the grouping is unknown, we can infer these parameter groups during learning via a Bayesian approach. We impose a Dirichlet process prior on the parameters. Posterior inference usually involves calculating intractable terms, and we propose two approximation algorithms, namely a Metropolis-Hastings algorithm with auxiliary variables and a Gibbs sampling algorithm with “stripped” Beta approximation (Gibbs_SBA). Simulations show that both algorithms outperform conventional maximum likelihood estimation (MLE). Gibbs_SBA’s performance is close to Gibbs sampling with exact likelihood calculation. Models learned with Gibbs_SBA also generalize better than the models learned by MLE on real-world Senate voting data. PMID:25404848

  5. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.

  6. Fertility intentions and outcomes: Implementing the Theory of Planned Behavior with graphical models.

    PubMed

    Mencarini, Letizia; Vignoli, Daniele; Gottard, Anna

    2015-03-01

    This paper studies fertility intentions and their outcomes, analyzing the complete path leading to fertility behavior according to the social-psychological Theory of Planned Behavior (TPB). We move beyond existing research by using graphical models to obtain a precise understanding, and a formal description, of the developmental fertility decision-making process. Our findings yield new results for the Italian case which are empirically robust and theoretically coherent, adding important insights on the effectiveness of the TPB for fertility research. In line with the TPB, all of the intentions' primary antecedents are found to be determinants of the level of fertility intentions, but they do not affect fertility outcomes, being pre-filtered by fertility intentions. Nevertheless, in contrast with the TPB, background factors are not fully mediated by the intentions' primary antecedents, directly influencing fertility intentions and even fertility behaviors. PMID:26047838

  7. Learning Sequence Determinants of Protein:Protein Interaction Specificity with Sparse Graphical Models

    PubMed Central

    Kamisetty, Hetunandan; Ghosh, Bornika; Langmead, Christopher James; Bailey-Kellogg, Chris

    2015-01-01

In studying the strength and specificity of interaction between members of two protein families, key questions center on which pairs of possible partners actually interact, how well they interact, and why they interact while others do not. The advent of large-scale experimental studies of interactions between members of a target family and a diverse set of possible interaction partners offers the opportunity to address these questions. We develop here a method, DgSpi (data-driven graphical models of specificity in protein:protein interactions), for learning and using graphical models that explicitly represent the amino acid basis for interaction specificity (why) and extend earlier classification-oriented approaches (which) to predict the ΔG of binding (how well). We demonstrate the effectiveness of our approach in analyzing and predicting interactions between a set of 82 PDZ recognition modules against a panel of 217 possible peptide partners, based on data from MacBeath and colleagues. Our predicted ΔG values are highly predictive of the experimentally measured ones, reaching correlation coefficients of 0.69 in 10-fold cross-validation and 0.63 in leave-one-PDZ-out cross-validation. Furthermore, the model serves as a compact representation of the amino acid constraints underlying the interactions, enabling protein-level ΔG predictions to be naturally understood in terms of residue-level constraints. Finally, DgSpi readily enables the design of new interacting partners, and we demonstrate that designed ligands are novel and diverse. PMID:25973864

  8. Molecular Graphics and Chemistry.

    ERIC Educational Resources Information Center

    Weber, Jacques; And Others

    1992-01-01

    Explains molecular graphics, i.e., the application of computer graphics techniques to investigate molecular structure, function, and interaction. Structural models and molecular surfaces are discussed, and a theoretical model that can be used for the evaluation of intermolecular interaction energies for organometallics is described. (45…

  9. Graphical Representations for Ising and Potts Models in General External Fields

    NASA Astrophysics Data System (ADS)

    Cioletti, Leandro; Vila, Roberto

    2016-01-01

This work is concerned with the theory of graphical representation for the Ising and Potts models over general lattices with non-translation-invariant external field. We explicitly describe in terms of the random-cluster representation the distribution function and, consequently, the expected value of a single spin for the Ising and q-state Potts models with general external fields. We also consider the Gibbs states for the Edwards-Sokal representation of the Potts model with non-translation-invariant magnetic field and prove a version of the FKG inequality for the so-called general random-cluster model (GRC model) with free and wired boundary conditions in the non-translation-invariant case. Adding the amenability hypothesis on the lattice, we obtain the uniqueness of the infinite connected component and the almost sure quasilocality of the Gibbs measures for the GRC model with such general magnetic fields. As a final application of the theory developed, we show the uniqueness of the Gibbs measures for the ferromagnetic Ising model with a positive power-law-decay magnetic field with small enough power, as conjectured in Bissacot et al. (Commun Math Phys 337: 41-53, 2015).

  10. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks

    PubMed Central

    Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L.; Sweet, Robert A.; Wang, Jieru; Chen, Wei

    2016-01-01

Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to implement the estimation of a GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036
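
    As a rough illustration of Gaussian graphical model estimation (not the FastGGM algorithm itself, which additionally supplies a p-value and confidence interval per edge), the l1-penalized precision-matrix estimator in scikit-learn recovers a sparse graph from data; the sample size and dimension below are made up.

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        # Illustrative data: n samples of p jointly Gaussian variables.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 15))

        model = GraphicalLassoCV().fit(X)   # cross-validated sparsity level
        precision = model.precision_        # estimated inverse covariance

        # Nonzero off-diagonal entries correspond to edges in the graph.
        edges = [(i, j) for i in range(15) for j in range(i + 1, 15)
                 if abs(precision[i, j]) > 1e-6]
        print(len(edges), "edges recovered")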

  11. Gaussian graphical modeling reconstructs pathway reactions from high-throughput metabolomics data

    PubMed Central

    2011-01-01

Background With the advent of high-throughput targeted metabolic profiling techniques, the question of how to interpret and analyze the resulting vast amount of data becomes more and more important. In this work we address the reconstruction of metabolic reactions from cross-sectional metabolomics data, that is, without the requirement for time-resolved measurements or specific system perturbations. Previous studies in this area mainly focused on Pearson correlation coefficients, which however are generally incapable of distinguishing between direct and indirect metabolic interactions. Results In our new approach we propose the application of a Gaussian graphical model (GGM), an undirected probabilistic graphical model estimating the conditional dependence between variables. GGMs are based on partial correlation coefficients, that is, pairwise Pearson correlation coefficients conditioned against the correlations with all other metabolites. We first demonstrate the general validity of the method and its advantages over regular correlation networks with computer-simulated reaction systems. Then we estimate a GGM on data from a large human population cohort, covering 1020 fasting blood serum samples with 151 quantified metabolites. The GGM is much sparser than the correlation network, shows a modular structure with respect to metabolite classes, and is stable to the choice of samples in the data set. On the example of human fatty acid metabolism, we demonstrate for the first time that high partial correlation coefficients generally correspond to known metabolic reactions. This feature is evaluated both manually, by investigating specific pairs of high-scoring metabolites, and systematically, on a literature-curated model of fatty acid synthesis and degradation. Our method detects many known reactions along with possibly novel pathway interactions, representing candidates for further experimental examination. Conclusions In summary, we demonstrate strong signatures of
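
    The partial correlations on which a GGM is based follow directly from the inverse of the covariance matrix through the standard identity pcor(i, j) = -P(i, j) / sqrt(P(i, i) * P(j, j)). A minimal numpy sketch, with random data standing in for a metabolite matrix (and assuming more samples than variables so the covariance is invertible):

        import numpy as np

        def partial_correlations(X):
            # X: samples x variables; returns the partial correlation matrix.
            precision = np.linalg.inv(np.cov(X, rowvar=False))
            d = np.sqrt(np.diag(precision))
            pcor = -precision / np.outer(d, d)
            np.fill_diagonal(pcor, 1.0)
            return pcor

        # Illustrative stand-in for a metabolite data matrix.
        X = np.random.default_rng(1).standard_normal((500, 20))
        print(partial_correlations(X).round(2))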

  12. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable

  13. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, Williama

    2011-01-01

The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: large-scale, high-end applications of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  14. Introduction of a methodology for visualization and graphical interpretation of Bayesian classification models.

    PubMed

    Balfer, Jenny; Bajorath, Jürgen

    2014-09-22

    Supervised machine learning models are widely used in chemoinformatics, especially for the prediction of new active compounds or targets of known actives. Bayesian classification methods are among the most popular machine learning approaches for the prediction of activity from chemical structure. Much work has focused on predicting structure-activity relationships (SARs) on the basis of experimental training data. By contrast, only a few efforts have thus far been made to rationalize the performance of Bayesian or other supervised machine learning models and better understand why they might succeed or fail. In this study, we introduce an intuitive approach for the visualization and graphical interpretation of naïve Bayesian classification models. Parameters derived during supervised learning are visualized and interactively analyzed to gain insights into model performance and identify features that determine predictions. The methodology is introduced in detail and applied to assess Bayesian modeling efforts and predictions on compound data sets of varying structural complexity. Different classification models and features determining their performance are characterized in detail. A prototypic implementation of the approach is provided. PMID:25137527
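
    A hedged sketch of the kind of parameter inspection the authors describe, using a Bernoulli naive Bayes model in scikit-learn: the class-conditional log-probabilities learned for each feature are extracted and ranked by their log-odds. The binary "fingerprint" data are fabricated, and this is not the authors' implementation.

        import numpy as np
        from sklearn.naive_bayes import BernoulliNB

        # Illustrative binary fingerprint matrix and activity labels.
        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(300, 50))
        y = rng.integers(0, 2, size=300)

        clf = BernoulliNB().fit(X, y)

        # feature_log_prob_[c, f] = log P(feature f = 1 | class c);
        # the difference between classes is the per-feature log-odds.
        log_odds = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
        top = np.argsort(np.abs(log_odds))[::-1][:5]
        for f in top:
            print(f"feature {f}: log-odds {log_odds[f]:+.2f}")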

  15. A Graphical User Interface for Parameterizing Biochemical Models of Photosynthesis and Chlorophyll Fluorescence

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2015-12-01

Recent advances in optical remote sensing of photosynthesis offer great promise for estimating gross primary productivity (GPP) at leaf, canopy, and even global scale. These methods, including solar-induced chlorophyll fluorescence (SIF) emission, fluorescence spectra, and hyperspectral features such as the red edge and the photochemical reflectance index (PRI), can be used to greatly enhance the predictive power of global circulation models (GCMs) by providing better constraints on GPP. The way to use measured optical data to parameterize existing models such as SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) is not trivial, however. We have therefore extended a biochemical model to include fluorescence and other parameters in a coupled treatment. To help parameterize the model, we then use nonlinear curve-fitting routines to determine the parameter set that enables model results to best fit leaf-level gas exchange and optical data measurements. To make the tool more accessible to all practitioners, we have further designed a graphical user interface (GUI)-based front end to allow researchers to analyze data with a minimum of effort while, at the same time, allowing them to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. Here we discuss the tool and its effectiveness, using recently gathered leaf-level data.
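
    The curve-fitting step can be illustrated with scipy's nonlinear least squares. The rectangular-hyperbola light-response function below is a deliberately simple stand-in for the coupled photosynthesis-fluorescence model actually used; the parameter names (Amax, K, Rd) and measurement values are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def light_response(I, Amax, K, Rd):
            # Simple rectangular-hyperbola net assimilation model.
            return Amax * I / (I + K) - Rd

        # Illustrative gas-exchange data: irradiance vs. net assimilation.
        I = np.array([0, 50, 100, 200, 400, 800, 1200, 1600.0])
        A = np.array([-1.0, 3.1, 5.8, 8.9, 11.7, 13.6, 14.2, 14.5])

        popt, pcov = curve_fit(light_response, I, A, p0=[20, 200, 1])
        perr = np.sqrt(np.diag(pcov))    # 1-sigma parameter uncertainties
        print("Amax=%.1f, K=%.0f, Rd=%.2f" % tuple(popt))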

  16. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    NASA Technical Reports Server (NTRS)

    Parke, F. I.

    1981-01-01

Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of fluid-flow parameters (pressure, temperature, and velocity vector) at many points in the fluid. Visualization of the spatial variation in the values of these parameters is important for comprehending and checking the data generated, identifying the regions of interest in the flow, and effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  17. uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications

    PubMed Central

    Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.

    2015-01-01

    In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987

  18. Graphical representation of life paths to better convey results of decision models to patients.

    PubMed

    Rubrichi, Stefania; Rognoni, Carla; Sacchi, Lucia; Parimbelli, Enea; Napolitano, Carlo; Mazzanti, Andrea; Quaglini, Silvana

    2015-04-01

    The inclusion of patients' perspectives in clinical practice has become an important matter for health professionals, in view of the increasing attention to patient-centered care. In this regard, this report illustrates a method for developing a visual aid that supports the physician in the process of informing patients about a critical decisional problem. In particular, we focused on interpretation of the results of decision trees embedding Markov models implemented with the commercial tool TreeAge Pro. Starting from patient-level simulations and exploiting some advanced functionalities of TreeAge Pro, we combined results to produce a novel graphical output that represents the distributions of outcomes over the lifetime for the different decision options, thus becoming a more informative decision support in a context of shared decision making. The training example used to illustrate the method is a decision tree for thromboembolism risk prevention in patients with nonvalvular atrial fibrillation. PMID:25589524

  19. Glossiness of Colored Papers based on Computer Graphics Model and Its Measuring Method

    NASA Astrophysics Data System (ADS)

    Aida, Teizo

In the case of colored papers, the color of the surface strongly affects the gloss of the paper. A new glossiness measure for such colored papers is suggested in this paper. First, using achromatic and chromatic Munsell colored chips, the author obtained an experimental equation which represents the relation between lightness V (or V and saturation C) and the psychological glossiness Gph of these chips. Then, the author defined a new glossiness G for colored papers, based on the above-mentioned experimental equations for Gph and the Cook-Torrance reflection model, which is widely used in the field of computer graphics. This new glossiness is shown to be nearly proportional to the psychological glossiness Gph. The measuring system for the new glossiness G is furthermore described. The measuring time for one specimen is within 1 minute.

  20. NATURAL graphics

    NASA Technical Reports Server (NTRS)

    Jones, R. H.

    1984-01-01

    The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.

  1. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  2. Toward Better Modeling of Supercritical Turbulent Mixing

    NASA Technical Reports Server (NTRS)

    Selle, Laurent; Okongo'o, Nora; Bellan, Josette; Harstad, Kenneth

    2008-01-01

This study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamically supercritical (here, high-pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation-of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric-pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of regions of high density-gradient magnitude found in the DNS and observed experimentally under fully turbulent flow conditions. A model has been derived for the new term in the momentum equation and was found to perform well at small filter size but to deteriorate with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation; further investigation would be needed to determine whether they are too computationally intensive for LES.

  3. Gray component replacement using color mixing models

    NASA Astrophysics Data System (ADS)

    Kang, Henry R.

    1994-05-01

A new approach to gray component replacement (GCR) has been developed. It employs color mixing theory to model the spectral fit between 3-color and 4-color prints. To achieve this goal, we first examine the accuracy of the models with respect to experimental results by applying them to prints made by a Canon Color Laser Copier-500 (CLC-500). An empirical halftone correction factor is used to improve the data fitting. Among the models tested, the halftone-corrected Kubelka-Munk theory gives the closest fit, followed by the halftone-corrected Beer-Bouguer law and the Yule-Nielsen approach. We then apply the halftone-corrected Beer-Bouguer law to GCR. The main feature of this GCR approach is that it is based on spectral measurements of the primary-color step wedges and a software package implementing the color mixing model. The software determines the amount of the gray component to be removed, then adjusts each primary color until a good match of the peak wavelengths between the 3-color and 4-color spectra is obtained. Results indicate that the average ΔEab between the cmy and cmyk renditions of 64 color patches is 3.11; eighty-seven percent of the patches have ΔEab less than 5 units. The advantage of this approach is its simplicity: there is no need for the black printer and under-color addition. Because this approach is based on spectral reproduction, it minimizes metamerism.
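
    Under the Beer-Bouguer law, halftoned ink layers combine by adding spectral densities (absorbances), which is the property a spectral GCR match exploits. A minimal sketch with fabricated absorptivity spectra and an assumed halftone-correction exponent n standing in for the paper's empirical factor:

        import numpy as np

        wavelengths = np.linspace(400, 700, 31)            # nm

        # Fabricated absorptivity spectra for the c, m, y, k inks.
        eps = {k: np.abs(np.sin(wavelengths / 60 + i))
               for i, k in enumerate("cmyk")}

        def reflectance(coverages, n=1.0):
            # Beer-Bouguer: densities of the ink layers add; n is an
            # illustrative halftone-correction exponent on dot coverage.
            density = sum(eps[k] * c ** n for k, c in coverages.items())
            return 10.0 ** (-density)

        R_cmy = reflectance({"c": 0.5, "m": 0.4, "y": 0.3})
        R_cmyk = reflectance({"c": 0.3, "m": 0.2, "y": 0.1, "k": 0.2})
        print(np.max(np.abs(R_cmy - R_cmyk)))  # spectral mismatch to minimize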

  4. Higher-order ice-sheet modelling accelerated by multigrid on graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian; Egholm, David

    2013-04-01

Higher-order ice-flow modelling is a very computationally intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must consider very long time series, while both high temporal and spatial resolution are needed to resolve small effects. Higher-order and full-Stokes models have therefore seen very limited usage in this field. However, recent advances in graphics card (GPU) technology for high-performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general-purpose GPU (GPGPU) technology is cheap, has a low power consumption, and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice-flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a nonlinear red-black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
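
    The red-black ordering that exposes the parallelism exploited on the GPU can be demonstrated on a CPU: all points of one color have only opposite-color neighbors, so each half-sweep can update its points simultaneously. This numpy sketch smooths a linear 2D Poisson problem and omits the nonlinear iSOSIA equations and the FAS multigrid cycle of the actual solver.

        import numpy as np

        def red_black_gauss_seidel(u, f, h, sweeps=50):
            # Red-black Gauss-Seidel for -laplace(u) = f on a square grid
            # with fixed (Dirichlet) boundary values.
            for _ in range(sweeps):
                for color in (0, 1):
                    for i in range(1, u.shape[0] - 1):
                        j0 = 1 + (i + 1 + color) % 2  # first column of this color
                        u[i, j0:-1:2] = 0.25 * (
                            u[i - 1, j0:-1:2] + u[i + 1, j0:-1:2]
                            + u[i, j0 - 1:-2:2] + u[i, j0 + 1::2]
                            + h * h * f[i, j0:-1:2])
            return u

        n = 65
        u = np.zeros((n, n))     # boundary values stay at zero
        f = np.ones((n, n))      # unit source term
        u = red_black_gauss_seidel(u, f, h=1.0 / (n - 1))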

  5. Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling

    SciTech Connect

    Welser-Sherrill, L; Haynes, D A; Mancini, R C; Cooley, J H; Tommasini, R; Golovkin, I E; Sherrill, M E; Haan, S W

    2008-04-30

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.

  6. Inference of ICF implosion core mix using experimental data and theoretical mix modeling

    SciTech Connect

    Sherrill, Leslie Welser; Haynes, Donald A; Cooley, James H; Sherrill, Manolo E; Mancini, Roberto C; Tommasini, Riccardo; Golovkin, Igor E; Haan, Steven W

    2009-01-01

The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model predicted trends in the width of the mix layer as a function of initial shell thickness. These results contribute to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increasing confidence in the methods used to extract mixing information from experimental data.

  7. Business Graphics

    NASA Technical Reports Server (NTRS)

    1987-01-01

Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high-resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional-level graphics for business communications and presentations. The products are no longer being manufactured.

  8. Graphic Storytelling

    ERIC Educational Resources Information Center

    Thompson, John

    2009-01-01

    Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…

  9. Inferring Caravaggio's studio lighting and praxis in The calling of St. Matthew by computer graphics modeling

    NASA Astrophysics Data System (ADS)

    Stork, David G.; Nagy, Gabor

    2010-02-01

    We explored the working methods of the Italian Baroque master Caravaggio through computer graphics reconstruction of his studio, with special focus on his use of lighting and illumination in The calling of St. Matthew. Although he surely took artistic liberties while constructing this and other works and did not strive to provide a "photographic" rendering of the tableau before him, there are nevertheless numerous visual clues to the likely studio conditions and working methods within the painting: the falloff of brightness along the rear wall, the relative brightness of the faces of figures, and the variation in sharpness of cast shadows (i.e., umbrae and penumbrae). We explored two studio lighting hypotheses: that the primary illumination was local (and hence artificial) and that it was distant solar. We find that the visual evidence can be consistent with local (artificial) illumination if Caravaggio painted his figures separately, adjusting the brightness on each to compensate for the falloff in illumination. Alternatively, the evidence is consistent with solar illumination only if the rear wall had particular reflectance properties, as described by a bi-directional reflectance distribution function, BRDF. (Ours is the first research applying computer graphics to the understanding of artists' praxis that models subtle reflectance properties of surfaces through BRDFs, a technique that may find use in studies of other artists.) A somewhat puzzling visual feature-unnoted in the scholarly literature-is the upward-slanting cast shadow in the upper-right corner of the painting. We found this shadow is naturally consistent with a local illuminant passing through a small window perpendicular to the viewer's line of sight, but could also be consistent with solar illumination if the shadow was due to a slanted, overhanging section of a roof outside the artist's studio. Our results place likely conditions upon any hypotheses concerning Caravaggio's working methods and

  10. A graphical model method for integrating multiple sources of genome-scale data

    PubMed Central

    Dvorkin, Daniel; Biehs, Brian; Kechris, Katerina

    2016-01-01

    Making effective use of multiple data sources is a major challenge in modern bioinformatics. Genome-wide data such as measures of transcription factor binding, gene expression, and sequence conservation, which are used to identify binding regions and genes that are important to major biological processes such as development and disease, can be difficult to use together due to the different biological meanings and statistical distributions of the heterogeneous data types, but each can provide valuable information for understanding the processes under study. Here we present methods for integrating multiple data sources to gain a more complete picture of gene regulation and expression. Our goal is to identify genes and cis-regulatory regions which play specific biological roles. We describe a graphical mixture model approach for data integration, examine the effect of using different model topologies, and discuss methods for evaluating the effectiveness of the models. Model fitting is computationally efficient and produces results which have clear biological and statistical interpretations. The Hedgehog and Dorsal signaling pathways in Drosophila, which are critical in embryonic development, are used as examples. PMID:23934610

  11. Mixing parameterizations in ocean climate modeling

    NASA Astrophysics Data System (ADS)

    Moshonkin, S. N.; Gusev, A. V.; Zalesny, V. B.; Byshev, V. I.

    2016-03-01

Results of numerical experiments with an eddy-permitting ocean circulation model on the simulation of the climatic variability of the North Atlantic and the Arctic Ocean are analyzed. We compare the quality of the ocean simulations obtained with different subgrid mixing parameterizations. The circulation model is found to be sensitive to the mixing parameterization. The computation of viscosity and diffusivity coefficients by an original splitting algorithm for the evolution equations of the turbulence characteristics is found to be as efficient as traditional Monin-Obukhov parameterizations, while the variability of ocean climate characteristics is simulated more adequately. The simulation of salinity fields improves most significantly across the entire study region. Turbulent processes have a large long-term effect on the circulation through changes in the density fields. The velocity fields in the Gulf Stream and in the entire North Atlantic Subpolar Cyclonic Gyre are reproduced more realistically. The surface level height in the Arctic Basin is simulated more faithfully, marking the Beaufort Gyre better. The use of the Prandtl number as a function of the Richardson number improves the quality of ocean modeling.

  12. A Comparison of Learning Style Models and Assessment Instruments for University Graphics Educators

    ERIC Educational Resources Information Center

    Harris, La Verne Abe; Sadowski, Mary S.; Birchman, Judy A.

    2006-01-01

    Kolb (2004) and others have defined learning style as a preference by which students learn and remember what they have learned. This presentation will include a summary of learning style research published in the "Engineering Design Graphics Journal" over the past 15 years on the topic of learning styles and graphics education. The presenters will…

  13. A Curriculum Model: Engineering Design Graphics Course Updates Based on Industrial and Academic Institution Requirements

    ERIC Educational Resources Information Center

    Meznarich, R. A.; Shava, R. C.; Lightner, S. L.

    2009-01-01

    Engineering design graphics courses taught in colleges or universities should provide and equip students preparing for employment with the basic occupational graphics skill competences required by engineering and technology disciplines. Academic institutions should introduce and include topics that cover the newer and more efficient graphics…

  14. Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's Disease diagnosis

    PubMed Central

    Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M.; Ramírez, Javier

    2015-01-01

Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables and thus cannot factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as sparse Gaussian graphical models, reveal conditional independence between regions by estimating the covariance between two variables given that the rest are held constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits perfectly here, as brain regions usually interact with only a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for Controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to leverage the classification accuracy. The results obtained in this work show the

  15. Mixing parametrizations for ocean climate modelling

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

An algorithm is presented for splitting the total evolution equations for the turbulence kinetic energy (TKE) and the turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solutions. Experiments were performed with different mixing parameterizations for modelling the decadal climate variability of the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Parameterizations using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model lead to a better representation of ocean climate than the faster parameterization using the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Usage of the PP parameterization in the circulation model leads to realistic simulation of density and circulation but with violation of T,S-relationships. This error is largely avoided with the proposed parameterizations containing the split turbulence model.

  16. Cascade Models of Turbulence and Mixing

    NASA Astrophysics Data System (ADS)

    Kadanoff, Leo P.

    1997-01-01

    This note describes two kinds of work on turbulence. First it describes a simplified model of turbulent energy-cascades called the GOY model. Second it mentions work on a model of mixing in fluids. In addition to a brief historical discussion, I include some mention of our own work carried on at the University of Chicago by Jane Wang, Detlef Lohse, Roberto Benzi, Norbert Schörghofer, Scott Wunsch, Tong Zhou and myself. Our own studies are in large measure the outgrowth of a paper by M. H. Jensen, G. Paladin, and A. Vulpiani [1]. I mention this connection with some sadness because I recall Paladin's recent death in a mountain accident.

  17. Reducing Modeling Error of Graphical Methods for Estimating Volume of Distribution Measurements in PIB-PET study

    PubMed Central

    Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M

    2010-01-01

Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
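
    Logan's graphical method itself is compact: integrate the tissue and plasma curves, normalize both by the instantaneous tissue activity, and fit a line to the late-time points, whose slope estimates the distribution volume. A numpy sketch with synthetic curves (not PIB data; t* and all constants are illustrative):

        import numpy as np

        def cumtrapz(y, t):
            # Cumulative trapezoidal integral of y(t), same length as y.
            out = np.zeros_like(y)
            out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
            return out

        def logan_vt(t, ct, cp, t_star=30.0):
            # x = int(Cp)/Ct, y = int(Ct)/Ct; late-time slope estimates V_T.
            x = cumtrapz(cp, t) / ct
            y = cumtrapz(ct, t) / ct
            late = t >= t_star
            slope, intercept = np.polyfit(x[late], y[late], 1)
            return slope

        # Synthetic time-activity curves (minutes).
        t = np.linspace(0.5, 90, 60)
        cp = np.exp(-0.1 * t) + 0.05 * np.exp(-0.01 * t)   # plasma input
        ct = 3.0 * (np.exp(-0.02 * t) - np.exp(-0.5 * t))  # tissue curve
        print("V_T estimate: %.2f" % logan_vt(t, ct, cp))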

  18. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…

  19. Modeling populations of rotationally mixed massive stars

    NASA Astrophysics Data System (ADS)

    Brott, I.

    2011-02-01

Massive stars can be considered cosmic engines. With their high luminosities, strong stellar winds, and violent deaths they drive the evolution of galaxies throughout the history of the universe. Despite the importance of massive stars, their evolution is still poorly understood: two major issues have plagued evolutionary models of massive stars until today, mixing and mass loss. Because the effects of mass loss remain limited on the main sequence in the considered mass and metallicity range, this thesis concentrates on the role of mixing in massive stars. It approaches the problem at the crossroads between observations and simulations. The main question is whether evolutionary models of single stars, accounting for the effects of rotation, reproduce the observed properties of real stars; in particular, we are interested in whether the evolutionary models can reproduce the surface abundance changes during the main-sequence phase. To constrain our models we build a population synthesis model for the sample of the VLT-FLAMES Survey of Massive Stars, for which the star-formation history and rotational velocity distribution are well constrained. We consider the four main regions of the Hunter diagram: nitrogen-unenriched slow rotators and nitrogen-enriched fast rotators, which are predicted by theory, and nitrogen-enriched slow rotators and nitrogen-unenriched fast rotators, which are not predicted by our model. We conclude that currently these comparisons are not sufficient to verify the theory of rotational mixing; physical processes in addition to rotational mixing appear necessary to explain the stars in the latter two regions. The chapters of this thesis have been published in the following journals: Ch. 2: "Rotating Massive Main-Sequence Stars I: Grids of Evolutionary Models and Isochrones", I. Brott, S. E. de Mink, M. Cantiello, N. Langer, A. de Koter, C. J. Evans, I. Hunter, C. Trundle, J. S. Vink, submitted to Astronomy & Astrophysics. Ch. 3: "The VLT-FLAMES Survey of Massive

  20. Graphical determination of metal bioavailability to soil invertebrates utilizing the Langmuir sorption model

    SciTech Connect

    Donkin, S.G.

    1997-09-01

    A new method of performing soil toxicity tests with free-living nematodes exposed to several metals and soil types has been adapted to the Langmuir sorption model in an attempt at bridging the gap between physico-chemical and biological data gathered in the complex soil matrix. Pseudo-Langmuir sorption isotherms have been developed using nematode toxic responses (lethality, in this case) in place of measured solvated metal, in order to more accurately model bioavailability. This method allows the graphical determination of Langmuir coefficients describing maximum sorption capacities and sorption affinities of various metal-soil combinations in the context of real biological responses of indigenous organisms. Results from nematode mortality tests with zinc, cadmium, copper, and lead in four soil types and water were used for isotherm construction. The level of agreement between these results and available literature data on metal sorption behavior in soils suggests that biologically relevant data may be successfully fitted to sorption models such as the Langmuir. This would allow for accurate prediction of soil contaminant concentrations which have minimal effect on indigenous invertebrates.
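
    The graphical determination referred to is the classical Langmuir linearization: plotting C/q against C gives a straight line with slope 1/q_max and intercept 1/(q_max * K), so both coefficients can be read off a fitted line. A short numpy sketch with fabricated concentration-response data:

        import numpy as np

        # Fabricated metal concentrations C and sorbed/response values q.
        C = np.array([5.0, 10, 25, 50, 100, 200])
        q = np.array([1.8, 3.2, 6.0, 8.6, 11.0, 12.6])

        # Langmuir: q = q_max*K*C / (1 + K*C), so C/q = C/q_max + 1/(q_max*K).
        slope, intercept = np.polyfit(C, C / q, 1)
        q_max = 1.0 / slope
        K = 1.0 / (intercept * q_max)
        print("q_max=%.1f, K=%.3f" % (q_max, K))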

  1. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    NASA Astrophysics Data System (ADS)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow-transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speedup of 19 is achieved compared to ArcGIS and of 15 compared to SAGA. We show that on GPUs the topological sort-based flow accumulation algorithm leads on average to a speedup by a factor of 7 over the flow-transfer algorithm. Thus a total speedup on the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor, and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km x 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many other applications than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
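
    The topological-sort strategy can be sketched serially: with each cell storing the index of its downstream D8 neighbor, cells are processed in dependency order (Kahn's algorithm) and pass their accumulated area downstream. This is a CPU illustration of the dependency structure, not the GPU kernel:

        from collections import deque

        def flow_accumulation(downstream):
            # downstream[i] = index of the cell that cell i drains to,
            # or -1 for an outlet. Returns cells drained through each cell.
            n = len(downstream)
            acc = [1] * n                    # each cell contributes itself
            indeg = [0] * n
            for d in downstream:
                if d >= 0:
                    indeg[d] += 1
            queue = deque(i for i in range(n) if indeg[i] == 0)  # ridges
            while queue:
                i = queue.popleft()
                d = downstream[i]
                if d >= 0:
                    acc[d] += acc[i]
                    indeg[d] -= 1
                    if indeg[d] == 0:
                        queue.append(d)
            return acc

        # Tiny illustration: cells 0 -> 1 -> 2 <- 3.
        print(flow_accumulation([1, 2, -1, 2]))   # [1, 2, 4, 1]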

  2. Development of a graphical user interface in GIS raster format for the finite difference ground-water model code, MODFLOW

    SciTech Connect

    Heinzer, T.; Hansen, D.T.; Greer, W.; Sebhat, M.

    1996-12-31

A geographic information system (GIS) was used in developing a graphical user interface (GUI) for use with the US Geological Survey's finite difference ground-water flow model, MODFLOW. The GUI permits the construction of a MODFLOW-based ground-water flow model from scratch in a GIS environment. The model grid, input data, and output are stored as separate raster data sets which may be viewed, edited, and manipulated in a graphic environment. Other GIS data sets can be displayed with the model data sets for reference and evaluation. The GUI sets up a directory structure for storage of the files associated with the ground-water model and the raster data sets created by the interface. The GUI stores model coefficients and model output as raster values. Values stored by these raster data sets are formatted for use with the ground-water flow model code.

  3. Colocalization Estimation Using Graphical Modeling and Variational Bayesian Expectation Maximization: Towards a Parameter-Free Approach.

    PubMed

    Awate, Suyash P; Radhakrishnan, Thyagarajan

    2015-01-01

In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually, which leads to a loss of reproducibility of the results. This paper proposes a new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art. PMID:26221663
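
    Variational Bayesian EM for mixture models, the general family to which the paper's inference scheme belongs, is available off the shelf in scikit-learn; the sketch below clusters fabricated two-channel pixel intensities and is not the authors' colocalization model.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        # Fabricated two-channel fluorescence intensities per pixel.
        rng = np.random.default_rng(0)
        background = rng.normal(0.1, 0.05, size=(800, 2))
        colocalized = rng.normal(0.7, 0.10, size=(200, 2))
        pixels = np.vstack([background, colocalized]).clip(0, 1)

        vbem = BayesianGaussianMixture(n_components=3,
                                       weight_concentration_prior=0.1,
                                       max_iter=500).fit(pixels)
        # Components with negligible posterior weight are suppressed
        # automatically, one aspect of the parameter-free appeal of VB.
        print(vbem.weights_.round(3))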

  4. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    NASA Technical Reports Server (NTRS)

    Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with multiple requests submitted in sequence when desired, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT model runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  5. Linkage Analysis with an Alternative Formulation for the Mixed Model of Inheritance: The Finite Polygenic Mixed Model

    PubMed Central

    Stricker, C.; Fernando, R. L.; Elston, R. C.

    1995-01-01

    This paper presents an extension of the finite polygenic mixed model of FERNANDO et al. (1994) to linkage analysis. The finite polygenic mixed model, extended for linkage analysis, leads to a likelihood that can be calculated using efficient algorithms developed for oligogenic models. For comparison, linkage analysis of 5 simulated 4021-member pedigrees was performed using the usual mixed model of inheritance, approximated by HASSTEDT (1982), and the finite polygenic mixed model extended for linkage analysis presented here. Maximum likelihood estimates of the finite polygenic mixed model could be inferred to be closer to the simulated values in these pedigrees. PMID:8601502

  6. Extended model for Richtmyer-Meshkov mix

    SciTech Connect

    Mikaelian, K O

    2009-11-18

We examine four Richtmyer-Meshkov (RM) experiments on shock-generated turbulent mix and find them to be in good agreement with our earlier simple model, in which the growth rate dh/dt of the mixing layer following a shock or reshock is constant and given by 2αAΔv, independent of the initial conditions h0. Here A is the Atwood number (ρB − ρA)/(ρB + ρA), ρA,B are the densities of the two fluids, Δv is the jump in velocity induced by the shock or reshock, and α is the constant measured in Rayleigh-Taylor (RT) experiments: α^bubble ≈ 0.05-0.07, α^spike ≈ (1.8-2.5)α^bubble for A ≈ 0.7-1.0. In the extended model the growth rate begins to decay after a time t*, when h = h*, slowing down from h = h0 + 2αAΔv·t to h ~ t^θ behavior, with θ^bubble ≈ 0.25 and θ^spike ≈ 0.36 for A ≈ 0.7. We ascribe this change-over to loss of memory of the direction of the shock or reshock, signaling the transition from highly directional to isotropic turbulence. In the simplest extension of the model, h*/h0 is independent of Δv and depends only on A. We find that h*/h0 ≈ 2.5-3.5 for A ≈ 0.7-1.0.
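
    One way to realize the extended model as a single formula, assuming the power law is matched continuously at t*: h(t) = h0 + 2αAΔv·t for t < t*, and h(t) = h*·(t/t*)^θ afterwards. A minimal sketch with illustrative parameter values:

        import numpy as np

        def mix_width(t, h0, alpha, A, dv, t_star, theta):
            # Linear growth at rate 2*alpha*A*dv up to t_star, then a
            # power law ~ t**theta matched continuously at t_star.
            h_star = h0 + 2.0 * alpha * A * dv * t_star
            return np.where(t < t_star,
                            h0 + 2.0 * alpha * A * dv * t,
                            h_star * (t / t_star) ** theta)

        # Illustrative bubble-front values for A ~ 0.7.
        t = np.linspace(0.0, 10.0, 6)
        print(mix_width(t, h0=0.1, alpha=0.06, A=0.7, dv=1.0,
                        t_star=2.0, theta=0.25))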

  7. Graphical Representations and Odds Ratios in a Distance-Association Model for the Analysis of Cross-Classified Data

    ERIC Educational Resources Information Center

    de Rooij, Mark; Heiser, Willem J.

    2005-01-01

Although RC(M)-association models have become a generally useful tool for the analysis of cross-classified data, the graphical representation resulting from such an analysis can at times be misleading. The relationships present between row category points and column category points cannot be interpreted by inter-point distances but only through…

  8. Design Graphics

    NASA Technical Reports Server (NTRS)

    1990-01-01

A mathematician, David R. Hedgley, Jr., developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.

  9. Hierarchical graphical models for simultaneous tracking and recognition in wide-area scenes.

    PubMed

    Nayak, Nandita M; Zhu, Yingying; Chowdhury, Amit K Roy

    2015-07-01

We present a unified framework to track multiple people, as well as to localize and label their activities, in complex long-duration video sequences. To do this, we focus on two aspects: 1) the influence of tracks on the activities performed by the corresponding actors and 2) the structural relationships across activities. We propose a two-level hierarchical graphical model, which learns the relationship between tracks, the relationship between tracks and their corresponding activity segments, and the spatiotemporal relationships across activity segments. Such contextual relationships between tracks and activity segments are exploited at both levels in the hierarchy for increased robustness. An L1-regularized structure learning approach is proposed for this purpose. While it is well known that availability of the labels and locations of activities can help in determining tracks more accurately and vice versa, most current approaches have dealt with these problems separately. Inspired by research in the area of biological vision, we propose a bidirectional approach that integrates both bottom-up and top-down processing, i.e., bottom-up recognition of activities using computed tracks and top-down computation of tracks using the obtained recognition. We demonstrate our results on the recent and publicly available UCLA and VIRAT data sets consisting of realistic indoor and outdoor surveillance sequences. PMID:25700452

  10. Quantum Chemistry for Solvated Molecules on Graphical Processing Units Using Polarizable Continuum Models.

    PubMed

    Liu, Fang; Luehr, Nathan; Kulik, Heather J; Martínez, Todd J

    2015-07-14

    The conductor-like polarization model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by computational expense of both the gas phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementation using over 20 small proteins in solvent environment. Using a single GPU, our method evaluates the C-PCM related integrals and their derivatives more than 10× faster than that with a conventional CPU-based implementation. Our improvements to the linear solver provide a further 3× acceleration. The overall calculations including C-PCM solvation require, typically, 20-40% more effort than that for their gas phase counterparts for a moderate basis set and molecule surface discretization level. The relative cost of the C-PCM solvation correction decreases as the basis sets and/or cavity radii increase. Therefore, description of solvation with this model should be routine. We also discuss applications to the study of the conformational landscape of an amyloid fibril. PMID:26575750
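
    The randomized block-Jacobi idea generalizes beyond C-PCM; a minimal dense-matrix sketch of such a preconditioner (a generic illustration, not the authors' GPU implementation) is:

      import numpy as np

      def randomized_block_jacobi(K, block_size, seed=0):
          """Approximate K^-1 by inverting diagonal blocks built from a
          random permutation of the unknowns (generic sketch)."""
          rng = np.random.default_rng(seed)
          n = K.shape[0]
          perm = rng.permutation(n)
          M = np.zeros_like(K)
          for start in range(0, n, block_size):
              idx = perm[start:start + block_size]
              M[np.ix_(idx, idx)] = np.linalg.inv(K[np.ix_(idx, idx)])
          return M  # use as preconditioner when iterating on K x = b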

  11. MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation

    EPA Science Inventory

    Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...

  12. Single calcium channel domain gating of synaptic vesicle fusion at fast synapses; analysis by graphic modeling

    PubMed Central

    Stanley, Elise F

    2015-01-01

    At fast-transmitting presynaptic terminals, Ca2+ ions enter through voltage gated calcium channels (CaVs) and bind to a synaptic vesicle (SV)-associated calcium sensor (SV-sensor) to gate fusion and discharge. An open CaV generates a high-concentration plume, or nanodomain, of Ca2+ that dissipates precipitously with distance from the pore. At most fast synapses, such as the frog neuromuscular junction (NMJ), the SV sensors are located sufficiently close to individual CaVs to be gated by single nanodomains. However, at others, such as the mature rodent calyx of Held, the physiology is more complex, with evidence for CaVs both close to and distant from the SV sensor, and it is argued that release is gated primarily by the overlapping Ca2+ nanodomains from many CaVs. We devised a 'graphic modeling' method to sum Ca2+ from individual CaVs located at varying distances from the SV-sensor to determine the SV release probability and also the fraction of that probability that can be attributed to single domain gating. This method was applied first to simplified, low and high CaV density model release sites and then to published data on the contrasting frog NMJ and rodent calyx of Held native synapses. We report three main predictions: the SV-sensor is positioned very close to the point at which the SV fuses with the membrane; single domain-release gating predominates even at synapses where the SV abuts a large cluster of CaVs; and even relatively remote CaVs can contribute significantly to single domain-based gating. PMID:26457441
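
    The summation at the heart of the method can be sketched generically: add the single-CaV Ca2+ plumes at their distances from the sensor, then pass the total through a cooperative sensor. All constants below are illustrative assumptions, not values from the paper:

      import numpy as np

      def release_probability(distances_nm, amp=50.0, decay_nm=25.0,
                              kd=20.0, hill=4.0):
          """Sum steeply decaying single-CaV Ca2+ domains at the SV sensor,
          then apply a Hill-type binding curve for release probability."""
          ca = amp * np.exp(-np.asarray(distances_nm, float) / decay_nm)
          total = ca.sum()
          return total**hill / (total**hill + kd**hill)

      # fraction of gating attributable to the nearest channel alone
      single = release_probability([20.0])
      cluster = release_probability([20.0, 60.0, 90.0])
      print(single / cluster)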

  13. On Local Homogeneity and Stochastically Ordered Mixed Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg

    2006-01-01

    Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…

  14. Computer graphics in aerodynamic analysis

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1984-01-01

    The use of computer graphics and its application to aerodynamic analyses on a routine basis is outlined. The mathematical modelling of the aircraft geometries and the shading technique implemented are discussed. Examples of computer graphics used to display aerodynamic flow field data and aircraft geometries are shown. A future need in computer graphics for aerodynamic analyses is addressed.

  15. The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education

    PubMed Central

    Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki

    2012-01-01

    Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759

  16. Probabilistic graphical models to deal with age estimation of living persons.

    PubMed

    Sironi, Emanuele; Gallidabino, Matteo; Weyermann, Céline; Taroni, Franco

    2016-03-01

    Due to the rise of criminal, civil and administrative judicial situations involving people lacking valid identity documents, age estimation of living persons has become an important operational procedure for numerous forensic and medicolegal services worldwide. The chronological age of a given person is generally estimated from the observed degree of maturity of some selected physical attributes by means of statistical methods. However, their application in the forensic framework suffers from some conceptual and practical drawbacks, as recently claimed in the specialised literature. The aim of this paper is therefore to offer an alternative solution for overcoming these limits, by reiterating the utility of a probabilistic Bayesian approach for age estimation. This approach allows one to deal in a transparent way with the uncertainty surrounding the age estimation process and to produce all the relevant information in the form of posterior probability distribution about the chronological age of the person under investigation. Furthermore, this probability distribution can also be used for evaluating in a coherent way the possibility that the examined individual is younger or older than a given legal age threshold having a particular legal interest. The main novelty introduced by this work is the development of a probabilistic graphical model, i.e. a Bayesian network, for dealing with the problem at hand. The use of this kind of probabilistic tool can significantly facilitate the application of the proposed methodology: examples are presented based on data related to the ossification status of the medial clavicular epiphysis. The reliability and the advantages of this probabilistic tool are presented and discussed. PMID:25794687

  17. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    USGS Publications Warehouse

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  18. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  19. Mixed Barrier Model for the Mixed Glass Former Effect in Ion Conducting Glasses

    NASA Astrophysics Data System (ADS)

    Schuch, Michael; Müller, Christian R.; Maass, Philipp; Martin, Steve W.

    2009-04-01

    Mixing two types of glass formers in ion conducting glasses can be exploited to lower the conductivity activation energy and thereby increase the ionic conductivity, a phenomenon known as the mixed glass former effect (MGFE). We develop a model for this MGFE in which activation barriers for individual ion jumps are lowered in inhomogeneous environments containing both types of network forming units. Fits of the model to experimental data allow one to estimate the strength of the barrier reduction, and they indicate a spatial clustering of the two types of network formers. The model predicts a time-temperature superposition of conductivity spectra onto a common master curve independent of the mixing ratio.

  20. Inferring transcriptional gene regulation network of starch metabolism in Arabidopsis thaliana leaves using graphical Gaussian model

    PubMed Central

    2012-01-01

    Background Starch serves as a temporal storage of carbohydrates in plant leaves during day/night cycles. To study transcriptional regulatory modules of this dynamic metabolic process, we conducted gene regulation network analysis based on small-sample inference of a graphical Gaussian model (GGM). Results Time-series significance analysis was applied to Arabidopsis leaf transcriptome data to obtain a set of genes that are highly regulated under a diurnal cycle. A total of 1,480 diurnally regulated genes included 21 starch metabolic enzymes, 6 clock-associated genes, and 106 transcription factors (TF). A starch-clock-TF gene regulation network comprising 117 nodes and 266 edges was constructed by GGM from these 133 significant genes that are potentially related to the diurnal control of starch metabolism. From this network, we found that β-amylase 3 (b-amy3: At4g17090), which participates in starch degradation in the chloroplast, is the most frequently connected gene (a hub gene). The robustness of the gene-to-gene regulatory network was further analyzed by TF binding site prediction and by evaluating global co-expression of TFs and target starch metabolic enzymes. As a result, two TFs, indeterminate domain 5 (AtIDD5: At2g02070) and constans-like (COL: At2g21320), were identified as positive regulators of starch synthase 4 (SS4: At4g18240). The inference model of AtIDD5-dependent positive regulation of SS4 gene expression was experimentally supported by decreased SS4 mRNA accumulation in Atidd5 mutant plants during the light period of both short and long day conditions. COL was also shown to positively control SS4 mRNA accumulation. Furthermore, the knockout of AtIDD5 and COL led to deformation of chloroplasts and their contained starch granules. This deformity also affected the number of starch granules per chloroplast, which increased significantly in both knockout mutant lines. Conclusions In this study, we utilized a systematic approach of microarray analysis to discover
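
    The GGM step itself is compact: edges correspond to non-negligible partial correlations, read off the inverse of the gene-gene covariance matrix. A minimal sketch (using a plain pseudoinverse; the study used a small-sample shrinkage variant):

      import numpy as np

      def partial_correlations(X):
          """X: samples x genes. Off-diagonal entries of the returned
          matrix are partial correlations; near-zero entries mean no
          direct edge in the graphical Gaussian model."""
          prec = np.linalg.pinv(np.cov(X, rowvar=False))
          d = np.sqrt(np.diag(prec))
          pcor = -prec / np.outer(d, d)
          np.fill_diagonal(pcor, 1.0)
          return pcor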

  1. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  2. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs there due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  3. Radiolysis Model Formulation for Integration with the Mixed Potential Model

    SciTech Connect

    Buck, Edgar C.; Wittman, Richard S.

    2014-07-10

    The U.S. Department of Energy Office of Nuclear Energy (DOE-NE), Office of Fuel Cycle Technology has established the Used Fuel Disposition Campaign (UFDC) to conduct the research and development activities related to storage, transportation, and disposal of used nuclear fuel (UNF) and high-level radioactive waste. Within the UFDC, the components for a general system model of the degradation and subsequent transport of UNF are being developed to analyze the performance of disposal options [Sassani et al., 2012]. Two model components of the near-field part of the problem are the ANL Mixed Potential Model and the PNNL Radiolysis Model. This report responds to the need to integrate the two models, as outlined in Buck, E.C., J.L. Jerden, W.L. Ebert, and R.S. Wittman (2013), "Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation," FCRD-UFD-2013-000290, M3FT-PN0806058

  4. ModelMuse: A U.S. Geological Survey Open-Source, Graphical User Interface for Groundwater Models

    NASA Astrophysics Data System (ADS)

    Winston, R. B.

    2013-12-01

    ModelMuse is a free publicly-available graphical preprocessor used to generate the input and display the output for several groundwater models. It is written in Object Pascal and the source code is available on the USGS software web site. Supported models include the MODFLOW family of models, PHAST (version 1), and SUTRA version 2.2. With MODFLOW and PHAST, the user generates a grid and uses 'objects' (points, lines, and polygons) to define boundary conditions and the spatial variation in aquifer properties. Because the objects define the spatial variation, the grid can be changed without the user needing to re-enter spatial data. The same paradigm is used with SUTRA except that the user generates a quadrilateral finite-element mesh instead of a rectangular grid. The user interacts with the model in a top view and in a vertical cross section. The cross section can be at any angle or location. There is also a three-dimensional view of the model. For SUTRA, a new method of visualizing the permeability and related properties has been introduced. In three dimensional SUTRA models, the user specifies the permeability tensor by specifying permeability in three mutually orthogonal directions that can be oriented in space in any direction. Because it is important for the user to be able to check both the magnitudes and directions of the permeabilities, ModelMuse displays the permeabilities as either a two-dimensional or a three-dimensional vector plot. Color is used to differentiate the maximum, middle, and minimum permeability vectors. The magnitude of the permeability is shown by the vector length. The vector angle shows the direction of the maximum, middle, or minimum permeability. Contour and color plots can also be used to display model input and output data.

  5. Lidar observations of mixed layer dynamics - Tests of parameterized entrainment models of mixed layer growth rate

    NASA Technical Reports Server (NTRS)

    Boers, R.; Eloranta, E. W.; Coulter, R. L.

    1984-01-01

    Ground based lidar measurements of the atmospheric mixed layer depth, the entrainment zone depth and the wind speed and wind direction were used to test various parameterized entrainment models of mixed layer growth rate. Six case studies under clear air convective conditions over flat terrain in central Illinois are presented. It is shown that surface heating alone accounts for a major portion of the rise of the mixed layer on all days. A new set of entrainment model constants was determined which optimized height predictions for the dataset. Under convective conditions, the shape of the mixed layer height prediction curves closely resembled the observed shapes. Under conditions when significant wind shear was present, the shape of the height prediction curve departed from the data suggesting deficiencies in the parameterization of shear production. Development of small cumulus clouds on top of the layer is shown to affect mixed layer depths in the afternoon growth phase.
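
    A minimal encroachment-type sketch of such a parameterized entrainment model (one common closure form; the coefficient values below are assumptions, not the new constants determined in the study):

      def grow_mixed_layer(h0, q_s, gamma, beta=0.2, dt=60.0, nsteps=480):
          """Integrate dh/dt = (1 + 2*beta) * q_s / (gamma * h): surface
          kinematic heat flux q_s (K m/s), capping-inversion lapse rate
          gamma (K/m), entrainment ratio beta. Returns mixed layer depth."""
          h = h0
          for _ in range(nsteps):
              h += dt * (1.0 + 2.0 * beta) * q_s / (gamma * h)
          return h

      # e.g. 8 hours of 0.15 K m/s surface heating into a 5 K/km inversion
      print(grow_mixed_layer(200.0, 0.15, 0.005))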

  6. Interactive computer graphics

    NASA Astrophysics Data System (ADS)

    Purser, K.

    1980-08-01

    Design layouts have traditionally been done on a drafting board by drawing a two-dimensional representation with section cuts and side views to describe the exact three-dimensional model. With the advent of computer graphics, a three-dimensional model can be created directly. The computer stores the exact three-dimensional model, which can be examined from any angle and at any scale. A brief overview of interactive computer graphics, how models are made and some of the benefits/limitations are described.

  7. Models of neutrino mass, mixing and CP violation

    NASA Astrophysics Data System (ADS)

    King, Stephen F.

    2015-12-01

    In this topical review we argue that neutrino mass and mixing data motivates extending the Standard Model (SM) to include a non-Abelian discrete flavour symmetry in order to accurately predict the large leptonic mixing angles and CP violation. We begin with an overview of the SM puzzles, followed by a description of some classic lepton mixing patterns. Lepton mixing may be regarded as a deviation from tri-bimaximal mixing, with charged lepton corrections leading to solar mixing sum rules, or tri-maximal lepton mixing leading to atmospheric mixing sum rules. We survey neutrino mass models, using a roadmap based on the open questions in neutrino physics. We then focus on the seesaw mechanism with right-handed neutrinos, where sequential dominance (SD) can account for large lepton mixing angles and CP violation, with precise predictions emerging from constrained SD (CSD). We define the flavour problem and discuss progress towards a theory of flavour using GUTs and discrete family symmetry. We classify models as direct, semidirect or indirect, according to the relation between the Klein symmetry of the mass matrices and the discrete family symmetry, in all cases focussing on spontaneous CP violation. Finally we give two examples of realistic and highly predictive indirect models with CSD, namely an A to Z of flavour with Pati-Salam and a fairly complete A4 × SU(5) SUSY GUT of flavour, where both models have interesting implications for leptogenesis.
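
    For orientation, the tri-bimaximal pattern that these deviations are measured against corresponds, in one common sign convention, to

      \[
      U_{\mathrm{TBM}} =
      \begin{pmatrix}
        \sqrt{2/3} & 1/\sqrt{3} & 0 \\
        -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \\
        1/\sqrt{6} & -1/\sqrt{3} & 1/\sqrt{2}
      \end{pmatrix},
      \qquad
      \sin^2\theta_{12} = \tfrac{1}{3},\quad
      \sin^2\theta_{23} = \tfrac{1}{2},\quad
      \theta_{13} = 0 .
      \]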

  8. Taxonomy Of Magma Mixing I: Magma Mixing Metrics And The Thermochemistry Of Magma Hybridization Illuminated With A Toy Model

    NASA Astrophysics Data System (ADS)

    Spera, F. J.; Bohrson, W. A.; Schmidt, J.

    2013-12-01

    The rock record preserves abundant evidence of magma mixing in the form of mafic enclaves and mixed pumice in volcanic eruptions, syn-plutonic mafic or silicic dikes and intrusive complexes, replenishment events recorded in cumulates from layered intrusions, and crystal-scale heterogeneity in phenocrysts and cumulate minerals. These observations show that magma mixing, in conjunction with crystallization (perfect fractional or incremental batch), is a first-order petrogenetic process. Magma mixing (sensu lato) occurs across a spectrum of mixed states from magma mingling to complete blending. The degree of mixing is quantified (Oldenburg et al., 1989) using two measures: the statistics of the segregation length scales (scale of segregation, L*) and the spatial contrast in composition C relative to the mean C (intensity of segregation, I). Mingling of dissimilar magmas produces a heterogeneous mixture containing discrete regions of end member melts and populations of crystals, with finite L* and I > 0. When L* → ∞ and I → 0, the mixing magmas become hybridized and can be studied thermodynamically. Such hybrid magma is a multiphase equilibrium mixture of homogeneous melt, unzoned crystals, and possible bubbles of a supercritical fluid. Here, we use a toy model to elucidate the principles of magma hybridization in a binary system (components A and B with pure crystals of α or β phase) with simple thermodynamics to build an outcome taxonomy. This binary system is not unlike the system Anorthite-Diopside, the classic low-pressure model basalt system. In the toy model, there are seven parameters describing the phase equilibria (eutectic T and X, specific heat, melting T and fusion enthalpies of α and β crystals) and five variables describing the magma mixing conditions: end member bulk compositions, temperatures and fraction of resident magma (M) that blends with recharge (R) magma to form a single equilibrium hybrid magma. There are 24 possible initial states when M
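
    The intensity of segregation has a compact Danckwerts-style form that can be evaluated on any sampled concentration field; a generic sketch (not necessarily the paper's exact normalization):

      import numpy as np

      def intensity_of_segregation(c):
          """I = var(C) / (Cbar * (1 - Cbar)) for a concentration field C
          scaled to [0, 1]: I = 1 when fully segregated into pure end
          members, I -> 0 as the mixture approaches a perfect blend."""
          c = np.asarray(c, dtype=float)
          cbar = c.mean()
          return c.var() / (cbar * (1.0 - cbar))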

  9. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  10. Comparison between kinetic modelling and graphical analysis for the quantification of [18F]fluoromethylcholine uptake in mice

    PubMed Central

    2013-01-01

    Background Until now, no kinetic model has been described for the oncologic tracer [18F]fluoromethylcholine ([18F]FCho), so we aimed to validate a proper model that is easy to implement and allows tracer quantification in tissues. Methods Based on the metabolic profile, two types of compartmental models were evaluated. The first is a 3C2i model, which contains three tissue compartments and two input functions and corrects for possible [18F]fluorobetaine ([18F]FBet) uptake by the tissues. The second is a two-tissue-compartment model with a single input function (2C1i). Moreover, a comparison, based on intra-observer variability, was made between kinetic modelling and graphical analysis. Results Determination of the [18F]FCho-to-[18F]FBet uptake ratios in tissues and evaluation of the fitting of both kinetic models indicated that corrections for [18F]FBet uptake are not mandatory. In addition, [18F]FCho uptake is well described by the 2C1i model and by graphical analysis by means of the Patlak plot. Conclusions The Patlak plot is a reliable, precise, and robust method to quantify [18F]FCho uptake independent of scan time or plasma clearance. In addition, it is easily implemented, even under non-equilibrium conditions and without creating additional errors. PMID:24034278
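
    The Patlak estimate reduces to a late-time linear fit in transformed coordinates; a generic sketch (assuming sampled tissue and plasma time-activity curves as numpy arrays with positive plasma activity):

      import numpy as np

      def patlak_ki(t, c_tissue, c_plasma, t_equil_frac=0.5):
          """Patlak plot: y = c_tissue/c_plasma against x = (cumulative
          plasma integral)/c_plasma becomes linear after equilibration;
          the slope Ki is the net uptake rate constant."""
          cumint = np.concatenate(
              ([0.0],
               np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
          x = cumint / c_plasma
          y = c_tissue / c_plasma
          late = t >= t_equil_frac * t[-1]   # keep only the linear portion
          ki, _intercept = np.polyfit(x[late], y[late], 1)
          return ki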

  11. A multifluid mix model with material strength effects

    SciTech Connect

    Chang, C. H.; Scannapieco, A. J.

    2012-04-23

    We present a new multifluid mix model. Its features include material strength effects and pressure and temperature nonequilibrium between mixing materials. It is applicable to both interpenetration and demixing of immiscible fluids and diffusion of miscible fluids. The presented model exhibits the appropriate smooth transition in mathematical form as the mixture evolves from multiphase to molecular mixing, extending its applicability to the intermediate stages in which both types of mixing are present. Virtual mass force and momentum exchange have been generalized for heterogeneous multimaterial mixtures. The compression work has been extended so that the resulting species energy equations are consistent with the pressure force and material strength.

  12. A New Model for Mix It Up

    ERIC Educational Resources Information Center

    Holladay, Jennifer

    2009-01-01

    Since 2002, Teaching Tolerance's Mix It Up at Lunch Day program has helped millions of students cross social boundaries and create more inclusive school communities. Its goal is to create a safe, purposeful opportunity for students to break down the patterns of social self-segregation that too often plague schools. Research conducted in 2006 by…

  13. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, Ye

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze, and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales, and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.

  14. Parameter recovery and model selection in mixed Rasch models.

    PubMed

    Preinerstorfer, David; Formann, Anton K

    2012-05-01

    This study examines the precision of conditional maximum likelihood estimates and the quality of model selection methods based on information criteria (AIC and BIC) in mixed Rasch models. The design of the Monte Carlo simulation study included four test lengths (10, 15, 25, 40), three sample sizes (500, 1000, 2500), two simulated mixture conditions (one and two groups), and population homogeneity (equally sized subgroups) or heterogeneity (one subgroup three times larger than the other). The results show that both increasing sample size and increasing number of items lead to higher accuracy; medium-range parameters were estimated more precisely than extreme ones; and the accuracy was higher in homogeneous populations. The minimum-BIC method leads to almost perfect results and is more reliable than AIC-based model selection. The results are compared to findings by Li, Cohen, Kim, and Cho (2009) and practical guidelines are provided. PMID:21675964
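
    Once each candidate mixture has been fitted, minimum-BIC selection of the number of latent classes is a short computation; a sketch with hypothetical fit values:

      from math import log

      def bic(loglik, n_params, n_obs):
          """Bayesian information criterion: -2 logL + k log(n)."""
          return -2.0 * loglik + n_params * log(n_obs)

      # hypothetical fitted candidates: (n_classes, loglik, n_params)
      fits = [(1, -5210.4, 11), (2, -5105.9, 23), (3, -5100.2, 35)]
      best = min(fits, key=lambda f: bic(f[1], f[2], n_obs=1000))
      print("classes selected:", best[0])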

  15. Documentation of a graphical display program for the saturated-unsaturated transport (SUTRA) finite-element simulation model

    USGS Publications Warehouse

    Souza, W.R.

    1987-01-01

    This report documents a graphical display program for the U.S. Geological Survey finite-element groundwater flow and solute transport model SUTRA (saturated-unsaturated transport). Graphic features of the program, SUTRA-PLOT, include: (1) plots of the finite-element mesh, (2) velocity vector plots, (3) contour plots of pressure, solute concentration, temperature, or saturation, and (4) a finite-element interpolator for gridding data prior to contouring. SUTRA-PLOT is written in FORTRAN 77 on a PRIME 750 computer system, and requires Version 9.0 or higher of the DISSPLA graphics library. The program requires two input files: the SUTRA input data list and the SUTRA simulation output listing. The program is menu driven, and specifications for individual types of plots are entered and may be edited interactively. Installation instructions, a source code listing, and a description of the computer code are given. Six examples of plotting applications are used to demonstrate various features of the plotting program. (Author's abstract)

  16. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  17. On the coalescence-dispersion modeling of turbulent molecular mixing

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Kosaly, George

    1987-01-01

    The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and of Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain by various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. The infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite rate chemistry calculations are compared to further study a recent result of Hsieh and O'Brien, who found that higher concentration moments are not sensitive to chemistry.

  18. Computer modeling of jet mixing in INEL waste tanks

    SciTech Connect

    Meyer, P.A.

    1994-01-01

    The objective of this study is to examine the feasibility of using submerged jet mixing pumps to mobilize and suspend settled sludge materials in INEL High Level Radioactive Waste Tanks. Scenarios include removing the heel (a shallow liquid and sludge layer remaining after tank emptying processes) and mobilizing and suspending solids in full or partially full tanks. The approach used was to (1) briefly review jet mixing theory, (2) review the erosion literature in order to identify and estimate important sludge characterization parameters, (3) perform computer modeling of submerged liquid mixing jets in INEL tank geometries, (4) develop analytical models from which pump operating conditions and mixing times can be estimated, and (5) analyze model results to determine the overall feasibility of using jet mixing pumps and make design recommendations.

  19. Diagnostic tools for mixing models of stream water chemistry

    USGS Publications Warehouse

    Hooper, R.P.

    2003-01-01

    Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end-members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end-members, an extension of the mathematics of mixing models is presented that assesses the "fit" of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end-members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end-members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
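
    The rank and lack-of-fit diagnostics can be sketched with a principal-component projection; the following is a generic illustration of the idea, not the paper's exact implementation:

      import numpy as np

      def mixing_residuals(X, rank):
          """X: samples x solutes. Project standardized stream-chemistry
          data onto a rank-dimensional subspace (mixing of k end-members
          spans a (k-1)-dimensional subspace) and return the residuals;
          structure in the residuals flags processes beyond conservative
          mixing."""
          Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
          U, s, Vt = np.linalg.svd(Z, full_matrices=False)
          Z_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
          return Z - Z_hat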

  20. Development of a Medicaid Behavioral Health Case-Mix Model

    ERIC Educational Resources Information Center

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  1. Building Models in the Classroom: Taking Advantage of Sophisticated Geomorphic Numerical Tools Using a Simple Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.

    2014-12-01

    Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphics user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university level. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.

  2. Kinetic mixing effect in the 3-3-1-1 model

    NASA Astrophysics Data System (ADS)

    Dong, P. V.; Si, D. T.

    2016-06-01

    We show that the mixing effect of the neutral gauge bosons in the 3-3-1-1 model comes from two sources. The first is due to the 3-3-1-1 gauge symmetry breaking as usual, whereas the second results from the kinetic mixing between the gauge bosons of the U(1)X and U(1)N groups, which are used to determine the electric charge and baryon-minus-lepton numbers, respectively. Such mixings modify the ρ-parameter and the known couplings of Z with fermions. The constraints that arise from flavor-changing neutral currents due to the gauge boson mixings and nonuniversal fermion generations are also given.

  3. Shell model of optimal passive-scalar mixing

    NASA Astrophysics Data System (ADS)

    Miles, Christopher; Doering, Charles

    2015-11-01

    Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H⁻¹ mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfectly mixed state for the enstrophy-constrained case. Although we only enforce that the time-average energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.
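
    The H⁻¹ mix-norm weights Fourier amplitudes by 1/|k|, so stirring scalar variance down to small scales registers as mixing; a one-dimensional sketch:

      import numpy as np

      def mix_norm(theta, L=2.0 * np.pi):
          """H^{-1} mix-norm of a mean-zero scalar on a 1-D periodic grid:
          sqrt(sum |theta_hat_k|^2 / |k|^2), omitting the k = 0 mode."""
          n = theta.size
          theta_hat = np.fft.fft(theta) / n
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
          mask = k != 0
          return np.sqrt(np.sum(np.abs(theta_hat[mask])**2 / k[mask]**2))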

  4. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The US Environmental Protection Agency has a history of developing plume models and providing technical assistance. The Visual Plumes model (VP) is a recent addition to the public-domain models available on the EPA Center for Exposure Assessment Modeling (CEAM) web page. The Wind...

  5. An Analysis of 24-Hour Ambulatory Blood Pressure Monitoring Data using Orthonormal Polynomials in the Linear Mixed Model

    PubMed Central

    Edwards, Lloyd J.; Simpson, Sean L.

    2014-01-01

    Background The use of 24-hour ambulatory blood pressure monitoring (ABPM) in clinical practice and observational epidemiological studies has grown considerably in the past 25 years. ABPM is a very effective technique for assessing biological, environmental, and drug effects on blood pressure. Objectives In order to enhance the effectiveness of ABPM for clinical and observational research studies via analytical and graphical results, it is important to develop alternative data analysis approaches using modern statistical techniques. Methods The linear mixed model for the analysis of longitudinal data is particularly well-suited for the estimation of, inference about, and interpretation of both population (mean) and subject-specific trajectories for ABPM data. We propose using a linear mixed model with orthonormal polynomials across time in both the fixed and random effects to analyze ABPM data. Results We demonstrate the proposed analysis technique using data from the Dietary Approaches to Stop Hypertension (DASH) study, a multicenter, randomized, parallel arm feeding study that tested the effects of dietary patterns on blood pressure. Conclusions The linear mixed model is relatively easy to implement (given the complexity of the technique) using available software, allows for straightforward testing of multiple hypotheses, and the results can be presented to research clinicians using both graphical and tabular displays. Using orthonormal polynomials provides the ability to model the nonlinear trajectories of each subject with the same complexity as the mean model (fixed effects). PMID:24667908
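
    A minimal sketch of the design step (an orthonormal polynomial basis over the observed times, used for both fixed and random effects), with hypothetical variable names and statsmodels' MixedLM:

      import numpy as np
      import statsmodels.api as sm

      def orthonormal_time_basis(t, degree):
          """Columns of Q form an orthonormal polynomial basis (constant,
          linear, quadratic, ...) over the observed measurement times."""
          V = np.vander(np.asarray(t, float), degree + 1, increasing=True)
          Q, _ = np.linalg.qr(V)
          return Q

      # hypothetical use: bp = readings, hours = times, subj = subject ids
      # X = orthonormal_time_basis(hours, 3)
      # fit = sm.MixedLM(bp, X, groups=subj, exog_re=X).fit()
      # fit.fe_params gives the population-mean polynomial coefficients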

  6. Agility and mixed-model furniture production

    NASA Astrophysics Data System (ADS)

    Yao, Andrew C.

    2000-10-01

    The manufacture of upholstered furniture provides an excellent opportunity to analyze the effect of a comprehensive communication system on classical production management functions. The objective of the research is to study the scheduling heuristics that embrace the concepts inherent in MRP, JIT and TQM while recognizing the need for agility in a somewhat complex and demanding environment. An on-line, real-time data capture system provides the status and location of production lots, components, subassemblies for schedule control. Current inventory status of raw material and purchased items are required in order to develop and adhere to schedules. For the large variety of styles and fabrics customers may order, the communication system must provide timely, accurate and comprehensive information for intelligent decisions with respect to the product mix and production resources.

  7. Graphical programming of telerobotic tasks

    SciTech Connect

    Small, D.E.; McDonald, M.J.

    1996-11-01

    With a goal of producing faster, safer, and cheaper technologies for nuclear waste cleanup, Sandia is actively developing and extending intelligent systems technologies through the US Department of Energy Office of Technology Development (DOE OTD) Robotic Technology Development Program (RTDP). Graphical programming is a key technology for robotic waste cleanup that Sandia is developing for this goal. Graphical programming uses simulation such as TELEGRIP 'on-line' to program and control robots. Characterized by its model-based control architecture, integrated simulation, 'point-and-click' graphical user interfaces, task and path planning software, and network communications, Sandia's Graphical Programming systems allow operators to focus on high-level robotic tasks rather than the low-level details. Use of scripted tasks, rather than customized programs, minimizes the necessity of recompiling supervisory control systems and enhances flexibility. Rapid world-modelling technologies allow Graphical Programming to be used in dynamic and unpredictable environments including digging and pipe-cutting. This paper describes Sancho, Sandia's most advanced graphical programming supervisory software. Sancho, now operational on several robot systems, incorporates all of Sandia's recent advances in supervisory control. Graphical programming uses 3-D graphics models as intuitive operator interfaces to program and control complex robotic systems. The goal of the paper is to help the reader understand how Sandia implements graphical programming systems and which key features in Sancho have proven to be most effective.

  8. Graphics development of DCOR: Deterministic Combat Model of Oak Ridge

    SciTech Connect

    Hunt, G.; Azmy, Y.Y.

    1992-10-01

    DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
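
    The first distribution step is a simple greedy loop; a sketch of the idea (array names are hypothetical):

      import numpy as np

      def allocate_units(density, n_units):
          """Hand out n_units unit icons one at a time, each to the
          gridpoint with the largest remaining (not yet represented)
          force density (greedy sketch of DCOR's first step)."""
          remaining = np.asarray(density, dtype=float).copy()
          counts = np.zeros(remaining.shape, dtype=int)
          unit_mass = remaining.sum() / n_units   # density 'cost' per icon
          for _ in range(n_units):
              i = np.argmax(remaining)            # flat index of the peak
              counts.flat[i] += 1
              remaining.flat[i] -= unit_mass
          return counts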

  9. SutraPlot, a graphical post-processor for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Souza, W.R.

    1999-01-01

    This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.

  10. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  11. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…

  12. Weakly nonlinear models for turbulent mixing in a plane mixing layer

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Morris, Philip J.

    1992-01-01

    New closure models for turbulent free shear flows are presented in this paper. They are based on a weakly nonlinear theory with a description of the dominant large-scale structures as instability waves. Two models are presented that describe the evolution of the free shear flows in terms of the time-averaged mean flow and the dominant large-scale turbulent structure. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models have been applied to the study of an incompressible mixing layer. For both models, predictions of the mean flow developed are made. In the second model, predictions of the time-dependent motion of the large-scale structures in the mixing layer are made. The predictions show good agreement with experimental observations.

  13. LIDAR OBSERVATIONS OF MIXED LAYER DYNAMICS: TESTS OF PARAMETERIZED ENTRAINMENT MODELS OF MIXED LAYER GROWTH RATE

    EPA Science Inventory

    Lidar measurements of the atmospheric boundary layer height, the entrainment zone, wind speed and direction, ancillary temperature profiles and surface flux data were used to test current parameterized entrainment models of mixed layer growth rate. Six case studies under clear ai...

  14. A Novel Graphical User Interface for High-Efficacy Modeling of Human Perceptual Similarity Opinions

    SciTech Connect

    Kress, James M; Xu, Songhua; Tourassi, Georgia

    2013-01-01

    We present a novel graphical user interface (GUI) that facilitates high-efficacy collection of perceptual similarity opinions of a user in an effective and intuitive manner. The GUI is based on a hybrid mechanism that combines ranking and rating. Namely, it presents a base image for rating its similarity to seven peripheral images that are displayed simultaneously following a circular layout. The user is asked to report the base image's pairwise similarity to each peripheral image on a fixed scale while preserving the relative ranking among all peripheral images. The collected data are then used to predict the user's subjective opinions regarding the perceptual similarity of images. We tested this new approach against two methods commonly used in perceptual similarity studies: (1) a ranking method that presents triplets of images for selecting the image pair with the highest internal similarity and (2) a rating method that presents pairs of images for rating their relative similarity on a fixed scale. We aimed to determine which data collection method was the most time efficient and effective for predicting a user's perceptual opinions regarding the similarity of mammographic masses. Our study was conducted with eight individuals. By using the proposed GUI, we were able to derive individual user profiles that were 41.4% to 46.9% more accurate than those derived with the other two data collection GUIs. The accuracy improvement was statistically significant.

  15. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment under rapid flows passes through several states that are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers, which are needed to predict the global erosion phenomena accurately from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
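
    The constitutive piece is compact; a sketch of a Papanastasiou-regularized Herschel-Bulkley effective viscosity with illustrative parameter values (not those calibrated in the paper):

      import numpy as np

      def hbp_viscosity(gamma_dot, K=1.0, n=0.6, tau_y=10.0, m=100.0):
          """Effective viscosity mu = K*gamma_dot^(n-1)
          + tau_y*(1 - exp(-m*gamma_dot))/gamma_dot: power-law shear
          thinning plus an exponentially regularized yield stress that
          stays finite as the shear rate goes to zero."""
          g = np.maximum(np.asarray(gamma_dot, dtype=float), 1e-12)
          return K * g**(n - 1.0) + tau_y * (1.0 - np.exp(-m * g)) / g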

  16. Mixing by barotropic instability in a nonlinear model

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Chen, Ping

    1994-01-01

    A global, nonlinear, equivalent barotropic model is used to study the isentropic mixing of passive tracers by barotropic instability. Basic states are analytical zonal-mean jets representative of the zonal-mean flow in the upper stratosphere, where the observed 4-day wave is thought to be a result of barotropic, and possibly baroclinic, instability. As is known from previous studies, the phase speed and growth rate of the unstable waves are fairly sensitive to the shape of the zonal-mean jet, and the dominant wave mode at saturation is not necessarily the fastest-growing mode; the unstable modes nevertheless share many features of the observed 4-day wave. Lagrangian trajectories computed from model winds are used to characterize the mixing by the flow. For profiles with both midlatitude and polar modes, mixing is stronger in midlatitudes than inside the vortex, but there is little exchange of air across the vortex boundary. There is a minimum in the Lyapunov exponents of the flow and in the particle dispersion at the jet maximum. For profiles with only polar unstable modes, there is weak mixing inside the vortex, no mixing outside the vortex, and no exchange of air across the vortex boundary. These results support the theoretical arguments that, whether wave disturbances are generated by local instability or propagate from other regions, the mixing properties of the total flow are determined by the locations of the wave critical lines, and that strong gradients of potential vorticity are very resistant to mixing.

  17. New mixing angles in the left-right symmetric model

    NASA Astrophysics Data System (ADS)

    Kokado, Akira; Saito, Takesi

    2015-12-01

    In the left-right symmetric model, neutral gauge fields are characterized by three mixing angles θ_12, θ_23, θ_13 between the three gauge fields B_μ, W³_Lμ, W³_Rμ, which produce the mass eigenstates A_μ, Z_μ, Z'_μ when G = SU(2)_L × SU(2)_R × U(1)_{B-L} × D is spontaneously broken down to U(1)_em. We find a new mixing angle θ', which corresponds to the Weinberg angle θ_W of the standard model with its SU(2)_L × U(1)_Y gauge symmetry, from these mixing angles. It is then shown that any mixing angle θ_ij can be expressed in terms of ε and θ', where ε = g_L/g_R is the ratio of the running left-right gauge coupling strengths. We observe that the light gauge bosons are described by θ' only, whereas the heavy gauge bosons are described by the two parameters ε and θ'.

  18. Evaluation of rural-air-quality simulation models. Addendum B: graphical display of model performance using the Clifty Creek data base

    SciTech Connect

    Cox, W.M.; Moss, G.K.; Tikvart, J.A.; Baldridge, E.

    1985-08-01

    The addendum uses a variety of graphical formats to display and compare the performance of four rural models using the Clifty Creek data base. The four models included MPTER (EPA), PPSP (Martin Marietta Corp.), MPSDM (ERT), and TEM-8A (Texas Air Control Board). Graphic displays were developed and used for both operational evaluation and diagnostic evaluation purposes. Plots of bias of the average vs station downwind distance by stability and wind-speed class revealed clear patterns of accentuated underprediction and overprediction for stations closer to the source. PPSP showed a tendency for decreasing overprediction with increasing station distance for all meteorological subsets while the other three models showed varying patterns depending on the meteorological class. Diurnal plots of the bias of the average vs hour of the day revealed a pattern of underestimation during the nocturnal hours and overestimation during hours of strong solar radiation with MPSDM and MPTER showing the least overall bias throughout the day.

  19. Graphic engine resource management

    NASA Astrophysics Data System (ADS)

    Bautin, Mikhail; Dwarakinath, Ashok; Chiueh, Tzi-cker

    2008-01-01

    Modern consumer-grade 3D graphics cards boast computation/memory resources that can easily rival or even exceed those of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their enormous computational power has attracted developers to port an increasing number of scientific computation programs to them, including matrix computation, collision detection, cryptography, database sorting, etc. As more and more applications run on 3D graphics cards, there is a need to allocate the computation/memory resources on these cards among the sharing applications more fairly and efficiently. In this paper, we describe the design, implementation and evaluation of a Graphics Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of its demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on dynamically collected statistics, and controls each process's GPU command production rate through its CPU scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal GPU time consumption difference among concurrent GPU processes consistently below 5% for a variety of application mixes.
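
    For context on the Deficit Round Robin policy that GERM builds on, here is a minimal, generic DRR sketch over per-process command queues; the queue layout, cost estimates, and function name are illustrative assumptions, not GERM's implementation.

      from collections import deque

      def deficit_round_robin(queues, quantum, rounds):
          """Toy DRR pass over per-process GPU command queues.

          queues  : dict mapping process id -> deque of (cmd, est_cost) pairs
          quantum : GPU-time credit granted to each backlogged process per round
          """
          deficit = {pid: 0.0 for pid in queues}
          dispatched = []
          for _ in range(rounds):
              for pid, q in queues.items():
                  if not q:
                      deficit[pid] = 0.0  # idle processes accumulate no credit
                      continue
                  deficit[pid] += quantum
                  # Issue command groups while the process has enough credit.
                  while q and q[0][1] <= deficit[pid]:
                      cmd, cost = q.popleft()
                      deficit[pid] -= cost
                      dispatched.append((pid, cmd))
          return dispatched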

  20. Computer aided graphics simulation modelling using seismogeologic approach in sequence stratigraphy of Early Cretaceous Punjab platform, Central Indus Basin, Pakistan

    SciTech Connect

    Qureshi, T.M.; Khan, K.A.

    1996-08-01

    Modelling stratigraphic sequences with a seismo-geologic approach, integrated with cyclic transgressive-regressive deposits, helps identify a number of subtle non-structural traps. Most of the hydrocarbons found in the Early Cretaceous of the Central Indus Basin pertain to structural entrapments of upper transgressive sands. A few wells are producing from middle and basal regressive sands, but the massive regressive sands have not been tested so far. The possibility of stratigraphic traps such as wedging or pinch-outs, lateral gradation, uplift, truncation and overlapping of reservoir rocks is quite promising. The natural basin physiography has at times been modified by extensional episodic events into tectono-morphic terrain. Thus, seismo-scanning of tectonically controlled sedimentation might delineate some subtle stratigraphic traps. Amplitude maps representing stratigraphic sequences are generated to identify the traps. Seismic expressions indicate the reservoir quality in terms of amplitude increase or decrease. The data are modelled on computer using graphics simulation techniques.

  1. Simplified renormalizable T' model for tribimaximal mixing and Cabibbo angle

    NASA Astrophysics Data System (ADS)

    Frampton, Paul H.; Kephart, Thomas W.; Matsuzaki, Shinya

    2008-10-01

    In a simplified renormalizable model where the neutrinos have Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixings tan²θ_12 = 1/2, θ_13 = 0, θ_23 = π/4, and with flavor symmetry T', there is a corresponding prediction that the quarks have Cabibbo-Kobayashi-Maskawa (CKM) mixings tan 2Θ_12 = √2/3, Θ_13 = 0, Θ_23 = 0.
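
    As a quick numeric check of the quoted CKM relation (using the reading tan 2Θ_12 = √2/3 reconstructed above), the implied angle lands near the measured Cabibbo angle of about 13°:

      import math

      # Cabibbo angle implied by tan(2*Theta_12) = sqrt(2)/3
      # (illustrative check of the reconstructed relation only).
      theta_12 = 0.5 * math.atan(math.sqrt(2.0) / 3.0)
      print(math.degrees(theta_12))  # ~12.6 degrees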

  2. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  3. Generalized Dynamic Factor Models for Mixed-Measurement Time Series

    PubMed Central

    Cui, Kai; Dunson, David B.

    2013-01-01

    In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody’s rated firms from 1982–2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133

  4. Generalized Dynamic Factor Models for Mixed-Measurement Time Series.

    PubMed

    Cui, Kai; Dunson, David B

    2014-02-12

    In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody's rated firms from 1982-2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133

  5. Analyzing Mixed-Dyadic Data Using Structural Equation Models

    ERIC Educational Resources Information Center

    Peugh, James L.; DiLillo, David; Panuzio, Jillian

    2013-01-01

    Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…

  6. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    ERIC Educational Resources Information Center

    Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…

  7. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  8. Generation Mixing in the Sakata-Nagoya Model

    NASA Astrophysics Data System (ADS)

    Nishijima, K.

    The Sakata model as combined with the SU(3) symmetry served in introducing the idea of the fundamental triplet in particle physics. In the Nagoya model the correspondence between baryons and leptons was emphasized and was exploited later in forming the concept of generation and generation-mixing.

  9. Model for compound formation during ion-beam mixing

    SciTech Connect

    Desimoni, J.; Traverse, A.

    1993-11-01

    We propose an ion-beam-mixing model that accounts for compound formation at a boundary between two materials during ion irradiation. It is based on Fick's law together with a chemical driving force in order to simulate the chemical reaction at the boundary. The behavior of the squared thickness of the mixed layer, X², with the irradiation fluence, Φ, has been found in several mixing experiments to be either quadratic (X² ∝ Φ²) or linear (X² ∝ Φ), a result which is qualitatively reproduced. Depending on the fluence range, compound formation or diffusion is the limiting process of the mixing kinetics. A criterion is established in terms of the ratio of the irradiation-induced diffusion coefficient D to the square of the chemical reaction rate, which allows us to predict quadratic or linear behavior. When diffusion is the limiting process, D is enhanced by a factor which accounts for the formation of a compound in the mixed layer. Good agreement is found between the calculated mixing rates and data taken from mixing experiments on metal/Si bilayers.
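
    To make the two limiting regimes concrete, here is a toy linear-parabolic growth law that interpolates between reaction-limited (X ∝ Φ, so X² ∝ Φ²) and diffusion-limited (X² ∝ Φ) kinetics; the functional form and symbols are illustrative assumptions in the spirit of the model, not the authors' equations.

      import numpy as np

      def mixed_layer_thickness(phi, k_reaction, d_eff):
          """Toy growth law: solves X**2/d_eff + X/k_reaction = phi.

          Low fluence:  X ~ k_reaction * phi   (reaction-limited, X**2 prop. to phi**2)
          High fluence: X ~ sqrt(d_eff * phi)  (diffusion-limited, X**2 prop. to phi)
          """
          phi = np.asarray(phi, dtype=float)
          a = 1.0 / d_eff
          b = 1.0 / k_reaction
          # Positive root of a*X**2 + b*X - phi = 0
          return (-b + np.sqrt(b * b + 4.0 * a * phi)) / (2.0 * a)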

  10. Modeling the iron cycling in the mixed layer

    NASA Astrophysics Data System (ADS)

    Weber, L.; Voelker, C.; Schartau, M.; Wolf-Gladrow, D.

    2003-04-01

    We present a comprehensive model of the iron cycling within the mixed layer of the ocean, which predicts the time course of iron concentration and speciation. The speciation of iron within the mixed layer is heavily influenced by photochemistry, organic complexation, colloid formation and aggregation, as well as uptake and release by marine biota. The model is driven by mixed-layer dynamics, dust deposition and insolation, coupled to a simple ecosystem model (based on Schartau et al. 2001: Deep-Sea Res. II 48, 1769-1800), and applied to the site of the Bermuda Atlantic Time-series Study (BATS). Parameters in the model were chosen to reproduce the small number of available speciation measurements resolving a daily cycle. The model clearly reproduces the available Fe concentrations at the BATS station, but the annual balance of Fe fluxes at BATS is less constrained, owing to uncertainties in the model parameters. Hence we discuss the model's sensitivity to parameter uncertainties and which observations might help to better constrain the relevant model parameters. Further, we discuss how the most important model parameters are constrained by the data. The mixed-layer cycle in the model strongly influences the seasonality of primary production as well as the light dependency of photoreductive processes, and therefore controls iron speciation. Furthermore, short events within a day (e.g., heavy rain, changes of irradiance, intense dust deposition and temporary deepening of the mixed layer) may drive processes like colloidal aggregation. For this reason we compare two versions of the model: the first is forced by monthly averaged climatological variables, the second by daily climatological variability.

  11. USING GIS AND A GRAPHICAL USER INTERFACE TO MODEL LAND DEGRADATION

    EPA Science Inventory

    Geographic information systems (GIS) are increasingly being used to model land and ecosystem characteristics. This article describes the usefulness of GIS to model the susceptibility to desertification of arid lands, using climatic, soil, vegetative, and anthropogenic indicators. The GIS...

  12. A 3D Bubble Merger Model for RTI Mixing

    NASA Astrophysics Data System (ADS)

    Cheng, Baolian

    2015-11-01

    In this work we present a model for the merger processes of bubbles at the edge of an unstable, acceleration-driven mixing layer. Steady acceleration defines a self-similar mixing process, with a time-dependent inverse cascade of structures of increasing size. The time evolution is itself a renormalization-group evolution. The model predicts the growth rate of a Rayleigh-Taylor chaotic fluid-mixing layer. The 3-D model differs from the 2-D merger model in several important ways. Beyond the extension of the model to three dimensions, the model contains one phenomenological parameter, the variance of the bubble radii at fixed time. The model also predicts several experimental numbers: the bubble mixing rate, the mean bubble radius, and the bubble height separation at the time of merger. From these we also obtain the bubble height-to-radius aspect ratio, which is in good agreement with experiments. Applications to recent NIF and Omega experiments will be discussed. This work was performed under the auspices of the U.S. Department of Energy by the Los Alamos National Laboratory under Contract No. W-7405-ENG-36.
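
    For orientation, the growth rate in question is usually quoted against the self-similar bubble-front scaling h = α·A·g·t²; a one-line numeric illustration follows (the parameter values are assumptions, and this is the standard background scaling, not the merger model itself).

      # Self-similar Rayleigh-Taylor bubble-front scaling h = alpha * A * g * t**2
      alpha, A, g = 0.06, 0.5, 9.81  # growth coefficient, Atwood number, acceleration (illustrative)
      t = 2.0                        # seconds
      print(alpha * A * g * t * t)   # bubble penetration depth, ~1.18 m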

  13. Modeling and Analysis of Mixed Synchronous/Asynchronous Systems

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan

    2012-01-01

    Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.

  14. Fermion masses and mixing in the Δ(27) flavor model

    NASA Astrophysics Data System (ADS)

    Abbas, Mohammed; Khalil, Shaaban

    2015-03-01

    An extension of the Standard Model (SM) based on the non-Abelian discrete group Δ(27) is considered. The Δ(27) flavor symmetry is spontaneously broken only by gauge-singlet scalar fields; therefore our model is free from flavor-changing neutral currents (FCNC). We show that the model accounts simultaneously for the observed quark and lepton masses and their mixing. In the quark sector, we find that the up-quark mass matrix is flavor diagonal and the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix arises from the down quarks. In the lepton sector, we show that the charged-lepton mass matrix is almost diagonal. We also adopt a type-I seesaw mechanism to generate neutrino masses. A Maki-Nakagawa-Sakata (MNS) mixing matrix that deviates from tri-bimaximal, with a correlation between sin θ_13 and sin²θ_23, is illustrated.

  15. Quantifying the Strength and Delay of ENSOs Teleconnections with Graphical Models and a novel Partial Correlation Measure

    NASA Astrophysics Data System (ADS)

    Runge, J.; Petoukhov, V.; Kurths, J.

    2013-12-01

    The analysis of time delays using lagged cross-correlations is commonly used to gain insight into interaction mechanisms between climatological processes, and also to quantify the strength of a mechanism. ENSO's teleconnections, in particular, have been investigated with this approach. Here we critically evaluate how justified this method is, i.e., what aspect of a climatic mechanism such an inferred time lag actually measures. We find a strong dependence on serial correlation (autocorrelation), which can lead to misleading conclusions about the time delays and also obscures quantification of the interaction mechanism. To overcome these possible artifacts, we propose a two-step procedure based on the concept of graphical models, recently introduced to climate research. In the first step, graphical models are used to detect the existence of (Granger-)causal interactions, which determines the time delays of a mechanism. In the second step, a certain partial correlation is introduced that quantifies the strength of an interaction mechanism in a readily interpretable way, excluding the misleading effects of serial correlation as well as of more general dependencies. With this approach we find novel interpretations of the time delays and strengths of ENSO's teleconnections. The potential of the approach to quantify interactions among more than two variables is demonstrated by investigating the mechanism of the Walker circulation. [Figure caption: Overview of important teleconnections. Black dashed lines denote the regions used in the bivariate analyses; gray boxes show the three regions analyzed to study the Walker circulation. Arrows indicate direction, with gray shading roughly corresponding to the strength of the novel partial correlation measure; labels give the value and the time lag in months in brackets.]
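
    A generic illustration of the second step (not the authors' exact measure): a lagged partial correlation between X(t-τ) and Y(t) that conditions out Y's own past to control for autocorrelation. The function name, conditioning choice, and order p are assumptions.

      import numpy as np

      def lagged_partial_corr(x, y, lag, p=1):
          """Correlate x(t-lag) with y(t) after regressing y's own past
          values y(t-1..t-p) out of both series (controls for the
          autocorrelation artifact discussed above)."""
          n = len(y)
          t0 = max(lag, p)
          target_y = y[t0:]
          target_x = x[t0 - lag:n - lag]
          # Conditioning set: intercept plus p past values of y
          Z = np.column_stack([np.ones(n - t0)] +
                              [y[t0 - k:n - k] for k in range(1, p + 1)])

          def resid(v):
              beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
              return v - Z @ beta

          rx, ry = resid(target_x), resid(target_y)
          return np.corrcoef(rx, ry)[0, 1]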

  16. Computer modeling of ORNL storage tank sludge mobilization and mixing

    SciTech Connect

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of computer modeling of the mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate the mixing times required to approach homogeneity of the contents of the tanks.

  17. Photoionized Mixing Layer Models of the Diffuse Ionized Gas

    NASA Astrophysics Data System (ADS)

    Binette, Luc; Flores-Fajardo, Nahiely; Raga, Alejandro C.; Drissen, Laurent; Morisset, Christophe

    2009-04-01

    It is generally believed that O stars, confined near the galactic midplane, are somehow able to photoionize a significant fraction of what is termed the "diffuse ionized gas" (DIG) of spiral galaxies, which can extend up to 1-2 kpc above the galactic midplane. The heating of the DIG remains poorly understood, however, as simple photoionization models reproduce well neither the observed line-ratio correlations nor the DIG temperature. We present turbulent mixing layer (TML) models in which warm photoionized condensations are immersed in a hot supersonic wind. Turbulent dissipation and mixing generate an intermediate region where the gas is accelerated, heated, and mixed. The emission spectrum of such layers is compared with Rand's observations of the DIG in the edge-on spiral NGC 891. We generate two sequences of models that fit the line-ratio correlations among [S II]/Hα, [O I]/Hα, [N II]/[S II], and [O III]/Hβ reasonably well. In one sequence of models the hot wind velocity increases, while in the other the ionization parameter and layer opacity increase. Despite the success of the mixing layer models, the overall efficiency in reprocessing the stellar UV is much too low, much less than 1%, which compels us to reject the TML model in its present form.

  18. Mixed waste treatment model: Basis and analysis

    SciTech Connect

    Palmer, B.A.

    1995-09-01

    The Department of Energy's Programmatic Environmental Impact Statement (PEIS) required treatment system capacities for risk and cost calculation. Los Alamos was tasked with providing these capacities to the PEIS team. This involved understanding the Department of Energy (DOE) Complex waste, making the necessary changes to correct for problems, categorizing the waste for treatment, and determining the treatment system requirements. The treatment system requirements depended on the incoming waste, which varied for each PEIS case. The treatment system requirements also depended on the type of treatment that was desired. Because different groups contributing to the PEIS needed specific types of results, we provided the treatment system requirements in a variety of forms. In total, some 40 data files were created for the TRU cases, and for the MLLW case, there were 105 separate data files. Each data file represents one treatment case consisting of the selected waste from various sites, a selected treatment system, and the reporting requirements for such a case. The treatment system requirements in their most basic form are the treatment process rates for unit operations in the desired treatment system, based on a 10-year working life and 20-year accumulation of the waste. These results were reported in cubic meters and, for the MLLW case, in kilograms as well. The treatment system model consisted of unit operations that are linked together. Each unit operation's function depended on the input waste streams, waste matrix, and contaminants. Each unit operation outputs one or more waste streams whose matrix, contaminants, and volume/mass may have changed as a result of the treatment. These output streams are then routed to the appropriate unit operation for additional treatment until the output waste stream meets the treatment requirements for disposal. The total waste for each unit operation was calculated, as well as the waste for each matrix treated by the unit.

  19. Sensitivity of fine sediment source apportionment to mixing model assumptions

    NASA Astrophysics Data System (ADS)

    Cooper, Richard; Krueger, Tobias; Hiscock, Kevin; Rawlins, Barry

    2015-04-01

    Mixing models have become increasingly common tools for quantifying fine sediment redistribution in river catchments. The associated uncertainties may be modelled coherently and flexibly within a Bayesian statistical framework (Cooper et al., 2015). However, there is more than one way to represent these uncertainties, because the modeller has considerable leeway in making error assumptions and model structural choices. In this presentation, we demonstrate how different mixing model setups can impact upon fine sediment source apportionment estimates via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges and subsurface material) under base-flow conditions between August 2012 and August 2013 (Cooper et al., 2014). Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ~76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing prior parameter distributions, inclusion of covariance terms, incorporation of time-variant distributions and methods of proportion characterisation. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a popular least-squares optimisation approach. Our OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon fine sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model setup prior to conducting fine sediment source apportionment investigations.

  20. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-04-01

    We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  1. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-01-01

    We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  2. Modelling spherical explosions with turbulent mixing and post-detonation

    NASA Astrophysics Data System (ADS)

    Saurel, Richard; Huber, Gregory; Jourdan, Georges; Lapébie, Emmanuel; Munier, Laurent

    2012-11-01

    This paper addresses post-detonation modelling in spherical explosions. One of the challenges is thus related to modelling compressible turbulent mixing layers. A one-dimensional flow model is derived, consisting of a reduced two-phase compressible flow model with velocity drift. To reduce the number of model parameters, the stiff velocity relaxation limit is considered. A semi-discrete analysis is used, resulting in a specific artificial viscosity formulation embedded in the diffuse interface model of Kapila et al. [Phys. Fluids 13(10), 3002-3024 (2001)], 10.1063/1.1398042. Thanks to the velocity non-equilibrium model and the semi-discrete formulation, the model fulfils the second law of thermodynamics in the global sense and uses a single parameter. Multidimensional mixing layer effects occurring at gas-gas unstable interfaces are thus summarized as artificial viscosity effects. The model's predictions are compared against experimental measurements of mixing layer growth in shock tubes at moderate initial pressure ratios, as well as against fireball radius evolutions in air explosions at high initial pressure ratios. Also, pressure signals recorded at various stations are compared, showing excellent agreement for the leading shock wave as well as the secondary one. With the help of various experiments at low and high initial pressure ratios, estimates for the interpenetration parameter are given.

  3. Modeling of three-dimensional mixing and reacting ducted flows

    NASA Technical Reports Server (NTRS)

    Zelazny, S. W.; Baker, A. J.; Rushmore, W. L.

    1976-01-01

    A computer code, based upon a finite element solution algorithm, was developed to solve the governing equations for three-dimensional, reacting boundary region, and constant area ducted flow fields. Effective diffusion coefficients are employed to allow analyses of turbulent, transitional or laminar flows. The code was used to investigate mixing and reacting hydrogen jets injected from multiple orifices, transverse and parallel to a supersonic air stream. Computational results provide a three-dimensional description of velocity, temperature, and species-concentration fields downstream of injection. Experimental data for eight cases covering different injection conditions and geometries were modeled using mixing length theory (MLT). These results were used as a baseline for examining the relative merits of other mixing models. Calculations were made using a two-equation turbulence model (k+d) and comparisons were made between experiment and mixing length theory predictions. The k+d model shows only a slight improvement in predictive capability over MLT. Results of an examination of the effect of tensorial transport coefficients on mass and momentum field distribution are also presented. Solutions demonstrating the ability of the code to model ducted flows and parallel strut injection are presented and discussed.

  4. Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models

    NASA Astrophysics Data System (ADS)

    Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

    In this paper, we propose a set of diagrams to visualize software process reference models (PRMs). The diagrams, called dimods, are a combination of visual and process-modeling techniques such as rich pictures, mind maps, and IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The results of the evaluation show that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

  5. Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio

    NASA Astrophysics Data System (ADS)

    Hoffmann, Matthew Douglas

    Content-based Music Information Retrieval (MIR) systems seek to automatically extract meaningful information from musical audio signals. This thesis applies new and existing generative probabilistic models to several content-based MIR tasks: timbral similarity estimation, semantic annotation and retrieval, and latent source discovery and separation. In order to estimate how similar two songs sound to one another, we employ a Hierarchical Dirichlet Process (HDP) mixture model to discover a shared representation of the distribution of timbres in each song. Comparing songs under this shared representation yields better query-by-example retrieval quality and scalability than previous approaches. To predict what tags are likely to apply to a song (e.g., "rap," "happy," or "driving music"), we develop the Codeword Bernoulli Average (CBA) model, a simple and fast mixture-of-experts model. Despite its simplicity, CBA performs at least as well as state-of-the-art approaches at automatically annotating songs and finding the songs in a database to which a given tag most applies. Finally, we address the problem of latent source discovery and separation by developing two Bayesian nonparametric models, the Shift-Invariant HDP and Gamma Process NMF. These models allow us to discover what sounds (e.g., bass drums, guitar chords, etc.) are present in a song or set of songs and to isolate or suppress individual sources. These models' ability to decide how many latent sources are necessary to model the data is particularly valuable in this application, since it is impossible to guess a priori how many sounds will appear in a given song or set of songs. Once they have been fit to data, probabilistic models can also be used to drive the synthesis of new musical audio, both for creative purposes and to qualitatively diagnose what information a model does and does not capture. We also adapt the SIHDP model to create new versions of input audio with arbitrary sample sets, for example, to create
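
    As a generic illustration of latent source discovery on a magnitude spectrogram, here is plain NMF via scikit-learn; unlike the thesis's Bayesian nonparametric SIHDP and Gamma Process NMF models, this fixes the number of sources up front, and the data and shapes below are stand-in assumptions.

      import numpy as np
      from sklearn.decomposition import NMF

      # V: nonnegative magnitude spectrogram, shape (n_freq_bins, n_frames).
      # Random stand-in data here; in practice use an STFT magnitude.
      rng = np.random.default_rng(0)
      V = rng.random((513, 200))

      model = NMF(n_components=8, init="nndsvd", max_iter=500)
      W = model.fit_transform(V)  # (n_freq_bins, 8): spectral templates
      H = model.components_       # (8, n_frames):   per-frame activations

      # Rank-one reconstruction of latent source k, e.g. for isolation:
      k = 0
      V_k = np.outer(W[:, k], H[k, :])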

  6. Modeling and Simulation of Fluid Mixing Laser Experiments and Supernova

    SciTech Connect

    Glimm, James

    2008-06-24

    The three year plan for this project is to develop novel theories and advanced simulation methods leading to a systematic understanding of turbulent mixing. A primary focus is the comparison of simulation models (both Direct Numerical Simulation and subgrid averaged models) to experiments. The comprehension and reduction of experimental and simulation data are central goals of this proposal. We will model 2D and 3D perturbations of planar interfaces. We will compare these tests with models derived from averaged equations (our own and those of others). As a second focus, we will develop physics based subgrid simulation models of diffusion across an interface, with physical but no numerical mass diffusion. We will conduct analytic studies of mix, in support of these objectives. Advanced issues, including multiple layers and reshock, will be considered.

  7. Multikernel linear mixed models for complex phenotype prediction.

    PubMed

    Weissbrod, Omer; Geiger, Dan; Rosset, Saharon

    2016-07-01

    Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
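
    A minimal sketch of the multiple-kernel idea behind MKLMM: a fixed-weight combination of a linear and an RBF kernel inside kernel ridge regression. The weights, kernel choices, and function names are illustrative assumptions; MKLMM itself learns these quantities within a linear mixed model.

      import numpy as np

      def linear_kernel(A, B):
          return A @ B.T

      def rbf_kernel(A, B, gamma=0.1):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def multikernel_ridge(X_tr, y_tr, X_te, w=(0.5, 0.5), lam=1.0):
          """Kernel ridge regression with a convex combination of kernels."""
          K_tr = w[0] * linear_kernel(X_tr, X_tr) + w[1] * rbf_kernel(X_tr, X_tr)
          K_te = w[0] * linear_kernel(X_te, X_tr) + w[1] * rbf_kernel(X_te, X_tr)
          alpha = np.linalg.solve(K_tr + lam * np.eye(len(y_tr)), y_tr)
          return K_te @ alpha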

  8. Parallelization and improvements of the generalized born model with a simple sWitching function for modern graphics processors.

    PubMed

    Arthur, Evan J; Brooks, Charles L

    2016-04-15

    Two fundamental challenges of simulating biologically relevant systems are the rapid calculation of the energy of solvation and the trajectory length of a given simulation. The Generalized Born model with a Simple sWitching function (GBSW) addresses these issues by using an efficient approximation of Poisson-Boltzmann (PB) theory to calculate each solute atom's free energy of solvation, the gradient of this potential, and the subsequent forces of solvation without the need for explicit solvent molecules. This study presents a parallel refactoring of the original GBSW algorithm and its implementation on newly available, low-cost graphics chips with thousands of processing cores. Depending on the system size and nonbonded force cutoffs, the new GBSW algorithm offers speed increases of between one and two orders of magnitude over previous implementations while maintaining similar levels of accuracy. We find that much of the algorithm scales linearly with an increase of system size, which makes this water model cost-effective for solvating large systems. Additionally, we utilize our GPU-accelerated GBSW model to fold the model system chignolin, and in doing so we demonstrate that these speed enhancements now make folding studies of peptides, and potentially small proteins, accessible. PMID:26786647
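
    For background, here is a toy implementation of the classic Still-style Generalized Born pairwise energy that GB methods such as GBSW use to approximate PB theory; the charges, radii, and dielectric values are illustrative, and this is not the GBSW algorithm itself.

      import numpy as np

      def gb_solvation_energy(q, r, born_radii, eps_in=1.0, eps_out=80.0):
          """Still-style Generalized Born polar solvation energy (toy).

          q          : charges (e), shape (n,)
          r          : coordinates (Angstrom), shape (n, 3)
          born_radii : effective Born radii (Angstrom), shape (n,)
          """
          ke = 332.06371  # Coulomb constant, kcal*Angstrom/(mol*e^2)
          pref = -0.5 * ke * (1.0 / eps_in - 1.0 / eps_out)
          d2 = ((r[:, None, :] - r[None, :, :]) ** 2).sum(-1)
          RiRj = born_radii[:, None] * born_radii[None, :]
          # Still's effective distance interpolates between the Coulomb
          # limit (large separation) and the Born self-energy (i == j).
          f_gb = np.sqrt(d2 + RiRj * np.exp(-d2 / (4.0 * RiRj)))
          return pref * (q[:, None] * q[None, :] / f_gb).sum()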

  9. Graphic comparison of reserve-growth models for conventional oil and accumulation

    USGS Publications Warehouse

    Klett, T.R.

    2003-01-01

    The U.S. Geological Survey (USGS) periodically assesses crude oil, natural gas, and natural gas liquids resources of the world. The assessment procedure requires estimated recoverable oil and natural gas volumes (field size, cumulative production plus remaining reserves) in discovered fields. Because initial reserves are typically conservative, subsequent estimates increase through time as these fields are developed and produced. The USGS assessment of petroleum resources makes estimates, or forecasts, of the potential additions to reserves in discovered oil and gas fields resulting from field development, and it also estimates the potential fully developed sizes of undiscovered fields. The term 'reserve growth' refers to the commonly observed upward adjustment of reserve estimates. Because such additions are related to increases in the total size of a field, the USGS uses field sizes to model reserve growth. Future reserve growth in existing fields is a major component of remaining U.S. oil and natural gas resources and has therefore become a necessary element of U.S. petroleum resource assessments. Past and currently proposed reserve-growth models compared herein aid in the selection of a suitable set of forecast functions to provide an estimate of potential additions to reserves from reserve growth in the ongoing National Oil and Gas Assessment Project (NOGA). Reserve growth is modeled by construction of a curve that represents annual fractional changes of recoverable oil and natural gas volumes (for fields and reservoirs), which provides growth factors. Growth factors are used to calculate forecast functions, which are sets of field- or reservoir-size multipliers. Comparisons of forecast functions were made based on datasets used to construct the models, field type, modeling method, and length of forecast span. Comparisons were also made between forecast functions based on field-level and reservoir-level growth, and between forecast functions based on older

  10. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS

    EPA Science Inventory

    Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
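
    The standard dual-isotope, three-source linear mixing model reduces to a small linear system: two isotope mass-balance equations plus the constraint that the source fractions sum to one. A minimal sketch with made-up δ13C/δ15N signatures:

      import numpy as np

      # Columns: sources A, B, C; rows: d13C, d15N, sum-to-one constraint.
      # All signature values are made up for illustration.
      S = np.array([[-25.0, -20.0, -12.0],   # d13C of each source
                    [  4.0,   9.0,  14.0],   # d15N of each source
                    [  1.0,   1.0,   1.0]])  # fractions sum to 1
      mixture = np.array([-19.0, 9.5, 1.0])  # observed mixture signature

      fractions = np.linalg.solve(S, mixture)
      print(fractions)  # source contributions, ~[0.07, 0.77, 0.17]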

  11. The Worm Process for the Ising Model is Rapidly Mixing

    NASA Astrophysics Data System (ADS)

    Collevecchio, Andrea; Garoni, Timothy M.; Hyndman, Timothy; Tokarev, Daniel

    2016-07-01

    We prove rapid mixing of the worm process for the zero-field ferromagnetic Ising model, on all finite connected graphs, and at all temperatures. As a corollary, we obtain a fully-polynomial randomized approximation scheme for the Ising susceptibility, and for a certain restriction of the two-point correlation function.

  12. A Nonlinear Mixed Effects Model for Latent Variables

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.

    2009-01-01

    The nonlinear mixed effects model for continuous repeated measures data has become an increasingly popular and versatile tool for investigating nonlinear longitudinal change in observed variables. In practice, for each individual subject, multiple measurements are obtained on a single response variable over time or condition. This structure can be…

  13. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    EPA Science Inventory

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...

  14. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  15. Mixed Rasch Modeling of the Self-Rating Depression Scale

    ERIC Educational Resources Information Center

    Hong, Sehee; Min, Sae-Young

    2007-01-01

    In this study, mixed Rasch modeling was used on the Self-Rating Depression Scale (SDS), a widely used measure of depression, among a non-Western sample of 618 Korean college students. The results revealed three latent classes and confirmed the unidimensionality of the SDS. In addition, there was a significant effect for gender in terms of class…

  16. Historical development of stable isotope mixing models in ecology

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  17. Development of stable isotope mixing models in ecology - Dublin

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  18. Development of stable isotope mixing models in ecology - Fremantle

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  19. Development of stable isotope mixing models in ecology - Sydney

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  20. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  1. Development of stable isotope mixing models in ecology - Perth

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  2. The Worm Process for the Ising Model is Rapidly Mixing

    NASA Astrophysics Data System (ADS)

    Collevecchio, Andrea; Garoni, Timothy M.; Hyndman, Timothy; Tokarev, Daniel

    2016-09-01

    We prove rapid mixing of the worm process for the zero-field ferromagnetic Ising model, on all finite connected graphs, and at all temperatures. As a corollary, we obtain a fully-polynomial randomized approximation scheme for the Ising susceptibility, and for a certain restriction of the two-point correlation function.

  3. A graphical interface based model for wind turbine drive train dynamics

    SciTech Connect

    Manwell, J.F.; McGowan, J.G.; Abdulwahid, U.; Rogers, A.; McNiff, B.

    1996-12-31

    This paper presents a summary of a wind turbine drive train dynamics code that has been under development at the University of Massachusetts, under National Renewable Energy Laboratory (NREL) support. The code is intended to be used to assist in the proper design and selection of drive train components. This work summarizes the development of the equations of motion for the model, and discusses the method of solution. In addition, a number of comparisons with analytical solutions and experimental field data are given. The summary includes conclusions and suggestions for future work on the model. 13 refs., 10 figs.

  4. Statistical basis and outputs of stable isotope mixing models: Comment on Fry (2013)

    EPA Science Inventory

    A recent article by Fry (2013; Mar Ecol Prog Ser 472:1−13) reviewed approaches to solving underdetermined stable isotope mixing systems, and presented a new graphical approach and set of summary statistics for the analysis of such systems. In his review, Fry (2013) mis-characteri...

  5. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analyses for which analytic solutions are available only in simple, special cases. The GRASP methodology is a computer-simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.

  6. Unsupervised Estimation of Mouse Sleep Scores and Dynamics Using a Graphical Model of Electrophysiological Measurements.

    PubMed

    Yaghouby, Farid; O'Hara, Bruce F; Sunderam, Sridhar

    2016-06-01

    The proportion, number of bouts, and mean bout duration of different vigilance states (Wake, NREM, REM) are useful indices of dynamics in experimental sleep research. These metrics are estimated by first scoring state, sometimes using an algorithm, based on electrophysiological measurements such as the electroencephalogram (EEG) and electromyogram (EMG), and computing their values from the score sequence. Isolated errors in the scores can lead to large discrepancies in the estimated sleep metrics. But most algorithms score sleep by classifying the state from EEG/EMG features independently in each time epoch without considering the dynamics across epochs, which could provide contextual information. The objective here is to improve estimation of sleep metrics by fitting a probabilistic dynamical model to mouse EEG/EMG data and then predicting the metrics from the model parameters. Hidden Markov models (HMMs) with multivariate Gaussian observations and Markov state transitions were fitted to unlabeled 24-h EEG/EMG feature time series from 20 mice to model transitions between the latent vigilance states; a similar model with unbiased transition probabilities served as a reference. Sleep metrics predicted from the HMM parameters did not deviate significantly from manual estimates except for rapid eye movement sleep (REM) ([Formula: see text]; Wilcoxon signed-rank test). Changes in value from Light to Dark conditions correlated well with manually estimated differences (Spearman's rho 0.43-0.84) except for REM. HMMs also scored vigilance state with over 90% accuracy. HMMs of EEG/EMG features can therefore characterize sleep dynamics from EEG/EMG measurements, a prerequisite for characterizing the effects of perturbation in sleep monitoring and control applications. PMID:27121993
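
    A minimal sketch of the modeling step using the hmmlearn library: a three-state Gaussian HMM fitted to unlabeled feature epochs, with simple metrics computed from the decoded sequence. The feature layout, epoch length, and state-to-vigilance mapping are assumptions; the record's approach also predicts metrics directly from the fitted HMM parameters rather than from a decoded score sequence.

      import numpy as np
      from hmmlearn import hmm

      # X: EEG/EMG feature time series, shape (n_epochs, n_features),
      # e.g. band powers per 4-s epoch. Random stand-in data here.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(21600, 4))  # ~24 h of 4-s epochs (illustrative)

      # Three latent states (Wake, NREM, REM assumed) with Gaussian
      # observations and Markov transitions, fitted without labels.
      model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=50)
      model.fit(X)
      states = model.predict(X)

      # Sleep metrics from the decoded sequence: proportions and bout count.
      proportions = np.bincount(states, minlength=3) / len(states)
      n_bouts = int(np.count_nonzero(np.diff(states)) + 1)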

  7. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
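
    A homogeneous-mixing ODE skeleton of the kind described, with state variables for the susceptible fraction, infectious fraction, and empty space; every rate constant and term below is an illustrative assumption, not the authors' equations.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, gamma, b, mu = 0.8, 0.2, 0.3, 0.1  # infection, recovery, birth, death (made up)

      def rhs(t, y):
          s, i = y
          e = 1.0 - s - i                                 # empty space
          ds = b * e - beta * s * i + gamma * i - mu * s  # recovery returns to s (no immunity)
          di = beta * s * i - gamma * i - mu * i
          return [ds, di]

      sol = solve_ivp(rhs, (0.0, 200.0), [0.9, 0.01])
      s_end, i_end = sol.y[:, -1]  # long-run behaviour (equilibrium or cycles)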

  8. Logit-normal mixed model for Indian monsoon precipitation

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-09-01

    Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described, and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions; we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMMs and promotes their addition to the gamut of tools for analysis in studying climate phenomena.
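
    To make the logit-normal mixed-model structure concrete, here is a simulation sketch of rainfall occurrence with a station-level random intercept; all dimensions, covariates, and parameter values are assumptions for illustration.

      import numpy as np

      # P(rain at station j, day t) = logistic(x_jt' beta + u_j), u_j ~ N(0, sigma^2)
      rng = np.random.default_rng(1)
      n_stations, n_days = 30, 365
      beta = np.array([-1.0, 0.8])
      sigma = 0.5

      X = np.dstack([np.ones((n_stations, n_days)),
                     rng.normal(size=(n_stations, n_days))])  # intercept + one covariate
      u = rng.normal(0.0, sigma, size=n_stations)             # station random effects

      eta = X @ beta + u[:, None]     # linear predictor
      p = 1.0 / (1.0 + np.exp(-eta))  # logistic link
      rain = rng.binomial(1, p)       # simulated 0/1 rainfall occurrence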

  9. Mixed inflaton and spectator field models after Planck

    SciTech Connect

    Enqvist, Kari; Takahashi, Tomo

    2013-10-01

    We investigate the possibility that the primordial perturbation has two sources: the inflaton and a spectator field, which is not dynamically important during inflation but which after inflation can contribute to the curvature perturbation. We derive the constraints on the model by using recent Planck results on the spectral index, tensor-to-scalar ratio and nonlinearity parameters f_NL and τ_NL, for the cases with and without specifying the inflaton and spectator models. If one chooses the spectator to be the curvaton with a quadratic potential, non-Gaussianities can be computed and imply restrictions on possible values of the ratio of the spectator-to-inflaton power, R. We also consider a mixed curvaton and chaotic inflation model and show that even quartic chaotic inflation is still feasible in the context of mixed models, even with Planck data.

  10. Learning a Tracking and Estimation Integrated Graphical Model for Human Pose Tracking.

    PubMed

    Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-12-01

    We investigate the tracking of 2-D human poses in a video stream to determine the spatial configuration of body parts in each frame, but this is not a trivial task because people may wear different kinds of clothing and may move very quickly and unpredictably. The technology of pose estimation is typically applied, but it ignores the temporal context and cannot provide smooth, reliable tracking results. Therefore, we develop a tracking and estimation integrated model (TEIM) to fully exploit temporal information by integrating pose estimation with visual tracking. However, joint parsing of multiple articulated parts over time is difficult, because a full model with edges capturing all pairwise relationships within and between frames is loopy and intractable. In previous models, approximate inference was usually resorted to, but it cannot promise good results and the computational cost is large. We overcome these problems by exploring the idea of divide and conquer, which decomposes the full model into two much simpler tractable submodels. In addition, a novel two-step iteration strategy is proposed to efficiently conquer the joint parsing problem. Algorithmically, we design TEIM very carefully so that: 1) it enables pose estimation and visual tracking to compensate for each other to achieve desirable tracking results; 2) it is able to deal with the problem of tracking loss; and 3) it only needs past information and is capable of tracking online. Experiments are conducted on two public data sets in the wild with ground truth layout annotations, and the experimental results indicate the effectiveness of the proposed TEIM framework. PMID:25826809

  11. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.
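
    The oblique-shock treatment mentioned above reduces to the classical theta-beta-M relation plus the normal-shock jump conditions. The sketch below is textbook compressible-flow theory under the assumption of a calorically perfect gas with gamma = 1.4; it is not code from the NASA model or the legacy LArge Perturbation INlet code.

```python
# Hedged sketch of standard oblique-shock relations (theta-beta-M plus
# normal-shock jumps), not code from the inlet model itself.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def deflection(beta, M1):
    """Flow deflection angle theta for shock angle beta (radians)."""
    num = 2.0 / np.tan(beta) * (M1**2 * np.sin(beta)**2 - 1.0)
    den = M1**2 * (GAMMA + np.cos(2.0 * beta)) + 2.0
    return np.arctan(num / den)

def weak_shock_beta(theta, M1):
    """Shock angle for the weak oblique-shock solution."""
    mu = np.arcsin(1.0 / M1)  # Mach angle: the theta -> 0 limit
    # locate the maximum-deflection shock angle
    res = minimize_scalar(lambda b: -deflection(b, M1),
                          bounds=(mu, np.pi / 2), method="bounded")
    beta_max = res.x
    if theta > deflection(beta_max, M1):
        raise ValueError("shock detaches: theta exceeds theta_max")
    return brentq(lambda b: deflection(b, M1) - theta, mu + 1e-9, beta_max)

def post_shock_state(theta, M1):
    """Downstream Mach number and static pressure ratio across the shock."""
    beta = weak_shock_beta(theta, M1)
    M1n = M1 * np.sin(beta)  # normal component drives the jump conditions
    p_ratio = 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (M1n**2 - 1.0)
    M2n = np.sqrt((1.0 + 0.5 * (GAMMA - 1.0) * M1n**2)
                  / (GAMMA * M1n**2 - 0.5 * (GAMMA - 1.0)))
    return M2n / np.sin(beta - theta), p_ratio

M2, pr = post_shock_state(np.deg2rad(10.0), 2.5)
print(f"M2 = {M2:.3f}, p2/p1 = {pr:.3f}")  # roughly 2.09 and 1.86 here
```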

  12. Modelling of externally mixed particles in the atmosphere

    NASA Astrophysics Data System (ADS)

    Zhu, Shupeng; Sartelet, Karine; Seigneur, Christian

    2014-05-01

    Particles present in the atmosphere have significant impacts on climate as well as on human health. Thus, it is important to accurately simulate and forecast their concentrations. Most commonly used air quality models assume that particles are internally mixed, largely for computational reasons. However, this assumption is disproved by measurements, especially close to sources. In fact, the externally-mixed properties of particles are important for aerosol source identification, radiative effects and particle evolution. In this study, a new size-composition resolved aerosol model is developed. It can solve the aerosol dynamic evolution for external mixtures taking into account the processes of coagulation, condensation and nucleation. Both the size of particles and the mass fraction of each chemical compound are discretized. For a given particle size, particles of different chemical composition may co-exist. Aerosol dynamics is solved in each grid cell by splitting coagulation and condensation/evaporation-nucleation processes. For the condensation/evaporation, surface equilibrium between gas and aerosol is calculated based on ISORROPIA and the newly developed H2O (Hydrophilic/Hydrophobic Organic) Model. Because size and chemical composition sections evolve during condensation/evaporation, concentrations need to be redistributed on fixed sections after condensation/evaporation to be able to use the model in 3 dimensions. This is done based on the numerical scheme HEMEN, which was initially developed for size redistribution. Chemical components can be grouped into several aggregates to reduce computational cost. The 0D model is validated by comparison to results obtained for internally mixed particles and the effect of mixing is investigated for up to 31 species and 4 aggregates. The model will be integrated into the air quality modeling platform POLYPHEMUS to investigate its performance in modeling air quality by comparing with observations during the MEGAPOLI

  13. Application of large eddy interaction model to a mixing layer

    NASA Technical Reports Server (NTRS)

    Murthy, S. N. B.

    1989-01-01

    The large eddy interaction model (LEIM) is a statistical model of turbulence based on the interaction of selected eddies with the mean flow and all of the eddies in a turbulent shear flow. It can be utilized as the starting point for obtaining physical structures in the flow. The possible application of the LEIM to a mixing layer formed between two parallel, incompressible flows with a small temperature difference is developed by invoking a detailed similarity between the spectra of velocity and temperature.

  14. Overview of the Graphical User Interface for the GERMcode (GCR Event-Based Risk Model)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERMcode calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear-energy transfer (LET), range (R), and absorption in tissue equivalent material for a given Charge (Z), Mass Number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties are evaluated such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERMcode also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ion and nuclear secondaries are evaluated. The GERMcode accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERMcode for application to thick target experiments. The GERMcode provides scientists participating in NSRL experiments with the data needed for the interpretation of their

  16. Dynamic behaviours of mix-game model and its application

    NASA Astrophysics Data System (ADS)

    Gou, Cheng-Ling

    2006-06-01

    In this paper a minority game (MG) is modified by adding agents who play a majority game; such a game is referred to as a mix-game. The highlight of this model is that the two groups of agents in the mix-game have different bounded abilities to deal with historical information and to count their own performance. Through simulations, it is found that the local volatilities change considerably when agents who play the majority game are added to the MG, and that the change of local volatilities depends strongly on the combination of historical memories of the two groups. Furthermore, the underlying mechanisms for this finding are analysed. An example application of the mix-game model is also given.
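
    As a rough illustration of the setup (not the author's code), the sketch below runs a standard minority game in which one subgroup is instead rewarded for joining the majority, with the two groups given different memory lengths; all sizes and parameter values are invented.

```python
# Toy mix-game: minority-game agents plus a majority-game subgroup,
# with group-specific memory lengths. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N1, M1 = 201, 6    # group 1: minority-game agents, memory 6
N2, M2 = 100, 3    # group 2: majority-game agents, memory 3
S = 2              # strategies per agent

def make_group(n, m):
    # a strategy maps each of the 2**m possible histories to an action 0/1
    return {"strat": rng.integers(0, 2, size=(n, S, 2**m)),
            "score": np.zeros((n, S)), "m": m}

g1, g2 = make_group(N1, M1), make_group(N2, M2)
hist = rng.integers(0, 2, size=max(M1, M2)).tolist()
attendance = []

for _ in range(2000):
    keys, acts = [], []
    for g in (g1, g2):
        key = int("".join(map(str, hist[-g["m"]:])), 2)  # recent history as index
        best = g["score"].argmax(axis=1)                 # each agent's best strategy
        acts.append(g["strat"][np.arange(len(best)), best, key])
        keys.append(key)
    A = int(acts[0].sum() + acts[1].sum())               # how many chose action 1
    minority = 0 if A > (N1 + N2) / 2 else 1
    # virtual scoring: group 1 rewards the minority action, group 2 the opposite
    for g, key, target in ((g1, keys[0], minority), (g2, keys[1], 1 - minority)):
        g["score"] += (g["strat"][:, :, key] == target)
    hist.append(minority)
    attendance.append(A)

print("local volatility (last 500 steps):", np.var(attendance[-500:]))
```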

  17. Identity-by-Descent-Based Phasing and Imputation in Founder Populations Using Graphical Models

    PubMed Central

    Palin, Kimmo; Campbell, Harry; Wright, Alan F; Wilson, James F; Durbin, Richard

    2011-01-01

    Accurate knowledge of haplotypes, the combination of alleles co-residing on a single copy of a chromosome, enables powerful gene mapping and sequence imputation methods. Since humans are diploid, haplotypes must be derived from genotypes by a phasing process. In this study, we present a new computational model for haplotype phasing based on pairwise sharing of haplotypes inferred to be Identical-By-Descent (IBD). We apply the Bayesian network based model in a new phasing algorithm, called systematic long-range phasing (SLRP), that can capitalize on the close genetic relationships in isolated founder populations, and show with simulated and real genome-wide genotype data that SLRP substantially reduces the rate of phasing errors compared to previous phasing algorithms. Furthermore, the method accurately identifies regions of IBD, enabling linkage-like studies without pedigrees, and can be used to impute most genotypes with very low error rate. Genet. Epidemiol. 35:853-860, 2011. © 2011 Wiley Periodicals, Inc. PMID:22006673

  18. New models for hyperspectral anomaly detection and un-mixing

    NASA Astrophysics Data System (ADS)

    Bernhardt, M.; Heather, J. P.; Smith, M. I.

    2005-06-01

    It is now established that hyperspectral images of many natural backgrounds have statistics with fat tails. In spite of this, many of the algorithms used to process them appeal to the multivariate Gaussian model. In this paper we consider biologically motivated generative models that might explain the observed mixtures of vegetation in natural backgrounds. The degree to which these models match the observed fat-tailed distributions is investigated. Having shown how fat-tailed statistics arise naturally from the generative process, the models are put to work in new anomaly detection and un-mixing algorithms. The performance of these algorithms is compared with more traditional approaches.
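
    One simple generative mechanism that produces such fat tails is a scale mixture of Gaussians, in which the variance of pixel fluctuations itself varies across the scene. The paper's biologically motivated models are more specific; the toy sketch below only demonstrates how non-Gaussian kurtosis arises naturally from this kind of process.

```python
# Toy demonstration: a scale mixture of Gaussians is fat-tailed.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
# per-pixel variability scale (illumination, canopy texture, ...), gamma-distributed
sigma = rng.gamma(shape=2.0, scale=0.5, size=100_000)
x = sigma * rng.normal(size=sigma.size)  # Gaussian given sigma, fat-tailed overall

print("excess kurtosis:", kurtosis(x))   # clearly > 0; a Gaussian would give ~0
```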

  19. A New Mixed Model Based on the Velocity Structure Function

    NASA Astrophysics Data System (ADS)

    Brun, Christophe; Friedrich, Rainer; Da Silva, Carlos B.; Métais, Olivier

    We propose a new mixed model for Large-Eddy Simulation based on the 3D spatial velocity increment. This approach blends the non-linear properties of the Increment model (Brun & Friedrich (2001)) with the eddy-viscosity characteristics of the Structure Function model (Métais & Lesieur (1992)). The behaviour of this subgrid-scale model is studied both via a priori tests of a plane jet at Re_H = 3000 and via Large-Eddy Simulation of a round jet at Re_D = 25000. The approach describes both the forward and backward energy transfer encountered in transitional shear flows.

  20. An integrative C. elegans protein-protein interaction network with reliability assessment based on a probabilistic graphical model.

    PubMed

    Huang, Xiao-Tai; Zhu, Yuan; Chan, Leanne Lai Hang; Zhao, Zhongying; Yan, Hong

    2016-01-01

    In Caenorhabditis elegans, a large number of protein-protein interactions (PPIs) have been identified by different experiments. However, a comprehensive weighted PPI network, which is essential for signaling pathway inference, is not yet available in this model organism. Therefore, we first construct an integrative PPI network in C. elegans with 12,951 interactions involving 5039 proteins from seven molecular interaction databases. Then, a reliability score based on a probabilistic graphical model (RSPGM) is proposed to assess PPIs. It assumes that the random number of interactions between two proteins comes from a Bernoulli distribution, to avoid multi-links. The main parameter of the RSPGM score contains a few latent variables, which can be regarded as common properties shared by the two proteins. Validations on high-confidence yeast datasets show that RSPGM provides more accurate evaluation than other approaches, and the PPIs in the reconstructed PPI network have higher biological relevance than those in the original network in terms of gene ontology, gene expression, essentiality and the prediction of known protein complexes. Furthermore, this weighted integrative PPI network in C. elegans is also employed to infer the interaction path of the canonical Wnt/β-catenin pathway. Most genes on the inferred interaction path have been validated to be Wnt pathway components. Therefore, RSPGM is essential and effective for evaluating PPIs and inferring interaction paths. Finally, the PPI network with RSPGM scores can be queried and visualized on a user-interactive website, which is freely available online. PMID:26555698

  1. Graphics processing unit accelerated three-dimensional model for the simulation of pulsed low-temperature plasmas

    SciTech Connect

    Fierro, Andrew; Dickens, James; Neuber, Andreas

    2014-12-15

    A 3-dimensional particle-in-cell/Monte Carlo collision simulation that is fully implemented on a graphics processing unit (GPU) is described and used to determine low-temperature plasma characteristics at high reduced electric field, E/n, in nitrogen gas. Details of implementation on the GPU using the NVIDIA Compute Unified Device Architecture framework are discussed with respect to efficient code execution. The software is capable of tracking around 10 × 10^6 particles with dynamic weighting and a total mesh size larger than 10^8 cells. Verification of the simulation is performed by comparing the electron energy distribution function and plasma transport parameters to known Boltzmann Equation (BE) solvers. Under the assumption of a uniform electric field and neglecting the build-up of positive ion space charge, the simulation agrees well with the BE solvers. The model is utilized to calculate plasma characteristics of a pulsed, parallel plate discharge. A photoionization model provides the simulation with additional electrons after the initial seeded electron density has drifted towards the anode. Comparison of the performance benefits between the GPU-implementation versus a CPU-implementation is considered, and a speed-up factor of 13 for a 3D relaxation Poisson solver is obtained. Furthermore, a factor 60 speed-up is realized for parallelization of the electron processes.

  2. Salient and Non-Salient Fiducial Detection using a Probabilistic Graphical Model

    PubMed Central

    Benitez-Quiroz, C. Fabian; Rivera, Samuel; Gotardo, Paulo F.U.; Martinez, Aleix M.

    2013-01-01

    Deformable shape detection is an important problem in computer vision and pattern recognition. However, standard detectors are typically limited to locating only a few salient landmarks such as landmarks near edges or areas of high contrast, often conveying insufficient shape information. This paper presents a novel statistical pattern recognition approach to locate a dense set of salient and non-salient landmarks in images of a deformable object. We explore the fact that several object classes exhibit a homogeneous structure such that each landmark position provides some information about the position of the other landmarks. In our model, the relationship between all pairs of landmarks is naturally encoded as a probabilistic graph. Dense landmark detections are then obtained with a new sampling algorithm that, given a set of candidate detections, selects the most likely positions as to maximize the probability of the graph. Our experimental results demonstrate accurate, dense landmark detections within and across different databases. PMID:24187386

  3. Logit-normal mixed model for Indian Monsoon rainfall extremes

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-03-01

    Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described, and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, and daily minimum and maximum temperatures, and with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
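
    For concreteness, a logit-normal mixed model of this kind (Bernoulli response, Gaussian random intercept per weather station) can be fit by maximising a Gauss-Hermite quadrature approximation of the marginal likelihood. The sketch below uses simulated data and placeholder covariates; it is one of several possible estimation algorithms, not the authors' code.

```python
# Minimal logit-normal mixed model fit via Gauss-Hermite quadrature.
# Data, covariates and starting values are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

nodes, wts = np.polynomial.hermite.hermgauss(30)  # Gauss-Hermite rule

def neg_log_lik(params, y, X, groups):
    beta, sigma = params[:-1], np.exp(params[-1])
    eta = X @ beta
    ll = 0.0
    for g in np.unique(groups):
        m = groups == g
        # E over b ~ N(0, sigma^2): (1/sqrt(pi)) * sum_k w_k f(sqrt(2)*sigma*z_k)
        b = np.sqrt(2.0) * sigma * nodes
        p = expit(eta[m][:, None] + b[None, :])
        lik = np.prod(np.where(y[m][:, None] == 1, p, 1.0 - p), axis=0)
        ll += np.log(wts @ lik / np.sqrt(np.pi))
    return -ll

# simulated data: 5 stations, binary "extreme rainfall" indicator, one covariate
rng = np.random.default_rng(0)
groups = np.repeat(np.arange(5), 40)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
b_true = rng.normal(scale=0.8, size=5)
y = (rng.random(200) < expit(X @ np.array([-1.0, 0.5]) + b_true[groups])).astype(float)

fit = minimize(neg_log_lik, np.zeros(3), args=(y, X, groups), method="Nelder-Mead")
print("fixed effects:", fit.x[:-1], "random-intercept sd:", np.exp(fit.x[-1]))
```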

  4. Linear mixing model applied to AVHRR LAC data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a constrained least squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
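
    The unmixing step itself is a small constrained least-squares problem per pixel: find nonnegative fractions of vegetation, soil and shade that reproduce the observed reflectances and sum to one. A minimal sketch with invented end-member spectra (not the paper's values):

```python
# Constrained least-squares spectral unmixing sketch; spectra are made up.
import numpy as np
from scipy.optimize import nnls

# rows = three AVHRR reflective channels; columns = vegetation, soil, shade
E = np.array([[0.04, 0.18, 0.01],
              [0.45, 0.25, 0.02],
              [0.10, 0.30, 0.01]])
pixel = np.array([0.10, 0.30, 0.15])   # observed reflectances (made up)

w = 1e3                                 # soft weight enforcing sum-to-one
A = np.vstack([E, w * np.ones(3)])      # augment with a weighted ones row
b = np.append(pixel, w)
fractions, _ = nnls(A, b)               # nonnegativity is built into NNLS
print("vegetation/soil/shade fractions:", fractions, "sum:", fractions.sum())
```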

  5. Effects of microscopic diffusion and rotational mixing on stellar models

    NASA Astrophysics Data System (ADS)

    Chaboyer, Brian Charles

    1993-01-01

    We have calculated evolutionary tracks for halo stars and constructed isochrones with alpha-enhanced compositions which cover the entire globular cluster metallicity range and include the effects of the diffusion of He-4. We find that including the effects of helium diffusion has a negligible effect (less than 0.5 Gyr) on the derived ages of globular clusters. Regardless of the inclusion of helium diffusion, we find a significant age spread of 5 Gyr among the globular clusters. The oldest globular cluster studied was M92, with an age of 17 +/- 2 Gyr. The stellar models may be tested by comparing the Li-7 depletion and surface rotation rates to observations in young cluster stars. The observed Li-7 abundances clearly indicate that standard or diffusive models do not deplete enough Li-7. Instabilities induced by rotation provide an additional mixing mechanism. For this reason the stellar evolution code was modified to include the combined effects of diffusion and rotational mixing of H-1, He-4 and the trace elements He-3, Li-7 and Be-9. The calibrated solar models have a convection zone depth of 0.709-0.714 solar radius, in excellent agreement with the observed depth of (0.713 +/- 0.003) solar radius. The rotational mixing inhibits the diffusion in the outer parts of the models, leading to a decrease in the envelope diffusion by 50-80 percent. These models are able to reproduce the Li-7 abundances and rotation velocities observed in young cluster stars. Observations of Li-7 abundances in extremely metal-poor halo stars provide another test of the stellar models. Standard models do a good job of fitting the observed Li-7 abundances and predict a primordial Li-7 abundance of log N(Li) = 2.24 +/- 0.03. Models of hot stars which include microscopic diffusion, but not rotational mixing, deplete too much Li-7. The [Fe/H] = -2.28 stellar models which include both diffusion and rotational mixing provide an excellent match to the observations, and predict a primordial Li-7

  6. Combining sources in stable isotope mixing models: alternative methods.

    PubMed

    Phillips, Donald L; Newsome, Seth D; Gregg, Jillian W

    2005-08-01

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; and analysis of water sources for soils, plants, or water bodies; among many others. A common problem is having too many sources to allow a unique solution. We discuss two alternative procedures for addressing this problem. One option is to combine sources with similar signatures a priori, so the number of sources is small enough to provide a unique solution. Aggregation should be considered only when isotopic signatures of clustered sources are not significantly different, and sources are related so the combined source group has some functional significance. For example, in a food web analysis, lumping several species within a trophic guild allows more interpretable results than lumping disparate food sources, even if they have similar isotopic signatures. One result of combining mixing model sources is increased uncertainty of the combined end-member isotopic signatures and consequently of the source contribution estimates; this effect can be quantified using the IsoError model (http://www.epa.gov/wed/pages/models/isotopes/isoerror1_04.htm). As an alternative to lumping sources before a mixing analysis, the IsoSource mixing model (http://www.epa.gov/wed/pages/models/isosource/isosource.htm) can be used to find all feasible solutions of source contributions consistent with isotopic mass balance. While ranges of feasible contributions for each individual source can often be quite broad, contributions from functionally related groups of sources can be summed a posteriori, producing a range of solutions for the aggregate source that may be considerably narrower. A paleo-human dietary analysis example illustrates this method, which involves a terrestrial meat food source, a combination of three terrestrial plant foods, and a combination of three marine foods. In this case, a posteriori aggregation of sources allowed
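
    The underlying algebra is simple isotopic mass balance: with n sources and n - 1 isotope systems the fractions solve a linear system. The sketch below works a three-source, two-isotope example with invented delta values and then reads off an aggregated contribution a posteriori:

```python
# Isotopic mass-balance example; all delta values are illustrative.
import numpy as np

# columns: meat, terrestrial plants (aggregated), marine (aggregated)
d13C = [-21.0, -26.0, -14.0]
d15N = [6.0, 2.0, 15.0]
mix = [-22.0, 6.0]             # observed d13C, d15N of the consumer

A = np.array([d13C, d15N, [1.0, 1.0, 1.0]])   # two isotope balances + sum-to-one
b = np.array(mix + [1.0])
f = np.linalg.solve(A, b)
print(dict(zip(["meat", "plants", "marine"], f.round(3))))
print("plants + marine combined:", round(f[1] + f[2], 3))  # a posteriori sum
```

    With more sources than isotope systems the system is underdetermined, which is exactly the situation IsoSource addresses by enumerating all feasible fraction vectors rather than solving for a single one.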

  7. Model aerodynamic test results for two variable cycle engine coannular exhaust systems at simulated takeoff and cruise conditions. Comprehensive data report. Volume 3: Graphical data book 1

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.

    1981-01-01

    A graphical presentation of the aerodynamic data acquired during coannular nozzle performance wind tunnel tests is given. The graphical data consist of plots of nozzle gross thrust coefficient, fan nozzle discharge coefficient, and primary nozzle discharge coefficient. Normalized model component static pressure distributions are presented as a function of primary total pressure, fan total pressure, and ambient static pressure for selected operating conditions. In addition, the supersonic cruise configuration data include plots of nozzle efficiency and secondary-to-fan total pressure pumping characteristics. Supersonic and subsonic cruise data are given.

  8. Defining order and timing of mutations during cancer progression: the TO-DAG probabilistic graphical model

    PubMed Central

    Lecca, Paola; Casiraghi, Nicola; Demichelis, Francesca

    2015-01-01

    Somatic mutations arise and accumulate both during tumor genesis and progression. However, the order in which mutations occur is an open question, and inference of the temporal ordering at the gene level could potentially impact patient treatment. Thus, exploiting recent observations suggesting that the occurrence of mutations is a non-memoryless process, we developed a computational approach to infer timed oncogenetic directed acyclic graphs (TO-DAGs) from human tumor mutation data. Such graphs represent the path and the waiting times of alterations during tumor evolution. The probability of occurrence of each alteration in a path is the probability that the alteration occurs when all alterations prior to it have occurred. The waiting time between an alteration and the subsequent one is modeled as a stochastic function of the conditional probability of the event given the occurrence of the previous one. TO-DAG performance has been evaluated both on synthetic data and on somatic non-silent mutations from prostate cancer and melanoma patients and then compared with that of current well-established approaches. TO-DAG shows high performance scores on synthetic data and recognizes mutations in gatekeeper tumor suppressor genes as triggers for several downstream mutational events in the human tumor data. PMID:26528329
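
    A toy forward simulation conveys the generative picture: an alteration can occur only after all of its parents, and its waiting time is drawn with a rate tied to its conditional probability. The graph, the probabilities and the exponential rate law below are all invented for illustration; they are not the paper's fitted model.

```python
# Invented toy generative model in the spirit of a timed oncogenetic DAG.
import numpy as np

rng = np.random.default_rng(5)
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
cond_p = {"A": 0.6, "B": 0.4, "C": 0.3, "D": 0.5}  # P(event | parents occurred)

def sample_timeline():
    times = {}
    for g in ("A", "B", "C", "D"):                 # topological order
        if not all(p in times for p in parents[g]):
            continue                               # a parent never occurred
        if rng.random() > cond_p[g]:
            continue                               # this alteration never occurs
        start = max((times[p] for p in parents[g]), default=0.0)
        times[g] = start + rng.exponential(1.0 / cond_p[g])  # toy rate law
    return times

print(sample_timeline())  # one simulated mutation timeline
```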

  9. Rapid estimation of lives of deficient superpave mixes and laboratory-based accelerated mix testing models

    NASA Astrophysics Data System (ADS)

    Manandhar, Chandra Bahadur

    Engineers from the Kansas Department of Transportation (KDOT) often have to decide whether or not to accept non-conforming Superpave mixtures during construction. The first part of this study focused on estimating the lives of deficient Superpave pavements incorporating nonconforming Superpave mixtures. These criteria were based on Hamburg Wheel-Tracking Device (HWTD) test results and analysis. The second part of this study focused on developing accelerated mix testing models to considerably reduce test duration. To accomplish the first objective, nine fine-graded Superpave mixes of 12.5-mm nominal maximum aggregate size (NMAS) with asphalt grade PG 64-22 from six administrative districts of KDOT were selected. Specimens were prepared at three different target air void levels at N_design gyrations and four target simulated in-place density levels with the Superpave gyratory compactor. The average number of wheel passes to 20-mm rut depth, creep slope, stripping slope, and stripping inflection point in HWTD tests were recorded and then used in the statistical analysis. Results showed that, in general, higher simulated in-place density, up to a limit of 91% to 93%, results in a higher number of wheel passes to 20-mm rut depth in HWTD tests. A Superpave mixture with very low air voids at N_design (2%) performed very poorly in the HWTD test. HWTD tests were also performed on six 12.5-mm NMAS mixtures with air voids at N_design of 4% for six projects, a simulated in-place density of 93%, two temperature levels and five load levels with binder grades of PG 64-22, PG 64-28, and PG 70-22. Field cores of 150-mm diameter from three projects in three KDOT districts with 12.5-mm NMAS and asphalt grade PG 64-22 were also obtained and tested in the HWTD for model evaluation; these results trended as expected. Statistical analysis was performed and accelerated mix testing models were developed to determine the effect of increased temperature and load on the duration of

  10. Modeling of Low Feed-Through CD Mix Implosions

    NASA Astrophysics Data System (ADS)

    Pino, Jesse; MacLaren, Steven; Greenough, Jeff; Casey, Daniel; Dittrich, Tom; Kahn, Shahab; Kyrala, George; Ma, Tammy; Salmonson, Jay; Smalyuk, Vladimir; Tipton, Robert

    2015-11-01

    The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the National Ignition Facility. However, the previous implosions suffered from large instability growth seeded from perturbations on the outside of the capsule. Recently, the separated reactants technique has been applied to two platforms designed to minimize this feed-through and isolate local mix at the gas-ablator interface: the Two Shock (TS) and Adiabat-Shaped (AS) Platforms. Additionally, the background contamination of Deuterium in the gas has been greatly reduced, allowing for simultaneous observation of TT, DT, and DD neutrons, which respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations with both a Reynolds-Averaged Navier Stokes method and an enhanced diffusivity model. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-674867.

  11. Uncertainty in mixing models: a blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Delsman, J. R.; Oude Essink, G. H. P.

    2012-04-01

    Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few have addressed the associated uncertainty in much detail. This uncertainty stems from analytical error, from spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km² agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice aims to improve water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. The use of a GLUE-like framework for the end-member mixing analysis not only quantified the uncertainty associated with the analysis; the analysis of the posterior parameter set also identified catchment processes that would otherwise have been overlooked.
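
    In code, the GLUE-like procedure amounts to: perturb the end-member signatures within their assumed uncertainty, solve the mixing system, and keep only "behavioural" models whose fractions are physically feasible. The tracer values and uncertainty level below are invented, not the polder data.

```python
# GLUE-flavoured end-member mixing sketch; all numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
sample = np.array([216.0, 3.15])          # observed tracer pair in the mixture
em_mean = np.array([[900.0, 8.0],         # end-member 1: brackish seepage
                    [ 60.0, 2.5],         # end-member 2: river flushing water
                    [ 20.0, 1.0]])        # end-member 3: precipitation
em_sd = 0.15 * em_mean                    # assumed end-member uncertainty

accepted = []
for _ in range(20000):
    em = rng.normal(em_mean, em_sd)       # one candidate end-member realisation
    A = np.vstack([em.T, np.ones(3)])     # two tracer balances + mass balance
    b = np.append(sample, 1.0)
    f = np.linalg.solve(A, b)
    if np.all(f >= 0.0) and np.all(f <= 1.0):   # behavioural: feasible fractions
        accepted.append(f)

post = np.array(accepted)
print(len(post), "behavioural models")
print("fraction ranges:", post.min(axis=0).round(2), post.max(axis=0).round(2))
```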

  12. Class Evolution Tree: A Graphical Tool to Support Decisions on the Number of Classes in Exploratory Categorical Latent Variable Modeling for Rehabilitation Research

    ERIC Educational Resources Information Center

    Kriston, Levente; Melchior, Hanne; Hergert, Anika; Bergelt, Corinna; Watzke, Birgit; Schulz, Holger; von Wolff, Alessa

    2011-01-01

    The aim of our study was to develop a graphical tool that can be used in addition to standard statistical criteria to support decisions on the number of classes in explorative categorical latent variable modeling for rehabilitation research. Data from two rehabilitation research projects were used. In the first study, a latent profile analysis was…

  13. A Graphical Simulation of Vapor-Liquid Equilibrium for Use as an Undergraduate Laboratory Experiment and to Demonstrate the Concept of Mathematical Modeling.

    ERIC Educational Resources Information Center

    Whitman, David L.; Terry, Ronald E.

    1985-01-01

    Demonstrating petroleum engineering concepts in undergraduate laboratories often requires expensive and time-consuming experiments. To eliminate these problems, a graphical simulation technique was developed for junior-level laboratories which illustrate vapor-liquid equilibrium and the use of mathematical modeling. A description of this…

  14. WE-E-BRE-05: Ensemble of Graphical Models for Predicting Radiation Pneumonitis Risk

    SciTech Connect

    Lee, S; Ybarra, N; Jeyaseelan, K; El Naqa, I; Faria, S; Kopek, N

    2014-06-15

    Purpose: We propose a prior knowledge-based approach to construct an interaction graph of biological and dosimetric radiation pneumonitis (RP) covariates for the purpose of developing an RP risk classifier. Methods: We recruited 59 NSCLC patients who received curative radiotherapy with a minimum of 6 months' follow-up. 16 RP events were observed (CTCAE grade ≥2). Blood serum was collected from every patient before (pre-RT) and during RT (mid-RT). From each sample the concentrations of the following five candidate biomarkers were taken as covariates: alpha-2-macroglobulin (α2M), angiotensin converting enzyme (ACE), transforming growth factor β (TGF-β), interleukin-6 (IL-6), and osteopontin (OPN). Dose-volumetric parameters were also included as covariates. The number of biological and dosimetric covariates was reduced by a variable selection scheme implemented by L1-regularized logistic regression (LASSO). The posterior probability distribution of interaction graphs between the selected variables was estimated from the data under literature-based prior knowledge, weighting more heavily the graphs that contain the expected associations. A graph ensemble was formed by averaging the most probable graphs weighted by their posterior, creating a Bayesian Network (BN)-based RP risk classifier. Results: The LASSO selected the following 7 RP covariates: (1) pre-RT concentration level of α2M, (2) α2M level mid-RT/pre-RT, (3) pre-RT IL6 level, (4) IL6 level mid-RT/pre-RT, (5) ACE mid-RT/pre-RT, (6) PTV volume, and (7) mean lung dose (MLD). The ensemble BN model achieved a maximum sensitivity/specificity of 81%/84% and outperformed univariate dosimetric predictors, as shown by larger AUC values (0.78-0.81) compared with MLD (0.61), V20 (0.65) and V30 (0.70). The ensembles obtained by incorporating the prior knowledge improved classification performance for ensemble sizes of 5-50. Conclusion: We demonstrated a probabilistic ensemble method to detect robust associations between
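
    The variable-selection step is plain L1-regularised logistic regression. The sketch below runs it on simulated stand-ins for the biomarker and dose-volume covariates; the column names follow the abstract, but the data and the planted signal are invented.

```python
# LASSO-style covariate selection on simulated stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
cols = ["a2M_pre", "a2M_ratio", "IL6_pre", "IL6_ratio",
        "ACE_ratio", "PTV_volume", "MLD", "V20", "V30"]
X = rng.normal(size=(59, len(cols)))               # 59 patients, as in the study
logit = X[:, 6] + 0.8 * X[:, 2] - 0.6 * X[:, 4]    # toy signal on MLD, IL6, ACE
y = (rng.random(59) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(StandardScaler().fit_transform(X), y)
print("retained:", [c for c, w in zip(cols, lasso.coef_.ravel()) if w != 0.0])
```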

  15. An Adaptive Mixing Depth Model for an Industrialized Shoreline Area.

    NASA Astrophysics Data System (ADS)

    Dunk, Richard H.

    1993-01-01

    Internal boundary layer characteristics are often overlooked in atmospheric diffusion modeling applications but are essential for accurate air quality assessment. This study focuses on a unique air pollution problem that is partially resolved by representative internal boundary layer description and prediction. Emissions from a secondary non-ferrous smelter located adjacent to a large waterway, which is situated near a major coastal zone, became suspect in causing adverse air quality. In an effort to prove or disprove this allegation, "accepted" air quality modeling was performed. Predicted downwind concentrations indicated that the smelter plume was not responsible for causing regulatory standards to be exceeded. However, chronic community complaints continued to be directed toward the smelter facility. Further investigation into the problem revealed that complaint occurrences coincided with onshore southeasterly flows. Internal boundary layer development during onshore flow was assumed to produce a mixing depth conducive to plume trapping or fumigation. The preceding premise led to the utilization of estimated internal boundary layer depths for dispersion model input in an attempt to improve prediction accuracy. Monitored downwind ambient air concentrations showed that model predictions were still substantially lower than actual values. After analyzing the monitored values and comparing them with actual plume observations conducted during several onshore flow occurrences, the author hypothesized that the waterway could cause a damping effect on internal boundary layer development. This effective decrease in mixing depths would explain the abnormally high ambient air concentrations experienced during onshore flows. Therefore, a full-scale field study was designed and implemented to study the waterway's influence on mixing depth characteristics. The resultant data were compiled and formulated into an area-specific mixing depth model that can be adapted to

  16. Fermion masses and mixing in general warped extra dimensional models

    NASA Astrophysics Data System (ADS)

    Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel

    2015-06-01

    We analyze fermion masses and mixing in a general warped extra-dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor-breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor-symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution differs from five-dimensional anti-de Sitter space (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector become available.

  17. Extension of the stochastic mixing model to cumulonimbus clouds

    SciTech Connect

    Raymond, D.J.; Blyth, A.M. )

    1992-11-01

    The stochastic mixing model of cumulus clouds is extended to the case in which ice and precipitation form. A simple cloud microphysical model is adopted in which ice crystals and aggregates are carried along with the updraft, whereas raindrops, graupel, and hail are assumed to fall out immediately. The model is then applied to the 2 August 1984 case study of convection over the Magdalena Mountains of central New Mexico, with excellent results. The formation of ice and precipitation can explain the transition of this system from a cumulus congestus cloud to a thunderstorm. 28 refs.

  18. A Mixed-Culture Biofilm Model with Cross-Diffusion.

    PubMed

    Rahman, Kazi A; Sudarsan, Rangarajan; Eberl, Hermann J

    2015-11-01

    We propose a deterministic continuum model for mixed-culture biofilms. A crucial aspect is that movement of one species is affected by the presence of the other. This leads to a degenerate cross-diffusion system that generalizes an earlier single-species biofilm model. Two derivations of this new model are given. One, like cellular automata biofilm models, starts from a discrete in space lattice differential equation where the spatial interaction is described by microscopic rules. The other one starts from the same continuous mass balances that are the basis of other deterministic biofilm models, but it gives up a simplifying assumption of these models that has recently been criticized as being too restrictive in terms of ecological structure. We show that both model derivations lead to the same PDE model, if corresponding closure assumptions are introduced. To investigate the role of cross-diffusion, we conduct numerical simulations of three biofilm systems: competition, allelopathy and a mixed system formed by an aerobic and an anaerobic species. In all cases, we find that accounting for cross-diffusion affects local distribution of biomass, but it does not affect overall lumped quantities such as the total amount of biomass in the system. PMID:26582360
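
    Schematically, a two-species cross-diffusion system of the kind described couples the biomass fractions u and v through the flux of each equation (generic notation, not the paper's exact coefficients; the degeneracy means the D_ij vanish where the biomass vanishes):

```latex
\begin{aligned}
\partial_t u &= \nabla\cdot\big(D_{11}(u,v)\,\nabla u + D_{12}(u,v)\,\nabla v\big) + f(u,v),\\
\partial_t v &= \nabla\cdot\big(D_{21}(u,v)\,\nabla u + D_{22}(u,v)\,\nabla v\big) + g(u,v),
\end{aligned}
```

    so that the movement of each species depends on the gradients of both; setting D_12 = D_21 = 0 recovers two uncoupled single-species models of the earlier type.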

  19. Shell Model Depiction of Isospin Mixing in sd Shell

    SciTech Connect

    Lam, Yi Hua; Smirnova, Nadya A.; Caurier, Etienne

    2011-11-30

    We constructed a new empirical isospin-symmetry breaking (ISB) Hamiltonian in the sd (1s_1/2, 0d_5/2 and 0d_3/2) shell-model space. In this contribution, we present its application to two important case studies: (i) β-delayed proton emission from ^22Al and (ii) the isospin-mixing correction to superallowed 0^+ → 0^+ β-decay ft-values.

  20. Pricing turbo warrants under mixed-exponential jump diffusion model

    NASA Astrophysics Data System (ADS)

    Yu, Jianfeng; Xu, Weidong

    2016-06-01

    A turbo warrant is a special type of barrier option in which the rebate is calculated as another exotic option. In this paper, using Laplace transforms we obtain the valuation of turbo warrants under the mixed-exponential jump diffusion model, which is able to approximate any jump size distribution. Numerical Laplace inversion examples verify that the analytical solutions are accurate. The simulation results confirm the argument that jump risk should not be ignored in the valuation of turbo warrants.
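
    The valuation is obtained in Laplace space and must be inverted numerically. The sketch below demonstrates only that inversion step, on a transform with a known inverse, using mpmath's Talbot method rather than whatever inversion routine the authors used.

```python
# Numerical Laplace inversion demo on a transform with a known inverse.
import mpmath as mp

mp.mp.dps = 20                        # working precision in decimal digits
F = lambda s: 1 / (s + 1)             # Laplace transform of exp(-t)

for t in (0.5, 1.0, 2.0):
    approx = mp.invertlaplace(F, t, method="talbot")
    print(t, approx, mp.exp(-t))      # the two values agree to high precision
```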

  1. Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models

    PubMed Central

    Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.

    2013-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring the hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to the hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting that a greater volume of the system is flushed by flowing water at higher rates. The spatial and temporal distribution of tracer concentrations indicates the presence of both conduit-like and diffuse flow transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921
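
    The "statistical mixed models" here are of the standard random-effects kind. A minimal sketch of such a fit (random intercept per monitoring port, simulated data, invented column names) with statsmodels:

```python
# Random-intercept linear mixed model sketch on simulated hydraulic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_ports, n_obs = 12, 30
port = np.repeat(np.arange(n_ports), n_obs)
flow = np.tile(np.linspace(0.5, 5.0, n_obs), n_ports)        # imposed flow rate
port_effect = rng.normal(scale=0.4, size=n_ports)            # preferential paths
head = (1.0 + 0.3 * flow + port_effect[port]
        + rng.normal(scale=0.1, size=port.size))             # hydraulic response

df = pd.DataFrame({"head": head, "flow": flow, "port": port})
fit = smf.mixedlm("head ~ flow", df, groups=df["port"]).fit()
print(fit.summary())   # fixed flow effect plus between-port variance component
```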

  2. Modeling and diagnosing interface mix in layered ICF implosions

    NASA Astrophysics Data System (ADS)

    Weber, C. R.; Berzak Hopkins, L. F.; Clark, D. S.; Haan, S. W.; Ho, D. D.; Meezan, N. B.; Milovich, J. L.; Robey, H. F.; Smalyuk, V. A.; Thomas, C. A.

    2015-11-01

    Mixing at the fuel-ablator interface of an inertial confinement fusion (ICF) implosion can arise from an unfavorable in-flight Atwood number between the cryogenic DT fuel and the ablator. High-Z dopant is typically added to the ablator to control the Atwood number, but recent high-density carbon (HDC) capsules have been shot at the National Ignition Facility (NIF) without this added dopant. Highly resolved post-shot modeling of these implosions shows that there was significant mixing of ablator material into the dense DT fuel. This mix lowers the fuel density and results in less overall compression, helping to explain the measured ratio of down scattered-to-primary neutrons. Future experimental designs will seek to improve this issue through adding dopant and changing the x-ray spectra with a different hohlraum wall material. To test these changes, we are designing an experimental platform to look at the growth of this mixing layer. This technique uses side-on radiography to measure the spatial extent of an embedded high-Z tracer layer near the interface. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  3. Effects of mixing in threshold models of social behavior

    NASA Astrophysics Data System (ADS)

    Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
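
    The model lends itself to a compact simulation. The sketch below implements one plausible reading of the update rule (asynchronous threshold updates on a periodic square lattice, with a tunable probability that a step instead swaps two randomly chosen agents as the mixing move); the parameter values are arbitrary.

```python
# Threshold model with mixing: one plausible reading, arbitrary parameters.
import numpy as np

rng = np.random.default_rng(0)
L, p_mix, steps = 50, 0.3, 200_000
thresh = rng.random((L, L))                      # personal thresholds
state = (rng.random((L, L)) < 0.1).astype(int)   # sparse initial adopters

for _ in range(steps):
    if rng.random() < p_mix:                     # mixing: swap two whole agents
        i1, j1 = rng.integers(0, L, 2)
        i2, j2 = rng.integers(0, L, 2)
        for arr in (state, thresh):
            arr[i1, j1], arr[i2, j2] = arr[i2, j2], arr[i1, j1]
    else:                                        # asynchronous threshold update
        i, j = rng.integers(0, L, 2)
        active = (state[(i + 1) % L, j] + state[(i - 1) % L, j]
                  + state[i, (j + 1) % L] + state[i, (j - 1) % L])
        state[i, j] = int(active / 4.0 >= thresh[i, j])

print("final adoption fraction:", state.mean())
```

    Sweeping p_mix in this toy version is one way to see the paper's point that finite mixing rates push the system toward its "ground state" equilibrium.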

  4. Mixing during intravertebral arterial infusions in an in vitro model.

    PubMed

    Lutz, Robert J; Warren, Kathy; Balis, Frank; Patronas, Nicholas; Dedrick, Robert L

    2002-06-01

    Regional delivery of drugs can offer a pharmacokinetic advantage in the treatment of localized tumors. One method of regional delivery is by intra-arterial infusion into the basilar/vertebral artery network that provides local access to infratentorial tumors, which are frequent locations of childhood brain cancers. Proper delivery of drug by infused solutions requires adequate mixing of the infusate at the site of infusion within the artery lumen. Our mixing studies with an in vitro model of the vertebral artery network indicate that streaming of drug solution is likely to occur at low, steady infusion rates of 2 ml/min. Streaming leads to maldistribution of drug to distal perfused brain regions and may result in toxic levels in some regions while concurrently yielding subtherapeutic levels in adjacent regions. According to our model findings, distribution to both brain hemispheres is not likely following infusion into a single vertebral artery even if the infusate is well-mixed at the infusion site. This outcome results from the unique fluid flow properties of two converging channels, which are represented by the left and right vertebral branches converging into the basilar. Fluid in the model remains stratified on the side of the basilar artery served by the infused vertebral artery. Careful thought and planning of the methods of intravertebral drug infusions for treating posterior fossa tumors are required to assure proper distribution of the drug to the desired tissue regions. Improper delivery may be responsible for some noted toxicities or for failure of the treatments. PMID:12164691

  5. Variable selection for semiparametric mixed models in longitudinal studies.

    PubMed

    Ni, Xiao; Zhang, Daowen; Zhang, Hao Helen

    2010-03-01

    We propose a double-penalized likelihood approach for simultaneous model selection and estimation in semiparametric mixed models for longitudinal data. Two types of penalties are jointly imposed on the ordinary log-likelihood: a roughness penalty on the nonparametric baseline function and a nonconcave shrinkage penalty on the linear coefficients to achieve model sparsity. Compared to existing estimating-equation-based approaches, our procedure provides valid inference for data that are missing at random, and will be more efficient if the specified model is correct. Another advantage of the new procedure is its easy computation for both regression components and variance parameters. We show that the double-penalized problem can be conveniently reformulated into a linear mixed model framework, so that existing software can be directly used to implement our method. For the purpose of model inference, we derive both frequentist and Bayesian variance estimates for the estimated parametric and nonparametric components. Simulation is used to evaluate and compare the performance of our method to the existing ones. We then apply the new method to a real data set from a lactation study. PMID:19397585

  6. Further considerations on modeling the sea breeze with a mixed-layer model

    NASA Technical Reports Server (NTRS)

    Anthes, R. A.; Keyser, D.; Deardorff, J. W.

    1982-01-01

    Mixed-layer models have been used to simulate low-level flows under a variety of situations, including flow over complex terrain and in the vicinity of coastal zones. The advantage of mixed-layer models compared to multilevel models is their simplicity and minimal computational requirements. A disadvantage is that the atmosphere above the mixed layer is not modeled explicitly and approximations pertaining to this layer become necessary. This paper examines five approximations for treating this upper layer for a simple sea-breeze circulation. Approximating the flow immediately above the mixed-layer height h by the mixed-layer velocity and using this velocity to advect potential temperature above h gives a better simulation of the sea breeze than the approximation used by Anthes et al. (1980), which neglected horizontal advection at this level.

  7. IMaGe: Iterative Multilevel Probabilistic Graphical Model for Detection and Segmentation of Multiple Sclerosis Lesions in Brain MRI.

    PubMed

    Subbanna, Nagesh; Precup, Doina; Arnold, Douglas; Arbel, Tal

    2015-01-01

    In this paper, we present IMaGe, a new, iterative two-stage probabilistic graphical model for the detection and segmentation of Multiple Sclerosis (MS) lesions. Our model includes two levels of Markov Random Fields (MRFs). At the bottom level, a regular grid voxel-based MRF identifies potential lesion voxels, as well as other tissue classes, using local and neighbourhood intensities and class priors. Contiguous voxels of a particular tissue type are grouped into regions. A higher, non-lattice MRF is then constructed, in which each node corresponds to a region, and edges are defined based on neighbourhood relationships between regions. The goal of this MRF is to evaluate the probability of candidate lesions, based on group intensity, texture and neighbouring regions. The inferred information is then propagated to the voxel-level MRF. This process of iterative inference between the two levels repeats as long as desired. The iterations suppress false positives and refine lesion boundaries. The framework is trained on 660 MRI volumes of MS patients enrolled in clinical trials from 174 different centres, and tested on a separate multi-centre clinical trial data set with 535 MRI volumes. All data consist of T1, T2, PD and FLAIR contrasts. In comparison to other MRF-based methods, including a traditional MRF, IMaGe is much more sensitive (with slightly better PPV). It outperforms its nearest competitor by around 20% when detecting very small lesions (3-10 voxels). This is a significant result, as such lesions constitute around 40% of the total number of lesions. PMID:26221699

  8. Development of consistent equivalent models by mixed-model search

    NASA Technical Reports Server (NTRS)

    Guo, X.; Stoica, A.; Zebulum, R.; Keymeulen, D.

    2003-01-01

    This paper introduces a new approach to the development of equivalent models. Models of various accuracy and simulation speed may be needed in different contexts of design and analysis, or within different simulators.

  9. Effects of Microscopic Diffusion and Rotational Mixing on Stellar Models

    NASA Astrophysics Data System (ADS)

    Chaboyer, Brian

    1994-02-01

    Evolutionary tracks and isochrones were calculated with alpha-enhanced compositions which cover the entire globular cluster metallicity range and include the effects of the diffusion of ^4He. Including the effects of helium diffusion has a negligible effect (< 0.5 Gyr) on the derived ages of globular clusters. Regardless of the inclusion of helium diffusion, a significant age spread of ~5 Gyr exists among the globular clusters. The oldest globular cluster studied was M92, with an age of 17 +/- 2 Gyr. The stellar models may be tested by comparing the Li depletion and surface rotation rates to observations in young cluster stars. The observed Li abundances clearly indicate that standard or diffusive models do not deplete enough Li. Instabilities induced by rotation provide an additional mixing mechanism. For this reason the stellar evolution code was modified to include the combined effects of diffusion and rotational mixing on ^1H, ^4He and the trace elements ^3He, ^6Li, ^7Li, and ^9Be. The calibrated solar models have a convection zone depth of 0.709-0.714 R_⊙, in excellent agreement with the observed depth of (0.713 +/- 0.003) R_⊙. The rotational mixing inhibits the diffusion in the outer parts of the models, leading to a decrease in the envelope diffusion by 30-50%. The combined models are able to simultaneously match the Li abundances observed in the Pleiades, UMaG, Hyades, NGC 752 and M67. They also match the observed rotation periods in the Hyades. However, these models are unable to explain the presence of the rapidly rotating G and K stars in the Pleiades. Observations of Li abundances in extremely metal-poor halo stars provide another test of the stellar models. All models which use Kurucz (1992) model atmospheres to determine the surface boundary conditions are unable to match the observed Li depletion in cool halo stars. Models which use the gray atmosphere approximation provide a much better fit to the data. Standard models do a good job

  10. A new unsteady mixing model to predict NO(x) production during rapid mixing in a dual-stage combustor

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1992-01-01

    An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A combustor design with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary zone product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the second stage. Numerical predictions using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used to study multispecies mixing as well as propane-air and hydrogen-air jet nonpremixed flames, and has been used to predict NO(x) production in the mixing region. Comparison with available experimental data shows good agreement, thereby providing validation of the mixing model. With this demonstration, this mixing model is ready to be implemented in conjunction with steady-state prediction methods and provide an improved engineering design analysis tool.

  11. A mixed system modeling two-directional pedestrian flows.

    PubMed

    Goatin, Paola; Mimault, Matthias

    2015-04-01

    In this article, we present a simplified model to describe the dynamics of two groups of pedestrians moving in opposite directions in a corridor. The model consists of a 2 x 2 system of conservation laws of mixed hyperbolic-elliptic type. We study the basic properties of the system to understand why and how bounded oscillations in numerical simulations arise. We show that the Lax-Friedrichs scheme ensures the invariance of the domain, and we investigate the existence of measure-valued solutions as the limit of a subsequence of approximate solutions. PMID:25811441
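
    For readers unfamiliar with the scheme mentioned above, the sketch below applies Lax-Friedrichs to a generic 2 x 2 system of conservation laws u_t + f(u)_x = 0. The pedestrian-flow flux of the paper is not reproduced here; the classical shallow-water flux stands in, purely to show the update rule.

```python
# Lax-Friedrichs sketch for a generic 2x2 system u_t + f(u)_x = 0.
# The shallow-water flux below is a stand-in, not the paper's model.
import numpy as np

def lax_friedrichs(u, flux, dx, dt, steps):
    for _ in range(steps):
        up = np.roll(u, -1, axis=1)   # u_{j+1} (periodic boundary)
        um = np.roll(u, 1, axis=1)    # u_{j-1}
        u = 0.5 * (up + um) - dt / (2 * dx) * (flux(up) - flux(um))
    return u

def shallow_water_flux(u, g=9.81):
    h, q = u                          # depth and momentum q = h*v
    return np.array([q, q**2 / h + 0.5 * g * h**2])

x = np.linspace(0, 1, 400)
h0 = np.where(x < 0.5, 2.0, 1.0)      # dam-break initial data
u0 = np.array([h0, np.zeros_like(x)])
u = lax_friedrichs(u0, shallow_water_flux, dx=x[1] - x[0], dt=2.5e-4, steps=400)
print("depth range after 0.1 s:", u[0].min(), u[0].max())
```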

  12. Modeling and Simulation of Fluid Mixing Laser Experiments and Supernova

    SciTech Connect

    James Glimm

    2009-06-04

    The three year plan for this project was to develop novel theories and advanced simulation methods leading to a systematic understanding of turbulent mixing. A primary focus is the comparison of simulation models (Direct Numerical Simulation (DNS), Large Eddy Simulations (LES), full two fluid simulations and subgrid averaged models) to experiments. The comprehension and reduction of experimental and simulation data are central goals of this proposal. We model 2D and 3D perturbations of planar or circular interfaces. We compare these tests with models derived from averaged equations (our own and those of others). As a second focus, we develop physics based subgrid simulation models of diffusion across an interface, with physical but no numerical mass diffusion. Multiple layers and reshock are considered here.

  13. Intercomparison of garnet barometers and implications for garnet mixing models

    SciTech Connect

    Anovitz, L.M.; Essene, E.J.

    1985-01-01

    Several well-calibrated barometers are available in the system Ca-Fe-Ti-Al-Si-O, including: Alm + 3Ru = 3Ilm + Sil + 2Qtz (GRAIL); 2Alm + Gr + 6Ru = 6Ilm + 3An + 3Qtz (GRIPS); 2Alm + Gr = 3Fa + 3An (FAGS); 3An = Gr + 2Ky + Qtz (GASP); 2Fs = Fa + Qtz (FFQ); and Gr + Qtz = An + 2Wo (WAGS). GRIPS, GRAIL and GASP form a linearly dependent set such that any two should yield the third given an a/X model for the grossular/almandine solid solution. Application to barometry of garnet granulite assemblages from the Grenville in Ontario yields average pressures 0.1 kb lower for GRIPS and 0.4 kb higher for FAGS using our mixing model. Results from Parry Island, Ontario, yield 8.7 kb from GRAIL as opposed to 9.1 kb using Ganguly and Saxena's model. For GASP, Parry Island assemblages yield 8.4 kb with the authors' calibration. Ganguly and Saxena's model gives 5.4 kb using Gasparik's reversals and 8.1 kb using the position of GASP calculated from GRIPS and GRAIL. These corrections allow GRIPS, GRAIL, GASP and FAGS to yield consistent pressures to +/- 0.5 kb in regional metamorphic terranes. Application of their mixing model outside of the fitted range 700-1000 K is not encouraged, as extrapolation may yield erroneous results.

  14. Nonlinear spectral mixing theory to model multispectral signatures

    SciTech Connect

    Borel, C.C.

    1996-02-01

    Nonlinear spectral mixing occurs due to multiple reflections and transmissions between discrete surfaces, e.g. leaves or facets of a rough surface. The radiosity method is an energy-conserving computational method used in thermal engineering, and it models nonlinear spectral mixing realistically and accurately. In contrast to the radiative transfer method, the radiosity method takes into account the discreteness of the scattering surfaces (e.g. exact location, orientation and shape), such as leaves, and includes mutual shading between them. An analytic radiosity-based scattering model for vegetation was developed and used to compute vegetation indices for various configurations. The leaf reflectance and transmittance were modeled using the PROSPECT model for various amounts of water and chlorophyll and variable leaf structure. The soil background was modeled using SOILSPEC with a linear mixture of reflectances of sand, clay and peat. A neural network and a geometry-based retrieval scheme were used to retrieve leaf area index and chlorophyll concentration for dense canopies. Only simulated canopy reflectances in the six visible through shortwave-IR Landsat TM channels were used. The authors used an empirical function to compute the signal-to-noise ratio of a retrieved quantity.
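
    The heart of the radiosity method referenced above is a linear balance per spectral band, B = E + diag(rho) F B, whose solution contains products of reflectances from multiple bounces; those products are what make the resulting spectral mixing nonlinear in the endmember reflectances. A toy sketch with three facets and invented form factors:

```python
# Toy radiosity balance B = E + rho * (F @ B) for a handful of facets.
# Form factors F and reflectances rho are invented for illustration; the
# solved radiosities B contain reflectance products (multiple bounces),
# which is the nonlinear mixing the abstract refers to.
import numpy as np

rho = np.array([0.6, 0.3, 0.8])          # facet reflectances (one band)
E = np.array([1.0, 0.0, 0.0])            # only facet 0 is directly lit
F = np.array([[0.0, 0.4, 0.2],           # hypothetical form factors
              [0.4, 0.0, 0.3],
              [0.2, 0.3, 0.0]])

# Solve (I - diag(rho) F) B = E : one linear solve per spectral band.
B = np.linalg.solve(np.eye(3) - np.diag(rho) @ F, E)
print("radiosities with interreflection:", B)
print("single-bounce approximation     :", E + np.diag(rho) @ F @ E)
```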

  15. Mixing characteristics of sludge simulant in a model anaerobic digester.

    PubMed

    Low, Siew Cheng; Eshtiaghi, Nicky; Slatter, Paul; Baudez, Jean-Christophe; Parthasarathy, Rajarathinam

    2016-03-01

    This study aims to investigate the mixing characteristics of a transparent sludge simulant in a mechanically agitated model digester using a flow visualisation technique. Video images of the flow patterns were obtained by recording the progress of an acid-base reaction and analysed to determine the active and inactive volumes as a function of time. The doughnut-shaped inactive region formed above and below the impeller in low concentration simulant decreases in size with time and eventually disappears. The 'cavern'-shaped active mixing region formed around the impeller in simulant solutions with higher concentrations increases with increasing agitation time and reaches a steady-state equilibrium size, which is a function of specific power input. These results indicate that the active volume is jointly determined by simulant rheology and specific power input. A mathematical correlation is proposed to estimate the active volume as a function of simulant concentration in terms of the yield Reynolds number. PMID:26739143

  16. Mixing in age-structured population models of infectious diseases.

    PubMed

    Glasser, John; Feng, Zhilan; Moylan, Andrew; Del Valle, Sara; Castillo-Chavez, Carlos

    2012-01-01

    Infectious diseases are controlled by reducing pathogen replication within or transmission between hosts. Models can reliably evaluate alternative strategies for curtailing transmission, but only if interpersonal mixing is represented realistically. Compartmental modelers commonly use convex combinations of contacts within and among groups of similarly aged individuals, respectively termed preferential and proportionate mixing. Recently published face-to-face conversation and time-use studies suggest that parents and children and co-workers also mix preferentially. As indirect effects arise from the off-diagonal elements of mixing matrices, these observations are exceedingly important. Accordingly, we refined the formula published by Jacquez et al. [19] to account for these newly observed patterns and estimated age-specific fractions of contacts with each preferred group. As the ages of contemporaries need not be identical, nor those of parents and children differ by exactly the generation time, we also estimated the variances of the Gaussian distributions with which we replaced the Kronecker delta commonly used in theoretical studies. Our formulae reproduce observed patterns and can be used, given contacts, to estimate probabilities of infection on contact, infection rates, and reproduction numbers. As examples, we illustrate these calculations for influenza based on "attack rates" from a prospective household study during the 1957 pandemic and for varicella based on cumulative incidence estimated from a cross-sectional serological survey conducted from 1988 to 1994, together with contact rates from the several face-to-face conversation and time-use studies. Susceptibility to infection on contact generally declines with age, but may be elevated among adolescents and adults with young children. PMID:22037144
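
    A minimal sketch of the construction described above: a contact matrix built as a convex combination of preferential mixing (a Gaussian in age difference, standing in for the Kronecker delta) and proportionate mixing. The fraction epsilon and the 5-year spread are invented values rather than the paper's estimates, and only the same-age preference term is included (not the parent-child or co-worker terms).

```python
# Age-mixing matrix as a convex combination of preferential and
# proportionate mixing. The Gaussian in age difference replaces the
# Kronecker delta, as the abstract describes; eps and sd are invented.
import numpy as np

ages = np.arange(0, 81)                       # single-year age classes
activity = np.ones_like(ages, dtype=float)    # contact rates (uniform here)

def mixing_matrix(eps=0.4, sd=5.0):
    diff = ages[:, None] - ages[None, :]
    pref = np.exp(-0.5 * (diff / sd) ** 2)    # Gaussian same-age preference
    pref /= pref.sum(axis=1, keepdims=True)
    prop = activity / activity.sum()          # proportionate: activity share
    return eps * pref + (1 - eps) * prop      # rows sum to one

M = mixing_matrix()
print("row sums ~ 1:", np.allclose(M.sum(axis=1), 1.0))
```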

  17. A Web Graphics Primer.

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1999-01-01

    Discusses the basic technical concepts of using graphics in World Wide Web pages, including color depth and dithering, dots per inch, image size, file types such as the Graphics Interchange Format (GIF) and the Joint Photographic Experts Group (JPEG) format, and software recommendations. (AEF)

  18. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers deciding how to get to their destination, discrete choice models based on random utility theory have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficients distribution. The probability of choosing an alternative is an integral, which is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
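
    As a concrete illustration of the estimation idea above, the sketch below evaluates a simulated log-likelihood for a two-alternative mixed logit in which a single attribute enters through a Box-Cox transform and its coefficient is normally distributed across draws. The data, parameter values and single-attribute utility are synthetic assumptions; a real application would pass sim_loglik to a numerical optimizer (e.g. scipy.optimize.minimize) over (mu, sigma, lam).

```python
# Simulated log-likelihood sketch for a two-alternative mixed logit with
# a Box-Cox transformed attribute. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(1)

def boxcox(x, lam):
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def sim_loglik(params, x, choice, n_draws=200):
    mu, sigma, lam = params
    beta = mu + sigma * rng.standard_normal(n_draws)       # random coefficient
    v = beta[:, None, None] * boxcox(x, lam)               # draws x obs x alts
    p = np.exp(v) / np.exp(v).sum(axis=2, keepdims=True)   # logit per draw
    p_chosen = p.mean(axis=0)[np.arange(len(choice)), choice]  # simulated integral
    return np.log(p_chosen).sum()

x = rng.uniform(5, 60, size=(500, 2))        # travel times, 2 alternatives
true_v = -0.08 * x                           # data-generating utilities
choice = (rng.uniform(size=500) <
          np.exp(true_v[:, 1]) / np.exp(true_v).sum(axis=1)).astype(int)
print("simulated log-likelihood at a trial point:",
      sim_loglik((-0.5, 0.2, 0.5), x, choice))
```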

  19. Using Search Algorithms and Probabilistic Graphical Models to Understand the Influence of Atmospheric Circulation on Western US Drought

    NASA Astrophysics Data System (ADS)

    Malevich, S. B.; Woodhouse, C. A.

    2015-12-01

    This work explores a new approach to quantifying cool-season mid-latitude circulation dynamics as they relate to western US streamflow variability and drought. This information is used to probabilistically associate patterns of synoptic atmospheric circulation with spatial patterns of drought in western US streamflow. Cool-season storms transport moisture from the Pacific Ocean and are a primary source for western US streamflow. Studies over the past several decades have emphasized that the western US hydroclimate is influenced by the intensity and phasing of ocean and atmosphere dynamics and teleconnections, such as ENSO and North Pacific variability. These complex interactions are realized in atmospheric circulation along the west coast of North America. The region's atmospheric circulation can encourage a preferential flow in winter storm tracks from the Pacific, and thus influence the moisture conditions of a given river basin over the course of the cool season. These dynamics have traditionally been measured with atmospheric indices based on values from fixed points in space or principal component loadings. This study uses collective search agents to quantify the position and intensity of potentially non-stationary atmospheric features in climate reanalysis datasets, relative to regional hydrology. Results underline the spatio-temporal relationship between semi-permanent atmospheric characteristics and naturalized streamflow from major river basins of the western US. A probabilistic graphical model quantifies this relationship while accounting for uncertainty from noisy climate processes and, eventually, limitations from dataset length. This creates probabilities for semi-permanent atmospheric features which we hope to associate with extreme droughts of the paleo record, based on our understanding of atmosphere-streamflow relations observed in the instrumental record.

  20. Graphical assessment of incremental value of novel markers in prediction models: From statistical to decision analytical perspectives.

    PubMed

    Steyerberg, Ewout W; Vedder, Moniek M; Leening, Maarten J G; Postmus, Douwe; D'Agostino, Ralph B; Van Calster, Ben; Pencina, Michael J

    2015-07-01

    New markers may improve prediction of diagnostic and prognostic outcomes. We aimed to review options for graphical display and summary measures to assess the predictive value of markers over standard, readily available predictors. We illustrated various approaches using previously published data on 3264 participants from the Framingham Heart Study, where 183 developed coronary heart disease (10-year risk 5.6%). We considered performance measures for the incremental value of adding HDL cholesterol to a prediction model. An initial assessment may consider statistical significance (HR = 0.65, 95% confidence interval 0.53 to 0.80; likelihood ratio p < 0.001), and distributions of predicted risks (densities or box plots) with various summary measures. A range of decision thresholds is considered in predictiveness and receiver operating characteristic curves, where the area under the curve (AUC) increased from 0.762 to 0.774 by adding HDL. We can furthermore focus on reclassification of participants with and without an event in a reclassification graph, with the continuous net reclassification improvement (NRI) as a summary measure. When we focus on one particular decision threshold, the changes in sensitivity and specificity are central. We propose a net reclassification risk graph, which allows us to focus on the number of reclassified persons and their event rates. Summary measures include the binary AUC, the two-category NRI, and decision analytic variants such as the net benefit (NB). Various graphs and summary measures can be used to assess the incremental predictive value of a marker. Important insights for impact on decision making are provided by a simple graph for the net reclassification risk. PMID:25042996
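
    A hedged sketch of two of the summary measures discussed above, on synthetic data rather than the Framingham cohort: the change in AUC when a marker is added, and the two-category NRI at a single decision threshold (10% here, an arbitrary choice).

```python
# Incremental value of a marker: delta-AUC and two-category NRI on
# synthetic data. sklearn supplies the AUC; the NRI is computed by hand.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 3000
base = rng.normal(size=(n, 2))               # standard predictors
marker = rng.normal(size=(n, 1))             # the "new" marker
logit = 0.8 * base[:, 0] + 0.5 * base[:, 1] + 0.6 * marker[:, 0] - 2.5
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

p_old = LogisticRegression().fit(base, y).predict_proba(base)[:, 1]
Xnew = np.hstack([base, marker])
p_new = LogisticRegression().fit(Xnew, y).predict_proba(Xnew)[:, 1]
print("AUC old/new:", roc_auc_score(y, p_old), roc_auc_score(y, p_new))

t = 0.1                                      # single decision threshold
up = (p_new >= t) & (p_old < t)              # reclassified upward
down = (p_new < t) & (p_old >= t)            # reclassified downward
nri = (up[y].mean() - down[y].mean()) + (down[~y].mean() - up[~y].mean())
print("two-category NRI:", nri)
```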

  1. Modeling of Transient Flow Mixing of Streams Injected into a Mixing Chamber

    NASA Technical Reports Server (NTRS)

    Voytovych, Dmytro M.; Merkle, Charles L.; Lucht, Robert P.; Hulka, James R.; Jones, Gregg W.

    2006-01-01

    Ignition is recognized as one of the critical drivers in the reliability of multiple-start rocket engines. Residual combustion products from previous engine operation can condense on valves and related structures, thereby creating difficulties for subsequent starting procedures. Alternative ignition methods that require fewer valves can mitigate the valve reliability problem, but require improved understanding of the spatial and temporal propellant distribution in the pre-ignition chamber. Current design tools, based mainly on one-dimensional analysis and empirical models, cannot predict local details of the injection and ignition processes. The goal of this work is to evaluate the capability of modern computational fluid dynamics (CFD) tools in predicting the transient flow mixing in the pre-ignition environment by comparing the results with experimental data. This study is part of a program to improve analytical methods and methodologies to analyze the reliability and durability of combustion devices. In the present paper we describe a series of detailed computational simulations of the unsteady mixing events as the cold propellants are first introduced into the chamber, as a first step in providing this necessary environmental description. The present computational modeling represents a complement to parallel experimental simulations and includes comparisons with experimental results from that effort. A large number of rocket engine ignition studies have been previously reported. Here we limit our discussion to the work discussed in Refs. 2, 3 and 4, which is both similar to and different from the present approach. The similarities arise from the fact that both efforts involve detailed experimental/computational simulations of the ignition problem. The differences arise from the underlying philosophy of the two endeavors. The approach in Refs. 2 to 4 is a classical ignition study in which the focus is on the response of a propellant mixture to an ignition source, with

  2. A Bayesian nonlinear mixed-effects disease progression model

    PubMed Central

    Kim, Seongho; Jang, Hyejeong; Wu, Dongfeng; Abrams, Judith

    2016-01-01

    A nonlinear mixed-effects approach is developed for disease progression models that incorporate variation in age in a Bayesian framework. We further generalize the probability model for sensitivity to depend on age at diagnosis, time spent in the preclinical state, and sojourn time. The developed models are then applied to the Johns Hopkins Lung Project data and the Health Insurance Plan for Greater New York data using Bayesian Markov chain Monte Carlo, and are compared with an estimation method that does not consider random effects of age. Using the developed models, we obtain not only age-specific individual-level distributions, but also population-level distributions of sensitivity, sojourn time and transition probability. PMID:26798562

  3. Mixing and shocks in geophysical shallow water models

    NASA Astrophysics Data System (ADS)

    Jacobson, Tivon

    In the first section, a reduced two-layer shallow water model for fluid mixing is described. The model is a quasilinear hyperbolic system of partial differential equations, derived by taking the limit as the upper layer becomes infinitely deep. It resembles the shallow water equations, but with an active buoyancy. Fluid entrainment is assumed to occur from the upper layer to the lower. Several physically motivated closures are proposed, including a robust closure based on maximizing a mixing entropy (also defined and derived) at shocks. The structure of shock solutions is examined. The Riemann problem is solved by setting the shock speed to maximize the production of mixing entropy. Shock-resolving finite-volume numerical models are presented with and without topographic forcing. Explicit shock tracking is required for strong shocks. The constraint that turbulent energy production be positive is considered. The model has geophysical applications in studying the dynamics of dense sill overflows in the ocean. The second section discusses stationary shocks of the shallow water equations in a reentrant rotating channel with wind stress and topography. Asymptotic predictions for the shock location, strength, and associated energy dissipation are developed by taking the topographic perturbation to be small. The scaling arguments for the asymptotics are developed by demanding integrated energy and momentum balance, with the result that the free surface perturbation is of the order of the square root of the topographic perturbation. Shock formation requires that linear waves be nondispersive, which sets a solvability condition on the mean flow and which leads to a class of generalized Kelvin waves. Two-dimensional shock-resolving numerical simulations validate the asymptotic expressions and demonstrate the presence of stationary separated flow shocks in some cases. Geophysical applications are considered. Overview sections on shock-resolving numerical methods

  4. Subgrid models for mass and thermal diffusion in turbulent mixing

    SciTech Connect

    Sharp, David H; Lim, Hyunkyung; Li, Xiao-Lin; Glimm, James G

    2008-01-01

    We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion, to define large eddy simulations (LES) that replicate the micro features observed in the direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000 and 600K. Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of Front Tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without resolving the

  5. MIXING MODELING ANALYSIS FOR SRS SALT WASTE DISPOSITION

    SciTech Connect

    Lee, S.

    2011-01-18

    Nuclear waste in Savannah River Site (SRS) waste tanks consists of three different waste forms: the lighter salt solutions referred to as supernate, the precipitated salts as salt cake, and heavier fine solids as sludge. The sludge is settled on the tank floor. About half of the residual waste radioactivity is contained in the sludge, which is only about 8 percent of the total waste volume. The mixing study evaluated here for the Salt Disposition Integration (SDI) project focuses on supernate preparations in waste tanks prior to transfer to the Salt Waste Processing Facility (SWPF) feed tank. The methods to mix and blend the contents of the SRS blend tanks were evaluated to ensure that the contents are properly blended before they are transferred from a blend tank such as Tank 50H to the SWPF feed tank. The work consists of two principal objectives involving two different pumps. One objective is to identify a suitable pumping arrangement that will adequately blend/mix two miscible liquids to obtain a uniform composition in the tank with a minimum level of sludge solid particulate in suspension. The other is to estimate the elevation in the tank at which the transfer pump inlet should be located so that the solid concentration of the entrained fluid remains below the acceptance criterion (0.09 wt% or 1200 mg/liter) during transfer operation to the SWPF. Tank 50H is a waste tank that will be used to prepare batches of salt feed for SWPF. The salt feed must be a homogeneous solution satisfying the acceptance criterion on solids entrainment during transfer operation. The work described here consists of two modeling areas: the mixing modeling analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended liquid. The modeling results will provide quantitative design and operation information during the mixing/blending process and the transfer operation of the blended

  6. Marginally specified generalized linear mixed models: a robust approach.

    PubMed

    Mills, J E; Field, C A; Dupuis, D J

    2002-12-01

    Longitudinal data modeling is complicated by the necessity to deal appropriately with the correlation between observations made on the same individual. Building on an earlier nonrobust version proposed by Heagerty (1999, Biometrics 55, 688-698), our robust marginally specified generalized linear mixed model (ROBMS-GLMM) provides an effective method for dealing with such data. This model is one of the first to allow both population-averaged and individual-specific inference. As well, it adopts the flexibility and interpretability of generalized linear mixed models for introducing dependence but builds a regression structure for the marginal mean, allowing valid application with time-dependent (exogenous) and time-independent covariates. These new estimators are obtained as solutions of a robustified likelihood equation involving Huber's least favorable distribution and a collection of weights. Huber's least favorable distribution produces estimates that are resistant to certain deviations from the random effects distributional assumptions. Innovative weighting strategies enable the ROBMS-GLMM to perform well when faced with outlying observations both in the response and covariates. We illustrate the methodology with an analysis of a prospective longitudinal study of laryngoscopic endotracheal intubation, a skill that numerous health-care professionals are expected to acquire. The principal goal of our research is to achieve robust inference in longitudinal analyses. PMID:12495126

  7. Graphics at DESY

    NASA Astrophysics Data System (ADS)

    Schilling, Peter K.

    1989-12-01

    After a short history of computer graphics at DESY, the introduction of graphic workstations based on true and "quasi" standards is described. An overview of graphics hardware and software at DESY is given, as well as the communication facilities used. Some remarks about current and future development conclude the paper.

  8. Model of Mixing Layer With Multicomponent Evaporating Drops

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Le Clercq, Patrick

    2004-01-01

    A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The study is motivated by the fact that typical real petroleum fuels contain hundreds of chemical species. Previously, for the sake of computational efficiency, spray studies were performed using either models based on a single representative species or models based on surrogate fuels of at most 15 species. The present multicomponent model makes it possible to perform more realistic simulations by accounting for hundreds of chemical species in a computationally efficient manner. The model is used to perform Direct Numerical Simulations in continuing studies directed toward understanding the behavior of liquid petroleum fuel sprays. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and the drops, respectively. This representation is consistent with the expected volumetrically small loading of the drops in gas (of the order of 10^-3), although the mass loading can be substantial because of the high ratio (of the order of 10^3) between the densities of liquid and gas. The drops are treated as point sources of mass, momentum, and energy; this representation is consistent with the drop size being smaller than the Kolmogorov scale. Unsteady drag, added-mass effects, Basset history forces, and collisions between the drops are neglected, and the gas is assumed calorically perfect. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of a fuel is described probabilistically, by use of a distribution function. Distribution functions generally depend on many parameters. However, for mixtures of homologous species, the distribution can be approximated with acceptable accuracy as a sole function of the molecular weight. The mixing layer is initially laden with drops in its lower stream, and the drops are colder than the gas

  9. "No One's the Boss of My Painting:" A Model of the Early Development of Artistic Graphic Representation

    ERIC Educational Resources Information Center

    Louis, Linda

    2013-01-01

    This article reports on the most recent phase of an ongoing research program that examines the artistic graphic representational behavior and paintings of children between the ages of four and seven. The goal of this research program is to articulate a contemporary account of artistic growth and to illuminate how young children's changing…

  10. Spatial Visualization Research and Theories: Their Importance in the Development of an Engineering and Technical Design Graphics Curriculum Model.

    ERIC Educational Resources Information Center

    Miller, Craig L.; Bertoline, Gary R.

    1991-01-01

    An overview that gives an introduction to the theories, terms, concepts, and prior research conducted on visualization is presented. This information is to be used as a basis for developing spatial research studies that lend support to the theory that the engineering and technical design graphics curriculum is important in the development of…

  11. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value mu, while the variance sigma_c^2(t) decays approximately as t^(-1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma_n, which we model in a first step as a deterministic function. In a second step, we generalize gamma_n as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
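
    To make the variance-decay language above concrete, here is a minimal particle sketch of scalar mixing using the classical IEM closure (relaxation of each sample toward the ensemble mean at a constant rate gamma). This closure is an assumption chosen for brevity, not the paper's model, which in effect makes the exchange rate stochastic so that the variance decays as a power law rather than exponentially.

```python
# IEM-type particle sketch of scalar mixing: each notional particle's
# scalar relaxes toward the ensemble mean, and the variance decays.
import numpy as np

rng = np.random.default_rng(3)
c = rng.uniform(0.0, 1.0, 10000)       # initial scalar samples
gamma, dt = 1.0, 0.01
for step in range(1, 501):
    c += -gamma * (c - c.mean()) * dt  # relax toward the mean
    if step % 100 == 0:
        print(f"t={step*dt:.1f}  variance={c.var():.5f}")
# Constant-rate IEM gives exponential decay, var ~ exp(-2*gamma*t); the
# model in the abstract is built so the decay follows t^(-1) instead.
```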

  12. Analysis on sheet cyclic plastic deformation using mixed hardening model

    NASA Astrophysics Data System (ADS)

    Li, Qun; Jin, Miao; Yuxin, Zhu

    2013-05-01

    Taking the cyclic deformation of sheet metal flowing through a drawbead as the object of research, and using Hill's anisotropic yield criterion together with a mixed hardening model, the cyclic plastic deformation mechanism of the sheet was studied, the deformation characteristics of sheet subjected to cyclic loads were revealed, and the influence of the Bauschinger effect on the cyclic stress-strain relationship and the influence of bending neutral layer migration on the stress at the sheet's intermediate integration point were analyzed. The effectiveness of the model was verified by experiments. The analysis showed that the stress values influenced by the Bauschinger effect differed between the yield point of reverse loading and the point of unloading during cyclic deformation. The stress rate at the yield point of reverse loading and the point of unloading also differed between loading branches. The stress-strain relationship in different loading branches can be approximately treated as bilinear. The tangent modulus of each loading branch showed a significant downward trend as the number of load reversals increased. The tangent modulus calculated by the mixed hardening model after the second loading branch reduced to less than 21% of the first-loading tangent modulus. Affected by the migration of the neutral layer, the stress-strain curve at the integration point of the sheet's intermediate layer showed alternating transitions between tensile and compressive stress.
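
    The mixed hardening idea above (isotropic expansion plus kinematic translation of the yield surface) is easiest to see in one dimension. The return-mapping sketch below drives a strain cycle through a combined-hardening material; all constants are invented, and the 1-D model only illustrates the Bauschinger effect (earlier reverse yield), not the paper's Hill-anisotropic sheet formulation.

```python
# 1-D return mapping with combined (mixed) isotropic + kinematic
# hardening, driven through a load/reverse strain path.
import numpy as np

E, sy, Hi, Hk = 200e3, 250.0, 1e3, 5e3   # MPa: modulus, yield, iso/kin moduli
eps_p = alpha = p = 0.0                   # plastic strain, back stress, eqv. strain
path = np.concatenate([np.linspace(0, 0.01, 100),
                       np.linspace(0.01, -0.01, 200)])  # load, then reverse
for eps in path:
    s_tr = E * (eps - eps_p)                  # elastic trial stress
    f = abs(s_tr - alpha) - (sy + Hi * p)     # yield check vs shifted surface
    if f > 0:                                 # plastic corrector step
        dg = f / (E + Hi + Hk)
        sgn = np.sign(s_tr - alpha)
        eps_p += dg * sgn                     # plastic flow
        alpha += Hk * dg * sgn                # kinematic: surface translates
        p += dg                               # isotropic: surface expands
sigma = E * (path[-1] - eps_p)
print(f"residual stress at eps={path[-1]}: {sigma:.1f} MPa")
```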

  13. Chaos in the Mixed Even-Spin Models

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Kuo

    2014-06-01

    We consider a disordered system obtained by coupling two mixed even-spin models together. The chaos problem is concerned with the behavior of the coupled system when the external parameters in the two models, such as temperature, disorder, or external field, are slightly different. It is conjectured that the overlap between two independently sampled spin configurations from, respectively, the Gibbs measures of the two models is essentially concentrated around a constant under the coupled Gibbs measure. Using the extended Guerra replica symmetry breaking bound together with a recent development of controlling the overlap using the Ghirlanda-Guerra identities as well as a new family of identities, we present rigorous results on chaos in temperature. In addition, chaos in disorder and in external field are addressed.

  14. Demonstrating Patterns in the Views Of Stakeholders Regarding Ethically-Salient Issues in Clinical Research: A Novel Use of Graphical Models in Empirical Ethics Inquiry

    PubMed Central

    Kim, Jane Paik; Roberts, Laura Weiss

    2015-01-01

    Background Empirical ethics inquiry works from the notion that stakeholder perspectives are necessary for gauging the ethical acceptability of human studies and assuring that research aligns with societal expectations. Although common, studies involving different populations often entail comparisons of trends that problematize the interpretation of results. Using graphical model selection – a technique aimed at transcending limitations of conventional methods – this report presents data on the ethics of clinical research with two objectives: (1) to display the patterns of views held by ill and healthy individuals in clinical research as a test of the study’s original hypothesis and (2) to introduce graphical model selection as a key analytic tool for ethics research. Methods In this IRB-approved, NIH-funded project, data were collected from 60 mentally ill and 43 physically ill clinical research protocol volunteers, 47 healthy protocol-consented participants, and 29 healthy individuals without research protocol experience. Respondents were queried on the ethical acceptability of research involving people with mental and physical illness (i.e., cancer, HIV, depression, schizophrenia, and post-traumatic stress disorder) and non-illness related sources of vulnerability (e.g., age, class, gender, ethnicity). Using a statistical algorithm, we selected graphical models to display interrelationships among responses to questions. Results Both mentally and physically ill protocol volunteers revealed a high degree of connectivity among ethically-salient perspectives. Healthy participants, irrespective of research protocol experience, revealed patterns of views that were not highly connected. Conclusion Between ill and healthy protocol participants, the pattern of views is vastly different. Experience with illness was tied to dense connectivity, whereas healthy individuals expressed views with sparse connections. In offering a nuanced perspective on the interrelation of
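
    The paper's statistical algorithm for selecting a graphical model is not spelled out in the abstract; as a hedged stand-in, the sketch below runs the widely used graphical lasso on synthetic continuous responses and reads edges off the estimated precision matrix. Real survey items would typically be ordinal, so treat this purely as an illustration of "patterns of views as graph connectivity".

```python
# Graphical model selection via the graphical lasso (a stand-in for the
# paper's unspecified algorithm) on synthetic "questionnaire" data.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(11)
# Six items, two correlated blocks (0-1 and 3-4), rest independent.
cov = np.eye(6)
cov[0, 1] = cov[1, 0] = 0.6
cov[3, 4] = cov[4, 3] = 0.7
X = rng.multivariate_normal(np.zeros(6), cov, size=200)

model = GraphicalLasso(alpha=0.1).fit(X)
prec = model.precision_
edges = [(i, j) for i in range(6) for j in range(i + 1, 6)
         if abs(prec[i, j]) > 1e-4]
print("recovered edges (conditional dependencies):", edges)
```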

  15. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.

  16. Estimating anatomical trajectories with Bayesian mixed-effects modeling

    PubMed Central

    Ziegler, G.; Penny, W.D.; Ridgway, G.R.; Ourselin, S.; Friston, K.J.

    2015-01-01

    We introduce a mass-univariate framework for the analysis of whole-brain structural trajectories using longitudinal Voxel-Based Morphometry data and Bayesian inference. Our approach to developmental and aging longitudinal studies characterizes heterogeneous structural growth/decline between and within groups. In particular, we propose a probabilistic generative model that parameterizes individual and ensemble average changes in brain structure using linear mixed-effects models of age and subject-specific covariates. Model inversion uses Expectation Maximization (EM), while voxelwise (empirical) priors on the size of individual differences are estimated from the data. Bayesian inference on individual and group trajectories is realized using Posterior Probability Maps (PPM). In addition to parameter inference, the framework affords comparisons of models with varying combinations of model order for fixed and random effects using model evidence. We validate the model in simulations and real MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. We further demonstrate how subject specific characteristics contribute to individual differences in longitudinal volume changes in healthy subjects, Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD). PMID:26190405
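
    A scaled-down analogue of the model described above: fitting a linear mixed-effects model with subject-level random intercepts to synthetic longitudinal volumes (one response rather than voxelwise maps, and plain (RE)ML via statsmodels rather than the paper's EM scheme with empirical priors).

```python
# Random-intercept linear mixed-effects fit on synthetic longitudinal
# "volume" data; a one-response stand-in for the voxelwise framework.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_subj, n_visits = 40, 4
subj = np.repeat(np.arange(n_subj), n_visits)
age = np.tile(np.arange(n_visits), n_subj) + rng.uniform(60, 80, n_subj)[subj]
u = rng.normal(0, 2.0, n_subj)                 # subject random intercepts
vol = 50 - 0.3 * age + u[subj] + rng.normal(0, 0.5, subj.size)

df = pd.DataFrame({"vol": vol, "age": age, "subj": subj})
fit = smf.mixedlm("vol ~ age", df, groups=df["subj"]).fit()
print(fit.summary())
```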

  18. A Mixed Model for Real-Time, Interactive Simulation of a Cable Passing Through Several Pulleys

    SciTech Connect

    Garcia-Fernandez, Ignacio; Pla-Castells, Marta; Martinez-Dura, Rafael J.

    2007-09-06

    A model of a cable and pulleys is presented that can be used in real-time computer graphics applications. The model is formulated by coupling a damped spring and a variable-coefficient wave equation, and can be integrated into more complex mechanical models of lift systems, such as cranes, elevators, etc., with a high degree of interactivity.

  19. Fermion flavor mixing in models with dynamical mass generation

    SciTech Connect

    Benes, Petr

    2010-03-15

    We present a model-independent method of dealing with fermion flavor mixing in the case when instead of constant, momentum-independent mass matrices one has rather momentum-dependent self-energies. This situation is typical for strongly coupled models of dynamical fermion mass generation. We demonstrate our approach on the example of quark mixing. We show that quark self-energies with a generic momentum dependence lead to an effective Cabibbo-Kobayashi-Maskawa matrix, which turns out to be in general nonunitary, in accordance with previous claims of other authors, and to nontrivial flavor changing electromagnetic and neutral currents. We also discuss some conceptual consequences of the momentum-dependent self-energies and show that in such a case the interaction basis and the mass basis are not related by a unitary transformation. In fact, we argue that the latter is merely an effective concept, in a specified sense. While focusing mainly on the fermionic self-energies, we also study the effects of momentum-dependent radiative corrections to the gauge bosons and to the proper vertices. Our approach is based on an application of the Lehmann-Symanzik-Zimmermann reduction formula and for the special case of constant self-energies it gives the same results as the standard approach based on the diagonalization of mass matrices.

  20. System dynamics of behaviour-evolutionary mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Cheng-Ling; Gao, Jie-Ping; Chen, Fang

    2010-11-01

    In real financial markets there are two kinds of traders: one is the fundamentalist, the other the trend-follower. The mix-game model is proposed to mimic such phenomena. In a mix-game model there are two groups of agents: Group 1 plays the majority game and Group 2 plays the minority game. In this paper, we investigate the case in which some traders in real financial markets can change their investment behaviours, by assigning evolutionary abilities to agents: if the winning rates of agents are smaller than a threshold, they join the other group, and agents repeat such an evolution at certain time intervals. Through the simulations, we obtain the following findings: (i) the volatilities of systems increase with the number of agents in Group 1 and the number of behavioural changes of all agents; (ii) the performance of agents in both groups and the stability of systems improve if all agents take more time to observe their new investment behaviours; (iii) there are two-phase zones of market and non-market and two-phase zones of evolution and non-evolution; (iv) parameter configurations located within the cross areas between the zones of markets and the zones of evolution are suited to simulating financial markets.

  1. A graphical modeling tool for evaluating nitrogen loading to and nitrate transport in ground water in the mid-Snake region, south-central Idaho

    USGS Publications Warehouse

    Clark, David W.; Skinner, Kenneth D.; Pollock, David W.

    2006-01-01

    A flow and transport model was created with a graphical user interface to simplify the evaluation of nitrogen loading and nitrate transport in the mid-Snake region in south-central Idaho. This model and interface package, the Snake River Nitrate Scenario Simulator, uses the U.S. Geological Survey's MODFLOW 2000 and MOC3D models. The interface, which is enabled for use with geographic information systems (GIS), was created using ESRI's royalty-free MapObjects LT software. The interface lets users view initial nitrogen-loading conditions (representing conditions as of 1998), alter the nitrogen loading within selected zones by specifying a multiplication factor and applying it to the initial condition, run the flow and transport model, and view a graphical representation of the modeling results. The flow and transport model of the Snake River Nitrate Scenario Simulator was created by rediscretizing and recalibrating a clipped portion of an existing regional flow model. The new subregional model was recalibrated with newly available water-level data and spring and ground-water nitrate concentration data for the study area. An updated nitrogen input GIS layer controls the application of nitrogen to the flow and transport model. Users can alter the nitrogen application to the flow and transport model by altering the nitrogen load in predefined spatial zones contained within similar political, hydrologic, and size-constrained boundaries.

  2. Relativistic hydrodynamics on graphic cards

    NASA Astrophysics Data System (ADS)

    Gerhard, Jochen; Lindenstruth, Volker; Bleicher, Marcus

    2013-02-01

    We show how to accelerate relativistic hydrodynamics simulations using graphics cards (graphics processing units, GPUs). These improvements are of the highest relevance, e.g., to the field of high-energy nucleus-nucleus collisions at RHIC and LHC, where (ideal and dissipative) relativistic hydrodynamics is used to calculate the evolution of hot and dense QCD matter. The results reported here are based on the Sharp And Smooth Transport Algorithm (SHASTA), which is employed in many hydrodynamical models and hybrid simulation packages, e.g. the Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). We have redesigned the SHASTA using the OpenCL computing framework to work on accelerators like GPUs as well as on multi-core processors. With the redesign of the algorithm, the hydrodynamic calculations have been accelerated by a factor of 160, allowing for event-by-event calculations and better statistics in hybrid calculations.

  3. Graphical programming at Sandia National Laboratories

    SciTech Connect

    McDonald, M.J.; Palmquist, R.D.; Desjarlais, L.

    1993-09-01

    Sandia has developed an advanced operational control system approach, called Graphical Programming, to design, program, and operate robotic systems. The Graphical Programming approach produces robot systems that are faster to develop and use, safer in operation, and cheaper overall than alternative teleoperation or autonomous robot control systems. Graphical Programming also provides an efficient and easy-to-use interface to traditional robot systems for use in setup and programming tasks. This paper provides an overview of the Graphical Programming approach and lists key features of Graphical Programming systems. Graphical Programming uses 3-D visualization and simulation software with intuitive operator interfaces for the programming and control of complex robotic systems. Graphical Programming Supervisor software modules allow an operator to command and simulate complex tasks in a graphic preview mode and, when acceptable, command the actual robots and monitor their motions with the graphic system. Graphical Programming Supervisors maintain registration with the real world and allow the robot to perform tasks that cannot be accurately represented with models alone by using a combination of model- and sensor-based control.

  4. Functional Nonlinear Mixed Effects Models For Longitudinal Image Data

    PubMed Central

    Luo, Xinchao; Zhu, Lixing; Kong, Linglong; Zhu, Hongtu

    2015-01-01

    Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FNMEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulations to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders. PMID:26213453

  5. A mixing evolution model for bidirectional microblog user networks

    NASA Astrophysics Data System (ADS)

    Yuan, Wei-Guo; Liu, Yun

    2015-08-01

    Microblogs have been widely used as a new form of online social networking. Based on user profile data collected from Sina Weibo, we find that the number of a microblog user's bidirectional friends approximately follows a lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which have not only small-world and scale-free properties but also some special ones, such as a double power-law degree distribution, disassortativity, and hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find that both of their community scales follow an exponential distribution. Based on the empirical analysis, we present a novel evolution network model with mixed connection rules, including lognormal-fitness preferential and random attachment, nearest-neighbour interconnection within the same community, and global random associations across different communities. The simulation results show that our model is consistent with the real networks in many topological features.

  6. Linear mixing model applied to coarse resolution satellite data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to coarse resolution NOAA Advanced Very High Resolution Radiometer satellite data. The reflective portion extracted from the middle-IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the constrained least squares model and generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of unmixing techniques applied to coarse resolution data for global studies.
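
    The constrained least squares step above amounts to solving r = E f for endmember fractions f that are nonnegative and sum to one, band by band and pixel by pixel. A one-pixel sketch with invented three-band endmember spectra (not AVHRR calibrations), enforcing the sum-to-one constraint as a heavily weighted extra equation:

```python
# Constrained-least-squares unmixing for one pixel: r = E @ f with
# f >= 0 and sum(f) ~ 1. Endmember spectra are invented for illustration.
import numpy as np
from scipy.optimize import nnls

# Columns: vegetation, soil, shade; rows: three spectral bands.
E = np.column_stack([[0.05, 0.45, 0.10],
                     [0.20, 0.30, 0.25],
                     [0.02, 0.03, 0.02]])
true_f = np.array([0.6, 0.3, 0.1])
r = E @ true_f + 0.002 * np.random.default_rng(2).normal(size=3)

w = 1e3                                  # soft sum-to-one constraint weight
A = np.vstack([E, w * np.ones((1, 3))])
b = np.append(r, w)
f, _ = nnls(A, b)                        # nonnegativity is built into NNLS
print("recovered fractions:", f.round(3))
```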

  7. A Mixed Approach for Modeling Blood Flow in Brain Microcirculation

    NASA Astrophysics Data System (ADS)

    Lorthois, Sylvie; Peyrounette, Myriam; Davit, Yohan; Quintard, Michel; Groupe d'Etude sur les Milieux Poreux Team

    2015-11-01

    Consistent with its distribution and exchange functions, the vascular system of the human brain cortex is a superposition of two components: at small scale, a homogeneous and space-filling mesh-like capillary network; at large scale, quasi-fractal branched veins and arteries. From a modeling perspective, this is the superposition of (a) a continuum model resulting from the homogenization of slow transport in the small-scale capillary network, and (b) a discrete network approach describing fast transport in the arteries and veins, which cannot be homogenized because of their fractal nature. This problem is analogous to that of fast-conducting wells embedded in reservoir rock in petroleum engineering. An efficient method to reduce the computational cost is to use relatively large grid blocks for the continuum model. This makes it difficult to accurately couple both components. We solve this issue by adapting the 'well model' concept used in petroleum engineering to brain-specific 3D situations. We obtain a unique linear system describing the discrete network, the continuum and the well model. Results are presented for realistic arterial and venous geometries. The mixed approach is compared with full network models including various idealized capillary networks of known permeability. ERC BrainMicroFlow GA615102.

  8. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data-analytic results from three regression…

  9. Mesoscale Modeling During Mixed-Phase Arctic Cloud Experiment

    SciTech Connect

    Avramov, A.; Harringston, J.Y.; Verlinde, J.

    2005-03-18

    Mixed-phase arctic stratus clouds are the predominant cloud type in the Arctic (Curry et al. 2000) and through various feedback mechanisms exert a strong influence on the Arctic climate. Perhaps one of the most intriguing of their features is that they tend to have liquid tops that precipitate ice. Despite the fact that this situation is colloidally unstable, these cloud systems are quite long lived - from a few days to over a couple of weeks. It has been hypothesized that mixed-phase clouds are maintained through a balance between liquid water condensation resulting from cloud-top radiative cooling and ice removal by precipitation (Pinto 1998; Harrington et al. 1999). In their modeling study, Harrington et al. (1999) found that the maintenance of this balance depends strongly on the ambient concentration of ice-forming nuclei (IFN). In a follow-up study, Jiang et al. (2002), using only 30% of the IFN concentration predicted by the Meyers et al. (1992) IFN parameterization, were able to obtain results similar to the observations reported by Pinto (1998). The IFN concentration measurements collected during the Mixed-Phase Arctic Cloud Experiment (M-PACE), conducted in October 2004 over the North Slope of Alaska and the Beaufort Sea (Verlinde et al. 2005), also showed much lower values than those predicted (Prenne, pers. comm.) by currently accepted ice nucleation parameterizations (e.g. Meyers et al. 1992). The goal of this study is to use the extensive IFN data taken during M-PACE to examine what effects low IFN concentrations have on mesoscale cloud structure and coastal dynamics.

  10. Tactical application of an atmospheric mixed-layer model

    NASA Astrophysics Data System (ADS)

    Graves, R. M.

    1982-12-01

    Modern Naval weapon and sensor systems are strongly influenced by the marine environment. Foremost among the atmospheric effects is the ducting of electromagnetic energy by refractive layers in the atmosphere. To assess the effect of ducting on electromagnetic emissions, the Navy developed the Integrated Refractive Effects Prediction System (IREPS). Research at the Naval Postgraduate School (NPS) has led to the development of a state-of-the-art model which can be used to predict changes to the refractive profile of the lower atmosphere. The model uses radiosonde data and surface meteorological observations to predict changes in refractive conditions and low-level cloud/fog formation over 18 to 30 hour periods. The model shows some skill in forecasting duct regions when subsidence rates can be specified to within +/- 0.0015 m/s. This thesis shows the applicability of the NPS marine atmospheric mixed-layer model to fleet tactics. Atmospheric refractive effects on specific emitters can be predicted when model predictions are used in conjunction with IREPS.

  11. A new unsteady mixing model to predict NO(x) production during rapid mixing in a dual-stage combustor

    NASA Technical Reports Server (NTRS)

    Menon, Suresh; Mcmurtry, Patrick A.; Kerstein, Alan R.; Chen, J.-Y.

    1992-01-01

    An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A dual-stage combustor with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary stage product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the secondary stage. Numerical design studies using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used in a stand-alone mode to study mixing and combustion in hydrogen-air nonpremixed jet flames. NO(x) production in these jet flames was also predicted. Comparison of the computed results with experimental data shows good agreement, thereby validating the mixing model.

  12. IMAT graphics manual

    NASA Technical Reports Server (NTRS)

    Stockwell, Alan E.; Cooper, Paul A.

    1991-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu driven executive system coupled with a relational database which links commercial structures, structural dynamics and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.

  13. Efficient material flow in mixed model assembly lines.

    PubMed

    Alnahhal, Mohammed; Noche, Bernd

    2013-01-01

    In this study, material flow from decentralized supermarkets to stations in mixed model assembly lines using tow (tugger) trains is investigated. Train routing, scheduling, and loading problems are investigated in parallel to minimize the number of trains, the variability in loading and in route lengths, and line-side inventory holding costs. The general framework for solving these problems in parallel contains analytical equations, Dynamic Programming (DP), and Mixed Integer Programming (MIP). Matlab in conjunction with the LP-solve software was used to formulate the problem. An example is presented to explain the idea. Results, which were obtained in very short CPU time, showed the effect of using a time buffer among routes on the feasible space and on the optimal solution. Results also showed the effect of the objective of reducing the variability in loading on the results of routing, scheduling, and loading. Moreover, results showed the importance of considering the maximum line-side inventory alongside the capacity of the train when finding the optimal solution. PMID:24024101
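
    As a rough illustration of the loading subproblem, the toy mixed integer program below assigns per-route material demands to tugger trains while minimizing the number of trains used. It is a bin-packing sketch written with the PuLP library and made-up data, far simpler than the parallel routing, scheduling, and loading framework of the paper.

        from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

        # Hypothetical data: bins of material demanded per route, train capacity.
        demand = {"R1": 4, "R2": 3, "R3": 5, "R4": 2}
        trains = ["T1", "T2", "T3"]
        capacity = 8
        routes = list(demand)

        prob = LpProblem("tugger_train_loading", LpMinimize)
        x = LpVariable.dicts("assign", (routes, trains), cat=LpBinary)
        y = LpVariable.dicts("used", trains, cat=LpBinary)

        prob += lpSum(y[t] for t in trains)              # minimize trains used
        for r in routes:                                 # each route served once
            prob += lpSum(x[r][t] for t in trains) == 1
        for t in trains:                                 # respect train capacity
            prob += lpSum(demand[r] * x[r][t] for r in routes) <= capacity * y[t]

        prob.solve()
        print([(r, t) for r in routes for t in trains if x[r][t].value() == 1])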

  14. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient

  15. Nonlinear resonances and mixing in a simple shallow lake model

    NASA Astrophysics Data System (ADS)

    Sandor, Balazs

    2013-04-01

    Large-scale transport in environmental flows is often dominantly determined by the velocity field of the flow. Diffusion of certain quantities, like pollutants and temperature, can be neglected with respect to advective transport. Understanding the topological features of the velocity field is thus very important for the qualitative analysis of the large-scale mixing properties of these passive scalars in water bodies. Large horizontal circulating zones (often called gyres) are prevalent structures of wind-induced shallow lake flows. In this presentation we analyse the currents generated by wind in a square-shaped shallow lake. In the case of a steady flow field, induced by a time-independent wind stress field, the typical flow pattern consists of two counter-rotating gyres. When applying periodic disturbances in the wind stress field, mixing regions of different widths develop between the gyres. This region is filled with coherent structures, strongly increasing advective transport in the lake. Meanwhile, the inner regions of the gyres remain stable; their outer periodic orbits serve as transport barriers. Our claim is that the width of the mixing region reaches its maximum at a certain scale of wind disturbance frequencies. This characteristic frequency scale corresponds to the typical circulation frequencies of the gyres. Our flow model consists of a two-dimensional, depth-averaged flow field of the volume-preserving water body with wind surface stress. The flow has a stream function that satisfies the linearised shallow-water vorticity transport equation. This corresponds to a Hamiltonian system, where the stream function plays the role of the Hamiltonian. In the steady state the gyres consist of periodic orbits, so this is a one-degree-of-freedom integrable mechanical system, like the undamped pendulum. In the periodically disturbed case the system remains Hamiltonian, with a topological similarity to the phase portrait of the forced pendulum. Thus we can
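
    The mechanism described, gyres whose boundary develops a mixing region under periodic forcing, can be reproduced with the standard periodically forced double-gyre stream function, used below as a stand-in for the lake flow; the parameter values are assumed, not taken from the study.

        import numpy as np

        A, eps, om = 0.1, 0.25, 2 * np.pi / 10.0   # assumed forcing parameters

        def velocity(x, y, t):
            # u = -d(psi)/dy, v = d(psi)/dx for psi = A sin(pi f(x,t)) sin(pi y).
            s = eps * np.sin(om * t)
            f = s * x**2 + (1 - 2 * s) * x
            df = 2 * s * x + (1 - 2 * s)
            u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
            v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * df
            return u, v

        def rk4_step(x, y, t, dt):
            k1u, k1v = velocity(x, y, t)
            k2u, k2v = velocity(x + 0.5*dt*k1u, y + 0.5*dt*k1v, t + 0.5*dt)
            k3u, k3v = velocity(x + 0.5*dt*k2u, y + 0.5*dt*k2v, t + 0.5*dt)
            k4u, k4v = velocity(x + dt*k3u, y + dt*k3v, t + dt)
            return (x + dt/6 * (k1u + 2*k2u + 2*k3u + k4u),
                    y + dt/6 * (k1v + 2*k2v + 2*k3v + k4v))

        # Seed tracers on the inter-gyre boundary and watch the blob stretch.
        rng = np.random.default_rng(0)
        x = 1.0 + 0.05 * rng.standard_normal(2000)
        y = 0.5 + 0.05 * rng.standard_normal(2000)
        t, dt = 0.0, 0.01
        for _ in range(3000):
            x, y = rk4_step(x, y, t, dt)
            t += dt
        print("tracer spread:", float(x.std()), float(y.std()))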

  16. Mixed-Effects Modeling with Crossed Random Effects for Subjects and Items

    ERIC Educational Resources Information Center

    Baayen, R. H.; Davidson, D. J.; Bates, D. M.

    2008-01-01

    This paper provides an introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixed-effects analyses compared to…
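
    A minimal sketch of crossed random intercepts for subjects and items on simulated data, using the variance-components formulation available in Python's statsmodels rather than the software discussed in the paper; every value below is invented.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # 20 subjects x 15 items x 2 conditions of simulated reaction times.
        rng = np.random.default_rng(1)
        n_subj, n_item = 20, 15
        df = pd.DataFrame([(s, i, c) for s in range(n_subj)
                           for i in range(n_item) for c in (0, 1)],
                          columns=["subject", "item", "condition"])
        subj_re = rng.normal(0, 30, n_subj)
        item_re = rng.normal(0, 20, n_item)
        df["rt"] = (600 + 25 * df["condition"]
                    + subj_re[df["subject"]] + item_re[df["item"]]
                    + rng.normal(0, 50, len(df)))

        # Crossed random intercepts via variance components on one dummy group.
        df["g"] = 1
        vc = {"subject": "0 + C(subject)", "item": "0 + C(item)"}
        fit = smf.mixedlm("rt ~ condition", df, groups="g",
                          vc_formula=vc, re_formula="0").fit()
        print(fit.summary())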

  17. The TEKLIB graphic library

    NASA Technical Reports Server (NTRS)

    Bostic, S. W.

    1983-01-01

    TEKLIB is a library of procedures written in TI PASCAL to perform basic graphic tasks. TEKLIB was written to provide an interface between a graphics terminal and the TI 990. The TI 990 is used as a controller for the Finite Element Machine which is an array of microprocessors designed to solve problems by finite element methods in parallel. The use of TEKLIB provides a means of inputting data graphically and displaying output.

  18. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

    SciTech Connect

    JACKSON VL

    2011-08-31

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  19. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison

    NASA Astrophysics Data System (ADS)

    Cooper, Richard J.; Krueger, Tobias; Hiscock, Kevin M.; Rawlins, Barry G.

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations.
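
    Stripped of its Bayesian error structure, the core of such a mixing model is a composition-constrained least squares unmixing, proportions non-negative and summing to one, sketched below with invented tracer data; the 13 variants in the study differ precisely in the error and prior assumptions this bare version omits.

        import numpy as np
        from scipy.optimize import minimize

        def unmix(sources, mixture):
            # sources: (n_sources, n_tracers); mixture: (n_tracers,).
            # Minimize ||mixture - sources.T @ p||^2 s.t. p >= 0, sum(p) = 1.
            n = sources.shape[0]
            res = minimize(lambda p: np.sum((mixture - sources.T @ p) ** 2),
                           x0=np.full(n, 1.0 / n),
                           bounds=[(0.0, 1.0)] * n,
                           constraints=[{"type": "eq",
                                         "fun": lambda p: p.sum() - 1.0}])
            return res.x

        S = np.array([[10.0, 5.0, 2.0, 8.0],   # arable topsoil (invented)
                      [ 4.0, 9.0, 7.0, 3.0],   # road verge
                      [ 1.0, 2.0, 9.0, 6.0]])  # subsurface material
        sample = 0.15 * S[0] + 0.10 * S[1] + 0.75 * S[2]
        print(unmix(S, sample))                # recovers ~[0.15, 0.10, 0.75]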

  20. Multilevel Latent Class Models with Dirichlet Mixing Distribution

    PubMed Central

    Di, Chong-Zhi; Bandeen-Roche, Karen

    2010-01-01

    Summary Latent class analysis (LCA) and latent class regression (LCR) are widely used for modeling multivariate categorical outcomes in social science and biomedical studies. Standard analyses assume the data of different respondents to be mutually independent, excluding application of the methods to familial and other designs in which participants are clustered. In this paper, we consider multilevel latent class models, in which subpopulation mixing probabilities are treated as random effects that vary among clusters according to a common Dirichlet distribution. We apply the Expectation-Maximization (EM) algorithm for model fitting by maximum likelihood (ML). This approach works well, but is computationally intensive when either the number of classes or the cluster size is large. We propose a maximum pairwise likelihood (MPL) approach via a modified EM algorithm for this case. We also show that a simple latent class analysis, combined with robust standard errors, provides another consistent, robust, but less efficient inferential procedure. Simulation studies suggest that the three methods work well in finite samples, and that the MPL estimates often enjoy precision comparable to that of the ML estimates. We apply our methods to the analysis of comorbid symptoms in the Obsessive Compulsive Disorder study. Our models' random effects structure has a more straightforward interpretation than those of competing methods, and should thus usefully augment the tools available for latent class analysis of multilevel data. PMID:20560936
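
    For reference, the sketch below implements the EM algorithm for a plain single-level latent class model with binary items; the multilevel Dirichlet random effects and the pairwise-likelihood variant described here are omitted, and the formulation is a generic textbook one rather than the authors' code.

        import numpy as np

        def lca_em(X, K, n_iter=200, seed=0):
            # X: (n, J) array of 0/1 responses. Returns class weights pi and
            # item probabilities theta[k, j] = P(item j = 1 | class k).
            rng = np.random.default_rng(seed)
            n, J = X.shape
            pi = np.full(K, 1.0 / K)
            theta = rng.uniform(0.3, 0.7, (K, J))
            for _ in range(n_iter):
                # E-step: responsibilities, computed in log space for stability.
                log_r = (np.log(pi) + X @ np.log(theta).T
                         + (1 - X) @ np.log(1 - theta).T)
                log_r -= log_r.max(axis=1, keepdims=True)
                r = np.exp(log_r)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: update class weights and item probabilities.
                nk = r.sum(axis=0)
                pi = nk / n
                theta = ((r.T @ X) / nk[:, None]).clip(1e-6, 1 - 1e-6)
            return pi, theta

        # Demo: 1000 respondents, two classes, five binary items.
        rng = np.random.default_rng(4)
        z = rng.integers(0, 2, 1000)
        true_theta = np.array([[0.9, 0.8, 0.7, 0.2, 0.1],
                               [0.1, 0.2, 0.3, 0.8, 0.9]])
        X = rng.binomial(1, true_theta[z])
        pi_hat, theta_hat = lca_em(X, K=2)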

  1. Bayesian Gaussian Copula Factor Models for Mixed Data

    PubMed Central

    Murray, Jared S.; Dunson, David B.; Carin, Lawrence; Lucas, Joseph E.

    2013-01-01

    Gaussian factor models have proven widely useful for parsimoniously characterizing dependence in multivariate data. There is a rich literature on their extension to mixed categorical and continuous variables, using latent Gaussian variables or through generalized latent trait models accommodating measurements in the exponential family. However, when generalizing to non-Gaussian measured variables, the latent variables typically influence both the dependence structure and the form of the marginal distributions, complicating interpretation and introducing artifacts. To address this problem we propose a novel class of Bayesian Gaussian copula factor models which decouple the latent factors from the marginal distributions. A semiparametric specification for the marginals based on the extended rank likelihood yields straightforward implementation and substantial computational gains. We provide new theoretical and empirical justifications for using this likelihood in Bayesian inference. We propose new default priors for the factor loadings and develop efficient parameter-expanded Gibbs sampling for posterior computation. The methods are evaluated through simulations and applied to a dataset in political science. The models in this paper are implemented in the R package bfa. PMID:23990691

  2. Subgrid models for mass and thermal diffusion in turbulent mixing

    NASA Astrophysics Data System (ADS)

    Lim, H.; Yu, Y.; Glimm, J.; Li, X.-L.; Sharp, D. H.

    2010-12-01

    We propose a new method for the large eddy simulation (LES) of turbulent mixing flows. The method yields convergent probability distribution functions (PDFs) for temperature and concentration and a chemical reaction rate when applied to reshocked Richtmyer-Meshkov (RM) unstable flows. Because such a mesh convergence is an unusual and perhaps original capability for LES of RM flows, we review previous validation studies of the principal components of the algorithm. The components are (i) a front tracking code, FronTier, to control numerical mass diffusion and (ii) dynamic subgrid scale (SGS) models to compensate for unresolved scales in the LES. We also review the relevant code comparison studies. We compare our results to a simple model based on 1D diffusion, taking place in the geometry defined statistically by the interface (the 50% isoconcentration surface between the two fluids). Several conclusions important to physics could be drawn from our study. We model chemical reactions with no closure approximations beyond those in the LES of the fluid variables itself, and as with dynamic SGS models, these closures contain no adjustable parameters. The chemical reaction rate is specified by the joint PDF for temperature and concentration. We observe a bimodal distribution for the PDF and we observe significant dependence on fluid transport parameters.

  3. Chemical geothermometers and mixing models for geothermal systems

    USGS Publications Warehouse

    Fournier, R.O.

    1977-01-01

    Qualitative chemical geothermometers utilize anomalous concentrations of various "indicator" elements in groundwaters, streams, soils, and soil gases to outline favorable places to explore for geothermal energy. Some of the qualitative methods, such as the delineation of mercury and helium anomalies in soil gases, do not require the presence of hot springs or fumaroles. However, these techniques may also outline fossil thermal areas that are now cold. Quantitative chemical geothermometers and mixing models can provide information about present probable minimum subsurface temperatures. Interpretation is easiest where several hot or warm springs are present in a given area. At this time the most widely used quantitative chemical geothermometers are silica, Na/K, and Na-K-Ca.
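
    The quantitative geothermometers named here reduce to closed-form temperature estimates from water chemistry. The sketch below encodes two commonly quoted calibrations, a quartz (no steam loss) form and a Na/K form, both often attributed to Fournier; published constants vary between calibrations, so treat these as illustrative rather than authoritative.

        import math

        def quartz_no_steam_loss(sio2_mg_kg):
            # Quartz geothermometer, commonly quoted as
            #   T(degC) = 1309 / (5.19 - log10(SiO2)) - 273.15
            return 1309.0 / (5.19 - math.log10(sio2_mg_kg)) - 273.15

        def na_k(na_mg_kg, k_mg_kg):
            # One common Na/K calibration; constants differ between sources.
            return 1217.0 / (math.log10(na_mg_kg / k_mg_kg) + 1.483) - 273.15

        print(round(quartz_no_steam_loss(300.0), 1))   # a 300 mg/kg silica water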

  4. Neutrino mixing in a left-right model

    NASA Astrophysics Data System (ADS)

    Martins Simões, J. A.; Ponciano, J. A.

    We study the mixing among different generations of massive neutrino fields in a model that can accommodate a consistent pattern of neutral fermion masses as well as neutrino oscillations. The left and right sectors can be connected by a new neutral current. PACS: 12.60.-i, 14.60.St, 14.60.Pq

  5. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison

    PubMed Central

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-01-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key Points: An OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted. Bayesian models display high sensitivity to error assumptions and structural choices. Source apportionment results differ between Bayesian and frequentist approaches. PMID

  6. Recycle of mixed automotive plastics: A model study

    NASA Astrophysics Data System (ADS)

    Woramongconchai, Somsak

    decreased with increased twin-screw extrusion temperature. The flexural modulus of the recycled mixed automotive plastics expected in 2003 was higher than that of the 1980s and 1990s recycles. Flexural strength effects were not large enough for serious consideration, but were more pronounced than those in the 1980s and 1990s. Impact strengths at 20-30 J/m were the lowest values, compared to the 1980s and 1990s mixed automotive recycles. Torque rheometry, dynamic mechanical analysis, and optical and electron microscopy agreed with each other on the characterization of the processability and morphology of the blends. LLDPE and HDPE were miscible, while PP was partially miscible with polyethylene. ABS and nylon-6 were immiscible with the polyolefins, but partially miscible with each other. As expected, the polyurethane foam was immiscible with the other components. The minor components of the model recycle of mixed automotive materials were probably partially miscible with ABS/nylon-6, but there were multiple and unresolved phases in the major blends.

  7. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, rapidly developing geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (following the Client/Server or Browser/Server model) does not support this well. Collaborative applications are one good resolution. A collaborative application has four main problems to resolve: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in mixed network structures and supports distributed collaboration. An agent is an autonomous, interactive, initiative, and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and they bring new methods for cooperation and for access to spatial data. The multi-level cache holds a part of the full data set; it reduces network load and improves the access to and handling of spatial data, especially when editing spatial data. With agent technology, we make full use of its intelligent characteristics for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  8. Trends in stratospheric ozone profiles using functional mixed models

    NASA Astrophysics Data System (ADS)

    Park, A. Y.; Guillas, S.; Petropavlovskikh, I.

    2013-05-01

    This paper is devoted to the modeling of altitude-dependent patterns of ozone variations over time. Umkehr ozone profiles (quarter of Umkehr layer) from 1978 to 2011 are investigated at two locations: Boulder (USA) and Arosa (Switzerland). The study consists of two statistical stages. First we approximate ozone profiles employing an appropriate basis. To capture the primary modes of ozone variation without losing essential information, a functional principal component analysis is performed, as it penalizes roughness of the function and smooths excessive variations in the shape of the ozone profiles. As a result, data-driven basis functions are obtained. Secondly, we estimate the effects of covariates - month, year (trend), the quasi-biennial oscillation, the solar cycle, the Arctic Oscillation, and the El Niño/Southern Oscillation cycle - on the principal component scores of the ozone profiles over time using generalized additive models. The effects are smooth functions of the covariates, represented by knot-based cubic regression splines. Finally, we employ generalized additive mixed-effects models incorporating a more complex error structure that reflects the observed seasonality in the data. The analysis provides more accurate estimates of influences and trends, together with enhanced uncertainty quantification. We are able to capture fine variations in the time evolution of the profiles, such as the semi-annual oscillation. We conclude by showing the trends by altitude over Boulder. The strongly declining trends over 2003-2011 for altitudes of 32-64 hPa show that stratospheric ozone is not yet fully recovering.
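
    The first stage can be approximated by an unpenalized functional PCA via the SVD of the centered profile matrix, as sketched below with synthetic profiles; the roughness penalty and the subsequent generalized additive (mixed) model stages are omitted.

        import numpy as np

        def fpca(profiles, n_pc=3):
            # Rows are dates, columns are altitude levels.
            mean = profiles.mean(axis=0)
            U, s, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
            scores = U[:, :n_pc] * s[:n_pc]        # one score series per PC
            components = Vt[:n_pc]                 # data-driven basis functions
            var_explained = s[:n_pc] ** 2 / np.sum(s ** 2)
            return mean, components, scores, var_explained

        # Synthetic profiles: 400 dates x 61 levels, two smooth modes plus noise.
        rng = np.random.default_rng(0)
        lev = np.linspace(0, 1, 61)
        data = (np.outer(rng.standard_normal(400), np.sin(np.pi * lev))
                + 0.5 * np.outer(rng.standard_normal(400), np.cos(np.pi * lev))
                + 0.1 * rng.standard_normal((400, 61)))
        mean, comps, scores, ve = fpca(data)
        # The score series would then be regressed on month, trend, QBO, etc.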

  9. Using Bayesian Stable Isotope Mixing Models to Enhance Marine Ecosystem Models

    EPA Science Inventory

    The use of stable isotopes in food web studies has proven to be a valuable tool for ecologists. We investigated the use of Bayesian stable isotope mixing models as constraints for an ecosystem model of a temperate seagrass system on the Atlantic coast of France. δ13C and δ15N i...

  10. Cruise observation and numerical modeling of turbulent mixing in the Pearl River estuary in summer

    NASA Astrophysics Data System (ADS)

    Pan, Jiayi; Gu, Yanzhen

    2016-06-01

    The turbulent mixing in the Pearl River estuary and plume area is analyzed by using cruise data and simulation results from the Regional Ocean Modeling System (ROMS). The cruise observations reveal that strong mixing appeared in the bottom layer during larger ebb tides in the estuary. Modeling simulations are consistent with the observations and suggest that, inside the estuary and in the near-shore water, mixing is stronger on ebb than on flood. The mixing-generation analysis based on the modeling data reveals that bottom stress is responsible for the generation of turbulence in the estuary; in the re-circulating plume area, internal shear instability plays an important role in the mixing; and wind may induce surface mixing in the plume far field. The estuary mixing is controlled by the tidal strength, and in the re-circulating plume bulge the wind stirring may reinforce the internal shear instability mixing.

  11. Graphics mini manual

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Randall, Donald P.; Bowen, John T.; Johnson, Mary M.; Roland, Vincent R.; Matthews, Christine G.; Gates, Raymond L.; Skeens, Kristi M.; Nolf, Scott R.; Hammond, Dana P.

    1990-01-01

    The computer graphics capabilities available at the Center are introduced and their use is explained. More specifically, the manual identifies and describes the various graphics software and hardware components, details the interfaces between these components, and provides information concerning the use of these components at LaRC.

  12. Quantitative Graphics in Newspapers.

    ERIC Educational Resources Information Center

    Tankard, James W., Jr.

    The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

  13. How Computer Graphics Work.

    ERIC Educational Resources Information Center

    Prosise, Jeff

    This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…

  14. Mixed axion/neutralino cold dark matter in supersymmetric models

    SciTech Connect

    Baer, Howard; Lessa, Andre; Rajagopalan, Shibi; Sreethawong, Warintorn

    2011-06-01

    We consider supersymmetric (SUSY) models wherein the strong CP problem is solved by the Peccei-Quinn (PQ) mechanism with a concomitant axion/axino supermultiplet. We examine R-parity conserving models where the neutralino is the lightest SUSY particle, so that a mixture of neutralinos and axions serves as cold dark matter (aZ̃₁ CDM). The mixed aZ̃₁ CDM scenario can match the measured dark matter abundance for SUSY models which typically give too low a value of the usual thermal neutralino abundance, such as models with wino-like or higgsino-like dark matter. The usual thermal neutralino abundance can be greatly enhanced by the decay of thermally-produced axinos (ã) to neutralinos, followed by neutralino re-annihilation at temperatures much lower than freeze-out. In this case, the relic density is usually neutralino dominated, and goes as ∼ (f_a/N)/m_ã^(3/2). If axino decay occurs before neutralino freeze-out, then instead the neutralino abundance can be augmented by relic axions to match the measured abundance. Entropy production from late-time axino decays can diminish the axion abundance, but ultimately not the neutralino abundance. In aZ̃₁ CDM models, it may be possible to detect both a WIMP and an axion as dark matter relics. We also discuss possible modifications of our results due to production and decay of saxions. In the appendices, we present expressions for the Hubble expansion rate and the axion and neutralino relic densities in radiation, matter, and decaying-particle dominated universes.

  15. Mixed-Effects Logistic Regression Models for Indirectly Observed Discrete Outcome Variables

    ERIC Educational Resources Information Center

    Vermunt, Jeroen K.

    2005-01-01

    A well-established approach to modeling clustered data introduces random effects in the model of interest. Mixed-effects logistic regression models can be used to predict discrete outcome variables when observations are correlated. An extension of the mixed-effects logistic regression model is presented in which the dependent variable is a latent…

  16. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  17. Debugging software with animated graphics

    NASA Astrophysics Data System (ADS)

    Horn, Dawn; Scrip, Becky; Scrip, Bill

    1997-07-01

    The traditional use of graphics and animation in engineering software development has been to demonstrate the function and utility of individual engineering tools. This paper illustrates the use of graphical rendering and animation for debugging large integrated simulations. The tools presented are part of the THAAD integrated system effectiveness simulation (TISES). TISES has integrated different segment software models to be able to perform analysis of a full THAAD (theater high altitude area defense) battalion. Within each model are implicit coordinates, transformations, and reference values (e.g., earth radius) which may or may not match those of adjacent models. Each interface or integration between the models introduces a source of error. TISES also utilizes many different input parameters from a variety of external sources that can be a source of error. The TISES development team has found graphics and animation to be extremely helpful in testing and debugging these interface problems. This paper includes examples of input data verification, model-to-model interfaces, and model-versus-model perceptions that have been utilized in TISES development.

  18. Using Graphic Organizers in Intercultural Education

    ERIC Educational Resources Information Center

    Ciascai, Liliana

    2009-01-01

    Graphic organizers are instruments for the representation, illustration, and modeling of information. In educational practice they are used for building and systematizing knowledge. Graphic organizers are instruments that address mostly the visual learning style, but their use is beneficial to all learners. In this paper we illustrate the use of…

  19. Mixed dark matter in left-right symmetric models

    NASA Astrophysics Data System (ADS)

    Berlin, Asher; Fox, Patrick J.; Hooper, Dan; Mohlabeng, Gopolang

    2016-06-01

    Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W' boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario, g_R = g_L. This region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.

  20. Validation of hydrogen gas stratification and mixing models

    DOE PAGESBeta

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  1. Validation of hydrogen gas stratification and mixing models

    SciTech Connect

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  2. Coolant mixing and distribution in a transparent reactor model

    SciTech Connect

    Fanning, M.W.; Haury, G.; Pflug, L.; Rothe, P.H.

    1983-11-01

    Following a small break loss-of-coolant accident in a pressurized water reactor, coolant water may be injected at high pressure to help cool the core. This paper reports the results of tests which determined the mixing and distribution of the coolant in a 1/5-scale transparent model of the reactor. The model components included the reactor vessel, cold leg pipe, pump, and loop seal, with steam generator and hot leg simulators completing the flow loop. Tests were conducted for a no-refill condition with constant liquid inventory in the facility and zero flow of the primary water. Salt water, dyed red, was used for the coolant water to create prototypical density differences in this atmospheric facility. Steady-state fluid distribution was determined from flow and density measurements and complete mass balances. Interpretation of the quantitative results was aided by extensive flow visualization studies, which include still photographs and motion picture films for all tests. The test parameters included the fluid density ratio, the flow rate of coolant water, and the flow rate of primary water injected in the vessel downcomer to simulate a natural circulation flow through vent valves between the reactor core and the downcomer. Four locations of the small break were tested.

  3. Validation of hydrogen gas stratification and mixing models

    SciTech Connect

    Wu, Hsingtzu; Zhao, Haihua

    2015-11-01

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. Computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  4. Mixed dark matter in left-right symmetric models

    DOE PAGESBeta

    Berlin, Asher; Fox, Patrick J.; Hooper, Dan; Mohlabeng, Gopolang

    2016-06-08

    Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W' boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario, g_R = g_L. Furthermore, this region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.

  5. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    PubMed

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used calculation methods for computer-generated holograms. This paper proposes a novel fast calculation method for a patch model, based on the point-based method. The method provides a calculation time that is proportional to the number of patches, not to the number of point light sources. This means the method is suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method. PMID:26835949
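
    The ordinary point-based method that the paper accelerates amounts to summing spherical waves from every object point over the hologram plane; the sketch below does exactly that (wavelength, pixel pitch, and object points are assumed values, and the patch decomposition and GPU kernel are not shown).

        import numpy as np

        wavelength = 532e-9                    # assumed green laser
        k = 2 * np.pi / wavelength

        # Hologram plane: 512 x 512 pixels at an 8 um pitch.
        n, pitch = 512, 8e-6
        xs = (np.arange(n) - n / 2) * pitch
        X, Y = np.meshgrid(xs, xs)

        # Object points (x, y, z, amplitude); z is the distance to the plane.
        points = [(0.0, 0.0, 0.10, 1.0), (1e-3, -5e-4, 0.12, 0.8)]

        field = np.zeros((n, n), dtype=complex)
        for px, py, pz, amp in points:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += amp * np.exp(1j * k * r) / r    # spherical wave per point

        hologram = np.abs(field + 1.0) ** 2    # interference with a unit plane wave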

  6. Strategies for the use of mixed-effects models in continuous forest inventories.

    PubMed

    Westfall, James A

    2016-04-01

    Forest inventory data often consists of measurements taken on field plots as well as values predicted from statistical models, e.g., tree biomass. Many of these models only include fixed-effects parameters either because at the time the models were established, mixed-effects model theory had not yet been thoroughly developed or the use of mixed models was deemed unnecessary or too complex. Over the last two decades, considerable research has been conducted on the use of mixed models in forestry, such that mixed models and their applications are generally well understood. However, most of these assessments have focused on static validation data, and mixed model applications in the context of continuous forest inventories have not been evaluated. In comparison to fixed-effects models, the results of this study showed that mixed models can provide considerable reductions in prediction bias and variance for the population and also for subpopulations therein. However, the random effects resulting from the initial model fit deteriorated rapidly over time, such that some field data is needed to effectively recalibrate the random effects for each inventory cycle. Thus, implementation of mixed models requires ongoing maintenance to reap the benefits of improved predictive behavior. Forest inventory managers must determine if this gain in predictive power outweighs the additional effort needed to employ mixed models in a temporal framework. PMID:27010710
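
    One way to read the recommended recalibration is as re-predicting each stand-level random intercept from a handful of new measurements with the standard empirical BLUP shrinkage formula, sketched below; the variance components and residuals are assumed values, not from the study.

        def recalibrate_random_intercept(residuals, var_b, var_e):
            # Empirical BLUP of a random intercept from the mean fixed-effects
            # residual of n new observations:
            #     b_hat = var_b / (var_b + var_e / n) * mean(residuals)
            # It shrinks toward 0 when the new data are scarce or noisy.
            n = len(residuals)
            shrink = var_b / (var_b + var_e / n)
            return shrink * (sum(residuals) / n)

        # Three re-measured plots under-predicted by the fixed part by ~4 units:
        print(recalibrate_random_intercept([3.5, 4.2, 4.4], var_b=6.0, var_e=10.0))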

  7. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994

  8. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994
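
    The object being sampled and counted here is the set of graphs realizing a fixed degree sequence with a forbidden edge set; a rough sketch of the underlying degree-preserving double edge swap chain follows (networkx is assumed for graph bookkeeping, and the proposal scheme is simplified relative to the chain analyzed in the paper).

        import random
        import networkx as nx

        def double_edge_swap_chain(G, steps, forbidden=frozenset(), seed=0):
            # Walk over simple-graph realizations of G's degree sequence.
            # Rejected proposals (loops, multi-edges, forbidden edges) leave
            # the state unchanged, so every visited graph has the same degrees.
            rng = random.Random(seed)
            G = G.copy()
            for _ in range(steps):
                (a, b), (c, d) = rng.sample(list(G.edges()), 2)
                if len({a, b, c, d}) < 4:
                    continue
                e1, e2 = tuple(sorted((a, c))), tuple(sorted((b, d)))
                if G.has_edge(*e1) or G.has_edge(*e2):
                    continue
                if e1 in forbidden or e2 in forbidden:
                    continue
                G.remove_edges_from([(a, b), (c, d)])
                G.add_edges_from([e1, e2])
            return G

        G = nx.gnm_random_graph(30, 60, seed=1)
        H = double_edge_swap_chain(G, steps=5000, forbidden=frozenset({(0, 1)}))
        assert sorted(d for _, d in G.degree()) == sorted(d for _, d in H.degree())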

  9. An improved mixing model providing joint statistics of scalar and scalar dissipation

    SciTech Connect

    Meyer, Daniel W.; Jenny, Patrick

    2008-11-15

    For the calculation of nonpremixed turbulent flames with thin reaction zones the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, which involve different initial scalar-field length scales.

  10. Identifying genetically driven clinical phenotypes using linear mixed models.

    PubMed

    Mosley, Jonathan D; Witte, John S; Larkin, Emma K; Bastarache, Lisa; Shaffer, Christian M; Karnes, Jason H; Stein, C Michael; Phillips, Elizabeth; Hebbring, Scott J; Brilliant, Murray H; Mayer, John; Ye, Zhan; Roden, Dan M; Denny, Joshua C

    2016-01-01

    We hypothesized that generalized linear mixed models (GLMMs), which estimate the additive genetic variance underlying phenotype variability, would facilitate rapid characterization of clinical phenotypes from an electronic health record. We evaluated 1,288 phenotypes in 29,349 subjects of European ancestry with single-nucleotide polymorphism (SNP) genotyping on the Illumina Exome Beadchip. We show that genetic liability estimates are primarily driven by SNPs identified by prior genome-wide association studies and SNPs within the human leukocyte antigen (HLA) region. We identify 44 (false discovery rate q<0.05) phenotypes associated with HLA SNP variation and show that hypothyroidism is genetically correlated with Type I diabetes (rG=0.31, s.e. 0.12, P=0.003). We also report novel SNP associations for hypothyroidism near HLA-DQA1/HLA-DQB1 at rs6906021 (combined odds ratio (OR)=1.2 (95% confidence interval (CI): 1.1-1.2), P=9.8 × 10(-11)) and for polymyalgia rheumatica near C6orf10 at rs6910071 (OR=1.5 (95% CI: 1.3-1.6), P=1.3 × 10(-10)). Phenome-wide application of GLMMs identifies phenotypes with important genetic drivers, and focusing on these phenotypes can identify novel genetic associations. PMID:27109359

  11. Genomic Heritability of Bovine Growth Using a Mixed Model

    PubMed Central

    Ryu, Jihye; Lee, Chaeyoung

    2014-01-01

    This study investigated heritability for bovine growth estimated with genomewide single nucleotide polymorphism (SNP) information obtained from a DNA microarray chip. Three hundred sixty seven Korean cattle were genotyped with the Illumina BovineSNP50 BeadChip, and 39,112 SNPs of 364 animals filtered by quality assurance were analyzed to estimate heritability of body weights at 6, 9, 12, 15, 18, 21, and 24 months of age. Restricted maximum likelihood estimate of heritability was obtained using covariance structure of genomic relationships among animals in a mixed model framework. Heritability estimates ranged from 0.58 to 0.76 for body weights at different ages. The heritability estimates using genomic information in this study were larger than those which had been estimated previously using pedigree information. The results revealed a trend that the heritability for body weight increased at a younger age (6 months). This suggests an early genetic evaluation for bovine growth using genomic information to increase genetic merits of animals. PMID:25358309
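
    The study estimated heritability by REML over a genomic relationship covariance structure. The sketch below builds a VanRaden-style genomic relationship matrix and applies a cruder, moment-based Haseman-Elston regression to it, which conveys the same variance-partitioning idea with far less machinery; all simulated values are invented.

        import numpy as np

        def vanraden_grm(geno):
            # VanRaden genomic relationship matrix from an (n x m) 0/1/2 matrix.
            p = geno.mean(axis=0) / 2.0
            Z = geno - 2.0 * p
            return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

        def he_regression_h2(grm, y):
            # Haseman-Elston style estimate: regress off-diagonal phenotype
            # cross-products (for standardized y) on the matching GRM entries.
            y = (y - y.mean()) / y.std()
            iu = np.triu_indices_from(grm, k=1)
            x, cp = grm[iu], np.outer(y, y)[iu]
            x = x - x.mean()
            return float(np.dot(x, cp - cp.mean()) / np.dot(x, x))

        # Demo with simulated genotypes and a trait of true h2 = 0.5.
        rng = np.random.default_rng(3)
        n, m, maf = 500, 2000, 0.3
        geno = rng.binomial(2, maf, size=(n, m)).astype(float)
        beta = rng.normal(0, np.sqrt(0.5 / (m * 2 * maf * (1 - maf))), m)
        y = geno @ beta + rng.normal(0, np.sqrt(0.5), n)
        print(he_regression_h2(vanraden_grm(geno), y))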

  12. Identifying genetically driven clinical phenotypes using linear mixed models

    PubMed Central

    Mosley, Jonathan D.; Witte, John S.; Larkin, Emma K.; Bastarache, Lisa; Shaffer, Christian M.; Karnes, Jason H.; Stein, C. Michael; Phillips, Elizabeth; Hebbring, Scott J.; Brilliant, Murray H.; Mayer, John; Ye, Zhan; Roden, Dan M.; Denny, Joshua C.

    2016-01-01

    We hypothesized that generalized linear mixed models (GLMMs), which estimate the additive genetic variance underlying phenotype variability, would facilitate rapid characterization of clinical phenotypes from an electronic health record. We evaluated 1,288 phenotypes in 29,349 subjects of European ancestry with single-nucleotide polymorphism (SNP) genotyping on the Illumina Exome Beadchip. We show that genetic liability estimates are primarily driven by SNPs identified by prior genome-wide association studies and SNPs within the human leukocyte antigen (HLA) region. We identify 44 (false discovery rate q<0.05) phenotypes associated with HLA SNP variation and show that hypothyroidism is genetically correlated with Type I diabetes (rG=0.31, s.e. 0.12, P=0.003). We also report novel SNP associations for hypothyroidism near HLA-DQA1/HLA-DQB1 at rs6906021 (combined odds ratio (OR)=1.2 (95% confidence interval (CI): 1.1–1.2), P=9.8 × 10−11) and for polymyalgia rheumatica near C6orf10 at rs6910071 (OR=1.5 (95% CI: 1.3–1.6), P=1.3 × 10−10). Phenome-wide application of GLMMs identifies phenotypes with important genetic drivers, and focusing on these phenotypes can identify novel genetic associations. PMID:27109359

  13. Estimation of trends in rainfall extremes with mixed effects models

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, M.; Beecham, S.; Metcalfe, A. V.

    2016-02-01

    Estimates of seasonal rainfall maxima at durations as short as 6 min are needed for many applications, including the design and analysis of urban drainage systems. It is also important to investigate whether or not there is evidence of changes in these extremes, both as an indicator of the sensitivity of rainfall to anthropogenic and natural climate change and as an aid to the calibration of future scenarios. Estimation of trends in extreme values in a region needs to be based on all the available data if precision is to be achieved. However, extremes at different periods of accumulation and at neighbouring sites are not independent, because of temporal and spatial correlations, respectively. A linear mixed effects (lme) model allows for this correlation structure and can be fitted to unequal record lengths at different sites. The modelling technique is demonstrated with an analysis of monthly maximum rainfall, at nine aggregations between 6 min and 24 h, from six sites, with record lengths between 10 and 25 years, from a region in South Australia. In terms of mean value, there is no evidence of a trend or of a change in the seasonal distribution of the monthly extreme rainfall. However, there is strong evidence of an increase in the variability of monthly extreme rainfall, estimated as a 58% increase in the absolute value of the deviation from the mean over a 25-year period. Rainfall records are often only available as a daily accumulation. A formula is estimated for the ratio of the monthly maxima at durations shorter than 24 h, down to 6 min, to the 24 h monthly maximum, in terms of duration, month of the year, and a site-specific adjustment. There is a clear seasonal variation in the ratios, and there is evidence of a difference between rainfall stations.
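
    The reported trend in variability but not in the mean can be probed with a two-stage mixed-model sketch like the one below: fit a mean model with a random site intercept, then regress the absolute residuals on year. The data are simulated to mimic the described 58% growth in spread; this is an illustration, not the authors' lme specification.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated monthly maxima at 6 sites over 25 years with a variance trend.
        rng = np.random.default_rng(2)
        rows = []
        for site in range(6):
            site_eff = rng.normal(0, 2)
            for year in range(25):
                sd = 5 * (1 + 0.58 * year / 25)      # ~58% growth in spread
                for month in range(12):
                    rows.append((site, year, 30 + site_eff + rng.normal(0, sd)))
        df = pd.DataFrame(rows, columns=["site", "year", "maxrain"])

        # Stage 1: mean model with a random site intercept (no mean trend expected).
        mean_fit = smf.mixedlm("maxrain ~ year", df, groups="site").fit()

        # Stage 2: a positive year effect on |residual| signals growing variability.
        df["absdev"] = np.abs(mean_fit.resid)
        var_fit = smf.mixedlm("absdev ~ year", df, groups="site").fit()
        print(var_fit.params["year"])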

  14. Engineering Graphics Educational Outcomes for the Global Engineer: An Update

    ERIC Educational Resources Information Center

    Barr, R. E.

    2012-01-01

    This paper discusses the formulation of educational outcomes for engineering graphics that span the global enterprise. Results of two repeated faculty surveys indicate that new computer graphics tools and techniques are now the preferred mode of engineering graphical communication. Specifically, 3-D computer modeling, assembly modeling, and model…

  15. A graphical ICU workstation.

    PubMed Central

    Higgins, S. B.; Jiang, K.; Swindell, B. B.; Bernard, G. R.

    1991-01-01

    A workstation designed to facilitate electronic charting in the intensive care unit is described. The system design incorporates a graphical, windows-based user interface. The system captures all data formerly recorded on the paper flowsheet including direct patient measurements, nursing assessment, patient care procedures, and nursing notes. It has the ability to represent charted data in a variety of graphical formats, thereby providing additional insights to facilitate the management of the critically ill patient. Initial nursing evaluation is described. PMID:1807712

  16. Flowfield computer graphics

    NASA Technical Reports Server (NTRS)

    Desautel, Richard

    1993-01-01

    The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).

  17. Modeling of mixing processes: Fluids, particulates, and powders

    SciTech Connect

    Ottino, J.M.; Hansen, S.

    1995-12-31

    Work under this grant involves two main areas: (1) mixing of viscous liquids, comprising aggregation, fragmentation, and dispersion; and (2) mixing of powders. In order to present a coherent, self-contained picture, we report primarily on results obtained under (1), and within this area mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius is independent of mass, the polydispersity is constant at long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters is dependent upon the mixing.
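
    As a rough companion to the growth laws quoted above, the toy Marcus-Lushnikov coagulation run below uses a mass-independent (constant) kernel, merging two uniformly chosen clusters per event. Mapping merge events to physical time, and hence recovering the algebraic versus exponential contrast, depends on how aggregation rates are normalized, which this sketch deliberately ignores.

        import random

        def constant_kernel_coagulation(n0=20000, merges=19000, seed=0):
            # Toy Marcus-Lushnikov process: each event merges two clusters
            # chosen uniformly at random (a constant, mass-independent kernel).
            rng = random.Random(seed)
            masses = [1] * n0
            trace = []
            for k in range(merges):
                i, j = sorted(rng.sample(range(len(masses)), 2))
                m = masses.pop(j)          # pop the later index first so that
                masses[i] += m             # index i remains valid
                if k % 2000 == 0:
                    trace.append(sum(masses) / len(masses))
            return trace

        print(constant_kernel_coagulation())   # mean cluster size over time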

  18. Computer modeling of forced mixing in waste storage tanks

    SciTech Connect

    Eyler, L.L.; Michener, T.E.

    1992-04-01

    Numerical simulation results of fluid dynamic and physical processes in radioactive waste storage tanks are presented. The investigations include simulation of jet mixing pump induced flows intended to mix and maintain particulate material uniformly distributed throughout the liquid volume. The physical effects of solids are included in the code: particle size, through a settling velocity, and mixture properties, through density and viscosity. Calculations have been performed for a centrally located, rotationally-oscillating, horizontally-directed jet mixing pump for two cases. One case, with low jet velocity and high settling velocity, results in a nonuniform distribution; the other, with high jet velocity and low settling velocity, results in uniform conditions. The results are being used to aid in experiment design and to understand mixing in the waste tanks. They are to be used in conjunction with scaled experiments to define limits of pump operation that maintain uniformity of the mixture in the storage tanks during waste retrieval operations.

  19. Fermion masses and mixings from heterotic orbifold models

    SciTech Connect

    Park, Jae-hyeon

    2005-12-02

    We search for a possibility of getting realistic fermion mass ratios and mixing angles from renormalizable couplings on the Z6-I heterotic orbifold with one pair of Higgs doublets. In the quark sector, we find cases with reasonable results if we ignore the first family. In the lepton sector, we can fit the charged lepton mass ratios, the neutrino mass squared difference ratio, and the lepton mixing angles, considering all three families.

  20. Three-dimensional modeling of the mixing state of particles over Greater Paris

    NASA Astrophysics Data System (ADS)

    Zhu, Shupeng; Sartelet, Karine; Zhang, Yang; Nenes, Athanasios

    2016-05-01

    A size-composition resolved aerosol model (SCRAM) is coupled to the Polyphemus air quality platform and evaluated over Greater Paris. SCRAM simulates the particle mixing state and solves the aerosol dynamic evolution taking into account the processes of coagulation, condensation/evaporation, and nucleation. Both the size and mass fractions of chemical components of particles are discretized. The performance of SCRAM in modeling air quality over Greater Paris is evaluated by comparison to PM2.5, PM10, and Aerosol Optical Depth (AOD) measurements. Because air quality models usually assume that particles are internally mixed, the impact of the mixing state on aerosol formation, composition, optical properties, and the particles' ability to be activated as cloud condensation nuclei (CCN) is investigated. The simulation results show that more than half (up to 80% during rush hours) of black carbon particles are barely mixed at the urban site of Paris, while they are more mixed with organic species at a rural site. The comparisons between the internal-mixing simulation and the mixing-state-resolved simulation show that the internal-mixing assumption leads to lower nitrate and higher ammonium concentrations in the particulate phase. Moreover, the internal-mixing assumption leads to lower single scattering albedo, and the difference in aerosol optical depth caused by the mixing-state assumption can be as high as 72.5%. Furthermore, the internal-mixing assumption leads to a lower CCN activation percentage at low supersaturation, but a higher CCN activation percentage at high supersaturation.

  1. The Vineyard Yeast Microbiome, a Mixed Model Microbial Map

    PubMed Central

    Setati, Mathabatha Evodia; Jacobson, Daniel; Andong, Ursula-Claire; Bauer, Florian

    2012-01-01

    Vineyards harbour a wide variety of microorganisms that play a pivotal role in pre- and post-harvest grape quality and will contribute significantly to the final aromatic properties of wine. The aim of the current study was to investigate the spatial distribution of microbial communities within and between individual vineyard management units. For the first time in such a study, we applied the Theory of Sampling (TOS) to sample grapes from adjacent and well-established commercial vineyards within the same terroir unit and from several sampling points within each individual vineyard. Cultivation-based and molecular data sets were generated to capture the spatial heterogeneity in microbial populations within and between vineyards and analysed with novel mixed-model networks, which combine sample correlations and microbial community distribution probabilities. The data demonstrate that farming systems have a significant impact on fungal diversity but more importantly that there is significant species heterogeneity between samples in the same vineyard. Cultivation-based methods confirmed that while the same oxidative yeast species dominated in all vineyards, the least treated vineyard displayed significantly higher species richness, including many yeasts with biocontrol potential. The cultivatable yeast population was not fully representative of the more complex populations seen with molecular methods, and only the molecular data allowed discrimination amongst farming practices with multivariate and network analysis methods. Importantly, yeast species distribution is subject to significant intra-vineyard spatial fluctuations and the frequently reported heterogeneity of tank samples of grapes harvested from single vineyards at the same stage of ripeness might therefore, at least in part, be due to the differing microbiota in different sections of the vineyard. PMID:23300721

  2. Some Recent Advances in Computer Graphics.

    ERIC Educational Resources Information Center

    Whitted, Turner

    1982-01-01

    General principles of computer graphics are reviewed, including discussions of display hardware, geometric modeling, algorithms, and applications in science, computer-aided design, flight training, communications, business, art, and entertainment. (JN)

  3. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  4. CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS

    EPA Science Inventory

    Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...

  5. Best practices for use of stable isotope mixing models in food-web studies

    EPA Science Inventory

    Stable isotope mixing models are increasingly used to quantify contributions of resources to consumers. While potentially powerful tools, these mixing models have the potential to be misused, abused, and misinterpreted. Here we draw on our collective experiences to address the qu...

  6. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  7. Students Perception towards the Implementation of Computer Graphics Technology in Class via Unified Theory of Acceptance and Use of Technology (UTAUT) Model

    NASA Astrophysics Data System (ADS)

    Binti Shamsuddin, Norsila

    Technology advancement and development in higher learning institutions give students a chance to be motivated to learn the information technology areas in depth. Students should seize the opportunity to develop their skills in these technologies as preparation for graduation. The curriculum itself can raise students' interest and persuade them to become directly involved in the evolution of the technology. The aim of this study is to gauge the depth of students' involvement as well as their acceptance of the technology used in Computer Graphics and Image Processing subjects. The study covers the Bachelor students in the Faculty of Industrial Information Technology (FIIT), Universiti Industri Selangor (UNISEL): Bac. in Multimedia Industry, BSc. Computer Science, and BSc. Computer Science (Software Engineering). This study utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) to further validate the model and enhance our understanding of the adoption of Computer Graphics and Image Processing technologies. Four (4) of the eight (8) independent factors in UTAUT are studied against the dependent factor.

  8. SutraGUI, a graphical-user interface for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Winston, Richard B.; Voss, Clifford I.

    2004-01-01

    This report describes SutraGUI, a flexible graphical user-interface (GUI) that supports two-dimensional (2D) and three-dimensional (3D) simulation with the U.S. Geological Survey (USGS) SUTRA ground-water-flow and transport model (Voss and Provost, 2002). SutraGUI allows the user to create SUTRA ground-water models graphically. SutraGUI provides all of the graphical functionality required for setting up and running SUTRA simulations that range from basic to sophisticated, but it is also possible for advanced users to apply programmable features within Argus ONE to meet the unique demands of particular ground-water modeling projects. SutraGUI is a public-domain computer program designed to run with the proprietary Argus ONE package, which provides 2D Geographic Information System (GIS) and meshing support. For 3D simulation, GIS and meshing support is provided by programming contained within SutraGUI. When preparing a 3D SUTRA model, the model and all of its features are viewed within Argus ONE in 2D projection. For 2D models, SutraGUI is only slightly changed in functionality from the previous 2D-only version (Voss and others, 1997) and it provides visualization of simulation results. In 3D, only model preparation is supported by SutraGUI, and 3D simulation results may be viewed in SutraPlot (Souza, 1999) or Model Viewer (Hsieh and Winston, 2002). A comprehensive online Help system is included in SutraGUI. For 3D SUTRA models, the 3D model domain is conceptualized as bounded on the top and bottom by 2D surfaces. The 3D domain may also contain internal surfaces extending across the model that divide the domain into tabular units, which can represent hydrogeologic strata or other features intended by the user. These surfaces can be non-planar and non-horizontal. The 3D mesh is defined by one or more 2D meshes at different elevations that coincide with these surfaces. If the nodes in the 3D mesh are vertically aligned, only a single 2D mesh is needed. For nonaligned

  9. Graphical algorithm for integration of genetic and biological data: proof of principle using psoriasis as a model

    PubMed Central

    Tsoi, Lam C.; Elder, James T.; Abecasis, Goncalo R.

    2015-01-01

    Motivation: Pathway analysis to reveal biological mechanisms for results from genetic association studies has great potential to better understand complex traits with major human disease impact. However, current approaches have not been optimized to maximize statistical power to identify enriched functions/pathways, especially when the genetic data derive from studies using platforms (e.g. Immunochip and Metabochip) customized to have pre-selected markers from previously identified top-rank loci. We present here a novel approach, called Minimum distance-based Enrichment Analysis for Genetic Association (MEAGA), with the potential to address both of these important concerns. Results: MEAGA performs enrichment analysis using graphical algorithms to identify sub-graphs among genes and measure their closeness in an interaction database. It also incorporates a statistic summarizing the numbers and total distances of the sub-graphs, depicting the overlap between observed genetic signals and defined function/pathway gene-sets. MEAGA uses a sampling technique to approximate empirical and multiple-testing-corrected P-values. We show in simulation studies that MEAGA is more powerful than count-based strategies in identifying disease-associated functions/pathways, and the increase in power is influenced by the shortest distances among associated genes in the interactome. We applied MEAGA to the results of a meta-analysis of psoriasis using Immunochip datasets, and showed that associated genes are significantly enriched in immune-related functions and closer to each other in the protein–protein interaction network. Availability and implementation: http://genome.sph.umich.edu/wiki/MEAGA Contact: tsoi.teen@gmail.com or goncalo@umich.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25480373
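
    As an illustration of the kind of distance-based enrichment statistic described above, the toy sketch below computes the total pairwise shortest-path distance of a gene set in an interaction network and a permutation P-value. It is a loose analogue of MEAGA, not the published algorithm; the graph, gene set, and disconnection penalty are all invented.

        import itertools
        import random

        import networkx as nx

        G = nx.erdos_renyi_graph(200, 0.03, seed=1)       # stand-in interactome
        hits = random.Random(2).sample(list(G.nodes), 8)  # stand-in associated genes

        def total_distance(nodes):
            """Sum of pairwise shortest-path distances, penalizing disconnection."""
            d = 0
            for u, v in itertools.combinations(nodes, 2):
                try:
                    d += nx.shortest_path_length(G, u, v)
                except nx.NetworkXNoPath:
                    d += len(G)                           # crude disconnection penalty
            return d

        obs = total_distance(hits)
        null = [total_distance(random.sample(list(G.nodes), len(hits)))
                for _ in range(200)]
        p = (1 + sum(n <= obs for n in null)) / (1 + len(null))  # smaller = closer
        print(obs, p)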

  10. Application of mixing-controlled combustion models to gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1990-01-01

    Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.

  11. A Graphical Physics Course

    NASA Astrophysics Data System (ADS)

    Wood, Roy C.

    2001-11-01

    There has been a desire in recent years to introduce physics to students at the middle school or freshman high school level. However, traditional physics courses involve a great deal of mathematics, and this makes physics unattractive to many of them. In the last few decades, courses have been developed with a focus that is more conceptual than mathematical, generally referred to as conceptual physics. These two types of courses emphasize two methods that physicists use to solve physics problems. However, there is a third, graphical method that is also useful, and complements mathematical and verbal reasoning. A course emphasizing graphical methods would deal with quantitative graphical diagrams, as well as qualitative diagrams. Examples of quantitative graphical diagrams are scaled force diagrams and scaled optical ray-tracing diagrams. A course based on this type of approach would involve measurements and uncertainties, and would involve active (hands-on) student participation suitable for younger students. This talk will discuss a graphical physics course, and its benefits to younger students.

  12. A time-dependent Mixing Model for PDF Methods in Heterogeneous Aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, Lennart; Suciu, Nicolae; Knabner, Peter; Attinger, Sabine

    2016-04-01

    Predicting the transport of groundwater contaminations remains a demanding task, especially with respect to the heterogeneity of the subsurface and the large measurement uncertainties. A risk analysis also includes the quantification of the uncertainty in order to evaluate how accurate the predictions are. Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, which can be used as a first measure of uncertainty. A mixing model, also known as a dissipation model, is essential for both methods. Finding a satisfactory mixing model is still an open question and, due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling. The implications of the new mixing model for different kinds of flow conditions are discussed and some comments are made on efficiently handling spatially resolved higher moments.
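
    A mixing (dissipation) model closes the concentration variance equation; the sketch below uses a simple IEM-style closure, chi = sigma2 / t_mix, purely as a stand-in for the time-dependent model proposed above, with invented constants.

        # Variance balance d(sigma2)/dt = P - 2*chi with an IEM-style closure.
        dt, steps = 0.1, 200
        t_mix = 5.0            # mixing time scale, assumed constant here
        production = 0.02      # variance production by mean concentration gradients
        sigma2 = 0.0
        for _ in range(steps):
            chi = sigma2 / t_mix                   # the mixing model
            sigma2 += dt * (production - 2.0 * chi)
        print(sigma2)          # relaxes toward production * t_mix / 2 = 0.05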

  13. Multiple jet mixing flowfields in an isothermal model combustor

    NASA Astrophysics Data System (ADS)

    Ghosh, A.; Schulz, R. J.; Giel, T. V., Jr.

    1986-01-01

    The purpose of the present experimental investigation of confined, multiple turbulent jet-mixing with recirculation, in an axisymmetric duct that simulated a combustor, was the examination of flow fields that employ injector plates for the mixing of fuels and oxidizers. Quantitative descriptions of the velocity and turbulence fields were obtained with a vectorized, two-component laser Doppler velocimeter. The results obtained indicate that the annular slit injector jet generates a two-dimensional combustor flow that is in accord with theoretical studies, although rings of discrete injector jets create very complex, fully three-dimensional combustor flow fields.

  14. Quantum theory of multiwave mixing - Squeezed-vacuum model

    NASA Astrophysics Data System (ADS)

    An, Sunghyuck; Sargent, Murray, III

    1989-12-01

    The present paper combines a Langevin quantum-regression method with a density-operator approach to derive the master equation for the quantum theory of multiwave mixing in a very efficient way. The approach is quite general and is particularly valuable for analyzing complicated media such as semiconductors. It is used in the present paper to derive the quantum multiwave-mixing equations in a squeezed vacuum. Improved formulas are found for resonance fluorescence in a squeezed vacuum as well as for the squeezing coefficients. Comparing squeezing spectra in squeezed and ordinary vacuums, significantly enhanced squeezing for the appropriate pump-vacuum relative phase is found.

  15. Prediction of microbial growth in mixed culture with a competition model.

    PubMed

    Fujikawa, Hiroshi; Sakha, Mohammad Z

    2014-01-01

    Prediction of microbial growth in mixed culture was studied with a competition model that we had developed recently. The model, which is composed of the new logistic model and the Lotka-Volterra model, is shown to successfully describe the microbial growth of two species in mixed culture using Staphylococcus aureus, Escherichia coli, and Salmonella. With the parameter values of the model obtained from the experimental data on monoculture and mixed culture with two species, it then succeeded in predicting the simultaneous growth of the three species in mixed culture inoculated with various cell concentrations. To our knowledge, this is the first report of a prediction model for multiple (three) microbial species. The model, which is not built on any premise for specific microorganisms, may become a basic competition model for microorganisms in food and food materials. PMID:24975413
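
    A minimal sketch of a two-species competition model of the type described above, coupling logistic growth with Lotka-Volterra interaction terms; the rates, capacities, and competition coefficients are invented for illustration and are not the paper's fitted values.

        import numpy as np
        from scipy.integrate import solve_ivp

        r = np.array([0.8, 0.6])        # intrinsic growth rates (1/h), assumed
        K = np.array([1e9, 8e8])        # carrying capacities (CFU/mL), assumed
        a = np.array([[1.0, 0.7],       # a[i, j]: effect of species j on
                      [1.2, 1.0]])      # species i, assumed

        def competition(t, N):
            # dN_i/dt = r_i N_i (1 - sum_j a_ij N_j / K_i)
            return r * N * (1.0 - (a @ N) / K)

        sol = solve_ivp(competition, (0.0, 48.0), [1e3, 1e3])
        print(sol.y[:, -1])             # final densities of the two species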

  16. GnuForPlot Graphics

    Energy Science and Technology Software Center (ESTSC)

    2015-11-04

    Gnuforplot Graphics is a Fortran90 program designed to generate two and three dimensional plots of data on a personal computer. The program uses calls to the open source code Gnuplot to generate the plots. Two Fortran90 programs have been written to use the Gnuplot graphics capabilities. The first program, named Plotsetup.f90 reads data from output files created by either the Stadium or LeachXS/Orchestra modeling codes and saves the data in arrays for plotting. This program then calls Gnuforplot which takes the data array along with user specified parameters to set plot specifications and issues Gnuplot commands that generate the screen plots. The user can view the plots and optionally save copies in jpeg format.

  17. Blasting, graphical interfaces and Unix

    SciTech Connect

    Knudsen, S.

    1993-11-01

    A discrete element computer program, DMC (Distinct Motion Code) was developed to simulate blast-induced rock motion. To simplify the complex task of entering material and explosive design parameters as well as bench configuration, a full-featured graphical interface has been developed. DMC is currently executed on both Sun SPARCstation 2 and Sun SPARCstation 10 platforms and routinely used to model bench and crater blasting problems. This paper will document the design and development of the full-featured interface to DMC. The development of the interface will be tracked through the various stages, highlighting the adjustments made to allow the necessary parameters to be entered in terms and units that field blasters understand. The paper also discusses a novel way of entering non-integer numbers and the techniques necessary to display blasting parameters in an understandable visual manner. A video presentation will demonstrate the graphics interface and explain its use.

  18. Blasting, graphical interfaces and Unix

    SciTech Connect

    Knudsen, S.

    1994-12-31

    A discrete element computer program, DMC (Distinct Motion Code) was developed to simulate blast-induced rock motion. To simplify the complex task of entering material and explosive design parameters as well as bench configuration, a full-featured graphical interface has been developed. DMC is currently executed on both Sun SPARCstation 2 and Sun SPARCstation 10 platforms and routinely used to model bench and crater blasting problems. This paper will document the design and development of the full-featured interface to DMC. The development of the interface will be tracked through the various stages, highlighting the adjustments made to allow the necessary parameters to be entered in terms and units that field blasters understand. The paper also discusses a novel way of entering non-integer numbers and the techniques necessary to display blasting parameters in an understandable visual manner. A video presentation will demonstrate the graphics interface and explain its use.

  19. GnuForPlot Graphics

    SciTech Connect

    2015-11-04

    Gnuforplot Graphics is a Fortran90 program designed to generate two and three dimensional plots of data on a personal computer. The program uses calls to the open source code Gnuplot to generate the plots. Two Fortran90 programs have been written to use the Gnuplot graphics capabilities. The first program, named Plotsetup.f90 reads data from output files created by either the Stadium or LeachXS/Orchestra modeling codes and saves the data in arrays for plotting. This program then calls Gnuforplot which takes the data array along with user specified parameters to set plot specifications and issues Gnuplot commands that generate the screen plots. The user can view the plots and optionally save copies in jpeg format.

  20. Mixed model of repeated measures versus slope models in Alzheimer's disease clinical trials.

    PubMed

    Donohue, M C; Aisen, P S

    2012-04-01

    Randomized clinical trials of Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) typically assess intervention efficacy with cognitive or functional assessments repeated every six months for one to two years. The Mixed Model of Repeated Measures (MMRM), which assumes an "unstructured mean" by treating time as categorical, is attractive because it makes no assumptions about the shape of the mean trajectory of the outcome over time. However, categorical-time models may be over-parameterized and inefficient in detecting treatment effects relative to continuous-time models of, say, the linear trend of the outcome over time. Mixed effects models can also be extended to model quadratic time effects, although it is questionable whether the duration and interval of observations in AD and MCI studies are sufficient to support such models. Furthermore, it is unknown which of these models is most robust to missing data, which plague AD and MCI studies. We review the literature and compare estimates of treatment effects from four potential models fit to data from five AD Cooperative Study (ADCS) trials in MCI and AD. PMID:22499459
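
    The contrast between the two parameterizations can be sketched with statsmodels on synthetic data: categorical time fits one mean per visit, while continuous time fits a linear trend with random slopes. Column names and effect sizes are invented, and MixedLM only approximates a true MMRM, since it lacks an unstructured residual covariance.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        visits = [0, 6, 12, 18]                       # months, hypothetical schedule
        df = pd.DataFrame([(i, m, i % 2) for i in range(120) for m in visits],
                          columns=["id", "month", "arm"])
        df["adas"] = (20 + 0.3 * df.month + 0.2 * df.month * df.arm
                      + rng.normal(0, 3, len(df)))    # synthetic outcome

        # Categorical time: one mean per visit/arm cell, no trajectory shape assumed.
        mmrm_like = smf.mixedlm("adas ~ C(month) * arm", df, groups=df["id"]).fit()

        # Continuous time: linear trend, random intercept and slope per subject.
        slope = smf.mixedlm("adas ~ month * arm", df, groups=df["id"],
                            re_formula="~month").fit()

        print(mmrm_like.params.filter(like="arm"))
        print(slope.params.filter(like="arm"))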

  1. MEASUREMENTS AND MODELS FOR HAZARDOUS CHEMICAL AND MIXED WASTES

    EPA Science Inventory

    Mixed hazardous and low-level radioactive wastes are in storage at DOE sites around the United States, awaiting treatment and disposal. These hazardous chemical wastes contain many components in multiple phases, presenting very difficult handling and treatment problems. These was...

  2. The effects of mixing on stratospheric Age of Air in global models

    NASA Astrophysics Data System (ADS)

    Garny, Hella; Birner, Thomas; Bönisch, Harald

    2014-05-01

    The stratospheric Brewer-Dobson circulation is often quantified by the integrated transport measure stratospheric age of air (AoA). AoA is influenced both by mean transport along the residual circulation and by two-way mixing. Therefore, AoA is a good measure of the overall capabilities of a global model to simulate stratospheric transport. Currently, a large spread in the simulation of AoA by global models is found. In this study we use a method that allows us to quantify the effect of mixing on AoA from global model data. AoA is contrasted with a hypothetical age - the age air would have if it was only transported by the residual circulation, the residual circulation transit time (RCTT). The difference between AoA and RCTT is interpreted as the additional aging by mixing. Mixing causes air to be older almost everywhere in the lower stratosphere (AoA > RCTT). This increase in AoA by mixing is largely due to mixing between the tropics and extratropics, which leads to recirculation of air through the stratosphere. A "mixing efficiency" is defined as the ratio of the two-way mixing mass flux across the subtropical barrier to the net (residual) mass flux. This mixing efficiency controls the ratio of tropical mean AoA to RCTT, and thus the relative increase in AoA by mixing. These diagnostics are applied to a set of global model simulations to examine the causes for the spread in the simulation of AoA in different models. It is found that both differences in the residual circulation strength and in mixing contribute to the spread in simulated AoA. The mixing efficiency varies strongly between models - leading to differences in AoA between models even if the residual circulation strength did not differ. Possible causes for the differences in the mixing efficiency might be found in the model dynamics (e.g. the wave spectrum) and/or numerics (e.g. the advection scheme used). The different mixing efficiencies in models also modulate the response of AoA to long-term changes in the
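
    The two diagnostics defined above reduce to simple arithmetic once AoA, RCTT, and the mass fluxes are in hand; every number in the sketch below is invented for illustration.

        import numpy as np

        aoa  = np.array([3.2, 4.1, 4.8])   # age of air (years), illustrative
        rctt = np.array([2.0, 2.6, 3.0])   # residual circulation transit time (years)
        aging_by_mixing = aoa - rctt       # additional aging attributed to mixing

        m_two_way = 4.0e9                  # two-way mixing mass flux across the
        m_net     = 8.0e9                  # subtropical barrier and net residual
        efficiency = m_two_way / m_net     # mass flux (kg/s), invented values
        print(aging_by_mixing, efficiency)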

  3. Segregation parameters and pair-exchange mixing models for turbulent nonpremixed flames

    NASA Technical Reports Server (NTRS)

    Chen, J.-Y.; Kollman, W.

    1991-01-01

    The progress of chemical reactions in nonpremixed turbulent flows depends on the coexistence of reactants, which are brought together by mixing. The degree of mixing can strongly influence the chemical reactions and it can be quantified by segregation parameters. In this paper, the relevance of segregation parameters to turbulent mixing and chemical reactions is explored. An analysis of the pair-exchange mixing models is performed and an explanation is given for the peculiar behavior of such models in homogeneous turbulence. The nature of segregation parameters in a H2/Ar-air nonpremixed jet flame is investigated. The results show that Monte Carlo simulation with the modified Curl's mixing model predicts segregation parameters in close agreement with the experimental values, providing an indirect validation for the theoretical model.
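
    A pair-exchange step of the modified Curl type mentioned above can be sketched for Monte Carlo particles as below; the particle count, pair fraction, and uniform mixing extent are illustrative choices, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        phi = rng.uniform(0.0, 1.0, 10_000)   # particle mixture fractions

        def curl_step(phi, n_pairs):
            idx = rng.permutation(phi.size)
            i, j = idx[:n_pairs], idx[n_pairs:2 * n_pairs]
            alpha = rng.uniform(0.0, 1.0, n_pairs)   # partial-mixing extent
            mean = 0.5 * (phi[i] + phi[j])
            phi[i] += alpha * (mean - phi[i])        # move each member of the
            phi[j] += alpha * (mean - phi[j])        # pair toward the pair mean
            return phi

        for _ in range(100):                  # scalar variance decays as pairs mix
            phi = curl_step(phi, n_pairs=500)
        print(phi.var())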

  4. Mixed inflaton and spectator field models: CMB constraints and μ distortion

    NASA Astrophysics Data System (ADS)

    Enqvist, Kari; Sekiguchi, Toyokazu; Takahashi, Tomo

    2016-04-01

    We discuss mixed inflaton and spectator field models where both the fields are responsible for the observed density fluctuations. We combine the angular power spectrum of the CMB temperature anisotropy from the Planck 2013 result and other ground-based CMB observations in order to constrain both the general mixed model as well as some specific representative scenarios. Based on the Markov Chain Monte Carlo method, in addition to constraints on model parameters, we obtain the predictive posterior distributions of the CMB spectral μ distortion for those models. We demonstrate that the standard single-field inflaton model typically predicts μ ~ 10⁻⁸ with a relatively narrow distribution, whereas for the mixed models, the distribution turns out to be much broader, and μ could be larger by almost an order of magnitude. Hence future experiments of μ distortion could provide a tool for the critical testing of the mixed source models of the primordial perturbation.

  5. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    SciTech Connect

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
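
    The role-plus-transition idea can be sketched as below: learn soft role memberships from node-feature matrices with NMF, then estimate a role-to-role transition matrix between snapshots by least squares. The matrix sizes and random features are invented; this is a toy analogue of the DBMM, not the authors' code.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        F_t0 = rng.random((500, 8))       # node-by-feature matrix at time t
        F_t1 = rng.random((500, 8))       # same nodes at time t+1

        nmf = NMF(n_components=4, init="nndsvda", max_iter=500)
        G0 = nmf.fit_transform(F_t0)      # role memberships at t
        G1 = nmf.transform(F_t1)          # memberships at t+1, same role basis

        P0 = G0 / G0.sum(axis=1, keepdims=True)       # row-normalized memberships
        P1 = G1 / G1.sum(axis=1, keepdims=True)
        T, *_ = np.linalg.lstsq(P0, P1, rcond=None)   # role transition matrix
        print(np.round(T, 2))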

  6. Biases in modeled surface snow BC mixing ratios in prescribed-aerosol climate model runs

    NASA Astrophysics Data System (ADS)

    Doherty, S. J.; Bitz, C. M.; Flanner, M. G.

    2014-11-01

    Black carbon (BC) in snow lowers its albedo, increasing the absorption of sunlight, leading to positive radiative forcing, climate warming and earlier snowmelt. A series of recent studies have used prescribed-aerosol deposition flux fields in climate model runs to assess the forcing by black carbon in snow. In these studies, the prescribed mass deposition flux of BC to surface snow is decoupled from the mass deposition flux of snow water to the surface. Here we compare prognostic- and prescribed-aerosol runs and use a series of offline calculations to show that the prescribed-aerosol approach results, on average, in a factor of about 1.5-2.5 high bias in annual-mean surface snow BC mixing ratios in three key regions for snow albedo forcing by BC: Greenland, Eurasia and North America. These biases will propagate directly to positive biases in snow and surface albedo reduction by BC. The bias is shown to be due to coupling snowfall that varies on meteorological timescales (daily or shorter) with prescribed BC mass deposition fluxes that are more temporally and spatially smooth. The result is physically non-realistic mixing ratios of BC in surface snow. We suggest that an alternative approach would be to prescribe BC mass mixing ratios in snowfall, rather than BC mass fluxes, and we show that this produces more physically realistic BC mixing ratios in snowfall and in the surface snow layer.
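
    The bias mechanism can be illustrated with a toy calculation: a temporally smooth prescribed BC flux divided by episodic snowfall yields higher mixing ratios, on average, than coupling BC mass to the snowfall itself. All fluxes below are invented, and the accumulation of BC during dry spells is ignored for brevity.

        import numpy as np

        rng = np.random.default_rng(1)
        days = 365
        snow = rng.exponential(2.0, days) * (rng.random(days) < 0.3)  # kg/m2/day
        bc_flux = np.full(days, 5e-9)     # prescribed BC flux (kg/m2/day), smooth

        wet = snow > 0
        ratio_prescribed = bc_flux[wet] / snow[wet]   # daily surface-layer ratios

        # Coupled alternative: one BC-in-snowfall ratio conserving annual BC mass.
        ratio_coupled = bc_flux.sum() / snow.sum()

        print(ratio_prescribed.mean() / ratio_coupled)   # > 1: the high bias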

  7. User's instructions for the Guyton circulatory dynamics model using the Univac 1110 batch and demand processing (with graphic capabilities)

    NASA Technical Reports Server (NTRS)

    Archer, G. T.

    1974-01-01

    The model presents a systems analysis of human circulatory regulation based almost entirely on experimental data and cumulative present knowledge of the many facets of the circulatory system. The model itself consists of eighteen different major systems that enter into circulatory control. These systems are grouped into sixteen distinct subprograms that are melded together to form the total model. The model develops circulatory and fluid regulation in a simultaneous manner. Thus, the effects of hormonal and autonomic control, electrolyte regulation, and excretory dynamics are all important and are all included in the model.

  8. Model analysis of influences of aerosol mixing state upon its optical properties in East Asia

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Zhang, Meigen; Zhu, Lingyun; Xu, Liren

    2013-07-01

    The air quality model system RAMS (Regional Atmospheric Modeling System)-CMAQ (Models-3 Community Multi-scale Air Quality) coupled with an aerosol optical/radiative module was applied to investigate the impact of different aerosol mixing states (i.e., externally mixed, half externally and half internally mixed, and internally mixed) on radiative forcing in East Asia. The simulation results show that the aerosol optical depth (AOD) generally increased when the aerosol mixing state changed from externally mixed to internally mixed, while the single scattering albedo (SSA) decreased. Therefore, the scattering and absorption properties of aerosols can be significantly affected by the change of aerosol mixing states. Comparison of simulated and observed SSAs at five AERONET (Aerosol Robotic Network) sites suggests that SSA could be better estimated by considering aerosol particles to be internally mixed. Model analysis indicates that the impact of aerosol mixing state upon aerosol direct radiative forcing (DRF) is complex. Generally, the cooling effect of aerosols over East Asia is enhanced in the northern part of East Asia (Northern China, Korean peninsula, and the surrounding area of Japan) and reduced in the southern part of East Asia (Sichuan Basin and Southeast China) by the internal mixing process, and the variation range can reach ±5 W m⁻². The analysis shows that the internal mixing between inorganic salt and dust is likely the main reason that the cooling effect strengthens. Conversely, the internal mixture of anthropogenic aerosols, including sulfate, nitrate, ammonium, black carbon, and organic carbon, could obviously weaken the cooling effect.
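
    The optical consequence of the mixing-state choice can be illustrated with a volume-weighted average of complex refractive indices for an internal mixture, versus keeping separate populations in an external mixture. This is a generic mixing rule with invented values, not the RAMS-CMAQ optical module.

        # Illustrative complex refractive indices and volume fractions (assumed).
        m = {"sulfate": 1.53 + 1e-7j, "black_carbon": 1.85 + 0.71j}
        v = {"sulfate": 0.9, "black_carbon": 0.1}

        # Internal mixture: a single population with a volume-averaged index.
        m_internal = sum(v[k] * m[k] for k in m)

        # External mixture: each component keeps its own refractive index.
        print("internal:", m_internal)
        print("external:", [(k, m[k]) for k in m])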

  9. Experimental and computer graphics simulation analyses of the DNA interaction of 1,8-bis-(2-diethylaminoethylamino)-anthracene-9,10-dione, a compound modelled on doxorubicin.

    PubMed

    Islam, S A; Neidle, S; Gandecha, B M; Brown, J R

    1983-09-15

    The crystal structure of the anthraquinone derivative 1,8-bis-(2-diethylaminoethylamino)-anthracene-9,10-dione has been established. This compound was prepared as a potential DNA-intercalating agent based on the proven intercalators doxorubicin and mitoxantrone. Its DNA-binding properties have been examined experimentally by spectroscopic, thermal denaturation and ccc-DNA unwinding techniques: the results are consistent with an intercalative mode of binding to DNA. Computer graphics simulation of the intercalative docking of this compound into the self-complementary dimer of d(CpG) has provided a minimum energy geometrical arrangement for the bound drug in the intercalation site comparable to that for proflavine when intercalated into the same d(CpG) model system. Entry of the compound into the site can only occur via the major groove. PMID:6626250

  10. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.

  11. General-Purpose Software For Computer Graphics

    NASA Technical Reports Server (NTRS)

    Rogers, Joseph E.

    1992-01-01

    NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications which provides the opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.

  12. A Physics-Based Modeling and Real-Time Simulation of Biomechanical Diffusion Process Through Optical Imaged Alveolar Tissues on Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kaya, Ilhan; Santhanam, Anand P.; Lee, Kye-Sung; Meemon, Panomsak; Papp, Nicolene; Rolland, Jannick P.

    Tissue engineering has broad applications, from creating the much-needed engineered tissue and organ structures for regenerative medicine to providing in vitro testbeds for drug testing. In the latter application domain, creating alveolar lung tissue, and simulating the diffusion process of oxygen and other possible agents from the air into the blood stream, as well as modeling the removal of carbon dioxide and other possible entities from the blood stream, are of critical importance to simulating lung functions in various environments. In this chapter, we propose a physics-based model to simulate the alveolar gas exchange and the alveolar diffusion process. Tissue engineers, for the first time, may utilize these simulation results to better understand the underlying gas exchange process and properly adjust the tissue growing cycles. In this work, alveolar tissues are imaged by means of an optical coherence microscopy (OCM) system developed in our laboratory. As a consequence, 3D alveoli tissue data with its inherent complex boundary is taken as input to the simulation system, which is based on computational fluid mechanics in simulating the alveolar gas exchange. The visualization and the simulation of the diffusion of air into the blood through the alveoli tissue are performed using a state-of-the-art graphics processing unit (GPU). Results show the real-time simulation of the gas exchange through the 2D alveoli tissue.

  13. Graphical Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as LIDAR, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.

  14. Printer Graphics Package

    NASA Technical Reports Server (NTRS)

    Blanchard, D. C.

    1986-01-01

    Printer Graphics Package (PGP) is tool for making two-dimensional symbolic plots on line printer. PGP created to support development of Heads-Up Display (HUD) simulation. Standard symbols defined with HUD in mind. Available symbols include circle, triangle, quadrangle, window, line, numbers, and text. Additional symbols easily added or built up from available symbols.

  15. Computer Graphics Verification

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Video processing creates technical animation sequences using studio-quality equipment to realistically represent fluid flow over space shuttle surfaces, helicopter rotors, and turbine blades. Computer systems Co-op, Tim Weatherford, performing computer graphics verification. Part of Co-op brochure.

  16. Mathematical Graphic Organizers

    ERIC Educational Resources Information Center

    Zollman, Alan

    2009-01-01

    As part of a math-science partnership, a university mathematics educator and ten elementary school teachers developed a novel approach to mathematical problem solving derived from research on reading and writing pedagogy. Specifically, research indicates that students who use graphic organizers to arrange their ideas improve their comprehension…

  17. Raster graphics display library

    NASA Technical Reports Server (NTRS)

    Grimsrud, Anders; Stephenson, Michael B.

    1987-01-01

    The Raster Graphics Display Library (RGDL) is a high-level subroutine package that gives the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to use as stand-alone routines in a black-box type of environment. Six examples are presented which will teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed, and the use of each variable within each common block is discussed. A reference on the include files necessary to compile the display library is also provided. Each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general-purpose computer graphics display system that uses RGDL software, is also contained.

  18. Computing Graphical Confidence Bounds

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    Approximation for graphical confidence bounds is simple enough to run on programmable calculator. Approximation is used in lieu of numerical tables not always available, and exact calculations, which often require rather sizable computer resources. Approximation verified for collection of up to 50 data points. Method used to analyze tile-strength data on Space Shuttle thermal-protection system.

  19. Designing Award Winning Graphics.

    ERIC Educational Resources Information Center

    Kintigh, Cynthia

    1990-01-01

    Graphic designers, marketing specialists, and campus activities professionals who have won awards for the design of campus programing publicity offer tips in the process of designing successful promotional items, including ingredients of winning pieces and aspects of a productive designer-client relationship. (MSE)

  20. Comics & Graphic Novels

    ERIC Educational Resources Information Center

    Cleaver, Samantha

    2008-01-01

    Not so many years ago, comic books in school were considered the enemy. Students caught sneaking comics between the pages of bulky--and less engaging--textbooks were likely sent to the principal. Today, however, comics, including classics such as "Superman" but also their generally more complex, nuanced cousins, graphic novels, are not only…

  1. Graphic Novels: A Roundup.

    ERIC Educational Resources Information Center

    Kan, Katherine L.

    1994-01-01

    Reviews graphic novels for young adults, including five titles from "The Adventures of Tintin," a French series that often uses ethnic and racial stereotypes which reflect the time in which they were published, and "Wolverine," a Marvel comic character adventure. (Contains six references.) (LRW)

  2. Parametrization of flavor mixing in the standard model

    SciTech Connect

    Fritzsch, H.; Xing, Z.

    1998-01-01

    It is shown that there exist nine different ways to describe the flavor mixing, in terms of three rotation angles and one CP-violating phase, within the standard electroweak theory of six quarks. For the assignment of the complex phase there essentially exists a continuum of possibilities, if one allows the phase to appear in more than four elements of the mixing matrix. If the phase is restricted to four elements, the phase assignment is uniquely defined. If one imposes the constraint that the phase disappears in a natural way in the chiral limit in which the masses of the u and d quarks are turned off, only three of the nine parametrizations are acceptable. In particular the "standard" parametrization advocated by the Particle Data Group is not permitted. One parametrization, in which the CP-violating phase is restricted to the light quark sector, stands up as the most favorable description of the flavor mixing. © 1997 The American Physical Society
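
    The construction referred to above (three rotation angles plus one CP-violating phase) can be sketched as a product of complex rotations. The ordering below follows the widely used "standard" (PDG-style) parametrization, the very one the abstract disfavors, chosen here purely to illustrate how three angles and one phase compose into a unitary matrix; the numerical angles are arbitrary.

        import numpy as np

        def rot(i, j, theta, delta=0.0):
            """Complex rotation in the (i, j) plane carrying phase delta."""
            R = np.eye(3, dtype=complex)
            c, s = np.cos(theta), np.sin(theta)
            R[i, i] = R[j, j] = c
            R[i, j] = s * np.exp(-1j * delta)
            R[j, i] = -s * np.exp(1j * delta)
            return R

        t12, t23, t13, delta = 0.227, 0.042, 0.004, 1.2   # illustrative values
        V = rot(1, 2, t23) @ rot(0, 2, t13, delta) @ rot(0, 1, t12)

        print(np.allclose(V @ V.conj().T, np.eye(3)))     # unitarity check: True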

  3. Photonic states mixing beyond the plasmon hybridization model

    NASA Astrophysics Data System (ADS)

    Suryadharma, Radius N. S.; Iskandar, Alexander A.; Tjia, May-On

    2016-07-01

    A study is performed on a photonic-state mixing-pattern in an insulator-metal-insulator cylindrical silver nanoshell and its rich variations induced by changes in the geometry and dielectric media of the system, representing the combined influences of plasmon coupling strength and cavity effects. This study is performed in terms of the photonic local density of states (LDOS) calculated using the Green tensor method, in order to elucidate those combined effects. The energy profiles of LDOS inside the dielectric core are shown to exhibit a consistently growing number of redshifted photonic states due to enhanced plasmon-coupling-induced state mixing arising from decreased shell thickness, an increased cavity-size effect, and a larger symmetry-breaking effect induced by an increased permittivity difference between the core and the background media. Further, an increase in cavity size leads to additional peaks that spread out toward the lower energy regime. A systematic analysis of those variations for a silver nanoshell with a fixed inner radius in a vacuum background reveals a certain pattern in the growing number of redshifted states, with an analytic expression for the corresponding energy downshifts, signifying a photonic-state mixing scheme beyond the commonly adopted plasmon hybridization scheme. Finally, a remarkable correlation is demonstrated between the LDOS energy profiles outside the shell and the corresponding scattering efficiencies.

  4. Experimental constraints on the neutrino oscillations and a simple model of three-flavor mixing

    SciTech Connect

    Raczka, P.A.; Szymacha, A.; Tatur, S.

    1994-02-01

    A simple model of neutrino mixing is considered which contains only one right-handed neutrino field coupled, via the mass term, to the three usual left-handed fields. This is the simplest model that allows for three-flavor neutrino oscillations. The existing experimental limits on neutrino oscillations are used to obtain constraints on the two free mixing parameters of the model. A specific sum rule relating the oscillation probabilities of different flavors is derived.

  5. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    NASA Astrophysics Data System (ADS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities for displaying a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and the codes of adjusted parametric models, with the parameters of the best description of the data, are schematically shown. The DaMoScope codes are freely available.

  6. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    SciTech Connect

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-15

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities for displaying a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and the codes of adjusted parametric models, with the parameters of the best description of the data, are schematically shown. The DaMoScope codes are freely available.

  7. Mathematical, physical and numerical principles essential for models of turbulent mixing

    SciTech Connect

    Sharp, David Howland; Lim, Hyunkyung; Yu, Yan; Glimm, James G

    2009-01-01

    We propose mathematical, physical and numerical principles which are important for the modeling of turbulent mixing, especially the classical and well-studied Rayleigh-Taylor and Richtmyer-Meshkov instabilities, which involve acceleration-driven mixing of a fluid discontinuity layer by a steady acceleration or an impulsive force.

  8. Pricing European option under the time-changed mixed Brownian-fractional Brownian model

    NASA Astrophysics Data System (ADS)

    Guo, Zhidong; Yuan, Hongjun

    2014-07-01

    This paper deals with the problem of discrete-time option pricing by a mixed Brownian-fractional subdiffusive Black-Scholes model. Under the assumption that the price of the underlying stock follows a time-changed mixed Brownian-fractional Brownian motion, we derive a pricing formula for the European call option in a discrete-time setting.
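
    A brute-force check of such a model is a Monte Carlo price with the log-price driven by the sum of a Brownian motion and an independent fractional Brownian motion (Hurst H), the latter simulated via a Cholesky factor of its covariance. The drift correction assumes the two drivers are independent Gaussians; the paper's subdiffusive time change and closed-form formula are not reproduced, and all parameters are illustrative.

        import numpy as np

        S0, K, r, T = 100.0, 100.0, 0.03, 1.0
        sigma, H, n, paths = 0.2, 0.7, 50, 20_000
        t = np.linspace(T / n, T, n)

        # fBm covariance: Cov(B_H(s), B_H(t)) = 0.5 (s^2H + t^2H - |t - s|^2H).
        cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                     - np.abs(t[:, None] - t[None, :]) ** (2 * H))
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))

        rng = np.random.default_rng(42)
        bh = rng.standard_normal((paths, n)) @ L.T     # fBm values at times t
        bw = np.cumsum(rng.standard_normal((paths, n)) * np.sqrt(T / n), axis=1)

        # Drift chosen so that E[S_T] = S0 * exp(rT) for this mixed Gaussian driver.
        x = r * T - 0.5 * sigma**2 * (T + T**(2 * H)) \
            + sigma * (bw[:, -1] + bh[:, -1])
        payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
        print(np.exp(-r * T) * payoff.mean())          # rough Monte Carlo price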

  9. Modeling relationships between calving traits: a comparison between standard and recursive mixed models

    PubMed Central

    2010-01-01

    Background The use of structural equation models for the analysis of recursive and simultaneous relationships between phenotypes has become more popular recently. The aim of this paper is to illustrate how these models can be applied in animal breeding to achieve parameterizations of different levels of complexity and, more specifically, to model phenotypic recursion between three calving traits: gestation length (GL), calving difficulty (CD) and stillbirth (SB). All recursive models considered here postulate heterogeneous recursive relationships between GL and liabilities to CD and SB, and between liability to CD and liability to SB, depending on categories of GL phenotype. Methods Four models were compared in terms of goodness of fit and predictive ability: 1) standard mixed model (SMM), a model with unstructured (co)variance matrices; 2) recursive mixed model 1 (RMM1), assuming that residual correlations are due to the recursive relationships between phenotypes; 3) RMM2, assuming that correlations between residuals and contemporary groups are due to recursive relationships between phenotypes; and 4) RMM3, postulating that the correlations between genetic effects, contemporary groups and residuals are due to recursive relationships between phenotypes. Results For all the RMM considered, the estimates of the structural coefficients were similar. Results revealed a nonlinear relationship between GL and the liabilities both to CD and to SB, and a linear relationship between the liabilities to CD and SB. Differences in terms of goodness of fit and predictive ability of the models considered were negligible, suggesting that RMM3 is plausible. Conclusions The applications examined in this study suggest the plausibility of a nonlinear recursive effect from GL onto CD and SB. Also, the fact that the most restrictive model RMM3, which assumes that the only cause of correlation is phenotypic recursion, performs as well as the others indicates that the phenotypic recursion

  10. Graphics Processing Units (GPU) and the Goddard Earth Observing System atmospheric model (GEOS-5): Implementation and Potential Applications

    NASA Technical Reports Server (NTRS)

    Putnam, William M.

    2011-01-01

    Earth system models like the Goddard Earth Observing System model (GEOS-5) have been pushing the limits of large clusters of multi-core microprocessors, producing breathtaking fidelity in resolving cloud systems at a global scale. GPU computing presents an opportunity for improving the efficiency of these leading-edge models. A GPU implementation of GEOS-5 will facilitate the use of cloud-system-resolving resolutions in data assimilation and weather prediction, at resolutions near 3.5 km, improving our ability to extract detailed information from high-resolution satellite observations and ultimately produce better weather and climate predictions.

  11. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.

    PubMed

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805
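
    The pooled estimate quoted above comes from random-effects pooling; the sketch below shows the DerSimonian-Laird recipe on invented per-study effect sizes, not the meta-analysis' actual data.

        import numpy as np

        d = np.array([0.9, 0.5, 1.1, 0.6, 0.8])       # per-study effect sizes, invented
        v = np.array([0.04, 0.06, 0.05, 0.03, 0.07])  # per-study variances, invented

        w = 1.0 / v                                   # fixed-effect weights
        d_fe = np.sum(w * d) / np.sum(w)
        Q = np.sum(w * (d - d_fe) ** 2)               # heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(d) - 1)) / c)       # between-study variance

        w_re = 1.0 / (v + tau2)                       # random-effects weights
        d_re = np.sum(w_re * d) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        print(f"pooled d = {d_re:.2f} +/- {1.96 * se:.2f}")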

  12. Unit physics performance of a mix model in Eulerian fluid computations

    SciTech Connect

    Vold, Erik; Douglass, Rod

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte and Tipton [1], hereafter denoted [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show that the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass-averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass-averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.
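
    For orientation, drag-buoyancy mix models of this family are calibrated against the growth of the RT/RM mix-layer amplitude. A common reference point is the generic buoyancy-drag equation below; this is the generic form from the buoyancy-drag literature, not the K-L model's own transport equations, and is shown only to indicate what the calibrated coefficients control.

        \[
        (\rho_i + C_a\,\rho_j)\,\ddot{h}_i \;=\; (\rho_j - \rho_i)\,g(t)\;-\;C_d\,\rho_j\,\frac{\dot{h}_i^{2}}{h_i},
        \]

    where \(h_i\) is the bubble (or spike) amplitude, and the added-mass coefficient \(C_a\) and drag coefficient \(C_d\) are the kinds of constants that calibration against RT and RM data is meant to pin down.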

  13. Real Longitudinal Data Analysis for Real People: Building a Good Enough Mixed Model

    PubMed Central

    Cheng, Jing; Edwards, Lloyd J.; Maldonado-Molina, Mildred M.; Komro, Kelli A.; Muller, Keith E.

    2009-01-01

    Summary Mixed-effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed-effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed-effects models. A general discussion of scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help conquer the complexity. Centering, scaling, and full-rank coding of all predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed-model data greatly helps detect and solve related computational problems. The approach helps fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, and the need to improve how scientists and statisticians teach and review the process of finding a good enough mixed model. PMID:20013937
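
    To make the centering/scaling advice concrete, here is a minimal sketch in Python with statsmodels. The data and variable names are hypothetical, and the paper itself does not prescribe a particular package; this only illustrates the recommended preprocessing before a mixed-model fit.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical longitudinal data: repeated outcomes within subjects.
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "subject": np.repeat(np.arange(30), 4),
            "age": np.tile([11.0, 12.0, 13.0, 14.0], 30),
        })
        df["y"] = 2.0 + 0.5 * df["age"] + rng.normal(0, 1, len(df))

        # Center and scale the predictor before fitting: this tends to
        # improve convergence, computing speed and numerical accuracy.
        df["age_z"] = (df["age"] - df["age"].mean()) / df["age"].std()

        # Random intercept per subject; REML fit by default.
        model = smf.mixedlm("y ~ age_z", df, groups=df["subject"])
        result = model.fit()
        print(result.summary())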

  14. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. M. S. Johnson suggested that the problems when selecting the more parameterized models may be because of the low…
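
    For reference, the two most widely used model-selection criteria in such comparisons take the standard forms below (textbook definitions, not specific to this study):

        \[
        \mathrm{AIC} = -2\ln \hat{L} + 2k, \qquad
        \mathrm{BIC} = -2\ln \hat{L} + k \ln n,
        \]

    where \(\hat{L}\) is the maximized likelihood, \(k\) the number of estimated parameters, and \(n\) the sample size; the \(k\)-dependent terms are what penalize the more heavily parameterized models.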

  15. Stochastic model of Rayleigh-Taylor mixing with time-dependent acceleration

    NASA Astrophysics Data System (ADS)

    Swisher, Nora; Abarzhi, Snezhana

    2015-11-01

    We report a stochastic model of Rayleigh-Taylor (RT) mixing with time-dependent acceleration. RT mixing is a statistically unsteady process, in which the mean values of the flow quantities, as well as the fluctuations around these means, are time-dependent. A set of nonlinear stochastic differential equations with multiplicative noise is derived on the basis of rigorous momentum-model and group-theory analyses to account for the randomness of RT mixing. A broad range of parameter regimes is investigated; self-similar asymptotic solutions are found; new regimes of RT mixing dynamics are identified. We show that for power-law asymptotic solutions describing RT mixing, the exponent is relatively insensitive to the fluctuations whereas the pre-factor is sensitive to them, and we find the statistical invariants of the dynamics in each of the new regimes. Support of the National Science Foundation is warmly appreciated.
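
    The phrase "stochastic differential equations with multiplicative noise" refers to equations of the generic Langevin form below; this is a generic illustration of the noise structure, not the model's actual equations.

        \[
        dX_t = a(X_t, t)\,dt + b(X_t, t)\,dW_t,
        \]

    where \(W_t\) is a Wiener process, and the noise is called multiplicative when \(b\) depends on the state \(X_t\) (e.g., \(b = \sigma X_t\)), so that the fluctuations scale with the flow quantity itself rather than entering additively.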

  16. The Brown Muck of $B^0$ and $B^0_s$ Mixing: Beyond the Standard Model

    SciTech Connect

    Bouchard, Christopher Michael

    2011-01-01

    Standard Model contributions to neutral $B$ meson mixing begin at the one loop level where they are further suppressed by a combination of the GIM mechanism and Cabibbo suppression. This combination makes $B$ meson mixing a promising probe of new physics, where as yet undiscovered particles and/or interactions can participate in the virtual loops. Relating underlying interactions of the mixing process to experimental observation requires a precise calculation of the non-perturbative process of hadronization, characterized by hadronic mixing matrix elements. This thesis describes a calculation of the hadronic mixing matrix elements relevant to a large class of new physics models. The calculation is performed via lattice QCD using the MILC collaboration's gauge configurations with $2+1$ dynamical sea quarks.

  17. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    NASA Astrophysics Data System (ADS)

    Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo

    2014-03-01

    This paper evaluates a simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, using observations from a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as one pixel of a satellite image in order to evaluate the model for a two-component system, i.e. soil and vegetation. The evaluation comprised two parts: evaluation of the linear mixing model itself, and evaluation of the inversion of the model to retrieve component temperatures. For the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the model works well under most conditions. For the model inversion, the RMSE between the retrieved and observed vegetation temperatures is 1.6 K, and the RMSE between the observed and retrieved soil temperatures is 2.0 K. An analysis of the sensitivity of the retrieved component temperatures to fractional cover shows that the linear mixing model yields the most accurate retrievals of both soil and vegetation temperatures under intermediate fractional cover conditions.
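
    A minimal sketch of the two-component inversion described above, assuming (as the abstract states) a model linear in brightness temperature; the vegetation fractions and observations below are hypothetical.

        import numpy as np

        # Multi-angular observations: at each view angle the sensor sees a
        # different vegetation fraction f, so the brightness temperature is
        #   T_b(angle) = f(angle) * T_veg + (1 - f(angle)) * T_soil.
        f = np.array([0.35, 0.50, 0.65, 0.80])        # vegetation fraction per angle
        T_b = np.array([303.1, 302.0, 300.8, 299.7])  # observed brightness temps (K)

        # Least-squares inversion for the two component temperatures.
        A = np.column_stack([f, 1.0 - f])
        (T_veg, T_soil), *_ = np.linalg.lstsq(A, T_b, rcond=None)
        print(T_veg, T_soil)  # retrieved component temperatures (K)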

  18. The pits and falls of graphical presentation

    PubMed Central

    Sperandei, Sandro

    2014-01-01

    Graphics are powerful tools for communicating research results and for gaining information from data. However, researchers should be careful when deciding which data to plot and which type of graphic to use, as well as about other details. The consequences of bad decisions on these points range from making research results unclear to distorting them, through the creation of "chartjunk" cluttered with useless information. This paper is not another tutorial about "good graphics" and "bad graphics". Instead, it presents guidelines for the graphic presentation of research results and some uncommon but useful examples of how to communicate basic and complex data types, especially multivariate model results, which are commonly presented only in tables. In the end, there are no answers here, just ideas meant to inspire others to create their own graphics. PMID:25351349

  19. Feasibility of using GEMS (Graphical Exposure Modeling System) to perform risk assessments using SARA (Superfund Amendment and Reauthorization Act of 1986), toxic release inventory information. Technical report

    SciTech Connect

    Nuckels, J.H.

    1989-04-13

    Under Title III, Section 313 of the Superfund Amendment and Reauthorization Act of 1986 (SARA), companies that release toxic chemicals into the environment are required to report annually the amount of these toxic releases. Because the chemical toxic-release reports are public information, EPA Region V is concerned that the raw data published in them will be misinterpreted and will in turn generate unfounded public concern. The study examines the possibility of using the Graphical Exposure Modeling System (GEMS), a computer program, to transform incoming raw data into better-qualified, user-ready public information. Specifically, the report analyzes the compatibility between the raw data reported in the toxic chemical release reports and the input requirements of the GEMS exposure model. An industrial site in East St. Louis, Illinois is used as a test site for the development of the exposure assessment. The report discusses the research and the methods used to perform the exposure assessment, and also reviews the legislation that requires companies to report toxic-release data, the basics of exposure assessment and the GEMS model, and the findings of the study.

  20. Realtime cerebellum: a large-scale spiking network model of the cerebellum that runs in realtime using a graphics processing unit.

    PubMed

    Yamazaki, Tadashi; Igarashi, Jun

    2013-11-01

    The cerebellum plays an essential role in adaptive motor control. Once we are able to build a cerebellar model that runs in realtime, meaning that a computer simulation of 1 s in the simulated world completes within 1 s in the real world, the cerebellar model could be used as a realtime adaptive neural controller for physical hardware such as humanoid robots. In this paper, we introduce "Realtime Cerebellum (RC)", a new implementation on a graphics processing unit (GPU) of our large-scale spiking network model of the cerebellum, which was originally built to study cerebellar mechanisms for simultaneous gain and timing control and which has acted as a general-purpose supervised learning machine for spatiotemporal information, in the style known as reservoir computing. Owing to the massive parallel computing capability of a GPU, RC runs in realtime while reproducing qualitatively the same simulation results for Pavlovian delay eyeblink conditioning as the previous version. RC is adopted as a realtime adaptive controller of a humanoid robot, which is instructed to learn online the proper timing to swing a bat so as to hit a flying ball. These results suggest that RC provides a means of applying the computational power of the cerebellum as a versatile supervised learning machine to engineering applications. PMID:23434303

  1. EMGD-FE: an open source graphical user interface for estimating isometric muscle forces in the lower limb using an EMG-driven model

    PubMed Central

    2014-01-01

    Background This paper describes the "EMG Driven Force Estimator (EMGD-FE)", a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower-limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The application guides the user, step by step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. Results An example of the application's functions is presented using the quadriceps femoris muscle. Individual muscle force estimates for the four components, as well as the knee isometric torque, are shown. Conclusions The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling assumptions. PMID:24708668
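
    As an illustration of the kind of ODE being integrated, the sketch below implements first-order activation dynamics, the usual first stage of an EMG-driven Hill-type model. This is a generic textbook form with hypothetical parameter values, not EMGD-FE's actual equations.

        import numpy as np
        from scipy.integrate import solve_ivp

        tau = 0.05  # activation time constant (s), hypothetical value

        def excitation(t):
            # Stand-in for a processed (rectified, filtered, normalized) EMG signal.
            return 0.8 if 0.5 <= t <= 1.5 else 0.1

        def activation_dynamics(t, a):
            # First-order activation dynamics: da/dt = (u(t) - a) / tau.
            return (excitation(t) - a) / tau

        sol = solve_ivp(activation_dynamics, t_span=(0.0, 2.0), y0=[0.0],
                        max_step=0.005)
        # sol.y[0] is the muscle activation a(t), which would then drive the
        # Hill-type force-length-velocity relations to produce muscle force.
        print(sol.y[0][-1])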

  2. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS: A REPLY TO ROBBINS, HILDERBRAND AND FARLEY (2002)

    EPA Science Inventory

    Phillips & Koch (2002) outlined a new stable isotope mixing model that incorporates differences in elemental concentrations into the determination of source proportions in a mixture. They illustrated their method with sensitivity analyses and two examples from the wildlife ecolog...
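
    For context, the standard linear mixing model and the concentration-weighted extension at issue are commonly stated as follows; these are the forms as generally given in the stable-isotope literature, shown here for orientation rather than quoted from the reply itself.

        Standard model (one equation per isotope system), assuming equal
        elemental concentrations across sources:
        \[
        \delta_M = \sum_i f_i\,\delta_i, \qquad \sum_i f_i = 1.
        \]
        Concentration-weighted model, for an element X with concentration
        \([X]_i\) in source \(i\):
        \[
        \delta_{X,M} = \frac{\sum_i f_i\,[X]_i\,\delta_{X,i}}{\sum_i f_i\,[X]_i}.
        \]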

  3. A quantitative approach to combine sources in stable isotope mixing models

    EPA Science Inventory

    Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...

  4. VISUALIZATION-BASED ANALYSIS FOR A MIXED-INHIBITION BINARY PBPK MODEL: DETERMINATION OF INHIBITION MECHANISM

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of the metabolic interactions occurring during simultaneous inhalation exposures to the organic solvents chloroform and trichloroethylene (TCE).

    V...
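
    For reference, "mixed inhibition" in a metabolic context refers to the standard rate law below (textbook form, not the study's specific parameterization):

        \[
        v = \frac{V_{\max}[S]}{K_m\left(1 + \dfrac{[I]}{K_i}\right) + [S]\left(1 + \dfrac{[I]}{K_i'}\right)},
        \]

    where the inhibitor \(I\) affects both the apparent \(K_m\) (the competitive component, via \(K_i\)) and the apparent \(V_{\max}\) (the uncompetitive component, via \(K_i'\)).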

  5. Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.

    PubMed

    Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng

    2014-06-01

    Flow dynamics at a pipe junction control particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, of drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross junction is found to be a function of the flow momentum ratio, which defines the junction flow distribution pattern and the degree of departure from complete mixing. The corresponding analytical solutions are validated using computational fluid dynamics (CFD) simulations. Second, the analytical mixing model is extended to double-Tee junctions, with the flow distribution factor modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on the flow momentum ratio and the connection pipe length, whereas mixing at a double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport, yet these pipe connections are widely, but incorrectly, simplified as cross junctions with assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. PMID:24675269
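
    The "complete mixing" assumption that the authors challenge is the simple flow-weighted mass balance below; their contribution is to replace it with junction-specific incomplete-mixing corrections.

        \[
        C_{\text{out}} = \frac{\sum_i Q_{\text{in},i}\,C_{\text{in},i}}{\sum_i Q_{\text{in},i}},
        \]

    i.e., every outflowing pipe is assigned the same flow-weighted average concentration, regardless of the junction geometry.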

  6. Graphic Grown Up

    ERIC Educational Resources Information Center

    Kim, Ann

    2009-01-01

    It's no secret that children and YAs are clued in to graphic novels (GNs) and that comics-loving adults are positively giddy that this format is getting the recognition it deserves. Still, there is a whole swath of library card-carrying grown-up readers out there with no idea where to start. Splashy movies such as "300" and "Spider-Man" and their…

  7. Graphical Contingency Analysis Tool

    SciTech Connect

    2010-03-02

    GCA is a visual analytic tool for power grid contingency analysis that provides decision support for power grid operations. GCA allows power grid operators to quickly gain situational awareness of the power grid by converting large amounts of operational data to the graphic domain with a color-contoured map; identify system trends and foresee and discern emergencies by performing trending analysis; identify the relationships between system configurations and affected assets by conducting clustering analysis; and identify the best action by interactively evaluating candidate actions.

  8. Space Spurred Computer Graphics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by the Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record output from CAD (computer-aided design) and CAM (computer-aided manufacturing) equipment, to update maps, and to produce computer-generated animation.

  9. The KL Mix Model Applied to Directly Driven Capsules on the Omega Laser

    SciTech Connect

    Tipton, R E; Mikaelian, K O; Park, H; Dimonte, G; Rygg, J R; Li, C K

    2005-10-10

    The coefficients of the KL mix model were set by Dimonte to match RT and RM instabilities as measured on the Linear Electric Motor (LEM). The KL mix model has been applied to directly-driven capsule implosions with a variety of laser energies, ablator materials, ablator thicknesses and convergence ratios. The KL calculations nearly match the observed Y_DD, Y_DT, Y_P, T_ion and implosion times for many (but not all) capsules.

  10. An explicit SU(12) family and flavor unification model with natural fermion masses and mixings

    SciTech Connect

    Albright, Carl H.; Feger, Robert P.; Kephart, Thomas W.

    2012-07-01

    We present an SU(12) unification model with three light chiral families, avoiding any external flavor symmetries. The hierarchy of quark and lepton masses and mixings is explained by higher dimensional Yukawa interactions involving Higgs bosons that contain SU(5) singlet fields with VEVs about 50 times smaller than the SU(12) unification scale. The presented model has been analyzed in detail and found to be in very good agreement with the observed quark and lepton masses and mixings.

  11. John Herschel's Graphical Method

    NASA Astrophysics Data System (ADS)

    Hankins, Thomas L.

    2011-01-01

    In 1833 John Herschel published an account of his graphical method for determining the orbits of double stars. He had hoped to be the first to determine such orbits, but Felix Savary in France and Johann Franz Encke in Germany beat him to the punch using analytical methods. Herschel was convinced, however, that his graphical method was much superior to analytical methods, because it used the judgment of the hand and eye to correct the inevitable errors of observation. Line graphs of the kind used by Herschel became common only in the 1830s, so Herschel was introducing a new method. He also found computation fatiguing and devised a "wheeled machine" to help him out. Encke was skeptical of Herschel's methods. He said that he lived for calculation and that the English would be better astronomers if they calculated more. It is difficult to believe that the entire Scientific Revolution of the 17th century took place without graphs and that only a few examples appeared in the 18th century. Herschel promoted the use of graphs, not only in astronomy, but also in the study of meteorology and terrestrial magnetism. Because he was the most prominent scientist in England, Herschel's advocacy greatly advanced graphical methods.

  12. Computation of turbulent high speed mixing layers using a two-equation turbulence model

    NASA Technical Reports Server (NTRS)

    Narayan, J. R.; Sekar, B.

    1991-01-01

    A two-equation turbulence model was extended to be applicable to compressible flows. A compressibility correction, based on modelling the dilatational terms in the Reynolds stress equations, was included in the model. The model is used in conjunction with the SPARK code for the computation of high-speed mixing layers. The observed trend of decreasing growth rate with increasing convective Mach number in compressible mixing layers is well predicted by the model. The predictions agree well with the experimental data and with the results from a compressible Reynolds stress model. The present model appears to be well suited for the study of compressible free shear flows. Preliminary results obtained for reacting mixing layers are included.
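
    The convective Mach number referred to above is, for two streams with equal specific-heat ratios, defined as

        \[
        M_c = \frac{U_1 - U_2}{a_1 + a_2},
        \]

    where \(U_1, U_2\) are the free-stream velocities and \(a_1, a_2\) the corresponding sound speeds; experiments show the mixing-layer growth rate decreasing as \(M_c\) increases, which is the trend the compressibility-corrected model reproduces.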

  13. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    SciTech Connect

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical / molecular mechanical free energy perturbation (QM/MM-FEP), or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being modelled implicitly as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG_obs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to become increasingly inaccurate as more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.

  15. Weak Mixing and Rare Decays in the Littlest Higgs Model

    SciTech Connect

    Bardeen, William A.; /Fermilab

    2007-03-01

    Little Higgs models have been introduced to resolve the fine-tuning problems associated with the stability of the electroweak scale and the constraints imposed by the precision electroweak analysis of experiments testing the Standard Model of particle physics. Flavor physics provides a sensitive probe of the new physics contained in these models at next-to-leading order.

  16. Data on copula modeling of mixed discrete and continuous neural time series.

    PubMed

    Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou

    2016-06-01

    Copulas are an important tool for modeling neural dependence. Recent work on copulas has extended them to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for the joint analysis of spikes and local field potentials (LFP) with copula modeling. In particular, we present the details of different model orders and the influence of possible spike contamination in LFP data from same-electrode and different-electrode recordings. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data. PMID:27158651

  18. Investigation of PM10 sources in Santa Catarina, Brazil through graphical interpretation analysis combined with receptor modelling.

    PubMed

    Hoinaski, L; Franco, D; Stuetz, R M; Sivret, E C; Lisboa, H de Melo

    2013-01-01

    Epidemiological studies have documented that elevated airborne particulate matter (PM) concentrations, especially those with an aerodynamic diameter less than 10 μm (PM10), are associated with adverse health effects. Two receptor models, UNMIX and positive matrix factorization (PMF), were used to identify and quantify the sources of PM10 concentrations in Tubarão and Capivari de Baixo, Santa Catarina, Brazil. This region is known for its high pollution levels due to intense industrial activity and exploitation of natural resources. PM10 samples were collected using high volume samplers at two sites in the region and statistical exploratory analysis techniques were applied to identify and assess PM10 sources. The two primary PM10 sources were identified as soil re-suspension/road dust emissions and coal burning emissions, contributing 65-75% and 15-25% of the PM10, respectively. The study confirmed the significance of the influence of local PM10 emissions (power plants, soil re-suspension and road dust emissions) on regional air quality, although no violations of the Brazilian PM10 standards (limit of 150 μg/m³) were observed, with a mean concentration of 27.6 μg/m³ measured in this study. This study demonstrated the usefulness of statistical exploratory analysis techniques in assessing the validity of modelling results and contributing to the interpretation of ambient air quality data. PMID:24527606
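
    For orientation, PMF-type receptor models approximate the matrix of measured ambient concentrations by a non-negative factorization; the formulation below is the standard statement of the method, not something specific to this study.

        \[
        x_{ij} = \sum_{k=1}^{p} g_{ik}\,f_{kj} + e_{ij}, \qquad g_{ik} \ge 0,\; f_{kj} \ge 0,
        \]

    where \(x_{ij}\) is the concentration of species \(j\) in sample \(i\), \(g_{ik}\) the contribution of source \(k\) to sample \(i\), \(f_{kj}\) the profile of source \(k\), and the residuals \(e_{ij}\) are minimized in an uncertainty-weighted least-squares sense.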

  19. Experimental assessment of subgrid mixing models for LES

    NASA Astrophysics Data System (ADS)

    Sun, O. S.; Su, L. K.

    2003-11-01

    Large eddy simulation (LES) models for subgrid scalar flux and dissipation include dynamic structure models (Chumakov, S. and Rutland, C.J., submitted to AIAA J.), based on scale-similarity ideas, as well as one-equation models that relate subgrid variance and dissipation (Jiménez, C. et al., Phys. Fluids 13 (2001)). Previously, these models have only been tested a posteriori, or a priori using data from direct numerical simulations. Here, these models are evaluated a priori using simultaneous planar laser-induced fluorescence (PLIF) and particle image velocimetry (PIV) measurements in a turbulent crossflowing jet. The models are tested by filtering the experimental data and comparing the results with computed model quantities. The measurements have sufficient resolution to permit accurate determination of subgrid quantities. Of primary interest is the structural accuracy of the models, which can be assessed by computing a correlation coefficient between exact and modeled terms. Preliminary results suggest that the assumptions of scale similarity underlying the dynamic structure models are more valid for modeling subgrid scalar flux than subgrid scalar dissipation.
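
    The subgrid scalar flux being modeled here has the standard LES definition below, with an overbar denoting the filtering operation (a textbook definition, given for orientation):

        \[
        q_i \;=\; \overline{u_i\,\phi} \;-\; \bar{u}_i\,\bar{\phi},
        \]

    where \(u_i\) is the velocity and \(\phi\) the scalar; scale-similarity (dynamic structure) models estimate \(q_i\) from the resolved field by applying a second, coarser test filter, while one-equation models close the subgrid scalar dissipation through a transported subgrid variance.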

  20. Career Opportunities in Computer Graphics.

    ERIC Educational Resources Information Center

    Langer, Victor

    1983-01-01

    Reviews the impact of computer graphics on industrial productivity. Details the computer graphics technician curriculum at Milwaukee Area Technical College and the cooperative efforts of business and industry to fund and equip the program. (SK)