Science.gov

Sample records for mixed graphical models

  1. Selection and estimation for mixed graphical models

    PubMed Central

    Chen, Shizhe; Witten, Daniela M.; Shojaie, Ali

    2016-01-01

    Summary We consider the problem of estimating the parameters in a pairwise graphical model in which the distribution of each node, conditioned on the others, may have a different exponential family form. We identify restrictions on the parameter space required for the existence of a well-defined joint density, and establish the consistency of the neighbourhood selection approach for graph reconstruction in high dimensions when the true underlying graph is sparse. Motivated by our theoretical results, we investigate the selection of edges between nodes whose conditional distributions take different parametric forms, and show that efficiency can be gained if edge estimates obtained from the regressions of particular nodes are used to reconstruct the graph. These results are illustrated with examples of Gaussian, Bernoulli, Poisson and exponential distributions. Our theoretical findings are corroborated by evidence from simulation studies. PMID:27625437
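
    The neighbourhood selection strategy analysed above can be sketched with node-wise penalized regressions: each node is regressed on all the others using a GLM matched to its conditional family, and the resulting edge estimates are symmetrized (here with an AND rule). Below is a minimal Python illustration on simulated data with scikit-learn; this is a generic sketch in the spirit of the paper, not the authors' code.

```python
# Node-wise neighbourhood selection sketch for a mixed pairwise graphical
# model: one L1-penalized regression per node, with the GLM chosen to match
# the node's conditional distribution, then AND-rule symmetrization.
# Simulated data; generic illustration, not the paper's implementation.
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
X[:, 3] = (X[:, 0] + rng.normal(size=n) > 0).astype(float)  # Bernoulli node tied to node 0

node_type = ["gaussian"] * 3 + ["bernoulli"] + ["gaussian"] * 2
coef = np.zeros((p, p))
for j in range(p):
    y, Z = X[:, j], np.delete(X, j, axis=1)
    if node_type[j] == "bernoulli":
        b = LogisticRegression(penalty="l1", C=0.5, solver="liblinear").fit(Z, y).coef_.ravel()
    else:
        b = Lasso(alpha=0.1).fit(Z, y).coef_
    coef[j, np.arange(p) != j] = b

adj = (coef != 0) & (coef.T != 0)   # AND rule: both regressions must select the edge
print(np.triu(adj, k=1).nonzero())  # recovered edges, e.g. (0, 3)
```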

  2. Mapping eQTL Networks with Mixed Graphical Markov Models

    PubMed Central

    Tur, Inma; Roverato, Alberto; Castelo, Robert

    2014-01-01

    Expression quantitative trait loci (eQTL) mapping constitutes a challenging problem due to, among other reasons, the high-dimensional multivariate nature of gene-expression traits. Next to the expression heterogeneity produced by confounding factors and other sources of unwanted variation, indirect effects spread throughout genes as a result of genetic, molecular, and environmental perturbations. From a multivariate perspective one would like to adjust for the effect of all of these factors to end up with a network of direct associations connecting the path from genotype to phenotype. In this article we approach this challenge with mixed graphical Markov models, higher-order conditional independences, and q-order correlation graphs. These models show that additive genetic effects propagate through the network as a function of gene–gene correlations. Our estimation of the eQTL network underlying a well-studied yeast data set leads to a sparse structure with more direct genetic and regulatory associations that enable a straightforward comparison of the genetic control of gene expression across chromosomes. Interestingly, it also reveals that eQTLs explain most of the expression variability of network hub genes. PMID:25271303

  3. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui).

    PubMed

    Magezi, David A

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
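
    LMMgui itself wraps lme4 in R; as a rough analogue of the same within-participant analysis, here is a random-intercept model fit with Python's statsmodels on fabricated data (column names are invented for illustration):

```python
# Random-intercept mixed model for a within-participant design, analogous to
# lme4's  rt ~ condition + (1 | participant).  Fabricated data; a sketch of
# the analysis LMMgui automates, not LMMgui itself.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_rep = 20, 30
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_rep),
    "condition": np.tile([0, 1], n_subj * n_rep // 2),
})
subj_effect = rng.normal(0.0, 0.5, n_subj)[df["participant"]]  # random intercepts
df["rt"] = 1.0 + 0.3 * df["condition"] + subj_effect + rng.normal(0.0, 0.2, len(df))

fit = smf.mixedlm("rt ~ condition", df, groups=df["participant"]).fit()
print(fit.summary())   # fixed effect of condition plus participant variance
```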

  4. Mixed scale joint graphical lasso.

    PubMed

    Pircalabelu, Eugen; Claeskens, Gerda; Waldorp, Lourens J

    2016-10-01

    Summary We have developed a method for estimating brain networks from fMRI datasets that have not all been measured using the same set of brain regions, where some of the coarse scale regions have been split into smaller subregions. The proposed penalized estimation procedure selects undirected graphical models with similar structures that combine information from several subjects and several coarseness scales. Both within-scale edges and between-scale edges that identify possible connections between a large region and its subregions are estimated. PMID:27324414
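
    The single-scale ingredient underlying the method above is the graphical lasso; a minimal sketch with scikit-learn on simulated data, leaving out the joint multi-scale penalty that is the paper's actual contribution:

```python
# Plain graphical lasso on one subject and one coarseness scale -- the
# building block that the mixed-scale *joint* estimator extends.
# Simulated data for illustration only.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
# True precision matrix with a chain structure over 5 "regions"
P = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(P), size=500)

model = GraphicalLassoCV().fit(X)
edges = (np.abs(model.precision_) > 1e-3) & ~np.eye(5, dtype=bool)
print(edges.astype(int))   # should recover the chain 0-1-2-3-4
```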

  5. Representing Learning With Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence, for instance, in diagnosis and expert systems, as a unified qualitative and quantitative framework for representing and reasoning with probabilities and independencies. Their development and use spans several fields including artificial intelligence, decision theory and statistics, and provides an important bridge between these communities. This paper shows by way of example that these models can be extended to machine learning, neural networks and knowledge discovery by representing the notion of a sample on the graphical model. Not only does this allow a flexible variety of learning problems to be represented, it also provides the means for representing the goal of learning and opens the way for the automatic development of learning algorithms from specifications.

  6. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267

  7. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  8. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.

  9. Graphical Models via Univariate Exponential Family Distributions

    PubMed Central

    Yang, Eunho; Ravikumar, Pradeep; Allen, Genevera I.; Liu, Zhandong

    2016-01-01

    Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions. PMID:27570498
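
    For the Poisson instance mentioned above, the node-wise idea can be sketched by regressing each count variable on the rest with a penalized Poisson GLM and thresholding the coefficients. The sketch below uses scikit-learn's L2-penalized PoissonRegressor for brevity, whereas the paper's M-estimators are L1-penalized node-wise regressions with accompanying theory; illustration only.

```python
# Node-wise sketch of a Poisson graphical model: one penalized Poisson GLM
# per node, then symmetrize the selected neighbourhoods.  Uses an L2 penalty
# for brevity; the paper's M-estimators are L1-penalized.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(3)
n, p = 500, 5
X = rng.poisson(2.0, size=(n, p)).astype(float)
X[:, 1] += X[:, 0]                       # induce dependence between nodes 0 and 1

support = np.zeros((p, p), dtype=bool)
for j in range(p):
    y, Z = X[:, j], np.delete(X, j, axis=1)
    b = PoissonRegressor(alpha=1.0).fit(Z, y).coef_
    support[j, np.arange(p) != j] = np.abs(b) > 0.05
print(support & support.T)               # AND-rule adjacency; edge 0-1 expected
```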

  10. Detection of text strings from mixed text/graphics images

    NASA Astrophysics Data System (ADS)

    Tsai, Chien-Hua; Papachristou, Christos A.

    2000-12-01

    A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region-growing) strategy, the algorithm classifies text apart from graphics and adapts to changes in document type, language category (e.g., English, Chinese and Japanese), text font style and size, and text string orientation within digital images. In addition, it tolerates the document skew that commonly occurs in scanned documents, without requiring skew correction prior to discrimination, whereas methods such as projection profiles or run-length coding are not always suitable under these conditions. The method has been tested on a variety of printed documents from different origins with one common set of parameters, and the computational efficiency of the algorithm is demonstrated on several images from the evaluation.

  11. Graphical Models and Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Almond, Russell G.

    This paper synthesizes ideas from the fields of graphical modeling and education testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and disentangling evidence from complex performances. IRT-CAT can offer…

  12. Understanding human functioning using graphical models

    PubMed Central

    2010-01-01

    Background Functioning and disability are universal human experiences. However, our current understanding of functioning from a comprehensive perspective is limited. The development of the International Classification of Functioning, Disability and Health (ICF) on the one hand and recent developments in graphical modeling on the other hand might be combined and open the door to a more comprehensive understanding of human functioning. The objective of our paper therefore is to explore how graphical models can be used in the study of ICF data for a range of applications. Methods We show the applicability of graphical models on ICF data for different tasks: Visualization of the dependence structure of the data set, dimension reduction and comparison of subpopulations. Moreover, we further developed and applied recent findings in causal inference using graphical models to estimate bounds on intervention effects in an observational study with many variables and without knowing the underlying causal structure. Results In each field, graphical models could be applied giving results of high face-validity. In particular, graphical models could be used for visualization of functioning in patients with spinal cord injury. The resulting graph consisted of several connected components which can be used for dimension reduction. Moreover, we found that the differences in the dependence structures between subpopulations were relevant and could be systematically analyzed using graphical models. Finally, when estimating bounds on causal effects of ICF categories on general health perceptions among patients with chronic health conditions, we found that the five ICF categories that showed the strongest effect were plausible. Conclusions Graphical Models are a flexible tool and lend themselves for a wide range of applications. In particular, studies involving ICF data seem to be suited for analysis using graphical models. PMID:20149230

  13. Operations for Learning with Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. These operations adapt existing techniques from statistics and automatic differentiation to graphs. Two standard algorithm schemes for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Some algorithms are developed in this graphical framework including a generalized version of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing some popular algorithms that fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms.
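
    One of the learning operations reviewed above, fitting a discrete Bayesian network with known structure from data, reduces to counting conditional frequencies; a tiny sketch for a hypothetical binary network A → B:

```python
# Maximum-likelihood parameter learning for a fixed-structure discrete
# Bayesian network A -> B: the CPT entries are conditional relative
# frequencies.  Simulated binary data; minimal illustration.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
A = rng.random(n) < 0.3
B = np.where(A, rng.random(n) < 0.8, rng.random(n) < 0.1)

p_A = A.mean()
p_B_given_A = [B[~A].mean(), B[A].mean()]
print(f"P(A)={p_A:.2f}  P(B|A=0)={p_B_given_A[0]:.2f}  P(B|A=1)={p_B_given_A[1]:.2f}")
```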

  14. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559

  15. Graphical Models and Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Almond, Russell G.; Mislevy, Robert J.

    1999-01-01

    Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)

  16. Modelling structured data with Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Forbes, F.

    2016-05-01

    Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph, which need not be as regular as a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations associated with practical work are given.
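
    The hidden-MRF segmentation described above can be sketched with an iterated-conditional-modes relabelling on a 4-neighbour grid, where each pixel trades off a data term against agreement with its neighbours; a deliberately simplified illustration, not a full hidden-MRF EM implementation:

```python
# ICM relabelling for a 2-label hidden Markov random field on a grid: each
# pixel takes the label minimizing (data term) + beta * (disagreeing
# neighbours).  Simplified sketch of MRF-based segmentation.
import numpy as np

rng = np.random.default_rng(5)
truth = np.zeros((40, 40), dtype=int)
truth[10:30, 10:30] = 1
img = truth + rng.normal(0, 0.8, truth.shape)   # noisy observation
means, beta = np.array([0.0, 1.0]), 1.5
labels = (img > 0.5).astype(int)                # crude initial labelling

for _ in range(5):                              # a few ICM sweeps
    for i in range(40):
        for j in range(40):
            nbrs = [labels[x, y]
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < 40 and 0 <= y < 40]
            cost = [(img[i, j] - means[k]) ** 2 + beta * sum(nb != k for nb in nbrs)
                    for k in (0, 1)]
            labels[i, j] = int(np.argmin(cost))
print("pixel accuracy:", (labels == truth).mean())
```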

  17. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the graphical kernel system (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.

  18. Joint estimation of multiple graphical models

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2011-01-01

    Summary Gaussian graphical models explore dependence relationships between random variables, through the estimation of the corresponding inverse covariance matrices. In this paper we develop an estimator for such models appropriate for data from several graphical models that share the same variables and some of the dependence structure. In this setting, estimating a single graphical model would mask the underlying heterogeneity, while estimating separate models for each category does not take advantage of the common structure. We propose a method that jointly estimates the graphical models corresponding to the different categories present in the data, aiming to preserve the common structure, while allowing for differences between the categories. This is achieved through a hierarchical penalty that targets the removal of common zeros in the inverse covariance matrices across categories. We establish the asymptotic consistency and sparsity of the proposed estimator in the high-dimensional case, and illustrate its performance on a number of simulated networks. An application to learning semantic connections between terms from webpages collected from computer science departments is included. PMID:23049124

  19. Planar graphical models which are easy

    SciTech Connect

    Chertkov, Michael; Chernyak, Vladimir

    2009-01-01

    We describe a rich family of binary-variable statistical mechanics models on planar graphs which are equivalent to Gaussian Grassmann graphical models (free fermions). Calculation of the partition function (weighted counting) in these models is easy (of polynomial complexity), as it reduces to the evaluation of determinants of matrices whose size is linear in the number of variables. In particular, this family of models covers the holographic algorithms of Valiant and extends the gauge transformations discussed in our previous works.

  1. Interactive graphics for geometry modeling

    NASA Technical Reports Server (NTRS)

    Wozny, M. J.

    1984-01-01

    An interactive vector capability to create geometry and a raster color-shaded rendering capability to sample and verify interim geometric design steps through color snapshots are described. The development of the underlying methodology which facilitates computer-aided engineering and design is outlined. At present, raster systems cannot match the interactivity and line-drawing capability of refresh vector systems. Consequently, an intermediate step in mechanical design is used to create objects interactively on the vector display and then scan-convert the wireframe model to render it as a color-shaded object on a raster display. Several algorithms are presented for rendering such objects. Superquadric solid primitives extend the class of primitives normally used in solid modelers.

  2. Item Screening in Graphical Loglinear Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Christensen, Karl Bang

    2011-01-01

    In behavioural sciences, local dependence and DIF are common, and purification procedures that eliminate items with these weaknesses often result in short scales with poor reliability. Graphical loglinear Rasch models (Kreiner & Christensen, in "Statistical Methods for Quality of Life Studies," ed. by M. Mesbah, F.C. Cole & M.T. Lee, Kluwer…

  3. Evaluating survival model performance: a graphical approach.

    PubMed

    Mandel, M; Galai, N; Simchen, E

    2005-06-30

    In the last decade, many statistics have been suggested to evaluate the performance of survival models. These statistics evaluate the overall performance of a model, ignoring possible variability in performance over time. Using an extension of measures used in binary regression, we propose a graphical method to depict the performance of a survival model over time. The method provides estimates of performance at specific time points and can be used as an informal test for detecting time-varying effects of covariates in the Cox model framework. The method is illustrated on real and simulated data using the Cox proportional hazards model and rank statistics.
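
    A standard graphical diagnostic in the same spirit, checking whether covariate effects in a Cox model drift over time by plotting scaled Schoenfeld residuals, can be produced in a few lines with the lifelines package (a related textbook check, not the authors' exact statistic):

```python
# Time-resolved graphical check of a Cox model: scaled Schoenfeld residual
# plots flag covariates whose effect varies over time.  Uses lifelines and
# its bundled Rossi recidivism dataset; not the authors' exact method.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
# Tests the proportional-hazards assumption and plots residuals against time
cph.check_assumptions(df, show_plots=True)
```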

  4. Planar graphical models which are easy

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Chertkov, Michael

    2010-11-01

    We describe a rich family of binary-variable statistical mechanics models on a given planar graph which are equivalent to Gaussian Grassmann graphical models (free fermions) defined on the same graph. Calculation of the partition function (weighted counting) for such a model is easy (of polynomial complexity) as it is reducible to the evaluation of a Pfaffian of a matrix of size equal to twice the number of edges in the graph. In particular, this approach touches upon the holographic algorithms of Valiant and utilizes the gauge transformations discussed in our previous works.
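
    The reduction rests on the classical identity det(A) = Pf(A)^2 for an antisymmetric matrix of even dimension, so a weighted count expressed as a Pfaffian is computable through a determinant; a quick numerical check in the 4x4 case, where Pf(A) = a01 a23 - a02 a13 + a03 a12:

```python
# det(A) = Pf(A)^2 for an even-dimensional antisymmetric matrix -- the
# identity behind reducing weighted counting on planar graphs to a
# polynomial-time determinant/Pfaffian evaluation.
import numpy as np

rng = np.random.default_rng(6)
M = rng.normal(size=(4, 4))
A = M - M.T                                       # random antisymmetric matrix
pf = A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]
print(np.isclose(np.linalg.det(A), pf ** 2))      # True
```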

  5. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.

  6. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  7. Mixed Methods Analysis and Information Visualization: Graphical Display for Effective Communication of Research Results

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Dickinson, Wendy B.

    2008-01-01

    In this paper, we introduce various graphical methods that can be used to represent data in mixed research. First, we present a broad taxonomy of visual representation. Next, we use this taxonomy to provide an overview of visual techniques for quantitative data display and qualitative data display. Then, we propose what we call "crossover" visual…

  8. A Guide to the Literature on Learning Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Friedland, Peter (Technical Monitor)

    1994-01-01

    This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and more generally, learning probabilistic graphical models. Because many problems in artificial intelligence, statistics and neural networks can be represented as a probabilistic graphical model, this area provides a unifying perspective on learning. This paper organizes the research in this area along methodological lines of increasing complexity.

  9. Accelerating Correlated Quantum Chemistry Calculations Using Graphical Processing Units and a Mixed Precision Matrix Multiplication Library.

    PubMed

    Olivares-Amaya, Roberto; Watson, Mark A; Edgar, Richard G; Vogt, Leslie; Shao, Yihan; Aspuru-Guzik, Alán

    2010-01-12

    Two new tools for the acceleration of computational chemistry codes using graphical processing units (GPUs) are presented. First, we propose a general black-box approach for the efficient GPU acceleration of matrix-matrix multiplications where the matrix size is too large for the whole computation to be held in the GPU's onboard memory. Second, we show how to improve the accuracy of matrix multiplications when using only single-precision GPU devices by proposing a heterogeneous computing model, whereby single- and double-precision operations are evaluated in a mixed fashion on the GPU and central processing unit, respectively. The utility of the library is illustrated for quantum chemistry with application to the acceleration of resolution-of-the-identity second-order Møller-Plesset perturbation theory calculations for molecules, which we were previously unable to treat. In particular, for the 168-atom valinomycin molecule in a cc-pVDZ basis set, we observed speedups of 13.8, 7.8, and 10.1 times for single-, double- and mixed-precision general matrix multiply (SGEMM, DGEMM, and MGEMM), respectively. The corresponding errors in the correlation energy were reduced from -10.0 to -1.2 kcal mol⁻¹ for SGEMM and MGEMM, respectively, while higher accuracy can be easily achieved with a different choice of cutoff parameter.
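
    The mixed-precision idea can be sketched in a few lines of numpy: split each matrix by a magnitude cutoff, run the bulk of the product in single precision, and evaluate the (ideally sparse) contributions of large entries in double precision. A conceptual sketch of the splitting scheme, not the GPU library itself:

```python
# Conceptual MGEMM-style mixed-precision multiply: small-magnitude entries
# multiply in float32, contributions involving large entries are corrected
# in float64.  Illustration of the splitting idea only, not the GPU library.
import numpy as np

def mgemm_like(A, B, cutoff):
    A_big = np.where(np.abs(A) > cutoff, A, 0.0)
    B_big = np.where(np.abs(B) > cutoff, B, 0.0)
    A_small, B_small = A - A_big, B - B_big
    bulk = (A_small.astype(np.float32) @ B_small.astype(np.float32)).astype(np.float64)
    correction = A_big @ B + A_small @ B_big      # exact decomposition of A @ B
    return bulk + correction

rng = np.random.default_rng(7)
A, B = rng.normal(size=(300, 300)), rng.normal(size=(300, 300))
A[rng.random(A.shape) < 0.01] *= 100.0            # a few large-magnitude entries

exact = A @ B
single = (A.astype(np.float32) @ B.astype(np.float32)).astype(np.float64)
print("float32 error:", np.max(np.abs(exact - single)))
print("mixed   error:", np.max(np.abs(exact - mgemm_like(A, B, cutoff=10.0))))
```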

  10. Interactive graphical model building using telepresence and virtual reality

    SciTech Connect

    Cooke, C.; Stansfield, S.

    1993-10-01

    This paper presents a prototype system developed at Sandia National Laboratories to create and verify computer-generated graphical models of remote physical environments. The goal of the system is to create an interface between an operator and a computer vision system so that graphical models can be created interactively. Virtual reality and telepresence are used to allow interaction between the operator, computer, and remote environment. A stereo view of the remote environment is produced by two CCD cameras. The cameras are mounted on a three degree-of-freedom platform which is slaved to a mechanically-tracked, stereoscopic viewing device. This gives the operator a sense of immersion in the physical environment. The stereo video is enhanced by overlaying the graphical model onto it. Overlay of the graphical model onto the stereo video allows visual verification of graphical models. Creation of a graphical model is accomplished by allowing the operator to assist the computer in modeling. The operator controls a 3-D cursor to mark objects to be modeled. The computer then automatically extracts positional and geometric information about the object and creates the graphical model.

  11. PKgraph: an R package for graphically diagnosing population pharmacokinetic models.

    PubMed

    Sun, Xiaoyong; Wu, Kai; Cook, Dianne

    2011-12-01

    Population pharmacokinetic (PopPK) modeling has become increasingly important in drug development because it handles unbalanced design, sparse data and the study of individual variation. However, the increased complexity of the model makes it more of a challenge to diagnose the fit. Graphics can play an important and unique role in PopPK model diagnostics. The software described in this paper, PKgraph, provides a graphical user interface for PopPK model diagnosis. It also provides an integrated and comprehensive platform for the analysis of pharmacokinetic data including exploratory data analysis, goodness of model fit, model validation and model comparison. Results from a variety of model-fitting software, including NONMEM, Monolix, SAS and R, can be used. PKgraph is programmed in R, and uses the R packages lattice and ggplot2 for static graphics, and rggobi for interactive graphics.

  12. Multibody dynamics model building using graphical interfaces

    NASA Technical Reports Server (NTRS)

    Macala, Glenn A.

    1989-01-01

    In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer codes. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress toward a more natural method of data input for dynamics programs, the graphical interface, is described.

  13. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have text fields, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  14. A probabilistic graphical model based stochastic input model construction

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2014-09-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media.
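
    The structure-learning ingredient above, a conditional independence test between two variables given others, can be sketched via partial correlation with a Fisher z p-value (a standard choice of test, used here purely for illustration):

```python
# One conditional-independence test of the kind used to learn the dependence
# structure: partial correlation of x and y given Z from regression
# residuals, with a Fisher z-transform p-value.  Standard textbook test.
import numpy as np
from scipy import stats

def partial_corr_test(x, y, Z):
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = np.corrcoef(rx, ry)[0, 1]
    n, k = len(x), Z.shape[1]
    z = np.arctanh(r) * np.sqrt(n - k - 3)        # Fisher z statistic
    return r, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(8)
Z = rng.normal(size=(500, 2))
x = Z[:, 0] + 0.1 * rng.normal(size=500)
y = Z[:, 0] + 0.1 * rng.normal(size=500)          # x and y independent given Z
print(partial_corr_test(x, y, Z))                 # near-zero r, large p-value
```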

  15. Progress in mix modeling

    SciTech Connect

    Harrison, A.K.

    1997-03-14

    We have identified the Cranfill multifluid turbulence model (Cranfill, 1992) as a starting point for the development of subgrid models of instability, turbulence and mixing processes. We have differenced the closed system of equations in conservation form, and coded them in the object-oriented hydrodynamics code FLAG, which is to be used as a testbed for such models.

  16. ADVANCED MIXING MODELS

    SciTech Connect

    Lee, S; Richard Dimenna, R; David Tamburello, D

    2008-11-13

    The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank with one to four dual-nozzle jet mixers located within the tank. The typical criteria to establish a mixed condition in a tank are based on the number of pumps in operation and the time duration of operation. To ensure that a mixed condition is achieved, operating times are set conservatively long. This approach results in high operational costs because of the long mixing times and high maintenance and repair costs for the same reason. A significant reduction in both of these costs might be realized by reducing the required mixing time based on calculating a reliable indicator of mixing with a suitably validated computer code. The work described in this report establishes the basis for further development of the theory leading to the identified mixing indicators, the benchmark analyses demonstrating their consistency with widely accepted correlations, and the application of those indicators to SRS waste tanks to provide a better, physically based estimate of the required mixing time. Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. If shorter mixing times can be shown to support Defense Waste Processing Facility (DWPF) or other feed requirements, longer pump lifetimes can be achieved with associated operational cost and

  17. Integrating Surface Modeling into the Engineering Design Graphics Curriculum

    ERIC Educational Resources Information Center

    Hartman, Nathan W.

    2006-01-01

    It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…

  18. ADVANCED MIXING MODELS

    SciTech Connect

    Lee, S; Dimenna, R; Tamburello, D

    2011-02-14

    Waste storage tanks at SRS contain settled sludge which varies in height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (e.g. typically ~13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators to answer the questions concerning sludge mixing stated above in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?

  19. VR Lab ISS Graphics Models Data Package

    NASA Technical Reports Server (NTRS)

    Paddock, Eddie; Homan, Dave; Bell, Brad; Miralles, Evely; Hoblit, Jeff

    2016-01-01

    All the ISS models are saved in AC3D model format, a text-based format that can be loaded into Blender and exported from there to other formats, including FBX. The models are saved at two different levels of detail, one labeled "LOWRES" and the other labeled "HIRES". There are two ".str" files (HIRES_scene_load.str and LOWRES_scene_load.str) that give the hierarchical relationship of the different nodes and the models associated with each node for both the "HIRES" and "LOWRES" model sets. All the images used for texturing are stored in Windows ".bmp" format for easy importing.

  20. Teaching Geometry through Dynamic Modeling in Introductory Engineering Graphics.

    ERIC Educational Resources Information Center

    Wiebe, Eric N.; Branoff, Ted J.; Hartman, Nathan W.

    2003-01-01

    Examines how constraint-based 3D modeling can be used as a vehicle for rethinking instructional approaches to engineering design graphics. Focuses on moving from a mode of instruction based on the crafting by students and assessment by instructors of static 2D drawings and 3D models. Suggests that the new approach is better aligned with…

  1. A Constructivist Design and Learning Model: Time for a Graphic.

    ERIC Educational Resources Information Center

    Rogers, Patricia L.; Mack, Michael

    At the University of Minnesota, a model, visual representation or "graphic" that incorporated both a systematic design process and a constructivist approach was used as a framework for course design. This paper describes experiences of applying the Instructional Context Design (ICD) framework in both the K-12 and higher education settings. The…

  2. MAGIC: Model and Graphic Information Converter

    NASA Technical Reports Server (NTRS)

    Herbert, W. C.

    2009-01-01

    MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.

  3. Workflow modeling in the graphic arts and printing industry

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris

    2003-12-01

    Over the last few years, a lot of effort has been spent on the standardization of the workflow in the graphic arts and printing industry. The main reasons for this standardization are two-fold: first of all, the need to represent all aspects of products, processes and resources in a uniform, digital framework and, secondly, the need to have different systems communicate with each other without having to implement dedicated drivers or protocols. For many years, a number of organizations in the IT sector have been quite busy developing models and languages on the topic of workflow modeling. In addition to the more formal methods (such as, e.g., extended finite state machines, Petri nets, Markov chains, etc.) introduced a number of decades ago, more pragmatic methods have been proposed quite recently. We think hereby in particular of the activities of the Workflow Management Coalition, which resulted in an XML-based Process Definition Language. Although one might be tempted to use the already established standards in the graphic environment, one should be well aware of the complexity and uniqueness of the graphic arts workflow. In this paper, we will show that it is quite hard, though not impossible, to model the graphic arts workflow using the already established workflow systems. After a brief summary of the graphic arts workflow requirements, we will show why the traditional models are less suitable to use. It will turn out that one of the main reasons for the incompatibility is that the graphic arts workflow is primarily resource driven; this means that the activation of processes depends on the status of different incoming resources. The fact that processes can start running with only partial availability of the input resources is a further complication that calls for additional knowledge at the process level. In the second part of this paper, we will discuss in more detail the different software components that are available in any graphic enterprise. In the last part, we will

  4. Exactness of belief propagation for some graphical models with loops

    NASA Astrophysics Data System (ADS)

    Chertkov, Michael

    2008-10-01

    It is well known that an arbitrary graphical model of statistical inference defined on a tree, i.e. on a graph without loops, is solved exactly and efficiently by an iterative belief propagation (BP) algorithm convergent to the unique minimum of the so-called Bethe free energy functional. For a general graphical model on a loopy graph, the functional may show multiple minima, the iterative BP algorithm may converge to one of the minima or may not converge at all, and the global minimum of the Bethe free energy functional is not guaranteed to correspond to the optimal maximum likelihood (ML) solution in the zero-temperature limit. However, there are exceptions to this general rule, discussed by Kolmogorov and Wainwright (2005) and by Bayati et al (2006, 2008) in two different contexts, where the zero-temperature version of the BP algorithm finds the ML solution for special models on graphs with loops. These two models share a key feature: their ML solutions can be found by an efficient linear programming (LP) algorithm with a totally unimodular (TUM) matrix of constraints. Generalizing the two models, we consider a class of graphical models reducible in the zero-temperature limit to LP with TUM constraints. Assuming that a gedanken algorithm, g-BP, for finding the global minimum of the Bethe free energy is available, we show that in the limit of zero temperature, g-BP outputs the ML solution. Our consideration is based on the equivalence established between the gapless LP relaxation of the graphical model in the T → 0 limit and the respective LP version of the Bethe free energy minimization.
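
    Sum-product belief propagation of the kind discussed is exact on trees; a minimal generic sketch on a three-node binary chain, checking the BP marginal against brute-force enumeration:

```python
# Sum-product BP on the chain x0 - x1 - x2 with binary variables.  On a tree
# BP is exact, so the belief at x1 matches brute-force enumeration.
import itertools
import numpy as np

psi01 = np.array([[2.0, 1.0], [1.0, 3.0]])   # pairwise factor on (x0, x1)
psi12 = np.array([[1.0, 2.0], [2.0, 1.0]])   # pairwise factor on (x1, x2)

m0_to_1 = psi01.T @ np.ones(2)               # message from x0: sum over x0
m2_to_1 = psi12 @ np.ones(2)                 # message from x2: sum over x2
belief1 = m0_to_1 * m2_to_1
belief1 /= belief1.sum()

p = np.zeros(2)                              # brute-force marginal of x1
for x0, x1, x2 in itertools.product((0, 1), repeat=3):
    p[x1] += psi01[x0, x1] * psi12[x1, x2]
p /= p.sum()
print(belief1, p)                            # identical distributions
```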

  5. Parallelizing the Cellular Potts Model on graphics processing units

    NASA Astrophysics Data System (ADS)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs ~80× faster than serial implementations, and ~5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
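
    The checkerboard decomposition mentioned above works because same-colour sites on a 4-neighbour lattice share no bonds, so an entire colour class can be updated simultaneously; a vectorized CPU sketch of that update pattern using simple Ising-style Metropolis flips (far simpler than full Cellular Potts dynamics):

```python
# Checkerboard (red/black) parallel updates on a periodic lattice: sites of
# one colour have no 4-neighbour interactions with each other, so they can
# all be updated at once.  Ising-style Metropolis flips illustrate the
# pattern; real CPM dynamics are considerably more involved.
import numpy as np

rng = np.random.default_rng(9)
L, beta = 64, 0.6
s = rng.choice([-1, 1], size=(L, L))
ii, jj = np.indices((L, L))
colour_of = (ii + jj) % 2

for sweep in range(100):
    for colour in (0, 1):
        nbr_sum = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                   np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2.0 * s * nbr_sum                        # energy change if flipped
        accept = rng.random((L, L)) < np.exp(-beta * dE)
        flip = (colour_of == colour) & accept
        s = np.where(flip, -s, s)
print("magnetization:", s.mean())
```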

  6. Detecting relationships between physiological variables using graphical models.

    PubMed Central

    Imhoff, Michael; Fried, Ronald; Gather, Ursula

    2002-01-01

    In intensive care, physiological variables of the critically ill are measured and recorded at short time intervals. The proper extraction and interpretation of the information contained in this flood of data can hardly be done by experience alone. Intelligent alarm systems are needed to provide suitable bedside decision support. So far there is no commonly accepted standard for detecting the actual clinical state from the patient record. We use the statistical methodology of graphical models based on partial correlations to detect time-varying relationships between physiological variables. Graphical models provide information on the relationships among physiological variables that is helpful, e.g., for variable selection. Separate analyses for different pathophysiological states show that distinct clinical states are characterized by distinct partial correlation structures. Hence, this technique can provide new insights into physiological mechanisms. PMID:12463843
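
    The partial correlations these graphical models are built on come directly from the inverse covariance (precision) matrix; a minimal sketch, with simulated series standing in for the physiological variables:

```python
# Partial correlations from the precision matrix P = inv(cov):
#   pcorr(i, j | rest) = -P[i, j] / sqrt(P[i, i] * P[j, j]);
# near-zero entries suggest conditional independence (no edge).
import numpy as np

rng = np.random.default_rng(10)
z = rng.normal(size=1000)                       # shared driver of vars 0 and 1
X = np.column_stack([z + 0.3 * rng.normal(size=1000),
                     z + 0.3 * rng.normal(size=1000),
                     rng.normal(size=1000)])    # var 2 unrelated

P = np.linalg.inv(np.cov(X, rowvar=False))
D = np.sqrt(np.outer(np.diag(P), np.diag(P)))
pcorr = -P / D
np.fill_diagonal(pcorr, 1.0)
print(np.round(pcorr, 2))   # strong 0-1 partial correlation, ~0 elsewhere
```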

  7. Probabilistic graphic models applied to identification of diseases.

    PubMed

    Sato, Renato Cesar; Sato, Graziela Tiemy Kajita

    2015-01-01

    Decision-making is fundamental when making a diagnosis or choosing a treatment. The broad dissemination of computerized systems and databases allows the systematization of part of these decisions through artificial intelligence. In this text, we present the basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used to make diagnoses of Alzheimer's disease, sleep apnea and heart diseases.

  8. Probabilistic graphic models applied to identification of diseases

    PubMed Central

    Sato, Renato Cesar; Sato, Graziela Tiemy Kajita

    2015-01-01

    Decision-making is fundamental when making a diagnosis or choosing a treatment. The broad dissemination of computerized systems and databases allows the systematization of part of these decisions through artificial intelligence. In this text, we present the basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used to make diagnoses of Alzheimer's disease, sleep apnea and heart diseases. PMID:26154555

  9. Identifying gene regulatory network rewiring using latent differential graphical models.

    PubMed

    Tian, Dechao; Gu, Quanquan; Ma, Jian

    2016-09-30

    Gene regulatory networks (GRNs) are highly dynamic among different tissue types. Identifying tissue-specific gene regulation is critically important to understand gene function in a particular cellular context. Graphical models have been used to estimate GRNs from gene expression data to distinguish direct interactions from indirect associations. However, most existing methods estimate GRNs for a specific cell/tissue type or in a tissue-naive way, or do not specifically focus on network rewiring between different tissues. Here, we describe a new method called Latent Differential Graphical Model (LDGM). The motivation of our method is to estimate the differential network between two tissue types directly, without inferring the network for individual tissues, which has the advantage of utilizing a much smaller sample size to achieve reliable differential network estimation. Our simulation results demonstrate that LDGM consistently outperforms other Gaussian graphical model based methods. We further evaluated LDGM by applying it to the brain and blood gene expression data from the GTEx consortium. We also applied LDGM to identify network rewiring between cancer subtypes using the TCGA breast cancer samples. Our results suggest that LDGM is an effective method for inferring differential networks from high-throughput gene expression data to identify GRN dynamics among different cellular conditions.
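
    The naive alternative that LDGM avoids, estimating each tissue's precision matrix separately and differencing, can be sketched as a baseline for comparison (an illustration of the contrast drawn above, not the LDGM estimator):

```python
# Naive differential-network baseline: fit one graphical lasso per tissue
# and subtract the precision matrices.  LDGM's point is to estimate the
# *difference* directly with fewer samples; this is only the baseline.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(11)
P1 = np.eye(4); P1[0, 1] = P1[1, 0] = 0.4   # edge 0-1 present in tissue 1 only
P2 = np.eye(4)
X1 = rng.multivariate_normal(np.zeros(4), np.linalg.inv(P1), size=300)
X2 = rng.multivariate_normal(np.zeros(4), np.linalg.inv(P2), size=300)

G1 = GraphicalLasso(alpha=0.05).fit(X1).precision_
G2 = GraphicalLasso(alpha=0.05).fit(X2).precision_
print(np.round(G1 - G2, 2))                 # rewired edge shows up at (0, 1)
```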

  10. Identifying gene regulatory network rewiring using latent differential graphical models

    PubMed Central

    Tian, Dechao; Gu, Quanquan; Ma, Jian

    2016-01-01

    Gene regulatory networks (GRNs) are highly dynamic among different tissue types. Identifying tissue-specific gene regulation is critically important to understand gene function in a particular cellular context. Graphical models have been used to estimate GRNs from gene expression data to distinguish direct interactions from indirect associations. However, most existing methods estimate GRNs for a specific cell/tissue type or in a tissue-naive way, or do not specifically focus on network rewiring between different tissues. Here, we describe a new method called Latent Differential Graphical Model (LDGM). The motivation of our method is to estimate the differential network between two tissue types directly, without inferring the network for individual tissues, which has the advantage of utilizing a much smaller sample size to achieve reliable differential network estimation. Our simulation results demonstrate that LDGM consistently outperforms other Gaussian graphical model based methods. We further evaluated LDGM by applying it to the brain and blood gene expression data from the GTEx consortium. We also applied LDGM to identify network rewiring between cancer subtypes using the TCGA breast cancer samples. Our results suggest that LDGM is an effective method for inferring differential networks from high-throughput gene expression data to identify GRN dynamics among different cellular conditions. PMID:27378774

  11. Mixed additive models

    NASA Astrophysics Data System (ADS)

    Carvalho, Francisco; Covas, Ricardo

    2016-06-01

    We consider mixed models y = Σ_{i=0}^{w} X_i β_i with V(y) = Σ_{i=1}^{w} θ_i M_i, where M_i = X_i X_i^T, i = 1, ..., w, and μ = X_0 β_0. For these we estimate the variance components θ_1, ..., θ_w, as well as estimable vectors, through the decomposition of the initial model into sub-models y(h), h ∈ Γ, with V(y(h)) = γ(h) I_{g(h)}, h ∈ Γ. Moreover, we consider L extensions of these models, i.e., ẙ = L y + ε, where L = D(1_{n_1}, ..., 1_{n_w}) and ε, independent of y, has null mean vector and variance-covariance matrix θ_{w+1} I_n, where n = Σ_{i=1}^{w} n_i.

  12. The Michigan Space Weather Modeling Framework (SWMF) Graphical User Interface

    NASA Astrophysics Data System (ADS)

    de Zeeuw, D.; Gombosi, T.; Toth, G.; Ridley, A.

    2007-05-01

    The Michigan Space Weather Modeling Framework (SWMF) is a powerful tool available for the community that has been used to model from the Sun to Earth and beyond. As a research tool, however, it still requires user experience with parallel compute clusters and visualization tools. Thus, we have developed a graphical user interface (GUI) that assists with configuring, compiling, and running the SWMF, as well as visualizing the model output. This is accomplished through a portable web interface. Live examples will be demonstrated and visualization of several archived events will be shown.

  13. Layered Graphical Models for Tracking Partially Occluded Objects.

    PubMed

    Ablavsky, Vitaly; Sclaroff, Stan

    2011-09-01

    We propose a representation for scenes containing relocatable objects that can cause partial occlusions of people in a camera's field of view. In many practical applications, relocatable objects tend to appear often; therefore, models for them can be learned offline and stored in a database. We formulate an occluder-centric representation, called a graphical model layer, where a person's motion in the ground plane is defined as a first-order Markov process on activity zones, while image evidence is aggregated in 2D observation regions that are depth-ordered with respect to the occlusion mask of the relocatable object. We represent real-world scenes as a composition of depth-ordered, interacting graphical model layers, and account for image evidence in a way that handles mutual overlap of the observation regions and their occlusions by the relocatable objects. These layers interact: Proximate ground-plane zones of different model instances are linked to allow a person to move between the layers, and image evidence is shared between the observation regions of these models. We demonstrate our formulation in tracking pedestrians in the vicinity of parked vehicles. Our results compare favorably with a sprite-learning algorithm, with a pedestrian tracker based on deformable contours, and with pedestrian detectors. PMID:21383394

  14. SN_GUI: a graphical user interface for snowpack modeling

    NASA Astrophysics Data System (ADS)

    Spreitzhofer, G.; Fierz, C.; Lehning, M.

    2004-10-01

    SNOWPACK is a physical snow cover model. The model not only serves as a valuable research tool, but also runs operationally on a network of high Alpine automatic weather and snow measurement sites. In order to facilitate the operation of SNOWPACK and the interpretation of the results obtained by this model, a user-friendly graphical user interface for snowpack modeling, named SN_GUI, was created. This Java-based and thus platform-independent tool can be operated in two modes, one designed to fulfill the requirements of avalanche warning services (e.g. by providing information about critical layers within the snowpack that are closely related to the avalanche activity), and the other one offering a variety of additional options satisfying the needs of researchers. The user of SN_GUI is graphically guided through the entire process of creating snow cover simulations. The starting point is the efficient creation of input parameter files for SNOWPACK, followed by the launching of SNOWPACK with a variety of parameter settings. Finally, after the successful termination of the run, a number of interactive display options may be used to visualize the model output. Among these are vertical profiles and time profiles for many parameters. Besides other features, SN_GUI allows the use of various color, time and coordinate scales, and the comparison of modelled and measured parameters.

  15. Collaborative multi organ segmentation by integrating deformable and graphical models.

    PubMed

    Uzunbaş, Mustafa Gökhan; Chen, Chao; Zhang, Shaoting; Poh, Kilian M; Li, Kang; Metaxas, Dimitris

    2013-01-01

    Organ segmentation is a challenging problem on which significant progress has been made. Deformable models (DM) and graphical models (GM) are two important categories of optimization-based image segmentation methods. Efforts have been made to integrate the two types of models into one framework, but previous methods are not designed to segment multiple organs simultaneously and accurately. In this paper, we propose a hybrid multi-organ segmentation approach by integrating DM and GM in a coupled optimization framework. Specifically, we show that region-based deformable models can be integrated with Markov random fields (MRF), such that multiple models' evolutions are driven by a maximum a posteriori (MAP) inference. This brings global and local deformation constraints into a unified framework for simultaneous segmentation of multiple objects in an image. We validate the proposed method on two challenging multi-organ segmentation problems, and the results are promising. PMID:24579136

  16. Implementing the lattice Boltzmann model on commodity graphics hardware

    NASA Astrophysics Data System (ADS)

    Kaufman, Arie; Fan, Zhe; Petkov, Kaloian

    2009-06-01

    Modern graphics processing units (GPUs) can perform general-purpose computations in addition to the native specialized graphics operations. Due to the highly parallel nature of graphics processing, the GPU has evolved into a many-core coprocessor that supports high data parallelism. Its performance has been growing at a rate of squared Moore's law, and its peak floating point performance exceeds that of the CPU by an order of magnitude. Therefore, it is a viable platform for time-sensitive and computationally intensive applications. The lattice Boltzmann model (LBM) computations are carried out via linear operations at discrete lattice sites, which can be implemented efficiently using a GPU-based architecture. Our simulations produce results comparable to the CPU version while improving performance by an order of magnitude. We have demonstrated that the GPU is well suited for interactive simulations in many applications, including simulating fire, smoke, lightweight objects in wind, jellyfish swimming in water, and heat shimmering and mirage (using the hybrid thermal LBM). We further advocate the use of a GPU cluster for large scale LBM simulations and for high performance computing. The Stony Brook Visual Computing Cluster has been the platform for several applications, including simulations of real-time plume dispersion in complex urban environments and thermal fluid dynamics in a pressurized water reactor. Major GPU vendors have been targeting the high performance computing market with GPU hardware implementations. Software toolkits such as NVIDIA CUDA provide a convenient development platform that abstracts the GPU and allows access to its underlying stream computing architecture. However, software programming for a GPU cluster remains a challenging task. We have therefore developed the Zippy framework to simplify GPU cluster programming. Zippy is based on global arrays combined with the stream programming model and it hides the low-level details of the
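
    The locality that makes the LBM a natural GPU target is visible in a few lines: each lattice site updates from its own distributions plus fixed-offset neighbours. Below is a minimal CPU NumPy sketch of one D2Q9 collide-and-stream step with periodic boundaries; it illustrates the scheme's structure under assumed toy parameters and is not the authors' GPU or Zippy code.

      # One collide-and-stream step of a D2Q9 lattice Boltzmann model.
      import numpy as np

      c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                    [1,1],[-1,1],[-1,-1],[1,-1]])     # discrete velocities
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)        # lattice weights
      nx = ny = 64
      tau = 0.6                                       # relaxation time (toy)
      f = np.tile(w[:, None, None], (1, nx, ny))      # fluid at rest, rho = 1
      f[1, nx//2, ny//2] += 0.01                      # small kick for non-trivial flow

      def step(f):
          rho = f.sum(axis=0)
          ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
          uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
          cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
          feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2
                                          - 1.5*(ux**2 + uy**2))
          f = f + (feq - f) / tau                     # local BGK collision
          for i in range(9):                          # streaming to neighbours
              f[i] = np.roll(f[i], (c[i, 0], c[i, 1]), axis=(0, 1))
          return f

      f = step(f)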

  17. Developing satellite ground control software through graphical models

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney; Henderson, Scott; Paterra, Frank; Truszkowski, Walt

    1992-01-01

    This paper discusses a program of investigation into software development as graphical modeling. The goal of this work is a more efficient development and maintenance process for the ground-based software that controls unmanned scientific satellites launched by NASA. The main hypothesis of the program is that modeling of the spacecraft and its subsystems, and reasoning about such models, can--and should--form the key activities of software development; by using such models as inputs, the generation of code to perform various functions (such as simulation and diagnostics of spacecraft components) can be automated. Moreover, we contend that automation can provide significant support for reasoning about the software system at the diagram level.

  18. Graphical LASSO based Model Selection for Time Series

    NASA Astrophysics Data System (ADS)

    Jung, Alexander; Hannak, Gabor; Goertz, Norbert

    2015-10-01

    We propose a novel graphical model selection (GMS) scheme for high-dimensional stationary time series, i.e., discrete-time processes. The method is based on a natural generalization of the graphical LASSO (gLASSO), introduced originally for GMS based on i.i.d. samples, and estimates the conditional independence graph (CIG) of a time series from a finite-length observation. The gLASSO for time series is defined as the solution of an ℓ1-regularized maximum (approximate) likelihood problem. We solve this optimization problem using the alternating direction method of multipliers (ADMM). Our approach is nonparametric, as we do not assume a finite-dimensional (e.g., autoregressive) parametric model for the observed process; instead, we require the process to be sufficiently smooth in the spectral domain. For Gaussian processes, we characterize the performance of our method theoretically by deriving an upper bound on the probability that our algorithm fails to correctly identify the CIG. Numerical experiments demonstrate the ability of our method to recover the correct CIG from a limited number of samples.
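
    The ADMM loop for the i.i.d. gLASSO that this work generalizes is short enough to sketch; in the time-series setting, the empirical covariance S would be replaced by smoothed spectral-density estimates. The sketch below follows the standard ADMM splitting with arbitrary parameter values, as an illustration rather than the authors' implementation.

      # ADMM for the (i.i.d.) graphical lasso:
      #   minimize -logdet(Theta) + tr(S @ Theta) + lam * ||Theta||_1
      import numpy as np

      def glasso_admm(S, lam, rho=1.0, n_iter=200):
          p = S.shape[0]
          Z, U = np.eye(p), np.zeros((p, p))
          for _ in range(n_iter):
              # Theta-update: closed form via eigendecomposition
              vals, vecs = np.linalg.eigh(rho * (Z - U) - S)
              tvals = (vals + np.sqrt(vals**2 + 4 * rho)) / (2 * rho)
              Theta = (vecs * tvals) @ vecs.T
              # Z-update: elementwise soft-thresholding
              A = Theta + U
              Z = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
              U += Theta - Z                          # dual update
          return Z

      rng = np.random.default_rng(2)
      X = rng.standard_normal((500, 8))
      print(np.round(glasso_admm(np.cov(X.T), lam=0.2), 2))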

  19. Ice-sheet modelling accelerated by graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.

  20. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  1. Groundwater modeling and remedial optimization design using graphical user interfaces

    SciTech Connect

    Deschaine, L.M.

    1997-05-01

    The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to effectively communicate the information to the client and regulators, is what differentiates effective consultants from ineffective ones. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. GUIs facilitate both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.

  2. De novo protein conformational sampling using a probabilistic graphical model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-11-01

    Efficient exploration of protein conformational space remains challenging, especially for large proteins, when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with lengths of up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on the relatively larger proteins (longer than 150 residues) in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.

  3. Mode Estimation for High Dimensional Discrete Tree Graphical Models

    PubMed Central

    Chen, Chao; Liu, Han; Metaxas, Dimitris N.; Zhao, Tianqi

    2014-01-01

    This paper studies the following problem: given samples from a high dimensional discrete distribution, we want to estimate the leading (δ, ρ)-modes of the underlying distribution. A point is defined to be a (δ, ρ)-mode if it is a local optimum of the density within a δ-neighborhood under metric ρ. As we increase the “scale” parameter δ, the neighborhood size increases and the total number of modes monotonically decreases. The sequence of (δ, ρ)-modes reveals intrinsic topographical information about the underlying distribution. Though the mode finding problem is generally intractable in high dimensions, this paper unveils that, if the distribution can be approximated well by a tree graphical model, mode characterization is significantly easier. An efficient algorithm with provable theoretical guarantees is proposed and applied in applications such as data analysis and multiple predictions. PMID:25620859
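
    For the simplest tree, a chain, the single leading mode can be computed exactly by max-product dynamic programming, which is the kind of tree tractability the (δ, ρ)-mode algorithm exploits. The toy potentials below are invented, and the sketch recovers only the global mode, not the full (δ, ρ)-mode sequence.

      # Max-product (Viterbi) on a chain-structured discrete graphical model:
      # exact global mode in O(n * k^2) time.
      import numpy as np

      node = np.log(np.array([[0.6, 0.4],        # per-node potentials
                              [0.5, 0.5],
                              [0.2, 0.8]]))
      edge = np.log(np.array([[0.7, 0.3],        # shared pairwise potential
                              [0.3, 0.7]]))

      n, k = node.shape
      score, back = node[0].copy(), np.zeros((n, k), dtype=int)
      for t in range(1, n):
          cand = score[:, None] + edge + node[t] # k x k extension table
          back[t] = cand.argmax(axis=0)
          score = cand.max(axis=0)

      mode = [int(score.argmax())]
      for t in range(n - 1, 0, -1):              # backtrack
          mode.append(int(back[t][mode[-1]]))
      print(mode[::-1])                          # most probable joint state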

  4. User's instructions for the GE cardiovascular model to simulate LBNP and tilt experiments, with graphic capabilities

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The present form of this cardiovascular model simulates both 1-g and zero-g LBNP (lower body negative pressure) experiments and tilt experiments. In addition, the model simulates LBNP experiments at any body angle. The model is currently accessible on the Univac 1110 Time-Shared System in an interactive operational mode. Model output may be in tabular form and/or graphic form. The graphic capabilities are programmed for the Tektronix 4010 graphics terminal and the Univac 1110.

  5. Kinematic modelling of disc galaxies using graphics processing units

    NASA Astrophysics Data System (ADS)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
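
    At its core, the procedure being accelerated is nonlinear least squares over kinematic model parameters. As a purely serial CPU illustration, the sketch below fits a toy arctan rotation-curve model with SciPy; the model form and synthetic data are assumptions for illustration, not GBKFIT's actual galaxy model or optimizer.

      # Toy kinematic model fit: v(r) = v_max * (2/pi) * arctan(r / r_t).
      import numpy as np
      from scipy.optimize import least_squares

      def v_model(params, r):
          v_max, r_t = params
          return v_max * (2 / np.pi) * np.arctan(r / r_t)

      rng = np.random.default_rng(3)
      r = np.linspace(0.5, 20.0, 50)                   # radii (toy units)
      v_obs = v_model([220.0, 2.5], r) + rng.normal(0, 5, r.size)

      fit = least_squares(lambda p: v_model(p, r) - v_obs, x0=[100.0, 1.0])
      print(fit.x)                                     # close to [220, 2.5]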

  6. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during the decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The resulting numerical model accuracy was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for decreasing significantly the simulation time [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume method. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References: [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014). A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources, 71, 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013). 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics, 225, 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014). An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software, 78, 1-15. [Lacasta

  7. Dynamics of Mental Model Construction from Text and Graphics

    ERIC Educational Resources Information Center

    Hochpöchler, Ulrike; Schnotz, Wolfgang; Rasch, Thorsten; Ullrich, Mark; Horz, Holger; McElvany, Nele; Baumert, Jürgen

    2013-01-01

    When students read for learning, they frequently are required to integrate text and graphics information into coherent knowledge structures. The following study aimed at analyzing how students deal with texts and how they deal with graphics when they try to integrate the two sources of information. Furthermore, the study investigated differences…

  8. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    PubMed

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator of human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will have an impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  9. MixSIAR: advanced stable isotope mixing models in R

    EPA Science Inventory

    Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...

  10. Modeling Mix in ICF Implosions

    NASA Astrophysics Data System (ADS)

    Weber, C. R.; Clark, D. S.; Chang, B.; Eder, D. C.; Haan, S. W.; Jones, O. S.; Marinak, M. M.; Peterson, J. L.; Robey, H. F.

    2014-10-01

    The observation of ablator material mixing into the hot spot of ICF implosions correlates with reduced yield in National Ignition Campaign (NIC) experiments. Higher-Z ablator material radiatively cools the central hot spot, inhibiting thermonuclear burn. This talk focuses on modeling a "high-mix" implosion from the NIC, where greater than 1000 ng of ablator material was inferred to have mixed into the hot spot. Standard post-shot modeling of this implosion does not predict the large amounts of ablator mix necessary to explain the data. Other issues are explored in this talk, and sensitivity to the method of radiation transport is found. Compared with radiation diffusion, Sn transport can increase ablation-front growth and alter the blow-off dynamics of capsule dust. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for its liquid hydrogen and liquid oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, in order to save the maximum amount of time and money and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and get the output from the Simulink model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI will construct a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values from each test of the simulation so that they may be graphed and compared to other values.

  12. JACK - ANTHROPOMETRIC MODELING SYSTEM FOR SILICON GRAPHICS WORKSTATIONS

    NASA Technical Reports Server (NTRS)

    Smith, B.

    1994-01-01

    JACK is an interactive graphics program developed at the University of Pennsylvania that displays and manipulates articulated geometric figures. JACK is typically used to observe how a human mannequin interacts with its environment and what effects body types will have upon the performance of a task in a simulated environment. Any environment can be created, and any number of mannequins can be placed anywhere in that environment. JACK includes facilities to construct limited geometric objects, position figures, perform a variety of analyses on the figures, describe the motion of the figures and specify lighting and surface property information for rendering high quality images. JACK is supplied with a variety of body types pre-defined and known to the system. There are both male and female bodies, ranging from the 5th to the 95th percentile, based on NASA Standard 3000. Each mannequin is fully articulated and reflects the joint limitations of a normal human. JACK is an editor for manipulating previously defined objects known as "Peabody" objects. Used to describe the figures as well as the internal data structure for representing them, Peabody is a language with a powerful and flexible mechanism for representing connectivity between objects, both the joints between individual segments within a figure and arbitrary connections between different figures. Peabody objects are generally comprised of several individual figures, each one a collection of segments. Each segment has a geometry represented by PSURF files that consist of polygons or curved surface patches. Although JACK does not have the capability to create new objects, objects may be created by other geometric modeling programs and then translated into the PSURF format. Environment files are a collection of figures and attributes that may be dynamically moved under the control of an animation file. The animation facilities allow the user to create a sequence of commands that duplicate the movements of a

  13. A Gaussian graphical model approach to climate networks

    NASA Astrophysics Data System (ADS)

    Zerenner, Tanja; Friederichs, Petra; Lehnertz, Klaus; Hense, Andreas

    2014-06-01

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to any physical processes may thus involve simplifications too strong to describe the dynamics of the climate system appropriately.
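
    The edge definition used here rests on a standard identity: partial correlations are rescaled off-diagonal entries of the precision (inverse covariance) matrix. The sketch below, with synthetic data standing in for gridded climate fields, shows how a chain of direct links yields a large Pearson correlation but a near-zero partial correlation for the indirect pair.

      # Partial correlations from the precision matrix:
      #   pcorr_ij = -P_ij / sqrt(P_ii * P_jj)
      import numpy as np

      rng = np.random.default_rng(4)
      X = rng.standard_normal((1000, 5))
      X[:, 1] += 0.8 * X[:, 0]           # direct link 0 -> 1
      X[:, 2] += 0.8 * X[:, 1]           # direct link 1 -> 2 (0-2 indirect)

      P = np.linalg.inv(np.cov(X.T))
      d = np.sqrt(np.diag(P))
      pcorr = -P / np.outer(d, d)
      np.fill_diagonal(pcorr, 1.0)
      print(np.round(pcorr, 2))          # (0,2) near zero; (0,1), (1,2) not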

  14. Node-Based Learning of Multiple Gaussian Graphical Models

    PubMed Central

    Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In

    2014-01-01

    We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137

  15. A Gaussian graphical model approach to climate networks

    SciTech Connect

    Zerenner, Tanja; Friederichs, Petra; Hense, Andreas; Lehnertz, Klaus

    2014-06-15

    Distinguishing between direct and indirect connections is essential when interpreting network structures in terms of dynamical interactions and stability. When constructing networks from climate data the nodes are usually defined on a spatial grid. The edges are usually derived from a bivariate dependency measure, such as Pearson correlation coefficients or mutual information. Thus, the edges indistinguishably represent direct and indirect dependencies. Interpreting climate data fields as realizations of Gaussian Random Fields (GRFs), we have constructed networks according to the Gaussian Graphical Model (GGM) approach. In contrast to the widely used method, the edges of GGM networks are based on partial correlations denoting direct dependencies. Furthermore, GRFs can be represented not only on points in space, but also by expansion coefficients of orthogonal basis functions, such as spherical harmonics. This leads to a modified definition of network nodes and edges in spectral space, which is motivated from an atmospheric dynamics perspective. We construct and analyze networks from climate data in grid point space as well as in spectral space, and derive the edges from both Pearson and partial correlations. Network characteristics, such as mean degree, average shortest path length, and clustering coefficient, reveal that the networks possess an ordered and strongly locally interconnected structure rather than small-world properties. Despite this, the network structures differ strongly depending on the construction method. Straightforward approaches that infer networks from climate data without regard to any physical processes may thus involve simplifications too strong to describe the dynamics of the climate system appropriately.

  16. Understanding of Relation Structures of Graphical Models by Lower Secondary Students

    ERIC Educational Resources Information Center

    van Buuren, Onne; Heck, André; Ellermeijer, Ton

    2016-01-01

    A learning path has been developed on system dynamical graphical modelling, integrated into the Dutch lower secondary physics curriculum. As part of the developmental research for this learning path, students' understanding of the relation structures shown in the diagrams of graphical system dynamics based models has been investigated. One of our…

  17. A mixed relaxed clock model.

    PubMed

    Lartillot, Nicolas; Phillips, Matthew J; Ronquist, Fredrik

    2016-07-19

    Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall into two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions from its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue 'Dating species divergences using rocks and clocks'.

  18. A mixed relaxed clock model

    PubMed Central

    2016-01-01

    Over recent years, several alternative relaxed clock models have been proposed in the context of Bayesian dating. These models fall into two distinct categories: uncorrelated and autocorrelated across branches. The choice between these two classes of relaxed clocks is still an open question. More fundamentally, the true process of rate variation may have both long-term trends and short-term fluctuations, suggesting that more sophisticated clock models unfolding over multiple time scales should ultimately be developed. Here, a mixed relaxed clock model is introduced, which can be mechanistically interpreted as a rate variation process undergoing short-term fluctuations on top of Brownian long-term trends. Statistically, this mixed clock represents an alternative solution to the problem of choosing between autocorrelated and uncorrelated relaxed clocks, by proposing instead to combine their respective merits. Fitting this model on a dataset of 105 placental mammals, using both node-dating and tip-dating approaches, suggests that the two pure clocks, Brownian and white noise, are rejected in favour of a mixed model with approximately equal contributions from its uncorrelated and autocorrelated components. The tip-dating analysis is particularly sensitive to the choice of the relaxed clock model. In this context, the classical pure Brownian relaxed clock appears to be overly rigid, leading to biases in divergence time estimation. By contrast, the use of a mixed clock leads to more recent and more reasonable estimates for the crown ages of placental orders and superorders. Altogether, the mixed clock introduced here represents a first step towards empirically more adequate models of the patterns of rate variation across phylogenetic trees. This article is part of the themed issue ‘Dating species divergences using rocks and clocks’. PMID:27325829

  19. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). This implementation is based on a second-order centred-difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size

  20. Bayesian stable isotope mixing models

    EPA Science Inventory

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...

  1. An Item Response Unfolding Model for Graphic Rating Scales

    ERIC Educational Resources Information Center

    Liu, Ying

    2009-01-01

    The graphic rating scale, a measurement tool used in many areas of psychology, usually takes a form of a fixed-length line segment, with both ends bounded and labeled as extreme responses. The raters mark somewhere on the line, and the length of the line segment from one endpoint to the mark is taken as the measure. An item response unfolding…

  2. Graphics modelling of non-contact thickness measuring robotics work cell

    NASA Technical Reports Server (NTRS)

    Warren, Charles W.

    1990-01-01

    A system was developed for measuring, in real time, the thickness of a sprayable insulation during its application. The system was graphically modelled, off-line, using a state-of-the-art graphics workstation and associated software. This model was to contain a 3D color model of a workcell containing a robot and an air bearing turntable. A communication link was established between the graphics workstations and the robot's controller. Sequences of robot motion generated by the computer simulation are transmitted to the robot for execution.

  3. Quark Mixing and Preon Model

    NASA Astrophysics Data System (ADS)

    Senju, H.

    1991-07-01

    Inspired by unique features of the preon-subpreon model, we propose a new scheme for quark mixing. In our scheme, the mass relations $m_d \ll m_s \ll m_b$ and $m_u \ll m_c \ll m_t$ are naturally understood. The resultant CKM matrix has very nice properties. The fact that $|V_{us}|$ and $|V_{cd}|$ are remarkably large compared with other off-diagonal elements is naturally understood. $|V_{cb}| \approx |V_{ts}|$ is predicted and their small values are explained. $|V_{ub}|$ and $|V_{td}|$ are predicted to be much smaller than $|V_{cb}|$. The parametrization-invariant measure of CP violation, $J$, is predicted to be $|V_{ud}|\,|V_{ub}|\,|V_{td}|\sin\phi$. The mass relations and mixings of $q'$, $q''$, $l_s$ and leptons are also discussed.
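
    For reference, the parametrization invariance of $J$ can be checked numerically from any CKM parametrization. The sketch below evaluates the textbook identity $J=\mathrm{Im}(V_{us}V_{cb}V_{ub}^{*}V_{cs}^{*})$ using the standard parametrization with rough PDG-like input values; it does not implement the preon-model scheme proposed in this abstract.

      # Jarlskog invariant J from the standard CKM parametrization.
      import numpy as np

      s12, s23, s13, delta = 0.2250, 0.0420, 0.0037, 1.14   # rough values
      c12, c23, c13 = (np.sqrt(1 - s**2) for s in (s12, s23, s13))
      e = np.exp(-1j * delta)      # e = exp(-i*delta), so 1/e = exp(+i*delta)

      V = np.array([
          [c12*c13,                   s12*c13,                  s13*e],
          [-s12*c23 - c12*s23*s13/e,  c12*c23 - s12*s23*s13/e,  s23*c13],
          [s12*s23 - c12*c23*s13/e,  -c12*s23 - s12*c23*s13/e,  c23*c13],
      ])

      J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
      print(J)                     # about 3e-5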

  4. Graphics-based intelligent search and abstracting using Data Modeling

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.

    2002-11-01

    This paper presents an autonomous text and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprised of the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.

  5. Top View of a Computer Graphic Model of the Opportunity Lander and Rover

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] PIA05265

    A computer graphics model of the Opportunity lander and rover is superimposed on the martian terrain where Opportunity landed.

  6. Interactive computer graphic surface modeling of three-dimensional solid domains for boundary element analysis

    NASA Technical Reports Server (NTRS)

    Perucchio, R.; Ingraffea, A. R.

    1984-01-01

    The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.

  7. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language

    PubMed Central

    Höhna, Sebastian; Landis, Michael J.

    2016-01-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com. [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.] PMID:27235697

  8. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language.

    PubMed

    Höhna, Sebastian; Landis, Michael J; Heath, Tracy A; Boussau, Bastien; Lartillot, Nicolas; Moore, Brian R; Huelsenbeck, John P; Ronquist, Fredrik

    2016-07-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.].

  9. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language.

    PubMed

    Höhna, Sebastian; Landis, Michael J; Heath, Tracy A; Boussau, Bastien; Lartillot, Nicolas; Moore, Brian R; Huelsenbeck, John P; Ronquist, Fredrik

    2016-07-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.]. PMID:27235697

  10. Understanding of Relation Structures of Graphical Models by Lower Secondary Students

    NASA Astrophysics Data System (ADS)

    van Buuren, Onne; Heck, André; Ellermeijer, Ton

    2016-10-01

    A learning path has been developed on system dynamical graphical modelling, integrated into the Dutch lower secondary physics curriculum. As part of the developmental research for this learning path, students' understanding of the relation structures shown in the diagrams of graphical system dynamics based models has been investigated. One of our main findings is that only some students understand these structures correctly. Reality-based interpretation of the diagrams can conceal an incorrect understanding of diagram structures. As a result, students seemingly have no problems interpreting the diagrams until they are asked to construct a graphical model. Misconceptions have been identified that arise either because the equations are not clearly communicated by the diagrams or because the icons used in the diagrams mislead novice modellers. Suggestions are made for improvements.

  11. Graphical Means for Inspecting Qualitative Models of System Behaviour

    ERIC Educational Resources Information Center

    Bouwer, Anders; Bredeweg, Bert

    2010-01-01

    This article presents the design and evaluation of a tool for inspecting conceptual models of system behaviour. The basis for this research is the Garp framework for qualitative simulation. This framework includes modelling primitives, such as entities, quantities and causal dependencies, which are combined into model fragments and scenarios.…

  12. Word-level language modeling for P300 spellers based on discriminative graphical models

    NASA Astrophysics Data System (ADS)

    Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2015-04-01

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
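
    The gain from word-level priors is visible even in a stripped-down Bayesian form: score each vocabulary word by its prior times the per-letter evidence, so that confident letters can rescue ambiguous ones. The sketch below is a toy of that mechanism with an invented vocabulary and fake classifier scores; it is not the discriminative graphical model of the paper.

      # Word-level posterior for a toy P300 speller with a 3-word vocabulary.
      import numpy as np

      vocab = {"cat": 0.5, "car": 0.3, "cap": 0.2}   # toy word priors
      letters = "abcdefghijklmnopqrstuvwxyz"

      def letter_likelihoods(true_letter, conf=0.6):
          """Fake classifier scores: mass on one letter, rest uniform."""
          p = np.full(len(letters), (1 - conf) / (len(letters) - 1))
          p[letters.index(true_letter)] = conf
          return p

      # Evidence for "ca?" where the third flash is completely ambiguous.
      evidence = [letter_likelihoods(ch) for ch in "ca"] + [np.full(26, 1 / 26)]

      post = {}
      for word, prior in vocab.items():
          lik = np.prod([evidence[t][letters.index(ch)]
                         for t, ch in enumerate(word)])
          post[word] = prior * lik
      total = sum(post.values())
      print({w: round(p / total, 3) for w, p in post.items()})  # prior decides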

  13. Joint Estimation of Multiple Graphical Models from High Dimensional Time Series

    PubMed Central

    Qiu, Huitong; Han, Fang; Liu, Han; Caffo, Brian

    2015-01-01

    Summary In this manuscript we consider the problem of jointly estimating multiple graphical models in high dimensions. We assume that the data are collected from n subjects, each of which consists of T possibly dependent observations. The graphical models of subjects vary, but are assumed to change smoothly corresponding to a measure of closeness between subjects. We propose a kernel based method for jointly estimating all graphical models. Theoretically, under a double asymptotic framework, where both (T, n) and the dimension d can increase, we provide the explicit rate of convergence in parameter estimation. It characterizes the strength one can borrow across different individuals and the impact of data dependence on parameter estimation. Empirically, experiments on both synthetic and real resting state functional magnetic resonance imaging (rs-fMRI) data illustrate the effectiveness of the proposed method. PMID:26924939
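
    The kernel idea admits a compact sketch: build each subject's covariance as a closeness-weighted average of all subjects' empirical covariances, then apply a graphical lasso per subject. The closeness values, bandwidth, and regularization below are invented, so this conveys the flavour of the estimator rather than the paper's exact procedure or theory.

      # Kernel-weighted joint estimation of per-subject graphical models.
      import numpy as np
      from sklearn.covariance import graphical_lasso

      rng = np.random.default_rng(5)
      closeness = np.array([0.0, 0.1, 0.2, 0.8, 0.9])  # e.g. per-subject age
      data = [rng.standard_normal((100, 6)) for _ in closeness]  # T x d each
      emp = [np.cov(X.T) for X in data]

      def kernel_weights(u, h=0.25):
          w = np.exp(-((closeness - u) / h) ** 2)
          return w / w.sum()

      precisions = []
      for u in closeness:
          S = sum(wi * C for wi, C in zip(kernel_weights(u), emp))
          _, prec = graphical_lasso(S, alpha=0.1)      # smoothed, then sparse
          precisions.append(prec)
      print(precisions[0].shape)                       # one 6 x 6 graph per subject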

  14. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for interactively displaying and editing velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray-tracing results from other software. The main features are graphical editing of nodes and fast adjustment of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase-picking software. PRay is open-source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X, and provides a version-controlled source code repository for community development.

  15. Three-dimensional interactive graphics for displaying and modelling microscopic data.

    PubMed

    Basinski, M; Deatherage, J F

    1990-09-01

    EUCLID is a three-dimensional (3D) general purpose graphics display package for interactive manipulation of vector, surface and solid drawings on Evans and Sutherland PS300 series graphics processors. It is useful for displaying, comparing, measuring and modelling 3D microscopic images in real time. EUCLID can assemble groups of drawings into a composite drawing, while retaining the ability to operate upon the individual drawings within the composite drawing separately. EUCLID is capable of real time geometrical transformations (scaling, translation and rotation in two coordinate frames) and stereo and perspective viewing transformations. Because of its flexibility, EUCLID is especially useful for fitting models into 3D microscopic images.

  16. A STACKED GRAPHICAL MODEL FOR ASSOCIATING INFORMATION FROM TEXT AND IMAGES IN FIGURES

    PubMed Central

    KOU, ZHENZHEN; COHEN, WILLIAM W.; MURPHY, ROBERT F.

    2010-01-01

    There is extensive interest in mining data from full text. We have built a system called SLIF (for Subcellular Location Image Finder), which extracts information on one particular aspect of biology from a combination of text and images in journal articles. Associating the information from the text and image requires matching sub-figures with the sentences in the text. We introduced a stacked graphical model to match the labels of sub-figures with labels of sentences. The experimental results show that the stacked graphical model can take advantage of the context information and achieve a satisfactory accuracy. PMID:17990497

  17. Nonintersecting string model and graphical approach: equivalence with a Potts model

    SciTech Connect

    Perk, J.H.H.; Wu, F.Y.

    1986-03-01

    Using a graphical method the authors establish the exact equivalence of the partition function of a q-state nonintersecting string (NIS) model on an arbitrary planar, even-valenced lattice with that of a q²-state Potts model on a related lattice. The NIS model considered in this paper is one in which the vertex weights are expressible as sums of those of basic vertex types, and the resulting Potts model generally has multispin interactions. For the square and Kagomé lattices this leads to the equivalence of a staggered NIS model with Potts models with anisotropic pair interactions, indicating that these NIS models have a first-order transition for q > 2. For the triangular lattice the NIS model turns out to be the five-vertex model of Wu and Lin and it relates to a Potts model with two- and three-site interactions. The most general model the authors discuss is an oriented NIS model which contains the six-vertex model and the NIS models of Stroganov and Schultz as special cases.

  18. Nonintersecting string model and graphical approach: Equivalence with a Potts model

    NASA Astrophysics Data System (ADS)

    Perk, J. H. H.; Wu, F. Y.

    1986-03-01

    Using a graphical method we establish the exact equivalence of the partition function of a q-state nonintersecting string (NIS) model on an arbitrary planar, even-valenced, lattice with that of a q²-state Potts model on a related lattice. The NIS model considered in this paper is one in which the vertex weights are expressible as sums of those of basic vertex types, and the resulting Potts model generally has multispin interactions. For the square and Kagomé lattices this leads to the equivalence of a staggered NIS model with Potts models with anisotropic pair interactions, indicating that these NIS models have a first-order transition for q > 2. For the triangular lattice the NIS model turns out to be the five-vertex model of Wu and Lin and it relates to a Potts model with two- and three-site interactions. The most general model we discuss is an oriented NIS model which contains the six-vertex model and the NIS models of Stroganov and Schultz as special cases.

  19. Use and abuse of mixing models (MixSIAR)

    EPA Science Inventory

    Background/Question/Methods: Characterizing trophic links in food webs is a fundamental ecological question. In our efforts to quantify energy flow through food webs, ecologists have increasingly used mixing models to analyze biological tracer data, often from stable isotopes. Whil...

  20. Probabilistic assessment of agricultural droughts using graphical models

    NASA Astrophysics Data System (ADS)

    Ramadas, Meenu; Govindaraju, Rao S.

    2015-07-01

    Agricultural droughts are often characterized by soil moisture in the root zone of the soil, but crop needs are rarely factored into the analysis. Since water needs vary with crops, agricultural drought incidences in a region can be characterized better if crop responses to soil water deficits are also accounted for in the drought index. This study investigates agricultural droughts driven by plant stress due to soil moisture deficits using crop stress functions available in the literature. Crop water stress is assumed to begin at the soil moisture level corresponding to incipient stomatal closure, and reaches its maximum at the crop's wilting point. Using available location-specific crop acreage data, a weighted crop water stress function is computed. A new probabilistic agricultural drought index is then developed within a hidden Markov model (HMM) framework that provides model uncertainty in drought classification and accounts for time dependence between drought states. The proposed index allows probabilistic classification of the drought states and takes due cognizance of the stress experienced by the crop due to soil moisture deficit. The capabilities of HMM formulations for assessing agricultural droughts are compared to those of current drought indices such as the standardized precipitation evapotranspiration index (SPEI) and the self-calibrating Palmer drought severity index (SC-PDSI). The HMM identified critical drought events and several drought occurrences that are not detected by either SPEI or SC-PDSI, and shows promise as a tool for agricultural drought studies.
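
    As a concrete illustration of the HMM machinery (not the paper's exact index), the sketch below fits a three-state Gaussian HMM to a synthetic monthly crop-water-stress series and reads off soft drought-state probabilities; the `hmmlearn` package and all data values are assumptions of the sketch.

```python
# Minimal sketch: probabilistic drought-state classification with an HMM.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
stress = rng.beta(2, 5, size=240).reshape(-1, 1)  # placeholder monthly series

hmm = GaussianHMM(n_components=3, covariance_type="diag",
                  n_iter=200, random_state=0).fit(stress)

state_probs = hmm.predict_proba(stress)  # P(state | data): soft classification
states = hmm.predict(stress)             # most likely state sequence
order = np.argsort(hmm.means_.ravel())   # rank states by mean stress so they
                                         # map to e.g. none/moderate/severe
```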

  1. Graphical modeling of the joint distribution of alleles at associated loci.

    PubMed

    Thomas, Alun; Camp, Nicola J

    2004-06-01

    Pairwise linkage disequilibrium, haplotype blocks, and recombination hotspots provide only a partial description of the patterns of dependences and independences between the allelic states at proximal loci. On the gross scale, where recombination and spatial relationships dominate, the associations can be reasonably described in these terms. However, on the fine scale of current high-density maps, the mutation process is also important and creates associations between loci that are independent of the physical ordering and that cannot be summarized with pairwise measures of association. Graphical modeling provides a standard statistical framework for characterizing precisely these sorts of complex stochastic data. Although graphical models are often used in situations in which assumptions lead naturally to specific models, it is less well known that estimation of graphical models is also a developed field. We show how decomposable graphical models can be fitted to dense genetic data. The objective function is the maximized log likelihood for the model, penalized by a multiple of the model's degrees of freedom. We also describe how this can be modified to incorporate prior information on locus position. Simulated annealing is used to find good solutions. Part of the appeal of this approach is that categorical phenotypes can be included in the same analysis and association with polymorphisms can be assessed jointly with the interlocus associations. We illustrate our method with genotypic data from 25 loci in the ELAC2 gene. The results contain third- and fourth-order locus interactions and show that, at this density of markers, linkage disequilibrium is not a simple function of physical distance. Graphical models provide more flexibility to express these features of the joint distribution of alleles than do monotonic functions connecting physical and genetic maps.

  2. (Hyper)-graphical models in biomedical image analysis.

    PubMed

    Paragios, Nikos; Ferrante, Enzo; Glocker, Ben; Komodakis, Nikos; Parisot, Sarah; Zacharaki, Evangelia I

    2016-10-01

    Computational vision, visual computing and biomedical image analysis have made tremendous progress over the past two decades. This is mostly due to the development of efficient learning and inference algorithms which allow better and richer modeling of image and visual understanding tasks. Hyper-graph representations are among the most prominent tools to address such perception through the casting of perception as a graph optimization problem. In this paper, we briefly introduce the importance of such representations, discuss their strengths and limitations, provide appropriate strategies for their inference and present their application to address a variety of problems in biomedical image analysis. PMID:27377331

  3. (Hyper)-graphical models in biomedical image analysis.

    PubMed

    Paragios, Nikos; Ferrante, Enzo; Glocker, Ben; Komodakis, Nikos; Parisot, Sarah; Zacharaki, Evangelia I

    2016-10-01

    Computational vision, visual computing and biomedical image analysis have made tremendous progress over the past two decades. This is mostly due to the development of efficient learning and inference algorithms which allow better and richer modeling of image and visual understanding tasks. Hyper-graph representations are among the most prominent tools to address such perception through the casting of perception as a graph optimization problem. In this paper, we briefly introduce the importance of such representations, discuss their strengths and limitations, provide appropriate strategies for their inference and present their application to address a variety of problems in biomedical image analysis.

  4. The anova to mixed model transition.

    PubMed

    Boisgontier, Matthieu P; Cheval, Boris

    2016-09-01

    A transition towards mixed models is underway in science. This transition began because the requirements for using analyses of variance are often not met and mixed models clearly provide a better framework. Neuroscientists have been slower than others in changing their statistical habits and are now urged to act.

  5. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.

  6. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents the CCHE2D implicit flow model parallelized using the CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized Alternating Direction Implicit (ADI) solver using the Parallel Cyclic Reduction (PCR) algorithm on the GPU is developed and tested. This solve...

  7. Quantifying uncertainty in stable isotope mixing models

    DOE PAGESBeta

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated
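
    To make the pure Monte Carlo (PMC) idea tangible, here is a toy rejection sampler for a mixing problem: candidate source fractions are drawn from a Dirichlet distribution and kept when they reproduce the measured isotope signature within a tolerance. The source and sample values are invented; this is not the SIRS model.

```python
# Toy PMC-style isotope mixing calculation (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)
sources = np.array([[8.0, 2.0],      # (d15N, d18O) of source 1
                    [0.5, 20.0],     # source 2
                    [4.0, -2.0]])    # source 3
measured = np.array([4.5, 6.0])      # sample signature
tol = 0.5                            # acceptance tolerance per tracer

frac = rng.dirichlet(np.ones(len(sources)), size=200_000)  # candidate mixes
pred = frac @ sources                                      # mixture signatures
keep = np.all(np.abs(pred - measured) < tol, axis=1)
posterior = frac[keep]               # accepted mixing fractions
print(posterior.mean(axis=0), posterior.std(axis=0))  # loosen tol if empty
```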

  8. Quantifying uncertainty in stable isotope mixing models

    NASA Astrophysics Data System (ADS)

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-01

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated

  9. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the

  10. Modeling and diagnosis of structural systems through sparse dynamic graphical models

    NASA Astrophysics Data System (ADS)

    Bornn, Luke; Farrar, Charles R.; Higdon, David; Murphy, Kevin P.

    2016-06-01

    Since their introduction into the structural health monitoring field, time-domain statistical models have been applied with considerable success. Current approaches still have several flaws, however, as they typically ignore the structure of the system, using individual sensor data for modeling and diagnosis. This paper introduces a Bayesian framework containing much of the previous work with autoregressive models as a special case. In addition, the framework allows for natural inclusion of structural knowledge through the form of prior distributions on the model parameters. Acknowledging the need for computational efficiency, we extend the framework through the use of decomposable graphical models, exploiting sparsity in the system to give models that are simple to fit and understand. This sparsity can be specified from knowledge of the system, from the data itself, or through a combination of the two. Using both simulated and real data, we demonstrate the capability of the model to capture the dynamics of the system and to provide clear indications of structural change and damage. We also demonstrate how learning the sparsity in the system gives insight into the structure's physical properties.
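
    The autoregressive special case the authors mention is easy to sketch: fit AR coefficients to a baseline sensor record by least squares, then use the residual variance on new data as a change indicator. Everything below (signal, model order, thresholding) is an illustrative assumption, not the paper's Bayesian framework.

```python
# Sketch: AR(p) fit and residual-variance damage indicator.
import numpy as np

def ar_design(x, p):
    """Lag matrix X and target y such that X @ coef approximates y."""
    X = np.column_stack([x[p - 1 - j:len(x) - 1 - j] for j in range(p)])
    return X, x[p:]

def ar_fit(x, p):
    X, y = ar_design(x, p)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def residual_var(x, coef):
    X, y = ar_design(x, len(coef))
    return np.var(y - X @ coef)

rng = np.random.default_rng(0)
baseline = np.sin(np.linspace(0, 60, 3000)) + 0.1 * rng.standard_normal(3000)
coef = ar_fit(baseline, p=6)
ref = residual_var(baseline, coef)
# For a new record x_new, residual_var(x_new, coef) / ref >> 1 suggests change.
```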

  11. Experiments with a low-cost system for computer graphics material model acquisition

    NASA Astrophysics Data System (ADS)

    Rushmeier, Holly; Lockerman, Yitzhak; Cartwright, Luke; Pitera, David

    2015-03-01

    We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.

  12. Graphical assessment of internal and external calibration of logistic regression models by using loess smoothers.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2014-02-10

    Predicting the probability of the occurrence of a binary outcome or condition is important in biomedical research. While assessing discrimination is an essential issue in developing and validating binary prediction models, less attention has been paid to methods for assessing model calibration. Calibration refers to the degree of agreement between observed and predicted probabilities and is often assessed by testing for lack-of-fit. The objective of our study was to examine the ability of graphical methods to assess the calibration of logistic regression models. We examined lack of internal calibration, which was related to misspecification of the logistic regression model, and external calibration, which was related to an overfit model or to shrinkage of the linear predictor. We conducted an extensive set of Monte Carlo simulations with a locally weighted least squares regression smoother (i.e., the loess algorithm) to examine the ability of graphical methods to assess model calibration. We found that loess-based methods were able to provide evidence of moderate departures from linearity and indicate omission of a moderately strong interaction. Misspecification of the link function was harder to detect. Visual patterns were clearer with higher sample sizes, higher incidence of the outcome, or higher discrimination. Loess-based methods were also able to identify the lack of calibration in external validation samples when an overfit regression model had been used. In conclusion, loess-based smoothing methods are adequate tools to graphically assess calibration and merit wider application.
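
    A minimal version of such a calibration plot is sketched below with a lowess smoother from statsmodels; `p_hat` and `y` stand in for a validation sample's predicted probabilities and observed binary outcomes.

```python
# Sketch: loess/lowess calibration curve for a binary prediction model.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
p_hat = rng.uniform(0.02, 0.98, 2000)   # placeholder predicted probabilities
y = rng.binomial(1, p_hat)              # toy outcomes (perfectly calibrated)

smooth = lowess(y, p_hat, frac=0.3)     # columns: sorted p_hat, smoothed y
plt.plot(smooth[:, 0], smooth[:, 1], label="lowess calibration curve")
plt.plot([0, 1], [0, 1], "k--", label="ideal 45-degree line")
plt.xlabel("predicted probability")
plt.ylabel("observed proportion")
plt.legend()
plt.show()   # departures from the diagonal indicate miscalibration
```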

  13. Lamb wave propagation modelling and simulation using parallel processing architecture and graphical cards

    NASA Astrophysics Data System (ADS)

    Paćko, P.; Bielak, T.; Spencer, A. B.; Staszewski, W. J.; Uhl, T.; Worden, K.

    2012-07-01

    This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphics processing unit (GPU) and the Compute Unified Device Architecture (CUDA), available in low-cost graphics cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example. Other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance for very large models. The wave propagation modelling presented in the paper can be used in many practical applications of science and engineering.

  14. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The U.S. Environmental Protection Agency has a long history of both supporting plume model development and providing mixing zone modeling software. The Visual Plumes model is the most recent addition to the suite of public-domain models available through the EPA-Athens Center f...

  15. Extended Generalized Linear Latent and Mixed Model

    ERIC Educational Resources Information Center

    Segawa, Eisuke; Emery, Sherry; Curry, Susan J.

    2008-01-01

    The generalized linear latent and mixed modeling (GLLAMM) framework includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…

  16. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
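
    The node-wise estimation strategy mentioned at the end is the familiar neighborhood-selection recipe; a generic sketch for continuous data (not the SQR likelihoods themselves) regresses each variable on all others with an ℓ1 penalty and links variables that select each other.

```python
# Sketch: neighborhood selection via node-wise l1-penalized regressions.
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_graph(X, alpha=0.1):
    """Boolean adjacency estimate using the 'or' combination rule."""
    n, d = X.shape
    adj = np.zeros((d, d), dtype=bool)
    for j in range(d):
        others = np.delete(np.arange(d), j)
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
        adj[j, others] = fit.coef_ != 0   # variables selected for node j
    return adj | adj.T                    # edge if either regression selects
```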

  17. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
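
    As a worked micro-example of the foundations the tutorial covers (Bayes's theorem plus normalization via the law of total probability), the sketch below computes a grid posterior for a Poisson rate from a handful of counts; the data and flat prior are assumptions for illustration.

```python
# Toy Bayesian inference: grid posterior for a Poisson rate.
import numpy as np
from scipy.stats import poisson

counts = np.array([3, 7, 4, 6, 5])            # invented observed counts
rate = np.linspace(0.1, 15.0, 500)            # grid over the rate parameter
prior = np.ones_like(rate)                    # flat prior on the grid
like = np.prod(poisson.pmf(counts[:, None], rate[None, :]), axis=0)
post = prior * like
post /= post.sum()                            # normalize over the grid
print("posterior mean rate:", (rate * post).sum())
```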

  18. Statistical and graphical methods for evaluating solute transport models: Overview and application

    NASA Astrophysics Data System (ADS)

    Loague, Keith; Green, Richard E.

    1991-01-01

    Mathematical modeling is the major tool to predict the mobility and the persistence of pollutants to and within groundwater systems. Several comprehensive institutional models have been developed in recent years for this purpose. However, evaluation procedures are not well established for models of saturated-unsaturated soil-water flow and chemical transport. This paper consists of three parts: (1) an overview of various aspects of mathematical modeling focused upon solute transport models; (2) an introduction to statistical criteria and graphical displays that can be useful for model evaluation; and (3) an example of model evaluation for a mathematical model of pesticide leaching. The model testing example uses observed and predicted atrazine concentration profiles from a small catchment in Georgia. The model tested is the EPA pesticide root zone model (PRZM).
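
    Statistical evaluation criteria of the kind discussed here are straightforward to compute; the sketch below uses four measures in common use for solute transport model testing (maximum error, RMSE, modelling efficiency, and coefficient of residual mass), which may differ in detail from the paper's exact definitions.

```python
# Sketch: summary statistics for observed vs. predicted concentrations.
import numpy as np

def evaluation_stats(obs, pred):
    err = pred - obs
    return {
        "max_error": np.max(np.abs(err)),
        "rmse": np.sqrt(np.mean(err ** 2)),
        # Modelling efficiency: 1 is perfect, < 0 is worse than the mean.
        "efficiency": 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2),
        # Coefficient of residual mass: overall over/under-prediction.
        "crm": (obs.sum() - pred.sum()) / obs.sum(),
    }

obs = np.array([0.9, 0.7, 0.4, 0.2, 0.1])    # invented profile data
pred = np.array([0.8, 0.6, 0.45, 0.25, 0.05])
print(evaluation_stats(obs, pred))
```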

  19. From Nominal to Quantitative Codification of Content-Neutral Variables in Graphics Research: The Beginnings of a Manifest Content Model.

    ERIC Educational Resources Information Center

    Crow, Wendell C.

    This paper suggests ways in which manifest, physical attributes of graphic elements can be described and measured. It also proposes a preliminary conceptual model that accounts for the readily apparent, measurable variables in a visual message. The graphic elements that are described include format, typeface, and photographs/artwork. The…

  20. Learned graphical models for probabilistic planning provide a new class of movement primitives.

    PubMed

    Rückert, Elmar A; Neumann, Gerhard; Toussaint, Marc; Maass, Wolfgang

    2012-01-01

    Biological movement generation combines three interesting aspects: its modular organization in movement primitives (MPs), its characteristics of stochastic optimality under perturbations, and its efficiency in terms of learning. A common approach to motor skill learning is to endow the primitives with dynamical systems. Here, the parameters of the primitive indirectly define the shape of a reference trajectory. We propose an alternative MP representation based on probabilistic inference in learned graphical models with new and interesting properties that complies with salient features of biological movement control. Instead of endowing the primitives with dynamical systems, we propose to endow MPs with an intrinsic probabilistic planning system, integrating the power of stochastic optimal control (SOC) methods within an MP. The parameterization of the primitive is a graphical model that represents the dynamics and intrinsic cost function such that inference in this graphical model yields the control policy. We parameterize the intrinsic cost function using task-relevant features, such as the importance of passing through certain via-points. The system dynamics as well as intrinsic cost function parameters are learned in a reinforcement learning (RL) setting. We evaluate our approach on a complex 4-link balancing task. Our experiments show that our movement representation facilitates learning significantly and leads to better generalization to new task settings without re-learning.

  1. Outlier robust nonlinear mixed model estimation.

    PubMed

    Williams, James D; Birch, Jeffrey B; Abdel-Salam, Abdel-Salam G

    2015-04-15

    In standard analyses of data well-modeled by a nonlinear mixed model, an aberrant observation, either within a cluster, or an entire cluster itself, can greatly distort parameter estimates and subsequent standard errors. Consequently, inferences about the parameters are misleading. This paper proposes an outlier robust method based on linearization to estimate fixed effects parameters and variance components in the nonlinear mixed model. An example is given using the four-parameter logistic model and bioassay data, comparing the robust parameter estimates with the nonrobust estimates given by SAS®.
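
    A simple stand-in for the robust idea (not the paper's linearization method) is M-estimation with a Huber loss, which down-weights aberrant observations when fitting the four-parameter logistic curve; the data and starting values below are invented.

```python
# Sketch: outlier-resistant four-parameter logistic (4PL) fit.
import numpy as np
from scipy.optimize import least_squares

def fourpl(theta, x):
    a, b, c, d = theta   # a: response at zero dose, d: at infinite dose,
    return d + (a - d) / (1.0 + (x / c) ** b)   # c: ED50, b: slope

np.random.seed(0)
x = np.logspace(-2, 2, 40)
y = fourpl([0.1, 1.2, 1.0, 1.9], x) + 0.05 * np.random.randn(40)
y[5] += 1.5              # inject one aberrant observation

fit = least_squares(lambda th: fourpl(th, x) - y,
                    x0=[0.0, 1.0, 1.0, 2.0],
                    bounds=([-5, 0.1, 1e-3, -5], [5, 5, 100, 5]),
                    loss="huber", f_scale=0.1)  # Huber loss tames the outlier
print(fit.x)             # robust parameter estimates
```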

  2. Scotogenic model for co-bimaximal mixing

    NASA Astrophysics Data System (ADS)

    Ferreira, P. M.; Grimus, W.; Jurčiukonis, D.; Lavoura, L.

    2016-07-01

    We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet η, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle θ23 = 45° and a CP-violating phase δ = ±π/2, while the mixing angle θ13 remains arbitrary. The symmetries consist of softly broken lepton numbers Lα (α = e, μ, τ), a non-standard CP symmetry, and three Z2 symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides η, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.

  3. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display, to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  4. The effects of a dynamic graphical model during simulation-based training of console operation skill

    NASA Technical Reports Server (NTRS)

    Farquhar, John D.; Regian, J. Wesley

    1993-01-01

    LOADER is a Windows-based simulation of a complex procedural task. The task requires subjects to execute long sequences of console-operation actions (e.g., button presses, switch actuations, dial rotations) to accomplish specific goals. The LOADER interface is a graphical computer-simulated console which controls railroad cars, tracks, and cranes in a fictitious railroad yard. We hypothesized that acquisition of LOADER performance skill would be supported by the representation of a dynamic graphical model linking console actions to goal and goal states in the 'railroad yard'. Twenty-nine subjects were randomly assigned to one of two treatments (i.e., dynamic model or no model). During training, both groups received identical text-based instruction in an instructional-window above the LOADER interface. One group, however, additionally saw a dynamic version of the bird's-eye view of the railroad yard. After training, both groups were tested under identical conditions. They were asked to perform the complete procedure without guidance and without access to either type of railroad yard representation. Results indicate that rather than becoming dependent on the animated rail yard model, subjects in the dynamic model condition apparently internalized the model, as evidenced by their performance after the model was removed.

  5. Learning a structured graphical model with boosted top-down features for ultrasound image segmentation.

    PubMed

    Hao, Zhihui; Wang, Qiang; Wang, Xiaotao; Kim, Jung Bae; Hwang, Youngkyoo; Cho, Baek Hwan; Guo, Ping; Lee, Won Ki

    2013-01-01

    A key problem for many medical image segmentation tasks is the combination of different-level knowledge. We propose a novel scheme of embedding detected regions into a superpixel based graphical model, by which we achieve a full leverage on various image cues for ultrasound lesion segmentation. Region features are mapped into a higher-dimensional space via a boosted model to become well controlled. Parameters for regions, superpixels and a new affinity term are learned simultaneously within the framework of structured learning. Experiments on a breast ultrasound image data set confirm the effectiveness of the proposed approach as well as our two novel modules.

  6. A Module for Graphical Display of Model Results with the CBP Toolbox

    SciTech Connect

    Smith, F.

    2015-04-21

    This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to add enhanced graphical capabilities to display model results in the Cementitious Barriers Project (CBP) Toolbox. Because Version 2.0 of the CBP Toolbox has just been released, the graphing enhancements described in this report have not yet been integrated into a new version of the Toolbox. Instead they have been tested using a standalone GoldSim model and, while they are substantially complete, may undergo further refinement before full implementation. Nevertheless, this report is issued to document the FY14 development efforts which will provide a basis for further development of the CBP Toolbox.

  7. Students Learning from Model-Produced Graphics in an Undergraduate Climate Change Science Class

    NASA Astrophysics Data System (ADS)

    Gautier, C.

    2004-12-01

    We present results based on the analysis of inquiry-based modeling activities in a climate change science course in which questions are explicitly solicited from students in different forms. With much preparation and scaffolding in the form of reading assignments, preliminary question asking, mini-lectures, and modeling of what is expected, students are eventually asked to come up with quantitative scientific questions that they can address with a radiative transfer model. The issues they must address with their questions are related to the radiative forcing concept and include clouds, greenhouse gases, aerosols and land-use effects on climate. For each of their experiments, students analyze graphs that are automatically generated from the model and also produce their own graphics and simple analytical models based on the tabular data generated by the model. Our analysis focuses on how practices in generating, interpreting, discussing, and integrating graphs relevant to climate change help students learn about climate change science. Students' presentation and discussion of their results in the form of graphics will be analyzed, and the way in which the students chose to proceed with analytical representations of results for theory establishment will be investigated.

  8. Model Independent Bounds on Kinetic Mixing

    SciTech Connect

    Hook, Anson; Izaguirre, Eder; Wacker, Jay G.; /SLAC

    2011-08-22

    New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e- experiments that have been performed in this energy range and bound the kinetic mixing by ε ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.

  9. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
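
    The second model described, the Noisy-OR, is compact enough to show directly: given each knowledge source's confidence that an edge exists, the consensus prior is the probability that at least one source supports it. The numbers below are placeholders.

```python
# Sketch: Noisy-OR consensus prior for a single candidate edge.
import numpy as np

q = np.array([0.6, 0.1, 0.8])   # per-source confidences for the edge
leak = 0.01                     # small background (leak) probability
p_edge = 1.0 - (1.0 - leak) * np.prod(1.0 - q)
print(p_edge)                   # prior probability that the edge exists
```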

  10. ESTIMATING HETEROGENEOUS GRAPHICAL MODELS FOR DISCRETE DATA WITH AN APPLICATION TO ROLL CALL VOTING

    PubMed Central

    Guo, Jian; Cheng, Jie; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2016-01-01

    We consider the problem of jointly estimating a collection of graphical models for discrete data, corresponding to several categories that share some common structure. An example for such a setting is voting records of legislators on different issues, such as defense, energy, and healthcare. We develop a Markov graphical model to characterize the heterogeneous dependence structures arising from such data. The model is fitted via a joint estimation method that preserves the underlying common graph structure, but also allows for differences between the networks. The method employs a group penalty that targets the common zero interaction effects across all the networks. We apply the method to describe the internal networks of the U.S. Senate on several important issues. Our analysis reveals individual structure for each issue, distinct from the underlying well-known bipartisan structure common to all categories which we are able to extract separately. We also establish consistency of the proposed method both for parameter estimation and model selection, and evaluate its numerical performance on a number of simulated examples. PMID:27182289

  11. [Systematization and hygienic standardization of environmental factors on the basis of common graphic models].

    PubMed

    Galkin, A A

    2012-01-01

    On the basis of graphic models of the human response to environmental factors, two main types of complex quantitative influence were revealed, as well as the interrelation between deterministic effects at the level of the individual and stochastic effects in the population. We suggest distinguishing two main kinds of factors: essential factors, which are natural to the environment, and accidental factors, which are foreign to it. The two kinds call for different approaches to hygienic standardization: accidental factors need a dot-like approach, whereas a two-level range approach is suitable for the essential factors.

  12. The Mixed Effects Trend Vector Model

    ERIC Educational Resources Information Center

    de Rooij, Mark; Schouteden, Martijn

    2012-01-01

    Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…

  13. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    PubMed

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm of Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm that implements a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
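
    FastGGM itself is an R package; as a rough Python analogue of the underlying task, the sketch below estimates a sparse precision matrix with the graphical lasso and converts it to partial correlations, whose nonzero pattern defines the estimated graph. It does not reproduce FastGGM's p-values or confidence intervals.

```python
# Sketch: Gaussian graphical model estimation via the graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))        # placeholder data matrix

gl = GraphicalLasso(alpha=0.1).fit(X)
P = gl.precision_                         # estimated precision matrix
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)             # partial correlations (off-diagonal)
np.fill_diagonal(partial, 1.0)
edges = np.abs(partial) > 1e-8            # graph: nonzero partial correlations
```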

  14. A graphical method to assess distribution assumption in group-based trajectory models.

    PubMed

    Elsensohn, Mad-Hélénie; Klich, Amna; Ecochard, René; Bastard, Mathieu; Genolini, Christophe; Etard, Jean-François; Gustin, Marie-Paule

    2016-04-01

    Group-based trajectory models have undergone rapid development in the analysis of longitudinal data in clinical research. In these models, the assumption of homoscedasticity of the residuals is frequently made, but this assumption is not always met. We developed an easy-to-perform graphical method to assess the assumption of homoscedasticity of the residuals, intended especially for group-based trajectory models. The method is based on drawing an envelope to visualize the local dispersion of the residuals around each typical trajectory. Its efficiency is demonstrated using data on CD4 lymphocyte counts in patients with human immunodeficiency virus put on antiretroviral therapy. Four distinct distributions that take into account increasing parts of the variability of the observed data are presented. Significant differences in group structures and trajectory patterns were found according to the chosen distribution. These differences might have large impacts on the final trajectories and their characteristics, and thus on potential medical decisions. With a single glance, the graphical criteria allow the choice of the distribution that best captures data variability and help deal with a potential heteroscedasticity problem.
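
    The envelope display is simple to reproduce in outline: around a group's mean trajectory, draw per-time-point residual quantiles and inspect whether the band's width stays roughly constant. All values below are synthetic stand-ins for the CD4 example.

```python
# Sketch: residual-dispersion envelope around one group trajectory.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
t = np.arange(24)                               # months on therapy
traj = 200.0 + 20.0 * np.sqrt(t)                # group mean CD4 trajectory
resid = rng.standard_normal((150, t.size)) * (5 + 0.5 * t)  # heteroscedastic

lo, hi = np.percentile(resid, [5, 95], axis=0)  # per-time residual quantiles
plt.plot(t, traj, "k-", label="mean trajectory")
plt.fill_between(t, traj + lo, traj + hi, alpha=0.3, label="90% envelope")
plt.xlabel("time (months)")
plt.ylabel("CD4 count")
plt.legend()
plt.show()   # a widening band, as here, signals heteroscedastic residuals
```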

  15. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    PubMed

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm of Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm that implements a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036

  16. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphics computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (ordinary differential equations) in a way that massively reduces memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs.

  17. Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing

    PubMed Central

    Yoshida, Ryo; West, Mike

    2010-01-01

    We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loadings matrices, that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate “artificial” posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data. PMID:20890391

  18. Modified graphical autocatalytic set model of combustion process in circulating fluidized bed boiler

    NASA Astrophysics Data System (ADS)

    Yusof, Nurul Syazwani; Bakar, Sumarni Abu; Ismail, Razidah

    2014-07-01

    A circulating fluidized bed (CFB) boiler is a device for generating steam by burning fossil fuels in a furnace operating under a special hydrodynamic condition. An autocatalytic set has provided a graphical model of the chemical reactions that occur during the combustion process in a CFB. Eight important chemical substances, known as species, were represented as nodes, and the catalytic relationships between nodes are represented by the edges of the graph. In this paper, the model is extended and modified by considering other relevant chemical reactions that also exist during the process. The catalytic relationships among the species in the model are discussed. The result reveals that the modified model is able to give a fuller explanation of the relationships among the species during the process at initial time t.

  19. Simplified models of mixed dark matter

    SciTech Connect

    Cheung, Clifford; Sanford, David E-mail: dsanford@caltech.edu

    2014-02-01

    We explore simplified models of mixed dark matter (DM), defined here to be a stable relic composed of a singlet and an electroweak charged state. Our setup describes a broad spectrum of thermal DM candidates that can naturally accommodate the observed DM abundance but are subject to substantial constraints from current and upcoming direct detection experiments. We identify "blind spots" at which the DM-Higgs coupling is identically zero, thus nullifying direct detection constraints on spin independent scattering. Furthermore, we characterize the fine-tuning in mixing angles, i.e. well-tempering, required for thermal freeze-out to accommodate the observed abundance. Present and projected limits from LUX and XENON1T force many thermal relic models into blind spot tuning, well-tempering, or both. This simplified model framework generalizes bino-Higgsino DM in the MSSM, singlino-Higgsino DM in the NMSSM, and scalar DM candidates that appear in models of extended Higgs sectors.

  20. Computer Graphics.

    ERIC Educational Resources Information Center

    Halpern, Jeanne W.

    1970-01-01

    Computer graphics have been called the most exciting development in computer technology. At the University of Michigan, three kinds of graphics output equipment are now being used: symbolic printers, line plotters or drafting devices, and cathode-ray tubes (CRT). Six examples are given that demonstrate the range of graphics use at the University.…

  1. Joint sulcal detection on cortical surfaces with graphical models and boosted priors.

    PubMed

    Shi, Yonggang; Tu, Zhuowen; Reiss, Allan L; Dutton, Rebecca A; Lee, Agatha D; Galaburda, Albert M; Dinov, Ivo; Thompson, Paul M; Toga, Arthur W

    2009-03-01

    In this paper, we propose an automated approach for the joint detection of major sulci on cortical surfaces. By representing sulci as nodes in a graphical model, we incorporate Markovian relations between sulci and formulate their detection as a maximum a posteriori (MAP) estimation problem over the joint space of major sulci. To make the inference tractable, a sample space with a finite number of candidate curves is automatically generated at each node based on the Hamilton-Jacobi skeleton of sulcal regions. Using the AdaBoost algorithm, we learn both individual and pairwise shape priors of sulcal curves from training data, which are then used to define potential functions in the graphical model based on the connection between AdaBoost and logistic regression. Finally belief propagation is used to perform the MAP inference and select the joint detection results from the sample spaces of candidate curves. In our experiments, we quantitatively validate our algorithm with manually traced curves and demonstrate the automatically detected curves can capture the main body of sulci very accurately. A comparison with independently detected results is also conducted to illustrate the advantage of the joint detection approach. PMID:19244008

  2. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Expression of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues and, to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
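
    A random-effects model of the Annex H.5 type can be fitted in a few lines; the sketch below uses statsmodels on synthetic repeated calibration measurements (ten days, five repeats per day) to separate between-day and within-day variance components. The column names and data are assumptions of the example.

```python
# Sketch: one-way random-effects (linear mixed) model for repeated runs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
day = np.repeat(np.arange(10), 5)                  # 10 days x 5 repeats
day_effect = rng.normal(0.0, 0.02, 10)[day]        # between-day variation
y = 9.81 + day_effect + rng.normal(0.0, 0.05, 50)  # plus within-day noise

df = pd.DataFrame({"y": y, "day": day})
fit = smf.mixedlm("y ~ 1", df, groups=df["day"]).fit()
print(fit.summary())   # intercept plus between- and within-day variances
```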

  3. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.

  4. Fertility intentions and outcomes: Implementing the Theory of Planned Behavior with graphical models.

    PubMed

    Mencarini, Letizia; Vignoli, Daniele; Gottard, Anna

    2015-03-01

    This paper studies fertility intentions and their outcomes, analyzing the complete path leading to fertility behavior according to the social-psychological model of the Theory of Planned Behavior (TPB). We move beyond existing research by using graphical models to obtain a precise understanding, and a formal description, of the developmental fertility decision-making process. Our findings yield new results for the Italian case that are empirically robust and theoretically coherent, adding important insights into the effectiveness of the TPB for fertility research. In line with the TPB, all of the intentions' primary antecedents are found to be determinants of the level of fertility intentions, but they do not affect fertility outcomes, being pre-filtered by fertility intentions. Nevertheless, in contrast with the TPB, background factors are not fully mediated by the intentions' primary antecedents, influencing fertility intentions and even fertility behaviors directly. PMID:26047838

  5. Raster graphics extensions to the core system

    NASA Technical Reports Server (NTRS)

    Foley, J. D.

    1984-01-01

    A conceptual model of raster graphics systems was developed. The model integrates core-like graphics package concepts with contemporary raster display architectures. The conceptual model of raster graphics introduces multiple pixel matrices with associated index tables.
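
    The "pixel matrix with associated index table" concept reduces to a lookup: pixel values are indices into a color table that maps them to displayable RGB triples. A tiny sketch:

```python
# Sketch: indexed-color raster via a pixel matrix and an index table.
import numpy as np

index_table = np.array([[0, 0, 0],          # palette entry 0: black
                        [255, 0, 0],        # 1: red
                        [0, 255, 0],        # 2: green
                        [255, 255, 255]],   # 3: white
                       dtype=np.uint8)
pixels = np.random.randint(0, 4, size=(4, 6))   # pixel matrix of indices
rgb = index_table[pixels]                       # lookup -> (4, 6, 3) image
```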

  6. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.

    1994-01-01

    A methodology for the simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Planar Reacting Shear Layer (PRSL) facility, and results are compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and non-reacting shear layer present in the facility, given basic assumptions about turbulence properties.

  7. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

    A methodology for simulation of molecular mixing, and the resulting velocity and temperature fields, has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.

  8. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks

    PubMed Central

    Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L.; Sweet, Robert A.; Wang, Jieru; Chen, Wei

    2016-01-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to implement the estimation of a GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al. (2015). Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the existing implementation of the Ren et al. method, without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer’s disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing a Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named “FastGGM”. PMID:26872036
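
    The record above does not spell out the algorithmic details. As a rough, hypothetical illustration of one standard route to estimating a GGM edge structure in high dimensions (node-wise penalized regressions, in the spirit of neighbourhood-based estimators rather than FastGGM's actual implementation), a sketch might look like the following; the function name, threshold, and data are all invented for the example.

```python
# Hypothetical sketch: edge recovery for a Gaussian graphical model via
# node-wise lasso. Nonzero regression coefficients suggest edges.
import numpy as np
from sklearn.linear_model import LassoCV

def neighborhood_selection(X, threshold=1e-6):
    """Return a symmetric adjacency matrix estimated by node-wise lasso."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = LassoCV(cv=5).fit(X[:, others], X[:, j])
        nbrs = others[np.abs(fit.coef_) > threshold]
        adj[j, nbrs] = True
    # Symmetrize with the "OR" rule: keep an edge if either regression selects it.
    return adj | adj.T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))      # toy data: 200 samples, 10 nodes
print(neighborhood_selection(X).sum())
```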

  9. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format (PDF) file.

  10. uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications

    PubMed Central

    Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.

    2015-01-01

    In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987
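
    uPy's real API is documented at the project site linked above; the sketch below only illustrates the general host-abstraction (adapter) pattern the abstract describes, with entirely hypothetical class and method names that are not uPy's.

```python
# A generic illustration of the host-abstraction idea: one plugin body runs
# against whichever 3-D host adapter is available. NOT uPy's actual API.
class HostAdapter:
    """Uniform interface a plugin codes against (hypothetical)."""
    def add_sphere(self, name, radius, position):
        raise NotImplementedError

class MockHost(HostAdapter):
    """Stand-in for a real host such as Blender or Maya."""
    def add_sphere(self, name, radius, position):
        print(f"[mock host] sphere {name!r} r={radius} at {position}")

def plugin_main(host: HostAdapter):
    # Plugin code is written once against the abstract interface...
    host.add_sphere("atom_0", 1.2, (0.0, 0.0, 0.0))

plugin_main(MockHost())   # ...and runs on any host that has an adapter.
```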

  11. A Graphical Model to Determine the Subcellular Protein Location in Artificial Tissues

    PubMed Central

    Glory-Afshar, Estelle; Osuna-Highley, Elvira; Granger, Brian; Murphy, Robert F.

    2010-01-01

    Location proteomics is concerned with the systematic analysis of the subcellular location of proteins. In order to perform comprehensive analysis of all protein location patterns, automated methods are needed. With the goal of extending automated subcellular location pattern analysis methods to high resolution images of tissues, 3D confocal microscope images of polarized CaCo2 cells immunostained for various proteins were collected. A three-color staining protocol was developed that permits parallel imaging of proteins of interest as well as DNA and the actin cytoskeleton. The collection is composed of 11 to 21 images for each of the 9 proteins that depict major subcellular patterns. A classifier was trained to recognize the subcellular location pattern of segmented cells with an accuracy of 89.2%. Using the Prior Updating method allowed improvement of this accuracy to 99.6%. This study demonstrates the benefit of using a graphical model approach for improving the pattern classification in tissue images. PMID:21625289

  12. The balance of competition and facilitation in plant communities: A graphical model

    SciTech Connect

    Holmgren, M.; Scheffer, M.; Huston, M.

    1995-06-01

    It has been hypothesized that plants are increasingly shade intolerant in drier conditions. Although many field patterns can be understood from this theory, the conspicuous "nurse plant" phenomenon in dry areas seems to contradict the theory. We derive a graphical model to illustrate how the interplay of facilitation and competition can be understood from two ingredients: the plant responses to the combined effects of light and water, and the effect of plant canopies on microsite light and water. We show that in drier conditions the light compensation point (i.e., photosynthesis equals respiration) is higher. However, at very high light levels, growth is suppressed due to low moisture conditions. Under dry, high light conditions facilitative effects dominate, whereas competitive effects dominate on the wet part of the gradient.
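
    The verbal argument above can be made concrete with a toy growth function. The sketch below is not the authors' model; the functional forms and constants are invented solely to show how the light compensation point rises, and high-light growth falls, as water availability drops.

```python
# Toy net-growth curve: saturating photosynthesis minus a moisture-stress
# term that grows with light when water is scarce. All numbers illustrative.
import numpy as np

def net_growth(light, water):
    photosynthesis = water * light / (light + 20.0)    # saturating in light
    moisture_stress = 0.003 * light * (1.0 - water)    # worse when dry & bright
    respiration = 0.2
    return photosynthesis - moisture_stress - respiration

light = np.array([0, 5, 10, 20, 50, 100, 200.0])
for water in (0.9, 0.5):                               # wet vs. dry site
    print(f"water={water}:", np.round(net_growth(light, water), 2))
```

    Run as written, the dry case crosses zero (the compensation point) at a higher light level than the wet case and turns negative again at the brightest setting, which is the qualitative pattern the abstract describes.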

  13. Glossiness of Colored Papers based on Computer Graphics Model and Its Measuring Method

    NASA Astrophysics Data System (ADS)

    Aida, Teizo

    In the case of colored papers, the surface color strongly affects the gloss of the paper. A new glossiness measure for such colored papers is suggested in this paper. First, using achromatic and chromatic Munsell colored chips, the author obtained experimental equations that represent the relation between lightness V (or V and saturation C) and the psychological glossiness Gph of these chips. The author then defined a new glossiness G for colored papers, based on the above experimental equations for Gph and on Cook-Torrance's reflection model, which is widely used in the field of computer graphics. This new glossiness is shown to be nearly proportional to the psychological glossiness Gph. A measuring system for the new glossiness G is furthermore described; the measuring time for one specimen is within 1 minute.
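
    Cook-Torrance's reflection model, which the abstract builds on, has a standard published form. The sketch below implements one common variant (Beckmann microfacet distribution, Schlick's Fresnel approximation, and the usual geometric attenuation); the roughness and Fresnel values are illustrative, and the paper's glossiness measure G itself is not reproduced here.

```python
# Minimal Cook-Torrance specular term: D*F*G / (4 (n.l)(n.v)).
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def cook_torrance_specular(n, l, v, roughness=0.3, f0=0.05):
    h = normalize(l + v)                      # half-vector between light and view
    nl = max(float(n @ l), 1e-6)
    nv = max(float(n @ v), 1e-6)
    nh = max(float(n @ h), 1e-6)
    vh = max(float(v @ h), 1e-6)
    m2 = roughness ** 2
    d = np.exp((nh**2 - 1.0) / (m2 * nh**2)) / (np.pi * m2 * nh**4)  # Beckmann D
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                            # Schlick F
    g = min(1.0, 2*nh*nv/vh, 2*nh*nl/vh)                             # masking G
    return d * f * g / (4.0 * nl * nv)

n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.3, 0.0, 1.0]))
v = normalize(np.array([-0.3, 0.0, 1.0]))
print(cook_torrance_specular(n, l, v))
```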

  14. A Graphical User Interface for Parameterizing Biochemical Models of Photosynthesis and Chlorophyll Fluorescence

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2015-12-01

    Recent advances in optical remote sensing of photosynthesis offer great promise for estimating gross primary productivity (GPP) at leaf, canopy and even global scale. These methods, including solar-induced chlorophyll fluorescence (SIF) emission, fluorescence spectra, and hyperspectral features such as the red edge and the photochemical reflectance index (PRI), can be used to greatly enhance the predictive power of global circulation models (GCMs) by providing better constraints on GPP. How to use measured optical data to parameterize existing models such as SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) is not trivial, however. We have therefore extended a biochemical model to include fluorescence and other parameters in a coupled treatment. To help parameterize the model, we then use nonlinear curve-fitting routines to determine the parameter set that enables model results to best fit leaf-level gas exchange and optical data measurements. To make the tool more accessible to all practitioners, we have further designed a graphical user interface (GUI)-based front end to allow researchers to analyze data with a minimum of effort while, at the same time, allowing them to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. Here we discuss the tool and its effectiveness, using recently gathered leaf-level data.
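
    The nonlinear curve-fitting step described above can be illustrated with a much simpler stand-in model. The sketch below fits a rectangular-hyperbola light-response curve with SciPy; the model form, parameter names, and data are placeholders, not the coupled SCOPE/biochemical model of the abstract.

```python
# Hedged illustration of the parameter-fitting step: nonlinear least squares
# on a toy light-response curve standing in for the full coupled model.
import numpy as np
from scipy.optimize import curve_fit

def light_response(par, a_max, k, r_d):
    """Rectangular-hyperbola net assimilation vs. light (illustrative)."""
    return a_max * par / (par + k) - r_d

par = np.array([0, 50, 100, 250, 500, 1000, 1500.0])       # umol photons m-2 s-1
a_obs = np.array([-1.1, 3.2, 6.0, 9.8, 12.1, 13.6, 14.0])  # synthetic data

popt, pcov = curve_fit(light_response, par, a_obs, p0=[15.0, 200.0, 1.0])
print(dict(zip(["a_max", "k", "r_d"], popt)))
```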

  15. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, Williama

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  16. NATURAL graphics

    NASA Technical Reports Server (NTRS)

    Jones, R. H.

    1984-01-01

    The hardware and software developments in computer graphics are discussed. Major topics include: system capabilities, hardware design, system compatibility, and software interface with the data base management system.

  17. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  18. Higher-order ice-sheet modelling accelerated by multigrid on graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian; Egholm, David

    2013-04-01

    Higher-order ice flow modelling is a very computationally intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must consider very long time series, while both high temporal and spatial resolution are needed to resolve small effects. Higher-order and full-Stokes models have therefore seen very limited use in this field. However, recent advances in graphics card (GPU) technology for high-performance computing have proven extremely efficient in accelerating many large-scale scientific computations. General-purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end, we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
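
    A red-black Gauss-Seidel smoother is easiest to see on a linear model problem. The NumPy sketch below applies it to a 2-D Poisson equation rather than the nonlinear iSOSIA system; the checkerboard split is what exposes the parallelism that a GPU (or, here, vectorized slicing) exploits.

```python
# Red-black Gauss-Seidel for the 2-D Poisson problem laplacian(u) = f.
# Points of one "color" share no neighbors, so each half-sweep is parallel.
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=100):
    for _ in range(sweeps):
        for parity in (0, 1):                 # 0 = "red" points, 1 = "black"
            for i in range(1, u.shape[0] - 1):
                j0 = 1 + (i + parity) % 2     # first interior column of this color
                u[i, j0:-1:2] = 0.25 * (u[i-1, j0:-1:2] + u[i+1, j0:-1:2]
                                        + u[i, j0-1:-2:2] + u[i, j0+1::2]
                                        - h*h*f[i, j0:-1:2])
    return u

n = 33
u = np.zeros((n, n)); f = np.ones((n, n))
u = red_black_gauss_seidel(u, f, h=1.0/(n - 1))
print(u[n//2, n//2])
```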

  19. Toward Better Modeling of Supercritical Turbulent Mixing

    NASA Technical Reports Server (NTRS)

    Selle, Laurent; Okongo'o, Nora; Bellan, Josette; Harstad, Kenneth

    2008-01-01

    This study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamically supercritical (here, high-pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation-of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric-pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of regions of high density-gradient magnitude found in the DNS and observed experimentally under fully turbulent flow conditions. A model has been derived for the new term in the momentum equation and was found to perform well at small filter size but to deteriorate with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation; further investigation is needed to determine whether they are too computationally intensive for LES.

  20. DINAMO: a coupled sequence alignment editor/molecular graphics tool for interactive homology modeling of proteins.

    PubMed

    Hansen, M; Bentz, J; Baucom, A; Gregoret, L

    1998-01-01

    Gaining functional information about a novel protein is a universal problem in biomedical research. With the explosive growth of the protein sequence and structural databases, it is becoming increasingly common for researchers to attempt to build a three-dimensional model of their protein of interest in order to gain information about its structure and interactions with other molecules. The two most reliable methods for predicting the structure of a protein are homology modeling, in which the novel sequence is modeled on the known three-dimensional structure of a related protein, and fold recognition (threading), where the sequence is scored against a library of fold models, and the highest scoring model is selected. The sequence alignment to a known structure can be ambiguous, and human intervention is often required to optimize the model. We describe an interactive model building and assessment tool in which a sequence alignment editor is dynamically coupled to a molecular graphics display. By means of a set of assessment tools, the user may optimize his or her alignment to satisfy the known heuristics of protein structure. Adjustments to the sequence alignment made by the user are reflected in the displayed model by color and other visual cues. For instance, residues are colored by hydrophobicity in both the three-dimensional model and in the sequence alignment. This aids the user in identifying undesirable buried polar residues. Several different evaluation metrics may be selected including residue conservation, residue properties, and visualization of predicted secondary structure. These characteristics may be mapped to the model both singly and in combination. DINAMO is a Java-based tool that may be run either over the web or installed locally. Its modular architecture also allows Java-literate users to add plug-ins of their own design.

  1. Inferring Caravaggio's studio lighting and praxis in The calling of St. Matthew by computer graphics modeling

    NASA Astrophysics Data System (ADS)

    Stork, David G.; Nagy, Gabor

    2010-02-01

    We explored the working methods of the Italian Baroque master Caravaggio through computer graphics reconstruction of his studio, with special focus on his use of lighting and illumination in The calling of St. Matthew. Although he surely took artistic liberties while constructing this and other works and did not strive to provide a "photographic" rendering of the tableau before him, there are nevertheless numerous visual clues to the likely studio conditions and working methods within the painting: the falloff of brightness along the rear wall, the relative brightness of the faces of figures, and the variation in sharpness of cast shadows (i.e., umbrae and penumbrae). We explored two studio lighting hypotheses: that the primary illumination was local (and hence artificial) and that it was distant solar. We find that the visual evidence can be consistent with local (artificial) illumination if Caravaggio painted his figures separately, adjusting the brightness on each to compensate for the falloff in illumination. Alternatively, the evidence is consistent with solar illumination only if the rear wall had particular reflectance properties, as described by a bi-directional reflectance distribution function, BRDF. (Ours is the first research applying computer graphics to the understanding of artists' praxis that models subtle reflectance properties of surfaces through BRDFs, a technique that may find use in studies of other artists.) A somewhat puzzling visual feature, unnoted in the scholarly literature, is the upward-slanting cast shadow in the upper-right corner of the painting. We found this shadow is naturally consistent with a local illuminant passing through a small window perpendicular to the viewer's line of sight, but could also be consistent with solar illumination if the shadow was due to a slanted, overhanging section of a roof outside the artist's studio. Our results place likely conditions upon any hypotheses concerning Caravaggio's working methods and

  2. Business Graphics

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional level graphics for business communications and presentations. Products are no longer being manufactured.

  3. Graphic Arts.

    ERIC Educational Resources Information Center

    Kempe, Joseph; Kinde, Bruce

    This curriculum guide is intended to assist vocational instructors in preparing students for entry-level employment in the graphic arts field and getting them ready for advanced training in the workplace. The package contains an overview of new and emerging graphic arts technologies, competency/skill and task lists for the occupations of…

  4. Graphic Storytelling

    ERIC Educational Resources Information Center

    Thompson, John

    2009-01-01

    Graphic storytelling is a medium that allows students to make and share stories, while developing their art communication skills. American comics today are more varied in genre, approach, and audience than ever before. When considering the impact of Japanese manga on the youth, graphic storytelling emerges as a powerful player in pop culture. In…

  5. A Curriculum Model: Engineering Design Graphics Course Updates Based on Industrial and Academic Institution Requirements

    ERIC Educational Resources Information Center

    Meznarich, R. A.; Shava, R. C.; Lightner, S. L.

    2009-01-01

    Engineering design graphics courses taught in colleges or universities should provide and equip students preparing for employment with the basic occupational graphics skill competences required by engineering and technology disciplines. Academic institutions should introduce and include topics that cover the newer and more efficient graphics…

  6. A Comparison of Learning Style Models and Assessment Instruments for University Graphics Educators

    ERIC Educational Resources Information Center

    Harris, La Verne Abe; Sadowski, Mary S.; Birchman, Judy A.

    2006-01-01

    Kolb (2004) and others have defined learning style as a preference by which students learn and remember what they have learned. This presentation will include a summary of learning style research published in the "Engineering Design Graphics Journal" over the past 15 years on the topic of learning styles and graphics education. The presenters will…

  7. Numerical modelling of mixed-sediment consolidation

    NASA Astrophysics Data System (ADS)

    Grasso, Florent; Le Hir, Pierre; Bassoullet, Philippe

    2015-04-01

    Sediment transport modelling in estuarine environments, characterised by cohesive and non-cohesive sediment mixtures, has to consider a time variation of erodibility due to consolidation. Mud consolidation, generally validated against settling column experiments, is now fairly well simulated; however, numerical models still have difficulty accurately simulating the sedimentation and consolidation of mixed sediments over a wide range of initial conditions. This is partly due to the difficulty of formulating the contribution of sand in the hindered settling regime when segregation does not clearly occur. Based on extensive settling experiments with mud-sand mixtures, the objective of this study was to improve the numerical modelling of mixed-sediment consolidation by focusing on segregation processes. We used constitutive relationships following the fractal theory, associated with a new segregation formulation based on the relative mud concentration. Using specific sets of parameters calibrated for each test (with different initial sediment concentrations and sand contents), the model achieved excellent predictive skill in simulating the evolution of sediment height and vertical concentration profiles. This highlighted the model's capacity to properly simulate the occurrence of segregation for mud-sand mixtures characterised by a wide range of initial conditions. Nevertheless, the calibration parameters varied significantly, with the fractal number ranging from 2.64 to 2.77. This study therefore investigated the relevance of using a common set of parameters, which is generally required for 3D sediment transport modelling; the simulations were less accurate but remained satisfactory in an operational approach. Finally, a specific formulation for natural estuarine environments was proposed that correctly simulates the sedimentation-consolidation processes of mud-sand mixtures in 3D sediment transport modelling.
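
    The hindered-settling regime mentioned above is often closed with the classical Richardson-Zaki relation. The sketch below shows that generic textbook form, not the paper's mud-sand segregation formulation; the exponent and velocities are illustrative.

```python
# Richardson-Zaki hindered settling: velocity drops as concentration rises.
# Generic textbook closure; constants are illustrative only.
def hindered_settling_velocity(w0, phi, n=4.65):
    """w0: free settling velocity (m/s); phi: solid volume fraction."""
    return w0 * (1.0 - phi) ** n

for phi in (0.01, 0.1, 0.3):
    print(phi, hindered_settling_velocity(w0=1e-3, phi=phi))
```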

  8. Reducing Modeling Error of Graphical Methods for Estimating Volume of Distribution Measurements in PIB-PET study

    PubMed Central

    Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M

    2010-01-01

    Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or underestimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
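
    The Logan method itself is well documented: after some time t*, a plot of the integrated tissue curve divided by the instantaneous tissue activity against the integrated plasma curve divided by the tissue activity becomes approximately linear, with slope equal to the total distribution volume. The sketch below computes that slope on synthetic curves; it implements the standard Logan analysis, not the bias-reduced model proposed in the abstract.

```python
# Standard Logan graphical analysis on synthetic time-activity curves.
import numpy as np

def logan_vt(t, ct, cp, t_star):
    x = np.array([np.trapz(cp[:i+1], t[:i+1]) for i in range(len(t))]) / ct
    y = np.array([np.trapz(ct[:i+1], t[:i+1]) for i in range(len(t))]) / ct
    mask = t >= t_star                     # use only the linear late portion
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope                           # estimated distribution volume V_T

t = np.linspace(1, 90, 30)                       # minutes
cp = np.exp(-0.1 * t)                            # toy plasma input function
ct = 3.0 * (np.exp(-0.05*t) - np.exp(-0.4*t))    # toy tissue curve
print(logan_vt(t, ct, cp, t_star=30.0))
```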

  9. Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's Disease diagnosis

    PubMed Central

    Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M.; Ramírez, Javier

    2015-01-01

    Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in the brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models, allow conditional independence between regions to be revealed by estimating the covariance between two variables given the rest as constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits perfectly here, as brain regions usually interact with only a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to leverage the classification accuracy. The results obtained in this work show the

  10. Exploratory graphical models of functional and structural connectivity patterns for Alzheimer's Disease diagnosis.

    PubMed

    Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M; Ramírez, Javier

    2015-01-01

    Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in the brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models, allow conditional independence between regions to be revealed by estimating the covariance between two variables given the rest as constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits perfectly here, as brain regions usually interact with only a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the different groups. Thus, the different connectivity patterns for controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to leverage the classification accuracy. The results obtained in this work show the
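
    Sparse inverse covariance estimation of the kind described above is available off the shelf. A minimal sketch with scikit-learn's graphical lasso follows; the random data and edge threshold are placeholders for the regional PET or gray-matter measurements used in the paper.

```python
# Sparse inverse covariance (graphical lasso) on stand-in regional data.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 15))        # 120 subjects x 15 brain regions (toy)

model = GraphicalLassoCV().fit(X)
precision = model.precision_              # estimated sparse inverse covariance
edges = np.abs(precision) > 1e-3          # nonzero entries = conditional deps
np.fill_diagonal(edges, False)
print("conditional dependencies found:", edges.sum() // 2)
```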

  11. Inference of ICF implosion core mix using experimental data and theoretical mix modeling

    SciTech Connect

    Sherrill, Leslie Welser; Haynes, Donald A; Cooley, James H; Sherrill, Manolo E; Mancini, Roberto C; Tommasini, Riccardo; Golovkin, Igor E; Haan, Steven W

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information extracted from the experimental data, and it was found that Haan's mix model predicted trends in the width of the mix layer as a function of initial shell thickness. These results contribute to an assessment of the range of validity and predictive capability of the Haan saturation model, and they increase confidence in the methods used to extract mixing information from experimental data.

  12. Mixing parameterizations in ocean climate modeling

    NASA Astrophysics Data System (ADS)

    Moshonkin, S. N.; Gusev, A. V.; Zalesny, V. B.; Byshev, V. I.

    2016-03-01

    Results of numerical experiments with an eddy-permitting ocean circulation model on the simulation of the climatic variability of the North Atlantic and the Arctic Ocean are analyzed. We compare the quality of the ocean simulations obtained with different subgrid mixing parameterizations. The circulation model is found to be sensitive to the mixing parameterization. The computation of viscosity and diffusivity coefficients by an original splitting algorithm for the evolution equations of turbulence characteristics is found to be as efficient as traditional Monin-Obukhov parameterizations; at the same time, the variability of ocean climate characteristics is simulated more adequately. The simulation of salinity fields improves most significantly over the entire study region. Turbulent processes have a large long-term effect on the circulation through changes in the density fields. The velocity fields in the Gulf Stream and in the entire North Atlantic Subpolar Cyclonic Gyre are reproduced more realistically. The surface level height in the Arctic Basin is simulated more faithfully, marking the Beaufort Gyre better. The use of a Prandtl number that is a function of the Richardson number improves the quality of the ocean modeling.

  13. Mixing parametrizations for ocean climate modelling

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    An algorithm is presented for splitting the evolution equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling the decadal variability of the Arctic and Atlantic climate with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm is computationally efficient. Parameterizations using the split turbulence model yield a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational cost remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model leads to a realistic simulation of density and circulation but violates the T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model.
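
    The generation-dissipation stage can be illustrated with a scalar toy problem. The sketch below applies a semi-implicit update to a one-equation TKE relaxation, dk/dt = P - k/tau; this is only a schematic of the splitting idea, not the INMOM scheme, and all coefficients are invented.

```python
# Semi-implicit generation-dissipation step for a toy TKE equation:
# production explicit, dissipation implicit for unconditional stability.
def tke_step(k, production, tau, dt):
    """Update for dk/dt = P - k/tau with dissipation treated implicitly."""
    return (k + dt * production) / (1.0 + dt / tau)

k = 1e-4                                   # TKE, m^2 s^-2 (illustrative)
for _ in range(20):
    k = tke_step(k, production=1e-7, tau=3600.0, dt=600.0)
print(k)                                   # relaxes toward P * tau = 3.6e-4
```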

  14. Modeling populations of rotationally mixed massive stars

    NASA Astrophysics Data System (ADS)

    Brott, I.

    2011-02-01

    Massive stars can be considered as cosmic engines. With their high luminosities, strong stellar winds and violent deaths they drive the evolution of galaxies throughout the history of the universe. Despite the importance of massive stars, their evolution is still poorly understood. Two major issues have plagued evolutionary models of massive stars until today: mixing and mass loss. Because the effects of mass loss remain limited on the main sequence in the considered mass and metallicity range, this thesis concentrates on the role of mixing in massive stars. The thesis approaches this problem at the crossroads between observations and simulations, with the main question: do evolutionary models of single stars, accounting for the effects of rotation, reproduce the observed properties of real stars? In particular, we are interested in whether the evolutionary models can reproduce the surface abundance changes during the main-sequence phase. To constrain our models, we build a population synthesis model for the sample of the VLT-FLAMES Survey of Massive Stars, for which the star-formation history and rotational velocity distribution are well constrained. We consider the four main regions of the Hunter diagram: nitrogen-unenriched slow rotators and nitrogen-enriched fast rotators, which are predicted by theory, and nitrogen-enriched slow rotators and nitrogen-unenriched fast rotators, which are not predicted by our model. We conclude that these comparisons are currently not sufficient to verify the theory of rotational mixing; physical processes in addition to rotational mixing appear necessary to explain the stars in the latter two regions. The chapters of this thesis have been published in the following journals: Ch. 2: "Rotating Massive Main-Sequence Stars I: Grids of Evolutionary Models and Isochrones", I. Brott, S. E. de Mink, M. Cantiello, N. Langer, A. de Koter, C. J. Evans, I. Hunter, C. Trundle, J. S. Vink, submitted to Astronomy & Astrophysics. Ch. 3: "The VLT-FLAMES Survey of Massive

  15. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…

  16. Development of a graphical user interface in GIS raster format for the finite difference ground-water model code, MODFLOW

    SciTech Connect

    Heinzer, T.; Hansen, D.T.; Greer, W.; Sebhat, M.

    1996-12-31

    A geographic information system (GIS) was used in developing a graphical user interface (GUI) for use with the U.S. Geological Survey's finite difference ground-water flow model, MODFLOW. The GUI permits the construction of a MODFLOW-based ground-water flow model from scratch in a GIS environment. The model grid, input data and output are stored as separate raster data sets which may be viewed, edited, and manipulated in a graphic environment. Other GIS data sets can be displayed with the model data sets for reference and evaluation. The GUI sets up a directory structure for storage of the files associated with the ground-water model and the raster data sets created by the interface. The GUI stores model coefficients and model output as raster values. Values stored by these raster data sets are formatted for use with the ground-water flow model code.

  17. Graphical determination of metal bioavailability to soil invertebrates utilizing the Langmuir sorption model

    SciTech Connect

    Donkin, S.G.

    1997-09-01

    A new method of performing soil toxicity tests with free-living nematodes exposed to several metals and soil types has been adapted to the Langmuir sorption model in an attempt to bridge the gap between physico-chemical and biological data gathered in the complex soil matrix. Pseudo-Langmuir sorption isotherms have been developed using nematode toxic responses (lethality, in this case) in place of measured solvated metal, in order to model bioavailability more accurately. This method allows the graphical determination of Langmuir coefficients describing the maximum sorption capacities and sorption affinities of various metal-soil combinations in the context of real biological responses of indigenous organisms. Results from nematode mortality tests with zinc, cadmium, copper, and lead in four soil types and water were used for isotherm construction. The level of agreement between these results and available literature data on metal sorption behavior in soils suggests that biologically relevant data may be successfully fitted to sorption models such as the Langmuir. This would allow accurate prediction of soil contaminant concentrations that have minimal effect on indigenous invertebrates.
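
    Fitting Langmuir coefficients from such data is a routine nonlinear regression. The sketch below fits the standard isotherm q = q_max * K * C / (1 + K * C) to invented data; the paper substitutes nematode responses for measured solvated metal, but the curve-fitting step is analogous.

```python
# Fitting the standard Langmuir isotherm to synthetic sorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """Langmuir isotherm: sorbed amount q as a function of concentration c."""
    return q_max * k * c / (1.0 + k * c)

c = np.array([1, 5, 10, 25, 50, 100.0])       # mg/L in solution (invented)
q = np.array([8, 30, 48, 74, 88, 95.0])       # mg/kg sorbed (invented)

(q_max, k), _ = curve_fit(langmuir, c, q, p0=[100.0, 0.05])
print(f"max sorption capacity ~ {q_max:.1f}, sorption affinity ~ {k:.3f}")
```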

  18. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    NASA Astrophysics Data System (ADS)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow-transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speedup of 19 is achieved compared to ArcGIS and of 15 compared to SAGA. We show that on GPUs the topological-sort-based flow accumulation algorithm leads on average to a speedup by a factor of 7 over the flow-transfer algorithm. Thus a total speedup of the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km x 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many applications other than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
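
    The indegree/topological-sorting idea can be shown compactly in serial form. The sketch below accumulates flow over a D8 receiver array using a Kahn-style traversal; the paper's contribution is parallelizing this dependency structure on GPUs, which this toy version does not attempt.

```python
# Flow accumulation over a D8 receiver array in topological order.
import numpy as np
from collections import deque

def flow_accumulation(receiver):
    """receiver[i] = index of the downstream cell of i, or -1 for an outlet."""
    n = len(receiver)
    acc = np.ones(n)                      # each cell contributes itself
    indegree = np.zeros(n, dtype=int)
    for r in receiver:
        if r >= 0:
            indegree[r] += 1
    queue = deque(np.flatnonzero(indegree == 0))   # ridge cells first
    while queue:                          # Kahn-style topological traversal
        i = queue.popleft()
        r = receiver[i]
        if r >= 0:
            acc[r] += acc[i]
            indegree[r] -= 1
            if indegree[r] == 0:
                queue.append(r)
    return acc

# 4-cell toy network: 0 -> 1 -> 3, 2 -> 3, cell 3 is the outlet.
print(flow_accumulation(np.array([1, 3, 3, -1])))   # [1. 2. 1. 4.]
```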

  19. Multiple model adaptive control with mixing

    NASA Astrophysics Data System (ADS)

    Kuipers, Matthew

    Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full suite of linear time-invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real time, such as model-reference and pole-placement-type controllers. Because its candidate controllers are computed off line, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple-model adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error. And when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed
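
    The mixing idea can be caricatured in a few lines: candidate controllers tuned to subsets of the parameter space are blended with smooth weights driven by the parameter estimate. The weighting scheme, gains, and scalar plant below are invented for illustration and are not the scheme analyzed in the dissertation.

```python
# Toy adaptive mixing: smooth supervisor weights blend candidate controllers.
import numpy as np

centers = np.array([-1.0, 0.0, 1.0])     # parameter subsets covered by candidates
gains = np.array([2.0, 1.0, 0.5])        # candidate feedback gains, u_i = -k_i * x

def mixed_control(x, theta_hat, width=0.5):
    """Blend candidate control actions based on the parameter estimate."""
    w = np.exp(-((theta_hat - centers) / width) ** 2)
    w /= w.sum()                          # smooth mixing signal (sums to 1)
    return float(w @ (-gains * x))        # blended control action

print(mixed_control(x=0.8, theta_hat=0.3))
```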

  20. Colocalization Estimation Using Graphical Modeling and Variational Bayesian Expectation Maximization: Towards a Parameter-Free Approach.

    PubMed

    Awate, Suyash P; Radhakrishnan, Thyagarajan

    2015-01-01

    In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually. This leads to loss of reproducibility of the results. This paper proposes a brand-new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art.

  1. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    NASA Technical Reports Server (NTRS)

    Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

    Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with desired multiple requests submitted in sequence, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  2. Mixing model analysis of telescopic lunar spectra

    NASA Astrophysics Data System (ADS)

    Lucey, Paul G.; Clark, Beth C.; Hawke, B. Ray

    1993-03-01

    We have analyzed very high quality reflectance spectra of the lunar surface from the University of Hawaii lunar spectral data collection using a spectral mixing model. The spectra analyzed are those of 45 mare sites and 75 highland sites. The spectra were selected on the basis of very high signal-to-noise ratios, based on error bars and point-to-point scatter, and on the quality of removal of telluric water bands. The spectral mixing model used 7 components, not all of which were used in each fit. Four of the components were mineral spectra (an orthopyroxene, a clinopyroxene, an olivine, and an anorthite) measured at Brown University's RELAB. All of the minerals were 45-90 micron splits. Lunar soil contains other components which have the effect of reddening and darkening the soil as well as reducing spectral contrast. In addition, lunar soil contains spectrally neutral bright material (likely very fine grained feldspar) which serves to reduce spectral contrast and brighten soils. Early attempts to fit many of the spectra pointed out the need for a component which has a very broad smooth absorption feature centered near 1.1 microns. Glass is a good candidate for this component. For the bright component we used a flat reflectance of 70 percent to represent fine grained feldspar. For the 'glass' component we used a telescopic spectrum of a pyroclastic glass present on the Aristarchus plateau which is characterized by a strong smooth band centered at 1.07 microns. In addition to exhibiting the glass band this spectrum is very red and has a low albedo. On the assumption that the dark component and the red component are agglutinates, which is reasonable but not necessarily true, we sought a dark red component. To derive its properties we modelled the spectrum of an Apollo 16 soil (16xxx) and assumed the dark red component to comprise 60 percent of the soil, appropriate to agglutinate abundance in mature soil. We adjusted the albedo and slope of a straight line

  3. Mixing model analysis of telescopic lunar spectra

    NASA Technical Reports Server (NTRS)

    Lucey, Paul G.; Clark, Beth C.; Hawke, B. Ray

    1993-01-01

    We have analyzed very high quality reflectance spectra of the lunar surface from the University of Hawaii lunar spectral data collection using a spectral mixing model. The spectra analyzed are those of 45 mare sites and 75 highland sites. The spectra were selected on the basis of very high signal-to-noise ratios, based on error bars and point-to-point scatter, and on the quality of removal of telluric water bands. The spectral mixing model used 7 components, not all of which were used in each fit. Four of the components were mineral spectra (an orthopyroxene, a clinopyroxene, an olivine, and an anorthite) measured at Brown University's RELAB. All of the minerals were 45-90 micron splits. Lunar soil contains other components which have the effect of reddening and darkening the soil as well as reducing spectral contrast. In addition, lunar soil contains spectrally neutral bright material (likely very fine grained feldspar) which serves to reduce spectral contrast and brighten soils. Early attempts to fit many of the spectra pointed out the need for a component which has a very broad smooth absorption feature centered near 1.1 microns. Glass is a good candidate for this component. For the bright component we used a flat reflectance of 70 percent to represent fine grained feldspar. For the 'glass' component we used a telescopic spectrum of a pyroclastic glass present on the Aristarchus plateau which is characterized by a strong smooth band centered at 1.07 microns. In addition to exhibiting the glass band this spectrum is very red and has a low albedo. On the assumption that the dark component and the red component are agglutinates, which is reasonable but not necessarily true, we sought a dark red component. To derive its properties we modelled the spectrum of an Apollo 16 soil (16xxx) and assumed the dark red component to comprise 60 percent of the soil, appropriate to agglutinate abundance in mature soil. We adjusted the albedo and slope of a straight line
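
    Linear spectral mixing of the kind used above is commonly solved as a nonnegative least-squares problem. The sketch below recovers endmember fractions with SciPy's NNLS on random stand-in spectra; real work would use the laboratory and telescopic endmember spectra the abstract describes.

```python
# Linear spectral unmixing: observed spectrum = nonnegative mix of endmembers.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
wavelengths = 50
endmembers = rng.uniform(0.05, 0.7, size=(wavelengths, 4))  # columns = components

true_frac = np.array([0.5, 0.2, 0.1, 0.2])
observed = endmembers @ true_frac + rng.normal(0, 0.002, wavelengths)

frac, residual = nnls(endmembers, observed)       # nonnegative least squares
print("recovered fractions:", np.round(frac / frac.sum(), 2))
```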

  4. Linkage analysis with an alternative formulation for the mixed model of inheritance: The finite polygenic mixed model

    SciTech Connect

    Stricker, C.; Fernando, R.L.; Elston, R.C.

    1995-12-01

    This paper presents an extension of the finite polygenic mixed model of Fernando et al. to linkage analysis. The finite polygenic mixed model, extended for linkage analysis, leads to a likelihood that can be calculated using efficient algorithms developed for oligogenic models. For comparison, linkage analysis of 5 simulated 4021-member pedigrees was performed using the usual mixed model of inheritance, approximated by Hasstedt, and the finite polygenic mixed model extended for linkage analysis presented here. Maximum likelihood estimates of the finite polygenic mixed model could be inferred to be closer to the simulated values in these pedigrees. 31 refs., 2 tabs.

  5. Linkage Analysis with an Alternative Formulation for the Mixed Model of Inheritance: The Finite Polygenic Mixed Model

    PubMed Central

    Stricker, C.; Fernando, R. L.; Elston, R. C.

    1995-01-01

    This paper presents an extension of the finite polygenic mixed model of FERNANDO et al. (1994) to linkage analysis. The finite polygenic mixed model, extended for linkage analysis, leads to a likelihood that can be calculated using efficient algorithms developed for oligogenic models. For comparison, linkage analysis of 5 simulated 4021-member pedigrees was performed using the usual mixed model of inheritance, approximated by HASSTEDT (1982), and the finite polygenic mixed model extended for linkage analysis presented here. Maximum likelihood estimates of the finite polygenic mixed model could be inferred to be closer to the simulated values in these pedigrees. PMID:8601502

  6. Extended model for Richtmyer-Meshkov mix

    SciTech Connect

    Mikaelian, K O

    2009-11-18

    We examine four Richtmyer-Meshkov (RM) experiments on shock-generated turbulent mix and find them to be in good agreement with our earlier simple model, in which the growth rate ḣ of the mixing layer following a shock or reshock is constant and given by 2αAΔv, independent of the initial conditions h0. Here A is the Atwood number (ρB − ρA)/(ρB + ρA), ρA and ρB are the densities of the two fluids, Δv is the jump in velocity induced by the shock or reshock, and α is the constant measured in Rayleigh-Taylor (RT) experiments: α_bubble ≈ 0.05-0.07 and α_spike ≈ (1.8-2.5)α_bubble for A ≈ 0.7-1.0. In the extended model the growth rate begins to decay after a time t*, when h = h*, slowing down from h = h0 + 2αAΔv·t to h ~ t^θ behavior, with θ_bubble ≈ 0.25 and θ_spike ≈ 0.36 for A ≈ 0.7. We ascribe this change-over to loss of memory of the direction of the shock or reshock, signaling the transition from highly directional to isotropic turbulence. In the simplest extension of the model, h*/h0 is independent of Δv and depends only on A. We find that h*/h0 ≈ 2.5-3.5 for A ≈ 0.7-1.0.
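
    The extended model's growth law is simple enough to state as code. The sketch below evaluates h(t) as linear growth up to t* and a power law matched continuously afterward; the parameter values are illustrative, not the experimental fits.

```python
# Piecewise mix-layer width: linear growth up to t*, then ~ t^theta,
# matched continuously (not smoothly) at t*. Numbers are illustrative.
import numpy as np

def mix_width(t, h0, alpha, atwood, dv, t_star, theta=0.25):
    h_star = h0 + 2.0 * alpha * atwood * dv * t_star   # width at the transition
    t = np.asarray(t, dtype=float)
    early = h0 + 2.0 * alpha * atwood * dv * t         # directional growth
    late = h_star * (t / t_star) ** theta              # isotropic decay regime
    return np.where(t < t_star, early, late)

t = np.linspace(0, 10, 6)
print(mix_width(t, h0=0.1, alpha=0.06, atwood=0.8, dv=5.0, t_star=2.0))
```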

  7. Combining features in a graphical model to predict protein binding sites.

    PubMed

    Wierschin, Torsten; Wang, Keyu; Welter, Marlon; Waack, Stephan; Stanke, Mario

    2015-05-01

    Large efforts have been made in classifying residues as binding sites in proteins using machine learning methods. The prediction task can be translated into the computational challenge of assigning each residue the label binding site or non-binding site. Observational data come from various, possibly highly correlated, sources. They include the structure of the protein but not the structure of the complex. The model class of conditional random fields (CRFs) has previously been used successfully for protein binding site prediction. Here, a new CRF approach is presented that models the dependencies of residues using a general graphical structure defined as a neighborhood graph, and thus our model makes fewer independence assumptions on the labels than sequential labeling approaches. A novel node feature "change in free energy" is introduced into the model, which is then denoted by ΔF-CRF. Parameters are trained with an online large-margin algorithm. Using the standard feature class relative accessible surface area alone, the general graph-structure CRF already achieves higher prediction accuracy than the linear-chain CRF of Li et al. ΔF-CRF performs significantly better over a large range of false positive rates than the support-vector-machine-based program PresCont of Zellner et al. on a homodimer set containing 128 chains. ΔF-CRF has a broader scope than PresCont since it is not constrained to protein subgroups and requires no multiple sequence alignment. The improvement is attributed to the advantageous combination of the novel node feature with the standard feature and to the adopted parameter training method.

  8. Design Graphics

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A mathematician, David R. Hedgley, Jr., developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.

  9. Multi-domain Hierarchical Free-Sketch Recognition Using Graphical Models

    NASA Astrophysics Data System (ADS)

    Alvarado, Christine

    In recent years there has been an increasing interest in sketch-based user interfaces, but the problem of robust free-sketch recognition remains largely unsolved. This chapter presents a graphical-model-based approach to free-sketch recognition that uses context to improve recognition accuracy without placing unnatural constraints on the way the user draws. Our approach uses context to guide the search for possible interpretations and uses a novel form of dynamically constructed Bayesian networks to evaluate these interpretations. An evaluation of this approach on two domains—family trees and circuit diagrams—reveals that in both domains the use of context to reclassify low-level shapes significantly reduces recognition error over a baseline system that does not reinterpret low-level classifications. Finally, we discuss an emerging technique to solve a major remaining challenge for multi-domain sketch recognition revealed by our evaluation: the problem of grouping strokes into individual symbols reliably and efficiently, without placing unnatural constraints on the user's drawing style.

  10. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    PubMed

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
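
    The between-bird/within-bird partition that motivates the mixed Gompertz model can be illustrated with a small simulation; the curve parameters and variance components below are invented for the sketch, not estimates from the cited study.

```python
# Sketch of the idea behind a mixed Gompertz growth model: each bird
# shares the population curve but gets its own random asymptote, so BW
# variation splits into between-bird and within-bird parts.
import numpy as np

rng = np.random.default_rng(0)

def gompertz(t, wm, b, t_infl):
    """Gompertz BW curve: asymptote wm, rate b, inflection age t_infl."""
    return wm * np.exp(-np.exp(-b * (t - t_infl)))

ages = np.arange(0, 60, 3.0)                      # days
wm_pop, b, t_infl = 4000.0, 0.08, 30.0            # population parameters

# Between-bird variation: a random effect on the asymptote.
birds = wm_pop + rng.normal(0.0, 300.0, size=50)
# Within-bird variation: residual measurement error.
data = np.array([gompertz(ages, wm_i, b, t_infl) +
                 rng.normal(0.0, 50.0, ages.size) for wm_i in birds])

between = data.mean(axis=1).var()   # crude between-bird component
within = data.var(axis=1).mean()    # crude within-bird component
print(f"between-bird share of variance: {between / (between + within):.2f}")
```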

  11. MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation

    EPA Science Inventory

    Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...

  12. Single calcium channel domain gating of synaptic vesicle fusion at fast synapses; analysis by graphic modeling

    PubMed Central

    Stanley, Elise F

    2015-01-01

    At fast-transmitting presynaptic terminals Ca2+ ions enter through voltage-gated calcium channels (CaVs) and bind to a synaptic vesicle (SV)-associated calcium sensor (SV-sensor) to gate fusion and discharge. An open CaV generates a high-concentration plume, or nanodomain, of Ca2+ that dissipates precipitously with distance from the pore. At most fast synapses, such as the frog neuromuscular junction (NMJ), the SV sensors are located sufficiently close to individual CaVs to be gated by single nanodomains. However, at others, such as the mature rodent calyx of Held, the physiology is more complex, with evidence for CaVs both close to and distant from the SV sensor, and it is argued that release is gated primarily by the overlapping Ca2+ nanodomains from many CaVs. We devised a 'graphic modeling' method to sum Ca2+ from individual CaVs located at varying distances from the SV-sensor to determine the SV release probability and also the fraction of that probability that can be attributed to single-domain gating. This method was applied first to simplified, low and high CaV density model release sites and then to published data on the contrasting frog NMJ and rodent calyx of Held native synapses. We report 3 main predictions: the SV-sensor is positioned very close to the point at which the SV fuses with the membrane; single-domain release gating predominates even at synapses where the SV abuts a large cluster of CaVs; and even relatively remote CaVs can contribute significantly to single-domain-based gating. PMID:26457441
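
    A minimal numerical sketch in the spirit of this 'graphic modeling' method is shown below: steady-state Ca2+ contributions from open channels at assumed distances are summed at the sensor and passed through a cooperative release function. The buffered point-source formula, the Hill-type sensor, and every constant are illustrative assumptions, not values from the study.

```python
# Sum Ca2+ nanodomains from open channels at various distances from the
# SV sensor and convert the total into a release probability.
import numpy as np

def nanodomain_ca(r_nm, i_ca_pA=0.3, D=220.0, lam_nm=25.0):
    """[Ca2+] (uM) at distance r from an open channel pore.
    Linearized buffered point source: C = i/(2F) / (4*pi*D*r) * exp(-r/lam)."""
    F = 96485.0                                  # C/mol
    flux = (i_ca_pA * 1e-12) / (2 * F)           # mol/s of Ca2+ (charge 2)
    c = flux / (4 * np.pi * D * 1e-12 * (r_nm * 1e-9))   # mol/m^3 == mM
    return 1e3 * c * np.exp(-r_nm / lam_nm)      # -> uM

def release_prob(total_ca_uM, kd_uM=50.0, n=4):
    """Cooperative SV sensor, modeled here as a simple Hill function."""
    return total_ca_uM**n / (total_ca_uM**n + kd_uM**n)

# One close channel versus a cluster of remote ones.
close = nanodomain_ca(np.array([15.0]))
cluster = nanodomain_ca(np.array([60.0, 70.0, 80.0, 90.0, 100.0]))
for label, doms in [("single close CaV", close), ("remote cluster", cluster)]:
    print(f"{label}: total {doms.sum():.1f} uM,"
          f" P(release) {release_prob(doms.sum()):.3f}")
```

    Even this toy version reproduces the qualitative point of the record: the steep distance dependence makes one nearby channel worth more than an entire remote cluster.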

  13. On Local Homogeneity and Stochastically Ordered Mixed Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg

    2006-01-01

    Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…

  14. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    USGS Publications Warehouse

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  15. The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education

    PubMed Central

    Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki

    2012-01-01

    Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759

  16. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  17. CONVECTIVE OVERSHOOT MIXING IN MODELS OF THE STELLAR INTERIOR

    SciTech Connect

    Zhang, Q. S.

    2013-04-01

    Convective overshoot mixing plays an important role in stellar structure and evolution. However, overshoot mixing is also a long-standing problem; it is one of the most uncertain factors in stellar physics. As is well known, convective overshoot mixing is determined by the radial turbulent flux of the chemical component. In this paper, a local model of the radial turbulent flux of the chemical component is established based on hydrodynamic equations and some model assumptions and is tested in stellar models. The main conclusions are as follows. (1) The local model shows that convective overshoot mixing could be regarded as a diffusion process and the diffusion coefficient for different chemical elements is the same. However, if the non-local terms, i.e., the gradient of the third-order moments, are taken into account, the diffusion coefficient for each chemical element should in general be different. (2) The diffusion coefficient of convective/overshoot mixing shows different behaviors in the convection zone and in the overshoot region because the characteristic length scale of the mixing is large in the convection zone and small in the overshoot region. Overshoot mixing should be regarded as a weak mixing process. (3) The diffusion coefficient of mixing is tested in stellar models, and it is found that a single choice of our central mixing parameter leads to consistent results for a solar convective envelope model as well as for core convection models of stars with masses from 2 M⊙ to 10 M⊙.

  18. Reacting to Graphic Horror: A Model of Empathy and Emotional Behavior.

    ERIC Educational Resources Information Center

    Tamborini, Ron; And Others

    1990-01-01

    Studies viewer response to graphic horror films. Reports that undergraduate mass communication students viewed clips from two horror films and a scientific television program. Concludes that people who score high on measures for wandering imagination, fictional involvement, humanistic orientation, and emotional contagion tend to find horror films…

  19. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  20. Mixed Barrier Model for the Mixed Glass Former Effect in Ion Conducting Glasses

    NASA Astrophysics Data System (ADS)

    Schuch, Michael; Müller, Christian R.; Maass, Philipp; Martin, Steve W.

    2009-04-01

    Mixing two types of glass formers in ion conducting glasses can be exploited to lower the conductivity activation energy and thereby increase the ionic conductivity, a phenomenon known as the mixed glass former effect (MGFE). We develop a model for this MGFE, in which activation barriers for individual ion jumps get lowered in inhomogeneous environments containing both types of network-forming units. Fits of the model to experimental data allow one to estimate the strength of the barrier reduction, and they indicate a spatial clustering of the two types of network formers. The model predicts a time-temperature superposition of conductivity spectra onto a common master curve independent of the mixing ratio.

  1. ModelMuse: A U.S. Geological Survey Open-Source, Graphical User Interface for Groundwater Models

    NASA Astrophysics Data System (ADS)

    Winston, R. B.

    2013-12-01

    ModelMuse is a free, publicly available graphical preprocessor used to generate the input and display the output for several groundwater models. It is written in Object Pascal and the source code is available on the USGS software web site. Supported models include the MODFLOW family of models, PHAST (version 1), and SUTRA version 2.2. With MODFLOW and PHAST, the user generates a grid and uses 'objects' (points, lines, and polygons) to define boundary conditions and the spatial variation in aquifer properties. Because the objects define the spatial variation, the grid can be changed without the user needing to re-enter spatial data. The same paradigm is used with SUTRA except that the user generates a quadrilateral finite-element mesh instead of a rectangular grid. The user interacts with the model in a top view and in a vertical cross section. The cross section can be at any angle or location. There is also a three-dimensional view of the model. For SUTRA, a new method of visualizing the permeability and related properties has been introduced. In three-dimensional SUTRA models, the user specifies the permeability tensor by specifying permeability in three mutually orthogonal directions that can be oriented in space in any direction. Because it is important for the user to be able to check both the magnitudes and directions of the permeabilities, ModelMuse displays the permeabilities as either a two-dimensional or a three-dimensional vector plot. Color is used to differentiate the maximum, middle, and minimum permeability vectors. The magnitude of the permeability is shown by the vector length. The vector angle shows the direction of the maximum, middle, or minimum permeability. Contour and color plots can also be used to display model input and output data.

  2. Polygenic Modeling with Bayesian Sparse Linear Mixed Models

    PubMed Central

    Zhou, Xiang; Carbonetto, Peter; Stephens, Matthew

    2013-01-01

    Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a “Bayesian sparse linear mixed model” (BSLMM) that includes both these models as special cases. We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html. PMID:23408905
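
    The hybrid structure of BSLMM can be made concrete by simulating from the implied prior; the sketch below shows the sparse large-effect component plus the polygenic random effect, with invented dimensions and hyper-parameter values, and is not the authors' inference code.

```python
# Simulate phenotypes from a BSLMM-style prior:
#   y = X*beta + u + e,  beta sparse, u polygenic.
# With pi = 0 this reduces to a standard LMM; with sigma_b = 0 it
# reduces to pure sparse regression, mirroring the special cases above.
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 2000                       # individuals, SNPs (illustrative)
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
X -= X.mean(axis=0)                    # centered genotypes

pi, sigma_a, sigma_b, sigma_e = 0.01, 0.5, 0.3, 1.0
# Sparse component: a few SNPs with large effects.
beta = np.where(rng.random(p) < pi, rng.normal(0, sigma_a, p), 0.0)
# Polygenic component: u ~ N(0, sigma_b^2 * K) with K = XX'/p, drawn
# cheaply as X times small i.i.d. effects.
u = X @ rng.normal(0.0, sigma_b / np.sqrt(p), size=p)

y = X @ beta + u + rng.normal(0, sigma_e, n)
print("non-zero large effects:", int((beta != 0).sum()))
```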

  3. Radiolysis Model Formulation for Integration with the Mixed Potential Model

    SciTech Connect

    Buck, Edgar C.; Wittman, Richard S.

    2014-07-10

    The U.S. Department of Energy Office of Nuclear Energy (DOE-NE), Office of Fuel Cycle Technology has established the Used Fuel Disposition Campaign (UFDC) to conduct the research and development activities related to storage, transportation, and disposal of used nuclear fuel (UNF) and high-level radioactive waste. Within the UFDC, the components for a general system model of the degradation and subsequent transport of UNF are being developed to analyze the performance of disposal options [Sassani et al., 2012]. Two model components of the near-field part of the problem are the ANL Mixed Potential Model and the PNNL Radiolysis Model. This report is in response to the desire to integrate the two models as outlined in [Buck, E.C., J.L. Jerden, W.L. Ebert, R.S. Wittman, (2013) “Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation,” FCRD-UFD-2013-000290, M3FT-PN0806058

  4. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2-layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2-layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  5. Interactive computer graphics

    NASA Astrophysics Data System (ADS)

    Purser, K.

    1980-08-01

    Design layouts have traditionally been done on a drafting board by drawing a two-dimensional representation with section cuts and side views to describe the exact three-dimensional model. With the advent of computer graphics, a three-dimensional model can be created directly. The computer stores the exact three-dimensional model, which can be examined from any angle and at any scale. A brief overview of interactive computer graphics, how models are made and some of the benefits/limitations are described.

  6. Lidar observations of mixed layer dynamics - Tests of parameterized entrainment models of mixed layer growth rate

    NASA Technical Reports Server (NTRS)

    Boers, R.; Eloranta, E. W.; Coulter, R. L.

    1984-01-01

    Ground-based lidar measurements of the atmospheric mixed layer depth, the entrainment zone depth, and the wind speed and wind direction were used to test various parameterized entrainment models of mixed layer growth rate. Six case studies under clear-air convective conditions over flat terrain in central Illinois are presented. It is shown that surface heating alone accounts for a major portion of the rise of the mixed layer on all days. A new set of entrainment model constants was determined which optimized height predictions for the dataset. Under convective conditions, the shape of the mixed layer height prediction curves closely resembled the observed shapes. Under conditions when significant wind shear was present, the shape of the height prediction curve departed from the data, suggesting deficiencies in the parameterization of shear production. Development of small cumulus clouds on top of the layer is shown to affect mixed layer depths in the afternoon growth phase.
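
    A minimal slab version of the kind of parameterized entrainment model tested here treats the mixed layer as deepening while surface heat flux erodes a linearly stratified atmosphere, dh/dt = (1 + 2β)H_s/(γh). The entrainment constant β is exactly the sort of constant such a study recalibrates; all values below are illustrative.

```python
# Slab mixed-layer growth under constant surface heating.
# h0: initial depth (m), gamma: potential-temperature lapse rate (K/m),
# H_s: kinematic surface heat flux (K m/s), beta: entrainment constant.
def grow_mixed_layer(hours=8, h0=200.0, beta=0.2, gamma=5e-3, H_s=0.15):
    h, dt = h0, 60.0                   # integrate with 1-minute steps
    heights = [round(h)]
    for _ in range(hours):
        for _ in range(int(3600 / dt)):
            h += dt * (1.0 + 2.0 * beta) * H_s / (gamma * h)
        heights.append(round(h))       # depth at each hour
    return heights

print(grow_mixed_layer())              # rapid morning growth, then slowing
```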

  7. [Linear mixed modeling of branch biomass for Korean pine plantation].

    PubMed

    Dong, Li-Hu; Li, Feng-Ri; Jia, Wei-Wei

    2013-12-01

    Based on the measurement of 3643 branch biomass samples from 60 Korean pine (Pinus koraiensis) trees from Mengjiagang Forest Farm, Heilongjiang Province, all-subset regression techniques were used to develop the branch biomass models (branch, foliage, and total biomass models). The optimal base model of branch biomass was developed as ln w = k1 + k2 ln Lb + k3 ln Db. Then, linear mixed models were developed based on PROC MIXED of SAS 9.3 software, and evaluated with AIC, BIC, log-likelihood, and likelihood ratio tests. The results showed that the foliage and total biomass models with parameters k1, k2 and k3 as mixed effects showed the best performance, while the branch biomass model with parameters k5 and k2 as mixed effects showed the best performance. Finally, we evaluated the optimal base model and the mixed model of branch biomass. Model validation confirmed that the mixed model was better than the optimal base model. The mixed model with random parameters could not only provide more accurate and precise predictions, but also represent individual differences through the variance-covariance structure.
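
    A fit of this log-linear form with a random tree effect can be sketched with statsmodels; the data frame and its columns below are hypothetical stand-ins for the Korean pine measurements, and only the intercept k1 is treated as random here for brevity.

```python
# Fit ln w = k1 + k2 ln Lb + k3 ln Db with k1 random across trees,
# on simulated branch data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
trees = np.repeat(np.arange(30), 20)                  # 30 trees, 20 branches
Lb = rng.uniform(0.5, 4.0, trees.size)                # branch length (m)
Db = rng.uniform(0.5, 5.0, trees.size)                # branch diameter (cm)
tree_effect = rng.normal(0.0, 0.2, 30)[trees]         # random intercept
logw = (-2.0 + tree_effect + 0.9 * np.log(Lb) + 1.8 * np.log(Db)
        + rng.normal(0.0, 0.15, trees.size))

df = pd.DataFrame({"tree": trees, "logw": logw,
                   "logLb": np.log(Lb), "logDb": np.log(Db)})

model = smf.mixedlm("logw ~ logLb + logDb", df, groups=df["tree"])
print(model.fit().summary())
```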

  8. Accurate and computationally efficient mixing models for the simulation of turbulent mixing with PDF methods

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Jenny, Patrick

    2013-08-01

    Different simulation methods are applicable to study turbulent mixing. When applying probability density function (PDF) methods, turbulent transport and chemical reactions appear in closed form, which is not the case in second-moment closure methods (RANS). Moreover, PDF methods provide the entire joint velocity-scalar PDF instead of a limited set of moments. In PDF methods, however, a mixing model is required to account for molecular diffusion. In joint velocity-scalar PDF methods, mixing models should also account for the joint velocity-scalar statistics, which is often underappreciated in applications. The interaction by exchange with the conditional mean (IECM) model accounts for these joint statistics, but requires velocity-conditional scalar means that are expensive to compute in spatially three-dimensional settings. In this work, two alternative mixing models are presented that provide more accurate PDF predictions at reduced computational cost compared to the IECM model, since no conditional moments have to be computed. All models are tested for different mixing benchmark cases and their computational efficiencies are inspected thoroughly. The benchmark cases involve statistically homogeneous and inhomogeneous settings dealing with three streams that are characterized by two passive scalars. The inhomogeneous case clearly illustrates the importance of accounting for joint velocity-scalar statistics in the mixing model. Failure to do so leads to significant errors in the resulting scalar means, variances and other statistics.
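
    The simplest PDF-method mixing model, IEM (interaction by exchange with the mean), makes the role of a mixing model concrete; it is a simpler relative of the IECM model discussed above, relaxing each particle's scalar toward the unconditional rather than the velocity-conditional mean.

```python
# IEM mixing step: every particle's scalar relaxes toward the ensemble
# mean, decaying the scalar variance without changing the mean.
import numpy as np

def iem_step(phi, dt, tau, c_phi=2.0):
    """dphi/dt = -(c_phi/2) * (phi - <phi>) / tau."""
    return phi - dt * 0.5 * c_phi * (phi - phi.mean()) / tau

rng = np.random.default_rng(3)
phi = rng.choice([0.0, 1.0], size=10000)     # two unmixed feed streams
tau, dt = 1.0, 0.01
for _ in range(200):
    phi = iem_step(phi, dt, tau)
print("mean:", phi.mean().round(3), "variance:", phi.var().round(4))
```

    IEM decays the variance but famously leaves the shape of a two-delta PDF unchanged, which is one motivation for conditional and multi-particle alternatives like those studied in this record.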

  9. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  10. Taxonomy Of Magma Mixing I: Magma Mixing Metrics And The Thermochemistry Of Magma Hybridization Illuminated With A Toy Model

    NASA Astrophysics Data System (ADS)

    Spera, F. J.; Bohrson, W. A.; Schmidt, J.

    2013-12-01

    The rock record preserves abundant evidence of magma mixing in the form of mafic enclaves and mixed pumice in volcanic eruptions, syn-plutonic mafic or silicic dikes and intrusive complexes, replenishment events recorded in cumulates from layered intrusions, and crystal-scale heterogeneity in phenocrysts and cumulate minerals. These observations show that magma mixing in conjunction with crystallization (perfect fractional or incremental batch) is a first-order petrogenetic process. Magma mixing (sensu lato) occurs across a spectrum of mixed states from magma mingling to complete blending. The degree of mixing is quantified (Oldenburg et al., 1989) using two measures: the statistics of the segregation length scales (scale of segregation, L*) and the spatial contrast in composition (C) relative to the mean C (intensity of segregation, I). Mingling of dissimilar magmas produces a heterogeneous mixture containing discrete regions of end-member melts and populations of crystals with L* finite and I > 0. When L*→∞ and I→0, the mixing magmas become hybridized and can be studied thermodynamically. Such a hybrid magma is a multiphase equilibrium mixture of homogeneous melt, unzoned crystals and possible bubbles of a supercritical fluid. Here, we use a toy model to elucidate the principles of magma hybridization in a binary system (components A and B with pure crystals of α or β phase) with simple thermodynamics to build an outcome taxonomy. This binary system is not unlike the system Anorthite-Diopside, the classic low-pressure model basalt system. In the toy model, there are seven parameters describing the phase equilibria (eutectic T and X, specific heat, melting T and fusion enthalpies of α and β crystals) and five variables describing the magma mixing conditions: end-member bulk compositions, temperatures and fraction of resident magma (M) that blends with recharge (R) magma to form a single equilibrium hybrid magma. There are 24 possible initial states when M
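
    The thermal side of hybridization can be illustrated with a bare-bones enthalpy balance; the sketch below collapses the toy model's latent-heat bookkeeping into a single optional term and uses invented values throughout.

```python
# Hybrid temperature of a blended M + R magma from a mass-weighted
# enthalpy balance. dH_crystals (J per kg of mixture) stands in for the
# heat absorbed melting resident-magma crystals; positive values cool
# the hybrid.
def hybrid_temperature(T_m, T_r, f_m, cp_m=1200.0, cp_r=1200.0,
                       dH_crystals=0.0):
    f_r = 1.0 - f_m
    heat_capacity = f_m * cp_m + f_r * cp_r
    T_h = (f_m * cp_m * T_m + f_r * cp_r * T_r) / heat_capacity
    return T_h - dH_crystals / heat_capacity

# 60% resident magma at 1150 C blended with 40% recharge at 1300 C:
print(hybrid_temperature(1150.0, 1300.0, f_m=0.6))                   # ~1210 C
print(hybrid_temperature(1150.0, 1300.0, f_m=0.6, dH_crystals=6e4))  # cooler
```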

  11. Models of neutrino mass, mixing and CP violation

    NASA Astrophysics Data System (ADS)

    King, Stephen F.

    2015-12-01

    In this topical review we argue that neutrino mass and mixing data motivates extending the Standard Model (SM) to include a non-Abelian discrete flavour symmetry in order to accurately predict the large leptonic mixing angles and CP violation. We begin with an overview of the SM puzzles, followed by a description of some classic lepton mixing patterns. Lepton mixing may be regarded as a deviation from tri-bimaximal mixing, with charged lepton corrections leading to solar mixing sum rules, or tri-maximal lepton mixing leading to atmospheric mixing sum rules. We survey neutrino mass models, using a roadmap based on the open questions in neutrino physics. We then focus on the seesaw mechanism with right-handed neutrinos, where sequential dominance (SD) can account for large lepton mixing angles and CP violation, with precise predictions emerging from constrained SD (CSD). We define the flavour problem and discuss progress towards a theory of flavour using GUTs and discrete family symmetry. We classify models as direct, semidirect or indirect, according to the relation between the Klein symmetry of the mass matrices and the discrete family symmetry, in all cases focussing on spontaneous CP violation. Finally we give two examples of realistic and highly predictive indirect models with CSD, namely an A to Z of flavour with Pati-Salam and a fairly complete A4 × SU(5) SUSY GUT of flavour, where both models have interesting implications for leptogenesis.

  12. A multifluid mix model with material strength effects

    SciTech Connect

    Chang, C. H.; Scannapieco, A. J.

    2012-04-23

    We present a new multifluid mix model. Its features include material strength effects and pressure and temperature nonequilibrium between mixing materials. It is applicable to both interpenetration and demixing of immiscible fluids and diffusion of miscible fluids. The presented model exhibits the appropriate smooth transition in mathematical form as the mixture evolves from multiphase to molecular mixing, extending its applicability to the intermediate stages in which both types of mixing are present. Virtual mass force and momentum exchange have been generalized for heterogeneous multimaterial mixtures. The compression work has been extended so that the resulting species energy equations are consistent with the pressure force and material strength.

  13. A New Model for Mix It Up

    ERIC Educational Resources Information Center

    Holladay, Jennifer

    2009-01-01

    Since 2002, Teaching Tolerance's Mix It Up at Lunch Day program has helped millions of students cross social boundaries and create more inclusive school communities. Its goal is to create a safe, purposeful opportunity for students to break down the patterns of social self-segregation that too often plague schools. Research conducted in 2006 by…

  14. Documentation of a graphical display program for the saturated- unsaturated transport (SUTRA) finite-element simulation model

    USGS Publications Warehouse

    Souza, W.R.

    1987-01-01

    This report documents a graphical display program for the U.S. Geological Survey finite-element groundwater flow and solute transport model. Graphic features of the program, SUTRA-PLOT (SUTRA-PLOT = saturated/unsaturated transport), include: (1) plots of the finite-element mesh, (2) velocity vector plots, (3) contour plots of pressure, solute concentration, temperature, or saturation, and (4) a finite-element interpolator for gridding data prior to contouring. SUTRA-PLOT is written in FORTRAN 77 on a PRIME 750 computer system, and requires Version 9.0 or higher of the DISSPLA graphics library. The program requires two input files: the SUTRA input data list and the SUTRA simulation output listing. The program is menu driven, and specifications for individual types of plots are entered and may be edited interactively. Installation instructions, a source code listing, and a description of the computer code are given. Six examples of plotting applications are used to demonstrate various features of the plotting program. (Author's abstract)

  15. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  16. On the coalescence-dispersion modeling of turbulent molecular mixing

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Kosaly, George

    1987-01-01

    The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and of Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain by various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. The infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite-rate chemistry calculations are compared to study further a recent result of Hsieh and O'Brien, who found that higher concentration moments are not sensitive to chemistry.
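
    Curl's model, one of the two limiting C/D closures mentioned above, has a very short Monte Carlo implementation: random particle pairs coalesce to their common mean and then disperse.

```python
# Curl's coalescence-dispersion model: each step, randomly paired
# particles exchange scalar and leave with the pair average, relaxing
# the scalar PDF toward homogeneity.
import numpy as np

rng = np.random.default_rng(4)
phi = rng.choice([0.0, 1.0], size=10000)      # two unmixed feed streams

def curl_step(phi, n_pairs):
    idx = rng.permutation(phi.size)[:2 * n_pairs].reshape(-1, 2)
    pair_mean = 0.5 * (phi[idx[:, 0]] + phi[idx[:, 1]])
    phi[idx[:, 0]] = pair_mean
    phi[idx[:, 1]] = pair_mean
    return phi

for step in range(6):
    print(f"step {step}: variance {phi.var():.4f}")
    phi = curl_step(phi, n_pairs=2000)
```

    The Dopazo-O'Brien limit, by contrast, deterministically relaxes every particle toward the mean, which is the other bracket on the C/D family's behavior.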

  17. Building Models in the Classroom: Taking Advantage of Sophisticated Geomorphic Numerical Tools Using a Simple Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.

    2014-12-01

    Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphical user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university levels. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.

  18. Computer modeling of jet mixing in INEL waste tanks

    SciTech Connect

    Meyer, P.A.

    1994-01-01

    The objective of this study is to examine the feasibility of using submerged jet mixing pumps to mobilize and suspend settled sludge materials in INEL High Level Radioactive Waste Tanks. Scenarios include removing the heel (a shallow liquid and sludge layer remaining after tank emptying processes) and mobilizing and suspending solids in full or partially full tanks. The approach used was to (1) briefly review jet mixing theory, (2) review the erosion literature in order to identify and estimate important sludge characterization parameters, (3) perform computer modeling of submerged liquid mixing jets in INEL tank geometries, (4) develop analytical models from which pump operating conditions and mixing times can be estimated, and (5) analyze model results to determine the overall feasibility of using jet mixing pumps and make design recommendations.

  19. Nonlinear diffusion model for Rayleigh-Taylor mixing.

    PubMed

    Boffetta, G; De Lillo, F; Musacchio, S

    2010-01-22

    The complex evolution of turbulent mixing in Rayleigh-Taylor convection is studied in terms of eddy diffusivity models for the mean temperature profile. It is found that a nonlinear model, derived within the general framework of Prandtl mixing theory, reproduces accurately the evolution of turbulent profiles obtained from numerical simulations. Our model allows us to give very precise predictions for the turbulent heat flux and for the Nusselt number in the ultimate state regime of thermal convection.
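
    The structure of such an eddy-diffusivity description can be sketched as a one-dimensional nonlinear diffusion solver; the specific gradient-dependent closure below is an illustrative Prandtl-type choice, not necessarily the one derived in the cited work, and all parameter values are invented.

```python
# Evolve a mean temperature profile across a Rayleigh-Taylor mixing
# layer with a gradient-dependent eddy diffusivity
#   D = sqrt(A * g * l**3 * |dT/dz|)   (illustrative closure).
import numpy as np

nz, L = 200, 1.0
z = np.linspace(-L / 2, L / 2, nz)
dz = z[1] - z[0]
T = np.where(z < 0, -0.5, 0.5)            # unstable two-layer profile
A, g, ell = 0.5, 9.8, 0.05                # Atwood number, gravity, mixing length

dt = 1e-5                                 # small enough for stability
for _ in range(20000):
    grad = np.diff(T) / dz                        # gradient at interfaces
    D = np.sqrt(A * g * ell**3 * np.abs(grad))    # nonlinear diffusivity
    flux = -D * grad
    T[1:-1] -= dt * np.diff(flux) / dz            # conservative update
print("mixing-layer width ~", round((np.abs(T) < 0.4).sum() * dz, 3))
```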

  20. Development of two mix model postprocessors for the investigation of shell mix in indirect drive implosion cores

    SciTech Connect

    Welser-Sherrill, L.; Mancini, R. C.; Haynes, D. A.; Haan, S. W.; Koch, J. A.; Izumi, N.; Tommasini, R.; Golovkin, I. E.; MacFarlane, J. J.; Radha, P. B.; Delettrez, J. A.; Regan, S. P.; Smalyuk, V. A.

    2007-07-15

    The presence of shell mix in inertial confinement fusion implosion cores is an important characteristic. Mixing in this experimental regime is primarily due to hydrodynamic instabilities, such as Rayleigh-Taylor and Richtmyer-Meshkov, which can affect implosion dynamics. Two independent theoretical mix models, Youngs' model and the Haan saturation model, were used to estimate the level of Rayleigh-Taylor mixing in a series of indirect drive experiments. The models were used to predict the radial width of the region containing mixed fuel and shell materials. The results for Rayleigh-Taylor mixing provided by Youngs' model are considered to be a lower bound for the mix width, while those generated by Haan's model incorporate more experimental characteristics and consequently have larger mix widths. These results are compared with an independent experimental analysis, which infers a larger mix width based on all instabilities and effects captured in the experimental data.

  1. Development of a Medicaid Behavioral Health Case-Mix Model

    ERIC Educational Resources Information Center

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  2. Diagnostic tools for mixing models of stream water chemistry

    USGS Publications Warehouse

    Hooper, R.P.

    2003-01-01

    Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end-members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end-members, an extension of the mathematics of mixing models is presented that assesses the "fit" of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end-members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end-members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
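
    The rank and lack-of-fit diagnostics described here can be sketched with an SVD of standardized chemistry data; the stream data below are synthetic mixtures of two hypothetical end-members plus noise.

```python
# Project standardized stream chemistry onto a low-dimensional mixing
# subspace and inspect the residuals, in the spirit of the diagnostics
# described above.
import numpy as np

rng = np.random.default_rng(5)
end_members = np.array([[200.0, 40.0, 10.0,  5.0],     # solute concentrations
                        [ 30.0,  5.0, 60.0, 25.0]])    # of two end-members
frac = rng.uniform(0, 1, size=(300, 1))                # mixing fractions
X = frac @ end_members[:1] + (1 - frac) @ end_members[1:]
X += rng.normal(0, 1.0, X.shape)                       # analytical noise

Xs = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize solutes
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
print("singular values:", s.round(1))    # one dominant value => 2 end-members

k = 1                                    # dimension of the mixing subspace
fit = U[:, :k] * s[:k] @ Vt[:k]
rmse = np.sqrt(((Xs - fit) ** 2).mean(axis=0))
print("residual RMSE by solute:", rmse.round(3))   # structure => lack of fit
```

    Conservative two-end-member mixing lives on a line, so after centering, one dominant singular value signals rank consistency; structured residuals flag solutes that violate the mixing assumptions.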

  3. Kinetic mixing effect in the 3-3-1-1 model

    NASA Astrophysics Data System (ADS)

    Dong, P. V.; Si, D. T.

    2016-06-01

    We show that the mixing effect of the neutral gauge bosons in the 3-3-1-1 model comes from two sources. The first one is due to the 3-3-1-1 gauge symmetry breaking as usual, whereas the second one results from the kinetic mixing between the gauge bosons of the U(1)_X and U(1)_N groups, which are used to determine the electric charge and baryon minus lepton numbers, respectively. Such mixings modify the ρ parameter and the known couplings of Z with fermions. The constraints that arise from flavor-changing neutral currents due to the gauge boson mixings and nonuniversal fermion generations are also given.

  4. Analysis of Compressible Mixing Layers Using Dilatational Covariances Model

    NASA Technical Reports Server (NTRS)

    Thangam, S.; Zhou, Y.; Ristorcelli, J. R.

    1996-01-01

    Compressible mixing layers are analyzed using a dilatational covariances model based on a pseudo-sound constitutive relation. The calculations are used to evaluate the different physical phenomena affecting compressible mixing layers. The rate of growth of the mixing layer is retarded by both the compressible dissipation and the pressure-dilatational covariances. The pressure-dilatational covariance, essentially a nonequilibrium effect, reduces the amount of excess production over dissipation available for the turbulence energy growth. The pseudo-sound model also includes a history-dependent portion; this is also investigated. All constants in the model used in these computations are predicted by the theory.

  5. Shell model of optimal passive-scalar mixing

    NASA Astrophysics Data System (ADS)

    Miles, Christopher; Doering, Charles

    2015-11-01

    Optimal mixing is significant to process engineering within industries such as food, chemical, pharmaceutical, and petrochemical. An important question in this field is "How should one stir to create a homogeneous mixture while being energetically efficient?" To answer this question, we consider an initially unmixed scalar field representing some concentration within a fluid on a periodic domain. This passive-scalar field is advected by the velocity field, our control variable, constrained by a physical quantity such as energy or enstrophy. We consider two objectives: local-in-time (LIT) optimization (what will maximize the mixing rate now?) and global-in-time (GIT) optimization (what will maximize mixing at the end time?). Throughout this work we use the H⁻¹ mix-norm to measure mixing. To gain a better understanding, we provide a simplified mixing model by using a shell model of passive-scalar advection. LIT optimization in this shell model gives perfect mixing in finite time for the energy-constrained case and exponential decay to the perfectly mixed state for the enstrophy-constrained case. Although we only enforce that the time-average energy (or enstrophy) equals a chosen value in GIT optimization, interestingly, the optimal control keeps this value constant over time.

  6. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The US Environmental Protection Agency has a history of developing plume models and providing technical assistance. The Visual Plumes model (VP) is a recent addition to the public-domain models available on the EPA Center for Exposure Assessment Modeling (CEAM) web page. The Wind...

  7. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on multi-particle interaction in a finite volume (the mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large for predicting a value of the molecular diffusion term positively correlated to the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important to the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated to the LES.

  8. Testing metrics of mixing using a chaotic advection model

    NASA Astrophysics Data System (ADS)

    Grahn, J.; McDonald, A. J.

    2012-04-01

    This study describes an evaluation of different dynamical measures and their ability to diagnose horizontal transport and mixing in atmospheric flows. This quantification can then be used to select optimal measures which can be applied to satellite and re-analysis data to identify likely regions where the indirect effect of Energetic Particle Precipitation (EPP) is important. As a "test bench" for mixing measures, a two-dimensional idealized atmospheric model has been developed (Pierrehumbert et al., 1992; Shuckburgh et al., 2003). It is completely defined by a set of only five parameters. Although it is an oversimplification of real atmospheric flows, it exhibits the main dynamical characteristics of the stratosphere near the polar vortex. At the same time, its simplicity gives us the opportunity to make detailed investigations of the quality of the mixing measures. By using this analytical model with a Lagrangian trajectory model we can examine the impact of the flow on the distribution of any trace gas. We have chosen to examine two mixing measures, namely finite-time Lyapunov exponents (FTLE) and the Renyi entropy (RE). The former is a numerical realization of the Lyapunov exponent (Wolf et al., 1984), a measure of the amount of separation of nearby trajectories of a dynamical system. The FTLE has been used in studies before as a measure of mixing (e.g., Pierrehumbert et al., 1992; Shuckburgh et al., 2003; Garny et al., 2007). The Renyi entropy is a measure originating from information theory and has also been studied before in the context of atmospheric mixing (Krützmann et al., 2008). Initial analysis seems to show a relatively strong anti-correlation between these mixing measures. In particular, high FTLE values (which relate to strongly divergent regions) identify mixing barriers and are generally linked to low values of RE. Results from an analysis of a range of model realizations with varying amounts of prescribed mixing will be performed to robustly quantify the
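
    An FTLE estimate of the kind used here reduces to integrating initially close tracers and measuring their separation growth; the double-gyre flow below is a standard illustrative test field, not the five-parameter model of the study.

```python
# Finite-time Lyapunov exponent from two nearby trajectories in the
# time-dependent double-gyre flow.
import numpy as np

A, eps, om = 0.1, 0.25, 2 * np.pi / 10

def velocity(x, y, t):
    f = eps * np.sin(om * t) * x**2 + (1 - 2 * eps * np.sin(om * t)) * x
    dfdx = 2 * eps * np.sin(om * t) * x + (1 - 2 * eps * np.sin(om * t))
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(x, y, T=15.0, dt=0.01):
    for n in range(int(T / dt)):            # simple Euler integration
        u, v = velocity(x, y, n * dt)
        x, y = x + dt * u, y + dt * v
    return x, y

d0 = 1e-6
xa, ya = advect(1.0, 0.5)
xb, yb = advect(1.0 + d0, 0.5)
dT = np.hypot(xb - xa, yb - ya)
print("FTLE:", np.log(dT / d0) / 15.0)      # (1/T) * ln(d_T / d_0)
```

    Mapping this exponent over a grid of initial conditions produces the FTLE fields whose ridges are interpreted as transport and mixing barriers.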

  9. Agility and mixed-model furniture production

    NASA Astrophysics Data System (ADS)

    Yao, Andrew C.

    2000-10-01

    The manufacture of upholstered furniture provides an excellent opportunity to analyze the effect of a comprehensive communication system on classical production management functions. The objective of the research is to study the scheduling heuristics that embrace the concepts inherent in MRP, JIT and TQM while recognizing the need for agility in a somewhat complex and demanding environment. An on-line, real-time data capture system provides the status and location of production lots, components, subassemblies for schedule control. Current inventory status of raw material and purchased items are required in order to develop and adhere to schedules. For the large variety of styles and fabrics customers may order, the communication system must provide timely, accurate and comprehensive information for intelligent decisions with respect to the product mix and production resources.

  10. SutraPlot, a graphical post-processor for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Souza, W.R.

    1999-01-01

    This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.

  11. Sensitivity of the urban airshed model to mixing height profiles

    SciTech Connect

    Rao, S.T.; Sistla, G.; Ku, J.Y.; Zhou, N.; Hao, W.

    1994-12-31

    The United States Environmental Protection Agency (USEPA) has recommended the use of the Urban Airshed Model (UAM), a grid-based photochemical model, for regulatory applications. One of the important parameters in applications of the UAM is the height of the mixed layer or the diffusion break. In this study, we examine the sensitivity of the UAM-predicted ozone concentrations to (a) a spatially invariant diurnal mixing height profile, and (b) a spatially varying diurnal mixing height profile for a high-ozone episode of July 1988 for the New York Airshed. The 1985/88 emissions inventory used in the EPA's Regional Oxidant Modeling simulations has been regridded for this study. Preliminary results suggest that the spatially varying case yields higher peak ozone concentrations compared to the spatially invariant mixing height simulation, with differences in the peak ozone ranging from a few ppb to about 40 ppb for the days simulated. These differences are attributed to the differences in the shape of the mixing height profiles and the rate of growth during the morning hours when peak emissions are injected into the atmosphere. Examination of the impact of emissions reductions associated with these two mixing height profiles indicates that NOx-focused controls provide a greater change in the predicted ozone peak under spatially invariant mixing heights than under the spatially varying mixing height profile. On the other hand, VOC-focused controls provide a greater change in the predicted peak ozone levels under spatially varying mixing heights than under the spatially invariant mixing height profile.

  12. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  13. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  14. A Novel Graphical User Interface for High-Efficacy Modeling of Human Perceptual Similarity Opinions

    SciTech Connect

    Kress, James M; Xu, Songhua; Tourassi, Georgia

    2013-01-01

    We present a novel graphical user interface (GUI) that facilitates high-efficacy collection of perceptual similarity opinions of a user in an effective and intuitive manner. The GUI is based on a hybrid mechanism that combines ranking and rating. Namely, it presents a base image for rating its similarity to seven peripheral images that are displayed simultaneously following a circular layout. The user is asked to report the base image's pairwise similarity to each peripheral image on a fixed scale while preserving the relative ranking among all peripheral images. The collected data are then used to predict the user's subjective opinions regarding the perceptual similarity of images. We tested this new approach against two methods commonly used in perceptual similarity studies: (1) a ranking method that presents triplets of images for selecting the image pair with the highest internal similarity and (2) a rating method that presents pairs of images for rating their relative similarity on a fixed scale. We aimed to determine which data collection method was the most time efficient and effective for predicting a user's perceptual opinions regarding the similarity of mammographic masses. Our study was conducted with eight individuals. By using the proposed GUI, we were able to derive individual user profiles that were 41.4% to 46.9% more accurate than those derived with the other two data collection GUIs. The accuracy improvement was statistically significant.

  15. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to predict the global erosion phenomena accurately from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using the Newtonian and the non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion, to predict the onset of yielding of the sediment surface, and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
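
    One concrete piece of such a constitutive model can be sketched directly. Below is a minimal implementation of the Papanastasiou-regularised Herschel-Bulkley effective viscosity (generic code, not the paper's; the consistency k, index n, yield stress tau_y and regularisation exponent m are hypothetical values that would come from sediment characterisation).

        import numpy as np

        def hbp_effective_viscosity(gamma_dot, k=0.1, n=1.2, tau_y=10.0, m=100.0):
            # Herschel-Bulkley-Papanastasiou effective viscosity:
            #   mu_eff = k*|gd|**(n-1) + tau_y*(1 - exp(-m*|gd|))/|gd|
            # The exponential factor regularises the yield-stress term,
            # which tends to tau_y*m instead of diverging as |gd| -> 0.
            gd = np.maximum(np.abs(gamma_dot), 1e-12)
            return k * gd ** (n - 1.0) + tau_y * (1.0 - np.exp(-m * gd)) / gd

        # Effective viscosity over a range of shear rates (1/s):
        print(hbp_effective_viscosity(np.array([1e-6, 0.1, 1.0, 10.0])))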

  16. Graphic engine resource management

    NASA Astrophysics Data System (ADS)

    Bautin, Mikhail; Dwarakinath, Ashok; Chiueh, Tzi-cker

    2008-01-01

    Modern consumer-grade 3D graphic cards boast computation and memory resources that can easily rival or even exceed those of standard desktop PCs. Although these cards are mainly designed for 3D gaming applications, their enormous computational power has attracted developers to port an increasing number of scientific computation programs to these cards, including matrix computation, collision detection, cryptography, database sorting, etc. As more and more applications run on 3D graphic cards, there is a need to allocate the computation and memory resources on these cards among the sharing applications fairly and efficiently. In this paper, we describe the design, implementation and evaluation of a Graphic Processing Unit (GPU) scheduler based on Deficit Round Robin scheduling that successfully allocates to every process an equal share of the GPU time regardless of its demand. This scheduler, called GERM, estimates the execution time of each GPU command group based on dynamically collected statistics, and controls each process's GPU command production rate through its CPU scheduling priority. Measurements on the first GERM prototype show that this approach can keep the maximal difference in GPU time consumption among concurrent GPU processes consistently below 5% for a variety of application mixes.
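
    GERM's sources are not part of this record, so the following is only a minimal sketch of the scheduling idea it names, Deficit Round Robin: each backlogged queue earns a fixed quantum of credit per round and may dispatch a command only when its accumulated deficit covers the command's cost, which equalises service across processes with unequal per-command costs. Queue contents and costs here are hypothetical.

        from collections import deque

        def drr_schedule(queues, quantum, rounds):
            # queues: list of deques of (command, cost) pairs.
            deficits = [0] * len(queues)
            order = []
            for _ in range(rounds):
                for i, q in enumerate(queues):
                    if not q:
                        deficits[i] = 0      # idle queues accrue no credit
                        continue
                    deficits[i] += quantum
                    while q and q[0][1] <= deficits[i]:
                        cmd, cost = q.popleft()
                        deficits[i] -= cost
                        order.append(cmd)
            return order

        # Cheap commands from process 1, expensive ones from process 2:
        q1 = deque(("p1-cmd%d" % i, 30) for i in range(5))
        q2 = deque(("p2-cmd%d" % i, 70) for i in range(5))
        print(drr_schedule([q1, q2], quantum=50, rounds=6))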

  17. Masses and mixings in a grand unified toy model

    NASA Astrophysics Data System (ADS)

    McKeen, David; Rosner, Jonathan L.; Thalapillil, Arun M.

    2007-10-01

    The generation of the fermion mass hierarchy in the standard model of particle physics is a long-standing puzzle. The recent discoveries from neutrino physics suggest that the mixing in the lepton sector is large compared to the quark mixings. Understanding this asymmetry between the quark and lepton mixings is an important aim for particle physics. In this regard, two promising approaches from the theoretical side are grand unified theories and family symmetries. In this paper we try to understand certain general features of grand unified theories with Abelian family symmetries by taking the simplest SU(5) grand unified theory as a prototype. We construct an SU(5) toy model with a U(1)_F ⊗ Z_2' ⊗ Z_2'' ⊗ Z_2''' family symmetry that, in a natural way, reproduces the observed mass hierarchy and mixing matrices to lowest approximation. The mass hierarchy is generated through a Froggatt-Nielsen-type mechanism. One idea that we use in the model is that the quark and charged lepton sectors are hierarchical with small mixing angles, while the light neutrino sector is democratic with larger mixing angles. We also discuss some of the difficulties in incorporating finer details into the model without making further assumptions or adding a large scalar sector.

  18. New mixing angles in the left-right symmetric model

    NASA Astrophysics Data System (ADS)

    Kokado, Akira; Saito, Takesi

    2015-12-01

    In the left-right symmetric model, neutral gauge fields are characterized by three mixing angles θ_12, θ_23, θ_13 between the three gauge fields B_μ, W^3_Lμ, W^3_Rμ, which produce the mass eigenstates A_μ, Z_μ, Z'_μ when G = SU(2)_L × SU(2)_R × U(1)_{B-L} × D is spontaneously broken down to U(1)_em. From these mixing angles we find a new mixing angle θ', which corresponds to the Weinberg angle θ_W of the standard model with its SU(2)_L × U(1)_Y gauge symmetry. It is then shown that any mixing angle θ_ij can be expressed in terms of ε and θ', where ε = g_L/g_R is the ratio of the running left-right gauge coupling strengths. We observe that the light gauge bosons are described by θ' only, whereas the heavy gauge bosons are described by the two parameters ε and θ'.

  19. Comparison between the SIMPLE and ENERGY mixing models

    SciTech Connect

    Burns, K.J.; Todreas, N.E.

    1980-07-01

    The SIMPLE and ENERGY mixing models were compared in order to investigate the limitations of SIMPLE's analytically formulated mixing parameter, relative to the experimentally calibrated ENERGY mixing parameters. For interior subchannels, it was shown that when the SIMPLE and ENERGY parameters are reduced to a common form, there is good agreement between the two models for a typical fuel geometry. However, large discrepancies exist for typical blanket (lower P/D) geometries. Furthermore, the discrepancies between the mixing parameters result in significant differences in terms of the temperature profiles generated by the ENERGY code utilizing these mixing parameters as input. For edge subchannels, the assumptions made in the development of the SIMPLE model were extended to the rectangular edge subchannel geometry used in ENERGY. The resulting effective eddy diffusivities (used by the ENERGY code) associated with the SIMPLE model are again closest to those of the ENERGY model for the fuel assembly geometry. Finally, the SIMPLE model's neglect of a net swirl effect in the edge region is most limiting for assemblies exhibiting relatively large radial power skews.

  20. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  1. Generalized Dynamic Factor Models for Mixed-Measurement Time Series

    PubMed Central

    Cui, Kai; Dunson, David B.

    2013-01-01

    In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis-Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulation studies and applied them to the analysis of intertwined credit and recovery risk for Moody's rated firms from 1982 to 2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133

  2. Linear equality constraints in the general linear mixed model.

    PubMed

    Edwards, L J; Stewart, P W; Muller, K E; Helms, R W

    2001-12-01

    Scientists may wish to analyze correlated outcome data with constraints among the responses. For example, piecewise linear regression in a longitudinal data analysis can require use of a general linear mixed model combined with linear parameter constraints. Although such methods are well developed for standard univariate models, there are no general results that allow a data analyst to specify a mixed model equation in conjunction with a set of constraints on the parameters. We resolve the difficulty by precisely describing conditions that allow specifying linear parameter constraints that ensure the validity of estimates and tests in a general linear mixed model. The recommended approach requires only straightforward and noniterative calculations to implement. We illustrate the convenience and advantages of the methods with a comparison of cognitive developmental patterns in a study of individuals from infancy to early adulthood for children from low-income families.
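
    The paper's contribution is the set of conditions under which such constraints keep estimates and tests valid in the mixed model. Purely as an illustrative device (not the authors' method), the standard way to impose a linear constraint C b = d on a fit is to reparameterise it away, as below; the piecewise-line example and its coefficients are hypothetical.

        import numpy as np
        from scipy.linalg import null_space

        def constrained_lstsq(X, y, C, d):
            # Impose C @ b = d by writing b = b_p + N @ t, where b_p is a
            # particular solution and the columns of N span null(C); the
            # reduced parameter t is then estimated without constraints.
            b_p, *_ = np.linalg.lstsq(C, d, rcond=None)
            N = null_space(C)                        # C @ N == 0
            t, *_ = np.linalg.lstsq(X @ N, y - X @ b_p, rcond=None)
            return b_p + N @ t

        # Example: coefficients b = [a1, s1, a2, s2] of two line segments
        # forced to meet at the knot x = 1 (continuity: a1 + s1 = a2 + s2).
        C = np.array([[1.0, 1.0, -1.0, -1.0]])
        d = np.array([0.0])
        rng = np.random.default_rng(0)
        x = rng.uniform(0, 2, 50)
        X = np.column_stack([(x < 1), (x < 1) * x, (x >= 1), (x >= 1) * x])
        y = np.where(x < 1, 1 + 2 * x, 2 + x) + rng.normal(0, 0.1, 50)
        print(constrained_lstsq(X, y, C, d))   # approx. [1, 2, 2, 1]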

  3. Methodological uncertainty in resource mixing models for generalist fishes.

    PubMed

    Galván, D E; Sweeting, C J; Polunin, N V C

    2012-08-01

    Carbon and nitrogen stable isotope ratios are used to assess diet composition by determining bounds for the relative contributions of different prey to a predator's diet. This approach is predicated on the assumption that the isotope ratios of predator tissues are similar to those of the dominant food sources after accounting for trophic discrimination (Δ^xX), and is formulated as linear mixing models based on mass balance equations. However, Δ^xX is species- and tissue-specific and may be affected by factors such as diet quality and quantity. Of the different methods proposed to solve the mass balance equations, some assume Δ^xX to take exact values whilst others (based on Bayesian statistics) incorporate its variability and inherent uncertainty. Using field data from omnivorous reef fishes, our study illustrates how uncertainty may be taken into account in non-Bayesian models. We also illustrate how dietary interpretation is a function of both the absolute Δ^xX and its associated uncertainty in both Bayesian and non-Bayesian isotope mixing models. Finally, the collated literature illustrates that the uncertainty placed around Δ^xX is often too restricted. Together, these data suggest that the high sensitivity of mixing models to variation in trophic discrimination is a consequence of inappropriately constrained uncertainty on a highly variable Δ^xX. This study thus provides guidance on the interpretation of existing published mixing model results and on robust analysis of new resource mixing scenarios.
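
    The mass balance formulation is easy to make concrete. The sketch below solves the standard dual-isotope (δ13C, δ15N), three-source linear mixing model after shifting each source by a fixed trophic discrimination offset; all signatures here are hypothetical, and a Bayesian treatment would instead place distributions on the discrimination factors.

        import numpy as np

        def three_source_mixing(mix, sources, tdf):
            # Mass balance: sum_i f_i*(source_i + tdf) = mix, sum_i f_i = 1.
            # mix: (dC, dN) of the consumer; sources: 3x2 source signatures;
            # tdf: (dC, dN) trophic discrimination offset.
            S = np.asarray(sources, float) + np.asarray(tdf, float)
            A = np.vstack([S.T, np.ones(3)])     # 2 isotope rows + 1 sum row
            b = np.append(np.asarray(mix, float), 1.0)
            return np.linalg.solve(A, b)         # source proportions f

        # Hypothetical signatures (per mil):
        f = three_source_mixing(mix=(-20.2, 10.0),
                                sources=[(-26, 5), (-18, 7), (-14, 10)],
                                tdf=(1.0, 3.4))
        print(f)   # -> [0.5, 0.3, 0.2]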

  4. A graphical systems model to facilitate hypothesis-driven ecotoxicogenomics research on the teleost brain-pituitary-gonadal axis

    SciTech Connect

    Villeneuve, Daniel L.; Larkin, Patrick; Knoebl, Iris; Miracle, Ann L.; Kahl, Michael D.; Jensen, Kathleen M.; Makynen, Elizabeth A.; Durhan, Elizabeth J.; Carter, Barbara J.; Denslow, Nancy D.; Ankley, Gerald T.

    2007-01-01

    Conceptual or graphical systems models are powerful tools that can help facilitate hypothesis-based ecotoxicogenomic research and aid mechanistic interpretation of toxicogenomic results. This paper presents a novel conceptual model of the teleost brain-pituitary-gonadal (BPG) axis designed to aid ecotoxicogenomics research on endocrine-disrupting chemicals using small fish models. Application of the model to toxicogenomics research was illustrated in the context of a recent study that examined the effects of the competitive aromatase inhibitor, fadrozole, on mRNA transcript abundance in gonad, brain, and liver tissue of exposed fathead minnows using a novel fathead minnow oligonucleotide microarray and quantitative real-time polymerase chain reaction. Changes in transcript abundance observed in the ovaries of females exposed to 6.3 µg fadrozole/L for 7 d were functionally consistent with fadrozole's mechanism of action and the expected compensatory responses of the BPG axis to fadrozole's effects. Furthermore, array results helped identify additional elements (genes/proteins) that could be included in the model to potentially increase its predictive capacity. However, model-based predictions did not readily explain the lack of differential mRNA expression (relative to controls) observed in the ovary of females exposed to 60 µg fadrozole/L for 7 d. Both the utility and limitations of conceptual systems models as tools for hypothesis-driven ecotoxicogenomics research are discussed.

  5. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  6. Analyzing Mixed-Dyadic Data Using Structural Equation Models

    ERIC Educational Resources Information Center

    Peugh, James L.; DiLillo, David; Panuzio, Jillian

    2013-01-01

    Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…

  7. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    ERIC Educational Resources Information Center

    Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…

  8. Model for compound formation during ion-beam mixing

    SciTech Connect

    Desimoni, J.; Traverse, A.

    1993-11-01

    We propose an ion-beam-mixing model that accounts for compound formation at a boundary between two materials during ion irradiation. It is based on Fick's law together with a chemical driving force in order to simulate the chemical reaction at the boundary. The behavior of the squared thickness of the mixed layer, X², with the irradiation fluence, Φ, has been found in several mixing experiments to be either quadratic (X² ∝ Φ²) or linear (X² ∝ Φ), a result which is qualitatively reproduced. Depending on the fluence range, compound formation or diffusion is the limiting process of the mixing kinetics. A criterion is established in terms of the ratio of the diffusion coefficient D due to irradiation to the square of the chemical reaction rate, which allows us to predict quadratic or linear behavior. When diffusion is the limiting process, D is enhanced by a factor which accounts for the formation of a compound in the mixed layer. Good agreement is found between the calculated mixing rates and the data taken from mixing experiments in metal/Si bilayers.

  9. Improved Estimation of Human Lipoprotein Kinetics with Mixed Effects Models

    PubMed Central

    Berglund, Martin; Adiels, Martin; Taskinen, Marja-Riitta; Borén, Jan; Wennberg, Bernt

    2015-01-01

    Context: Mathematical models may help the analysis of biological systems by providing estimates of otherwise unmeasurable quantities such as concentrations and fluxes. The variability in such systems makes it difficult to translate individual characteristics to group behavior. Mixed effects models offer a tool to simultaneously assess individual and population behavior from experimental data. Lipoproteins and plasma lipids are key mediators of cardiovascular disease in metabolic disorders such as diabetes mellitus type 2. By the use of mathematical models and tracer experiments, fluxes and production rates of lipoproteins may be estimated. Results: We developed a mixed effects model to study lipoprotein kinetics in a data set of 15 healthy individuals and 15 patients with type 2 diabetes. We compare the traditional and the mixed effects approach in terms of group estimates at various sample and data set sizes. Conclusion: We conclude that the mixed effects approach provided better estimates using the full data set as well as with both sparse and truncated data sets. Sample size estimates showed that, to compare lipoprotein secretion, the mixed effects approach needed almost half the sample size of the traditional method. PMID:26422201

  10. Formation of alumina-ceria mixed oxide in model systems

    NASA Astrophysics Data System (ADS)

    Skála, Tomáš; Tsud, Nataliya; Prince, Kevin C.; Matolín, Vladimír

    2011-02-01

    The interaction of aluminium with cerium oxide was studied by photoelectron spectroscopy of Al/CeO2(1 1 1) and CeO2/Al(1 1 1) model systems. It was found in both cases that metallic aluminium was immediately oxidized, CeO2 was partially reduced, and a mixed oxide with cerium present as Ce3+ was formed. The compound is probably cerium aluminate CeAlO3 mixed with Al2O3 or Ce2O3. In both cases the intermixing was limited by the diffusion of aluminium into ceria. The excess of deposited material above this limit formed AlOx and CeO2 overlayers on top of the mixed oxide + aluminate/CeO2 and mixed oxide + aluminate/Al films, respectively.

  11. Discrete symmetries and model-independent patterns of lepton mixing

    NASA Astrophysics Data System (ADS)

    Hernandez, D.; Smirnov, A. Yu.

    2013-03-01

    In the context of discrete flavor symmetries, we elaborate a method that allows one to obtain relations between the mixing parameters in a model-independent way. Under very general conditions, we show that flavor groups of the von Dyck type, which are not necessarily finite, determine the absolute values of the entries of one column of the mixing matrix. We apply our formalism to finite subgroups of the infinite von Dyck groups, such as the modular groups, and find cases that yield an excellent agreement with the best fit values for the mixing angles. We explore the Klein group as the residual symmetry of the neutrino sector and explain the permutation property that appears between the elements of the mixing matrix in this case.

  12. Modeling and Analysis of Mixed Synchronous/Asynchronous Systems

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan

    2012-01-01

    Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract but representative test specimen system was created as the system to be modeled.

  13. Fermion masses and mixing in Δ(27) flavor model

    NASA Astrophysics Data System (ADS)

    Abbas, Mohammed; Khalil, Shaaban

    2015-03-01

    An extension of the Standard Model (SM) based on the non-Abelian discrete group Δ(27) is considered. The Δ(27) flavor symmetry is spontaneously broken only by gauge singlet scalar fields; therefore, our model is free from any flavor-changing neutral current (FCNC). We show that the model accounts simultaneously for the observed quark and lepton masses and their mixing. In the quark sector, we find that the up-quark mass matrix is flavor diagonal and the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix arises from the down quarks. In the lepton sector, we show that the charged lepton mass matrix is almost diagonal. We also adopt the type-I seesaw mechanism to generate neutrino masses. A Maki-Nakagawa-Sakata (MNS) mixing matrix that deviates from the tri-bimaximal form, with a correlation between sin θ_13 and sin²θ_23, is illustrated.

  14. Mixed waste treatment model: Basis and analysis

    SciTech Connect

    Palmer, B.A.

    1995-09-01

    The Department of Energy's Programmatic Environmental Impact Statement (PEIS) required treatment system capacities for risk and cost calculation. Los Alamos was tasked with providing these capacities to the PEIS team. This involved understanding the Department of Energy (DOE) Complex waste, making the necessary changes to correct for problems, categorizing the waste for treatment, and determining the treatment system requirements. The treatment system requirements depended on the incoming waste, which varied for each PEIS case. The treatment system requirements also depended on the type of treatment that was desired. Because different groups contributing to the PEIS needed specific types of results, we provided the treatment system requirements in a variety of forms. In total, some 40 data files were created for the TRU cases, and for the MLLW case, there were 105 separate data files. Each data file represents one treatment case consisting of the selected waste from various sites, a selected treatment system, and the reporting requirements for such a case. The treatment system requirements in their most basic form are the treatment process rates for unit operations in the desired treatment system, based on a 10-year working life and 20-year accumulation of the waste. These results were reported in cubic meters and for the MLLW case, in kilograms as well. The treatment system model consisted of unit operations that are linked together. Each unit operation's function depended on the input waste streams, waste matrix, and contaminants. Each unit operation outputs one or more waste streams whose matrix, contaminants, and volume/mass may have changed as a result of the treatment. These output streams are then routed to the appropriate unit operation for additional treatment until the output waste stream meets the treatment requirements for disposal. The total waste for each unit operation was calculated as well as the waste for each matrix treated by the unit.

  15. Computer modeling of ORNL storage tank sludge mobilization and mixing

    SciTech Connect

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate the mixing times required to approach homogeneity of the contents of the tanks.

  16. Spread in model climate sensitivity traced to atmospheric convective mixing.

    PubMed

    Sherwood, Steven C; Bony, Sandrine; Dufresne, Jean-Louis

    2014-01-01

    Equilibrium climate sensitivity refers to the ultimate change in global mean temperature in response to a change in external forcing. Despite decades of research attempting to narrow uncertainties, equilibrium climate sensitivity estimates from climate models still span roughly 1.5 to 5 degrees Celsius for a doubling of atmospheric carbon dioxide concentration, precluding accurate projections of future climate. The spread arises largely from differences in the feedback from low clouds, for reasons not yet understood. Here we show that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half of the variance in climate sensitivity estimated by 43 climate models. The apparent mechanism is that such mixing dehydrates the low-cloud layer at a rate that increases as the climate warms, and this rate of increase depends on the initial mixing strength, linking the mixing to cloud feedback. The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide. This is significantly higher than the currently accepted lower bound of 1.5 degrees, thereby constraining model projections towards relatively severe future warming.

  17. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-04-01

    We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  18. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-01-01

    We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  19. Parallelization and improvements of the generalized born model with a simple sWitching function for modern graphics processors.

    PubMed

    Arthur, Evan J; Brooks, Charles L

    2016-04-15

    Two fundamental challenges of simulating biologically relevant systems are the rapid calculation of the energy of solvation and the trajectory length of a given simulation. The Generalized Born model with a Simple sWitching function (GBSW) addresses these issues by using an efficient approximation of Poisson-Boltzmann (PB) theory to calculate each solute atom's free energy of solvation, the gradient of this potential, and the subsequent forces of solvation without the need for explicit solvent molecules. This study presents a parallel refactoring of the original GBSW algorithm and its implementation on newly available, low cost graphics chips with thousands of processing cores. Depending on the system size and nonbonded force cutoffs, the new GBSW algorithm offers speed increases of between one and two orders of magnitude over previous implementations while maintaining similar levels of accuracy. We find that much of the algorithm scales linearly with an increase of system size, which makes this water model cost effective for solvating large systems. Additionally, we utilize our GPU-accelerated GBSW model to fold the model system chignolin, and in doing so we demonstrate that these speed enhancements now make accessible folding studies of peptides and potentially small proteins.

  20. Statistical basis and outputs of stable isotope mixing models: Comment on Fry (2013)

    EPA Science Inventory

    A recent article by Fry (2013; Mar Ecol Prog Ser 472:1−13) reviewed approaches to solving underdetermined stable isotope mixing systems, and presented a new graphical approach and set of summary statistics for the analysis of such systems. In his review, Fry (2013) mis-characteri...

  1. Usefulness of Bayesian graphical models for early prediction of disease progression in multiple sclerosis.

    PubMed

    Bergamaschi, R; Romani, A; Tonietti, S; Citterio, A; Berzuini, C; Cosi, V

    2000-01-01

    Previous studies of possible prognostic indicators for multiple sclerosis have been based on the "classic" Cox proportional hazards regression model, as well as on equivalent or simpler approaches, restricting their attention to variables measured either at disease onset or at a few points during follow-up. The aim of our study was to analyse the risk of reaching secondary progression in MS patients with a relapsing-remitting initial course, using two different statistical approaches: a Cox proportional-hazards model and a Bayesian latent-variable model with Markov chain Monte Carlo methods of computation. In comparison with a standard statistical approach, our model is advantageous because, by exploiting all the information gleaned from the patient as it gradually becomes available, it is capable of detecting even small prognostic effects. PMID:11205356

  2. Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models

    NASA Astrophysics Data System (ADS)

    Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

    In this paper, we propose a set of diagrams to visualize software process reference models (PRMs). The diagrams, called dimods, are a combination of visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

  3. Modeling and Simulation of Fluid Mixing Laser Experiments and Supernova

    SciTech Connect

    Glimm, James

    2008-06-24

    The three year plan for this project is to develop novel theories and advanced simulation methods leading to a systematic understanding of turbulent mixing. A primary focus is the comparison of simulation models (both Direct Numerical Simulation and subgrid averaged models) to experiments. The comprehension and reduction of experimental and simulation data are central goals of this proposal. We will model 2D and 3D perturbations of planar interfaces. We will compare these tests with models derived from averaged equations (our own and those of others). As a second focus, we will develop physics based subgrid simulation models of diffusion across an interface, with physical but no numerical mass diffusion. We will conduct analytic studies of mix, in support of these objectives. Advanced issues, including multiple layers and reshock, will be considered.

  4. Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio

    NASA Astrophysics Data System (ADS)

    Hoffmann, Matthew Douglas

    Content-based Music Information Retrieval (MIR) systems seek to automatically extract meaningful information from musical audio signals. This thesis applies new and existing generative probabilistic models to several content-based MIR tasks: timbral similarity estimation, semantic annotation and retrieval, and latent source discovery and separation. In order to estimate how similar two songs sound to one another, we employ a Hierarchical Dirichlet Process (HDP) mixture model to discover a shared representation of the distribution of timbres in each song. Comparing songs under this shared representation yields better query-by-example retrieval quality and scalability than previous approaches. To predict what tags are likely to apply to a song (e.g., "rap," "happy," or "driving music"), we develop the Codeword Bernoulli Average (CBA) model, a simple and fast mixture-of-experts model. Despite its simplicity, CBA performs at least as well as state-of-the-art approaches at automatically annotating songs and finding the songs in a database to which a given tag most applies. Finally, we address the problem of latent source discovery and separation by developing two Bayesian nonparametric models, the Shift-Invariant HDP and Gamma Process NMF. These models allow us to discover what sounds (e.g. bass drums, guitar chords, etc.) are present in a song or set of songs and to isolate or suppress individual sources. These models' ability to decide how many latent sources are necessary to model the data is particularly valuable in this application, since it is impossible to guess a priori how many sounds will appear in a given song or set of songs. Once they have been fit to data, probabilistic models can also be used to drive the synthesis of new musical audio, both for creative purposes and to qualitatively diagnose what information a model does and does not capture. We also adapt the SIHDP model to create new versions of input audio with arbitrary sample sets, for example, to create
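
    The thesis's Gamma Process NMF additionally infers how many latent sources are needed; as a much simpler fixed-rank illustration of the underlying decomposition, the sketch below factors a magnitude spectrogram with the classic Lee-Seung KL-divergence multiplicative updates (generic code, not the thesis's models; the toy data are invented).

        import numpy as np

        def nmf_kl(V, rank, iters=200, eps=1e-9):
            # Factor a nonnegative spectrogram V (freq x time) as W @ H.
            # Columns of W are spectral templates ("latent sources");
            # rows of H are their activations over time.
            rng = np.random.default_rng(0)
            F, T = V.shape
            W = rng.random((F, rank)) + eps
            H = rng.random((rank, T)) + eps
            for _ in range(iters):
                WH = W @ H + eps
                H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
                WH = W @ H + eps
                W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
            return W, H

        # Toy "spectrogram" mixing two templates with random gains:
        rng = np.random.default_rng(1)
        true_W = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)
        V = true_W @ rng.random((2, 50))
        W, H = nmf_kl(V, rank=2)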

  5. Graphic comparison of reserve-growth models for conventional oil and accumulation

    USGS Publications Warehouse

    Klett, T.R.

    2003-01-01

    The U.S. Geological Survey (USGS) periodically assesses crude oil, natural gas, and natural gas liquids resources of the world. The assessment procedure requires estimated recoverable oil and natural gas volumes (field size, cumulative production plus remaining reserves) in discovered fields. Because initial reserves are typically conservative, subsequent estimates increase through time as these fields are developed and produced. The USGS assessment of petroleum resources makes estimates, or forecasts, of the potential additions to reserves in discovered oil and gas fields resulting from field development, and it also estimates the potential fully developed sizes of undiscovered fields. The term "reserve growth" refers to the commonly observed upward adjustment of reserve estimates. Because such additions are related to increases in the total size of a field, the USGS uses field sizes to model reserve growth. Future reserve growth in existing fields is a major component of remaining U.S. oil and natural gas resources and has therefore become a necessary element of U.S. petroleum resource assessments. Past and currently proposed reserve-growth models compared herein aid in the selection of a suitable set of forecast functions to provide an estimate of potential additions to reserves from reserve growth in the ongoing National Oil and Gas Assessment Project (NOGA). Reserve growth is modeled by construction of a curve that represents annual fractional changes of recoverable oil and natural gas volumes (for fields and reservoirs), which provides growth factors. Growth factors are used to calculate forecast functions, which are sets of field- or reservoir-size multipliers. Comparisons of forecast functions were made based on the datasets used to construct the models, field type, modeling method, and length of forecast span. Comparisons were also made between forecast functions based on field-level and reservoir-level growth, and between forecast functions based on older

  6. A Nonlinear Mixed Effects Model for Latent Variables

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.

    2009-01-01

    The nonlinear mixed effects model for continuous repeated measures data has become an increasingly popular and versatile tool for investigating nonlinear longitudinal change in observed variables. In practice, for each individual subject, multiple measurements are obtained on a single response variable over time or condition. This structure can be…

  7. Development of stable isotope mixing models in ecology - Fremantle

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  8. Development of stable isotope mixing models in ecology - Perth

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  9. Development of stable isotope mixing models in ecology - Sydney

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  10. Historical development of stable isotope mixing models in ecology

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  11. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    EPA Science Inventory

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...

  12. Development of stable isotope mixing models in ecology - Dublin

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  13. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS

    EPA Science Inventory

    Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...

  14. The Worm Process for the Ising Model is Rapidly Mixing

    NASA Astrophysics Data System (ADS)

    Collevecchio, Andrea; Garoni, Timothy M.; Hyndman, Timothy; Tokarev, Daniel

    2016-09-01

    We prove rapid mixing of the worm process for the zero-field ferromagnetic Ising model, on all finite connected graphs, and at all temperatures. As a corollary, we obtain a fully-polynomial randomized approximation scheme for the Ising susceptibility, and for a certain restriction of the two-point correlation function.
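
    For context, "rapidly mixing" is meant in the standard total-variation sense: writing P for the transition kernel and π for the stationary (Gibbs) distribution on state space Ω, the mixing time is

        \tau_{\mathrm{mix}}(\varepsilon) = \min\{ t \ge 0 : \max_{x \in \Omega} \| P^t(x, \cdot) - \pi \|_{\mathrm{TV}} \le \varepsilon \},

    and rapid mixing means \tau_{\mathrm{mix}}(1/4) is bounded by a polynomial in the number of vertices of the graph.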

  15. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  16. Assessing and Explaining Differential Item Functioning Using Logistic Mixed Models

    ERIC Educational Resources Information Center

    Van den Noortgate, Wim; De Boeck, Paul

    2005-01-01

    Although differential item functioning (DIF) theory traditionally focuses on the behavior of individual items in two (or a few) specific groups, in educational measurement contexts, it is often plausible to regard the set of items as a random sample from a broader category. This article presents logistic mixed models that can be used to model…

  17. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  18. Mixed Rasch Modeling of the Self-Rating Depression Scale

    ERIC Educational Resources Information Center

    Hong, Sehee; Min, Sae-Young

    2007-01-01

    In this study, mixed Rasch modeling was used on the Self-Rating Depression Scale (SDS), a widely used measure of depression, among a non-Western sample of 618 Korean college students. The results revealed three latent classes and confirmed the unidimensionality of the SDS. In addition, there was a significant effect for gender in terms of class…

  19. A graphical interface based model for wind turbine drive train dynamics

    SciTech Connect

    Manwell, J.F.; McGowan, J.G.; Abdulwahid, U.; Rogers, A.; McNiff, B.

    1996-12-31

    This paper presents a summary of a wind turbine drive train dynamics code that has been under development at the University of Massachusetts, under National Renewable Energy Laboratory (NREL) support. The code is intended to be used to assist in the proper design and selection of drive train components. This work summarizes the development of the equations of motion for the model, and discusses the method of solution. In addition, a number of comparisons with analytical solutions and experimental field data are given. The summary includes conclusions and suggestions for future work on the model. 13 refs., 10 figs.

  20. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.

  1. Unsupervised Estimation of Mouse Sleep Scores and Dynamics Using a Graphical Model of Electrophysiological Measurements.

    PubMed

    Yaghouby, Farid; O'Hara, Bruce F; Sunderam, Sridhar

    2016-06-01

    The proportion, number of bouts, and mean bout duration of different vigilance states (Wake, NREM, REM) are useful indices of dynamics in experimental sleep research. These metrics are estimated by first scoring state, sometimes using an algorithm, based on electrophysiological measurements such as the electroencephalogram (EEG) and electromyogram (EMG), and computing their values from the score sequence. Isolated errors in the scores can lead to large discrepancies in the estimated sleep metrics. But most algorithms score sleep by classifying the state from EEG/EMG features independently in each time epoch without considering the dynamics across epochs, which could provide contextual information. The objective here is to improve estimation of sleep metrics by fitting a probabilistic dynamical model to mouse EEG/EMG data and then predicting the metrics from the model parameters. Hidden Markov models (HMMs) with multivariate Gaussian observations and Markov state transitions were fitted to unlabeled 24-h EEG/EMG feature time series from 20 mice to model transitions between the latent vigilance states; a similar model with unbiased transition probabilities served as a reference. Sleep metrics predicted from the HMM parameters did not deviate significantly from manual estimates except for rapid eye movement sleep (REM) ([Formula: see text]; Wilcoxon signed-rank test). Changes in value from Light to Dark conditions correlated well with manually estimated differences (Spearman's rho 0.43-0.84) except for REM. HMMs also scored vigilance state with over 90% accuracy. HMMs of EEG/EMG features can therefore characterize sleep dynamics from EEG/EMG measurements, a prerequisite for characterizing the effects of perturbation in sleep monitoring and control applications. PMID:27121993
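
    A minimal sketch of the modelling idea, using the third-party hmmlearn package (an assumption; the authors' implementation is not shown): fit a Gaussian-observation HMM to an unlabeled EEG/EMG feature matrix, decode the state sequence, and read off the proportion, bout count, and mean bout duration per latent state.

        import numpy as np
        from hmmlearn import hmm   # assumed installed: pip install hmmlearn

        def fit_sleep_hmm(X, n_states=3):
            # X: (n_epochs, n_features) EEG/EMG feature matrix, e.g. EEG
            # band powers plus EMG RMS per epoch. States are unlabeled.
            model = hmm.GaussianHMM(n_components=n_states,
                                    covariance_type="full",
                                    n_iter=100, random_state=0)
            model.fit(X)
            states = model.predict(X)    # Viterbi-decoded state sequence
            metrics = {}
            for s in range(n_states):
                mask = (states == s).astype(int)
                starts = np.flatnonzero(np.diff(np.r_[0, mask]) == 1)
                ends = np.flatnonzero(np.diff(np.r_[mask, 0]) == -1)
                bouts = ends - starts + 1          # bout lengths in epochs
                metrics[s] = {"proportion": mask.mean(),
                              "n_bouts": len(bouts),
                              "mean_bout": bouts.mean() if len(bouts) else 0.0}
            return model, states, metrics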

  2. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODEs). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is taken to be irrelevant for predicting the spread of a pathogen in that population. Here, we propose an epidemic model based on ODEs to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals, and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. We then employ this model to evaluate the validity of the homogeneous mixing assumption, using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
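
    The paper's exact equations are not reproduced in this record, so the sketch below integrates only an illustrative SIS-type system with an empty-space compartment (hypothetical rates and functional forms, chosen so that s + i + e is conserved), to show how such a model is explored numerically.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, beta=3.0, g=1.0, mu=0.2, a=0.5):
            # s: susceptible, i: infectious, e: empty space (fractions).
            # Infection (beta), cure without immunity (g), death freeing
            # space (mu), birth into empty space (a); s + i + e stays 1.
            s, i, e = y
            return [a * e - beta * s * i + g * i - mu * s,
                    beta * s * i - (g + mu) * i,
                    mu * (s + i) - a * e]

        sol = solve_ivp(rhs, (0.0, 100.0), [0.69, 0.01, 0.30], rtol=1e-8)
        print(sol.y[:, -1])   # approaches an endemic equilibrium (s, i, e)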

  3. Logit-normal mixed model for Indian monsoon precipitation

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-09-01

    Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described, and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. The findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions, and we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMMs and promotes their addition to the gamut of tools for analysis in studying climate phenomena.
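
    The logit-normal mixed model itself is compact enough to state in code: the occurrence probability is a logistic transform of a linear predictor plus a normally distributed random effect. The simulation below is a hypothetical illustration (covariates, effect sizes, and grouping are invented, not the paper's data).

        import numpy as np

        rng = np.random.default_rng(1)
        n_groups, n_obs = 30, 120                 # e.g. years x days
        beta = np.array([-0.5, 1.2])              # fixed intercept and slope
        sigma_u = 0.8                             # random-effect std. dev.

        u = rng.normal(0.0, sigma_u, n_groups)    # random group effects
        x = rng.normal(size=(n_groups, n_obs))    # standardised covariate
        eta = beta[0] + beta[1] * x + u[:, None]  # linear predictor
        p = 1.0 / (1.0 + np.exp(-eta))            # logit-normal probabilities
        rain = rng.binomial(1, p)                 # 0/1 rainfall occurrence
        print(rain.mean(), p.mean())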

  4. Mixed inflaton and spectator field models after Planck

    SciTech Connect

    Enqvist, Kari; Takahashi, Tomo

    2013-10-01

    We investigate the possibility that the primordial perturbation has two sources: the inflaton and a spectator field, which is not dynamically important during inflation but which after inflation can contribute to the curvature perturbation. We derive the constraints on the model by using recent Planck results on the spectral index, tensor-to-scalar ratio and nonlinearity parameters f_NL and τ_NL for the cases with and without specifying the inflaton and spectator models. If one chooses the spectator to be the curvaton with a quadratic potential, non-Gaussianities can be computed and imply restrictions on possible values of the ratio of the spectator-to-inflaton power R. We also consider a mixed curvaton and chaotic inflation model and show that even quartic chaotic inflation is still feasible in the context of mixed models even with Planck data.

  5. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.

  6. Application of large eddy interaction model to a mixing layer

    NASA Technical Reports Server (NTRS)

    Murthy, S. N. B.

    1989-01-01

    The large eddy interaction model (LEIM) is a statistical model of turbulence based on the interaction of selected eddies with the mean flow and all of the eddies in a turbulent shear flow. It can be utilized as the starting point for obtaining physical structures in the flow. The possible application of the LEIM to a mixing layer formed between two parallel, incompressible flows with a small temperature difference is developed by invoking a detailed similarity between the spectra of velocity and temperature.

  7. Modelling of externally mixed particles in the atmosphere

    NASA Astrophysics Data System (ADS)

    Zhu, Shupeng; Sartelet, Karine; Seigneur, Christian

    2014-05-01

    Particles present in the atmosphere have significant impacts on climate as well as on human health. Thus, it is important to accurately simulate and forecast their concentrations. Most commonly used air quality models assume that particles are internally mixed, largely for computational reasons. However, this assumption is disproved by measurements, especially close to sources. In fact, the externally-mixed properties of particles are important for aerosol source identification, radiative effects and particle evolution. In this study, a new size-composition resolved aerosol model is developed. It can solve the aerosol dynamic evolution for external mixtures taking into account the processes of coagulation, condensation and nucleation. Both the size of particles and the mass fraction of each chemical compound are discretized. For a given particle size, particles of different chemical composition may co-exist. Aerosol dynamics is solved in each grid cell by splitting coagulation and condensation/evaporation-nucleation processes. For the condensation/evaporation, surface equilibrium between gas and aerosol is calculated based on ISORROPIA and the newly developed H2O (Hydrophilic/Hydrophobic Organic) Model. Because size and chemical composition sections evolve during condensation/evaporation, concentrations need to be redistributed on fixed sections after condensation/evaporation to be able to use the model in 3 dimensions. This is done based on the numerical scheme HEMEN, which was initially developed for size redistribution. Chemical components can be grouped into several aggregates to reduce computational cost. The 0D model is validated by comparison to results obtained for internally mixed particles and the effect of mixing is investigated for up to 31 species and 4 aggregates. The model will be integrated into the air quality modeling platform POLYPHEMUS to investigate its performance in modeling air quality by comparing with observations during the MEGAPOLI

  8. Learning a Tracking and Estimation Integrated Graphical Model for Human Pose Tracking.

    PubMed

    Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-12-01

    We investigate the tracking of 2-D human poses in a video stream to determine the spatial configuration of body parts in each frame. This is not a trivial task, because people may wear different kinds of clothing and may move very quickly and unpredictably. Pose estimation technology is typically applied, but it ignores the temporal context and cannot provide smooth, reliable tracking results. We therefore develop a tracking and estimation integrated model (TEIM) to fully exploit temporal information by integrating pose estimation with visual tracking. Joint parsing of multiple articulated parts over time is difficult, however, because a full model with edges capturing all pairwise relationships within and between frames is loopy and intractable. Previous models usually resorted to approximate inference, which neither guarantees good results nor keeps the computational cost low. We overcome these problems by exploring the idea of divide and conquer, which decomposes the full model into two much simpler, tractable submodels. In addition, a novel two-step iteration strategy is proposed to efficiently conquer the joint parsing problem. Algorithmically, we design TEIM very carefully so that: 1) it enables pose estimation and visual tracking to compensate for each other to achieve desirable tracking results; 2) it is able to deal with the problem of tracking loss; and 3) it only needs past information and is capable of tracking online. Experiments are conducted on two public data sets in the wild with ground-truth layout annotations, and the experimental results indicate the effectiveness of the proposed TEIM framework.

  9. Overview of the Graphical User Interface for the GERMcode (GCR Event-Based Risk Model)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERMcode calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear energy transfer (LET), range (R), and absorption in tissue-equivalent material for a given charge (Z), mass number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERMcode also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ions and nuclear secondaries are evaluated. The GERMcode accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes, by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERMcode for application to thick-target experiments. The GERMcode provides scientists participating in NSRL experiments with the data needed for the interpretation of their experiments.
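
    One of the quantities listed above, the Poisson distribution of ion hits for a specified cellular area, follows directly from the product of fluence and area. The sketch below is a generic illustration of that calculation, not code from the GERM model; the fluence and area values are hypothetical.

      import math

      def hit_probabilities(fluence_per_cm2, area_um2, kmax=5):
          """P(k hits) for a cell of given area in a beam of given fluence,
          assuming hit counts are Poisson with mean = fluence x area."""
          mean = fluence_per_cm2 * area_um2 * 1e-8  # 1 um^2 = 1e-8 cm^2
          return [math.exp(-mean) * mean**k / math.factorial(k)
                  for k in range(kmax + 1)]

      # A 100 um^2 nucleus in a 1e6 ions/cm^2 beam has a mean of 1 hit.
      print(hit_probabilities(1e6, 100.0))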

  10. Model aerodynamic test results for two variable cycle engine coannular exhaust systems at simulated takeoff and cruise conditions. Comprehensive data report. Volume 3: Graphical data book 1

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.

    1981-01-01

    A graphical presentation of the aerodynamic data acquired during coannular nozzle performance wind tunnel tests is given. The graphical data consist of plots of nozzle gross thrust coefficient, fan nozzle discharge coefficient, and primary nozzle discharge coefficient. Normalized model component static pressure distributions are presented as a function of primary total pressure, fan total pressure, and ambient static pressure for selected operating conditions. In addition, the supersonic cruise configuration data include plots of nozzle efficiency and secondary-to-fan total pressure pumping characteristics. Supersonic and subsonic cruise data are given.

  11. Graphics processing unit accelerated three-dimensional model for the simulation of pulsed low-temperature plasmas

    SciTech Connect

    Fierro, Andrew; Dickens, James; Neuber, Andreas

    2014-12-15

    A 3-dimensional particle-in-cell/Monte Carlo collision simulation that is fully implemented on a graphics processing unit (GPU) is described and used to determine low-temperature plasma characteristics at high reduced electric field, E/n, in nitrogen gas. Details of implementation on the GPU using the NVIDIA Compute Unified Device Architecture framework are discussed with respect to efficient code execution. The software is capable of tracking around 10 × 10^6 particles with dynamic weighting and a total mesh size larger than 10^8 cells. Verification of the simulation is performed by comparing the electron energy distribution function and plasma transport parameters to known Boltzmann equation (BE) solvers. Under the assumption of a uniform electric field and neglecting the build-up of positive ion space charge, the simulation agrees well with the BE solvers. The model is utilized to calculate plasma characteristics of a pulsed, parallel-plate discharge. A photoionization model provides the simulation with additional electrons after the initial seeded electron density has drifted towards the anode. Comparison of the performance benefits between the GPU implementation and a CPU implementation is considered, and a speed-up factor of 13 is obtained for a 3D relaxation Poisson solver. Furthermore, a factor-of-60 speed-up is realized for parallelization of the electron processes.

  12. New models for hyperspectral anomaly detection and un-mixing

    NASA Astrophysics Data System (ADS)

    Bernhardt, M.; Heather, J. P.; Smith, M. I.

    2005-06-01

    It is now established that hyperspectral images of many natural backgrounds have fat-tailed statistics. In spite of this, many of the algorithms used to process them appeal to the multivariate Gaussian model. In this paper we consider biologically motivated generative models that might explain the observed mixtures of vegetation in natural backgrounds. The degree to which these models match the observed fat-tailed distributions is investigated. Having shown how fat-tailed statistics arise naturally from the generative process, the models are put to work in new anomaly detection and un-mixing algorithms. The performance of these algorithms is compared with more traditional approaches.

  13. Identifying Functional Co-activation Patterns in Neuroimaging Studies via Poisson Graphical Models

    PubMed Central

    Xue, Wenqiong; Kang, Jian; Bowman, F. DuBois; Wager, Tor D.; Guo, Jian

    2014-01-01

    Summary Studying the interactions between different brain regions is essential to achieve a more complete understanding of brain function. In this paper, we focus on identifying functional co-activation patterns and undirected functional networks in neuroimaging studies. We build a functional brain network, using a sparse covariance matrix, with elements representing associations between region-level peak activations. We adopt a penalized likelihood approach to impose sparsity on the covariance matrix based on an extended multivariate Poisson model. We obtain penalized maximum likelihood estimates via the expectation-maximization (EM) algorithm and optimize an associated tuning parameter by maximizing the predictive log-likelihood. Permutation tests on the brain co-activation patterns provide region-pair and network-level inference. Simulations suggest that the proposed approach has minimal bias and provides coverage rates close to the nominal 95% for the covariance estimates. Conducting a meta-analysis of 162 functional neuroimaging studies on emotions, our model identifies a functional network that consists of connected regions within the basal ganglia, limbic system, and other emotion-related brain regions. We characterize this network through statistical inference on region-pair connections as well as by graph measures. PMID:25147001
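
    The authors' estimator penalizes the covariance matrix under an extended multivariate Poisson likelihood, for which no off-the-shelf routine exists. As a loose, hypothetical stand-in for the general idea of sparse graph estimation from such data, the sketch below fits a graphical lasso (which penalizes the precision matrix of a Gaussian model) to surrogate count data; it is not the authors' method.

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(0)
      X = rng.poisson(lam=3.0, size=(200, 10)).astype(float)  # surrogate counts

      model = GraphicalLasso(alpha=0.1).fit(X)
      edges = np.abs(model.precision_) > 1e-6  # nonzero entries = graph edges
      print(edges.astype(int))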

  14. Class Evolution Tree: A Graphical Tool to Support Decisions on the Number of Classes in Exploratory Categorical Latent Variable Modeling for Rehabilitation Research

    ERIC Educational Resources Information Center

    Kriston, Levente; Melchior, Hanne; Hergert, Anika; Bergelt, Corinna; Watzke, Birgit; Schulz, Holger; von Wolff, Alessa

    2011-01-01

    The aim of our study was to develop a graphical tool that can be used in addition to standard statistical criteria to support decisions on the number of classes in exploratory categorical latent variable modeling for rehabilitation research. Data from two rehabilitation research projects were used. In the first study, a latent profile analysis was…

  15. Linear mixing model applied to AVHRR LAC data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
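
    A constrained least squares unmixing step of the kind described can be sketched as follows, assuming non-negative fractions that sum to one; the endmember spectra below are made-up numbers, not the AVHRR values used in the study, and the sum-to-one constraint is imposed softly via a heavily weighted extra row.

      import numpy as np
      from scipy.optimize import nnls

      def unmix(E, r, w=1e3):
          """Fractions f >= 0 with sum(f) ~= 1 such that E @ f ~= r.
          E: (bands x endmembers) spectra; r: observed reflectances."""
          Ea = np.vstack([E, w * np.ones(E.shape[1])])
          ra = np.append(r, w)
          f, _ = nnls(Ea, ra)
          return f

      E = np.array([[0.05, 0.30, 0.02],   # hypothetical band spectra; columns:
                    [0.45, 0.35, 0.03],   # vegetation, soil, shade
                    [0.30, 0.40, 0.02]])
      print(unmix(E, E @ np.array([0.6, 0.3, 0.1])))  # recovers ~[0.6, 0.3, 0.1]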

  16. Logit-normal mixed model for Indian Monsoon rainfall extremes

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-03-01

    Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, and daily minimum and maximum temperatures, with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and estimates of extreme rainfall variability. This work provides a valuable starting point for extending GLMMs to incorporate the intricate dependencies in extreme climate events.
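
    The data-generating process behind a logit-normal mixed model with a random intercept by station can be sketched as follows; the coefficients and dimensions are invented for illustration and do not come from the study.

      import numpy as np

      rng = np.random.default_rng(1)
      n_stations, n_days = 20, 100
      b = rng.normal(0.0, 1.0, n_stations)      # random intercept per station
      elev = rng.uniform(0.0, 2.0, n_stations)  # fixed covariate (elevation, km)

      rows = []
      for s in range(n_stations):
          eta = -2.0 + 0.5 * elev[s] + b[s]     # linear predictor (logit scale)
          p = 1.0 / (1.0 + np.exp(-eta))        # station-level exceedance prob.
          rows.append(rng.binomial(1, p, n_days))  # daily extreme indicator

      y = np.array(rows)
      print(y.mean(axis=1)[:5])                 # empirical exceedance rates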

  17. Numerical investigation of algebraic oceanic turbulent mixing-layer models

    NASA Astrophysics Data System (ADS)

    Chacón-Rebollo, T.; Gómez-Mármol, M.; Rubino, S.

    2013-11-01

    In this paper we investigate the finite-time and asymptotic behaviour of algebraic turbulent mixing-layer models by numerical simulation. We compare the performances given by three different settings of the eddy viscosity. We consider Richardson number-based vertical eddy viscosity models. Two of these are classical algebraic turbulence models usually used in numerical simulations of global oceanic circulation, i.e. the Pacanowski-Philander and the Gent models, while the other one is a more recent model (Bennis et al., 2010) proposed to prevent numerical instabilities generated by physically unstable configurations. The numerical schemes are based on the standard finite element method. We perform some numerical tests for relatively large deviations of realistic initial conditions provided by the Tropical Atmosphere Ocean (TAO) array. These initial conditions correspond to states close to mixing-layer profiles, measured on the Equatorial Pacific region called the West-Pacific Warm Pool. We conclude that mixing-layer profiles could be considered as kinds of "absorbing configurations" in finite time that asymptotically evolve to steady states under the application of negative surface energy fluxes.
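
    For reference, the Pacanowski-Philander scheme mentioned above makes the vertical eddy viscosity a decreasing function of the Richardson number. A minimal sketch with illustrative parameter values (the standard choices are roughly alpha = 5 and n = 2, plus a small background viscosity), valid for stable stratification (Ri >= 0):

      def pacanowski_philander(Ri, nu0=1e-2, nub=1e-4, alpha=5.0, n=2):
          """Vertical eddy viscosity (m^2/s): nu = nu0/(1 + alpha*Ri)^n + nub."""
          return nu0 / (1.0 + alpha * Ri) ** n + nub

      for Ri in (0.0, 0.25, 1.0):
          print(Ri, pacanowski_philander(Ri))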

  18. Combining sources in stable isotope mixing models: alternative methods.

    PubMed

    Phillips, Donald L; Newsome, Seth D; Gregg, Jillian W

    2005-08-01

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many sources to allow a unique solution. We discuss two alternative procedures for addressing this problem. One option is to combine, a priori, sources with similar signatures so the number of sources is small enough to provide a unique solution. Aggregation should be considered only when the isotopic signatures of clustered sources are not significantly different, and the sources are related so that the combined source group has some functional significance. For example, in a food web analysis, lumping several species within a trophic guild allows more interpretable results than lumping disparate food sources, even if they have similar isotopic signatures. One result of combining mixing model sources is increased uncertainty of the combined end-member isotopic signatures and consequently of the source contribution estimates; this effect can be quantified using the IsoError model (http://www.epa.gov/wed/pages/models/isotopes/isoerror1_04.htm). As an alternative to lumping sources before a mixing analysis, the IsoSource mixing model (http://www.epa.gov/wed/pages/models/isosource/isosource.htm) can be used to find all feasible solutions of source contributions consistent with isotopic mass balance. While ranges of feasible contributions for each individual source can often be quite broad, contributions from functionally related groups of sources can be summed a posteriori, producing a range of solutions for the aggregate source that may be considerably narrower. A paleo-human dietary analysis example illustrates this method, which involves a terrestrial meat food source, a combination of three terrestrial plant foods, and a combination of three marine foods. In this case, a posteriori aggregation of sources allowed considerably narrower ranges of feasible contributions.
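
    The IsoSource-style enumeration of feasible solutions reduces to a brute-force search over the simplex of source fractions. A minimal sketch for three sources and one isotope, with invented delta values and tolerance:

      import numpy as np

      def feasible_fractions(src, mix, inc=0.02, tol=0.1):
          """All fraction vectors (in steps of `inc`, summing to 1) whose
          predicted mixture signature lies within `tol` of the observed one."""
          steps = int(round(1.0 / inc))
          sols = []
          for i in range(steps + 1):
              for j in range(steps + 1 - i):
                  f = np.array([i, j, steps - i - j]) * inc
                  if abs(f @ src - mix) <= tol:
                      sols.append(f)
          return np.array(sols)

      sols = feasible_fractions(np.array([-26.0, -12.0, -20.0]), -18.0)
      print(len(sols), sols.min(axis=0), sols.max(axis=0))  # per-source ranges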

  19. Graphical Modeling of Gene Expression in Monocytes Suggests Molecular Mechanisms Explaining Increased Atherosclerosis in Smokers

    PubMed Central

    Verdugo, Ricardo A.; Zeller, Tanja; Rotival, Maxime; Wild, Philipp S.; Münzel, Thomas; Lackner, Karl J.; Weidmann, Henri; Ninio, Ewa; Trégouët, David-Alexandre; Cambien, François; Blankenberg, Stefan; Tiret, Laurence

    2013-01-01

    Smoking is a risk factor for atherosclerosis with reported widespread effects on gene expression in circulating blood cells. We hypothesized that a molecular signature mediating the relation between smoking and atherosclerosis may be found in the transcriptome of circulating monocytes. Genome-wide expression profiles and counts of atherosclerotic plaques in carotid arteries were collected in 248 smokers and 688 non-smokers from the general population. Patterns of co-expressed genes were identified by Independent Component Analysis (ICA) and the network structure of the pattern-specific gene modules was inferred by the PC-algorithm. A likelihood-based causality test was implemented to select patterns that fit models containing a path “smoking→gene expression→plaques”. Robustness of the causal inference was assessed by bootstrapping. At an FDR ≤ 0.10, 3,368 genes were associated with smoking or plaques, of which 93% were associated with smoking only. SASH1 showed the strongest association with smoking and PPARG the strongest association with plaques. Twenty-nine gene patterns were identified by ICA. Modules containing SASH1 and PPARG did not show evidence for the “smoking→gene expression→plaques” causality model. Conversely, three modules had good support for causal effects and exhibited a network topology consistent with gene expression mediating the relation between smoking and plaques. The network with the strongest support for causal effects was connected to plaques through SLC39A8, a gene with known association to HDL-cholesterol and cellular uptake of cadmium from tobacco, while smoking was directly connected to GAS6, a gene reported to have anti-inflammatory effects in atherosclerosis and to be up-regulated in the placenta of women smoking during pregnancy. Our analysis of the transcriptome of monocytes recovered genes relevant to the association with smoking and atherosclerosis, and connected genes that were previously studied only in separate contexts.

  1. Rapid estimation of lives of deficient superpave mixes and laboratory-based accelerated mix testing models

    NASA Astrophysics Data System (ADS)

    Manandhar, Chandra Bahadur

    Engineers from the Kansas Department of Transportation (KDOT) often have to decide whether or not to accept non-conforming Superpave mixtures during construction. The first part of this study focused on estimating the lives of deficient Superpave pavements incorporating nonconforming Superpave mixtures. These criteria were based on Hamburg Wheel-Tracking Device (HWTD) test results and analysis. The second part of this study focused on developing accelerated mix testing models to considerably reduce test duration. To accomplish the first objective, nine fine-graded Superpave mixes of 12.5-mm nominal maximum aggregate size (NMAS) with asphalt grade PG 64-22 from six administrative districts of KDOT were selected. Specimens were prepared with the Superpave gyratory compactor at three different target air void levels at Ndesign gyrations and four target simulated in-place density levels. The average number of wheel passes to 20-mm rut depth, creep slope, stripping slope, and stripping inflection point in HWTD tests were recorded and then used in the statistical analysis. Results showed that, in general, higher simulated in-place density, up to a limit of 91% to 93%, results in a higher number of wheel passes to 20-mm rut depth in HWTD tests. A Superpave mixture with a very low air void level at Ndesign (2%) performed very poorly in the HWTD test. HWTD tests were also performed on six 12.5-mm NMAS mixtures with air voids at Ndesign of 4% for six projects, a simulated in-place density of 93%, two temperature levels and five load levels, with binder grades of PG 64-22, PG 64-28, and PG 70-22. Field cores of 150-mm diameter from three projects in three KDOT districts with 12.5-mm NMAS and asphalt grade PG 64-22 were also obtained and tested in the HWTD for model evaluation. HWTD test results were as expected. Statistical analysis was performed and accelerated mix testing models were developed to determine the effect of increased temperature and load on the duration of testing.

  2. Defining order and timing of mutations during cancer progression: the TO-DAG probabilistic graphical model

    PubMed Central

    Lecca, Paola; Casiraghi, Nicola; Demichelis, Francesca

    2015-01-01

    Somatic mutations arise and accumulate both during tumor genesis and progression. However, the order in which mutations occur is an open question, and inference of the temporal ordering at the gene level could potentially impact patient treatment. Thus, exploiting recent observations suggesting that the occurrence of mutations is a non-memoryless process, we developed a computational approach to infer timed oncogenetic directed acyclic graphs (TO-DAGs) from human tumor mutation data. Such graphs represent the paths and the waiting times of alterations during tumor evolution. The probability of occurrence of each alteration in a path is the probability that the alteration occurs when all alterations prior to it have occurred. The waiting time between an alteration and the subsequent one is modeled as a stochastic function of the conditional probability of the event given the occurrence of the previous one. TO-DAG performance has been evaluated both on synthetic data and on somatic non-silent mutations from prostate cancer and melanoma patients and then compared with that of current well-established approaches. TO-DAG shows high performance scores on synthetic data and recognizes mutations in gatekeeper tumor suppressor genes as triggers for several downstream mutational events in the human tumor data. PMID:26528329

  3. Uncertainty in mixing models: a blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Delsman, J. R.; Oude Essink, G. H. P.

    2012-04-01

    Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few studies have addressed the uncertainty associated with these studies in much detail. This uncertainty stems from analytical error, spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km2 agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice is aimed at improving water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. The use of a GLUE-like framework for the end-member mixing analysis not only quantified the uncertainty associated with the analysis; the analysis of the posterior parameter set also identified catchment processes that would otherwise have been overlooked.
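
    The GLUE-like procedure described above amounts to Monte Carlo sampling of end-member compositions and mixing fractions, retaining only combinations that reproduce the observed sample within a tolerance. A minimal sketch with invented tracer values (the real analysis used the catchment's own tracers and acceptance criteria):

      import numpy as np

      rng = np.random.default_rng(2)
      obs = np.array([120.0, 1.8])            # observed sample: two tracers
      em_mu = np.array([[300.0, 2.5],         # end-members (illustrative):
                        [ 30.0, 1.2],         # brackish seepage, river water,
                        [ 10.0, 1.6]])        # precipitation
      em_sd = 0.15 * em_mu                    # end-member uncertainty

      kept = []
      for _ in range(100_000):
          em = rng.normal(em_mu, em_sd)       # sample end-member compositions
          f = rng.dirichlet(np.ones(3))       # sample mixing fractions
          if np.all(np.abs(f @ em - obs) <= np.array([15.0, 0.2])):
              kept.append(f)                  # retain behavioural models only

      kept = np.array(kept)
      print(len(kept), kept.mean(axis=0))     # posterior fraction estimates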

  4. WE-E-BRE-05: Ensemble of Graphical Models for Predicting Radiation Pneumonitis Risk

    SciTech Connect

    Lee, S; Ybarra, N; Jeyaseelan, K; El Naqa, I; Faria, S; Kopek, N

    2014-06-15

    Purpose: We propose a prior knowledge-based approach to construct an interaction graph of biological and dosimetric radiation pneumonitis (RP) covariates for the purpose of developing an RP risk classifier. Methods: We recruited 59 NSCLC patients who received curative radiotherapy with a minimum 6-month follow-up. 16 RP events were observed (CTCAE grade ≥2). Blood serum was collected from every patient before (pre-RT) and during RT (mid-RT). From each sample the concentrations of the following five candidate biomarkers were taken as covariates: alpha-2-macroglobulin (α2M), angiotensin converting enzyme (ACE), transforming growth factor β (TGF-β), interleukin-6 (IL-6), and osteopontin (OPN). Dose-volumetric parameters were also included as covariates. The number of biological and dosimetric covariates was reduced by a variable selection scheme implemented by L1-regularized logistic regression (LASSO). The posterior probability distribution of interaction graphs between the selected variables was estimated from the data under literature-based prior knowledge, weighting more heavily the graphs that contain the expected associations. A graph ensemble was formed by averaging the most probable graphs weighted by their posterior, creating a Bayesian network (BN)-based RP risk classifier. Results: The LASSO selected the following 7 RP covariates: (1) pre-RT concentration level of α2M, (2) α2M level mid-RT/pre-RT, (3) pre-RT IL6 level, (4) IL6 level mid-RT/pre-RT, (5) ACE mid-RT/pre-RT, (6) PTV volume, and (7) mean lung dose (MLD). The ensemble BN model achieved a maximum sensitivity/specificity of 81%/84% and outperformed univariate dosimetric predictors, as shown by larger AUC values (0.78-0.81) compared with MLD (0.61), V20 (0.65) and V30 (0.70). The ensembles obtained by incorporating the prior knowledge improved classification performance for ensemble sizes of 5-50. Conclusion: We demonstrated a probabilistic ensemble method to detect robust associations between biological and dosimetric covariates and RP risk.
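
    The LASSO variable-selection step reported above can be sketched generically with L1-regularized logistic regression; the synthetic data below merely mimic the dimensions of the study (59 patients, a dozen candidate covariates) and none of the clinical specifics.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(3)
      X = rng.normal(size=(59, 12))          # 59 patients x 12 covariates
      y = rng.binomial(1, 0.27, size=59)     # ~16/59 events (synthetic labels)

      Xs = StandardScaler().fit_transform(X)
      lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
      lasso.fit(Xs, y)
      print(np.flatnonzero(lasso.coef_[0]))  # indices of retained covariates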

  5. An Adaptive Mixing Depth Model for an Industrialized Shoreline Area.

    NASA Astrophysics Data System (ADS)

    Dunk, Richard H.

    1993-01-01

    Internal boundary layer characteristics are often overlooked in atmospheric diffusion modeling applications but are essential for accurate air quality assessment. This study focuses on a unique air pollution problem that is partially resolved by representative internal boundary layer description and prediction. Emissions from a secondary non-ferrous smelter located adjacent to a large waterway, situated near a major coastal zone, became suspect in causing adverse air quality. In an effort to prove or disprove this allegation, "accepted" air quality modeling was performed. Predicted downwind concentrations indicated that the smelter plume was not responsible for causing regulatory standards to be exceeded. However, chronic community complaints continued to be directed toward the smelter facility. Further investigation into the problem revealed that complaint occurrences coincided with onshore southeasterly flows. Internal boundary layer development during onshore flow was assumed to produce a mixing depth conducive to plume trapping or fumigation. This premise led to the use of estimated internal boundary layer depths as dispersion model input in an attempt to improve prediction accuracy. Monitored downwind ambient air concentrations showed that model predictions were still substantially lower than actual values. After analyzing the monitored values and comparing them with plume observations conducted during several onshore flow occurrences, the author hypothesized that the waterway could have a damping effect on internal boundary layer development. This effective decrease in mixing depths would explain the abnormally high ambient air concentrations experienced during onshore flows. Therefore, a full-scale field study was designed and implemented to study the waterway's influence on mixing depth characteristics. The resultant data were compiled and formulated into an area-specific mixing depth model that can be adapted to similar industrialized shoreline areas.

  6. Fermion masses and mixing in general warped extra dimensional models

    NASA Astrophysics Data System (ADS)

    Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel

    2015-06-01

    We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor-breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor-symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution differs from five-dimensional anti-de Sitter space (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector become available.

  7. Computations of instability and turbulent mixing by Nikiforov's model

    NASA Astrophysics Data System (ADS)

    Razin, A. N.; Bolshakov, I. V.

    2014-08-01

    The results of modeling several laboratory experiments, including a large class of advanced experimental studies of turbulent flows, are presented. Computations of Meshkov's "cylindrical" and "planar" experiments on the confluence of two zones of turbulent mixing, and of the experiments of Poggi, Barre, and Uberoi, have been carried out using Nikiforov's model. The presented results attest that Nikiforov's model qualitatively describes the considered class of flows, provided the mean gas-dynamic quantities are computed with high accuracy and the width of the front of the finite-difference shock wave does not depend on the size of the computational grid cell.

  8. Extension of the stochastic mixing model to cumulonimbus clouds

    SciTech Connect

    Raymond, D.J.; Blyth, A.M.

    1992-11-01

    The stochastic mixing model of cumulus clouds is extended to the case in which ice and precipitation form. A simple cloud microphysical model is adopted in which ice crystals and aggregates are carried along with the updraft, whereas raindrops, graupel, and hail are assumed to fall out immediately. The model is then applied to the 2 August 1984 case study of convection over the Magdalena Mountains of central New Mexico, with excellent results. The formation of ice and precipitation can explain the transition of this system from a cumulus congestus cloud to a thunderstorm.

  9. A Mixed-Culture Biofilm Model with Cross-Diffusion.

    PubMed

    Rahman, Kazi A; Sudarsan, Rangarajan; Eberl, Hermann J

    2015-11-01

    We propose a deterministic continuum model for mixed-culture biofilms. A crucial aspect is that movement of one species is affected by the presence of the other. This leads to a degenerate cross-diffusion system that generalizes an earlier single-species biofilm model. Two derivations of this new model are given. One, like cellular automata biofilm models, starts from a discrete in space lattice differential equation where the spatial interaction is described by microscopic rules. The other one starts from the same continuous mass balances that are the basis of other deterministic biofilm models, but it gives up a simplifying assumption of these models that has recently been criticized as being too restrictive in terms of ecological structure. We show that both model derivations lead to the same PDE model, if corresponding closure assumptions are introduced. To investigate the role of cross-diffusion, we conduct numerical simulations of three biofilm systems: competition, allelopathy and a mixed system formed by an aerobic and an anaerobic species. In all cases, we find that accounting for cross-diffusion affects local distribution of biomass, but it does not affect overall lumped quantities such as the total amount of biomass in the system. PMID:26582360

  10. Analysis of mixed model in gear transmission based on ADAMS

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2012-09-01

    Traditional methods of simulating mechanical gear drives include the gear-pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy; the latter usually obtains higher-precision results, but the calculation process is complex and does not converge easily. Most research to date has focused on the description of geometric models and the definition of boundary conditions, but neither addresses the problem fundamentally. To improve simulation efficiency while ensuring accurate results, a mixed-model method is presented that uses gear tooth profiles in place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected through the SolidWorks API and fitted to curves in ADAMS; next, the fitted curves are positioned according to the location of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through tooth profile curves, simulates the meshing process through curve-to-curve contact, and supplies mass and inertia data via the solid gear models. The simulation combines the two models to complete the gear drive analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed-model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed-model method has high application value for studying the dynamics of gear mechanisms.

  11. Pore Scale Modeling of Mixing-Induced Carbonate Precipitation

    NASA Astrophysics Data System (ADS)

    Steefel, C.; Molins, S.; Shen, C.; Trebotich, D.

    2011-12-01

    Mixing of groundwaters of differing chemical composition can lead to precipitation of minerals, potentially modifying the transport and chemical properties of the subsurface materials. Carbonate minerals are particularly common secondary phases that form as a result of mixing, although in many instances their formation is also affected by a suite of complex dissolution and precipitation reactions that change the pH and alkalinity of groundwater. In the case of mixing, several distinct regimes are recognized, depending on the supersaturation generated by the mixing process. Where mixing produces a high degree of supersaturation with respect to carbonate (e.g., log Q/Keq > 1.5, where Q is the ion activity product and Keq is the equilibrium constant), homogeneous nucleation can generate reactive surface area for continued carbonate growth; in this case, no interaction between the mixing fluid and immobile solid phases is needed. In contrast, where supersaturation is more limited (log Q/Keq = 0.5 to 1.5), precipitation generally takes place via heterogeneous nucleation, in which case a templated mineral surface (normally carbonate) is required. Heterogeneous nucleation of carbonates is typically second order with respect to the supersaturation. At lower degrees of supersaturation (log Q/Keq < 0.5), precipitation takes place via crystal growth on discrete surface features of the carbonate mineral (e.g., via spiral growth) and shows a first-order or quasi-first-order dependence on supersaturation. Thus, the supersaturation induced by mixing largely controls the order of the reaction and the extent of interaction with pre-existing mineral surfaces in the subsurface. These in turn affect how the physical and chemical properties of the medium are modified by carbonate precipitation. We are investigating these carbonate precipitation regimes using pore-scale reactive transport modeling based on Direct Numerical Simulation methods.
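
    The saturation thresholds quoted above translate directly into a small classifier; this sketch simply restates the abstract's regimes in code, with Q and Keq supplied by the user.

      import math

      def precipitation_regime(Q, Keq):
          """Classify the carbonate precipitation regime from log10(Q/Keq)."""
          s = math.log10(Q / Keq)
          if s > 1.5:
              return "homogeneous nucleation"
          if s > 0.5:
              return "heterogeneous nucleation (second order in supersaturation)"
          if s > 0.0:
              return "growth on discrete surface features (~first order)"
          return "undersaturated: no precipitation"

      print(precipitation_regime(10**1.8, 1.0))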

  12. Shell Model Depiction of Isospin Mixing in sd Shell

    SciTech Connect

    Lam, Yi Hua; Smirnova, Nadya A.; Caurier, Etienne

    2011-11-30

    We constructed a new empirical isospin-symmetry breaking (ISB) Hamiltonian in the sd (1s1/2, 0d5/2 and 0d3/2) shell-model space. In this contribution, we present its application to two important case studies: (i) β-delayed proton emission from 22Al and (ii) the isospin-mixing correction to superallowed 0+ → 0+ β-decay ft-values.

  13. Higgs-radion mixing in stabilized brane world models

    NASA Astrophysics Data System (ADS)

    Boos, Edward E.; Bunichev, Viacheslav E.; Perfilov, Maxim A.; Smolyakov, Mikhail N.; Volobuev, Igor P.

    2015-11-01

    We consider a quartic interaction of the Higgs and Goldberger-Wise fields, which connects the mechanism of the extra dimension size stabilization with spontaneous symmetry breaking on our brane and gives rise to a coupling of the Higgs field to the radion and its KK tower. We estimate a possible influence of this coupling on the Higgs-radion mixing and study restrictions on model parameters from the LHC data.

  14. Pricing turbo warrants under mixed-exponential jump diffusion model

    NASA Astrophysics Data System (ADS)

    Yu, Jianfeng; Xu, Weidong

    2016-06-01

    A turbo warrant is a special type of barrier option in which the rebate is calculated as another exotic option. In this paper, using Laplace transforms we obtain the valuation of turbo warrants under the mixed-exponential jump diffusion model, which is able to approximate any jump size distribution. Numerical Laplace inversion examples verify that the analytical solutions are accurate. The simulation results confirm the argument that jump risk should not be ignored in the valuation of turbo warrants.
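
    The numerical Laplace inversion step can be illustrated with a transform whose inverse is known in closed form; the sketch below uses mpmath's invertlaplace routine (assuming mpmath >= 1.0) on F(s) = 1/(s + 1), whose inverse is exp(-t). The pricing transforms in the paper are far more involved, but the inversion call looks the same.

      import mpmath as mp

      F = lambda s: 1 / (s + 1)              # known inverse: f(t) = exp(-t)
      for t in (0.5, 1.0, 2.0):
          approx = mp.invertlaplace(F, t, method="talbot")
          print(t, approx, mp.e**(-t))       # numerical vs exact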

  15. Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models

    PubMed Central

    Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.

    2013-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring the hydraulic and tracer responses spatially and temporally. Statistical mixed models are applied to the hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting that a greater volume of the system is flushed by flowing water at higher rates. The spatial and temporal distribution of tracer concentrations indicates the presence of both conduit-like and diffuse transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921
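
    A statistical mixed model of the kind applied to the hydraulic data can be sketched with statsmodels; the synthetic data below (monitoring ports, flow rates, hydraulic responses) are invented stand-ins for the experimental measurements, not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      n_ports, n_obs = 8, 30                        # monitoring ports x injections
      port = np.repeat(np.arange(n_ports), n_obs)
      rate = np.tile(rng.uniform(0.5, 2.0, n_obs), n_ports)  # flow-rate covariate
      head = (1.0 + 0.8 * rate                      # fixed effect of flow rate
              + rng.normal(0, 0.5, n_ports)[port]   # random intercept per port
              + rng.normal(0, 0.2, n_ports * n_obs))

      df = pd.DataFrame({"head": head, "rate": rate, "port": port})
      fit = smf.mixedlm("head ~ rate", df, groups=df["port"]).fit()
      print(fit.summary())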

  16. Modeling and diagnosing interface mix in layered ICF implosions

    NASA Astrophysics Data System (ADS)

    Weber, C. R.; Berzak Hopkins, L. F.; Clark, D. S.; Haan, S. W.; Ho, D. D.; Meezan, N. B.; Milovich, J. L.; Robey, H. F.; Smalyuk, V. A.; Thomas, C. A.

    2015-11-01

    Mixing at the fuel-ablator interface of an inertial confinement fusion (ICF) implosion can arise from an unfavorable in-flight Atwood number between the cryogenic DT fuel and the ablator. High-Z dopant is typically added to the ablator to control the Atwood number, but recent high-density carbon (HDC) capsules have been shot at the National Ignition Facility (NIF) without this added dopant. Highly resolved post-shot modeling of these implosions shows that there was significant mixing of ablator material into the dense DT fuel. This mix lowers the fuel density and results in less overall compression, helping to explain the measured ratio of downscattered-to-primary neutrons. Future experimental designs will seek to address this issue by adding dopant and changing the x-ray spectrum with a different hohlraum wall material. To test these changes, we are designing an experimental platform to look at the growth of this mixing layer. The technique uses side-on radiography to measure the spatial extent of an embedded high-Z tracer layer near the interface. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  17. Effects of mixing in threshold models of social behavior

    NASA Astrophysics Data System (ADS)

    Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and by observation of their neighbors' behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and either updating its state or exchanging it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories started from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects and finite-mixing-rate effects, which have qualitatively similar consequences. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the "ground state." Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
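
    The asynchronous update-or-swap dynamics described above can be sketched in a few lines. The lattice size, number of steps and mixing probability below are arbitrary choices, and the update rule (adopt when the active fraction of the four lattice neighbours reaches the individual's threshold) is one simple reading of the model, not the authors' exact code.

      import numpy as np

      rng = np.random.default_rng(5)
      L, steps, p_mix = 50, 100_000, 0.1
      state = rng.integers(0, 2, size=(L, L))        # current behaviour (0/1)
      thresh = rng.uniform(0.0, 1.0, size=(L, L))    # fixed personal thresholds

      for _ in range(steps):
          i, j = rng.integers(0, L, 2)
          if rng.random() < p_mix:                   # mixing: swap two individuals
              k, m = rng.integers(0, L, 2)
              for arr in (state, thresh):
                  arr[i, j], arr[k, m] = arr[k, m], arr[i, j]
          else:                                      # update against neighbours
              nb = [state[(i + 1) % L, j], state[(i - 1) % L, j],
                    state[i, (j + 1) % L], state[i, (j - 1) % L]]
              state[i, j] = 1 if np.mean(nb) >= thresh[i, j] else 0

      print(state.mean())                            # final fraction active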

  18. Testing hypotheses in ecoimmunology using mixed models: disentangling hierarchical correlations.

    PubMed

    Downs, C J; Dochtermann, N A

    2014-09-01

    Considerable research in ecoimmunology focuses on investigating variation in immune responses and linking this variation to physiological trade-offs, ecological traits, and environmental conditions. Variation in immune responses exists within and among individuals, among populations, and among taxonomic groupings. Understanding how variation and covariation are distributed and how they differ across these levels is necessary for drawing appropriate ecological and evolutionary inferences. Moreover, variation at the among-individual level directly connects to underlying quantitative genetic parameters. In order to fully understand immune responses in evolutionary and ecological contexts and to reveal phylogenetic constraints on evolution, statistical approaches must allow (co)variance to be partitioned among levels of individual, population, and phylogenetic organization (e.g., population, species, genera, and various higher taxa). Herein, we describe how multi-response mixed-effects models can be used to partition variation in immune responses among different hierarchical levels, specifically within-individuals, among-individuals, and among-species. We use simulated data to demonstrate that mixed models allow for proper partitioning of (co)variances. Importantly, these simulations also demonstrate that conventional statistical tools grossly misestimate relevant parameters, which urges caution in relating ecoimmunological hypotheses to existing empirical research. We conclude by discussing the advantages and caveats of a mixed-effects modeling approach.

  19. Mixing during intravertebral arterial infusions in an in vitro model.

    PubMed

    Lutz, Robert J; Warren, Kathy; Balis, Frank; Patronas, Nicholas; Dedrick, Robert L

    2002-06-01

    Regional delivery of drugs can offer a pharmacokinetic advantage in the treatment of localized tumors. One method of regional delivery is intra-arterial infusion into the basilar/vertebral artery network, which provides local access to infratentorial tumors, a frequent location of childhood brain cancers. Proper delivery of drug by infused solutions requires adequate mixing of the infusate at the site of infusion within the artery lumen. Our mixing studies with an in vitro model of the vertebral artery network indicate that streaming of the drug solution is likely to occur at low, steady infusion rates of 2 ml/min. Streaming leads to maldistribution of drug to distal perfused brain regions and may result in toxic levels in some regions while concurrently yielding subtherapeutic levels in adjacent regions. According to our model findings, distribution to both brain hemispheres is not likely following infusion into a single vertebral artery, even if the infusate is well mixed at the infusion site. This outcome results from the unique fluid flow properties of two converging channels, here the left and right vertebral branches converging into the basilar artery. Fluid in the model remains stratified on the side of the basilar artery served by the infused vertebral artery. Careful thought and planning of the methods of intravertebral drug infusion for treating posterior fossa tumors are required to assure proper distribution of the drug to the desired tissue regions. Improper delivery may be responsible for some noted toxicities or for failure of the treatments. PMID:12164691

  1. IMaGe: Iterative Multilevel Probabilistic Graphical Model for Detection and Segmentation of Multiple Sclerosis Lesions in Brain MRI.

    PubMed

    Subbanna, Nagesh; Precup, Doina; Arnold, Douglas; Arbel, Tal

    2015-01-01

    In this paper, we present IMaGe, a new, iterative two-stage probabilistic graphical model for detection and segmentation of Multiple Sclerosis (MS) lesions. Our model includes two levels of Markov Random Fields (MRFs). At the bottom level, a regular-grid voxel-based MRF identifies potential lesion voxels, as well as other tissue classes, using local and neighbourhood intensities and class priors. Contiguous voxels of a particular tissue type are grouped into regions. A higher, non-lattice MRF is then constructed, in which each node corresponds to a region, and edges are defined based on neighbourhood relationships between regions. The goal of this MRF is to evaluate the probability of candidate lesions, based on group intensity, texture and neighbouring regions. The inferred information is then propagated to the voxel-level MRF. This process of iterative inference between the two levels repeats as long as desired. The iterations suppress false positives and refine lesion boundaries. The framework is trained on 660 MRI volumes of MS patients enrolled in clinical trials from 174 different centres, and tested on a separate multi-centre clinical trial data set with 535 MRI volumes. All data consist of T1, T2, PD and FLAIR contrasts. In comparison to other MRF methods, including a traditional MRF, IMaGe is much more sensitive (with slightly better PPV). It outperforms its nearest competitor by around 20% when detecting very small lesions (3-10 voxels). This is a significant result, as such lesions constitute around 40% of the total number of lesions. PMID:26221699

  2. Variable selection for semiparametric mixed models in longitudinal studies.

    PubMed

    Ni, Xiao; Zhang, Daowen; Zhang, Hao Helen

    2010-03-01

    We propose a double-penalized likelihood approach for simultaneous model selection and estimation in semiparametric mixed models for longitudinal data. Two types of penalties are jointly imposed on the ordinary log-likelihood: a roughness penalty on the nonparametric baseline function and a nonconcave shrinkage penalty on the linear coefficients to achieve model sparsity. Compared to existing estimating-equation-based approaches, our procedure provides valid inference for data that are missing at random, and will be more efficient if the specified model is correct. Another advantage of the new procedure is its easy computation for both regression components and variance parameters. We show that the double-penalized problem can be conveniently reformulated into a linear mixed model framework, so that existing software can be directly used to implement our method. For the purpose of model inference, we derive both frequentist and Bayesian variance estimates for the estimated parametric and nonparametric components. Simulation is used to evaluate and compare the performance of our method to the existing ones. We then apply the new method to a real data set from a lactation study.

  3. Numerical issues for coupling biological models with isopycnal mixing schemes

    NASA Astrophysics Data System (ADS)

    Gnanadesikan, Anand

    1999-01-01

    In regions of sloping isopycnals, isopycnal mixing acting in conjunction with biological cycling can produce patterns in the nutrient field which have negative values of tracer in light water and unrealistically large values of tracer in dense water. Under certain circumstances, these patterns can start to grow unstably. This paper discusses why such behavior occurs. Using a simple four-box model, it demonstrates that the instability appears when the isopycnal slopes exceed the grid aspect ratio (Δz/Δx). In contrast to other well-known instabilities of the CFL type, this instability does not depend on the time step or time-stepping scheme. Instead it arises from a fundamental incompatibility between two requirements for isopycnal mixing schemes, namely that they should produce no net flux of passive tracer across an isopycnal and everywhere reduce tracer extrema. In order to guarantee no net flux of tracer across an isopycnal, some upgradient fluxes across certain parts of an isopycnal are required to balance downgradient fluxes across other parts of the isopycnal. However, these upgradient fluxes can cause local maxima in the nutrient field to become self-reinforcing. Although this is less of a problem in larger domains, there is still a strong tendency for isopycnal mixing to overconcentrate tracer in the dense water. The introduction of eddy-induced advection is shown to be capable of counteracting the upgradient fluxes of nutrient which cause problems, stabilizing the solution. The issue is not simply a numerical curiosity. When used in a GCM, different parameterizations of eddy mixing result in noticeably different distributions of nutrient and large differences in biological production. While much of this is attributable to differences in convection and circulation, the numerical errors described here may also play an important role in runs with isopycnal mixing alone.

  4. A mixed system modeling two-directional pedestrian flows.

    PubMed

    Goatin, Paola; Mimault, Matthias

    2015-04-01

    In this article, we present a simplified model to describe the dynamics of two groups of pedestrians moving in opposite directions in a corridor. The model consists of a 2 × 2 system of conservation laws of mixed hyperbolic-elliptic type. We study the basic properties of the system to understand why and how bounded oscillations arise in numerical simulations. We show that the Lax-Friedrichs scheme ensures the invariance of the domain, and we investigate the existence of measure-valued solutions as the limit of a subsequence of approximate solutions. PMID:25811441
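
    The Lax-Friedrichs scheme mentioned above has a compact generic form for a 2 × 2 system of conservation laws; the sketch below applies it to a toy linear flux on a periodic grid, not to the pedestrian model itself.

      import numpy as np

      def lax_friedrichs(flux, U0, dx, dt, n_steps):
          """Lax-Friedrichs scheme for U_t + F(U)_x = 0 on a periodic grid.
          U0: (2, N) initial data; flux: maps U (2, N) to F(U) (2, N)."""
          U, lam = U0.copy(), dt / dx
          for _ in range(n_steps):
              Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)
              F = flux(U)
              Fp, Fm = np.roll(F, -1, axis=1), np.roll(F, 1, axis=1)
              U = 0.5 * (Up + Um) - 0.5 * lam * (Fp - Fm)
          return U

      flux = lambda U: np.vstack([1.0 * U[0], -1.0 * U[1]])  # toy linear flux
      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      U0 = np.vstack([np.exp(-100 * (x - 0.5) ** 2)] * 2)
      dx = x[1] - x[0]
      print(lax_friedrichs(flux, U0, dx, 0.4 * dx, 100).shape)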

  5. An A4 × Z4 model for neutrino mixing

    NASA Astrophysics Data System (ADS)

    BenTov, Yoni; He, Xiao-Gang; Zee, A.

    2012-12-01

    The A4 × U(1) flavor model of He, Keum, and Volkas is extended to provide a minimal modification to tribimaximal mixing that accommodates a nonzero reactor angle θ13 ≈ 0.1. The sequestering problem is circumvented by forbidding superheavy scales and large coupling constants which would otherwise generate sizable RG flows. The model is compatible with (but does not require) a stable or metastable dark matter candidate in the form of a complex scalar field with unit charge under a discrete subgroup Z4 of the U(1) flavor symmetry.

  6. Modeling and Simulation of Fluid Mixing Laser Experiments and Supernova

    SciTech Connect

    James Glimm

    2009-06-04

    The three-year plan for this project was to develop novel theories and advanced simulation methods leading to a systematic understanding of turbulent mixing. A primary focus is the comparison of simulation models (Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), full two-fluid simulations, and subgrid averaged models) with experiments. The comprehension and reduction of experimental and simulation data are central goals of this proposal. We model 2D and 3D perturbations of planar or circular interfaces. We compare these tests with models derived from averaged equations (our own and those of others). As a second focus, we develop physics-based subgrid simulation models of diffusion across an interface, with physical but no numerical mass diffusion. Multiple layers and reshock are considered here.

  7. A new unsteady mixing model to predict NO(x) production during rapid mixing in a dual-stage combustor

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1992-01-01

    An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A combustor design with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary zone product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the second stage. Numerical predictions using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used to study multispecies mixing as well as propane-air and hydrogen-air jet nonpremixed flames, and has been used to predict NO(x) production in the mixing region. Comparisons with available experimental data show good agreement, thereby providing validation of the mixing model. With this demonstration, this mixing model is ready to be implemented in conjunction with steady-state prediction methods and to provide an improved engineering design analysis tool.

  8. A Web Graphics Primer.

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1999-01-01

    Discusses the basic technical concepts of using graphics in World Wide Web pages, including: color depth and dithering, dots-per-inch, image size, file types such as the Graphics Interchange Format (GIF) and the Joint Photographic Experts Group (JPEG) format, and software recommendations. (AEF)

  9. Intercomparison of garnet barometers and implications for garnet mixing models

    SciTech Connect

    Anovitz, L.M.; Essene, E.J.

    1985-01-01

    Several well-calibrated barometers are available in the system Ca-Fe-Ti-Al-Si-O, including: Alm + 3Ru = 3Ilm + Sil + 2Qtz (GRAIL); 2Alm + Gr + 6Ru = 6Ilm + 3An + 3Qtz (GRIPS); 2Alm + Gr = 3Fa + 3An (FAGS); 3An = Gr + 2Ky + Qtz (GASP); 2Fs = Fa + Qtz (FFQ); and Gr + Qtz = An + 2Wo (WAGS). GRIPS, GRAIL and GASP form a linearly dependent set such that any two should yield the third given an a/X model for the grossular/almandine solid solution. Application to barometry of garnet granulite assemblages from the Grenville in Ontario yields average pressures 0.1 kb lower for GRIPS and 0.4 kb higher for FAGS using our mixing model. Results from Parry Island, Ontario, yield 8.7 kb from GRAIL as opposed to 9.1 kb using Ganguly and Saxena's model. For GASP, Parry Island assemblages yield 8.4 kb with the authors' calibration. Ganguly and Saxena's model gives 5.4 kb using Gasparik's reversals and 8.1 kb using the position of GASP calculated from GRIPS and GRAIL. These corrections allow GRIPS, GRAIL, GASP and FAGS to yield consistent pressures to +/- 0.5 kb in regional metamorphic terranes. Application of their mixing model outside of the fitted range 700-1000 K is not encouraged as extrapolation may yield erroneous results.

  10. Nonlinear spectral mixing theory to model multispectral signatures

    SciTech Connect

    Borel, C.C.

    1996-02-01

    Nonlinear spectral mixing occurs due to multiple reflections and transmissions between discrete surfaces, e.g. leaves or facets of a rough surface. The radiosity method is an energy-conserving computational method used in thermal engineering, and it models nonlinear spectral mixing realistically and accurately. In contrast to the radiative transfer method, the radiosity method takes into account the discreteness of the scattering surfaces (e.g. exact location, orientation and shape), such as leaves, and includes mutual shading between them. An analytic radiosity-based scattering model for vegetation was developed and used to compute vegetation indices for various configurations. The leaf reflectance and transmittance were modeled using the PROSPECT model for various amounts of water and chlorophyll and for variable leaf structure. The soil background was modeled using SOILSPEC with a linear mixture of reflectances of sand, clay and peat. A neural network and a geometry-based retrieval scheme were used to retrieve leaf area index and chlorophyll concentration for dense canopies. Only simulated canopy reflectances in the six visible through short-wave IR Landsat TM channels were used. The authors used an empirical function to compute the signal-to-noise ratio of a retrieved quantity.
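
    A toy radiosity system illustrating the nonlinear mixing mechanism (multiple interreflections between surfaces); the form factors, reflectances and source terms below are invented for demonstration and are not the paper's values:

    ```python
    import numpy as np

    # Toy radiosity system B = E + diag(rho) @ F @ B, solved as a linear
    # system. Because of the matrix inverse, B depends nonlinearly on the
    # reflectances rho, which is the nonlinear spectral mixing effect
    # discussed above.
    F = np.array([[0.0, 0.4],   # form factors: fraction of energy leaving
                  [0.4, 0.0]])  # surface i that reaches surface j (invented)
    rho = np.array([0.5, 0.3])  # surface reflectances at one wavelength
    E = np.array([1.0, 0.2])    # direct (unscattered) irradiance terms

    B = np.linalg.solve(np.eye(2) - np.diag(rho) @ F, E)
    print(B)  # total radiosity including all orders of interreflection
    ```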

  11. Misspecification of the covariance structure in generalized linear mixed models.

    PubMed

    Chavance, M; Escolano, S

    2016-04-01

    When fitting marginal models to correlated outcomes, the so-called sandwich variance is commonly used. However, this is not the case when fitting mixed models. Using two data sets, we illustrate the problems that can be encountered. We show that the differences or the ratios between the naive and sandwich standard deviations of the fixed effects estimators provide convenient means of assessing the fit of the model, as both are consistent when the covariance structure is correctly specified, but only the latter is when that structure is misspecified. When the number of statistical units is not too small, the sandwich formula correctly estimates the variance of the fixed effects estimator even if the random effects are misspecified, and it can be used as a diagnostic tool for assessing the misspecification of the random effects. A simple comparison with the naive variance is sufficient, and we propose considering a ratio of the naive to sandwich standard deviations outside the [3/4, 4/3] interval as signaling a risk of erroneous inference due to a model misspecification. We strongly advocate broader use of the sandwich variance for statistical inference about the fixed effects in mixed models.
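
    In the usual notation (ours, not the authors'), the sandwich estimator and the proposed diagnostic ratio take the form

    ```latex
    \widehat{\mathrm{Var}}_{\mathrm{sand}}(\hat\beta) \;=\; \hat A^{-1}\,\hat B\,\hat A^{-1},
    \qquad
    r_j \;=\; \frac{\widehat{\mathrm{se}}_{\mathrm{naive}}(\hat\beta_j)}
                   {\widehat{\mathrm{se}}_{\mathrm{sand}}(\hat\beta_j)},
    ```

    where \hat A is the model-based (naive) information and \hat B the empirical variance of the unit-level score contributions; the diagnostic flags any coefficient with r_j outside [3/4, 4/3].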

  12. Mixing characteristics of sludge simulant in a model anaerobic digester.

    PubMed

    Low, Siew Cheng; Eshtiaghi, Nicky; Slatter, Paul; Baudez, Jean-Christophe; Parthasarathy, Rajarathinam

    2016-03-01

    This study aims to investigate the mixing characteristics of a transparent sludge simulant in a mechanically agitated model digester using a flow visualisation technique. Video images of the flow patterns were obtained by recording the progress of an acid-base reaction and analysed to determine the active and inactive volumes as a function of time. The doughnut-shaped inactive region formed above and below the impeller in low-concentration simulant decreases in size with time and finally disappears. The 'cavern'-shaped active mixing region formed around the impeller in simulant solutions with higher concentrations increases with agitation time and reaches a steady-state equilibrium size, which is a function of the specific power input. These results indicate that the active volume is jointly determined by simulant rheology and specific power input. A mathematical correlation is proposed to estimate the active volume as a function of simulant concentration in terms of the yield Reynolds number. PMID:26739143

  13. Mixing of dye in a model scald tank.

    PubMed

    Cason, J A; Shackelford, A D

    1999-10-01

    A model scald tank was constructed to study the mixing pattern of water in a poultry scalding system. Tank dimensions were approximately 6 m long by 10.5 cm wide with a water depth of 18 cm. Water was vigorously agitated with compressed air delivered through a 1.9-cm polyvinyl chloride pipe on the bottom of the tank. Food coloring was added to the tank at a single point, and water samples were taken at distances of 0, 0.5, 1.0, 1.5, and 2.5 m every 30 s for 10 min, with 0 or 10 L/min water flow through the tank. Dye concentration was determined spectrophotometrically. A chain drive was then installed above the tank with aluminum paddles (area about 25% of tank cross-sectional area) attached to the chain every 15.2 cm to simulate the movement of carcasses through the water at 140 carcasses per minute. Food coloring was added to the tank, and water samples were taken every 15 s for 2.5 min, with 0 or 13.5 L/min water flow through the tank. A computer program based on perfect mixing of water in small slices or cells within the tank was adjusted until predicted dye movement matched sampling data, with correlations of 0.91 or better at all sampling points. For scalder designs with uniform mixing of water, the computer model can predict mixing patterns, including counterflow conditions in a single tank, well enough to yield realistic residence time patterns for bacteria suspended in scald water. PMID:10536796
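
    An illustrative sketch (not the authors' program) of the perfectly-mixed-cells idea described above: the tank is split into a row of cells, each perfectly mixed, with turbulent exchange between neighbours and an optional through-flow. All names and parameter values are assumptions made for demonstration.

    ```python
    import numpy as np

    def mix_step(c, exchange, flow, dt, cell_vol=1.0):
        """Advance dye concentrations c (one value per cell) by one step."""
        dc = np.zeros_like(c)
        dc[:-1] += exchange * (c[1:] - c[:-1])   # exchange with right neighbour
        dc[1:]  += exchange * (c[:-1] - c[1:])   # exchange with left neighbour
        dc[0]   += flow * (0.0 - c[0])           # fresh water enters cell 0
        dc[1:]  += flow * (c[:-1] - c[1:])       # through-flow toward the outlet
        return c + dt * dc / cell_vol

    c = np.zeros(12)
    c[5] = 1.0                      # dye added at a single point
    for _ in range(600):            # 10 minutes of 1 s steps
        c = mix_step(c, exchange=0.05, flow=0.01, dt=1.0)
    ```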

  14. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers deciding how to get to their destination, discrete choice models based on random utility theory have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random-coefficient distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models whose specification is inconsistent with real behavior have been studied with simulation experiments.
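
    In standard notation (an illustration, not the authors' exact specification), the Box-Cox transform of a positive attribute x and the simulated mixed logit choice probability are

    ```latex
    x^{(\lambda)} \;=\;
    \begin{cases}
      \dfrac{x^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[6pt]
      \ln x, & \lambda = 0,
    \end{cases}
    \qquad
    P_{ni} \;=\; \int L_{ni}(\beta)\, f(\beta \mid \theta)\, d\beta
    \;\approx\; \frac{1}{R}\sum_{r=1}^{R} L_{ni}\!\bigl(\beta^{(r)}\bigr),
    ```

    where L_{ni} is the logit probability of alternative i for individual n at coefficients \beta, and \beta^{(r)} are draws from the random-coefficient distribution f(\beta | \theta).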

  15. Using Search Algorithms and Probabilistic Graphical Models to Understand the Influence of Atmospheric Circulation on Western US Drought

    NASA Astrophysics Data System (ADS)

    Malevich, S. B.; Woodhouse, C. A.

    2015-12-01

    This work explores a new approach to quantifying cool-season mid-latitude circulation dynamics as they relate to western US streamflow variability and drought. This information is used to probabilistically associate patterns of synoptic atmospheric circulation with spatial patterns of drought in western US streamflow. Cool-season storms transport moisture from the Pacific Ocean and are a primary source for western US streamflow. Studies over the past several decades have emphasized that the western US hydroclimate is influenced by the intensity and phasing of ocean and atmosphere dynamics and teleconnections, such as ENSO and North Pacific variability. These complex interactions are realized in atmospheric circulation along the west coast of North America. The region's atmospheric circulation can encourage a preferential flow in winter storm tracks from the Pacific, and thus influence the moisture conditions of a given river basin over the course of the cool season. These dynamics have traditionally been measured with atmospheric indices based on values from fixed points in space or principal component loadings. This study uses collective search agents to quantify the position and intensity of potentially non-stationary atmosphere features in climate reanalysis datasets, relative to regional hydrology. Results underline the spatio-temporal relationship between semi-permanent atmosphere characteristics and naturalized streamflow from major river basins of the western US. A probabilistic graphical model quantifies this relationship while accounting for uncertainty from noisy climate processes and, eventually, limitations from dataset length. This creates probabilities for semi-permanent atmosphere features which we hope to associate with extreme droughts of the paleo record, based on our understanding of atmosphere-streamflow relations observed in the instrumental record.

  16. Graphical assessment of incremental value of novel markers in prediction models: From statistical to decision analytical perspectives.

    PubMed

    Steyerberg, Ewout W; Vedder, Moniek M; Leening, Maarten J G; Postmus, Douwe; D'Agostino, Ralph B; Van Calster, Ben; Pencina, Michael J

    2015-07-01

    New markers may improve prediction of diagnostic and prognostic outcomes. We aimed to review options for graphical display and summary measures to assess the predictive value of markers over standard, readily available predictors. We illustrated various approaches using previously published data on 3264 participants from the Framingham Heart Study, where 183 developed coronary heart disease (10-year risk 5.6%). We considered performance measures for the incremental value of adding HDL cholesterol to a prediction model. An initial assessment may consider statistical significance (HR = 0.65, 95% confidence interval 0.53 to 0.80; likelihood ratio p < 0.001), and distributions of predicted risks (densities or box plots) with various summary measures. A range of decision thresholds is considered in predictiveness and receiver operating characteristic curves, where the area under the curve (AUC) increased from 0.762 to 0.774 by adding HDL. We can furthermore focus on reclassification of participants with and without an event in a reclassification graph, with the continuous net reclassification improvement (NRI) as a summary measure. When we focus on one particular decision threshold, the changes in sensitivity and specificity are central. We propose a net reclassification risk graph, which allows us to focus on the number of reclassified persons and their event rates. Summary measures include the binary AUC, the two-category NRI, and decision analytic variants such as the net benefit (NB). Various graphs and summary measures can be used to assess the incremental predictive value of a marker. Important insights for impact on decision making are provided by a simple graph for the net reclassification risk. PMID:25042996
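
    For reference, two of the summary measures mentioned above in their standard forms (a sketch; the notation is ours):

    ```latex
    \mathrm{NRI} \;=\;
    \bigl[P(\mathrm{up}\mid \mathrm{event}) - P(\mathrm{down}\mid \mathrm{event})\bigr]
    \;+\;
    \bigl[P(\mathrm{down}\mid \mathrm{nonevent}) - P(\mathrm{up}\mid \mathrm{nonevent})\bigr],
    \qquad
    \mathrm{NB} \;=\; \frac{\mathrm{TP}}{n} \;-\; \frac{\mathrm{FP}}{n}\,\frac{p_t}{1-p_t},
    ```

    where up/down denote reclassification to higher/lower risk after adding the marker, TP and FP are true and false positives at the decision threshold p_t, and n is the sample size.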

  17. Graphics and Listening Comprehension.

    ERIC Educational Resources Information Center

    Ruhe, Valerie

    1996-01-01

    Examines the effectiveness of graphics as lecture comprehension supports for low-proficiency English-as-a-Second-Language (ESL) listeners. The study compared the performance of Asian students in Canada listening to an audiotape while viewing an organizational graphic with that of a control group. Findings indicate that the graphics enhanced…

  18. Application of a mixing-ratios based formulation to model mixing-driven dissolution experiments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Sanchez-Vila, Xavier; Saaltink, Maarten W.; Bussini, Michele; Berkowitz, Brian

    2009-05-01

    We address the question of how one can combine theoretical and numerical modeling approaches with limited measurements from laboratory flow cell experiments to realistically quantify salient features of complex mixing-driven multicomponent reactive transport problems in porous media. Flow cells are commonly used to examine processes affecting reactive transport through porous media, under controlled conditions. An advantage of flow cells is their suitability for relatively fast and reliable experiments, although measuring spatial distributions of a state variable within the cell is often difficult. In general, fluid is sampled only at the flow cell outlet, and concentration measurements are usually interpreted in terms of integrated reaction rates. In reactive transport problems, however, the spatial distribution of the reaction rates within the cell might be more important than the bulk integrated value. Recent advances in theoretical and numerical modeling of complex reactive transport problems [De Simoni M, Carrera J, Sanchez-Vila X, Guadagnini A. A procedure for the solution of multicomponent reactive transport problems. Water Resour Res 2005;41:W11410. doi: 10.1029/2005WR004056, De Simoni M, Sanchez-Vila X, Carrera J, Saaltink MW. A mixing ratios-based formulation for multicomponent reactive transport. Water Resour Res 2007;43:W07419. doi: 10.1029/2006WR005256] result in a methodology conducive to a simple exact expression for the space-time distribution of reaction rates in the presence of homogeneous or heterogeneous reactions in chemical equilibrium. The key points of the methodology are that a general reactive transport problem, involving a relatively high number of chemical species, can be formulated in terms of a set of decoupled partial differential equations, and the amount of reactants evolving into products depends on the rate at which solutions mix. The main objective of the current study is to show how this methodology can be used in conjunction

  19. Modeling of Transient Flow Mixing of Streams Injected into a Mixing Chamber

    NASA Technical Reports Server (NTRS)

    Voytovych, Dmytro M.; Merkle, Charles L.; Lucht, Robert P.; Hulka, James R.; Jones, Gregg W.

    2006-01-01

    Ignition is recognized as one of the critical drivers in the reliability of multiple-start rocket engines. Residual combustion products from previous engine operation can condense on valves and related structures, thereby creating difficulties for subsequent starting procedures. Alternative ignition methods that require fewer valves can mitigate the valve reliability problem, but require improved understanding of the spatial and temporal propellant distribution in the pre-ignition chamber. Current design tools based mainly on one-dimensional analysis and empirical models cannot predict local details of the injection and ignition processes. The goal of this work is to evaluate the capability of modern computational fluid dynamics (CFD) tools in predicting the transient flow mixing in the pre-ignition environment by comparing the results with experimental data. This study is part of a program to improve analytical methods and methodologies to analyze reliability and durability of combustion devices. In the present paper we describe a series of detailed computational simulations of the unsteady mixing events as the cold propellants are first introduced into the chamber, as a first step in providing this necessary environmental description. The present computational modeling represents a complement to parallel experimental simulations and includes comparisons with experimental results from that effort. A large number of rocket engine ignition studies have been previously reported. Here we limit our discussion to the work discussed in Refs. 2, 3 and 4, which is both similar to and different from the present approach. The similarities arise from the fact that both efforts involve detailed experimental/computational simulations of the ignition problem. The differences arise from the underlying philosophy of the two endeavors. The approach in Refs. 2 to 4 is a classical ignition study in which the focus is on the response of a propellant mixture to an ignition source, with

  20. A Bayesian nonlinear mixed-effects disease progression model

    PubMed Central

    Kim, Seongho; Jang, Hyejeong; Wu, Dongfeng; Abrams, Judith

    2016-01-01

    A nonlinear mixed-effects approach is developed for disease progression models that incorporate variation in age in a Bayesian framework. We further generalize the probability model for sensitivity to depend on age at diagnosis, time spent in the preclinical state and sojourn time. The developed models are then applied to the Johns Hopkins Lung Project data and the Health Insurance Plan for Greater New York data using Bayesian Markov chain Monte Carlo and are compared with the estimation method that does not consider random-effects from age. Using the developed models, we obtain not only age-specific individual-level distributions, but also population-level distributions of sensitivity, sojourn time and transition probability. PMID:26798562

  1. Subgrid models for mass and thermal diffusion in turbulent mixing

    SciTech Connect

    Sharp, David H; Lim, Hyunkyung; Li, Xiao-Lin; Glimm, James G

    2008-01-01

    We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion, to define large eddy simulations (LES) that replicate the micro features observed in the direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000, and 600K. Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of Front Tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without resolving the

  2. MIXING MODELING ANALYSIS FOR SRS SALT WASTE DISPOSITION

    SciTech Connect

    Lee, S.

    2011-01-18

    Nuclear waste in Savannah River Site (SRS) waste tanks consists of three different types of waste forms. They are the lighter salt solutions referred to as supernate, the precipitated salts as salt cake, and heavier fine solids as sludge. The sludge is settled on the tank floor. About half of the residual waste radioactivity is contained in the sludge, which is only about 8 percent of the total waste volume. The mixing study evaluated here for the Salt Disposition Integration (SDI) project focuses on supernate preparations in waste tanks prior to transfer to the Salt Waste Processing Facility (SWPF) feed tank. The methods to mix and blend the contents of the SRS blend tanks were evaluated to ensure that the contents are properly blended before they are transferred from the blend tank, such as Tank 50H, to the SWPF feed tank. The work consists of two principal objectives to investigate two different pumps. One objective is to identify a suitable pumping arrangement that will adequately blend/mix two miscible liquids to obtain a uniform composition in the tank with a minimum level of sludge solid particulate in suspension. The other is to estimate the elevation in the tank at which the transfer pump inlet should be located so that the solid concentration of the entrained fluid remains below the acceptance criterion (0.09 wt% or 1200 mg/liter) during transfer operation to the SWPF. Tank 50H is a waste tank that will be used to prepare batches of salt feed for SWPF. The salt feed must be a homogeneous solution satisfying the acceptance criterion on solids entrainment during transfer operation. The work described here consists of two modeling areas. They are the mixing modeling analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended

  3. Design and Implementation of an Assessment Model for Students Entering Vocational Education Programs in the State of Colorado. Graphic Arts.

    ERIC Educational Resources Information Center

    Hartley, Nancy K.; And Others

    This basic vocational related skills assessment module in graphic arts is one of sixteen modules designed to help teachers assess and identify some of the areas in which special needs students may encounter learning difficulties. The materials in the module allow for informal assessment in three basic areas: academic skills, motor skills, and…

  4. "No One's the Boss of My Painting:" A Model of the Early Development of Artistic Graphic Representation

    ERIC Educational Resources Information Center

    Louis, Linda

    2013-01-01

    This article reports on the most recent phase of an ongoing research program that examines the artistic graphic representational behavior and paintings of children between the ages of four and seven. The goal of this research program is to articulate a contemporary account of artistic growth and to illuminate how young children's changing…

  5. Spatial Visualization Research and Theories: Their Importance in the Development of an Engineering and Technical Design Graphics Curriculum Model.

    ERIC Educational Resources Information Center

    Miller, Craig L.; Bertoline, Gary R.

    1991-01-01

    An overview that gives an introduction to the theories, terms, concepts, and prior research conducted on visualization is presented. This information is to be used as a basis for developing spatial research studies that lend support to the theory that the engineering and technical design graphics curriculum is important in the development of…

  6. Model of Mixing Layer With Multicomponent Evaporating Drops

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Le Clercq, Patrick

    2004-01-01

    A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The study is motivated by the fact that typical real petroleum fuels contain hundreds of chemical species. Previously, for the sake of computational efficiency, spray studies were performed using either models based on a single representative species or models based on surrogate fuels of at most 15 species. The present multicomponent model makes it possible to perform more realistic simulations by accounting for hundreds of chemical species in a computationally efficient manner. The model is used to perform Direct Numerical Simulations in continuing studies directed toward understanding the behavior of liquid petroleum fuel sprays. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and the drops, respectively. This representation is consistent with the expected volumetrically small loading of the drops in gas (of the order of 10⁻³), although the mass loading can be substantial because of the high ratio (of the order of 10³) between the densities of liquid and gas. The drops are treated as point sources of mass, momentum, and energy; this representation is consistent with the drop size being smaller than the Kolmogorov scale. Unsteady drag, added-mass effects, Basset history forces, and collisions between the drops are neglected, and the gas is assumed calorically perfect. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of a fuel is described probabilistically, by use of a distribution function. Distribution functions generally depend on many parameters. However, for mixtures of homologous species, the distribution can be approximated with acceptable accuracy as a sole function of the molecular weight. The mixing layer is initially laden with drops in its lower stream, and the drops are colder than the gas

  7. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value μ, while the variance σ_c^2(t) decays approximately as t^(-1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some nonlinear modifications into the corresponding pdf equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the nonlinear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
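
    The analogy invoked above is the law-of-large-numbers rate for a sample mean (a standard result, stated here for reference):

    ```latex
    \operatorname{Var}(\bar X_n) \;=\; \frac{\sigma^2}{n}
    \qquad\longleftrightarrow\qquad
    \sigma_c^2(t) \;\sim\; C\, t^{-1},
    ```

    with the caveat, noted by the authors, that scalar variance in turbulent mixing typically decays faster than this, which motivates the nonlinear modification of the pdf equation.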

  8. A model for imperfect mixing in a CSTR

    NASA Astrophysics Data System (ADS)

    Bar-Eli, Kedma; Noyes, Richard M.

    1986-09-01

    When a chemical reaction is carried out in a continuously stirred tank reactor, the behavior may be significantly affected by the efficiency with which the entering chemicals are mixed with the main contents of the reactor. We have developed a model for this effect which assumes that a feed of premixed chemicals remains for a while in totally segregated packets before being rapidly and perfectly mixed with the rest of the system. The duration of this initial segregation is affected by the efficiency of stirring in the reactor. The model has been tested by computations on a mechanism developed by Roelofs et al. for a reaction which would oscillate even in a closed system. It has also been tested by computations on the rapid autocatalytic oxidation of cerous ion by bromate in the presence of a small amount of bromide. The results are qualitatively consistent with effects observed experimentally and in computations with other models, including a somewhat similar one by Kumpinsky and Epstein. More quantitative tests should distinguish whether two streams of chemicals enter the reactor independently or are premixed before they do so.

  9. GENERALIZED PARTIALLY LINEAR MIXED-EFFECTS MODELS INCORPORATING MISMEASURED COVARIATES

    PubMed Central

    Liang, Hua

    2009-01-01

    In this article we consider a semiparametric generalized mixed-effects model, and propose combining local linear regression, and penalized quasilikelihood and local quasilikelihood techniques to estimate both population and individual parameters and nonparametric curves. The proposed estimators take into account the local correlation structure of the longitudinal data. We establish normality for the estimators of the parameter and asymptotic expansion for the estimators of the nonparametric part. For practical implementation, we propose an appropriate algorithm. We also consider the measurement error problem in covariates in our model, and suggest a strategy for adjusting the effects of measurement errors. We apply the proposed models and methods to study the relation between virologic and immunologic responses in AIDS clinical trials, in which virologic response is classified into binary variables. A dataset from an AIDS clinical study is analyzed. PMID:20160899

  10. A Mixed Model for Real-Time, Interactive Simulation of a Cable Passing Through Several Pulleys

    SciTech Connect

    Garcia-Fernandez, Ignacio; Pla-Castells, Marta; Martinez-Dura, Rafael J.

    2007-09-06

    A model of a cable and pulleys is presented that can be used in Real Time Computer Graphics applications. The model is formulated by the coupling of a damped spring and a variable coefficient wave equation, and can be integrated in more complex mechanical models of lift systems, such as cranes, elevators, etc. with a high degree of interactivity.
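
    A minimal sketch of the damped-spring half of such a coupled model, using semi-implicit Euler as is common in real-time graphics; the integrator choice and all constants are our assumptions, and the coupling to the variable-coefficient wave equation for the cable is omitted:

    ```python
    # Semi-implicit (symplectic) Euler for a damped spring, the kind of
    # cheap, stable integrator used in real-time applications. All
    # constants are illustrative assumptions.
    def damped_spring_step(x, v, dt, k=50.0, c=2.0, m=1.0, rest=0.0):
        a = (-k * (x - rest) - c * v) / m  # spring + damping acceleration
        v = v + dt * a                     # update velocity first ...
        x = x + dt * v                     # ... then position (semi-implicit)
        return x, v

    x, v = 1.0, 0.0
    for _ in range(600):                   # 600 steps at 60 Hz = 10 s
        x, v = damped_spring_step(x, v, dt=1.0 / 60.0)
    ```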

  11. Graphical programming at Sandia National Laboratories

    SciTech Connect

    McDonald, M.J.; Palmquist, R.D.; Desjarlais, L.

    1993-09-01

    Sandia has developed an advanced operational control system approach, called Graphical Programming, to design, program, and operate robotic systems. The Graphical Programming approach produces robot systems that are faster to develop and use, safer in operation, and cheaper overall than alternative teleoperation or autonomous robot control systems. Graphical Programming also provides an efficient and easy-to-use interface to traditional robot systems for use in setup and programming tasks. This paper provides an overview of the Graphical Programming approach and lists key features of Graphical Programming systems. Graphical Programming uses 3-D visualization and simulation software with intuitive operator interfaces for the programming and control of complex robotic systems. Graphical Programming Supervisor software modules allow an operator to command and simulate complex tasks in a graphic preview mode and, when acceptable, command the actual robots and monitor their motions with the graphic system. Graphical Programming Supervisors maintain registration with the real world and allow the robot to perform tasks that cannot be accurately represented with models alone by using a combination of model and sensor-based control.

  12. The Systems Biology Graphical Notation.

    PubMed

    Le Novère, Nicolas; Hucka, Michael; Mi, Huaiyu; Moodie, Stuart; Schreiber, Falk; Sorokin, Anatoly; Demir, Emek; Wegner, Katja; Aladjem, Mirit I; Wimalaratne, Sarala M; Bergman, Frank T; Gauges, Ralph; Ghazal, Peter; Kawaji, Hideya; Li, Lu; Matsuoka, Yukiko; Villéger, Alice; Boyd, Sarah E; Calzone, Laurence; Courtot, Melanie; Dogrusoz, Ugur; Freeman, Tom C; Funahashi, Akira; Ghosh, Samik; Jouraku, Akiya; Kim, Sohyoung; Kolpakov, Fedor; Luna, Augustin; Sahle, Sven; Schmidt, Esther; Watterson, Steven; Wu, Guanming; Goryanin, Igor; Kell, Douglas B; Sander, Chris; Sauro, Herbert; Snoep, Jacky L; Kohn, Kurt; Kitano, Hiroaki

    2009-08-01

    Circuit diagrams and Unified Modeling Language diagrams are just two examples of standard visual languages that help accelerate work by promoting regularity, removing ambiguity and enabling software tool support for communication of complex information. Ironically, despite having one of the highest ratios of graphical to textual information, biology still lacks standard graphical notations. The recent deluge of biological knowledge makes addressing this deficit a pressing concern. Toward this goal, we present the Systems Biology Graphical Notation (SBGN), a visual language developed by a community of biochemists, modelers and computer scientists. SBGN consists of three complementary languages: process diagram, entity relationship diagram and activity flow diagram. Together they enable scientists to represent networks of biochemical interactions in a standard, unambiguous way. We believe that SBGN will foster efficient and accurate representation, visualization, storage, exchange and reuse of information on all kinds of biological knowledge, from gene regulation, to metabolism, to cellular signaling.

  13. Relativistic hydrodynamics on graphic cards

    NASA Astrophysics Data System (ADS)

    Gerhard, Jochen; Lindenstruth, Volker; Bleicher, Marcus

    2013-02-01

    We show how to accelerate relativistic hydrodynamics simulations using graphic cards (graphic processing units, GPUs). These improvements are of highest relevance, e.g., to the field of high-energy nucleus-nucleus collisions at RHIC and LHC, where (ideal and dissipative) relativistic hydrodynamics is used to calculate the evolution of hot and dense QCD matter. The results reported here are based on the Sharp And Smooth Transport Algorithm (SHASTA), which is employed in many hydrodynamical models and hybrid simulation packages, e.g. the Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). We have redesigned the SHASTA using the OpenCL computing framework to work on accelerators like graphic processing units (GPUs) as well as on multi-core processors. With the redesign of the algorithm, the hydrodynamic calculations have been accelerated by a factor of 160, allowing for event-by-event calculations and better statistics in hybrid calculations.

  14. Estimating anatomical trajectories with Bayesian mixed-effects modeling

    PubMed Central

    Ziegler, G.; Penny, W.D.; Ridgway, G.R.; Ourselin, S.; Friston, K.J.

    2015-01-01

    We introduce a mass-univariate framework for the analysis of whole-brain structural trajectories using longitudinal Voxel-Based Morphometry data and Bayesian inference. Our approach to developmental and aging longitudinal studies characterizes heterogeneous structural growth/decline between and within groups. In particular, we propose a probabilistic generative model that parameterizes individual and ensemble average changes in brain structure using linear mixed-effects models of age and subject-specific covariates. Model inversion uses Expectation Maximization (EM), while voxelwise (empirical) priors on the size of individual differences are estimated from the data. Bayesian inference on individual and group trajectories is realized using Posterior Probability Maps (PPM). In addition to parameter inference, the framework affords comparisons of models with varying combinations of model order for fixed and random effects using model evidence. We validate the model in simulations and real MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. We further demonstrate how subject specific characteristics contribute to individual differences in longitudinal volume changes in healthy subjects, Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD). PMID:26190405

  15. Estimating anatomical trajectories with Bayesian mixed-effects modeling.

    PubMed

    Ziegler, G; Penny, W D; Ridgway, G R; Ourselin, S; Friston, K J

    2015-11-01

    We introduce a mass-univariate framework for the analysis of whole-brain structural trajectories using longitudinal Voxel-Based Morphometry data and Bayesian inference. Our approach to developmental and aging longitudinal studies characterizes heterogeneous structural growth/decline between and within groups. In particular, we propose a probabilistic generative model that parameterizes individual and ensemble average changes in brain structure using linear mixed-effects models of age and subject-specific covariates. Model inversion uses Expectation Maximization (EM), while voxelwise (empirical) priors on the size of individual differences are estimated from the data. Bayesian inference on individual and group trajectories is realized using Posterior Probability Maps (PPM). In addition to parameter inference, the framework affords comparisons of models with varying combinations of model order for fixed and random effects using model evidence. We validate the model in simulations and real MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. We further demonstrate how subject specific characteristics contribute to individual differences in longitudinal volume changes in healthy subjects, Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD).

  16. A graphical modeling tool for evaluating nitrogen loading to and nitrate transport in ground water in the mid-Snake region, south-central Idaho

    USGS Publications Warehouse

    Clark, David W.; Skinner, Kenneth D.; Pollock, David W.

    2006-01-01

    A flow and transport model was created with a graphical user interface to simplify the evaluation of nitrogen loading and nitrate transport in the mid-Snake region in south-central Idaho. This model and interface package, the Snake River Nitrate Scenario Simulator, uses the U.S. Geological Survey's MODFLOW 2000 and MOC3D models. The interface, which is enabled for use with geographic information systems (GIS), was created using ESRI's royalty-free MapObjects LT software. The interface lets users view initial nitrogen-loading conditions (representing conditions as of 1998), alter the nitrogen loading within selected zones by specifying a multiplication factor and applying it to the initial condition, run the flow and transport model, and view a graphical representation of the modeling results. The flow and transport model of the Snake River Nitrate Scenario Simulator was created by rediscretizing and recalibrating a clipped portion of an existing regional flow model. The new subregional model was recalibrated with newly available water-level data and spring and ground-water nitrate concentration data for the study area. An updated nitrogen input GIS layer controls the application of nitrogen to the flow and transport model. Users can alter the nitrogen application to the flow and transport model by altering the nitrogen load in predefined spatial zones contained within similar political, hydrologic, and size-constrained boundaries.

  17. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to that of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.

  18. A modified EM algorithm for estimation in generalized mixed models.

    PubMed

    Steele, B M

    1996-12-01

    Application of the EM algorithm for estimation in the generalized mixed model has been largely unsuccessful because the E-step cannot be determined in most instances. The E-step computes the conditional expectation of the complete data log-likelihood and when the random effect distribution is normal, this expectation remains an intractable integral. The problem can be approached by numerical or analytic approximations; however, the computational burden imposed by numerical integration methods and the absence of an accurate analytic approximation have limited the use of the EM algorithm. In this paper, Laplace's method is adapted for analytic approximation within the E-step. The proposed algorithm is computationally straightforward and retains much of the conceptual simplicity of the conventional EM algorithm, although the usual convergence properties are not guaranteed. The proposed algorithm accommodates multiple random factors and random effect distributions besides the normal, e.g., the log-gamma distribution. Parameter estimates obtained for several data sets and through simulation show that this modified EM algorithm compares favorably with other generalized mixed model methods.
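
    The device at the core of the proposal is Laplace's method for the integrals arising in the E-step (a standard statement, in our notation):

    ```latex
    \int h(b)\, e^{-n\, g(b)}\, db
    \;\approx\;
    h(\hat b)\, e^{-n\, g(\hat b)}
    \left(\frac{2\pi}{n}\right)^{q/2}
    \bigl|\, g''(\hat b) \,\bigr|^{-1/2},
    ```

    where \hat b minimizes g, q = dim(b), and |g''(\hat b)| is the determinant of the Hessian at \hat b; applying this inside the conditional expectation yields a closed-form approximate E-step.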

  19. Fermion flavor mixing in models with dynamical mass generation

    SciTech Connect

    Benes, Petr

    2010-03-15

    We present a model-independent method of dealing with fermion flavor mixing in the case when instead of constant, momentum-independent mass matrices one has rather momentum-dependent self-energies. This situation is typical for strongly coupled models of dynamical fermion mass generation. We demonstrate our approach on the example of quark mixing. We show that quark self-energies with a generic momentum dependence lead to an effective Cabibbo-Kobayashi-Maskawa matrix, which turns out to be in general nonunitary, in accordance with previous claims of other authors, and to nontrivial flavor changing electromagnetic and neutral currents. We also discuss some conceptual consequences of the momentum-dependent self-energies and show that in such a case the interaction basis and the mass basis are not related by a unitary transformation. In fact, we argue that the latter is merely an effective concept, in a specified sense. While focusing mainly on the fermionic self-energies, we also study the effects of momentum-dependent radiative corrections to the gauge bosons and to the proper vertices. Our approach is based on an application of the Lehmann-Symanzik-Zimmermann reduction formula and for the special case of constant self-energies it gives the same results as the standard approach based on the diagonalization of mass matrices.

  1. Mixed neutralino dark matter in nonuniversal gaugino mass models

    SciTech Connect

    Chattopadhyay, Utpal; Das, Debottam; Roy, D. P.

    2009-05-01

    We have considered nonuniversal gaugino mass models of supergravity, arising from a mixture of two superfield contributions to the gauge kinetic term, belonging to a singlet and a nonsinglet representation of the grand unified theory group. In particular we analyze two models, where the contributing superfields belong to the singlet and the 75-dimensional, and the singlet and the 200-dimensional representations of SU(5). The resulting lightest superparticle is a mixed bino-Higgsino state in the first case and a mixed bino-wino-Higgsino state in the second. In both cases one obtains cosmologically compatible dark matter relic density over broad regions of the parameter space. We predict promising signals in direct dark matter detection experiments as well as in indirect detection experiments via high energy neutrinos coming from their pair annihilation in the Sun. Besides, we find interesting γ-ray signal rates that will be probed by the Fermi Gamma-ray Space Telescope. We also expect promising collider signals at LHC in both cases.

  2. Study of a mixed dispersal population dynamics model

    SciTech Connect

    Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu -Yen; Klymko, Christine F.; Thomas, Evelyn; Zhao, Bingyu

    2015-07-10

    In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how long time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.

  3. A mixed damage model for unsaturated porous media

    NASA Astrophysics Data System (ADS)

    Arson, Chloé; Gatmiri, Behrouz

    2009-02-01

    The aim of this study is to present a framework for the modeling of damage in continuous unsaturated porous geomaterials. The damage variable is a second-order tensor. The model is formulated in terms of net stress and suction as independent state variables. Correspondingly, the strain tensor is split into two independent thermodynamic strain components. The proposed framework mixes micro-mechanical and phenomenological approaches. On the one hand, the effective stress concept of Continuum Damage Mechanics is used in order to compute the damaged rigidities. On the other hand, the concept of equivalent mechanical state is introduced in order to get a simple phenomenological formulation of the behavior laws. Cracking effects are also taken into account in the fluid transfer laws.

  4. Study of a mixed dispersal population dynamics model

    DOE PAGES Beta

    Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu -Yen; Klymko, Christine F.; Thomas, Evelyn; Zhao, Bingyu

    2016-08-27

    In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how long time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.

  5. A mixing evolution model for bidirectional microblog user networks

    NASA Astrophysics Data System (ADS)

    Yuan, Wei-Guo; Liu, Yun

    2015-08-01

    Microblogs have been widely used as a new form of online social networking. Based on user profile data collected from Sina Weibo, we find that the number of bidirectional friends of microblog users approximately follows a lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which are not only small-world and scale-free but also exhibit some special properties, such as a double power-law degree distribution, disassortativity, and hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find that both of their community-size distributions follow an exponential distribution. Based on the empirical analysis, we present a novel evolving network model with mixed connection rules, including lognormal-fitness preferential attachment and random attachment, nearest-neighbor interconnection within the same community, and global random association between different communities. The simulation results show that our model is consistent with the real networks in many topological features.

  6. Linear mixing model applied to coarse resolution satellite data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of unmixing techniques applied to coarse-resolution data for global studies.
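
    A hedged sketch of constrained least-squares unmixing of the kind described above: solve R f ≈ r with f ≥ 0, with the sum-to-one constraint imposed by appending a weighted row of ones (a common device, assumed here rather than taken from the paper). Endmember and pixel values are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    R = np.array([[0.05, 0.25, 0.02],    # channel 1 reflectance of 3 endmembers
                  [0.30, 0.45, 0.05],    # channel 2 (vegetation, soil, shade)
                  [0.12, 0.35, 0.03]])   # channel 3 (invented values)
    r = np.array([0.18, 0.33, 0.20])     # observed pixel reflectances

    w = 100.0  # weight enforcing sum(f) = 1 approximately
    A = np.vstack([R, w * np.ones(3)])
    b = np.append(r, w)
    res = lsq_linear(A, b, bounds=(0.0, 1.0))
    print(res.x, res.x.sum())            # endmember fractions, summing to ~1
    ```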

  7. A phase mixing model for the frequency-doubling illusion.

    PubMed

    Wielaard, James; Smith, R Theodore

    2013-10-01

    We introduce a temporal phase mixing model for a description of the frequency-doubling illusion (FDI). The model is generic in the sense that it can be set to refer to retinal ganglion cells, lateral geniculate cells, as well as simple cells in the primary visual cortex (V1). Model parameters, however, strongly suggest that the FDI originates in the cortex. The model shows how noise in the response phases of cells in V1, or in further processing of these phases, easily produces observed behavior of FDI onset as a function of spatiotemporal frequencies. It also shows how this noise can accommodate physiologically plausible spatial delays in comparing neural signals over a distance. The model offers an explanation for the disappearance of the FDI at sufficiently high spatial frequencies via increasingly correlated coding of neighboring grating stripes. Further, when the FDI is equated to vanishing perceptual discrimination between asynchronous contrast-reversal gratings, the model proposes the possibility that the FDI shows a resonance behavior at sufficiently high spatial frequencies, by which it is alternately perceived and not perceived in sequential temporal frequency bands.

  8. Estimating preferential flow in karstic aquifers using statistical mixed models.

    PubMed

    Anaya, Angel A; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J; Meeker, John D; Alshawabkeh, Akram N

    2014-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. The experimental work involves a series of flow and tracer injections, with hydraulic and tracer responses monitored spatially and temporally. Statistical mixed models (SMMs) are applied to the hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting that a greater volume of the system is flushed by flowing water at higher rates. The spatial and temporal distribution of tracer concentrations indicates the presence of both conduit-like and diffuse transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the SMMs used in the study.
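    To illustrate the kind of statistical mixed model involved, the sketch below fits a random-intercept model to simulated hydraulic-head data, letting each monitoring port carry its own random effect; all variable names and data-generating values are hypothetical, not from the experiment.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
ports = np.repeat(np.arange(12), 20)          # 12 monitoring ports, 20 obs each
flow = rng.uniform(0.5, 5.0, ports.size)      # imposed flow rate (arbitrary units)
port_effect = rng.normal(0, 0.8, 12)[ports]   # port-to-port heterogeneity
head = 2.0 + 0.6 * flow + port_effect + rng.normal(0, 0.3, ports.size)
df = pd.DataFrame({"head": head, "flow": flow, "port": ports})

# A random intercept per monitoring location captures spatially
# heterogeneous (preferential-flow-like) hydraulic responses.
fit = smf.mixedlm("head ~ flow", df, groups=df["port"]).fit()
print(fit.summary())
```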

  9. A Mixed Approach for Modeling Blood Flow in Brain Microcirculation

    NASA Astrophysics Data System (ADS)

    Lorthois, Sylvie; Peyrounette, Myriam; Davit, Yohan; Quintard, Michel; Groupe d'Etude sur les Milieux Poreux Team

    2015-11-01

    Consistent with its distribution and exchange functions, the vascular system of the human brain cortex is a superposition of two components: at small scale, a homogeneous and space-filling mesh-like capillary network; at large scale, quasi-fractal branched veins and arteries. From a modeling perspective, this is the superposition of: (a) a continuum model resulting from the homogenization of slow transport in the small-scale capillary network; and (b) a discrete network approach describing fast transport in the arteries and veins, which cannot be homogenized because of their fractal nature. This problem is analogous to that of fast conducting wells embedded in a reservoir rock in petroleum engineering. An efficient method to reduce the computational cost is to use relatively large grid blocks for the continuum model, which makes it difficult to accurately couple both components. We solve this issue by adapting the ``well model'' concept used in petroleum engineering to brain-specific 3D situations. We obtain a single linear system describing the discrete network, the continuum and the well model. Results are presented for realistic arterial and venous geometries. The mixed approach is compared with full network models including various idealized capillary networks of known permeability. ERC BrainMicroFlow GA615102.

  10. AFC3D: A 3D graphical tool to model assimilation and fractional crystallization with and without recharge in the R environment

    NASA Astrophysics Data System (ADS)

    Guzmán, Silvina; Carniel, Roberto; Caffe, Pablo J.

    2014-03-01

    AFC3D is an original free graphical software package developed in the R scientific environment and dedicated to the modelling of assimilation and fractional crystallization without (AFC) and with (AFC-r) recharge, facilitating the search for solutions of the equations originally proposed by DePaolo (1981, 1985) and first solved graphically by Aitcheson and Forrest (1994). The software allows a graphical 3D representation of ρ (mass of assimilated crust/mass of original magma) as a function of r (rate of crustal assimilation/rate of fractional crystallization) and β (rate of magma replenishment/rate of assimilation) for each element/isotope, finding a coherent set of (r, β, ρ) parameter triples in a mostly automated way. Mathematically optimized solutions are derived, which can and should then be evaluated from a geological and petrological point of view by the end user. This contribution presents the software together with a series of models published in the literature, which are discussed as case studies and whose solutions are in some cases improved on the basis of the results provided by the software.
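    For orientation, the trace-element part of DePaolo's (1981) AFC formulation is often quoted in the form sketched below; AFC3D itself runs in R, so this generic Python rendering is only an illustration, and the equation should be verified against the original before use.

```python
import numpy as np

def afc_concentration(F, Cm0, Ca, r, D):
    """Trace-element concentration in a magma undergoing AFC, in the
    commonly quoted form of DePaolo (1981) -- verify against the source.
    F: fraction of magma remaining; r: ratio of assimilation rate to
    fractional-crystallization rate; D: bulk partition coefficient."""
    z = (r + D - 1.0) / (r - 1.0)
    return Cm0 * F**(-z) + (r / (r - 1.0)) * (Ca / z) * (1.0 - F**(-z))

F = np.linspace(1.0, 0.5, 6)    # progressive crystallization
print(afc_concentration(F, Cm0=100.0, Ca=300.0, r=0.3, D=0.1))
```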

  11. The gradient function as an exploratory goodness-of-fit assessment of the random-effects distribution in mixed models.

    PubMed

    Verbeke, Geert; Molenberghs, Geert

    2013-07-01

    Inference in mixed models is often based on the marginal distribution obtained by integrating the random effects over a pre-specified, often parametric, distribution. In this paper, we present the so-called gradient function as a simple graphical exploratory diagnostic tool to assess whether the assumed random-effects distribution produces an adequate fit to the data, in terms of marginal likelihood. The method requires no calculations beyond those needed to fit the model, and can be applied to a wide range of mixed models (linear, generalized linear, non-linear), with univariate as well as multivariate random effects, as long as the distribution of the outcomes conditional on the random effects is correctly specified. In the case of model misspecification, the gradient function gives an important, albeit informal, indication of how the model can be improved in terms of the random-effects distribution. The diagnostic value of the gradient function is extensively illustrated with simulated examples, as well as in the analysis of a real longitudinal study with binary outcomes.
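    As we read the construction, the gradient function evaluates, at each candidate random-effects value u, the average ratio of the conditional likelihood f(y_i | u) to the marginal likelihood f(y_i); values well above 1 flag regions where the assumed random-effects distribution puts too little mass. The sketch below illustrates this numerically for a deliberately misspecified normal assumption; every detail here is an assumption for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
# Random-intercept data whose true random-effects law is bimodal.
n, m = 200, 5
b_true = np.where(rng.random(n) < 0.5, -2.0, 2.0) + rng.normal(0, 0.3, n)
y = b_true[:, None] + rng.normal(0, 1.0, (n, m))

sigma_b, sigma_e = 2.0, 1.0   # assumed (misspecified) model: b ~ N(0, sigma_b^2)

def cond_lik(u):
    """f(y_i | b = u) for each subject (product over repeated measures)."""
    return norm.pdf(y, loc=u, scale=sigma_e).prod(axis=1)

# Marginal likelihood of each subject under the assumed normal mixing
# distribution, via quadrature over a random-effects grid.
grid = np.linspace(-6, 6, 400)
L = np.array([cond_lik(u) for u in grid])              # (grid, subjects)
marg = trapezoid(L * norm.pdf(grid, scale=sigma_b)[:, None], grid, axis=0)

def gradient_function(u):
    return np.mean(cond_lik(u) / marg)

# Peaks well above 1 near the true modes expose the misspecification.
for u in (-2.0, 0.0, 2.0):
    print(u, round(gradient_function(u), 2))
```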

  12. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression…

  13. Mesoscale Modeling During Mixed-Phase Arctic Cloud Experiment

    SciTech Connect

    Avramov, A.; Harrington, J.Y.; Verlinde, J.

    2005-03-18

    Mixed-phase arctic stratus clouds are the predominant cloud type in the Arctic (Curry et al. 2000) and through various feedback mechanisms exert a strong influence on the Arctic climate. Perhaps one of the most intriguing of their features is that they tend to have liquid tops that precipitate ice. Despite the fact that this situation is colloidally unstable, these cloud systems are quite long lived - from a few days to over a couple of weeks. It has been hypothesized that mixed-phase clouds are maintained through a balance between liquid water condensation resulting from cloud-top radiative cooling and ice removal by precipitation (Pinto 1998; Harrington et al. 1999). In their modeling study, Harrington et al. (1999) found that the maintenance of this balance depends strongly on the ambient concentration of ice forming nuclei (IFN). In a follow-up study, Jiang et al. (2002), using only 30% of the IFN concentration predicted by the Meyers et al. (1992) IFN parameterization, were able to obtain results similar to the observations reported by Pinto (1998). The IFN concentration measurements collected during the Mixed-Phase Arctic Cloud Experiment (M-PACE), conducted in October 2004 over the North Slope of Alaska and the Beaufort Sea (Verlinde et al. 2005), also showed much lower values than those predicted (Prenne, pers. comm.) by currently accepted ice nucleation parameterizations (e.g. Meyers et al. 1992). The goal of this study is to use the extensive IFN data taken during M-PACE to examine what effects low IFN concentrations have on mesoscale cloud structure and coastal dynamics.

  14. Modeling of mixed-mode chromatography of peptides.

    PubMed

    Bernardi, Susanna; Gétaz, David; Forrer, Nicola; Morbidelli, Massimo

    2013-03-29

    Mixed-mode chromatographic materials are increasingly used for the purification of biomolecules such as peptides and proteins. In many instances they exhibit better selectivity than classical materials and therefore improve purification efficiency. In this work, a model has been developed to describe biomolecule retention in cation-exchange/reversed-phase (CIEX-RP) mixed-mode columns under dilute conditions. The model accounts for the effect of the salt and organic modifier concentrations on the biomolecule Henry coefficient through three parameters: α, β and γ. The α parameter is related to the adsorption strength and ligand density, β represents the number of organic modifier molecules necessary to displace one adsorbed biomolecule, and γ represents the number of salt molecules necessary to desorb one biomolecule. The latter parameter is strictly related to the number of charges on the biomolecule surface interacting with the ion-exchange ligands, and it is shown experimentally that its value is close to the biomolecule net charge. The model's reliability has been validated with a large set of experimental data, including retention times of two different peptides (goserelin and insulin) on five columns: a reversed-phase C8 column and four CIEX-RP columns with different percentages of sulfonic groups and various concentrations of the salt and organic modifier. It has been found that the percentage of sulfonic groups on the surface strongly affects peptide adsorption strength; in particular, in the cases investigated, a CIEX ligand density around 0.04 μmol/m² leads to optimal retention values. PMID:23433883
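    The abstract does not give the model's exact equation; a stoichiometric-displacement-style log-linear form is, however, consistent with the roles described for α, β and γ. The sketch below fits such an assumed form to hypothetical retention data and should not be read as the paper's actual expression.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_henry(X, ln_alpha, beta, gamma):
    """Assumed form: ln H = ln(alpha) - beta*ln(c_mod) - gamma*ln(c_salt),
    so beta and gamma act as displacement stoichiometries, matching the
    abstract's description of the parameters."""
    c_mod, c_salt = X
    return ln_alpha - beta * np.log(c_mod) - gamma * np.log(c_salt)

# Hypothetical retention data for one peptide on one CIEX-RP column.
c_mod  = np.array([0.10, 0.15, 0.20, 0.10, 0.15, 0.20])   # organic modifier
c_salt = np.array([0.05, 0.05, 0.05, 0.20, 0.20, 0.20])   # salt
lnH    = np.array([4.1, 3.2, 2.6, 2.9, 2.0, 1.4])         # measured ln(Henry)

params, _ = curve_fit(log_henry, (c_mod, c_salt), lnH)
print(dict(zip(("ln_alpha", "beta", "gamma"), params.round(2))))
```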

  15. A new unsteady mixing model to predict NO(x) production during rapid mixing in a dual-stage combustor

    NASA Technical Reports Server (NTRS)

    Menon, Suresh; Mcmurtry, Patrick A.; Kerstein, Alan R.; Chen, J.-Y.

    1992-01-01

    An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A dual-stage combustor with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary stage product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the secondary stage. Numerical design studies using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used in a stand-alone mode to study mixing and combustion in hydrogen-air nonpremixed jet flames. NO(x) production in these jet flames was also predicted. Comparison of the computed results with experimental data shows good agreement, thereby providing validation of the mixing model.

  16. Efficient material flow in mixed model assembly lines.

    PubMed

    Alnahhal, Mohammed; Noche, Bernd

    2013-01-01

    In this study, material flow from decentralized supermarkets to stations in mixed model assembly lines using tow (tugger) trains is investigated. Train routing, scheduling, and loading problems are solved in parallel to minimize the number of trains, the variability in loading and in route lengths, and line-side inventory holding costs. The general framework for solving these problems in parallel combines analytical equations, Dynamic Programming (DP), and Mixed Integer Programming (MIP). Matlab in conjunction with LP-solve software was used to formulate the problem, and an example is presented to explain the idea. Results, obtained in very short CPU time, showed the effect of using a time buffer among routes on the feasible space and on the optimal solution. Results also showed the effect of the loading-variability objective on the outcomes of routing, scheduling, and loading. Moreover, the results showed the importance of considering the maximum line-side inventory together with the capacity of the train when finding the optimal solution. PMID:24024101
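    As a toy illustration of just the loading subproblem (the paper's full formulation combining analytical equations, DP and MIP is not reproduced here), the sketch below assigns station demands to capacity-limited tugger trains while minimizing the number of trains used; all instance data and names are hypothetical.

```python
import pulp

# Hypothetical parts demand per station per cycle, and train capacity in bins.
demand = {"S1": 4, "S2": 3, "S3": 5, "S4": 2, "S5": 4}
trains, cap = ["T1", "T2", "T3"], 8

prob = pulp.LpProblem("tugger_loading", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (demand, trains), cat="Binary")
u = pulp.LpVariable.dicts("use", trains, cat="Binary")

prob += pulp.lpSum(u[t] for t in trains)            # minimize trains used
for s in demand:                                    # each station served once
    prob += pulp.lpSum(x[s][t] for t in trains) == 1
for t in trains:                                    # capacity, linked to usage
    prob += pulp.lpSum(demand[s] * x[s][t] for s in demand) <= cap * u[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in trains:
    print(t, [s for s in demand if x[s][t].value() == 1])
```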

  17. The pits and falls of graphical presentation.

    PubMed

    Sperandei, Sandro

    2014-01-01

    Graphics are powerful tools for communicating research results and for gaining information from data. However, researchers should be careful when deciding which data to plot and which type of graphic to use, as well as other details. The consequences of bad decisions range from making research results unclear to distorting them, through the creation of "chartjunk" with useless information. This paper is not another tutorial about "good graphics" and "bad graphics". Instead, it presents guidelines for the graphic presentation of research results and some uncommon but useful examples for communicating basic and complex data types, especially multivariate model results, which are commonly presented only in tables. In the end, there are no answers here, just ideas meant to inspire others to create their own graphics.

  18. Mixed-Effects Modeling with Crossed Random Effects for Subjects and Items

    ERIC Educational Resources Information Center

    Baayen, R. H.; Davidson, D. J.; Bates, D. M.

    2008-01-01

    This paper provides an introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixed-effects analyses compared to…

  19. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

    SciTech Connect

    JACKSON VL

    2011-08-31

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  20. Bayesian Gaussian Copula Factor Models for Mixed Data

    PubMed Central

    Murray, Jared S.; Dunson, David B.; Carin, Lawrence; Lucas, Joseph E.

    2013-01-01

    Gaussian factor models have proven widely useful for parsimoniously characterizing dependence in multivariate data. There is a rich literature on their extension to mixed categorical and continuous variables, using latent Gaussian variables or through generalized latent trait models accommodating measurements in the exponential family. However, when generalizing to non-Gaussian measured variables, the latent variables typically influence both the dependence structure and the form of the marginal distributions, complicating interpretation and introducing artifacts. To address this problem we propose a novel class of Bayesian Gaussian copula factor models which decouple the latent factors from the marginal distributions. A semiparametric specification for the marginals based on the extended rank likelihood yields straightforward implementation and substantial computational gains. We provide new theoretical and empirical justifications for using this likelihood in Bayesian inference. We propose new default priors for the factor loadings and develop efficient parameter-expanded Gibbs sampling for posterior computation. The methods are evaluated through simulations and applied to a dataset in political science. The models in this paper are implemented in the R package bfa. PMID:23990691
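    The decoupling idea is easy to demonstrate by simulation: a Gaussian factor model fixes the dependence on a latent scale, and arbitrary margins are then obtained by pushing the Gaussian scores through chosen inverse CDFs. The sketch below is a generic illustration of that construction, not the paper's estimation procedure (which uses the extended rank likelihood and Gibbs sampling in the R package bfa).

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(2)
n, p, k = 500, 4, 1
Lam = rng.uniform(0.5, 0.9, (p, k))     # factor loadings
eta = rng.normal(size=(n, k))           # latent factors
# Unit-variance latent Gaussians carrying the factor dependence.
z = eta @ Lam.T + rng.normal(size=(n, p)) * np.sqrt(1 - (Lam**2).sum(axis=1))
u = norm.cdf(z)                         # copula scale: uniform margins

# Margins chosen freely, independently of the dependence structure:
y_cont  = norm.ppf(u[:, 0], loc=10, scale=2)       # continuous
y_count = poisson.ppf(u[:, 1], mu=3)               # count
y_bin   = (u[:, 2] > 0.7).astype(int)              # binary
y_ord   = np.digitize(u[:, 3], [0.3, 0.6, 0.85])   # ordinal

# All four variables inherit the same latent factor dependence.
print(np.corrcoef(np.c_[y_cont, y_count, y_bin, y_ord].T).round(2))
```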

  1. Subgrid models for mass and thermal diffusion in turbulent mixing

    NASA Astrophysics Data System (ADS)

    Lim, H.; Yu, Y.; Glimm, J.; Li, X.-L.; Sharp, D. H.

    2010-12-01

    We propose a new method for the large eddy simulation (LES) of turbulent mixing flows. The method yields convergent probability distribution functions (PDFs) for temperature and concentration and a chemical reaction rate when applied to reshocked Richtmyer-Meshkov (RM) unstable flows. Because such a mesh convergence is an unusual and perhaps original capability for LES of RM flows, we review previous validation studies of the principal components of the algorithm. The components are (i) a front tracking code, FronTier, to control numerical mass diffusion and (ii) dynamic subgrid scale (SGS) models to compensate for unresolved scales in the LES. We also review the relevant code comparison studies. We compare our results to a simple model based on 1D diffusion, taking place in the geometry defined statistically by the interface (the 50% isoconcentration surface between the two fluids). Several conclusions important to physics could be drawn from our study. We model chemical reactions with no closure approximations beyond those in the LES of the fluid variables itself, and as with dynamic SGS models, these closures contain no adjustable parameters. The chemical reaction rate is specified by the joint PDF for temperature and concentration. We observe a bimodal distribution for the PDF and we observe significant dependence on fluid transport parameters.

  2. Bayesian Gaussian Copula Factor Models for Mixed Data.

    PubMed

    Murray, Jared S; Dunson, David B; Carin, Lawrence; Lucas, Joseph E

    2013-06-01

    Gaussian factor models have proven widely useful for parsimoniously characterizing dependence in multivariate data. There is a rich literature on their extension to mixed categorical and continuous variables, using latent Gaussian variables or through generalized latent trait models accommodating measurements in the exponential family. However, when generalizing to non-Gaussian measured variables, the latent variables typically influence both the dependence structure and the form of the marginal distributions, complicating interpretation and introducing artifacts. To address this problem we propose a novel class of Bayesian Gaussian copula factor models which decouple the latent factors from the marginal distributions. A semiparametric specification for the marginals based on the extended rank likelihood yields straightforward implementation and substantial computational gains. We provide new theoretical and empirical justifications for using this likelihood in Bayesian inference. We propose new default priors for the factor loadings and develop efficient parameter-expanded Gibbs sampling for posterior computation. The methods are evaluated through simulations and applied to a dataset in political science. The models in this paper are implemented in the R package bfa.

  3. Chemical geothermometers and mixing models for geothermal systems

    USGS Publications Warehouse

    Fournier, R.O.

    1977-01-01

    Qualitative chemical geothermometers utilize anomalous concentrations of various "indicator" elements in groundwaters, streams, soils, and soil gases to outline favorable places to explore for geothermal energy. Some of the qualitative methods, such as the delineation of mercury and helium anomalies in soil gases, do not require the presence of hot springs or fumaroles. However, these techniques may also outline fossil thermal areas that are now cold. Quantitative chemical geothermometers and mixing models can provide information about present probable minimum subsurface temperatures. Interpretation is easiest where several hot or warm springs are present in a given area. At this time the most widely used quantitative chemical geothermometers are silica, Na/K, and Na-K-Ca. © 1976.
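    Two commonly quoted calibrations give the flavor of the quantitative geothermometers mentioned; the constants below are widely cited textbook forms and should be checked against the primary sources before use.

```python
import math

def t_quartz(silica_mg_kg):
    """Quartz (no steam loss) silica geothermometer, commonly quoted as
    T(degC) = 1309 / (5.19 - log10(SiO2)) - 273.15, SiO2 in mg/kg."""
    return 1309.0 / (5.19 - math.log10(silica_mg_kg)) - 273.15

def t_na_k(na_mg_kg, k_mg_kg):
    """One commonly quoted Na/K geothermometer calibration:
    T(degC) = 1217 / (1.483 + log10(Na/K)) - 273.15."""
    return 1217.0 / (1.483 + math.log10(na_mg_kg / k_mg_kg)) - 273.15

print(round(t_quartz(300.0)))      # spring water, 300 mg/kg silica -> ~209 degC
print(round(t_na_k(450.0, 40.0)))  # Na = 450, K = 40 mg/kg -> ~207 degC
```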

  4. A mixed Rasch model of dual-process conditional reasoning.

    PubMed

    Bonnefon, Jean-François; Eid, Michael; Vautier, Stéphane; Jmel, Saïd

    2008-05-01

    A fine-grained dual-process approach to conditional reasoning is advocated: Responses to conditional syllogisms are reached through the operation of either one of two systems, each of which can rely on two different mechanisms. System1 relies either on pragmatic implicatures or on the retrieval of information from semantic memory; System2 operates first through inhibition of System1, then (but not always) through activation of analytical processes. It follows that reasoners will fall into one of four groups of increasing reasoning ability, each group being uniquely characterized by (a) the modal pattern of individual answers to blocks of affirming the consequent (AC), denying the antecedent (DA), and modus tollens (MT) syllogisms featuring the same conditional; and (b) the average rate of determinate answers to AC, DA, and MT. This account receives indirect support from the extant literature and direct support from a mixed Rasch model of responses given to 18 syllogisms by 486 adult reasoners.

  5. Neutrino mixing in a left-right model

    NASA Astrophysics Data System (ADS)

    Martins Simões, J. A.; Ponciano, J. A.

    We study the mixing among different generations of massive neutrino fields in a model that can accommodate a consistent pattern of neutral fermion masses as well as neutrino oscillations. The left and right sectors can be connected by a new neutral current. PACS: 12.60.-i, 14.60.St, 14.60.Pq

  6. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison

    PubMed Central

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-01-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: (1) an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; (2) Bayesian models display high sensitivity to error assumptions and structural choices; (3) source apportionment results differ between Bayesian and frequentist approaches.

  7. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, the rapid development of geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (whether Client/Server or Browser/Server) does not support this well. Collaborative applications are one good resolution. A collaborative application has four main problems to resolve: consistency and co-editing conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment; agents have been used in many fields such as computer science and automation, and they bring new methods for cooperation and for access to spatial data. A multi-level cache holds a part of the full data; it reduces the network load and improves the access to and handling of spatial data, especially when editing. With agent technology, we make full use of agents' intelligence for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  8. Recycle of mixed automotive plastics: A model study

    NASA Astrophysics Data System (ADS)

    Woramongconchai, Somsak

    …decreased with increased twin-screw extrusion temperature. The flexural modulus of the recycled mixed automotive plastics expected in 2003 was higher than that of the 1980s and 1990s recycles. Flexural strength effects were not large enough for serious consideration, but were more pronounced than those in the 1980s and 1990s. Impact strengths at 20-30 J/m were the lowest values compared to the 1980s and 1990s mixed automotive recycles. Torque rheometry, dynamic mechanical analysis, and optical and electron microscopy agreed with each other on the characterization of the processability and morphology of the blends. LLDPE and HDPE were miscible, while PP was partially miscible with polyethylene. ABS and nylon-6 were immiscible with the polyolefins, but partially miscible with each other. As expected, the polyurethane foam was immiscible with the other components. The minor components of the model recycle of mixed automotive materials were probably partially miscible with ABS/nylon-6, but there were multiple and unresolved phases in the major blends.

  9. Using Graphic Organizers in Intercultural Education

    ERIC Educational Resources Information Center

    Ciascai, Liliana

    2009-01-01

    Graphic organizers are instruments for the representation, illustration and modeling of information. In educational practice they are used for the building and systematization of knowledge. Graphic organizers are instruments that mostly address the visual learning style, but their use is beneficial to all learners. In this paper we illustrate the use of…

  10. Graphics Specialist (AFSC 23151).

    ERIC Educational Resources Information Center

    Air Univ., Gunter AFS, Ala. Extension Course Inst.

    This three-volume set of student texts is intended for use in an extension course to prepare Air Force graphics specialists. The first volume deals with basic equipment, materials, lettering, and drafting (including geometric and graphic construction). Addressed in the second volume are composition and layout techniques and the fundamentals of…

  11. How Computer Graphics Work.

    ERIC Educational Resources Information Center

    Prosise, Jeff

    This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…

  12. Quantitative Graphics in Newspapers.

    ERIC Educational Resources Information Center

    Tankard, James W., Jr.

    The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

  13. Using Bayesian Stable Isotope Mixing Models to Enhance Marine Ecosystem Models

    EPA Science Inventory

    The use of stable isotopes in food web studies has proven to be a valuable tool for ecologists. We investigated the use of Bayesian stable isotope mixing models as constraints for an ecosystem model of a temperate seagrass system on the Atlantic coast of France. δ13C and δ15N i...
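    The algebraic core of a two-tracer, three-source mixing model is a small linear system: two isotope mass balances plus a sum-to-one constraint on the source proportions. Bayesian mixing models of the kind referenced here add priors and uncertainty around this core; the sketch below solves only the deterministic system, with hypothetical source signatures.

```python
import numpy as np

# Hypothetical source signatures (d13C, d15N) and mixture signature.
sources = {"phytoplankton": (-22.0, 6.0),
           "seagrass":      (-10.0, 4.0),
           "terrestrial":   (-28.0, 2.0)}
mixture = (-18.0, 4.5)

# Rows: d13C balance, d15N balance, proportions sum to one.
A = np.array([[s[0] for s in sources.values()],
              [s[1] for s in sources.values()],
              [1.0, 1.0, 1.0]])
b = np.array([mixture[0], mixture[1], 1.0])

fractions = np.linalg.solve(A, b)
print(dict(zip(sources, fractions.round(3))))   # source proportions
```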

  14. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
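    For orientation, the simplest classical mixing model in PDF methods is IEM (interaction by exchange with the mean), in which each particle's composition relaxes linearly toward the unconditional mean and the scalar variance decays exponentially; improved mixing models such as the one proposed here aim to remedy its known deficiencies. A minimal particle sketch of IEM, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
N, C_phi, omega, dt, steps = 10_000, 2.0, 1.0, 0.01, 500

phi = rng.choice([0.0, 1.0], size=N)    # initially unmixed scalar
var0 = phi.var()
for _ in range(steps):
    # IEM: d(phi)/dt = -(C_phi/2) * omega * (phi - <phi>)
    phi += -0.5 * C_phi * omega * (phi - phi.mean()) * dt

# Analytical IEM decay: var(t) = var(0) * exp(-C_phi * omega * t)
t = steps * dt
print(phi.var(), var0 * np.exp(-C_phi * omega * t))
```

    A known weakness of IEM is that it does not relax the shape of the PDF toward a Gaussian, which is one motivation for mixing models that also carry information about the scalar dissipation rate.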

  15. Cruise observation and numerical modeling of turbulent mixing in the Pearl River estuary in summer

    NASA Astrophysics Data System (ADS)

    Pan, Jiayi; Gu, Yanzhen

    2016-06-01

    The turbulent mixing in the Pearl River estuary and plume area is analyzed using cruise data and simulation results from the Regional Ocean Modeling System (ROMS). The cruise observations reveal that strong mixing appeared in the bottom layer on the larger ebb in the estuary. The model simulations are consistent with the observations and suggest that inside the estuary and in the near-shore water, mixing is stronger on ebb than on flood. Analysis of the mixing generation mechanisms based on the model data reveals that bottom stress is responsible for the generation of turbulence in the estuary; in the re-circulating plume area, internal shear instability plays an important role in the mixing; and wind may induce surface mixing in the plume far-field. The estuary mixing is controlled by the tidal strength, and in the re-circulating plume bulge, wind stirring may reinforce the internal shear instability mixing.

  16. Mixed axion/neutralino cold dark matter in supersymmetric models

    SciTech Connect

    Baer, Howard; Lessa, Andre; Rajagopalan, Shibi; Sreethawong, Warintorn E-mail: lessa@nhn.ou.edu E-mail: wstan@nhn.ou.edu

    2011-06-01

    We consider supersymmetric (SUSY) models wherein the strong CP problem is solved by the Peccei-Quinn (PQ) mechanism with a concomitant axion/axino supermultiplet. We examine R-parity conserving models where the neutralino is the lightest SUSY particle, so that a mixture of neutralinos and axions serves as cold dark matter (aZ̃₁ CDM). The mixed aZ̃₁ CDM scenario can match the measured dark matter abundance for SUSY models which typically give too low a value of the usual thermal neutralino abundance, such as models with wino-like or higgsino-like dark matter. The usual thermal neutralino abundance can be greatly enhanced by the decay of thermally produced axinos (ã) to neutralinos, followed by neutralino re-annihilation at temperatures much lower than freeze-out. In this case, the relic density is usually neutralino dominated, and goes as ∼ (f_a/N)/m_ã^(3/2). If axino decay occurs before neutralino freeze-out, then instead the neutralino abundance can be augmented by relic axions to match the measured abundance. Entropy production from late-time axino decays can diminish the axion abundance, but ultimately not the neutralino abundance. In aZ̃₁ CDM models, it may be possible to detect both a WIMP and an axion as dark matter relics. We also discuss possible modifications of our results due to production and decay of saxions. In the appendices, we present expressions for the Hubble expansion rate and the axion and neutralino relic densities in radiation-, matter- and decaying-particle-dominated universes.

  17. Mixed-Effects Logistic Regression Models for Indirectly Observed Discrete Outcome Variables

    ERIC Educational Resources Information Center

    Vermunt, Jeroen K.

    2005-01-01

    A well-established approach to modeling clustered data introduces random effects in the model of interest. Mixed-effects logistic regression models can be used to predict discrete outcome variables when observations are correlated. An extension of the mixed-effects logistic regression model is presented in which the dependent variable is a latent…

  18. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  19. Mixed dark matter in left-right symmetric models

    NASA Astrophysics Data System (ADS)

    Berlin, Asher; Fox, Patrick J.; Hooper, Dan; Mohlabeng, Gopolang

    2016-06-01

    Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W' boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario, gR = gL. This region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.

  20. Mixed dark matter in left-right symmetric models

    DOE PAGESBeta

    Berlin, Asher; Fox, Patrick J.; Hooper, Dan; Mohlabeng, Gopolang

    2016-06-08

    Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W' boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario, gR = gL. Furthermore, this region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.

  1. Validation of hydrogen gas stratification and mixing models

    DOE PAGESBeta

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. The entrainment coefficients of 0.09 and 0.08 are found to fit the experimental data for hydrogen leaks with the Froude number of 99 and 268 best, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  2. Validation of hydrogen gas stratification and mixing models

    SciTech Connect

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. The entrainment coefficients of 0.09 and 0.08 are found to fit the experimental data for hydrogen leaks with the Froude number of 99 and 268 best, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  3. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    PubMed

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used for calculating computer-generated holograms. This paper proposes a novel fast calculation method for a patch model, which uses the point-based method. The method provides a calculation time that is proportional to the number of patches but not to the number of point light sources. This means that the method is suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method.

  4. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    PubMed

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used for calculating computer-generated holograms. This paper proposes a novel fast calculation method for a patch model, which uses the point-based method. The method provides a calculation time that is proportional to the number of patches but not to the number of point light sources. This means that the method is suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method. PMID:26835949

  5. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the
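    One of the points above, extracting an intraclass correlation from a fitted LME, can be sketched generically as ICC = σ²_subject / (σ²_subject + σ²_residual). The sketch below uses simulated data and a single grouping factor; the crossed-random-effects case described in the paper is more involved, and the actual FMRI implementation is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_subj, n_meas = 40, 8
subj = np.repeat(np.arange(n_subj), n_meas)
subj_effect = rng.normal(0, 1.0, n_subj)[subj]         # between-subject variance 1.0
y = 0.5 + subj_effect + rng.normal(0, 0.7, subj.size)  # residual variance 0.49
df = pd.DataFrame({"y": y, "subj": subj})

fit = smf.mixedlm("y ~ 1", df, groups=df["subj"]).fit()
var_b = float(fit.cov_re.iloc[0, 0])   # estimated between-subject variance
var_e = fit.scale                      # estimated residual variance
print("ICC =", round(var_b / (var_b + var_e), 2))   # expect about 0.67
```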

  6. Engineering Graphics Educational Outcomes for the Global Engineer: An Update

    ERIC Educational Resources Information Center

    Barr, R. E.

    2012-01-01

    This paper discusses the formulation of educational outcomes for engineering graphics that span the global enterprise. Results of two repeated faculty surveys indicate that new computer graphics tools and techniques are now the preferred mode of engineering graphical communication. Specifically, 3-D computer modeling, assembly modeling, and model…

  7. MoldaNet: a network distributed molecular graphics and modelling program that integrates secure signed applet and Java 3D technologies.

    PubMed

    Yoshida, H; Rzepa, H S; Tonge, A P

    1998-06-01

    MoldaNet is a molecular graphics and modelling program that integrates several new Java technologies, including authentication as a Secure Signed Applet, and implementation of Java 3D classes to enable access to hardware graphics acceleration. It is the first example of a novel class of Internet-based distributed computational chemistry tool designed to eliminate the need for user pre-installation of software on their client computer other than a standard Internet browser. The creation of a properly authenticated tool using a signed digital X.509 certificate permits the user to employ MoldaNet to read and write the files to a local file store; actions that are normally disallowed in Java applets. The modularity of the Java language also allows straightforward inclusion of Java3D and Chemical Markup Language classes in MoldaNet to permit the user to filter their model into 3D model descriptors such as VRML97 or CML for saving on local disk. The implications for both distance-based training environments and chemical commerce are noted.

  8. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994
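    Upstream of sampling or counting realizations is the question of whether a degree sequence is graphical at all, and how to construct one realization. The sketch below implements the classical Erdős–Gallai test and Havel–Hakimi construction; these are standard tools, not the MCMC method of the paper.

```python
def is_graphical(degrees):
    """Erdos-Gallai: a degree sequence is graphical iff its sum is even
    and, with d sorted nonincreasing, for every k:
    sum(d[:k]) <= k*(k-1) + sum(min(d_i, k) for i > k)."""
    d = sorted(degrees, reverse=True)
    if sum(d) % 2:
        return False
    for k in range(1, len(d) + 1):
        if sum(d[:k]) > k * (k - 1) + sum(min(x, k) for x in d[k:]):
            return False
    return True

def havel_hakimi(degrees):
    """Build one realization (edge list) of a graphical sequence."""
    nodes = sorted(((d, i) for i, d in enumerate(degrees)), reverse=True)
    edges = []
    while nodes and nodes[0][0] > 0:
        d, v = nodes.pop(0)
        if d > len(nodes):
            raise ValueError("sequence is not graphical")
        for j in range(d):              # connect v to the d largest remainders
            nodes[j] = (nodes[j][0] - 1, nodes[j][1])
            edges.append((v, nodes[j][1]))
        nodes.sort(reverse=True)
    return edges

seq = [3, 3, 2, 2, 1, 1]
print(is_graphical(seq), havel_hakimi(seq))
```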

  9. Strategies for the use of mixed-effects models in continuous forest inventories.

    PubMed

    Westfall, James A

    2016-04-01

    Forest inventory data often consists of measurements taken on field plots as well as values predicted from statistical models, e.g., tree biomass. Many of these models only include fixed-effects parameters either because at the time the models were established, mixed-effects model theory had not yet been thoroughly developed or the use of mixed models was deemed unnecessary or too complex. Over the last two decades, considerable research has been conducted on the use of mixed models in forestry, such that mixed models and their applications are generally well understood. However, most of these assessments have focused on static validation data, and mixed model applications in the context of continuous forest inventories have not been evaluated. In comparison to fixed-effects models, the results of this study showed that mixed models can provide considerable reductions in prediction bias and variance for the population and also for subpopulations therein. However, the random effects resulting from the initial model fit deteriorated rapidly over time, such that some field data is needed to effectively recalibrate the random effects for each inventory cycle. Thus, implementation of mixed models requires ongoing maintenance to reap the benefits of improved predictive behavior. Forest inventory managers must determine if this gain in predictive power outweighs the additional effort needed to employ mixed models in a temporal framework. PMID:27010710

  10. Identifying genetically driven clinical phenotypes using linear mixed models

    PubMed Central

    Mosley, Jonathan D.; Witte, John S.; Larkin, Emma K.; Bastarache, Lisa; Shaffer, Christian M.; Karnes, Jason H.; Stein, C. Michael; Phillips, Elizabeth; Hebbring, Scott J.; Brilliant, Murray H.; Mayer, John; Ye, Zhan; Roden, Dan M.; Denny, Joshua C.

    2016-01-01

    We hypothesized that generalized linear mixed models (GLMMs), which estimate the additive genetic variance underlying phenotype variability, would facilitate rapid characterization of clinical phenotypes from an electronic health record. We evaluated 1,288 phenotypes in 29,349 subjects of European ancestry with single-nucleotide polymorphism (SNP) genotyping on the Illumina Exome Beadchip. We show that genetic liability estimates are primarily driven by SNPs identified by prior genome-wide association studies and SNPs within the human leukocyte antigen (HLA) region. We identify 44 (false discovery rate q<0.05) phenotypes associated with HLA SNP variation and show that hypothyroidism is genetically correlated with Type I diabetes (rG=0.31, s.e. 0.12, P=0.003). We also report novel SNP associations for hypothyroidism near HLA-DQA1/HLA-DQB1 at rs6906021 (combined odds ratio (OR)=1.2 (95% confidence interval (CI): 1.1–1.2), P=9.8 × 10−11) and for polymyalgia rheumatica near C6orf10 at rs6910071 (OR=1.5 (95% CI: 1.3–1.6), P=1.3 × 10−10). Phenome-wide application of GLMMs identifies phenotypes with important genetic drivers, and focusing on these phenotypes can identify novel genetic associations. PMID:27109359

  11. Genomic Heritability of Bovine Growth Using a Mixed Model

    PubMed Central

    Ryu, Jihye; Lee, Chaeyoung

    2014-01-01

    This study investigated the heritability of bovine growth estimated with genomewide single nucleotide polymorphism (SNP) information obtained from a DNA microarray chip. Three hundred sixty-seven Korean cattle were genotyped with the Illumina BovineSNP50 BeadChip, and 39,112 SNPs from the 364 animals that passed quality assurance were analyzed to estimate the heritability of body weights at 6, 9, 12, 15, 18, 21, and 24 months of age. Restricted maximum likelihood estimates of heritability were obtained using the covariance structure of genomic relationships among animals in a mixed model framework. Heritability estimates ranged from 0.58 to 0.76 for body weights at the different ages. The heritability estimates using genomic information in this study were larger than those previously estimated using pedigree information. The results revealed a trend of larger heritability for body weight at a younger age (6 months). This suggests early genetic evaluation for bovine growth using genomic information to increase the genetic merit of animals. PMID:25358309
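    The covariance structure of genomic relationships referred to here is typically a genomic relationship matrix (GRM). A minimal sketch of VanRaden's (2008) first method on simulated genotypes follows; the study's exact pipeline may differ. In the mixed model y = Xb + g + e with g ~ N(0, G·σ²_g), REML estimates of σ²_g and σ²_e then give h² = σ²_g / (σ²_g + σ²_e).

```python
import numpy as np

rng = np.random.default_rng(5)
n_animals, n_snps = 100, 5000
p = rng.uniform(0.05, 0.5, n_snps)                  # allele frequencies
M = rng.binomial(2, p, size=(n_animals, n_snps))    # 0/1/2 genotype codes

# VanRaden (2008), method 1: G = Z Z' / (2 * sum_j p_j (1 - p_j)),
# where Z centers each SNP by twice its allele frequency.
Z = M - 2.0 * p
G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

print(G.shape, round(float(np.diag(G).mean()), 2))  # diagonal near 1
```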

  12. An improved mixing model providing joint statistics of scalar and scalar dissipation

    SciTech Connect

    Meyer, Daniel W.; Jenny, Patrick

    2008-11-15

    For the calculation of nonpremixed turbulent flames with thin reaction zones the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of a passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, which involve different initial scalar-field lengthscales.

  13. From linear to generalized linear mixed models: A case study in repeated measures

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  14. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.
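    The speedup described comes from updating the state of every neuron simultaneously, in data-parallel fashion. The vectorized sketch below uses a leaky integrate-and-fire update as a simple stand-in for the paper's conductance-based neuron models; the network size matches the three-instance run, but all other values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt, tau, v_th, v_reset = 1100, 0.1, 10.0, -50.0, -65.0
# Sparse random weights: ~1% connectivity (hypothetical, not the paper's wiring).
W = (rng.random((N, N)) < 0.01) * rng.normal(0.5, 0.1, (N, N))

v = np.full(N, v_reset)
for step in range(1000):
    spiked = v >= v_th                  # detect threshold crossings
    v[spiked] = v_reset                 # reset spiking neurons
    syn = W @ spiked                    # synaptic input from last step's spikes
    drive = rng.normal(1.5, 0.5, N)     # external drive
    # One Euler step for all neurons at once -- the data-parallel pattern
    # that maps directly onto GPU threads.
    v += dt * (-(v - v_reset) / tau + syn + drive)

print("fraction spiking on last step:", float(spiked.mean()))
```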

  15. Flowfield computer graphics

    NASA Technical Reports Server (NTRS)

    Desautel, Richard

    1993-01-01

    The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).

  16. Application of the LPL model to mixed radiations

    SciTech Connect

    Curtis, S.B.

    1991-09-01

    The LPL (Lethal, Potentially Lethal) formulation was used to analyze sets of cell survival data from mixes of (1) alpha particles and X rays and (2) neon ions and X rays. The hypothesis tested was whether survival after mixed radiation could be predicted by simply adding the total number of lethal and potentially lethal lesions from each radiation in the theoretical survival expression. Results show that all data appear to conform satisfactorily to the LPL hypothesis except for the mixed neon-ion and X-ray results with a large dose of X rays (8 Gy) given first.

  17. A model for turbulent mixing based on shadow-position conditioning

    NASA Astrophysics Data System (ADS)

    Pope, Stephen B.

    2013-11-01

    In the modeling and simulation of mixing and reaction in turbulent flows using probability density function (PDF) methods, a key component is the mixing model, which represents the mixing effected by molecular diffusion. A new model, called the shadow-position mixing model (SPMM), is introduced and its performance is illustrated for two test cases. The model involves a new variable—the shadow position—and mixing is modeled as a relaxation of the composition to its mean conditional on the shadow position. The model is constructed to be consistent with turbulent dispersion theory, and to be local in the composition space, both to adequate approximations. The connections between the SPMM and previous mixing models are discussed. The first test case of a scalar mixing layer shows that the SPMM yields scalar statistics in broad agreement with experimental data. The second test case of a reactive scalar mixing layer with idealized non-premixed combustion shows that the SPMM correctly yields stable combustion, whereas simpler models incorrectly lead to extinction. The model satisfies all required realizability and transformation properties and correctly yields Gaussian distributions in appropriate circumstances. The SPMM is generally applicable to turbulent reactive flows using different PDF approaches in the contexts of both Reynolds-averaged Navier-Stokes modeling and large-eddy simulation.
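    Schematically, the distinguishing step is relaxation toward a mean conditioned on the auxiliary shadow-position variable rather than toward the unconditional mean. The particle sketch below shows only that conditional-relaxation step, with the conditional mean estimated by binning; the full SPMM also evolves the shadow position consistently with dispersion theory, which is omitted here, so this is a schematic reading rather than the model itself.

```python
import numpy as np

rng = np.random.default_rng(7)
N, c_over_tau, dt = 20_000, 2.0, 0.01
Z = rng.normal(size=N)                                    # shadow-position-like variable
phi = (Z + 0.5 * rng.normal(size=N) > 0).astype(float)    # correlated scalar
bins = np.digitize(Z, np.linspace(-3, 3, 30))

for _ in range(200):
    cond_mean = np.empty_like(phi)
    for b in np.unique(bins):                # estimate E[phi | Z] by binning
        sel = bins == b
        cond_mean[sel] = phi[sel].mean()
    # Relax toward the conditional (not unconditional) mean.
    phi += -c_over_tau * (phi - cond_mean) * dt

# Conditioning preserves the phi-Z correlation, i.e. locality in
# composition space, unlike IEM-style relaxation to the global mean.
print(round(float(np.corrcoef(Z, phi)[0, 1]), 2))
```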

  18. Costs of predator-induced phenotypic plasticity: a graphical model for predicting the contribution of nonconsumptive and consumptive effects of predators on prey.

    PubMed

    Peacor, Scott D; Peckarsky, Barbara L; Trussell, Geoffrey C; Vonesh, James R

    2013-01-01

    Defensive modifications in prey traits that reduce predation risk can also have negative effects on prey fitness. Such nonconsumptive effects (NCEs) of predators are common, often quite strong, and can even dominate the net effect of predators. We develop an intuitive graphical model to identify and explore the conditions promoting strong NCEs. The model illustrates two conditions necessary and sufficient for large NCEs: (1) trait change has a large cost, and (2) the benefit of reduced predation outweighs the costs, such as reduced growth rate. A corollary condition is that potential predation in the absence of trait change must be large. In fact, the sum total of the consumptive effects (CEs) and NCEs may be any value bounded by the magnitude of the predation rate in the absence of the trait change. The model further illustrates how, depending on the effect of increased trait change on the resulting costs and benefits, any combination of strong and weak NCEs and CEs is possible. The model can also be used to examine how changes in environmental factors (e.g., refuge safety) or variation among predator-prey systems (e.g., different benefits of a prey trait change) affect NCEs. Results indicate that simple rules of thumb may not apply; factors that increase the cost of trait change, or that increase the degree to which an animal changes a trait, can actually cause smaller (rather than larger) NCEs. We provide examples of how this graphical model can provide important insights for empirical studies from two natural systems. Implementation of this approach will improve our understanding of how and when NCEs are expected to dominate the total effect of predators. Further, application of the model will likely promote a better linkage between experimental and theoretical studies of NCEs, and foster synthesis across systems.
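
    The model's central accounting can be made concrete in a few lines of arithmetic; the toy function below, with invented numbers, encodes the two necessary conditions and the bound on CE + NCE described above. It is an illustration of the bookkeeping, not the authors' graphical model itself.

      def predator_effects(predation_no_defense, predation_with_defense, cost):
          """Toy bookkeeping, with invented numbers: CE is the predation still
          suffered after the trait change, NCE is the cost of the change, and
          CE + NCE is bounded by predation in the absence of the trait change
          (condition 2: the benefit must outweigh the cost)."""
          ce, nce = predation_with_defense, cost
          assert ce + nce <= predation_no_defense, "defense would not pay here"
          return ce, nce, ce + nce

      # A strong-NCE scenario: a costly defense removes most predation risk.
      print(predator_effects(0.8, 0.1, 0.5))      # -> (0.1, 0.5, 0.6)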

  19. Age of stratospheric air and aging by mixing in global models

    NASA Astrophysics Data System (ADS)

    Garny, Hella; Dietmüller, Simone; Plöger, Felix; Birner, Thomas; Bönisch, Harald; Jöckel, Patrick

    2016-04-01

    The Brewer-Dobson circulation is often quantified by the integrated transport measure age of air (AoA). AoA is affected by all transport processes, including transport along the residual mean mass circulation and two-way mixing. A large spread exists in the simulation of AoA by current global models. Using CCMVal-2 and CCMI-1 global model data, we show that this spread can only in small part be attributed to differences in the simulated residual circulation. Instead, large differences in the "mixing efficiency" strongly contribute to the differences in the simulated AoA. The "mixing efficiency" is defined as the ratio of the two-way mixing mass flux across the subtropical barrier to the net (residual) mass flux, and this mixing efficiency controls the relative increase in AoA by mixing. We derive the mixing efficiency from global model data using the analytical solution of a simplified version of the tropical leaky pipe (TLP) model, in which vertical diffusion is neglected. Thus, it is assumed that only residual mean transport and horizontal two-way mixing across the subtropical barrier control AoA. However, in global models vertical mixing and numerical diffusion modify AoA, and these processes likely contribute to the differences in the mixing efficiency between models. We explore the contributions of diffusion and mixing to mean AoA by (a) using simulations with the tropical leaky pipe model including vertical diffusion and (b) explicit calculations of aging by mixing on resolved scales. Using the TLP model, we show that vertical diffusion leads to a decrease in tropical AoA, i.e. it counteracts the increase in tropical mean AoA due to horizontal mixing. Thus, neglecting vertical diffusion leads to an underestimation of the mixing efficiency. With explicit calculations of aging by mixing via integration of daily local mixing tendencies along residual circulation trajectories, we explore the contributions of vertical and horizontal mixing to aging by mixing.

  20. Computer modeling of forced mixing in waste storage tanks

    SciTech Connect

    Eyler, L.L.; Michener, T.E.

    1992-04-01

    Numerical simulation results of fluid-dynamic and physical processes in radioactive waste storage tanks are presented. Investigations include simulation of jet-mixing-pump-induced flows intended to mix particulate material and maintain it uniformly distributed throughout the liquid volume. The code includes the physical effects of solids: particle size, through a settling velocity, and mixture properties, through density and viscosity. Calculations were performed for a centrally located, rotationally oscillating, horizontally directed jet mixing pump for two cases: low jet velocity with high settling velocity, which results in a nonuniform distribution, and high jet velocity with low settling velocity, which results in uniform conditions. The results are being used to aid in experiment design and to understand mixing in the waste tanks, and, in conjunction with scaled experiments, to define limits of pump operation that maintain uniformity of the mixture in the storage tanks during waste retrieval operations.

  1. Modeling of mixing processes: Fluids, particulates, and powders

    SciTech Connect

    Ottino, J.M.; Hansen, S.

    1995-12-31

    Work under this grant involves two main areas: (1) Mixing of Viscous Liquids, this first area comprising aggregation, fragmentation and dispersion, and (2) Mixing of Powders. In order to produce a coherent self-contained picture, we report primarily on results obtained under (1), and within this area, mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius is independent of mass, the polydispersity is constant at long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters is dependent upon the mixing.

  2. Modeling Internal Tides and Mixing Over Ocean Ridges

    NASA Astrophysics Data System (ADS)

    Slinn, D. N.; Levine, M. D.

    2002-12-01

    Moored observations from the Hawaiian Ocean Mixing Experiment (HOME) Survey component suggest an increase in diapycnal mixing events during spring tides in the region above the slope. To study possible mixing mechanisms, we utilize large eddy simulations of the benthic boundary layer, using a domain on the order of 200 m thick, with environmental parameters from the HOME. When the barotropic tidal flow is upslope, the stratification near the boundary is greatly reduced as denser deep water is advected above the less dense water retained in the boundary layer. This leads to statically unstable situations and persistent strong mixing events that are several tens of meters thick and last for approximately one quarter of the tidal period. Conversely, during the down-slope tidal flow, denser fluid remains trapped in the boundary layer as less dense upslope fluid is advected downward, leading to very strong stratification near the boundary, which shuts down vertical mixing over the slope. The current structure and statistics of the overturning are compared with the field observations. We demonstrate also from the simulations that the Coriolis force plays an important role in both increasing the levels of turbulence in the boundary layer and in producing an efficient mechanism for fluid exchange between the interior and the boundary. Comparisons with numerical experiments where the Coriolis force is artificially turned off show decreased levels of turbulent mixing and less complex velocity shear structure in the boundary layer. Numerical dye release and Lagrangian drifter experiments indicate that horizontal exchange flows at the inertial period produce a pathway for the new intermediate density water formed in the mixing process to leave the boundary layer. We generalize our results to consider combinations of oceanic parameter ranges of slope, inertial, buoyancy, and tidal frequencies and amplitudes that could combine to produce increased boundary mixing.

  3. Three-dimensional modeling of the mixing state of particles over Greater Paris

    NASA Astrophysics Data System (ADS)

    Zhu, Shupeng; Sartelet, Karine; Zhang, Yang; Nenes, Athanasios

    2016-05-01

    A size-composition resolved aerosol model (SCRAM) is coupled to the Polyphemus air quality platform and evaluated over Greater Paris. SCRAM simulates the particle mixing state and solves the aerosol dynamic evolution taking into account the processes of coagulation, condensation/evaporation, and nucleation. Both the size and mass fractions of chemical components of particles are discretized. The performance of SCRAM in modeling air quality over Greater Paris is evaluated by comparison to PM2.5, PM10, and Aerosol Optical Depth (AOD) measurements. Because air quality models usually assume that particles are internally mixed, the impact of the mixing state on aerosol formation, composition, optical properties, and the particles' ability to be activated as cloud condensation nuclei (CCN) is investigated. The simulation results show that more than half (up to 80% during rush hours) of black carbon particles are barely mixed at the urban site of Paris, while they are more mixed with organic species at a rural site. The comparisons between the internal-mixing simulation and the mixing-state-resolved simulation show that the internal-mixing assumption leads to lower nitrate and higher ammonium concentrations in the particulate phase. Moreover, the internal-mixing assumption leads to a lower single scattering albedo, and the difference in aerosol optical depth caused by the mixing state assumption can be as high as 72.5%. Furthermore, the internal-mixing assumption leads to a lower CCN activation percentage at low supersaturation, but a higher CCN activation percentage at high supersaturation.

  4. Students Perception towards the Implementation of Computer Graphics Technology in Class via Unified Theory of Acceptance and Use of Technology (UTAUT) Model

    NASA Astrophysics Data System (ADS)

    Binti Shamsuddin, Norsila

    Technology advancement and development in higher learning institutions give students a chance to be motivated to learn the information technology areas in depth. Students should take hold of the opportunity to blend their skills with these technologies in preparation for graduation. The curriculum itself can raise students' interest and persuade them to be directly involved in the evolution of the technology. The aim of this study is to see how deep the students' involvement is, as well as their acceptance of the technology used in Computer Graphics and Image Processing subjects. The study targets Bachelor students in the Faculty of Industrial Information Technology (FIIT), Universiti Industri Selangor (UNISEL): Bac. in Multimedia Industry, BSc. Computer Science and BSc. Computer Science (Software Engineering). This study utilizes the new Unified Theory of Acceptance and Use of Technology (UTAUT) to further validate the model and enhance our understanding of the adoption of Computer Graphics and Image Processing technologies. Four (4) of the eight (8) independent factors in UTAUT will be studied against the dependent factor.

  5. On the importance of cognitive profiling: A graphical modelling analysis of domain-specific and domain-general deficits after stroke.

    PubMed

    Massa, M Sofia; Wang, Naxian; Bickerton, Wa-Ling; Demeyere, Nele; Riddoch, M Jane; Humphreys, Glyn W

    2015-10-01

    Cognitive problems following stroke are typically analysed using either short but relatively uninformative general tests or through detailed but time-consuming tests of domain-specific deficits (e.g., in language, memory, praxis). Here we present an analysis of neuropsychological deficits detected using a screen designed to fall between these extremes by being 'broad' (testing multiple cognitive abilities) but 'shallow' (sampling the abilities briefly, to be time efficient) - the BCoS. Assessment using the Birmingham Cognitive Screen (BCoS) enables the relations between 'domain-specific' and 'domain-general' cognitive deficits to be evaluated as the test generates an overall cognitive profile for individual patients. We analysed data from 287 patients tested at a sub-acute stage of stroke (<3 months). Graphical modelling techniques were used to investigate the associative structure and conditional independence between deficits within and across the domains sampled by the BCoS (attention and executive functions, language, memory, praxis and number processing). The patterns of deficit within each domain conformed to existing cognitive models. However, these within-domain patterns underwent substantial change when the whole dataset was modelled, indicating that domain-specific deficits can only be understood in relation to linked changes in domain-general processes. The data point to the importance of using over-arching cognitive screens, measuring domain-general as well as domain-specific processes, in order to account for neuropsychological deficits after stroke. The paper also highlights the utility of using graphical modelling to understand the relations between cognitive components in complex datasets. PMID:26232552

  6. SutraGUI, a graphical-user interface for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Winston, Richard B.; Voss, Clifford I.

    2004-01-01

    This report describes SutraGUI, a flexible graphical user-interface (GUI) that supports two-dimensional (2D) and three-dimensional (3D) simulation with the U.S. Geological Survey (USGS) SUTRA ground-water-flow and transport model (Voss and Provost, 2002). SutraGUI allows the user to create SUTRA ground-water models graphically. SutraGUI provides all of the graphical functionality required for setting up and running SUTRA simulations that range from basic to sophisticated, but it is also possible for advanced users to apply programmable features within Argus ONE to meet the unique demands of particular ground-water modeling projects. SutraGUI is a public-domain computer program designed to run with the proprietary Argus ONE package, which provides 2D Geographic Information System (GIS) and meshing support. For 3D simulation, GIS and meshing support is provided by programming contained within SutraGUI. When preparing a 3D SUTRA model, the model and all of its features are viewed within Argus ONE in 2D projection. For 2D models, SutraGUI is only slightly changed in functionality from the previous 2D-only version (Voss and others, 1997) and it provides visualization of simulation results. In 3D, only model preparation is supported by SutraGUI, and 3D simulation results may be viewed in SutraPlot (Souza, 1999) or Model Viewer (Hsieh and Winston, 2002). A comprehensive online Help system is included in SutraGUI. For 3D SUTRA models, the 3D model domain is conceptualized as bounded on the top and bottom by 2D surfaces. The 3D domain may also contain internal surfaces extending across the model that divide the domain into tabular units, which can represent hydrogeologic strata or other features intended by the user. These surfaces can be non-planar and non-horizontal. The 3D mesh is defined by one or more 2D meshes at different elevations that coincide with these surfaces. If the nodes in the 3D mesh are vertically aligned, only a single 2D mesh is needed.

  7. Decoupled schemes for a non-stationary mixed Stokes-Darcy model

    NASA Astrophysics Data System (ADS)

    Mu, Mo; Zhu, Xiaohong

    2010-04-01

    We study numerical methods for solving a non-stationary mixed Stokes-Darcy problem that models coupled fluid flow and porous media flow. A decoupling approach based on interface approximation via temporal extrapolation is proposed for devising decoupled marching algorithms for the mixed model. Error estimates are derived and numerical experiments are conducted to demonstrate the computational effectiveness of the decoupling approach.

  8. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  9. The Analysis of Repeated Measurements with Mixed-Model Adjusted "F" Tests

    ERIC Educational Resources Information Center

    Kowalchuk, Rhonda K.; Keselman, H. J.; Algina, James; Wolfinger, Russell D.

    2004-01-01

    One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic.…

  10. CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS

    EPA Science Inventory

    Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...
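
    In the simplest case, such a mass-balance mixing model reduces to a small linear system. A minimal sketch, with invented isotope signatures, that solves for the proportions of three food sources from two tracers plus the unit-sum constraint:

      import numpy as np

      # Minimal mass-balance mixing model: with two isotope tracers and three
      # sources, the diet proportions p solve a 3x3 linear system (all isotope
      # signatures below are made-up illustration values).
      sources = np.array([[-24.0, -18.0, -12.0],   # d13C of sources 1..3
                          [  4.0,   8.0,  14.0]])  # d15N of sources 1..3
      consumer = np.array([-17.0, 9.5])            # consumer tissue signature

      A = np.vstack([sources, np.ones(3)])         # add constraint sum(p) = 1
      b = np.append(consumer, 1.0)
      p = np.linalg.solve(A, b)
      print(p)                                     # -> [0.25  0.333  0.417]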

  11. Best practices for use of stable isotope mixing models in food-web studies

    EPA Science Inventory

    Stable isotope mixing models are increasingly used to quantify contributions of resources to consumers. While potentially powerful tools, these mixing models have the potential to be misused, abused, and misinterpreted. Here we draw on our collective experiences to address the qu...

  12. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  13. Graphical algorithm for integration of genetic and biological data: proof of principle using psoriasis as a model

    PubMed Central

    Tsoi, Lam C.; Elder, James T.; Abecasis, Goncalo R.

    2015-01-01

    Motivation: Pathway analysis to reveal biological mechanisms underlying results from genetic association studies has great potential to improve the understanding of complex traits with major human disease impact. However, current approaches have not been optimized to maximize statistical power to identify enriched functions/pathways, especially when the genetic data derive from studies using platforms (e.g. Immunochip and Metabochip) customized to have pre-selected markers from previously identified top-rank loci. We present here a novel approach, called Minimum distance-based Enrichment Analysis for Genetic Association (MEAGA), with the potential to address both of these important concerns. Results: MEAGA performs enrichment analysis using graphical algorithms to identify sub-graphs among genes and measure their closeness in an interaction database. It also incorporates a statistic summarizing the numbers and total distances of the sub-graphs, depicting the overlap between observed genetic signals and defined function/pathway gene-sets. MEAGA uses a sampling technique to approximate empirical and multiple-testing-corrected P-values. We show in simulation studies that MEAGA is more powerful than count-based strategies in identifying disease-associated functions/pathways, and the increase in power is influenced by the shortest distances among associated genes in the interactome. We applied MEAGA to the results of a meta-analysis of psoriasis using Immunochip datasets, and showed that associated genes are significantly enriched in immune-related functions and closer to each other in the protein–protein interaction network. Availability and implementation: http://genome.sph.umich.edu/wiki/MEAGA Contact: tsoi.teen@gmail.com or goncalo@umich.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25480373
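
    The distance statistic is straightforward to prototype. The sketch below is a hedged reconstruction of the idea rather than the published MEAGA implementation:

      import itertools
      import random
      import networkx as nx

      # Hedged reconstruction of a MEAGA-style distance statistic (not the
      # published code): score a gene set by the total shortest-path distance
      # among its members in an interaction network (assumed connected), then
      # compare against random gene sets to get an empirical P-value.
      def total_distance(graph, genes):
          present = [g for g in genes if g in graph]
          return sum(nx.shortest_path_length(graph, u, v)
                     for u, v in itertools.combinations(present, 2))

      def empirical_pvalue(graph, genes, n_samples=1000, seed=0):
          rng = random.Random(seed)
          observed = total_distance(graph, genes)
          nodes = list(graph)
          hits = sum(total_distance(graph, rng.sample(nodes, len(genes)))
                     <= observed for _ in range(n_samples))
          return (hits + 1) / (n_samples + 1)   # smaller distance = tighter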

  14. GnuForPlot Graphics

    2015-11-04

    Gnuforplot Graphics is a Fortran90 program designed to generate two and three dimensional plots of data on a personal computer. The program uses calls to the open source code Gnuplot to generate the plots. Two Fortran90 programs have been written to use the Gnuplot graphics capabilities. The first program, named Plotsetup.f90, reads data from output files created by either the Stadium or LeachXS/Orchestra modeling codes and saves the data in arrays for plotting. This program then calls Gnuforplot, which takes the data array along with user specified parameters to set plot specifications and issues Gnuplot commands that generate the screen plots. The user can view the plots and optionally save copies in jpeg format.

  15. GnuForPlot Graphics

    SciTech Connect

    2015-11-04

    Gnuforplot Graphics is a Fortran90 program designed to generate two and three dimensional plots of data on a personal computer. The program uses calls to the open source code Gnuplot to generate the plots. Two Fortran90 programs have been written to use the Gnuplot graphics capabilities. The first program, named Plotsetup.f90, reads data from output files created by either the Stadium or LeachXS/Orchestra modeling codes and saves the data in arrays for plotting. This program then calls Gnuforplot, which takes the data array along with user specified parameters to set plot specifications and issues Gnuplot commands that generate the screen plots. The user can view the plots and optionally save copies in jpeg format.

  16. Application of mixing-controlled combustion models to gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1990-01-01

    Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.
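
    As a concrete illustration of the mixing-controlled assumption, the sketch below evaluates an eddy-breakup-style fuel consumption rate, a generic closure of the kind described (not necessarily the exact model used in the study):

      def fuel_burn_rate(rho, k, eps, y_fuel, y_ox, s, a_ebu=4.0):
          """Eddy-breakup-style mixing-controlled rate (a generic closure of
          the kind described; a_ebu and s are illustrative, not the study's
          values). The rate is set by the turbulent mixing frequency eps/k
          and the deficient reactant, not by the (faster) chemistry; s is the
          stoichiometric oxidizer-to-fuel mass ratio."""
          return a_ebu * rho * (eps / k) * min(y_fuel, y_ox / s)

      # Mixing-limited burn rate in an oxidizer-lean zone (toy values).
      print(fuel_burn_rate(rho=1.1, k=10.0, eps=500.0,
                           y_fuel=0.05, y_ox=0.10, s=15.6))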

  17. Modeling of space vehicle propellant mixing. [cryogenic propellants

    NASA Technical Reports Server (NTRS)

    Aydelott, J. C.

    1983-01-01

    An experimental program was conducted to examine the liquid flow patterns that result from the axial-jet mixing of ethanol in 10-cm-diameter spherical and cylindrical containers under zero-, reduced-, and normal-gravity conditions. Dimensionless parameters were developed that characterized the observed liquid flow patterns and the bulk-liquid mixing phenomena. The correlations developed were used to analyze a typical liquid hydrogen tank and internal thermodynamic vent system for a shuttle-compatible space tug similar to current orbit transfer vehicle concepts.

  18. A time-dependent Mixing Model for PDF Methods in Heterogeneous Aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, Lennart; Suciu, Nicolae; Knabner, Peter; Attinger, Sabine

    2016-04-01

    Predicting the transport of groundwater contaminants remains a demanding task, especially with respect to the heterogeneity of the subsurface and the large measurement uncertainties. A risk analysis also includes the quantification of the uncertainty in order to evaluate how accurate the predictions are. Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, which can be used as a first measure of uncertainty. A mixing model, also known as a dissipation model, is essential for both methods. Finding a satisfactory mixing model is still an open question and, given the rather elaborate nature of PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling. The implications of the new mixing model for different kinds of flow conditions are discussed and some comments are made on efficiently handling spatially resolved higher moments.
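
    For background, a minimal sketch of the classical IEM closure, a common baseline mixing model in PDF methods, shows what a mixing model does and why the scalar variance is the natural test quantity:

      import numpy as np

      # The classical IEM (interaction by exchange with the mean) closure, a
      # common baseline mixing model in PDF methods -- shown to illustrate what
      # a mixing model does, not the improved model proposed in the abstract:
      #     dphi_i/dt = -(C_phi / 2) * omega * (phi_i - <phi>)
      rng = np.random.default_rng(1)
      phi = rng.choice([0.0, 1.0], size=50_000)   # bimodal initial scalar PDF
      c_phi, omega, dt = 2.0, 0.5, 0.01

      for _ in range(500):
          phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt
      # The scalar variance decays as exp(-c_phi * omega * t): the same mixing
      # model controls both the PDF equation and the variance equation.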

  19. Prediction of microbial growth in mixed culture with a competition model.

    PubMed

    Fujikawa, Hiroshi; Sakha, Mohammad Z

    2014-01-01

    Prediction of microbial growth in mixed culture was studied with a competition model that we had developed recently. The model, which is composed of the new logistic model and the Lotka-Volterra model, is shown to successfully describe the microbial growth of two species in mixed culture using Staphylococcus aureus, Escherichia coli, and Salmonella. With the parameter values of the model obtained from the experimental data on monoculture and two-species mixed culture, it then succeeded in predicting the simultaneous growth of the three species in mixed culture inoculated with various cell concentrations. To our knowledge, this is the first reported prediction model for multiple (three) microbial species. The model, which is not built on any premise for specific microorganisms, may become a basic competition model for microorganisms in food and food materials. PMID:24975413
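
    A minimal sketch of the kind of system described, combining logistic growth with Lotka-Volterra competition terms, is given below; all coefficients are invented for illustration and are not the fitted values from the study:

      import numpy as np

      # Logistic growth plus Lotka-Volterra competition for three species
      # (rates, capacities, and competition coefficients are invented):
      #     dN_i/dt = r_i N_i (1 - (N_i + sum_j a_ij N_j) / K_i)
      r = np.array([0.8, 0.7, 0.9])            # growth rates (1/h)
      K = np.array([1e9, 8e8, 1.2e9])          # carrying capacities (CFU/ml)
      a = np.array([[0.0, 0.6, 0.4],           # a[i, j]: effect of j on i
                    [0.5, 0.0, 0.7],
                    [0.3, 0.8, 0.0]])

      n, dt = np.array([1e3, 1e3, 1e3]), 0.01  # inocula and step (h)
      for _ in range(5000):                    # forward-Euler integration
          n = n + r * n * (1.0 - (n + a @ n) / K) * dt
      print(n)                                 # mixed-culture densities at 50 h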

  20. EuroForMix: An open source software based on a continuous model to evaluate STR DNA profiles from a mixture of contributors with artefacts.

    PubMed

    Bleka, Øyvind; Storvik, Geir; Gill, Peter

    2016-03-01

    We have released a software package named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian, and requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model; however, we find that there is a practical computing-time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation) to be reported in the literature. It therefore serves an important purpose to act as an unrestricted platform to compare different solutions that are available. The implementation of the continuous model used in the software gave results close to identical to those of the R package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information.

  1. Nonlinearity detection in hyperspectral images using a polynomial post-nonlinear mixing model.

    PubMed

    Altmann, Yoann; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2013-04-01

    This paper studies a nonlinear mixing model for hyperspectral image unmixing and nonlinearity detection. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated by polynomials leading to a polynomial post-nonlinear mixing model. We have shown in a previous paper that the parameters involved in the resulting model can be estimated using least squares methods. A generalized likelihood ratio test based on the estimator of the nonlinearity parameter is proposed to decide whether a pixel of the image results from the commonly used linear mixing model or from a more general nonlinear mixing model. To compute the test statistic associated with the nonlinearity detection, we propose to approximate the variance of the estimated nonlinearity parameter by its constrained Cramér-Rao bound. The performance of the detection strategy is evaluated via simulations conducted on synthetic and real data. More precisely, synthetic data have been generated according to the standard linear mixing model and three nonlinear models from the literature. The real data investigated in this study are extracted from the Cuprite image, which shows that some minerals seem to be nonlinearly mixed in this image. Finally, it is interesting to note that the estimated abundance maps obtained with the post-nonlinear mixing model are in good agreement with results obtained in previous studies.
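
    The generative form at the heart of the detection problem can be written in a few lines. The following sketch simulates a PPNM pixel with the quadratic post-nonlinearity used in this line of work; sizes and parameter values are invented:

      import numpy as np

      # Generative sketch of a polynomial post-nonlinear mixing model (PPNM)
      # pixel, y = Ma + b (Ma)o(Ma) + noise; b = 0 recovers the linear mixing
      # model the detector tests against. Sizes and values are invented.
      rng = np.random.default_rng(2)
      L, R = 207, 3                            # spectral bands, endmembers
      M = rng.uniform(0.0, 1.0, (L, R))        # endmember spectra (columns)
      a = np.array([0.5, 0.3, 0.2])            # abundances (sum to one)

      def ppnm_pixel(b, sigma=0.01):
          lin = M @ a
          return lin + b * lin * lin + sigma * rng.standard_normal(L)

      y_lin = ppnm_pixel(b=0.0)                # linearly mixed pixel
      y_nonlin = ppnm_pixel(b=0.3)             # pixel the GLRT should flag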

  2. General-Purpose Software For Computer Graphics

    NASA Technical Reports Server (NTRS)

    Rogers, Joseph E.

    1992-01-01

    NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications that gives users the opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.

  3. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.

  4. Experimental and computer graphics simulation analyses of the DNA interaction of 1,8-bis-(2-diethylaminoethylamino)-anthracene-9,10-dione, a compound modelled on doxorubicin.

    PubMed

    Islam, S A; Neidle, S; Gandecha, B M; Brown, J R

    1983-09-15

    The crystal structure of the anthraquinone derivative 1,8-bis-(2-diethylaminoethylamino)-anthracene-9,10-dione has been established. This compound was prepared as a potential DNA-intercalating agent based on the proven intercalators doxorubicin and mitoxantrone. Its DNA-binding properties have been examined experimentally by spectroscopic, thermal denaturation and ccc-DNA unwinding techniques: the results are consistent with an intercalative mode of binding to DNA. Computer graphics simulation of the intercalative docking of this compound into the self-complementary dimer of d(CpG) has provided a minimum-energy geometrical arrangement for the bound drug in the intercalation site comparable to that for proflavine when intercalated into the same d(CpG) model system. Entry of the compound into the site can only occur via the major groove.

  5. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    SciTech Connect

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
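
    The abstract does not spell out the estimation machinery, so the following is a hedged sketch of one plausible realization: non-negative matrix factorization for role memberships, followed by a least-squares fit of the role transition matrix. Function and variable names are hypothetical.

      import numpy as np
      from sklearn.decomposition import NMF

      # One plausible realization of the pipeline (hedged reconstruction, not
      # the authors' code): factorize node-by-feature matrices into role
      # memberships, then fit a role transition matrix by least squares.
      rng = np.random.default_rng(3)
      n_nodes, n_feats, n_roles = 200, 6, 4
      V_t = rng.random((n_nodes, n_feats))     # node features at time t
      V_t1 = rng.random((n_nodes, n_feats))    # node features at time t + 1

      nmf = NMF(n_components=n_roles, init="nndsvda", max_iter=500,
                random_state=0)
      G_t = nmf.fit_transform(V_t)             # role memberships at t
      G_t1 = nmf.transform(V_t1)               # memberships at t + 1

      T, *_ = np.linalg.lstsq(G_t, G_t1, rcond=None)   # transition matrix
      pred = G_t @ T                           # predicted future memberships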

  6. Graphical Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as lidar data, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.

  7. User's instructions for the Guyton circulatory dynamics model using the Univac 1110 batch and demand processing (with graphic capabilities)

    NASA Technical Reports Server (NTRS)

    Archer, G. T.

    1974-01-01

    The model presents a systems analysis of human circulatory regulation based almost entirely on experimental data and cumulative present knowledge of the many facets of the circulatory system. The model itself consists of eighteen different major systems that enter into circulatory control. These systems are grouped into sixteen distinct subprograms that are melded together to form the total model. The model develops circulatory and fluid regulation in a simultaneous manner. Thus, the effects of hormonal and autonomic control, electrolyte regulation, and excretory dynamics are all important and are all included in the model.

  8. Model analysis of influences of aerosol mixing state upon its optical properties in East Asia

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Zhang, Meigen; Zhu, Lingyun; Xu, Liren

    2013-07-01

    The air quality model system RAMS (Regional Atmospheric Modeling System)-CMAQ (Models-3 Community Multi-scale Air Quality) coupled with an aerosol optical/radiative module was applied to investigate the impact of different aerosol mixing states (i.e., externally mixed, half externally and half internally mixed, and internally mixed) on radiative forcing in East Asia. The simulation results show that the aerosol optical depth (AOD) generally increased when the aerosol mixing state changed from externally mixed to internally mixed, while the single scattering albedo (SSA) decreased. Therefore, the scattering and absorption properties of aerosols can be significantly affected by the change of aerosol mixing states. Comparison of simulated and observed SSAs at five AERONET (Aerosol Robotic Network) sites suggests that SSA could be better estimated by considering aerosol particles to be internally mixed. Model analysis indicates that the impact of aerosol mixing state upon aerosol direct radiative forcing (DRF) is complex. Generally, the cooling effect of aerosols over East Asia is enhanced in the northern part of East Asia (Northern China, Korean peninsula, and the surrounding area of Japan) and is reduced in the southern part of East Asia (Sichuan Basin and Southeast China) by the internal mixing process, and the variation range can reach ±5 W m-2. The analysis shows that the internal mixing between inorganic salt and dust is likely the main reason that the cooling effect strengthens. Conversely, the internal mixture of anthropogenic aerosols, including sulfate, nitrate, ammonium, black carbon, and organic carbon, could noticeably weaken the cooling effect.

  9. Raster graphics display library

    NASA Technical Reports Server (NTRS)

    Grimsrud, Anders; Stephenson, Michael B.

    1987-01-01

    The Raster Graphics Display Library (RGDL) is a high-level subroutine package that gives the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to be used as stand-alone routines in a black-box type of environment. Six examples are presented which will teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed, and the use of each variable within each common block is discussed. A reference for the include files that are necessary to compile the display library is provided; each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general-purpose computer graphics display system that uses RGDL software, is also included.

  10. Comics & Graphic Novels

    ERIC Educational Resources Information Center

    Cleaver, Samantha

    2008-01-01

    Not so many years ago, comic books in school were considered the enemy. Students caught sneaking comics between the pages of bulky--and less engaging--textbooks were likely sent to the principal. Today, however, comics, including classics such as "Superman" but also their generally more complex, nuanced cousins, graphic novels, are not only…

  11. Mathematical Graphic Organizers

    ERIC Educational Resources Information Center

    Zollman, Alan

    2009-01-01

    As part of a math-science partnership, a university mathematics educator and ten elementary school teachers developed a novel approach to mathematical problem solving derived from research on reading and writing pedagogy. Specifically, research indicates that students who use graphic organizers to arrange their ideas improve their comprehension…

  12. Graphic Novels: A Roundup.

    ERIC Educational Resources Information Center

    Kan, Katherine L.

    1994-01-01

    Reviews graphic novels for young adults, including five titles from "The Adventures of Tintin," a French series that often uses ethnic and racial stereotypes which reflect the time in which they were published, and "Wolverine," a Marvel comic character adventure. (Contains six references.) (LRW)

  13. Graphically Enhanced Science Notebooks

    ERIC Educational Resources Information Center

    Minogue, James; Wiebe, Eric; Madden, Lauren; Bedward, John; Carter, Mike

    2010-01-01

    A common mode of communication in the elementary classroom is the science notebook. In this article, the authors outline the ways in which "graphically enhanced science notebooks" can help engage students in complete and robust inquiry. Central to this approach is deliberate attention to the efficient and effective use of student-generated…

  14. Printer Graphics Package

    NASA Technical Reports Server (NTRS)

    Blanchard, D. C.

    1986-01-01

    Printer Graphics Package (PGP) is a tool for making two-dimensional symbolic plots on a line printer. PGP was created to support development of a Heads-Up Display (HUD) simulation. Standard symbols were defined with the HUD in mind. Available symbols include circle, triangle, quadrangle, window, line, numbers, and text. Additional symbols are easily added or built up from available symbols.

  15. Generating ensembles and measuring mixing in a model granular system

    NASA Astrophysics Data System (ADS)

    Puckett, James G.; Lechenault, Frédéric; Daniels, Karen E.

    2009-06-01

    A major open question in the field of granular materials is the identification of relevant state variables which can predict macroscopic behavior. We experimentally investigate the mixing properties of an idealized granular liquid in the vicinity of its jamming transition, through the generation of ensembles of configurations under various boundary conditions. Our apparatus consists of a two-dimensional aggregate of particles which rearrange under agitation from the outer boundaries. As expected, the system acts like a slow liquid at low pressure or low packing fraction, and jams at higher pressure or high packing fraction. We characterize mixing in the system by computing the topological entropy of the braids formed by the trajectories of the grains. This entropy is shown to be well-defined and very sensitive to the approach to jamming, reflecting the dynamical arrest of the assembly.

  16. Bootstrapping mixed correlators in the 3D Ising model

    NASA Astrophysics Data System (ADS)

    Kos, Filip; Poland, David; Simmons-Duffin, David

    2014-11-01

    We study the conformal bootstrap for systems of correlators involving nonidentical operators. The constraints of crossing symmetry and unitarity for such mixed correlators can be phrased in the language of semidefinite programming. We apply this formalism to the simplest system of mixed correlators in 3D CFTs with a ℤ2 global symmetry. For the leading ℤ2-odd operator σ and ℤ2-even operator ɛ, we obtain numerical constraints on the allowed dimensions (Δσ, Δɛ) assuming that σ and ɛ are the only relevant scalars in the theory. These constraints yield a small closed region in (Δσ, Δɛ) space compatible with the known values in the 3D Ising CFT.

  17. Photonic states mixing beyond the plasmon hybridization model

    NASA Astrophysics Data System (ADS)

    Suryadharma, Radius N. S.; Iskandar, Alexander A.; Tjia, May-On

    2016-07-01

    A study is performed of the photonic-state mixing pattern in an insulator-metal-insulator cylindrical silver nanoshell and of its rich variations induced by changes in the geometry and dielectric media of the system, representing the combined influences of plasmon coupling strength and cavity effects. This study is performed in terms of the photonic local density of states (LDOS) calculated using the Green tensor method, in order to elucidate those combined effects. The energy profiles of the LDOS inside the dielectric core are shown to exhibit a consistently growing number of redshifted photonic states, due to enhanced plasmon-coupling-induced state mixing arising from decreased shell thickness, an increased cavity size effect, and a larger symmetry-breaking effect induced by an increased permittivity difference between the core and the background media. Further, an increase in cavity size leads to additional peaks that spread out toward the lower energy regime. A systematic analysis of those variations for a silver nanoshell with a fixed inner radius in a vacuum background reveals a certain pattern in the growing number of redshifted states, with an analytic expression for the corresponding energy downshifts, signifying a photonic-state mixing scheme beyond the commonly adopted plasmon hybridization scheme. Finally, a remarkable correlation is demonstrated between the LDOS energy profiles outside the shell and the corresponding scattering efficiencies.

  18. Deviations from tribimaximal neutrino mixing using a model with Δ(27) symmetry

    NASA Astrophysics Data System (ADS)

    Harrison, P. F.; Krishnan, R.; Scott, W. G.

    2014-07-01

    We present a model of neutrino mixing based on the flavor group Δ(27) in order to account for the observation of a nonzero reactor mixing angle (θ13). The model provides a common flavor structure for the charged-lepton and the neutrino sectors, giving their mass matrices a "circulant-plus-diagonal" form. Mass matrices of this form readily lead to mixing patterns with realistic deviations from tribimaximal mixing, including nonzero θ13. With the parameters constrained by existing measurements, our model predicts an inverted neutrino mass hierarchy. We obtain two distinct sets of solutions in which the atmospheric mixing angle lies in the first and the second octants. The first (second) octant solution predicts the lightest neutrino mass m3 ≈ 29 meV (m3 ≈ 65 meV) and the CP phase δCP ≈ −π/4 (δCP ≈ π/2), offering the possibility of large observable CP-violating effects in future experiments.

  19. Experimental constraints on the neutrino oscillations and a simple model of three-flavor mixing

    SciTech Connect

    Raczka, P.A.; Szymacha, A. ); Tatur, S. )

    1994-02-01

    A simple model of neutrino mixing is considered which contains only one right-handed neutrino field coupled, via the mass term, to the three usual left-handed fields. This is the simplest model that allows for three-flavor neutrino oscillations. The existing experimental limits on neutrino oscillations are used to obtain constraints on the two free mixing parameters of the model. A specific sum rule relating the oscillation probabilities of different flavors is derived.
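
    For reference, the textbook two-flavor oscillation probability that underlies most such experimental limits is easy to evaluate (this is the standard formula, not the paper's three-flavor model):

      import numpy as np

      def p_oscillation(sin2_2theta, dm2_ev2, l_km, e_gev):
          """Textbook two-flavor appearance probability:
          P = sin^2(2 theta) * sin^2(1.267 * dm^2 [eV^2] * L [km] / E [GeV])."""
          return sin2_2theta * np.sin(1.267 * dm2_ev2 * l_km / e_gev) ** 2

      # Example: atmospheric-scale parameters over a 735 km baseline.
      print(p_oscillation(1.0, 2.4e-3, 735.0, 2.0))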

  20. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    NASA Astrophysics Data System (ADS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in problems of adjusting parametric models of observables as functions of kinematic variables is presented. The ability to display a large number of experimental data sets and the models describing them is demonstrated with examples of data and models of observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of adjusted parametric models, together with the parameters of the best description of the data, is shown schematically. The DaMoScope codes are freely available.

  1. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    SciTech Connect

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V. Tkachenko, N. P.

    2015-12-15

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in problems of adjusting parametric models of observables as functions of kinematic variables is presented. The ability to display a large number of experimental data sets and the models describing them is demonstrated with examples of data and models of observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of adjusted parametric models, together with the parameters of the best description of the data, is shown schematically. The DaMoScope codes are freely available.

  2. Modeling relationships between calving traits: a comparison between standard and recursive mixed models

    PubMed Central

    2010-01-01

    Background The use of structural equation models for the analysis of recursive and simultaneous relationships between phenotypes has become more popular recently. The aim of this paper is to illustrate how these models can be applied in animal breeding to achieve parameterizations of different levels of complexity and, more specifically, to model phenotypic recursion between three calving traits: gestation length (GL), calving difficulty (CD) and stillbirth (SB). All recursive models considered here postulate heterogeneous recursive relationships between GL and the liabilities to CD and SB, and between the liability to CD and the liability to SB, depending on categories of GL phenotype. Methods Four models were compared in terms of goodness of fit and predictive ability: 1) standard mixed model (SMM), a model with unstructured (co)variance matrices; 2) recursive mixed model 1 (RMM1), assuming that residual correlations are due to the recursive relationships between phenotypes; 3) RMM2, assuming that correlations between residuals and contemporary groups are due to recursive relationships between phenotypes; and 4) RMM3, postulating that the correlations between genetic effects, contemporary groups and residuals are due to recursive relationships between phenotypes. Results For all the RMM considered, the estimates of the structural coefficients were similar. Results revealed a nonlinear relationship between GL and the liabilities both to CD and to SB, and a linear relationship between the liabilities to CD and SB. Differences in terms of goodness of fit and predictive ability of the models considered were negligible, suggesting that RMM3 is plausible. Conclusions The applications examined in this study suggest the plausibility of a nonlinear recursive effect from GL onto CD and SB. Also, the most restrictive model, RMM3, which assumes that the only cause of correlation is phenotypic recursion, performs as well as the others.
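
    To see what heterogeneous recursion means operationally, the toy simulation below generates phenotypes in which the slope from GL to the liability to CD changes across GL categories and the CD liability feeds linearly into the SB liability; the coefficients are invented, not estimates from the paper:

      import numpy as np

      # Toy generative sketch of heterogeneous recursion (all coefficients are
      # invented): the slope from GL to CD liability changes across GL
      # categories, and CD liability feeds linearly into SB liability.
      rng = np.random.default_rng(4)
      n = 10_000
      gl = rng.normal(280.0, 5.0, n)                 # gestation length (days)

      slope = np.select([gl < 275, gl > 285], [-0.20, 0.25], default=0.02)
      liab_cd = slope * (gl - 280.0) + rng.normal(0.0, 1.0, n)
      liab_sb = 0.4 * liab_cd + rng.normal(0.0, 1.0, n)

      cd = liab_cd > 1.0                             # observed categories via
      sb = liab_sb > 1.5                             # liability thresholds
      print(cd.mean(), sb.mean())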

  3. Mathematical, physical and numerical principles essential for models of turbulent mixing

    SciTech Connect

    Sharp, David Howland; Lim, Hyunkyung; Yu, Yan; Glimm, James G

    2009-01-01

    We propose mathematical, physical and numerical principles which are important for the modeling of turbulent mixing, especially the classical and well-studied Rayleigh-Taylor and Richtmyer-Meshkov instabilities, which involve acceleration-driven mixing of a fluid discontinuity layer by a steady acceleration or an impulsive force.

  4. User's Guide for Mixed-Size Sediment Transport Model for Networks of One-Dimensional Open Channels

    USGS Publications Warehouse

    Bennett, James P.

    2001-01-01

    This user's guide describes a mathematical model for predicting the transport of mixed sizes of sediment by flow in networks of one-dimensional open channels. The simulation package is useful for general sediment routing problems, prediction of erosion and deposition following dam removal, and scour in channels at road embankment crossings or other artificial structures. The model treats input hydrographs as stepwise steady-state, and the flow computation algorithm automatically switches between sub- and supercritical flow as dictated by channel geometry and discharge. A variety of boundary conditions, including weirs and rating curves, may be applied both external and internal to the flow network. The model may be used to compute flow around islands and through multiple openings in embankments, but the network must be 'simple' in the sense that the flow directions in all channels can be specified before simulation commences. The location and shape of channel banks are user specified, and all bed-elevation changes take place between these banks and above a user-specified bedrock elevation. Computation of sediment transport emphasizes the sand-size range (0.0625-2.0 millimeter) but the user may select any desired range of particle diameters including silt and finer (<0.0625 millimeter). As part of data input, the user may set the original bed-sediment composition of any number of layers of known thickness. The model computes the time evolution of total transport and the size composition of bed- and suspended-load sand through any cross section of interest. It also tracks bed-surface elevation and size composition. The model is written in the FORTRAN programming language for implementation on personal computers using the WINDOWS operating system and, along with certain graphical output display capability, is accessed from a graphical user interface (GUI). The GUI provides a framework for selecting input files and parameters for a number of components of the sediment-transport model.

  5. S3 discrete group as a source of the quark mass and mixing pattern in 331 models

    NASA Astrophysics Data System (ADS)

    Cárcamo Hernández, A. E.; Martinez, R.; Nisperuza, Jorge

    2015-02-01

    We propose a model based on the SU(3)C ⊗ SU(3)L ⊗ U(1)X (331) gauge symmetry with an extra S3 discrete group, which successfully accounts for the SM quark mass and mixing pattern. The observed hierarchy of the SM quark masses and quark mixing matrix elements arises from these discrete symmetries, which are broken at a very high scale by scalar singlets charged under them. The Cabibbo mixing arises from the down-type quark sector whereas the up-type quark sector generates the remaining quark mixing angles. The obtained magnitudes of the CKM matrix elements, the CP-violating phase, and the Jarlskog invariant are in agreement with the experimental data.

  6. A Simple Scheme to Implement a Nonlocal Turbulent Convection Model for Convective Overshoot Mixing

    NASA Astrophysics Data System (ADS)

    Zhang, Q. S.

    2016-02-01

    Classical “ballistic” overshoot models exhibit some contradictions and are not consistent with numerical simulations and asteroseismic studies. Asteroseismic studies imply that overshoot is a weak mixing process, and a diffusion model is suitable for describing it. The form of the diffusion coefficient is crucial. Because overshoot mixing is related to convective heat transport (i.e., entropy mixing), there should be some similarity between them. A recent overshoot mixing model shows consistency between composition mixing and entropy mixing in the overshoot region. A prerequisite for applying the model is to know the dissipation rate of turbulent kinetic energy. The dissipation rate can be worked out by solving turbulent convection models (TCMs). However, TCMs are difficult to apply because of numerical problems and their enormous time cost. In order to find a convenient way, we have used the asymptotic solution and simplified the TCM to a single linear equation for turbulent kinetic energy. This linear model is easy to implement in calculations of stellar evolution with negligible extra time cost. We have tested the linear model in stellar evolution and found that it closely reproduces the turbulent kinetic energy profile of the full TCM, as well as the diffusion coefficient, abundance profile, and stellar evolutionary tracks. We have also studied the effects of different values of the model parameters and found that the effect due to the modification of the temperature gradient in the overshoot region is slight.
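
    As a rough illustration of the quantities involved (an assumption-laden sketch, not the paper's actual coefficient: the exponential decay of turbulent kinetic energy beyond the convective boundary is a common asymptotic TCM behaviour, and the gradient-diffusion form D = C_mu k^2 / eps is the generic k-epsilon style closure):

    ```python
    # Sketch under stated assumptions: turbulent kinetic energy k decays
    # exponentially beyond the convective boundary (a common asymptotic TCM
    # behaviour), and the mixing diffusivity follows the generic gradient-
    # diffusion form D = C_mu * k**2 / eps. All values are illustrative (cgs).
    import numpy as np

    C_MU = 0.09   # standard k-epsilon closure constant (assumed here)

    def overshoot_diffusivity(r, r_cb, k_cb, decay_length, eps):
        """Diffusion-coefficient profile in the overshoot region r >= r_cb."""
        k = k_cb * np.exp(-(r - r_cb) / decay_length)  # turbulent kinetic energy
        return C_MU * k**2 / eps

    r = np.linspace(1.0, 1.1, 50) * 7e10    # radii just above the boundary (cm)
    D = overshoot_diffusivity(r, r_cb=7e10, k_cb=1e10, decay_length=7e8, eps=1e4)
    print(D[:5])
    ```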

  7. Graphics Processing Units (GPU) and the Goddard Earth Observing System atmospheric model (GEOS-5): Implementation and Potential Applications

    NASA Technical Reports Server (NTRS)

    Putnam, William M.

    2011-01-01

    Earth system models like the Goddard Earth Observing System model (GEOS-5) have been pushing the limits of large clusters of multi-core microprocessors, producing breathtaking fidelity in resolving cloud systems at a global scale. GPU computing presents an opportunity for improving the efficiency of these leading-edge models. A GPU implementation of GEOS-5 will facilitate the use of cloud-system-resolving resolutions in data assimilation and weather prediction, at resolutions near 3.5 km, improving our ability to extract detailed information from high-resolution satellite observations and ultimately produce better weather and climate predictions.

  8. Mixing-State Sensitivity of Aerosol Absorption in the EMAC Chemistry-Climate Model

    NASA Astrophysics Data System (ADS)

    Klingmueller, Klaus; Steil, Benedikt; Bruehl, Christoph; Tost, Holger; Lelieveld, Jos

    2014-05-01

    The modelling of aerosol radiative forcing is a major cause of uncertainty in the assessment of global and regional atmospheric energy budgets and climate change. One reason is the strong dependence of the aerosol optical properties on the mixing state of aerosol components like black carbon and sulphates. Using the atmospheric chemistry-climate model EMAC, we study the radiative transfer assuming various mixing states. The aerosol optics code we employ builds on the AEROPT submodel, which assumes homogeneous internal mixing and utilises the volume-average refractive index mixing rule. We have extended the submodel to additionally account for external mixing, partial external mixing and multilayered particles. Furthermore, we have implemented the volume-average dielectric constant and Maxwell Garnett mixing rules. We present results from regional case studies employing a new column version of the aerosol optical properties and radiative transfer code of EMAC, considering columns over China, India and Africa. The regional results are complemented by global results from a simulation for the year 2005. Our findings corroborate much stronger absorption by internal than by external mixtures. A well-mixed (homogeneous internal) aerosol is often a good approximation for particles with a black carbon core, whereas particles with black carbon at the surface absorb significantly less. We therefore conclude that the inner structure of internally mixed particles should generally be taken into account.
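
    The two mixing rules named above have compact closed forms. The sketch below is a hedged, generic-optics illustration (not EMAC/AEROPT code); the black-carbon and sulphate refractive indices are commonly used literature values, assumed here:

    ```python
    # Generic effective-medium mixing rules. Refractive index m relates to the
    # complex dielectric function by eps = m**2.
    import numpy as np

    def volume_average_index(m, f):
        """Volume-average refractive index rule: m_eff = sum_i f_i * m_i."""
        m, f = np.asarray(m, complex), np.asarray(f, float)
        return np.sum(f * m)

    def maxwell_garnett(m_incl, m_host, f_incl):
        """Maxwell Garnett rule for an inclusion (e.g., a black-carbon core)
        of volume fraction f_incl embedded in a host medium."""
        e_i, e_h = m_incl**2, m_host**2
        e_eff = e_h * ((e_i + 2*e_h + 2*f_incl*(e_i - e_h))
                       / (e_i + 2*e_h - f_incl*(e_i - e_h)))
        return np.sqrt(e_eff)

    # Example: 10% black carbon (strongly absorbing) in a sulphate-like host.
    m_bc, m_su = 1.95 + 0.79j, 1.53 + 1e-7j   # assumed literature-style values
    print(volume_average_index([m_bc, m_su], [0.1, 0.9]))
    print(maxwell_garnett(m_bc, m_su, 0.1))
    ```

    The imaginary part of the resulting effective index, which controls absorption, differs noticeably between the two rules, which is the sensitivity the study quantifies.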

  9. Mixed discrete-continuum models: A summary of experiences in test interpretation and model prediction

    NASA Astrophysics Data System (ADS)

    Carrera, Jesus; Martinez-Landa, Lurdes

    A number of conceptual models have been proposed for simulating groundwater flow and solute transport in fractured systems. They span the range from continuum porous equivalents to discrete channel networks. The objective of this paper is to show the application of an intermediate approach (mixed discrete-continuum models) to three cases. The approach consists of identifying the dominant fractures (i.e., those carrying most of the flow) and modeling them explicitly as two-dimensional features embedded in a three-dimensional continuum representing the remaining fracture network. The method is based on the observation that most of the water flows through a few fractures, so that explicitly modeling them should help in properly accounting for a large portion of the total water flow. The first case refers to the Chalk River Block (Canada), in which a model calibrated against a long crosshole test successfully predicted the response to other tests performed in different fractures. The second case refers to hydraulic characterization of a large-scale (about 2 km) site at El Cabril (Spain). A model calibrated against long records (five years) of natural head fluctuations could be used to predict a one-month-long hydraulic test and head variations after construction of a waste disposal site. The last case refers to hydraulic characterization performed at the Grimsel Test Site in the context of the Full-scale Engineered Barrier EXperiment (FEBEX). Extensive borehole and geologic mapping data were used to build a model that was calibrated against five crosshole tests. The resulting large-scale model predicted steady-state heads and inflows into the test tunnel. The conclusion is that, in all cases, the difficulties associated with the mixed discrete-continuum approach could be overcome and that the resulting models displayed some predictive capabilities.

  10. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.

    PubMed

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805

  11. Real Longitudinal Data Analysis for Real People: Building a Good Enough Mixed Model

    PubMed Central

    Cheng, Jing; Edwards, Lloyd J.; Maldonado-Molina, Mildred M.; Komro, Kelli A.; Muller, Keith E.

    2009-01-01

    Summary Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help conquer the complexity. Centering, scaling, and full-rank coding of all predictor variables radically improves the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps detect and solve related computational problems. The approach helps fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, and for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. PMID:20013937
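
    A minimal sketch of the centring-and-scaling advice in practice (illustrative only: the DataFrame is made up, and statsmodels' MixedLM is used as a stand-in for whatever software the authors used):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    def center_scale(df, cols):
        """Replace each predictor by its z-score to aid convergence/accuracy."""
        out = df.copy()
        for c in cols:
            out[c] = (df[c] - df[c].mean()) / df[c].std()
        return out

    # Hypothetical long-format longitudinal data: one row per subject per visit.
    df = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "age":     [11, 12, 13] * 4,
        "dose":    [0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0],
        "y":       [2.1, 2.5, 3.0, 1.8, 2.0, 2.6, 2.4, 2.9, 2.7, 2.2, 2.8, 2.5],
    })
    df = center_scale(df, ["age", "dose"])

    # Random intercept per subject (the covariance structure); fixed effects
    # for age and dose (the mean structure).
    result = smf.mixedlm("y ~ age + dose", df, groups=df["subject"]).fit()
    print(result.summary())
    ```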

  12. Unit physics performance of a mix model in Eulerian fluid computations

    SciTech Connect

    Vold, Erik; Douglass, Rod

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1], hereafter denoted [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass-averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass-averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.

  13. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  14. EMGD-FE: an open source graphical user interface for estimating isometric muscle forces in the lower limb using an EMG-driven model

    PubMed Central

    2014-01-01

    Background This paper describes the “EMG Driven Force Estimator (EMGD-FE)”, a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step-by-step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. Results An example of the application’s functions is presented using the quadriceps femoris muscle. Individual muscle force estimations for the four components as well as the knee isometric torque are shown. Conclusions The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues. PMID:24708668
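
    A minimal sketch of one step in such a pipeline (textbook-style first-order activation dynamics driven by a processed EMG envelope; the time constants and excitation trace are illustrative assumptions, not EMGD-FE defaults):

    ```python
    # Hedged sketch: first-order activation dynamics a'(t) driven by a processed
    # EMG excitation u(t), one ingredient of a Hill-type EMG-driven model.
    from scipy.integrate import solve_ivp

    TAU_ACT, TAU_DEACT = 0.010, 0.040   # activation/deactivation constants (s)

    def activation_rhs(t, a, u_of_t):
        u = u_of_t(t)
        tau = TAU_ACT if u > a[0] else TAU_DEACT   # activate faster than relax
        return [(u - a[0]) / tau]

    u = lambda t: 0.6 if 0.1 <= t <= 0.6 else 0.0  # rectified EMG envelope burst

    sol = solve_ivp(activation_rhs, (0.0, 1.0), [0.0], args=(u,), max_step=1e-3)
    print(f"peak activation: {sol.y[0].max():.2f}")
    # Isometric force then follows roughly as F = a(t) * F_max * f_l(l), the
    # force-length relation at the (fixed) fibre length.
    ```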

  15. Realtime cerebellum: a large-scale spiking network model of the cerebellum that runs in realtime using a graphics processing unit.

    PubMed

    Yamazaki, Tadashi; Igarashi, Jun

    2013-11-01

    The cerebellum plays an essential role in adaptive motor control. Once we are able to build a cerebellar model that runs in realtime, meaning that a computer simulation of 1 s in the simulated world completes within 1 s in the real world, the cerebellar model could be used as a realtime adaptive neural controller for physical hardware such as humanoid robots. In this paper, we introduce "Realtime Cerebellum (RC)", a new implementation on a graphics processing unit (GPU) of our large-scale spiking network model of the cerebellum, which was originally built to study cerebellar mechanisms for simultaneous gain and timing control and which acted as a general-purpose supervised learning machine for spatiotemporal information, in the manner known as reservoir computing. Owing to the massive parallel computing capability of a GPU, RC runs in realtime while reproducing qualitatively the same simulation results for Pavlovian delay eyeblink conditioning as the previous version. RC is adopted as a realtime adaptive controller of a humanoid robot, which is instructed to learn online the proper timing to swing a bat and hit a flying ball. These results suggest that RC provides a means to apply the computational power of the cerebellum as a versatile supervised learning machine to engineering applications.

  16. The Brown Muck of $B^0$ and $B^0_s$ Mixing: Beyond the Standard Model

    SciTech Connect

    Bouchard, Christopher Michael

    2011-01-01

    Standard Model contributions to neutral $B$ meson mixing begin at the one loop level where they are further suppressed by a combination of the GIM mechanism and Cabibbo suppression. This combination makes $B$ meson mixing a promising probe of new physics, where as yet undiscovered particles and/or interactions can participate in the virtual loops. Relating underlying interactions of the mixing process to experimental observation requires a precise calculation of the non-perturbative process of hadronization, characterized by hadronic mixing matrix elements. This thesis describes a calculation of the hadronic mixing matrix elements relevant to a large class of new physics models. The calculation is performed via lattice QCD using the MILC collaboration's gauge configurations with $2+1$ dynamical sea quarks.

  17. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    NASA Astrophysics Data System (ADS)

    Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo

    2014-03-01

    A simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, is evaluated in this paper using observations from a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as one pixel of a satellite image to evaluate the model for the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model itself and evaluation of the inversion of the model to retrieve component temperatures. For the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the model works well under most conditions. For the model inversion, the RMSE between the retrieved and observed vegetation temperatures is 1.6 K, and the RMSE between the observed and retrieved soil temperatures is 2.0 K. A sensitivity analysis with respect to fractional cover shows that the inversion retrieves both soil and vegetation temperatures most accurately under intermediate fractional cover conditions.
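
    A hedged sketch of the two-component idea (a generic formulation, not the authors' code: emission is assumed to mix linearly in blackbody radiance, approximated here with the T^4 law, and component temperatures are recovered by least squares from multi-angle observations):

    ```python
    # Generic two-component linear mixing and least-squares inversion.
    import numpy as np

    def simulate_bt(f_veg, T_veg, T_soil):
        """Brightness temperature for vegetation cover fractions f_veg."""
        rad = f_veg * T_veg**4 + (1.0 - f_veg) * T_soil**4
        return rad**0.25

    # Multi-angle observations: vegetation fraction seen at each view angle
    # (assumed values for illustration).
    f_veg = np.array([0.3, 0.5, 0.7])
    bt_obs = simulate_bt(f_veg, T_veg=300.0, T_soil=310.0)

    # Linear least squares in radiance space for (T_veg**4, T_soil**4).
    A = np.column_stack([f_veg, 1.0 - f_veg])
    x, *_ = np.linalg.lstsq(A, bt_obs**4, rcond=None)
    T_veg_hat, T_soil_hat = x**0.25
    print(T_veg_hat, T_soil_hat)   # recovers ~300.0 and ~310.0
    ```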

  18. A GRAPHICAL DIAGNOSTIC METHOD FOR ASSESSING THE ROTATION IN FACTOR ANALYTICAL MODELS OF ATMOSPHERIC POLLUTION. (R831078)

    EPA Science Inventory

    Factor analytic tools such as principal component analysis (PCA) and positive matrix factorization (PMF), suffer from rotational ambiguity in the results: different solutions (factors) provide equally good fits to the measured data. The PMF model imposes non-negativity of both...

  19. A quantitative approach to combine sources in stable isotope mixing models

    EPA Science Inventory

    Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
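
    For background (standard material, not taken from the truncated abstract above): with n sources and one isotope, the mixing model is delta_mix = sum_i f_i * delta_i with sum_i f_i = 1, so two sources are exactly determined while many sources are underdetermined, which motivates combining sources. A minimal sketch with made-up delta13C values:

    ```python
    # Two-source, one-isotope mixing model: source proportions are exactly
    # determined. Values are made up for illustration (delta13C, permil).
    d_mix, d_1, d_2 = -24.0, -28.0, -12.0
    f_1 = (d_mix - d_2) / (d_1 - d_2)       # proportion of source 1
    print(f"source 1: {f_1:.2f}, source 2: {1 - f_1:.2f}")   # 0.75 / 0.25
    ```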

  20. VISUALIZATION-BASED ANALYSIS FOR A MIXED-INHIBITION BINARY PBPK MODEL: DETERMINATION OF INHIBITION MECHANISM

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine mechanism of the metabolic interactions occurring during simultaneous inhalation exposures to the organic solvents chloroform and trichloroethylene (TCE).

    V...

  1. Mixed meal modeling and disturbance rejection in type I diabetic patients.

    PubMed

    Roy, Anirban; Parker, Robert S

    2006-01-01

    A mixed meal model was developed to capture the gut absorption of glucose, protein, and free fatty acid (FFA) from a mixed meal into the circulatory system. The output of the meal model served as a disturbance to the extended minimal model, which successfully captured plasma FFA, glucose and insulin concentration dynamics and interactions. A model predictive controller (MPC) was synthesized to reject meal disturbances and maintain normoglycemia. The dynamic fit of blood glucose after mixed meal consumption was consistent with the published data. The results from the closed-loop simulations were also promising; the MPC was able to maintain the glucose concentration within the normoglycemic range during and after consumption of a mixed meal.
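
    A hedged sketch of the disturbance-model idea (the classic Bergman minimal model for glucose, with a simple exponential gut-absorption input standing in for the paper's richer mixed meal model; all parameter values are textbook-style assumptions):

    ```python
    # Bergman-style minimal model for glucose G and insulin action X, with a
    # meal-appearance term entering as a disturbance. Values are illustrative.
    import numpy as np
    from scipy.integrate import solve_ivp

    P1, P2, P3 = 0.03, 0.02, 1.3e-5   # 1/min; glucose/insulin action parameters
    GB, IB = 90.0, 10.0               # basal glucose (mg/dl) and insulin (mU/l)

    def meal_rate(t, dose=250.0, k_abs=0.03):
        """Meal glucose appearance (mg/dl/min); integrates to `dose` overall."""
        return dose * k_abs * np.exp(-k_abs * t)

    def minimal_model(t, y, insulin):
        G, X = y
        dG = -P1 * (G - GB) - X * G + meal_rate(t)   # meal is the disturbance
        dX = -P2 * X + P3 * (insulin(t) - IB)        # remote insulin action
        return [dG, dX]

    insulin = lambda t: IB + 15.0 * np.exp(-0.02 * t)    # assumed bolus decay
    sol = solve_ivp(minimal_model, (0.0, 300.0), [GB, 0.0],
                    args=(insulin,), max_step=1.0)
    print(f"peak glucose: {sol.y[0].max():.0f} mg/dl")
    ```

    An MPC layer would replace the fixed insulin profile with an input optimized at each step against predictions from this model.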

  2. The Birth Environment of the Solar System Inferred from a "Mixing-Fallback" Supernova Model

    NASA Astrophysics Data System (ADS)

    Miki, J.; Takigawa, A.; Tachibana, S.; Huss, G. R.

    2007-03-01

    The birth environment of the solar system was evaluated from the abundances of short-lived radionuclides and a mixing-fallback supernova model. The solar system may have formed within several parsecs of a massive star of more than 20 solar masses.

  3. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS: A REPLY TO ROBBINS, HILDERBRAND AND FARLEY (2002)

    EPA Science Inventory

    Phillips & Koch (2002) outlined a new stable isotope mixing model which incorporates differences in elemental concentrations in the determinations of source proportions in a mixture. They illustrated their method with sensitivity analyses and two examples from the wildlife ecolog...
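
    The concentration-dependent model under discussion weights each source by its elemental concentration; in the form given by Phillips & Koch (2002), reconstructed here from the general literature rather than from the truncated abstract, the isotopic signature of the mixture is

    \[
    \delta_{\mathrm{mix}} = \frac{\sum_i f_i\, C_i\, \delta_i}{\sum_i f_i\, C_i},
    \qquad \sum_i f_i = 1,
    \]

    where \(f_i\) is the biomass proportion of source \(i\), \(C_i\) its elemental concentration, and \(\delta_i\) its isotopic signature; the standard linear model is recovered when all \(C_i\) are equal.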

  4. Space Spurred Computer Graphics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record CAD (computer aided design) and CAM (computer aided manufacturing) equipment, to update maps and produce computer generated animation.

  5. Graphical Contingency Analysis Tool

    SciTech Connect

    2010-03-02

    GCA is a visual analytic tool for power grid contingency analysis that provides decision support for power grid operations. GCA allows power grid operators to quickly gain situational awareness of the power grid by converting large amounts of operational data to the graphic domain with a color-contoured map; identify system trends and foresee and discern emergencies by performing trending analysis; identify the relationships between system configurations and affected assets by conducting clustering analysis; and identify the best action by interactively evaluating candidate actions.

  6. Graphic Grown Up

    ERIC Educational Resources Information Center

    Kim, Ann

    2009-01-01

    It's no secret that children and YAs are clued in to graphic novels (GNs) and that comics-loving adults are positively giddy that this format is getting the recognition it deserves. Still, there is a whole swath of library card-carrying grown-up readers out there with no idea where to start. Splashy movies such as "300" and "Spider-Man" and their…

  7. Mixing in microchannels based on hydrodynamic focusing and time-interleaved segmentation: modelling and experiment.

    PubMed

    Nguyen, Nam-Trung; Huang, Xiaoyang

    2005-11-01

    This paper theoretically and experimentally investigates a micromixer based on combined hydrodynamic focusing and time-interleaved segmentation. Both hydrodynamic focusing and time-interleaved segmentation are used in the present study to reduce mixing path, to shorten mixing time, and to enhance mixing quality. While hydrodynamic focusing reduces the transversal mixing path, time-interleaved sequential segmentation shortens the axial mixing path. With the same viscosity in the different streams, the focused width can be adjusted by the flow rate ratio. The axial mixing path or the segment length can be controlled by the switching frequency and the mean velocity of the flow. Mixing ratio can be controlled by both flow rate ratio and pulse width modulation of the switching signal. This paper first presents a time-dependent two-dimensional analytical model for the mixing concept. The model considers an arbitrary mixing ratio between solute and solvent as well as the axial Taylor-Aris dispersion. A micromixer was designed and fabricated based on lamination of four polymer layers. The layers were machined using a CO2 laser. Time-interleaved segmentation was realized by two piezoelectric valves. The sheath streams for hydrodynamic focusing are introduced through the other two inlets. A special measurement set-up was designed with synchronization of the mixer's switching signal and the camera's trigger signal. The set-up allows a relatively slow and low-resolution CCD camera to freeze and to capture a large transient concentration field. The concentration profile along the mixing channel agrees qualitatively well with the analytical model. The analytical model and the device promise to be suitable tools for studying Taylor-Aris dispersion near the entrance of a flat microchannel.
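
    For reference, the axial Taylor-Aris dispersion invoked above has a compact textbook form for flow between parallel plates (an addition here; the paper's two-dimensional, time-dependent model is more detailed): D_eff = D (1 + Pe^2/210) with Pe = U h / D. A quick sketch with illustrative numbers:

    ```python
    # Textbook Taylor-Aris effective dispersion for pressure-driven flow
    # between parallel plates; numbers below are illustrative.
    D = 1e-9          # molecular diffusivity, m^2/s (small molecule in water)
    U = 1e-3          # mean axial velocity, m/s
    h = 100e-6        # channel depth, m

    Pe = U * h / D                     # Peclet number = 100
    D_eff = D * (1.0 + Pe**2 / 210.0)  # axial smearing of the segments
    print(f"Pe = {Pe:.0f}, D_eff/D = {D_eff / D:.1f}")   # ~48.6
    ```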

  8. Career Opportunities in Computer Graphics.

    ERIC Educational Resources Information Center

    Langer, Victor

    1983-01-01

    Reviews the impact of computer graphics on industrial productivity. Details the computer graphics technician curriculum at Milwaukee Area Technical College and the cooperative efforts of business and industry to fund and equip the program. (SK)

  9. An explicit SU(12) family and flavor unification model with natural fermion masses and mixings

    SciTech Connect

    Albright, Carl H.; Feger, Robert P.; Kephart, Thomas W.

    2012-07-01

    We present an SU(12) unification model with three light chiral families, avoiding any external flavor symmetries. The hierarchy of quark and lepton masses and mixings is explained by higher dimensional Yukawa interactions involving Higgs bosons that contain SU(5) singlet fields with VEVs about 50 times smaller than the SU(12) unification scale. The presented model has been analyzed in detail and found to be in very good agreement with the observed quark and lepton masses and mixings.

  10. Sgoldstino-Higgs mixing in models with low-scale supersymmetry breaking

    NASA Astrophysics Data System (ADS)

    Astapov, K. O.; Demidov, S. V.

    2015-01-01

    We consider a supersymmetric extension of the Standard Model with low-scale supersymmetry breaking. Besides the usual superpartners, it contains an additional chiral goldstino supermultiplet whose scalar components (sgoldstinos) can mix with scalars from the Higgs sector of the model. We show that this mixing can have a considerable impact on the phenomenology of the lightest Higgs boson and the scalar sgoldstino. In particular, the latter can be a good candidate for explaining the 2σ LEP excess at a mass around 98 GeV.

  11. Computation of turbulent high speed mixing layers using a two-equation turbulence model

    NASA Technical Reports Server (NTRS)

    Narayan, J. R.; Sekar, B.

    1991-01-01

    A two-equation turbulence model was extended to be applicable to compressible flows. A compressibility correction, based on modelling the dilatational terms in the Reynolds stress equations, was included in the model. The model is used in conjunction with the SPARK code for the computation of high-speed mixing layers. The observed trend of decreasing growth rate with increasing convective Mach number in compressible mixing layers is well predicted by the model. The predictions agree well with the experimental data and with results from a compressible Reynolds stress model. The present model appears to be well suited for the study of compressible free shear flows. Preliminary results obtained for reacting mixing layers are included.
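
    For reference (a standard definition, added here rather than quoted from the abstract), the convective Mach number of a two-stream mixing layer is

    \[
    M_c = \frac{U_1 - U_2}{a_1 + a_2},
    \]

    where \(U_1, U_2\) are the free-stream velocities and \(a_1, a_2\) the corresponding sound speeds; measured growth rates fall increasingly below the incompressible value as \(M_c\) grows, which is the trend the corrected model reproduces.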

  12. Integrating Commercial Off-The-Shelf (COTS) graphics and extended memory packages with CLIPS

    NASA Technical Reports Server (NTRS)

    Callegari, Andres C.

    1990-01-01

    This paper addresses the question of how to mix CLIPS with graphics and how to overcome the PC's memory limitations by using the extended memory available in the computer. By adding graphics and extended memory capabilities, CLIPS can be converted into a complete and powerful system development tool on the most economical and popular computer platform. New models of PCs have processing capabilities and graphics resolutions that cannot be ignored and should be used to the fullest. CLIPS is a powerful expert system development tool, but it cannot be complete without the support of a graphics package needed to create user interfaces and general-purpose graphics, or without enough memory to handle large knowledge bases. A well-known limitation of the PC is that CLIPS can use only 640 KB of real memory, but that problem can now be solved by developing a version of CLIPS that uses extended memory. The user has access to up to 16 MB of memory on 80286-based computers and to practically all the available memory (4 GB) on computers that use the 80386 processor. So if we give CLIPS a self-configuring graphics package that automatically detects the graphics hardware and pointing device present in the computer, and add the extended memory that exists in the computer (with no special hardware needed), the user will be able to create more powerful systems at a fraction of the cost, on the most popular, portable, and economical platform available: the PC.

  13. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    SciTech Connect

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules (N = 1-3) were explicitly included in the ab initio description of the reacting system, with the remainder of the solvent modelled implicitly as a continuum. Both COSMO and QM/MM-FEP reproduce the observed free energy, Delta G_obs, within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to become increasingly inaccurate as more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.

  15. Graphical environment for DAQ simulations

    NASA Astrophysics Data System (ADS)

    Wang, Chung-Ching; Booth, Alexander W.; Chen, Yen-Min; Botlo, Michael

    1994-02-01

    At the Superconducting Super Collider Laboratory (SSCL) a tool called DAQSIM has been developed to study the behavior of data acquisition (DAQ) systems. This paper reports and discusses the use of graphics in DAQSIM. DAQSIM graphics includes a graphical user interface (GUI), animation, debugging, and control facilities. DAQSIM graphics not only provides a convenient DAQ simulation environment; it also serves as an efficient manager in simulation development and verification.

  16. A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.

    2010-03-01

    Probability density function (PDF) methods are an established tool applied for the simulation of turbulent mixing and turbulent reactive flows. Mixing models are required to close the molecular diffusion term in the PDF transport equation. From the nature of molecular diffusion, several requirements or design criteria can be derived for mixing models. All current models have certain shortcomings with respect to these requirements. A new mixing model is presented which fully satisfies almost all requirements. It conserves the mean of an inert scalar, reduces its scalar variance, and relaxes closely to a Gaussian scalar PDF. Multiple inert scalars without differential diffusion effects evolve independently and are kept bounded within their allowable region. Mixing is conditional on the velocity and particle scalar trajectories are continuous in time leading to a model that is local in a weak sense. Validation tests show that the model can reproduce differential diffusion effects and mixing rate dependencies due to variable initial scalar length scales or Reynolds and Schmidt number variations.
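
    For orientation, here is a minimal sketch of the classic IEM (interaction by exchange with the mean) closure, the baseline against which such design criteria are usually checked; this is deliberately not the paper's new model, and the numbers are illustrative:

    ```python
    # Classic IEM mixing model: each notional particle's scalar relaxes toward
    # the ensemble mean, d(phi_i)/dt = -(C_phi/2) * (phi_i - <phi>) / tau.
    # IEM conserves the mean and decays the variance, but (unlike the model
    # proposed here) it does not relax the scalar PDF toward a Gaussian.
    import numpy as np

    rng = np.random.default_rng(0)
    phi = rng.choice([0.0, 1.0], size=10_000)  # double-delta initial scalar PDF
    C_PHI, TAU, DT = 2.0, 1.0, 0.01

    for _ in range(500):                       # advance 5 mixing time scales
        phi += -0.5 * C_PHI * (phi - phi.mean()) / TAU * DT

    print(f"mean = {phi.mean():.3f}, variance = {phi.var():.4f}")
    # Mean stays ~0.5; variance decays like exp(-C_phi * t / tau).
    ```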

  17. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within are working utilities that can be used to transfer graphic images across the network. Whether graphic images represent satellite observations or theoretical modeling, and whether they are of a device-dependent or device-independent type, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.

  18. Graphic Novels and School Libraries

    ERIC Educational Resources Information Center

    Rudiger, Hollis Margaret; Schliesman, Megan

    2007-01-01

    School libraries serving children and teenagers today should be committed to collecting graphic novels to the extent that their budgets allow. However, the term "graphic novel" is enough to make some librarians--not to mention administrators and parents--pause. Graphic novels are simply book-length comics. They can be works of fiction or…

  19. Selecting Mangas and Graphic Novels

    ERIC Educational Resources Information Center

    Nylund, Carol

    2007-01-01

    The decision to add graphic novels, and particularly the Japanese style called manga, was one the author debated for a long time. In this article, the author shares her experience of purchasing graphic novels and manga for her library collection, and describes how they have revitalized the library.

  20. Low Cost Graphics. Second Edition.

    ERIC Educational Resources Information Center

    Tinker, Robert F.

    This manual describes the CALM TV graphics interface, a low-cost means of producing quality graphics on an ordinary TV. The system permits the output of data in graphic as well as alphanumeric form and the input of data from the face of the TV using a light pen. The integrated circuits required in the interface can be obtained from standard…