Sample records for unified computational framework

  1. An Unified Multiscale Framework for Planar, Surface, and Curve Skeletonization.

    PubMed

    Jalba, Andrei C; Sobiecki, Andre; Telea, Alexandru C

    2016-01-01

    Computing skeletons of 2D shapes, and medial surface and curve skeletons of 3D shapes, is a challenging task. In particular, there is no unified framework that detects all types of skeletons using a single model, and also produces a multiscale representation which allows one to progressively simplify, or regularize, all skeleton types. In this paper, we present such a framework. We model skeleton detection and regularization by a conservative mass transport process from a shape's boundary to its surface skeleton, next to its curve skeleton, and finally to the shape center. The resulting density field can be thresholded to obtain a multiscale representation of progressively simplified surface, or curve, skeletons. We detail a numerical implementation of our framework which is demonstrably stable and has high computational efficiency. We demonstrate our framework on several complex 2D and 3D shapes.
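
    A minimal sketch of the thresholding step described above, assuming scikit-image is available; the boundary distance is used as a stand-in importance field, since the paper's actual density comes from its mass-transport model:

      # Multiscale simplification by thresholding an importance field defined on
      # the skeleton. The distance-to-boundary used here is only a stand-in for
      # the paper's mass-transport density.
      import numpy as np
      from skimage.morphology import medial_axis

      shape = np.zeros((120, 200), dtype=bool)      # toy binary shape
      shape[30:90, 20:180] = True
      shape[20:30, 90:110] = True                   # small bump creating extra branches

      skeleton, distance = medial_axis(shape, return_distance=True)
      importance = distance * skeleton              # nonzero only on skeleton pixels

      # Raising the threshold progressively removes less important branches.
      for tau in [0.0, 5.0, 10.0, 20.0]:
          simplified = importance > tau
          print(f"tau={tau:5.1f}  skeleton pixels kept: {int(simplified.sum())}")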

  2. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  3. A unified framework for heat and mass transport at the atomic scale

    NASA Astrophysics Data System (ADS)

    Ponga, Mauricio; Sun, Dingyi

    2018-04-01

    We present a unified framework to simulate heat and mass transport in systems of particles. The proposed framework is based on kinematic mean field theory and uses a phenomenological master equation to compute effective transport rates between particles without the need to evaluate operators. We exploit this advantage and apply the model to simulate transport phenomena at the nanoscale. We demonstrate that, when calibrated to experimentally-measured transport coefficients, the model can accurately predict transient and steady state temperature and concentration profiles even in scenarios where the length of the device is comparable to the mean free path of the carriers. Through several example applications, we demonstrate the validity of our model for all classes of materials, including ones that, until now, would have been outside the domain of computational feasibility.
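
    A minimal sketch of the master-equation idea, not the authors' calibrated model: temperatures on a 1-D chain of particles evolve as dT_i/dt = sum_j k_ij (T_j - T_i), with an illustrative uniform rate k and fixed boundary temperatures:

      # Phenomenological master equation for heat transport between particles,
      # integrated with explicit Euler steps. Rates, geometry and boundary
      # conditions are illustrative only.
      import numpy as np

      n = 50                          # number of particles in the chain
      k = 0.5                         # illustrative exchange rate between neighbours
      T = np.full(n, 300.0)           # initial temperature field (K)
      T[0] = 400.0                    # hot boundary particle

      dt, steps = 0.1, 2000
      for _ in range(steps):
          dT = np.zeros(n)
          dT[:-1] += k * (T[1:] - T[:-1])   # exchange with right neighbour
          dT[1:]  += k * (T[:-1] - T[1:])   # exchange with left neighbour
          T = T + dt * dT
          T[0], T[-1] = 400.0, 300.0        # hold boundary temperatures fixed

      print("near-steady-state profile (every 10th particle):")
      print(np.round(T[::10], 1))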

  4. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
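
    A minimal EM sketch for the simplest member of this model class, an intercept-only zero-inflated Poisson; covariates, other inflated mixtures, GEEs, and the standard-error computations discussed in the article are omitted:

      # EM for an intercept-only zero-inflated Poisson: with probability pi an
      # observation is a structural zero, otherwise it is Poisson(lam).
      import numpy as np

      rng = np.random.default_rng(0)
      n, true_pi, true_lam = 5000, 0.3, 2.5
      y = np.where(rng.random(n) < true_pi, 0, rng.poisson(true_lam, n))

      pi, lam = 0.5, 1.0                      # crude starting values
      for _ in range(200):
          # E-step: posterior probability that each observed zero is structural
          p_zero = pi + (1 - pi) * np.exp(-lam)
          z = np.where(y == 0, pi / p_zero, 0.0)
          # M-step: update mixture weight and Poisson mean
          pi = z.mean()
          lam = ((1 - z) * y).sum() / (1 - z).sum()

      print(f"estimated pi  = {pi:.3f} (true {true_pi})")
      print(f"estimated lam = {lam:.3f} (true {true_lam})")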

  5. Toward a unifying framework for evolutionary processes.

    PubMed

    Paixão, Tiago; Badkobeh, Golnaz; Barton, Nick; Çörüş, Doğan; Dang, Duc-Cuong; Friedrich, Tobias; Lehre, Per Kristian; Sudholt, Dirk; Sutton, Andrew M; Trubenová, Barbora

    2015-10-21

    The theory of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been independently obtained in both fields and many others are unique to its respective field. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results to both fields. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
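
    A toy illustration of the component-wise decomposition the framework formalizes, with interchangeable selection and variation operators on a ONEMAX problem; the operators and parameters are arbitrary choices, not those of the paper:

      # A tiny evolutionary loop whose components (selection, variation) are
      # pluggable, in the spirit of classifying evolutionary operators.
      import random

      def onemax(x):                        # fitness: number of ones
          return sum(x)

      def tournament_selection(pop, fit, k=2):
          return max(random.sample(list(zip(pop, fit)), k), key=lambda p: p[1])[0]

      def bitflip_mutation(x, rate):
          return [b ^ (random.random() < rate) for b in x]

      def evolve(n=40, pop_size=30, gens=60, selection=tournament_selection,
                 variation=bitflip_mutation):
          pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
          for _ in range(gens):
              fit = [onemax(x) for x in pop]
              pop = [variation(selection(pop, fit), 1.0 / n) for _ in range(pop_size)]
          return max(onemax(x) for x in pop)

      random.seed(1)
      print("best fitness:", evolve(), "of", 40)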

  6. The semiotics of medical image Segmentation.

    PubMed

    Baxter, John S H; Gibson, Eli; Eagleson, Roy; Peters, Terry M

    2018-02-01

    As the interaction between clinicians and computational processes increases in complexity, more nuanced mechanisms are required to describe how their communication is mediated. Medical image segmentation in particular affords a large number of distinct loci for interaction which can act on a deep, knowledge-driven level which complicates the naive interpretation of the computer as a symbol processing machine. Using the perspective of the computer as dialogue partner, we can motivate the semiotic understanding of medical image segmentation. Taking advantage of Peircean semiotic traditions and new philosophical inquiry into the structure and quality of metaphors, we can construct a unified framework for the interpretation of medical image segmentation as a sign exchange in which each sign acts as an interface metaphor. This allows for a notion of finite semiosis, described through a schematic medium, that can rigorously describe how clinicians and computers interpret the signs mediating their interaction. Altogether, this framework provides a unified approach to the understanding and development of medical image segmentation interfaces. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Space-Time Processing for Tactical Mobile Ad Hoc Networks

    DTIC Science & Technology

    2008-08-01

    vision for multiple concurrent communication settings, i.e., a many-to-many framework where multi-packet transmissions (MPTs) and multi-packet ... modelling framework of capacity-delay tradeoffs. We have introduced the first unified modeling framework for the computation of fundamental limits of ... We ... modalities in wireless networks ... multi-packet modelling framework to account for the use of multi-packet reception (MPR) of ad hoc networks with MPT under

  8. Generalized Information Theory Meets Human Cognition: Introducing a Unified Framework to Model Uncertainty and Information Search.

    PubMed

    Crupi, Vincenzo; Nelson, Jonathan D; Meder, Björn; Cevolani, Gustavo; Tentori, Katya

    2018-06-17

    Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism. Copyright © 2018 Cognitive Science Society, Inc.
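
    A numerical sketch of the Sharma-Mittal family (written here with an order parameter alpha and degree parameter beta, natural logarithms assumed), checking that Renyi, Tsallis, and Shannon entropies appear as limiting or special cases:

      # Sharma-Mittal entropy: H = [ (sum_i p_i**alpha)**((1-beta)/(1-alpha)) - 1 ] / (1-beta)
      # Renyi arises as beta -> 1, Tsallis as beta = alpha, Shannon as alpha, beta -> 1.
      import numpy as np

      def sharma_mittal(p, alpha, beta, eps=1e-9):
          p = np.asarray(p, dtype=float)
          s = np.sum(p ** alpha)
          if abs(beta - 1.0) < eps and abs(alpha - 1.0) < eps:   # Shannon limit
              return -np.sum(p * np.log(p))
          if abs(beta - 1.0) < eps:                              # Renyi limit
              return np.log(s) / (1.0 - alpha)
          if abs(alpha - 1.0) < eps:                             # alpha -> 1 limit
              return (np.exp((1.0 - beta) * -np.sum(p * np.log(p))) - 1.0) / (1.0 - beta)
          return (s ** ((1.0 - beta) / (1.0 - alpha)) - 1.0) / (1.0 - beta)

      p = np.array([0.5, 0.25, 0.125, 0.125])
      print("Shannon      :", -np.sum(p * np.log(p)))
      print("SM a,b -> 1  :", sharma_mittal(p, 1.0, 1.0))
      print("Renyi  a=2   :", np.log(np.sum(p ** 2)) / (1 - 2))
      print("SM a=2, b->1 :", sharma_mittal(p, 2.0, 1.0))
      print("Tsallis a=2  :", (np.sum(p ** 2) - 1) / (1 - 2))
      print("SM a=b=2     :", sharma_mittal(p, 2.0, 2.0))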

  9. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulted scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulted moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants.
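
    A simplified grayscale sketch of moment computation with an orthogonal polynomial kernel (Legendre), standing in for the quaternion color case; the paper's accurate continuous-moment computation and invariants are not reproduced:

      # Legendre moments lambda_pq = (2p+1)(2q+1)/4 * integral of P_p(x) P_q(y) f(x,y)
      # over [-1,1]^2, approximated on the pixel grid, followed by reconstruction.
      import numpy as np
      from numpy.polynomial import legendre

      def legendre_moments(img, order):
          ny, nx = img.shape
          x, y = np.linspace(-1, 1, nx), np.linspace(-1, 1, ny)
          dx, dy = 2.0 / nx, 2.0 / ny
          Px = np.array([legendre.legval(x, [0] * p + [1]) for p in range(order + 1)])
          Py = np.array([legendre.legval(y, [0] * q + [1]) for q in range(order + 1)])
          lam = np.zeros((order + 1, order + 1))
          for p in range(order + 1):
              for q in range(order + 1):
                  norm = (2 * p + 1) * (2 * q + 1) / 4.0
                  lam[p, q] = norm * np.sum(Py[q][:, None] * Px[p][None, :] * img) * dx * dy
          return lam, Px, Py

      def reconstruct(lam, Px, Py):
          return sum(lam[p, q] * Py[q][:, None] * Px[p][None, :]
                     for p in range(lam.shape[0]) for q in range(lam.shape[1]))

      img = np.zeros((64, 64)); img[16:48, 24:40] = 1.0     # toy binary image
      lam, Px, Py = legendre_moments(img, order=12)
      err = np.abs(reconstruct(lam, Px, Py) - img).mean()
      print(f"mean reconstruction error with order-12 moments: {err:.3f}")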

  10. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  11. Representations, approximations, and limitations within a computational framework for cognitive science. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    NASA Astrophysics Data System (ADS)

    Perfors, Amy

    2014-09-01

    There is much to approve of in this provocative and interesting paper. I strongly agree in many parts, especially the point that dichotomies like nature/nurture are actively detrimental to the field. I also appreciate the idea that cognitive scientists should take the "biological wetware" of the cell (rather than the network) more seriously.

  12. Computation of elementary modes: a unifying framework and the new binary approach

    PubMed Central

    Gagneur, Julien; Klamt, Steffen

    2004-01-01

    Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. In recent years there has been a proliferation of algorithms dedicated to it, which calls for a summarizing point of view and continued improvement of the current methods. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, a software that is free for academics. The binary approach decreases the memory demand by up to 96% without loss of speed, giving the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
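
    A sketch of the post-processing step mentioned above: given the binary pattern of participating reactions, the stoichiometric coefficients of an elementary mode span the one-dimensional null space of the stoichiometric matrix restricted to those columns. The toy network below is invented, not taken from FluxAnalyzer:

      # Recover flux coefficients from a binary reaction pattern via the null
      # space of the column-restricted stoichiometric matrix.
      import numpy as np

      # Stoichiometric matrix N (rows: internal metabolites A, B; columns: reactions)
      #   R1: -> A,  R2: A -> B,  R3: B -> ,  R4: A ->
      N = np.array([[ 1, -1,  0, -1],
                    [ 0,  1, -1,  0]], dtype=float)

      def coefficients_from_support(N, support):
          """Null-space vector of N restricted to the reactions in `support`."""
          cols = np.flatnonzero(support)
          _, _, vt = np.linalg.svd(N[:, cols])
          null = vt[-1]                       # one-dimensional for an elementary mode
          flux = np.zeros(N.shape[1])
          flux[cols] = null / np.abs(null).max()
          return flux

      mode_support = np.array([1, 1, 1, 0])   # binary pattern: R1, R2, R3 active
      flux = coefficients_from_support(N, mode_support)
      print("flux vector:", flux)             # [1, 1, 1, 0] up to scaling and sign
      print("steady-state check N @ flux:", N @ flux)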

  13. Decomposing dendrophilia. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    NASA Astrophysics Data System (ADS)

    Honing, Henkjan; Zuidema, Willem

    2014-09-01

    The future of cognitive science will be about bridging neuroscience and behavioral studies, with essential roles played by comparative biology, formal modeling, and the theory of computation. Nowhere will this integration be more strongly needed than in understanding the biological basis of language and music. We thus strongly sympathize with the general framework that Fitch [1] proposes, and welcome the remarkably broad and readable review he presents to support it.

  14. Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation.

    PubMed

    Fleming, Stephen M; Daw, Nathaniel D

    2017-01-01

    People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
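
    A minimal simulation contrasting a first-order confidence read-out (the same internal sample drives choice and confidence) with a second-order read-out (confidence computed from a separate, correlated sample); parameters are illustrative, not those of the paper:

      # First-order vs second-order confidence on a simulated two-choice task.
      # The second-order read-out can go negative on error trials, acting as an
      # error-detection signal that the first-order read-out cannot produce.
      import numpy as np

      rng = np.random.default_rng(0)
      n, d, rho = 200_000, 1.0, 0.6              # trials, stimulus strength, sample coupling
      stim = rng.choice([-1.0, 1.0], n)          # true stimulus direction

      noise1 = rng.normal(size=n)
      noise2 = rho * noise1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
      x_act  = d * stim + noise1                 # internal sample driving the decision
      x_conf = d * stim + noise2                 # separate, correlated sample for confidence

      choice = np.sign(x_act)
      correct = choice == stim

      conf_first  = np.abs(x_act)                # first-order: unsigned evidence, same sample
      conf_second = x_conf * choice              # second-order: evidence for one's own choice

      for name, conf in [("first-order", conf_first), ("second-order", conf_second)]:
          hi = conf >= np.median(conf)
          print(f"{name:12s} accuracy: low conf {correct[~hi].mean():.3f}, "
                f"high conf {correct[hi].mean():.3f}")
      print("mean second-order confidence on error trials:",
            round(float(conf_second[~correct].mean()), 3), "(negative = error detected)")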

  15. Self-Evaluation of Decision-Making: A General Bayesian Framework for Metacognitive Computation

    PubMed Central

    2017-01-01

    People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a “second-order” inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one’s own actions to metacognitive judgments. In addition, the model provides insight into why subjects’ metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. PMID:28004960

  16. A Unified Framework for Association Analysis with Multiple Related Phenotypes

    PubMed Central

    Stephens, Matthew

    2013-01-01

    We consider the problem of assessing associations between multiple related outcome variables, and a single explanatory variable of interest. This problem arises in many settings, including genetic association studies, where the explanatory variable is genotype at a genetic variant. We outline a framework for conducting this type of analysis, based on Bayesian model comparison and model averaging for multivariate regressions. This framework unifies several common approaches to this problem, and includes both standard univariate and standard multivariate association tests as special cases. The framework also unifies the problems of testing for associations and explaining associations – that is, identifying which outcome variables are associated with genotype. This provides an alternative to the usual, but conceptually unsatisfying, approach of resorting to univariate tests when explaining and interpreting significant multivariate findings. The method is computationally tractable genome-wide for modest numbers of phenotypes (e.g. 5–10), and can be applied to summary data, without access to raw genotype and phenotype data. We illustrate the methods on both simulated examples, and to a genome-wide association study of blood lipid traits where we identify 18 potential novel genetic associations that were not identified by univariate analyses of the same data. PMID:23861737

  17. Computer-Aided Group Problem Solving for Unified Life Cycle Engineering (ULCE)

    DTIC Science & Technology

    1989-02-01

    defining the problem, generating alternative solutions, evaluating alternatives, selecting alternatives, and implementing the solution. Systems...specialist in group dynamics, assists the group in formulating the problem and selecting a model framework. The analyst provides the group with computer...allocating resources, evaluating and selecting options, making judgments explicit, and analyzing dynamic systems. c. University of Rhode Island Drs. Geoffery

  18. Generic-distributed framework for cloud services marketplace based on unified ontology.

    PubMed

    Hasan, Samer; Valli Kumari, V

    2017-11-01

    Cloud computing is a pattern for delivering ubiquitous and on-demand computing resources based on a pay-as-you-use financial model. Typically, cloud providers advertise cloud service descriptions in various formats on the Internet. On the other hand, cloud consumers use available search engines (Google and Yahoo) to explore cloud service descriptions and find the adequate service. Unfortunately, general purpose search engines are not designed to provide a small and complete set of results, which makes the process a big challenge. This paper presents a generic-distributed framework for cloud services marketplace to automate the cloud services discovery and selection process, and remove the barriers between service providers and consumers. Additionally, this work implements two instances of the generic framework by adopting two different matching algorithms; namely a dominant and recessive attributes algorithm borrowed from gene science and a semantic similarity algorithm based on a unified cloud service ontology. Finally, this paper presents a unified cloud services ontology and models the real-life cloud services according to the proposed ontology. To the best of the authors' knowledge, this is the first attempt to build a cloud services marketplace where cloud providers and cloud consumers can trade cloud services as utilities. In comparison with existing work, the semantic approach reduced the execution time by 20% and maintained the same values for all other parameters. On the other hand, the dominant and recessive attributes approach reduced the execution time by 57% but showed lower value for recall.

  19. Experimental analysis of chaotic neural network models for combinatorial optimization under a unifying framework.

    PubMed

    Kwok, T; Smith, K A

    2000-09-01

    The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters.

  20. A Semantic Grid Oriented to E-Tourism

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao Ming

    With increasing complexity of tourism business models and tasks, there is a clear need for next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Currently several enabling technologies such as the semantic Web, Web services, agents and grid computing have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization and intelligent agents. The paper finally provides the implementation of the framework.

  1. A unified account of perceptual layering and surface appearance in terms of gamut relativity.

    PubMed

    Vladusich, Tony; McDonnell, Mark D

    2014-01-01

    When we look at the world--or a graphical depiction of the world--we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance--based on a broader theoretical framework called gamut relativity--that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications.

  2. A Unified Account of Perceptual Layering and Surface Appearance in Terms of Gamut Relativity

    PubMed Central

    Vladusich, Tony; McDonnell, Mark D.

    2014-01-01

    When we look at the world—or a graphical depiction of the world—we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance—based on a broader theoretical framework called gamut relativity—that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications. PMID:25402466

  3. A unified computational model of the development of object unity, object permanence, and occluded object trajectory perception.

    PubMed

    Franz, A; Triesch, J

    2010-12-01

    The perception of the unity of objects, their permanence when out of sight, and the ability to perceive continuous object trajectories even during occlusion belong to the first and most important capacities that infants have to acquire. Despite much research a unified model of the development of these abilities is still missing. Here we make an attempt to provide such a unified model. We present a recurrent artificial neural network that learns to predict the motion of stimuli occluding each other and that develops representations of occluded object parts. It represents completely occluded, moving objects for several time steps and successfully predicts their reappearance after occlusion. This framework allows us to account for a broad range of experimental data. Specifically, the model explains how the perception of object unity develops, the role of the width of the occluders, and it also accounts for differences between data for moving and stationary stimuli. We demonstrate that these abilities can be acquired by learning to predict the sensory input. The model makes specific predictions and provides a unifying framework that has the potential to be extended to other visual event categories. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Fully Associative, Nonisothermal, Potential-Based Unified Viscoplastic Model for Titanium-Based Matrices

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A number of titanium matrix composite (TMC) systems are currently being investigated for high-temperature air frame and propulsion system applications. As a result, numerous computational methodologies for predicting both deformation and life for this class of materials are under development. An integral part of these methodologies is an accurate and computationally efficient constitutive model for the metallic matrix constituent. Furthermore, because these systems are designed to operate at elevated temperatures, the required constitutive models must account for both time-dependent and time-independent deformations. To accomplish this, the NASA Lewis Research Center is employing a recently developed, complete, potential-based framework. This framework, which utilizes internal state variables, was put forth for the derivation of reversible and irreversible constitutive equations. The framework, and consequently the resulting constitutive model, is termed complete because the existence of the total (integrated) form of the Gibbs complementary free energy and complementary dissipation potentials are assumed a priori. The specific forms selected here for both the Gibbs and complementary dissipation potentials result in a fully associative, multiaxial, nonisothermal, unified viscoplastic model with nonlinear kinematic hardening. This model constitutes one of many models in the Generalized Viscoplasticity with Potential Structure (GVIPS) class of inelastic constitutive equations.

  5. A general modeling framework for describing spatially structured population dynamics

    USGS Publications Warehouse

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

    Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance that comparative analyses are colored by model details rather than general principles
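
    A small sketch in the spirit of the framework (not the authors' published code): nodes carry time-varying attributes, directed weighted edges carry movement proportions, and each discrete time step applies local growth followed by redistribution; the logistic growth rule and all numbers are invented:

      # Network-based spatial population model: local growth on each node, then
      # movement along directed, weighted edges, repeated in discrete time steps.
      import numpy as np

      K = np.array([500.0, 300.0, 200.0])        # node attribute: carrying capacity
      r = np.array([0.20, 0.15, 0.10])           # node attribute: growth rate
      # M[i, j] = proportion of individuals moving from patch i to patch j per step
      M = np.array([[0.80, 0.15, 0.05],
                    [0.10, 0.85, 0.05],
                    [0.05, 0.05, 0.90]])

      N = np.array([50.0, 10.0, 5.0])            # initial abundances
      for t in range(60):
          N = N + r * N * (1 - N / K)            # local (logistic) growth
          N = M.T @ N                            # redistribution over the network

      print("abundances after 60 steps:", np.round(N, 1))
      print("total population:", round(float(N.sum()), 1))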

  6. Probabilistic arithmetic automata and their applications.

    PubMed

    Marschall, Tobias; Herms, Inke; Kaltenbach, Hans-Michael; Rahmann, Sven

    2012-01-01

    We present a comprehensive review on probabilistic arithmetic automata (PAAs), a general model to describe chains of operations whose operands depend on chance, along with two algorithms to numerically compute the distribution of the results of such probabilistic calculations. PAAs provide a unifying framework to approach many problems arising in computational biology and elsewhere. We present five different applications, namely 1) pattern matching statistics on random texts, including the computation of the distribution of occurrence counts, waiting times, and clump sizes under hidden Markov background models; 2) exact analysis of window-based pattern matching algorithms; 3) sensitivity of filtration seeds used to detect candidate sequence alignments; 4) length and mass statistics of peptide fragments resulting from enzymatic cleavage reactions; and 5) read length statistics of 454 and IonTorrent sequencing reads. The diversity of these applications indicates the flexibility and unifying character of the presented framework. While the construction of a PAA depends on the particular application, we single out a frequently applicable construction method: We introduce deterministic arithmetic automata (DAAs) to model deterministic calculations on sequences, and demonstrate how to construct a PAA from a given DAA and a finite-memory random text model. This procedure is used for all five discussed applications and greatly simplifies the construction of PAAs. Implementations are available as part of the MoSDi package. Its application programming interface facilitates the rapid development of new applications based on the PAA framework.
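
    A tiny sketch of the PAA idea for application 1 above: the exact distribution of the number of occurrences of the pattern "AB" in an i.i.d. random text, obtained by propagating a joint distribution over (automaton state, count); MoSDi's general PAA/DAA construction is far broader:

      # Propagate a joint distribution over (pattern-automaton state, count) one
      # character at a time to get the exact occurrence-count distribution.
      import numpy as np

      n = 20                                   # text length
      p = {"A": 0.5, "B": 0.5}                 # i.i.d. character model
      # states: 0 = no useful suffix, 1 = last character was 'A'
      dist = np.zeros((2, n + 1))              # dist[state, count]
      dist[0, 0] = 1.0

      for _ in range(n):
          new = np.zeros_like(dist)
          # from state 0: 'A' -> state 1, 'B' -> state 0 (no occurrence completed)
          new[1, :] += p["A"] * dist[0, :]
          new[0, :] += p["B"] * dist[0, :]
          # from state 1: 'A' -> state 1, 'B' -> state 0 and completes one "AB"
          new[1, :] += p["A"] * dist[1, :]
          new[0, 1:] += p["B"] * dist[1, :-1]
          dist = new

      count_dist = dist.sum(axis=0)            # marginalize over automaton state
      print("P(#occurrences of 'AB' = k):", np.round(count_dist[:8], 4))
      print("expected count:", round(float(np.arange(n + 1) @ count_dist), 3),
            "(analytic:", (n - 1) * p["A"] * p["B"], ")")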

  7. A Unified Framework for Simulating Markovian Models of Highly Dependable Systems

    DTIC Science & Technology

    1989-07-01

    ependability I’valuiation of Complex lault- lolerant Computing Systems. Ptreedings of the 1-.et-enth Sv~npmiun on Falult- lolerant Comnputing. Portland, Maine...New York. [12] (icis;t, R.M. and ’I’rivedi, K.S. (1983). I!Itra-Il gh Reliability Prediction for Fault-’ lolerant Computer Systems. IEE.-E Trw.%,.cions... 1998 ). Surv’ey of Software Tools for [valuating Reli- ability. A vailability, and Serviceabilitv. ACA1 Computing S urveyjs 20. 4, 227-269). [32] Meyer

  8. A Unified Theoretical Framework for Cognitive Sequencing.

    PubMed

    Savalia, Tejas; Shukla, Anuj; Bapi, Raju S

    2016-01-01

    The capacity to sequence information is central to human performance. Sequencing ability forms the foundation stone for higher order cognition related to language and goal-directed planning. Information related to the order of items, their timing, chunking and hierarchical organization are important aspects in sequencing. Past research on sequencing has emphasized two distinct and independent dichotomies: implicit vs. explicit and goal-directed vs. habits. We propose a theoretical framework unifying these two streams. Our proposal relies on the brain's ability to implicitly extract statistical regularities from the stream of stimuli and with attentional engagement organizing sequences explicitly and hierarchically. Similarly, sequences that need to be assembled purposively to accomplish a goal require engagement of attentional processes. With repetition, these goal-directed plans become habits with concomitant disengagement of attention. Thus, attention and awareness play a crucial role in the implicit-to-explicit transition as well as in how goal-directed plans become automatic habits. Cortico-subcortical loops (basal ganglia-frontal cortex and hippocampus-frontal cortex) mediate the transition process. We show how the computational principles of model-free and model-based learning paradigms, along with a pivotal role for attention and awareness, offer a unifying framework for these two dichotomies. Based on this framework, we make testable predictions related to the potential influence of response-to-stimulus interval (RSI) on developing awareness in implicit learning tasks.

  9. A Unified Theoretical Framework for Cognitive Sequencing

    PubMed Central

    Savalia, Tejas; Shukla, Anuj; Bapi, Raju S.

    2016-01-01

    The capacity to sequence information is central to human performance. Sequencing ability forms the foundation stone for higher order cognition related to language and goal-directed planning. Information related to the order of items, their timing, chunking and hierarchical organization are important aspects in sequencing. Past research on sequencing has emphasized two distinct and independent dichotomies: implicit vs. explicit and goal-directed vs. habits. We propose a theoretical framework unifying these two streams. Our proposal relies on the brain's ability to implicitly extract statistical regularities from the stream of stimuli and with attentional engagement organizing sequences explicitly and hierarchically. Similarly, sequences that need to be assembled purposively to accomplish a goal require engagement of attentional processes. With repetition, these goal-directed plans become habits with concomitant disengagement of attention. Thus, attention and awareness play a crucial role in the implicit-to-explicit transition as well as in how goal-directed plans become automatic habits. Cortico-subcortical loops (basal ganglia-frontal cortex and hippocampus-frontal cortex) mediate the transition process. We show how the computational principles of model-free and model-based learning paradigms, along with a pivotal role for attention and awareness, offer a unifying framework for these two dichotomies. Based on this framework, we make testable predictions related to the potential influence of response-to-stimulus interval (RSI) on developing awareness in implicit learning tasks. PMID:27917146

  10. Learning, epigenetics, and computation: An extension on Fitch's proposal. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    NASA Astrophysics Data System (ADS)

    Okanoya, Kazuo

    2014-09-01

    The comparative computational approach of Fitch [1] attempts to renew the classical David Marr paradigm of computation, algorithm, and implementation, by introducing an evolutionary view of the relationship between neural architecture and cognition. This comparative evolutionary view provides constraints useful in narrowing down the problem space for both cognition and neural mechanisms. I will provide two examples from our own studies that reinforce and extend Fitch's proposal.

  11. Observability of Automata Networks: Fixed and Switching Cases.

    PubMed

    Li, Rui; Hong, Yiguang; Wang, Xingyuan

    2018-04-01

    Automata networks are a class of fully discrete dynamical systems, which have received considerable interest in various different areas. This brief addresses the observability of automata networks and switched automata networks in a unified framework, and proposes simple necessary and sufficient conditions for observability. The results are achieved by employing methods from symbolic computation, and are suited for implementation using computer algebra systems. Several examples are presented to demonstrate the application of the results.
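
    A brute-force sketch of the property being characterized, not the paper's symbolic-algebra conditions: a small (invented) Boolean automata network is observable exactly when distinct initial states always yield distinct output sequences:

      # Check observability of a toy Boolean automata network by enumeration:
      # distinct initial states must produce distinct output sequences.
      from itertools import product

      def step(state):
          x1, x2, x3 = state
          return (x2, x1 and x3, not x1)       # toy network update rule

      def output(state):
          return state[0]                      # only node 1 is measured

      def output_sequence(state, horizon):
          seq = []
          for _ in range(horizon):
              seq.append(output(state))
              state = step(state)
          return tuple(seq)

      states = list(product([False, True], repeat=3))
      horizon = len(states)                    # |X| steps suffice for a finite system
      seqs = {s: output_sequence(s, horizon) for s in states}
      observable = len(set(seqs.values())) == len(states)
      # (this toy happens to be unobservable: x3 is invisible whenever x1 is False)
      print("observable:", observable)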

  12. Practical Issues in Estimating Classification Accuracy and Consistency with R Package cacIRT

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.

    2015-01-01

    There are two main lines of research in estimating classification accuracy (CA) and classification consistency (CC) under Item Response Theory (IRT). The R package cacIRT provides computer implementations of both approaches in an accessible and unified framework. Even with available implementations, there remain decisions a researcher faces when…

  13. Optimization Techniques for Analysis of Biological and Social Networks

    DTIC Science & Technology

    2012-03-28

    analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: Implement the proposed algorithms, test and fine ... alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters ... systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational

  14. VisRseq: R-based visual framework for analysis of sequencing data

    PubMed Central

    2015-01-01

    Background Several tools have been developed to enable biologists to perform initial browsing and exploration of sequencing data. However the computational tool set for further analyses often requires significant computational expertise to use and many of the biologists with the knowledge needed to interpret these data must rely on programming experts. Results We present VisRseq, a framework for analysis of sequencing datasets that provides a computationally rich and accessible framework for integrative and interactive analyses without requiring programming expertise. We achieve this aim by providing R apps, which offer a semi-auto generated and unified graphical user interface for computational packages in R and repositories such as Bioconductor. To address the interactivity limitation inherent in R libraries, our framework includes several native apps that provide exploration and brushing operations as well as an integrated genome browser. The apps can be chained together to create more powerful analysis workflows. Conclusions To validate the usability of VisRseq for analysis of sequencing data, we present two case studies performed by our collaborators and report their workflow and insights. PMID:26328469

  15. VisRseq: R-based visual framework for analysis of sequencing data.

    PubMed

    Younesy, Hamid; Möller, Torsten; Lorincz, Matthew C; Karimi, Mohammad M; Jones, Steven J M

    2015-01-01

    Several tools have been developed to enable biologists to perform initial browsing and exploration of sequencing data. However the computational tool set for further analyses often requires significant computational expertise to use and many of the biologists with the knowledge needed to interpret these data must rely on programming experts. We present VisRseq, a framework for analysis of sequencing datasets that provides a computationally rich and accessible framework for integrative and interactive analyses without requiring programming expertise. We achieve this aim by providing R apps, which offer a semi-auto generated and unified graphical user interface for computational packages in R and repositories such as Bioconductor. To address the interactivity limitation inherent in R libraries, our framework includes several native apps that provide exploration and brushing operations as well as an integrated genome browser. The apps can be chained together to create more powerful analysis workflows. To validate the usability of VisRseq for analysis of sequencing data, we present two case studies performed by our collaborators and report their workflow and insights.

  16. A UNIFIED FRAMEWORK FOR VARIANCE COMPONENT ESTIMATION WITH SUMMARY STATISTICS IN GENOME-WIDE ASSOCIATION STUDIES.

    PubMed

    Zhou, Xiang

    2017-12-01

    Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs-the restricted maximum likelihood estimation method (REML)-suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods-the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC)-into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z -scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while capable of producing estimates that can be almost as accurate as if both quantities are computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, and makes use of summary statistics, while it is computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
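
    A toy simulation of the classical Haseman-Elston idea that MQS generalizes: phenotype cross-products are regressed on relatedness entries, and the slope estimates SNP heritability. This is not the MQS estimator or GEMMA itself, and the estimate is noisy at this sample size:

      # Haseman-Elston regression on simulated data: slope of y_i*y_j on K_ij
      # approximates h2 when phenotypes are standardized to unit variance.
      import numpy as np

      rng = np.random.default_rng(1)
      n, p, h2 = 2000, 500, 0.5
      X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
      X = (X - X.mean(0)) / X.std(0)                 # standardized genotypes
      K = X @ X.T / p                                # genetic relatedness matrix

      beta = rng.normal(0.0, np.sqrt(h2 / p), p)
      y = X @ beta + rng.normal(0.0, np.sqrt(1 - h2), n)
      y = (y - y.mean()) / y.std()

      iu = np.triu_indices(n, k=1)                   # off-diagonal pairs only
      prod = (y[:, None] * y[None, :])[iu]           # phenotype cross-products
      slope = np.polyfit(K[iu], prod, 1)[0]          # HE regression slope ~ h2
      print(f"HE-regression heritability estimate: {slope:.2f} (simulated h2 = {h2})")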

  17. Phase 1: Definition of intercity transportation comparison framework. Volume 1: Summary. [operations research of passenger and freight transportation systems]

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A unified framework for comparing intercity passenger and freight transportation systems is presented. Composite measures for cost, service/demand, energy, and environmental impact were determined. A set of 14 basic measures were articulated to form the foundation for computing the composite measures. A parameter dependency diagram, constructed to explicitly interrelate the composite and basic measures is discussed. Ground rules and methodology for developing the values of the basic measures are provided and the use of the framework with existing cost and service data is illustrated for various freight systems.

  18. FREQ: A computational package for multivariable system loop-shaping procedures

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Armstrong, Ernest S.

    1989-01-01

    Many approaches in the field of linear, multivariable time-invariant systems analysis and controller synthesis employ loop-shaping procedures wherein design parameters are chosen to shape frequency-response singular value plots of selected transfer matrices. A software package, FREQ, is documented for computing within one unified framework many of the most used multivariable transfer matrices for both continuous and discrete systems. The matrices are evaluated at user-selected frequency values, and their singular values are plotted against frequency. Example computations are presented to demonstrate the use of the FREQ code.
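
    Purely as an illustration of the quantity FREQ tabulates (not the package's own code), the sketch below evaluates the singular values of a transfer matrix G(jw) = C(jwI - A)^(-1)B + D over a frequency grid for a made-up state-space model:

      # Singular values of a multivariable frequency response across a log grid
      # of frequencies, the basic loop-shaping quantity described above.
      import numpy as np

      A = np.array([[0.0, 1.0], [-4.0, -0.8]])       # made-up stable 2-state plant
      B = np.array([[0.0, 0.0], [1.0, 0.5]])
      C = np.eye(2)
      D = np.zeros((2, 2))

      I = np.eye(A.shape[0])
      for w in np.logspace(-1, 2, 7):                # rad/s
          G = C @ np.linalg.solve(1j * w * I - A, B) + D
          sv = np.linalg.svd(G, compute_uv=False)
          print(f"w = {w:8.3f} rad/s  sigma_max = {sv[0]:.4f}  sigma_min = {sv[-1]:.4f}")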

  19. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications. In this poster, we summarize the key components of the UCVM framework and describe the impact it has had in various computational geoscientific applications.
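
    A conceptual sketch of the query pattern described above, not the actual UCVM API or any real velocity model: a single interface returns (Vp, Vs, density), delegating to a regional 3-D model where it is defined and to a 1-D background model elsewhere:

      # Uniform material-property query over a regional 3-D model with a 1-D
      # background fallback. Models, bounds and values are invented.
      from dataclasses import dataclass

      @dataclass
      class Material:
          vp: float     # m/s
          vs: float     # m/s
          rho: float    # kg/m^3

      def background_1d(depth_m):
          """Toy 1-D background: properties increase with depth."""
          vp = 5000.0 + 0.5 * depth_m
          return Material(vp, vp / 1.73, 2500.0 + 0.05 * depth_m)

      def regional_3d(lon, lat, depth_m):
          """Toy regional model defined only inside a small box, above 10 km depth."""
          if -119.0 <= lon <= -117.0 and 33.5 <= lat <= 34.5 and depth_m < 10_000:
              vp = 3500.0 + 0.4 * depth_m          # slower, basin-like material
              return Material(vp, vp / 1.9, 2200.0 + 0.04 * depth_m)
          return None

      def query(lon, lat, depth_m):
          return regional_3d(lon, lat, depth_m) or background_1d(depth_m)

      for pt in [(-118.2, 34.0, 500.0), (-121.0, 36.0, 500.0), (-118.2, 34.0, 20_000.0)]:
          print(pt, "->", query(*pt))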

  20. Gender Divide and Acceptance of Collaborative Web 2.0 Applications for Learning in Higher Education

    ERIC Educational Resources Information Center

    Huang, Wen-Hao David; Hood, Denice Ward; Yoo, Sun Joo

    2013-01-01

    Situated in the gender digital divide framework, this survey study investigated the role of computer anxiety in influencing female college students' perceptions toward Web 2.0 applications for learning. Based on 432 college students' "Web 2.0 for learning" perception ratings collected by relevant categories of "Unified Theory of Acceptance and Use…

  1. A unifying framework for rigid multibody dynamics and serial and parallel computational issues

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Jain, Abhinandan

    1989-01-01

    A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed. Their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz., O(n), O(n^2) and O(n^3), for the solution of the dynamics problem are investigated. Researchers begin with the derivation of O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log^2(n)) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. Researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.

  2. A stochastically fully connected conditional random field framework for super resolution OCT

    NASA Astrophysics Data System (ADS)

    Boroomand, A.; Tan, B.; Wong, A.; Bizheva, K.

    2017-02-01

    A number of factors can degrade the resolution and contrast of OCT images, such as: (1) changes of the OCT point-spread function (PSF) resulting from wavelength-dependent scattering and absorption of light along the imaging depth, (2) speckle noise, as well as (3) motion artifacts. We propose a new Super Resolution OCT (SR OCT) imaging framework that takes advantage of a Stochastically Fully Connected Conditional Random Field (SF-CRF) model to generate a Super Resolved OCT (SR OCT) image of higher quality from a set of Low-Resolution OCT (LR OCT) images. The proposed SF-CRF SR OCT imaging is able to simultaneously compensate for all of the factors mentioned above that degrade the OCT image quality, using a unified computational framework. The proposed SF-CRF SR OCT imaging framework was tested on a set of simulated LR human retinal OCT images generated from a high resolution, high contrast retinal image, and on a set of in-vivo, high resolution, high contrast rat retinal OCT images. The reconstructed SR OCT images show considerably higher spatial resolution, less speckle noise and higher contrast compared to other tested methods. Visual assessment of the results demonstrated the usefulness of the proposed approach in better preservation of fine details and structures of the imaged sample, retaining biological tissue boundaries while reducing speckle noise using a unified computational framework. Quantitative evaluation using both Contrast to Noise Ratio (CNR) and Edge Preservation (EP) parameters also showed superior performance of the proposed SF-CRF SR OCT approach compared to other image processing approaches.

  3. A Unified Framework for Periodic, On-Demand, and User-Specified Software Information

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.

    2004-01-01

    Although grid computing can increase the number of resources available to a user, not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid (IPG) project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace, independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Naturalization Service, are briefly described.
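
    A rough illustration of the query model: the sketch below runs XPath-style queries over a small software-information document using Python's standard library. The XML layout and tag names are hypothetical, not the service's actual schema.

        import xml.etree.ElementTree as ET

        # Hypothetical unified software-information document.
        doc = """
        <software-info>
          <resource name="nodeA">
            <package name="gcc" version="3.3"/>
            <package name="mpich" version="1.2.5"/>
          </resource>
          <resource name="nodeB">
            <package name="gcc" version="2.95"/>
          </resource>
        </software-info>
        """

        root = ET.fromstring(doc)

        # XPath-style query: which resources provide gcc, and at what version?
        for res in root.findall("resource"):
            for pkg in res.findall("package[@name='gcc']"):
                print(res.get("name"), "has gcc", pkg.get("version"))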

  4. Semantically enabled image similarity search

    NASA Astrophysics Data System (ADS)

    Casterline, May V.; Emerick, Timothy; Sadeghi, Kolia; Gosse, C. A.; Bartlett, Brent; Casey, Jason

    2015-05-01

    Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimensional vector, or "embedding", space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery: GIS provides temporally smoothed but information-limited content, while overhead imagery provides an information-rich but temporally limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.
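
    The similarity-search step can be illustrated with a minimal cosine-similarity lookup in a shared embedding space. The embeddings below are random placeholders; the hard part, mapping GIS features and image chips into a common space, is not shown.

        import numpy as np

        def normalize_rows(x):
            return x / np.linalg.norm(x, axis=1, keepdims=True)

        rng = np.random.default_rng(0)
        gis_embeddings = normalize_rows(rng.normal(size=(1000, 128)))     # placeholder vectors
        image_embeddings = normalize_rows(rng.normal(size=(500, 128)))

        def top_k_matches(query, corpus, k=5):
            # Cosine similarity reduces to a dot product for unit-norm rows.
            scores = corpus @ query
            idx = np.argsort(scores)[::-1][:k]
            return idx, scores[idx]

        # Cross-modal query: nearest overhead-image embeddings to one GIS embedding.
        idx, scores = top_k_matches(gis_embeddings[0], image_embeddings)
        print(idx, scores)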

  5. High-Order Methods for Computational Fluid Dynamics: A Brief Review of Compact Differential Formulations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.; Wang, Z. J.; Vincent, P. E.

    2013-01-01

    Popular high-order schemes with compact stencils for Computational Fluid Dynamics (CFD) include Discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV) methods. The recently proposed Flux Reconstruction (FR) approach or Correction Procedure using Reconstruction (CPR) is based on a differential formulation and provides a unifying framework for these high-order schemes. Here we present a brief review of recent developments for the FR/CPR schemes as well as some pacing items.
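
    For reference, the core FR/CPR construction in one dimension is usually written along the following lines (a sketch in standard notation, not quoted from this review): the element-wise continuous flux equals the discontinuous flux plus interface corrections weighted by correction functions.

        % Reference element xi in [-1, 1]; f^D is the discontinuous flux,
        % \hat{f}_L and \hat{f}_R are common interface fluxes, and the correction
        % functions satisfy g_L(-1) = 1, g_L(1) = 0 and g_R(-1) = 0, g_R(1) = 1.
        f(\xi) = f^{D}(\xi)
               + \bigl[\hat{f}_{L} - f^{D}(-1)\bigr]\, g_{L}(\xi)
               + \bigl[\hat{f}_{R} - f^{D}(+1)\bigr]\, g_{R}(\xi),
        \qquad
        \frac{\partial u_h}{\partial t} = -\frac{2}{\Delta x}\,\frac{\partial f}{\partial \xi}.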

  6. Challenges and insights for situated language processing: Comment on "Towards a computational comparative neuroprimatology: Framing the language-ready brain" by Michael A. Arbib

    NASA Astrophysics Data System (ADS)

    Knoeferle, Pia

    2016-03-01

    In his review article [19], Arbib outlines an ambitious research agenda: to accommodate within a unified framework the evolution, the development, and the processing of language in natural settings (implicating other systems such as vision). He does so with neuro-computationally explicit modeling in mind [1,2] and inspired by research on the mirror neuron system in primates. Similar research questions have received substantial attention also among other scientists [3,4,12].

  7. Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.

    PubMed

    Vassena, Eliana; Holroyd, Clay B; Alexander, William H

    2017-01-01

    In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.

  8. Trajectory optimization for lunar soft landing with complex constraints

    NASA Astrophysics Data System (ADS)

    Chu, Huiping; Ma, Lin; Wang, Kexin; Shao, Zhijiang; Song, Zhengyu

    2017-11-01

    A unified trajectory optimization framework with initialization strategies is proposed in this paper for lunar soft landing across various missions with specific requirements. Two main missions of interest are an Apollo-like landing from low lunar orbit and Vertical Takeoff Vertical Landing (a promising mobility method) on the lunar surface. The trajectory optimization is characterized by difficulties arising from discontinuous thrust, multi-phase connections, jumps in attitude angle, and obstacle avoidance. Here, an R-function is applied to deal with the discontinuities of thrust, checkpoint constraints are introduced to connect multiple landing phases, the attitude angular rate is constrained to avoid radical changes, and safeguards are imposed to avoid collision with obstacles. The resulting dynamic optimization problems generally involve complex constraints. A unified framework based on the Gauss Pseudospectral Method (GPM) and a Nonlinear Programming (NLP) solver is designed to solve these problems efficiently. Advanced initialization strategies are developed to enhance both convergence and computational efficiency. Numerical results demonstrate the adaptability of the framework to various landing missions and its ability to solve difficult dynamic problems.
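
    The general transcription idea, collocating the dynamics and handing the result to an NLP solver, can be sketched on a toy one-dimensional soft landing. The example below uses simple trapezoidal collocation and SciPy's SLSQP rather than the paper's Gauss pseudospectral scheme, and every parameter value is illustrative.

        import numpy as np
        from scipy.optimize import minimize

        # Toy 1D soft landing: states (h, v), control a (thrust acceleration),
        # dynamics h' = v, v' = a - g; fixed flight time, fuel-proxy objective.
        N, T, g = 30, 20.0, 1.62                  # nodes, flight time [s], lunar gravity
        dt = T / N
        h0, v0, a_max = 300.0, -15.0, 5.0         # made-up initial state and thrust limit

        def unpack(z):
            return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

        def objective(z):                         # integral of thrust magnitude
            _, _, a = unpack(z)
            return dt * np.sum(a)

        def defects(z):                           # trapezoidal defects + boundary conditions
            h, v, a = unpack(z)
            dh = h[1:] - h[:-1] - 0.5 * dt * (v[1:] + v[:-1])
            dv = v[1:] - v[:-1] - 0.5 * dt * ((a[1:] - g) + (a[:-1] - g))
            return np.concatenate([dh, dv, [h[0] - h0, v[0] - v0, h[-1], v[-1]]])

        z0 = np.concatenate([np.linspace(h0, 0, N + 1),
                             np.linspace(v0, 0, N + 1),
                             np.full(N + 1, g)])  # hover-like initial guess
        bounds = ([(0, None)] * (N + 1) + [(None, None)] * (N + 1)
                  + [(0.0, a_max)] * (N + 1))
        sol = minimize(objective, z0, method="SLSQP", bounds=bounds,
                       constraints={"type": "eq", "fun": defects},
                       options={"maxiter": 500})
        h, v, a = unpack(sol.x)
        print("converged:", sol.success, " h(T):", h[-1], " v(T):", v[-1])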

  9. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  10. Unifying Gate Synthesis and Magic State Distillation.

    PubMed

    Campbell, Earl T; Howard, Mark

    2017-02-10

    The leading paradigm for performing a computation on quantum memories can be encapsulated as distill-then-synthesize. Initially, one performs several rounds of distillation to create high-fidelity magic states that provide one good T gate, an essential quantum logic gate. Subsequently, gate synthesis intersperses many T gates with Clifford gates to realize a desired circuit. We introduce a unified framework that implements one round of distillation and multiqubit gate synthesis in a single step. Typically, our method uses the same number of T gates as conventional synthesis but with the added benefit of quadratic error suppression. Because of this, one less round of magic state distillation needs to be performed, leading to significant resource savings.

  11. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of a unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC code was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with the results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision floating-point arithmetic provides higher accuracy.

  12. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.

    PubMed

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of a unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC code was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with the results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision floating-point arithmetic provides higher accuracy.
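
    A didactic, CPU-only sketch of the underlying photon-migration Monte Carlo is shown below: diffuse reflectance from a semi-infinite medium with isotropic scattering and an index-matched boundary. It is not the P2P/GPU code described above, and the optical properties are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        def diffuse_reflectance(mu_a, mu_s, n_photons=20000):
            # Weight-based Monte Carlo with isotropic scattering; absorption is
            # handled by multiplying the photon weight by the single-scattering albedo.
            mu_t = mu_a + mu_s
            albedo = mu_s / mu_t
            escaped = 0.0
            for _ in range(n_photons):
                z, uz, w = 0.0, 1.0, 1.0                       # depth, direction cosine, weight
                while True:
                    z += uz * (-np.log(rng.random()) / mu_t)   # free path to next event
                    if z <= 0.0:                               # crossed the surface: photon escapes
                        escaped += w
                        break
                    w *= albedo                                # absorb part of the weight
                    if w < 1e-4:                               # crude cutoff (real codes use roulette)
                        break
                    uz = 2.0 * rng.random() - 1.0              # isotropic scattering: new cos(theta)
            return escaped / n_photons

        print(diffuse_reflectance(mu_a=0.1, mu_s=10.0))        # properties in 1/mm, illustrative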

  13. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  14. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE PAGES

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto; ...

    2017-09-15

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  15. ELSI: A unified software interface for Kohn-Sham electronic structure solvers

    NASA Astrophysics Data System (ADS)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; García, Alberto; Huhn, William P.; Jacquelin, Mathias; Jia, Weile; Lange, Björn; Lin, Lin; Lu, Jianfeng; Mi, Wenhui; Seifitokaldani, Ali; Vázquez-Mayagoitia, Álvaro; Yang, Chao; Yang, Haizhao; Blum, Volker

    2018-01-01

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  16. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.
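
    At its core, ELSI abstracts over solvers for the generalized eigenproblem H c = epsilon S c. The toy example below solves a small dense instance with scipy.linalg.eigh (which wraps LAPACK rather than ELPA, libOMM or PEXSI), purely to illustrate the problem being dispatched.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        n = 6
        A = rng.normal(size=(n, n))
        H = 0.5 * (A + A.T)                      # symmetric "Hamiltonian"
        B = rng.normal(size=(n, n))
        S = B @ B.T + n * np.eye(n)              # symmetric positive-definite "overlap"

        eps, C = eigh(H, S)                      # generalized eigenvalues/eigenvectors
        print("eigenvalues:", eps)
        print("residual |HC - SCE|:", np.linalg.norm(H @ C - S @ C @ np.diag(eps)))
        print("S-orthonormality error:", np.linalg.norm(C.T @ S @ C - np.eye(n)))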

  17. A Vision on the Status and Evolution of HEP Physics Software Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canal, P.; Elvira, D.; Hatcher, R.

    2013-07-28

    This paper represents the vision of the members of the Fermilab Scientific Computing Division's Computational Physics Department (SCD-CPD) on the status and the evolution of various HEP software tools such as the Geant4 detector simulation toolkit, the Pythia and GENIE physics generators, and the ROOT data analysis framework. The goal of this paper is to contribute ideas to the Snowmass 2013 process toward the composition of a unified document on the current status and potential evolution of the physics software tools which are essential to HEP.

  18. Complex networks as a unified framework for descriptive analysis and predictive modeling in climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinhaeuser, Karsten J K; Chawla, Nitesh; Ganguly, Auroop R

    The analysis of climate data has relied heavily on hypothesis-driven statistical methods, while projections of future climate are based primarily on physics-based computational models. However, in recent years a wealth of new datasets has become available. Therefore, we take a more data-centric approach and propose a unified framework for studying climate, with an aim towards characterizing observed phenomena as well as discovering new knowledge in the climate domain. Specifically, we posit that complex networks are well-suited for both descriptive analysis and predictive modeling tasks. We show that the structural properties of climate networks have useful interpretation within the domain. Further, we extract clusters from these networks and demonstrate their predictive power as climate indices. Our experimental results establish that the network clusters are statistically significantly better predictors than clusters derived using a more traditional clustering approach. Using complex networks as data representation thus enables the unique opportunity for descriptive and predictive modeling to inform each other.
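
    A minimal illustration of the construction described above: build a correlation network from (synthetic) gridded time series, inspect its structural properties, and extract communities as candidate network-based indices. The data, threshold and library choices below are illustrative.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        n_months = 240
        mode1, mode2 = rng.normal(size=n_months), rng.normal(size=n_months)
        # Two synthetic "regions", each driven by its own large-scale mode plus noise.
        series = np.vstack([1.5 * mode1 + rng.normal(size=(25, n_months)),
                            1.5 * mode2 + rng.normal(size=(25, n_months))])

        corr = np.corrcoef(series)               # pairwise Pearson correlations
        G = nx.Graph()
        G.add_nodes_from(range(series.shape[0]))
        for i in range(series.shape[0]):
            for j in range(i + 1, series.shape[0]):
                if abs(corr[i, j]) >= 0.5:       # edge for strongly correlated grid points
                    G.add_edge(i, j, weight=corr[i, j])

        print("density:", nx.density(G), " clustering:", nx.average_clustering(G))
        communities = nx.algorithms.community.greedy_modularity_communities(G)
        print("community sizes:", [len(c) for c in communities])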

  19. Subspace algorithms for identifying separable-in-denominator 2D systems with deterministic-stochastic inputs

    NASA Astrophysics Data System (ADS)

    Ramos, José A.; Mercère, Guillaume

    2016-12-01

    In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
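
    For reference, the 2D Roesser state-space form referred to above is commonly written as follows (standard notation, not quoted from the paper); separable-in-denominator models additionally constrain the coupling blocks, with one off-diagonal block set to zero in common parameterizations.

        \begin{aligned}
        \begin{bmatrix} x^{h}(i+1,j) \\ x^{v}(i,j+1) \end{bmatrix}
          &= \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
             \begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix}
             + \begin{bmatrix} B_{1} \\ B_{2} \end{bmatrix} u(i,j), \\
        y(i,j) &= \begin{bmatrix} C_{1} & C_{2} \end{bmatrix}
                  \begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix}
                  + D\, u(i,j).
        \end{aligned}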

  20. Computation of free energy profiles with parallel adaptive dynamics

    NASA Astrophysics Data System (ADS)

    Lelièvre, Tony; Rousset, Mathias; Stoltz, Gabriel

    2007-04-01

    We propose a formulation of an adaptive computation of free energy differences, in the adaptive biasing force or nonequilibrium metadynamics spirit, using conditional distributions of samples of configurations which evolve in time. This allows us to present a truly unifying framework for these methods, and to prove convergence results for certain classes of algorithms. From a numerical viewpoint, a parallel implementation of these methods is very natural, the replicas interacting through the reconstructed free energy. We demonstrate how to improve this parallel implementation by resorting to some selection mechanism on the replicas. This is illustrated by computations on a model system of conformational changes.
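
    A minimal sketch of one adaptive scheme in this family, adaptive biasing force on a one-dimensional double well, is given below. The reaction coordinate is the particle position itself, the dynamics is overdamped Langevin, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def grad_V(x):                            # V(x) = (x**2 - 1)**2, barrier height 1
            return 4.0 * x * (x * x - 1.0)

        kT, gamma, dt, n_steps = 0.2, 1.0, 1e-3, 200_000
        edges = np.linspace(-1.8, 1.8, 73)        # 72 bins over the coordinate
        force_sum, count = np.zeros(72), np.zeros(72)

        x = -1.0
        for _ in range(n_steps):
            f_inst = -grad_V(x)                                  # instantaneous force on the CV
            b = min(max(np.searchsorted(edges, x) - 1, 0), 71)
            force_sum[b] += f_inst
            count[b] += 1
            bias = -force_sum[b] / count[b]                      # cancel the running mean force
            noise = np.sqrt(2.0 * kT * dt / gamma) * rng.normal()
            x += dt * (f_inst + bias) / gamma + noise            # overdamped Langevin step
            if x > 1.8:                                          # reflecting walls keep x in range
                x = 3.6 - x
            elif x < -1.8:
                x = -3.6 - x

        mean_force = np.where(count > 0, force_sum / np.maximum(count, 1), 0.0)
        free_energy = -np.cumsum(mean_force) * (edges[1] - edges[0])   # A(x) up to a constant
        print("estimated barrier height:", free_energy[36] - free_energy.min())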

  1. Phase noise suppression for coherent optical block transmission systems: a unified framework.

    PubMed

    Yang, Chuanchuan; Yang, Feng; Wang, Ziyu

    2011-08-29

    A unified framework for phase noise suppression is proposed in this paper, which can be applied to any coherent optical block transmission system, including coherent optical orthogonal frequency-division multiplexing (CO-OFDM), coherent optical single-carrier frequency-domain equalization block transmission (CO-SCFDE), etc. Based on adaptive modeling of phase noise, unified observation equations for different coherent optical block transmission systems are constructed, which lead to unified phase noise estimation and suppression. Numerical results demonstrate that the proposal is powerful in mitigating laser phase noise.
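
    A generic pilot-aided common-phase-error estimator for a single OFDM block illustrates the flavor of block-wise phase-noise compensation. It is not the adaptive phase-noise model of the paper, and the block layout below is invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sc = 64
        pilot_idx = np.arange(0, n_sc, 8)                       # 8 pilot subcarriers

        tx = (rng.choice([1, -1], n_sc) + 1j * rng.choice([1, -1], n_sc)) / np.sqrt(2)
        tx[pilot_idx] = (1 + 1j) / np.sqrt(2)                   # known pilot symbols

        true_cpe = 0.3                                          # common phase rotation [rad]
        rx = tx * np.exp(1j * true_cpe)
        rx += 0.05 * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))

        # Estimate the common phase from the pilots and derotate the whole block.
        cpe = np.angle(np.sum(rx[pilot_idx] * np.conj(tx[pilot_idx])))
        equalized = rx * np.exp(-1j * cpe)
        print("estimated CPE:", cpe, " true:", true_cpe)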

  2. A Framework for Modeling and Simulation of the Artificial

    DTIC Science & Technology

    2012-01-01

    Excerpts from the report show an example agent specification (Name: petra; Simple Aspects: face_shape/thin, nose/small, skintone/light, hair_color/black, hair_type/curly) and cite DEVS-related works, including Mittal, S., Risco-Martin, J.: Netcentric System of Systems Engineering with DEVS Unified Process, CRC Press (2012), and Mittal, S., Risco-Martin, J., Zeigler, B.: DEVS-based simulation web services for net-centric T&E, Proceedings of the 2007 Summer Computer ...

  3. An Overview of MSHN: The Management System for Heterogeneous Networks

    DTIC Science & Technology

    1999-04-01

    Excerpts from the report list the authors Debra A. Hensgen, Taylor Kidd, David St. John, Matthew C. Schnaidt, Howard ..., and cite Alhusaini, V. K. Prasanna, and C. S. Raghavendra, "A unified resource scheduling framework for heterogeneous computing environments," Proc. 8th IEEE ...

  4. High-Maneuverability Airframe: Initial Investigation of Configuration’s Aft End for Increased Stability, Range, and Maneuverability

    DTIC Science & Technology

    2013-09-01

    Excerpts from the solution-technique section describe the computational aerodynamics approach, including the interaction effects between the fins and canards: a double-precision solver supporting overset grids (unified-grid), total variation diminishing discretization based on a new multidimensional interpolation framework, and Riemann solvers providing proper signal-propagation physics, including versions for preconditioned forms of the governing equations.

  5. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.

    2011-08-15

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods which update all controls concurrently, and Krotov-type methods which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
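
    A naive SciPy-based sketch of piecewise-constant pulse optimization for a single-qubit state transfer is given below. It relies on numerical gradients rather than the analytic GRAPE or Krotov updates implemented in DYNAMO, and the Hamiltonian and discretization are illustrative.

        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize

        # Control Hamiltonian u_k * sigma_x on each of n_slices time slices;
        # objective: infidelity of the |0> -> |1> transfer.
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        psi0 = np.array([1, 0], dtype=complex)
        target = np.array([0, 1], dtype=complex)
        n_slices, dt = 10, 0.1

        def infidelity(u):
            psi = psi0
            for uk in u:                          # piecewise-constant propagation
                psi = expm(-1j * dt * uk * sx) @ psi
            return 1.0 - abs(np.vdot(target, psi)) ** 2

        res = minimize(infidelity, 0.1 * np.ones(n_slices), method="L-BFGS-B")
        print("final infidelity:", res.fun)
        print("optimized amplitudes:", np.round(res.x, 3))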

  6. Brains are not just neurons. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by Fitch

    NASA Astrophysics Data System (ADS)

    Huber, Ludwig

    2014-09-01

    This comment addresses the first component of Fitch's framework: the computational power of single neurons [3]. Although I agree that traditional models of neural computation have vastly underestimated the computational power of single neurons, I am hesitant to follow him completely. The exclusive focus on neurons is likely to underestimate the importance of other cells in the brain. In recent years, two such cell types have received appropriate attention from neuroscientists: interneurons and glia. Interneurons are small, tightly packed cells involved in the control of information processing in learning and memory. Rather than transmitting externally (like motor or sensory neurons), these neurons process information within internal circuits of the brain (and are therefore also called 'relay neurons'). Some specialized interneuron subtypes temporally regulate the flow of information in a given cortical circuit during relevant behavioral events [4]. In the human brain, approximately 100 billion interneurons control information processing and are implicated in disorders such as epilepsy and Parkinson's disease.

  7. 3D Fluid-Structure Interaction Simulation of Aortic Valves Using a Unified Continuum ALE FEM Model.

    PubMed

    Spühler, Jeannette H; Jansson, Johan; Jansson, Niclas; Hoffman, Johan

    2018-01-01

    Due to advances in medical imaging, computational fluid dynamics algorithms and high performance computing, computer simulation is developing into an important tool for understanding the relationship between cardiovascular diseases and intraventricular blood flow. The field of cardiac flow simulation is challenging and highly interdisciplinary. We apply a computational framework for automated solutions of partial differential equations using Finite Element Methods, where any mathematical description can be translated directly to code. This allows us to develop a cardiac model where specific properties of the heart, such as fluid-structure interaction of the aortic valve, can be added in a modular way without extensive effort. In previous work, we simulated the blood flow in the left ventricle of the heart. In this paper, we extend this model by placing prototypes of both a native and a mechanical aortic valve in the outflow region of the left ventricle. Numerical simulation of the blood flow in the vicinity of the valve offers the possibility to improve the treatment of aortic valve diseases such as aortic stenosis (narrowing of the valve opening) or regurgitation (leaking) and to optimize the design of prosthetic heart valves in a controlled and specific way. The fluid-structure interaction and contact problem are formulated in a unified continuum model using the conservation laws for mass and momentum and a phase function. The discretization is based on an Arbitrary Lagrangian-Eulerian space-time finite element method with streamline diffusion stabilization, and it is implemented in the open source software Unicorn, which shows near-optimal scaling up to thousands of cores. Computational results are presented to demonstrate the capability of our framework.

  8. 3D Fluid-Structure Interaction Simulation of Aortic Valves Using a Unified Continuum ALE FEM Model

    PubMed Central

    Spühler, Jeannette H.; Jansson, Johan; Jansson, Niclas; Hoffman, Johan

    2018-01-01

    Due to advances in medical imaging, computational fluid dynamics algorithms and high performance computing, computer simulation is developing into an important tool for understanding the relationship between cardiovascular diseases and intraventricular blood flow. The field of cardiac flow simulation is challenging and highly interdisciplinary. We apply a computational framework for automated solutions of partial differential equations using Finite Element Methods, where any mathematical description can be translated directly to code. This allows us to develop a cardiac model where specific properties of the heart, such as fluid-structure interaction of the aortic valve, can be added in a modular way without extensive effort. In previous work, we simulated the blood flow in the left ventricle of the heart. In this paper, we extend this model by placing prototypes of both a native and a mechanical aortic valve in the outflow region of the left ventricle. Numerical simulation of the blood flow in the vicinity of the valve offers the possibility to improve the treatment of aortic valve diseases such as aortic stenosis (narrowing of the valve opening) or regurgitation (leaking) and to optimize the design of prosthetic heart valves in a controlled and specific way. The fluid-structure interaction and contact problem are formulated in a unified continuum model using the conservation laws for mass and momentum and a phase function. The discretization is based on an Arbitrary Lagrangian-Eulerian space-time finite element method with streamline diffusion stabilization, and it is implemented in the open source software Unicorn, which shows near-optimal scaling up to thousands of cores. Computational results are presented to demonstrate the capability of our framework. PMID:29713288

  9. A Unified Framework for Analyzing and Designing for Stationary Arterial Networks

    DOT National Transportation Integrated Search

    2017-05-17

    This research aims to develop a unified theoretical and simulation framework for analyzing and designing signals for stationary arterial networks. Existing traffic flow models used in design and analysis of signal control strategies are either too si...

  10. Multivariate Lipschitz optimization: Survey and computational comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, P.; Gourdin, E.; Jaumard, B.

    1994-12-31

    Many methods have been proposed to minimize a multivariate Lipschitz function on a box. They pertain to three approaches: (i) reduction to the univariate case by projection (Pijavskii) or by using a space-filling curve (Strongin); (ii) construction and refinement of a single upper bounding function (Pijavskii, Mladineo, Mayne and Polak, Jaumard, Hermann and Ribault, Wood, ...); (iii) branch and bound with local upper bounding functions (Galperin, Pintér, Meewella and Mayne, the present authors). A survey is made, stressing the similarities of the algorithms, expressed when possible within a unified framework. Moreover, an extensive computational comparison is reported.
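
    The univariate Piyavskii-Shubert scheme underlying several of these approaches can be sketched as follows: repeatedly sample the minimizer of the saw-tooth lower bound built from a known Lipschitz constant. The test function and the constant below are illustrative.

        import numpy as np

        def piyavskii(f, a, b, L, n_iter=30):
            # Minimize a Lipschitz function on [a, b] given Lipschitz constant L.
            xs, fs = [a, b], [f(a), f(b)]
            for _ in range(n_iter):
                order = np.argsort(xs)
                xs = [xs[i] for i in order]
                fs = [fs[i] for i in order]
                best_val, best_x = np.inf, None
                for xi, xj, fi, fj in zip(xs[:-1], xs[1:], fs[:-1], fs[1:]):
                    # Saw-tooth lower bound on [xi, xj]: minimum value and its location.
                    val = 0.5 * (fi + fj) - 0.5 * L * (xj - xi)
                    if val < best_val:
                        best_val = val
                        best_x = 0.5 * (xi + xj) + (fi - fj) / (2.0 * L)
                xs.append(best_x)
                fs.append(f(best_x))
            i = int(np.argmin(fs))
            return xs[i], fs[i]

        g = lambda x: np.sin(x) + 0.3 * np.cos(3 * x) + 0.05 * x   # |g'| <= 2 on [0, 10]
        print(piyavskii(g, 0.0, 10.0, L=2.0))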

  11. Mental structures and hierarchical brain processing. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    NASA Astrophysics Data System (ADS)

    Petkov, C. I.

    2014-09-01

    Fitch proposes an appealing hypothesis that humans are dendrophiles, who constantly build mental trees supported by analogous hierarchical brain processes [1]. Moreover, it is argued that, by comparison, nonhuman animals build flat or more compact behaviorally-relevant structures. Should we thus expect less impressive hierarchical brain processes in other animals? Not necessarily.

  12. An Adaptive Shifted Power Method for Computing Generalized Tensor Eigenpairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Tamara G.; Mayo, Jackson R.

    2014-12-11

    Several tensor eigenpair definitions have been put forth in the past decade, but these can all be unified under the generalized tensor eigenpair framework introduced by Chang, Pearson, and Zhang [J. Math. Anal. Appl., 350 (2009), pp. 416-422]. Given mth-order, n-dimensional real-valued symmetric tensors $\mathcal{A}$ and $\mathcal{B}$, the goal is to find $\lambda \in \mathbb{R}$ and $\mathbf{x} \in \mathbb{R}^{n}$, $\mathbf{x} \neq \mathbf{0}$, such that $\mathcal{A}\mathbf{x}^{m-1} = \lambda\, \mathcal{B}\mathbf{x}^{m-1}$.
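
    A simplified relative of the shifted power iteration, with a fixed shift and B equal to the identity (i.e. Z-eigenpairs), is sketched below for a random symmetric third-order tensor; it is not the adaptive method of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4
        T = rng.normal(size=(n, n, n))
        T = (T + T.transpose(0, 2, 1) + T.transpose(1, 0, 2) +      # symmetrize over all
             T.transpose(1, 2, 0) + T.transpose(2, 0, 1) + T.transpose(2, 1, 0)) / 6.0

        def Txx(x):                                # (T x^{m-1})_i for m = 3
            return np.einsum('ijk,j,k->i', T, x, x)

        x = rng.normal(size=n)
        x /= np.linalg.norm(x)
        alpha = 8.0                                # generous fixed positive shift
        for _ in range(200):
            y = Txx(x) + alpha * x
            x = y / np.linalg.norm(y)

        lam = x @ Txx(x)                           # lambda = x^T (T x^{m-1})
        print("lambda:", lam, " residual:", np.linalg.norm(Txx(x) - lam * x))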

  13. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    PubMed

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

    In the context of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other often used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.

  14. Toward a computational theory for motion understanding: The expert animators model

    NASA Technical Reports Server (NTRS)

    Mohamed, Ahmed S.; Armstrong, William W.

    1988-01-01

    Artificial intelligence researchers claim to understand some aspect of human intelligence when their model is able to emulate it. In the context of computer graphics, the ability to go from motion representation to convincing animation should accordingly be treated not simply as a trick for computer graphics programmers but as important epistemological and methodological goal. In this paper we investigate a unifying model for animating a group of articulated bodies such as humans and robots in a three-dimensional environment. The proposed model is considered in the framework of knowledge representation and processing, with special reference to motion knowledge. The model is meant to help setting the basis for a computational theory for motion understanding applied to articulated bodies.

  15. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using illumination-invariant chromaticity histograms in an independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and has good performance.

  16. Medical privacy protection based on granular computing.

    PubMed

    Wang, Da-Wei; Liau, Churn-Jung; Hsu, Tsan-Sheng

    2004-10-01

    Based on granular computing methodology, we propose two criteria to quantitatively measure privacy invasion. The total cost criterion measures the effort needed for a data recipient to find private information. The average benefit criterion measures the benefit a data recipient obtains upon receiving the released data. These two criteria remedy the inadequacy of the deterministic privacy formulation proposed in Proceedings of Asia Pacific Medical Informatics Conference, 2000, and Int J Med Inform 2003;71:17-23. Granular computing methodology provides a unified framework for these quantitative measurements and the previous bin-size and logical approaches. These two new criteria are implemented in a prototype system, Cellsecu 2.0. Preliminary system performance evaluation is conducted and reviewed.

  17. Control of Distributed Parameter Systems

    DTIC Science & Technology

    1990-08-01

    Excerpts from the report state that a unified approximation framework for parameter estimation in general linear PDE models has been completed, and that this framework has provided the theoretical basis for a number of ... A variant of the general Lotka-Volterra model for interspecific competition is also described; the variant described the emergence of one subpopulation from another as a ... (Distribution unlimited.)

  18. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559

  19. A unified perspective on robot control - The energy Lyapunov function approach

    NASA Technical Reports Server (NTRS)

    Wen, John T.

    1990-01-01

    A unified framework for the stability analysis of robot tracking control is presented. By using an energy-motivated Lyapunov function candidate, closed-loop stability is shown for a large family of control laws sharing a common structure of proportional and derivative feedback and a model-based feedforward. The feedforward can be zero, partial or complete linearized dynamics, partial or complete nonlinear dynamics, or linearized or nonlinear dynamics with parameter adaptation. As a result, the dichotomous approaches to the robot control problem based on open-loop linearization and nonlinear Lyapunov analysis are both included in this treatment. Furthermore, quantitative estimates of the trade-offs between different schemes in terms of tracking performance, steady-state error, domain of convergence, real-time computation load and required a priori model information are derived.

  20. A unified statistical approach to non-negative matrix factorization and probabilistic latent semantic indexing

    PubMed Central

    Wang, Guoli; Ebrahimi, Nader

    2014-01-01

    Non-negative matrix factorization (NMF) is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into the product of two nonnegative matrices, W and H, such that V ∼ W H. It has been shown to have a parts-based, sparse representation of the data. NMF has been successfully applied in a variety of areas such as natural language processing, neuroscience, information retrieval, image processing, speech recognition and computational biology for the analysis and interpretation of large-scale data. There has also been simultaneous development of a related statistical latent class modeling approach, namely, probabilistic latent semantic indexing (PLSI), for analyzing and interpreting co-occurrence count data arising in natural language processing. In this paper, we present a generalized statistical approach to NMF and PLSI based on Renyi's divergence between two non-negative matrices, stemming from the Poisson likelihood. Our approach unifies various competing models and provides a unique theoretical framework for these methods. We propose a unified algorithm for NMF and provide a rigorous proof of monotonicity of multiplicative updates for W and H. In addition, we generalize the relationship between NMF and PLSI within this framework. We demonstrate the applicability and utility of our approach as well as its superior performance relative to existing methods using real-life and simulated document clustering data. PMID:25821345

  1. A unified statistical approach to non-negative matrix factorization and probabilistic latent semantic indexing.

    PubMed

    Devarajan, Karthik; Wang, Guoli; Ebrahimi, Nader

    2015-04-01

    Non-negative matrix factorization (NMF) is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into the product of two nonnegative matrices, W and H, such that V ∼ W H. It has been shown to have a parts-based, sparse representation of the data. NMF has been successfully applied in a variety of areas such as natural language processing, neuroscience, information retrieval, image processing, speech recognition and computational biology for the analysis and interpretation of large-scale data. There has also been simultaneous development of a related statistical latent class modeling approach, namely, probabilistic latent semantic indexing (PLSI), for analyzing and interpreting co-occurrence count data arising in natural language processing. In this paper, we present a generalized statistical approach to NMF and PLSI based on Renyi's divergence between two non-negative matrices, stemming from the Poisson likelihood. Our approach unifies various competing models and provides a unique theoretical framework for these methods. We propose a unified algorithm for NMF and provide a rigorous proof of monotonicity of multiplicative updates for W and H. In addition, we generalize the relationship between NMF and PLSI within this framework. We demonstrate the applicability and utility of our approach as well as its superior performance relative to existing methods using real-life and simulated document clustering data.
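
    The classical multiplicative updates for NMF under the KL/Poisson objective, which the Renyi-divergence formulation above contains as a limiting case, can be sketched as follows. The data are synthetic and the code is a numerical illustration, not the authors' generalized algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        V = rng.poisson(5.0, size=(40, 30)).astype(float) + 1e-9   # synthetic count matrix
        k = 4
        W = rng.random((40, k)) + 0.1
        H = rng.random((k, 30)) + 0.1

        for _ in range(300):
            # Lee-Seung multiplicative updates for the KL (Poisson likelihood) objective.
            H *= (W.T @ (V / (W @ H))) / W.sum(axis=0, keepdims=True).T
            W *= ((V / (W @ H)) @ H.T) / H.sum(axis=1, keepdims=True).T

        kl = np.sum(V * np.log(V / (W @ H)) - V + W @ H)
        print("KL divergence after updates:", kl)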

  2. Integrating Xgrid into the HENP distributed computing model

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  3. Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497
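
    For reference, the Fisher information matrix that induces the Fisher-Rao metric discussed above is the standard one:

        g_{jk}(\theta)
          = \mathbb{E}_{p(x \mid \theta)}\!\left[
              \frac{\partial \log p(x \mid \theta)}{\partial \theta_{j}}\,
              \frac{\partial \log p(x \mid \theta)}{\partial \theta_{k}}
            \right]
          = \int p(x \mid \theta)\,
              \frac{\partial \log p(x \mid \theta)}{\partial \theta_{j}}\,
              \frac{\partial \log p(x \mid \theta)}{\partial \theta_{k}}\, dx,
        \qquad
        ds^{2} = \sum_{j,k} g_{jk}(\theta)\, d\theta_{j}\, d\theta_{k}.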

  4. A description of a system of programs for mathematically processing on unified series (YeS) computers photographic images of the Earth taken from spacecraft

    NASA Technical Reports Server (NTRS)

    Zolotukhin, V. G.; Kolosov, B. I.; Usikov, D. A.; Borisenko, V. I.; Mosin, S. T.; Gorokhov, V. N.

    1980-01-01

    A description is presented of a batch of programs for the YeS-1040 computer combined into an automated system for processing photo (and video) images of the Earth's surface taken from spacecraft. Individual programs are presented with a detailed discussion of the algorithmic and programmatic facilities needed by the user. The basic principles for assembling the system and the control programs are included. The exchange format, within whose framework the cataloging of any programs recommended for the processing system will be carried out in the future, is also described.

  5. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.

    PubMed

    Deng, Lei; Jiao, Peng; Pei, Jing; Wu, Zhenzhi; Li, Guoqi

    2018-04-01

    Although deep neural networks (DNNs) are a revolutionary power opening up the AI era, their notoriously huge hardware overhead has challenged their applications. Recently, several binary and ternary networks, in which the costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, have made the on-chip training of DNNs quite promising. Therefore, there is a pressing need to build an architecture that could subsume these networks under a unified framework that achieves both higher performance and less overhead. To this end, two fundamental issues are yet to be addressed. The first is how to implement back propagation when neuronal activations are discrete. The second is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory/computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable implementing the back propagation algorithm on discrete DNNs. For the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes the binary or ternary networks as its special cases, and under which a heuristic algorithm is provided at the website https://github.com/AcrossV/Gated-XNOR. More particularly, we find that when both the weights and activations become ternary values, the DNNs can be reduced to sparse binary networks, termed gated XNOR networks (GXNOR-Nets), since only the event of non-zero weight and non-zero activation enables the control gate to start the XNOR logic operations in the original binary networks. This promises event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity and the number of states in the discrete space can be flexibly modified to make it suitable for various hardware platforms. Copyright © 2018 Elsevier Ltd. All rights reserved.
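
    The ternary idea can be illustrated with a small NumPy sketch: weights and activations take values in {-1, 0, +1}, so a multiply-accumulate reduces to gated XNOR/accumulate logic. The threshold and the straight-through gradient rule below are generic illustrations, not the paper's exact GXNOR/DST rules.

        import numpy as np

        def ternarize(x, delta=0.3):
            # Map values to {-1, 0, +1}; |x| <= delta is gated off to zero.
            return np.sign(x) * (np.abs(x) > delta)

        def ste_grad(upstream_grad, x, clip=1.0):
            # Straight-through estimator: pass gradients where the input is small,
            # block them where it has saturated (a common surrogate derivative).
            return upstream_grad * (np.abs(x) <= clip)

        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.5, size=(8, 16))        # full-precision shadow values
        a = rng.normal(scale=0.5, size=16)

        wt, at = ternarize(w), ternarize(a)
        y = wt @ at                                    # only non-zero pairs contribute
        print("ternary pre-activations:", y)
        print("fraction of gated (zero) products:", np.mean((wt * at) == 0))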

  6. High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems.

    PubMed

    Mahadevan, Vijay S; Merzari, Elia; Tautges, Timothy; Jain, Rajeev; Obabko, Aleksandr; Smith, Michael; Fischer, Paul

    2014-08-06

    An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.

  7. High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems

    PubMed Central

    Mahadevan, Vijay S.; Merzari, Elia; Tautges, Timothy; Jain, Rajeev; Obabko, Aleksandr; Smith, Michael; Fischer, Paul

    2014-01-01

    An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework. PMID:24982250

  8. Applications of airborne ultrasound in human-computer interaction.

    PubMed

    Dahl, Tobias; Ealo, Joao L; Bang, Hans J; Holm, Sverre; Khuri-Yakub, Pierre

    2014-09-01

    Airborne ultrasound is a rapidly developing subfield within human-computer interaction (HCI). Touchless ultrasonic interfaces and pen tracking systems are part of recent trends in HCI and are gaining industry momentum. This paper aims to provide the background and overview necessary to understand the capabilities of ultrasound and its potential future in human-computer interaction. The latest developments on the ultrasound transducer side are presented, focusing on capacitive micro-machined ultrasonic transducers, or CMUTs. Their introduction is an important step toward providing real, low-cost multi-sensor array and beam-forming options. We also provide a unified mathematical framework for understanding and analyzing algorithms used for ultrasound detection and tracking for some of the most relevant applications. Copyright © 2014. Published by Elsevier B.V.
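
    One array-processing building block behind such interfaces, delay-and-sum beamforming for a uniform linear receiver array, can be sketched as follows; the geometry, sample rate and signal below are synthetic.

        import numpy as np

        c, fs, f0 = 343.0, 192_000, 40_000        # speed of sound [m/s], sample rate, tone [Hz]
        n_mics, pitch = 8, 0.004                  # 8 elements, 4 mm spacing (~half wavelength)
        t = np.arange(0, 0.005, 1 / fs)

        true_angle = np.deg2rad(20.0)
        mic_x = np.arange(n_mics) * pitch
        delays = mic_x * np.sin(true_angle) / c   # plane-wave arrival delays
        signals = np.stack([np.sin(2 * np.pi * f0 * (t - d)) for d in delays])

        def das_power(signals, steer_angle):
            # Re-align the channels for a candidate direction and return output power.
            steer = mic_x * np.sin(steer_angle) / c
            aligned = [np.interp(t, t - d, ch) for ch, d in zip(signals, steer)]
            out = np.mean(aligned, axis=0)
            return np.mean(out ** 2)

        angles = np.deg2rad(np.linspace(-60, 60, 121))
        powers = [das_power(signals, a) for a in angles]
        print("estimated direction [deg]:", np.rad2deg(angles[int(np.argmax(powers))]))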

  9. A structured interface to the object-oriented genomics unified schema for XML-formatted data.

    PubMed

    Clark, Terry; Jurek, Josef; Kettler, Gregory; Preuss, Daphne

    2005-01-01

    Data management systems are fast becoming required components in many biology laboratories as the role of computer-based information grows. Although the need for data management systems is on the rise, their inherent complexities can deter the full and routine use of their computational capabilities. The significant undertaking to implement a capable production system can be reduced in part by adapting an established data management system. In such a way, we are leveraging the Genomics Unified Schema (GUS) developed at the Computational Biology and Informatics Laboratory at the University of Pennsylvania as a foundation for managing and analysing DNA sequence data in centromere research projects around Arabidopsis thaliana and related species. Because GUS provides a core schema that includes support for genome sequences, mRNA and its expression, and annotated chromosomes, it is ideal for synthesising a variety of parameters to analyse these repetitive and highly dynamic portions of the genome. Despite this, production-strength data management frameworks are complex, requiring dedicated efforts to adapt and maintain. The work reported in this article addresses one component of such an effort, namely the pivotal task of marshalling data from various sources into GUS. In order to harness GUS for our project, and motivated by efficiency needs, we developed a structured framework for transferring data into GUS from outside sources. This technology is embodied in a GUS object-layer processor, XMLGUS. XMLGUS facilitates incorporating data into GUS by (i) formulating an XML interface that includes relational database key constraint definitions, (ii) regularising traversal through that XML, (iii) realising automatic processing of the XML with database key constraints and (iv) allowing for special processing of input data within the framework for automated processing. The application of XMLGUS to production pipeline processing for a sequencing project and inputting the Arabidopsis genome into GUS is discussed. XMLGUS is available from the Flora website (http://flora.ittc.ku.edu/).

  10. Dynamic Information Management and Exchange for Command and Control Applications, Modelling and Enforcing Category-Based Access Control via Term Rewriting

    DTIC Science & Technology

    2015-03-01

    a hotel and a hospital. 2. Event handler for emergency policies (item 2 above): this has been implemented in two UG projects, one project developed a... Workshop on Logical and Semantic Frameworks, with Applications, Brasilia, Brazil, September 2014. Electronic Notes in Theoretical Computer Science (to... Brasilia, Brazil, September 2014, 2015. [3] S. Barker. The next 700 access control models or a unifying meta-model? In SACMAT 2009, 14th ACM Symposium on

  11. Attending to the forest and the trees. Reply to comments on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition”

    NASA Astrophysics Data System (ADS)

    Fitch, W. Tecumseh

    2014-09-01

    I am grateful to the commentators for their kind and generally positive comments, and the many interesting and challenging remarks and observations. While limitations of space and time preclude me pursuing each of the interesting lines of potential discussion or debate opened up by these commentaries, I will try to respond to all of the major points, roughly following the original order of the target article [1].

  12. Generalized Preconditioned Locally Harmonic Residual Eigensolver (GPLHR) v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vecharynski, Eugene; Yang, Chao

    The software contains a MATLAB implementation of the Generalized Preconditioned Locally Harmonic Residual (GPLHR) method for solving standard and generalized non-Hermitian eigenproblems. The method is particularly useful for computing a subset of eigenvalues, and their eigen- or Schur vectors, closest to a given shift. The proposed method is based on block iterations and can take advantage of a preconditioner if it is available. It does not need to perform exact shift-and-invert transformation. Standard and generalized eigenproblems are handled in a unified framework.

  13. Constraints on single entity driven inflationary and radiation eras

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Chen, Pisin; Liu, Yen-Wei

    2012-07-01

    We present a model that attempts to fuse the inflationary era and the subsequent radiation dominated era under a unified framework so as to provide a smooth transition between the two. The model is based on a modification of the generalized Chaplygin gas. We constrain the model observationally by mapping the primordial power spectrum of the scalar perturbations to the latest data of WMAP7. We compute as well the spectrum of the primordial gravitational waves as would be measured today.

  14. Unified Program Design: Organizing Existing Programming Models, Delivery Options, and Curriculum

    ERIC Educational Resources Information Center

    Rubenstein, Lisa DaVia; Ridgley, Lisa M.

    2017-01-01

    A persistent problem in the field of gifted education has been the lack of categorization and delineation of gifted programming options. To address this issue, we propose Unified Program Design as a structural framework for gifted program models. This framework defines gifted programs as the combination of delivery methods and curriculum models.…

  15. A Unified Framework for Complex Networks with Degree Trichotomy Based on Markov Chains.

    PubMed

    Hui, David Shui Wing; Chen, Yi-Chao; Zhang, Gong; Wu, Weijie; Chen, Guanrong; Lui, John C S; Li, Yingtao

    2017-06-16

    This paper establishes a Markov chain model as a unified framework for describing the evolution processes in complex networks. The unique feature of the proposed model is its capability in addressing the formation mechanism that can reflect the "trichotomy" observed in degree distributions, based on which closed-form solutions can be derived. Important special cases of the proposed unified framework are those classical models, including Poisson, Exponential, Power-law distributed networks. Both simulation and experimental results demonstrate a good match of the proposed model with real datasets, showing its superiority over the classical models. Implications of the model to various applications including citation analysis, online social networks, and vehicular networks design, are also discussed in the paper.
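    The paper's specific Markov chain formulation is not reproduced here, but the degree "trichotomy" it refers to can be illustrated with a generic growth-model sketch: under uniform attachment the degree distribution stays light-tailed (exponential-like), while preferential attachment produces a heavy, power-law-like tail. The growth rule, parameters, and statistics below are illustrative assumptions, not the authors' model.

      # Illustrative sketch only (not the paper's Markov chain model): grow a
      # network under two attachment rules and compare the degree-distribution
      # tails, showing the regimes that the unified framework treats as
      # special cases.
      import random
      from collections import Counter

      def grow_network(n_nodes, m=2, preferential=True, seed=0):
          random.seed(seed)
          degrees = [m] * (m + 1)              # small complete seed graph
          targets = list(range(m + 1)) * m     # endpoints, multiplicity = degree
          for new in range(m + 1, n_nodes):
              chosen = set()
              while len(chosen) < m:
                  if preferential:
                      chosen.add(random.choice(targets))   # prob. proportional to degree
                  else:
                      chosen.add(random.randrange(new))    # uniform over existing nodes
              degrees.append(0)
              for t in chosen:
                  degrees[t] += 1
                  degrees[new] += 1
                  targets.extend([t, new])
          return degrees

      for rule in (False, True):
          deg = grow_network(20000, preferential=rule)
          hist = Counter(deg)
          tail = sum(v for k, v in hist.items() if k >= 50) / len(deg)
          print("preferential" if rule else "uniform",
                "max degree:", max(deg), "P(degree >= 50):", round(tail, 4))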

  16. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm.

    PubMed

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration.

  17. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm

    PubMed Central

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Objective Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. Materials and methods We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Results Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. Conclusions We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration. PMID:23571850

  18. miRMaid: a unified programming interface for microRNA data resources

    PubMed Central

    2010-01-01

    Background MicroRNAs (miRNAs) are endogenous small RNAs that play a key role in post-transcriptional regulation of gene expression in animals and plants. The number of known miRNAs has increased rapidly over the years. The current release (version 14.0) of miRBase, the central online repository for miRNA annotation, comprises over 10,000 miRNA precursors from 115 different species. Furthermore, a large number of decentralized online resources are now available, each contributing with important miRNA annotation and information. Results We have developed a software framework, designated here as miRMaid, with the goal of integrating miRNA data resources in a uniform web service interface that can be accessed and queried by researchers and, most importantly, by computers. miRMaid is built around data from miRBase and is designed to follow the official miRBase data releases. It exposes miRBase data as inter-connected web services. Third-party miRNA data resources can be modularly integrated as miRMaid plugins or they can loosely couple with miRMaid as individual entities in the World Wide Web. miRMaid is available as a public web service but is also easily installed as a local application. The software framework is freely available under the LGPL open source license for academic and commercial use. Conclusion miRMaid is an intuitive and modular software platform designed to unify miRBase and independent miRNA data resources. It enables miRNA researchers to computationally address complex questions involving the multitude of miRNA data resources. Furthermore, miRMaid constitutes a basic framework for further programming in which microRNA-interested bioinformaticians can readily develop their own tools and data sources. PMID:20074352

  19. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    PubMed Central

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines which can represent with accuracy geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because it can unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation without interpolation or resampling. We have applied the framework to the biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  20. Incubation, Insight, and Creative Problem Solving: A Unified Theory and a Connectionist Model

    ERIC Educational Resources Information Center

    Helie, Sebastien; Sun, Ron

    2010-01-01

    This article proposes a unified framework for understanding creative problem solving, namely, the explicit-implicit interaction theory. This new theory of creative problem solving constitutes an attempt at providing a more unified explanation of relevant phenomena (in part by reinterpreting/integrating various fragmentary existing theories of…

  1. Clinical data integration model. Core interoperability ontology for research using primary care data.

    PubMed

    Ethier, J-F; Curcin, V; Barton, A; McGilchrist, M M; Bastiaens, H; Andreasson, A; Rossiter, J; Zhao, L; Arvanitis, T N; Taweel, A; Delaney, B C; Burgun, A

    2015-01-01

    This article is part of the Focus Theme of METHODS of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Primary care data is the single richest source of routine health care data. However its use, both in research and clinical work, often requires data from multiple clinical sites, clinical trials databases and registries. Data integration and interoperability are therefore of utmost importance. TRANSFoRm's general approach relies on a unified interoperability framework, described in a previous paper. We developed a core ontology for an interoperability framework based on data mediation. This article presents how such an ontology, the Clinical Data Integration Model (CDIM), can be designed to support, in conjunction with appropriate terminologies, biomedical data federation within TRANSFoRm, an EU FP7 project that aims to develop the digital infrastructure for a learning healthcare system in European Primary Care. TRANSFoRm utilizes a unified structural / terminological interoperability framework, based on the local-as-view mediation paradigm. Such an approach mandates the global information model to describe the domain of interest independently of the data sources to be explored. Following a requirement analysis process, no ontology focusing on primary care research was identified and, thus we designed a realist ontology based on Basic Formal Ontology to support our framework in collaboration with various terminologies used in primary care. The resulting ontology has 549 classes and 82 object properties and is used to support data integration for TRANSFoRm's use cases. Concepts identified by researchers were successfully expressed in queries using CDIM and pertinent terminologies. As an example, we illustrate how, in TRANSFoRm, the Query Formulation Workbench can capture eligibility criteria in a computable representation, which is based on CDIM. A unified mediation approach to semantic interoperability provides a flexible and extensible framework for all types of interaction between health record systems and research systems. CDIM, as core ontology of such an approach, enables simplicity and consistency of design across the heterogeneous software landscape and can support the specific needs of EHR-driven phenotyping research using primary care data.

  2. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.

  3. Semiclassical Virasoro blocks from AdS 3 gravity

    DOE PAGES

    Hijano, Eliot; Kraus, Per; Perlmutter, Eric; ...

    2015-12-14

    We present a unified framework for the holographic computation of Virasoro conformal blocks at large central charge. In particular, we provide bulk constructions that correctly reproduce all semiclassical Virasoro blocks that are known explicitly from conformal field theory computations. The results revolve around the use of geodesic Witten diagrams, recently introduced in [1], evaluated in locally AdS3 geometries generated by backreaction of heavy operators. We also provide an alternative computation of the heavy-light semiclassical block — in which two external operators become parametrically heavy — as a certain scattering process involving higher spin gauge fields in AdS3; this approach highlights the chiral nature of Virasoro blocks. Finally, these techniques may be systematically extended to compute corrections to these blocks and to interpolate amongst the different semiclassical regimes.

  4. Computational Analysis and Simulation of Empathic Behaviors: a Survey of Empathy Modeling with Behavioral Signal Processing Framework.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis; Atkins, David C; Narayanan, Shrikanth S

    2016-05-01

    Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, and facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation and offer a series of open problems for future research.

  5. Scale Space for Camera Invariant Features.

    PubMed

    Puig, Luis; Guerrero, José J; Daniilidis, Kostas

    2014-09-01

    In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for the intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
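    As a point of reference for the abstract above, the ordinary (Euclidean) special case of scale space can be written as heat diffusion, dL/dt = Laplacian(L), whose solution at time t equals Gaussian smoothing with sigma = sqrt(2t). The sketch below implements only this plain-image case with an explicit finite-difference scheme; the paper's contribution, replacing the Euclidean Laplacian with the Laplace-Beltrami operator of the metric induced by the unified projection model, is not implemented here.

      # Minimal sketch of linear (Gaussian) scale space via heat diffusion on a
      # plain 2-D image. The camera-specific Laplace-Beltrami operator used in
      # the paper is NOT implemented; this is the Euclidean baseline only.
      import numpy as np

      def diffuse(image, t, dt=0.2):
          """Explicit finite-difference heat diffusion; dt <= 0.25 for stability."""
          L = image.astype(float).copy()
          for _ in range(int(t / dt)):
              lap = (np.roll(L, 1, 0) + np.roll(L, -1, 0) +
                     np.roll(L, 1, 1) + np.roll(L, -1, 1) - 4.0 * L)
              L += dt * lap
          return L

      # Diffusing to time t approximates Gaussian smoothing with sigma = sqrt(2*t).
      img = np.zeros((64, 64)); img[32, 32] = 1.0        # point impulse
      scales = [diffuse(img, t) for t in (1.0, 4.0, 16.0)]
      print([round(float(s.max()), 4) for s in scales])  # peak spreads with scale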

  6. Noise in Neuronal and Electronic Circuits: A General Modeling Framework and Non-Monte Carlo Simulation Techniques.

    PubMed

    Kilinc, Deniz; Demir, Alper

    2017-08-01

    The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
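    The framework itself is not reproduced here, but the fine-grained versus coarse-grained distinction it describes can be illustrated on a toy population of two-state ion channels: an exact discrete-state continuous-time Markov chain (simulated with the Gillespie algorithm) versus its diffusion (Langevin) approximation. Rates, population size, and the simple clipping of the open fraction are assumptions for illustration only.

      # Toy illustration (not the authors' simulator): N two-state ion channels,
      # closed <-> open with rates alpha (opening) and beta (closing). The exact
      # CTMC (Gillespie simulation) is compared with its coarse-grained
      # diffusion (Langevin / SDE) approximation.
      import math, random

      alpha, beta, N, T = 2.0, 1.0, 200, 5.0

      def gillespie(seed=1):
          random.seed(seed)
          t, n_open = 0.0, 0
          while t < T:
              r_open, r_close = alpha * (N - n_open), beta * n_open
              total = r_open + r_close
              t += random.expovariate(total)
              if random.random() < r_open / total:
                  n_open += 1
              else:
                  n_open -= 1
          return n_open / N

      def langevin(dt=1e-3, seed=1):
          random.seed(seed)
          x = 0.0
          for _ in range(int(T / dt)):
              drift = alpha * (1 - x) - beta * x
              diff = math.sqrt(max(alpha * (1 - x) + beta * x, 0.0) / N)
              x += drift * dt + diff * math.sqrt(dt) * random.gauss(0, 1)
              x = min(max(x, 0.0), 1.0)   # keep the open fraction in [0, 1]
          return x

      print("steady-state open fraction, exact CTMC:", round(gillespie(), 3))
      print("steady-state open fraction, SDE       :", round(langevin(), 3))
      print("analytic alpha/(alpha+beta)           :", round(alpha / (alpha + beta), 3))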

  7. Computational analysis of conserved RNA secondary structure in transcriptomes and genomes.

    PubMed

    Eddy, Sean R

    2014-01-01

    Transcriptomics experiments and computational predictions both enable systematic discovery of new functional RNAs. However, many putative noncoding transcripts arise instead from artifacts and biological noise, and current computational prediction methods have high false positive rates. I discuss prospects for improving computational methods for analyzing and identifying functional RNAs, with a focus on detecting signatures of conserved RNA secondary structure. An interesting new front is the application of chemical and enzymatic experiments that probe RNA structure on a transcriptome-wide scale. I review several proposed approaches for incorporating structure probing data into the computational prediction of RNA secondary structure. Using probabilistic inference formalisms, I show how all these approaches can be unified in a well-principled framework, which in turn allows RNA probing data to be easily integrated into a wide range of analyses that depend on RNA secondary structure inference. Such analyses include homology search and genome-wide detection of new structural RNAs.
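    One concrete example of the kind of probing-data integration discussed above is the widely used pseudo-free-energy conversion, in which a per-nucleotide SHAPE reactivity r is turned into a pairing penalty of the form m*ln(r + 1) + b that is added to the thermodynamic model. The sketch below shows only this conversion step; the slope and intercept are the commonly quoted values and should be treated as assumptions here, and the probabilistic unification proposed in the review is not implemented.

      # Illustrative sketch, not from the review itself: convert per-nucleotide
      # SHAPE reactivities into pseudo-free-energy terms added whenever that
      # nucleotide forms a base pair. Parameter values are assumed.
      import math

      M_SLOPE = 2.6       # kcal/mol, penalizes pairing of highly reactive (flexible) bases
      B_INTERCEPT = -0.8  # kcal/mol, small bonus for pairing unreactive bases

      def shape_pseudo_energy(reactivity):
          """Per-nucleotide pairing pseudo-energy; None or negative means 'no data'."""
          if reactivity is None or reactivity < 0:
              return 0.0
          return M_SLOPE * math.log(reactivity + 1.0) + B_INTERCEPT

      reactivities = [0.02, 0.10, 1.80, None, 0.45]
      for i, r in enumerate(reactivities):
          print(i, r, round(shape_pseudo_energy(r), 3), "kcal/mol")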

  8. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    NASA Astrophysics Data System (ADS)

    McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O'Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.

    2017-06-01

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
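    For orientation, the conventional second step that the unified framework replaces, fitting a correspondence model that maps surrogate signals to registration-derived motion, can be written as a linear least-squares problem. The sketch below shows only that baseline step with synthetic data; the array names and sizes are illustrative, and the joint registration/fitting optimization and motion-compensated reconstruction of the paper are not implemented.

      # Sketch of the conventional second step only (not the unified
      # optimization of the paper): fit a linear correspondence model mapping
      # respiratory surrogate signals s_t to motion parameters m_t,
      #     m_t ~= C @ s_t + c0,
      # by least squares. Array names and sizes are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n_frames, n_surrogates, n_motion_params = 50, 2, 3

      S = rng.standard_normal((n_frames, n_surrogates))          # surrogate signals per frame
      C_true = rng.standard_normal((n_motion_params, n_surrogates))
      M = S @ C_true.T + 0.05 * rng.standard_normal((n_frames, n_motion_params))  # "registration" output

      # Append a constant column so the offset c0 is estimated too.
      S1 = np.hstack([S, np.ones((n_frames, 1))])
      coef, *_ = np.linalg.lstsq(S1, M, rcond=None)               # (n_surrogates+1) x n_motion_params
      C_est, c0_est = coef[:-1].T, coef[-1]

      s_new = np.array([0.3, -1.2])                               # new surrogate sample
      print("predicted motion parameters:", C_est @ s_new + c0_est)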

  9. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images.

    PubMed

    McClelland, Jamie R; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O'Connell, Dylan; Low, Daniel A; Kaza, Evangelia; Collins, David J; Leach, Martin O; Hawkes, David J

    2017-06-07

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of 'partial' imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.

  10. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    PubMed Central

    McClelland, Jamie R; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O'Connell, Dylan; Low, Daniel A; Kaza, Evangelia; Collins, David J; Leach, Martin O; Hawkes, David J

    2017-01-01

    Abstract Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated. PMID:28195833

  11. The visual system’s internal model of the world

    PubMed Central

    Lee, Tai Sing

    2015-01-01

    The Bayesian paradigm has provided a useful conceptual theory for understanding perceptual computation in the brain. While the detailed neural mechanisms of Bayesian inference are not fully understood, recent computational and neurophysiological studies have illuminated the underlying computational principles and representational architecture. The fundamental insights are that the visual system is organized as a modular hierarchy to encode an internal model of the world, and that perception is realized by statistical inference based on such an internal model. In this paper, I will discuss and analyze the varieties of representational schemes of these internal models and how they might be used to perform learning and inference. I will argue for a unified theoretical framework for relating the internal models to the observed neural phenomena and mechanisms in the visual cortex. PMID:26566294
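    A minimal toy version of "perception as statistical inference on an internal model" can be written as a prior over hidden world states combined with a likelihood for noisy sensory data, with the percept given by the Bayes posterior. The states, prior, and noise model below are invented for illustration and are not taken from the paper.

      # Toy illustration of perception as inference on an internal model (not a
      # model from the paper): a prior over a hidden world state plus a noisy
      # sensory likelihood, combined by Bayes' rule.
      import math

      states = ["edge_present", "edge_absent"]
      prior = {"edge_present": 0.2, "edge_absent": 0.8}     # internal model: how common edges are

      def likelihood(observation, state, sigma=1.0):
          """Gaussian sensory noise around the response expected under each state."""
          expected = 1.0 if state == "edge_present" else 0.0
          return math.exp(-(observation - expected) ** 2 / (2 * sigma ** 2))

      def posterior(observation):
          unnorm = {s: prior[s] * likelihood(observation, s) for s in states}
          z = sum(unnorm.values())
          return {s: p / z for s, p in unnorm.items()}

      for obs in (0.1, 0.5, 0.9):
          print(obs, {s: round(p, 3) for s, p in posterior(obs).items()})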

  12. Another Initiative? Where Does it Fit? A Unifying Framework and an Integrated Infrastructure for Schools to Address Barriers to Learning and Promote Healthy Development

    ERIC Educational Resources Information Center

    Center for Mental Health in Schools at UCLA, 2005

    2005-01-01

    This report was developed to highlight the current state of affairs and illustrate the value of a unifying framework and integrated infrastructure for the many initiatives, projects, programs, and services schools pursue in addressing barriers to learning and promoting healthy development. Specifically, it highlights how initiatives can be…

  13. Tensor scale-based fuzzy connectedness image segmentation

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Udupa, Jayaram K.

    2003-05-01

    Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - in the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of the previous approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.
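    A generic (non-tensor-scale) sketch of the affinity computation mentioned above: the affinity between two adjacent voxels is built from a homogeneity component, which is high when their intensities are similar, and an object-feature component, which is high when both intensities are close to an expected object intensity. The Gaussian forms, parameters, and the square-root combination below are assumptions; the orientation- and anisotropy-aware tensor-scale weighting that the paper introduces is not reproduced.

      # Generic sketch of a fuzzy-connectedness affinity between two adjacent
      # voxels c and d. The tensor-scale-based weighting of the paper is NOT
      # reproduced; forms and parameters are assumptions.
      import math

      def gaussian(x, mean, sigma):
          return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

      def affinity(intensity_c, intensity_d, object_mean=120.0,
                   sigma_homog=10.0, sigma_obj=25.0):
          homogeneity = gaussian(intensity_c - intensity_d, 0.0, sigma_homog)
          object_feature = min(gaussian(intensity_c, object_mean, sigma_obj),
                               gaussian(intensity_d, object_mean, sigma_obj))
          return math.sqrt(homogeneity * object_feature)   # combine the two components

      print(round(affinity(118, 122), 3))   # similar intensities near the object mean: high
      print(round(affinity(118, 180), 3))   # large jump across the pair: low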

  14. Towards a Coupled Vortex Particle and Acoustic Boundary Element Solver to Predict the Noise Production of Bio-Inspired Propulsion

    NASA Astrophysics Data System (ADS)

    Wagenhoffer, Nathan; Moored, Keith; Jaworski, Justin

    2016-11-01

    The design of quiet and efficient bio-inspired propulsive concepts requires a rapid, unified computational framework that integrates the coupled fluid dynamics with the noise generation. Such a framework is developed where the fluid motion is modeled with a two-dimensional unsteady boundary element method that includes a vortex-particle wake. The unsteady surface forces from the potential flow solver are then passed to an acoustic boundary element solver to predict the radiated sound in low-Mach-number flows. The use of the boundary element method for both the hydrodynamic and acoustic solvers permits dramatic computational acceleration by application of the fast multipole method. The reduced order of calculations due to the fast multipole method allows for greater spatial resolution of the vortical wake per unit of computational time. The coupled flow-acoustic solver is validated against canonical vortex-sound problems. The capability of the coupled solver is demonstrated by analyzing the performance and noise production of an isolated bio-inspired swimmer and of tandem swimmers.

  15. A Unified Classification Framework for FP, DP and CP Data at X-Band in Southern China

    NASA Astrophysics Data System (ADS)

    Xie, Lei; Zhang, Hong; Li, Hongzhong; Wang, Chao

    2015-04-01

    The main objective of this paper is to introduce a unified framework for crop classification in Southern China using data in fully polarimetric (FP), dual-pol (DP) and compact polarimetric (CP) modes. The TerraSAR-X data acquired over the Leizhou Peninsula, South China are used in our experiments. The study site involves four main crops (rice, banana, sugarcane and eucalyptus). Through exploring the similarities between data in these three modes, a knowledge-based characteristic space is created and the unified framework is presented. The overall classification accuracies for data in the FP and coherent HH/VV modes are about 95%, and about 91% in the CP mode, which suggests that the proposed classification scheme is effective and promising. Compared with the Wishart Maximum Likelihood (ML) classifier, the proposed method exhibits higher classification accuracy.

  16. Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition

    NASA Astrophysics Data System (ADS)

    Fitch, W. Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.

  17. Toward a computational framework for cognitive biology: unifying approaches from cognitive neuroscience and comparative cognition.

    PubMed

    Fitch, W Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology. Copyright © 2014. Published by Elsevier B.V.

  18. Cognitive theory and brain fact: Insights for the future of cognitive neuroscience. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    NASA Astrophysics Data System (ADS)

    Bowling, Daniel

    2014-09-01

    A central challenge in neuroscience is to understand the relationship between the mechanistic operation of the nervous system and the psychological phenomena we experience everyday (e.g., perception, memory, attention, emotion, and consciousness). Supported by revolutionary advances in technology, knowledge of neural mechanisms has grown dramatically over recent decades, but with few exceptions our understanding of how these mechanisms relate to psychological phenomena remains poor.

  19. The Unified Behavior Framework for the Simulation of Autonomous Agents

    DTIC Science & Technology

    2015-03-01

    1980s, researchers have designed a variety of robot control architectures intending to imbue robots with some degree of autonomy. A recently developed... Identification Friend or Foe... The Unified Behavior Framework for the Simulation of Autonomous Agents. I. Introduction: The development of autonomy has... room for research by utilizing methods like simulation and modeling that consume less time and fewer monetary resources. A recently developed reactive

  20. A lightweight messaging-based distributed processing and workflow execution framework for real-time and big data analysis

    NASA Astrophysics Data System (ADS)

    Laban, Shaban; El-Desouky, Aly

    2014-05-01

    To achieve rapid, simple and reliable parallel processing of different types of tasks and big data processing on any compute cluster, a lightweight messaging-based distributed application processing and workflow execution framework model is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open-source persistence messaging and integration patterns server with scheduler capabilities, acts as a message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to talk and interact with each other and with ActiveMQ easily. In order to use the message broker efficiently, a unified message and topic naming pattern is utilized to achieve the required operation. Only three Python programs and a simple library, used to unify and simplify the implementation of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program is used to monitor, remove, add, start and stop any machine and/or its different tasks when necessary. For every machine, one and only one dedicated zoo keeper program is used to start the different functions or tasks (stompShell programs) needed for executing the user's required workflow. The stompShell instances execute workflow jobs based on received messages. A well-defined, simple and flexible message structure, based on JavaScript Object Notation (JSON), is used to build complex workflow systems; JSON is also used for configuration and for communication between machines and programs. The framework is platform independent. Although the framework is built using Python, the actual workflow programs or jobs can be implemented in any programming language. The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). It is also possible to extend the use of the framework to monitoring the IDC pipeline. The detailed design, implementation, conclusions and future work of the proposed framework will be presented.
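    To make the message-based design concrete, the sketch below constructs the kind of JSON-encoded workflow job the abstract describes and formats it as a raw STOMP 1.2 SEND frame addressed to an ActiveMQ topic. The topic name, JSON fields, and program names are hypothetical, and the frame is only printed rather than transmitted; a real deployment would send it through a STOMP client library or a TCP connection to the broker.

      # Sketch only: a hypothetical JSON workflow job and the corresponding raw
      # STOMP 1.2 SEND frame (headers, blank line, body, NUL terminator). The
      # topic name and JSON fields are invented for illustration.
      import json

      job = {
          "workflow": "waveform_qc",          # hypothetical workflow name
          "machine": "node-07",               # target worker machine
          "task": "stompShell",               # program that executes the step
          "command": ["python", "qc.py", "--station", "ABC"],
          "depends_on": [],
      }
      body = json.dumps(job)

      destination = "/topic/workflow.node-07"     # assumed unified naming pattern
      frame = (
          "SEND\n"
          f"destination:{destination}\n"
          "content-type:application/json\n"
          f"content-length:{len(body.encode())}\n"
          "\n"
          f"{body}\0"
      )
      print(frame)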

  1. A Unified Probabilistic Framework for Dose–Response Assessment of Human Health Effects

    PubMed Central

    Slob, Wout

    2015-01-01

    Background When chemical health hazards have been identified, probabilistic dose–response assessment (“hazard characterization”) quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. Objectives We developed a unified framework for probabilistic dose–response assessment. Methods We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose–response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, “effect metrics” can be specified to define “toxicologically equivalent” sizes for this underlying individual response; and d) dose–response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose–response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Results Probabilistically derived exposure limits are based on estimating a “target human dose” (HD_M^I), which requires risk management–informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HD_M^I values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%–10% effect sizes. Conclusions Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions. Citation Chiu WA, Slob W. 2015. A unified probabilistic framework for dose–response assessment of human health effects. Environ Health Perspect 123:1241–1254; http://dx.doi.org/10.1289/ehp.1409385 PMID:26006063
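    The calibrated distributions of the paper are not reproduced here, but the general mechanics of a probabilistic hazard characterization can be sketched by Monte Carlo: each adjustment from an animal point of departure to a target human dose is assigned an uncertainty distribution, the samples are combined, and a lower percentile of the resulting HD_M^I distribution serves as the probabilistically derived exposure limit. All distributions and numbers below are illustrative assumptions.

      # Illustrative Monte Carlo sketch of the general probabilistic idea; the
      # distributions and numbers are NOT the paper's calibrated values.
      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Point of departure (e.g. a benchmark dose) with its estimation uncertainty.
      pod = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)   # mg/kg-day, assumed

      # Uncertain adjustment factors, each lognormal (illustrative medians/spreads).
      interspecies = rng.lognormal(np.log(3.0), 0.4, n)   # animal -> average human
      intraspecies = rng.lognormal(np.log(3.0), 0.5, n)   # average human -> sensitive (incidence I)

      hd_mi = pod / (interspecies * intraspecies)          # target human dose samples

      print("median HD_M^I:", round(float(np.median(hd_mi)), 3), "mg/kg-day")
      print("5th percentile (lower bound of 90% CI):",
            round(float(np.percentile(hd_mi, 5)), 3), "mg/kg-day")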

  2. A model linking immediate serial recall, the Hebb repetition effect and the learning of phonological word forms

    PubMed Central

    Page, M. P. A.; Norris, D.

    2009-01-01

    We briefly review the considerable evidence for a common ordering mechanism underlying both immediate serial recall (ISR) tasks (e.g. digit span, non-word repetition) and the learning of phonological word forms. In addition, we discuss how recent work on the Hebb repetition effect is consistent with the idea that learning in this task is itself a laboratory analogue of the sequence-learning component of phonological word-form learning. In this light, we present a unifying modelling framework that seeks to account for ISR and Hebb repetition effects, while being extensible to word-form learning. Because word-form learning is performed in the service of later word recognition, our modelling framework also subsumes a mechanism for word recognition from continuous speech. Simulations of a computational implementation of the modelling framework are presented and are shown to be in accordance with data from the Hebb repetition paradigm. PMID:19933143

  3. From data towards knowledge: revealing the architecture of signaling systems by unifying knowledge mining and data mining of systematic perturbation data.

    PubMed

    Lu, Songjian; Jin, Bo; Cowart, L Ashley; Lu, Xinghua

    2013-01-01

    Genetic and pharmacological perturbation experiments, such as deleting a gene and monitoring gene expression responses, are powerful tools for studying cellular signal transduction pathways. However, it remains a challenge to automatically derive knowledge of a cellular signaling system at a conceptual level from systematic perturbation-response data. In this study, we explored a framework that unifies knowledge mining and data mining towards the goal. The framework consists of the following automated processes: 1) applying an ontology-driven knowledge mining approach to identify functional modules among the genes responding to a perturbation in order to reveal potential signals affected by the perturbation; 2) applying a graph-based data mining approach to search for perturbations that affect a common signal; and 3) revealing the architecture of a signaling system by organizing signaling units into a hierarchy based on their relationships. Applying this framework to a compendium of yeast perturbation-response data, we have successfully recovered many well-known signal transduction pathways; in addition, our analysis has led to many new hypotheses regarding the yeast signal transduction system; finally, our analysis automatically organized perturbed genes as a graph reflecting the architecture of the yeast signaling system. Importantly, this framework transformed molecular findings from a gene level to a conceptual level, which can be readily translated into computable knowledge in the form of rules regarding the yeast signaling system, such as "if genes involved in the MAPK signaling are perturbed, genes involved in pheromone responses will be differentially expressed."

  4. Systemic risk in a unifying framework for cascading processes on networks

    NASA Astrophysics Data System (ADS)

    Lorenz, J.; Battiston, S.; Schweitzer, F.

    2009-10-01

    We introduce a general framework for models of cascade and contagion processes on networks, to identify their commonalities and differences. In particular, models of social and financial cascades, as well as the fiber bundle model, the voter model, and models of epidemic spreading are recovered as special cases. To unify their description, we define the net fragility of a node, which is the difference between its fragility and the threshold that determines its failure. Nodes fail if their net fragility grows above zero and their failure increases the fragility of neighbouring nodes, thus possibly triggering a cascade. In this framework, we identify three classes depending on the way the fragility of a node is increased by the failure of a neighbour. At the microscopic level, we illustrate with specific examples how the failure spreading pattern varies with the node triggering the cascade, depending on its position in the network and its degree. At the macroscopic level, systemic risk is measured as the final fraction of failed nodes, X*, and for each of the three classes we derive a recursive equation to compute its value. The phase diagram of X* as a function of the initial conditions thus allows for a prediction of the systemic risk as well as a comparison of the three different model classes. We could identify which model class leads to a first-order phase transition in systemic risk, i.e. situations where small changes in the initial conditions determine a global failure. Finally, we generalize our framework to encompass stochastic contagion models. This indicates the potential for further generalizations.
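    The cascade mechanism described above (failure when net fragility exceeds zero, with each failure raising the fragility of neighbours) can be simulated directly; the sketch below does so on a random graph and reports the final failed fraction X*. The graph model, thresholds, and the fixed load transferred per failure are assumptions for illustration and do not correspond to any one of the paper's three model classes.

      # Minimal cascade simulation: a node fails when its net fragility
      # (fragility - threshold) grows above zero, and each failure adds a fixed
      # load to its neighbours. Graph, thresholds, and load are assumed.
      import random

      def simulate_cascade(n=2000, p_edge=0.004, load_per_failure=0.25,
                           seed_failures=5, seed=0):
          random.seed(seed)
          neighbours = [[] for _ in range(n)]
          for i in range(n):
              for j in range(i + 1, n):
                  if random.random() < p_edge:
                      neighbours[i].append(j)
                      neighbours[j].append(i)

          threshold = [random.uniform(0.5, 1.5) for _ in range(n)]
          fragility = [0.0] * n
          failed = [False] * n

          # Initial shock: a few nodes are pushed just past their thresholds.
          queue = random.sample(range(n), seed_failures)
          for i in queue:
              fragility[i] = threshold[i] + 1e-9

          while queue:
              i = queue.pop()
              if failed[i] or fragility[i] - threshold[i] <= 0:   # net fragility check
                  continue
              failed[i] = True
              for j in neighbours[i]:
                  if not failed[j]:
                      fragility[j] += load_per_failure             # failure propagates load
                      if fragility[j] - threshold[j] > 0:
                          queue.append(j)
          return sum(failed) / n                                   # systemic risk X*

      print("final failed fraction X* =", round(simulate_cascade(), 3))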

  5. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiprocessing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  6. Computational Analysis and Simulation of Empathic Behaviors: A Survey of Empathy Modeling with Behavioral Signal Processing Framework

    PubMed Central

    Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis; Atkins, David C.; Narayanan, Shrikanth S.

    2017-01-01

    Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation, and offer a series of open problems for future research. PMID:27017830

  7. High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems

    DOE PAGES

    Mahadevan, Vijay S.; Merzari, Elia; Tautges, Timothy; ...

    2014-06-30

    An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. Finally, the coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.

  8. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, and two applications which illustrate our framework's ability to support general web-based robotic control, and offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  9. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.

    Background Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions The production of CML compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.

  10. A Unified Computational Model for Solar and Stellar Flares

    NASA Technical Reports Server (NTRS)

    Allred, Joel C.; Kowalski, Adam F.; Carlsson, Mats

    2015-01-01

    We present a unified computational framework that can be used to describe impulsive flares on the Sun and on dMe stars. The models assume that the flare impulsive phase is caused by a beam of charged particles that is accelerated in the corona and propagates downward depositing energy and momentum along the way. This rapidly heats the lower stellar atmosphere causing it to explosively expand and dramatically brighten. Our models consist of flux tubes that extend from the sub-photosphere into the corona. We simulate how flare-accelerated charged particles propagate down one-dimensional flux tubes and heat the stellar atmosphere using the Fokker-Planck kinetic theory. Detailed radiative transfer is included so that model predictions can be directly compared with observations. The flux of flare-accelerated particles drives return currents which additionally heat the stellar atmosphere. These effects are also included in our models. We examine the impact of the flare-accelerated particle beams on model solar and dMe stellar atmospheres and perform parameter studies varying the injected particle energy spectra. We find the atmospheric response is strongly dependent on the accelerated particle cutoff energy and spectral index.

  11. A Computational framework for telemedicine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze requirements necessary for a telemedical computing infrastructure and compare them with requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low level mechanisms to enable a large scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements for telemedicine.

  12. BIRCH: a user-oriented, locally-customizable, bioinformatics system.

    PubMed

    Fristensky, Brian

    2007-02-09

    Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.

  13. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    PubMed Central

    Fristensky, Brian

    2007-01-01

    Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351

  14. OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Vetter, Jeffrey S

    2014-01-01

    Directive-based accelerator programming models such as OpenACC have arisen as an alternative solution for programming emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity in the SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-source OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.

  15. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  16. Multiple hypothesis tracking for cluttered biological image sequences.

    PubMed

    Chenouard, Nicolas; Bloch, Isabelle; Olivo-Marin, Jean-Christophe

    2013-11-01

    In this paper, we present a method for simultaneously tracking thousands of targets in biological image sequences, which is of major importance in modern biology. The complexity and inherent randomness of the problem lead us to propose a unified probabilistic framework for tracking biological particles in microscope images. The framework includes realistic models of particle motion and existence and of fluorescence image features. For the track extraction process per se, the very cluttered conditions motivate the adoption of a multiframe approach that enforces tracking decision robustness to poor imaging conditions and to random target movements. We tackle the large-scale nature of the problem by adapting the multiple hypothesis tracking algorithm to the proposed framework, resulting in a method with a favorable tradeoff between the model complexity and the computational cost of the tracking procedure. When compared to the state-of-the-art tracking techniques for bioimaging, the proposed algorithm is shown to be the only method providing high-quality results despite the critically poor imaging conditions and the dense target presence. We thus demonstrate the benefits of advanced Bayesian tracking techniques for the accurate computational modeling of dynamical biological processes, which is promising for further developments in this domain.

  17. Noncontextual Wirings

    NASA Astrophysics Data System (ADS)

    Amaral, Barbara; Cabello, Adán; Cunha, Marcelo Terra; Aolita, Leandro

    2018-03-01

    Contextuality is a fundamental feature of quantum theory necessary for certain models of quantum computation and communication. Serious steps have therefore been taken towards a formal framework for contextuality as an operational resource. However, the main ingredient of a resource theory—a concrete, explicit form of free operations of contextuality—was still missing. Here we provide such a component by introducing noncontextual wirings: a class of contextuality-free operations with a clear operational interpretation and a friendly parametrization. We characterize them completely for general black-box measurement devices with arbitrarily many inputs and outputs. As applications, we show that the relative entropy of contextuality is a contextuality monotone and that maximally contextual boxes that serve as contextuality bits exist for a broad class of scenarios. Our results complete a unified resource-theoretic framework for contextuality and Bell nonlocality.

  18. Noncontextual Wirings.

    PubMed

    Amaral, Barbara; Cabello, Adán; Cunha, Marcelo Terra; Aolita, Leandro

    2018-03-30

    Contextuality is a fundamental feature of quantum theory necessary for certain models of quantum computation and communication. Serious steps have therefore been taken towards a formal framework for contextuality as an operational resource. However, the main ingredient of a resource theory, a concrete, explicit form of free operations of contextuality, was still missing. Here we provide such a component by introducing noncontextual wirings: a class of contextuality-free operations with a clear operational interpretation and a friendly parametrization. We characterize them completely for general black-box measurement devices with arbitrarily many inputs and outputs. As applications, we show that the relative entropy of contextuality is a contextuality monotone and that maximally contextual boxes that serve as contextuality bits exist for a broad class of scenarios. Our results complete a unified resource-theoretic framework for contextuality and Bell nonlocality.

  19. Design Considerations of a Virtual Laboratory for Advanced X-ray Sources

    NASA Astrophysics Data System (ADS)

    Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.

    2004-11-01

    The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state-of-the-art in high fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the various physics algorithms' fidelity will be presented.

  20. A Perfect Match Genomic Landscape Provides a Unified Framework for the Precise Detection of Variation in Natural and Synthetic Haploid Genomes

    PubMed Central

    Palacios-Flores, Kim; García-Sotelo, Jair; Castillo, Alejandra; Uribe, Carina; Aguilar, Luis; Morales, Lucía; Gómez-Romero, Laura; Reyes, José; Garciarubio, Alejandro; Boege, Margareta; Dávila, Guillermo

    2018-01-01

    We present a conceptually simple, sensitive, precise, and essentially nonstatistical solution for the analysis of genome variation in haploid organisms. The generation of a Perfect Match Genomic Landscape (PMGL), which computes intergenome identity with single nucleotide resolution, reveals signatures of variation wherever a query genome differs from a reference genome. Such signatures encode the precise location of different types of variants, including single nucleotide variants, deletions, insertions, and amplifications, effectively introducing the concept of a general signature of variation. The precise nature of variants is then resolved through the generation of targeted alignments between specific sets of sequence reads and known regions of the reference genome. Thus, the perfect match logic decouples the identification of the location of variants from the characterization of their nature, providing a unified framework for the detection of genome variation. We assessed the performance of the PMGL strategy via simulation experiments. We determined the variation profiles of natural genomes and of a synthetic chromosome, both in the context of haploid yeast strains. Our approach uncovered variants that have previously escaped detection. Moreover, our strategy is ideally suited for further refining high-quality reference genomes. The source codes for the automated PMGL pipeline have been deposited in a public repository. PMID:29367403
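
    The record above describes the landscape only conceptually. The following is a minimal sketch of the perfect-match idea, assuming the query genome is supplied as reads and counting exact k-mer matches per reference position; it is an illustration only, not the published PMGL pipeline, and the function name, k-mer length, and toy sequences are invented for the example.

```python
from collections import Counter

def perfect_match_landscape(reference, reads, k=31):
    """Toy perfect-match landscape: for each reference position, count how many
    read-derived k-mers exactly match the reference k-mer starting there.
    Dips toward zero flag candidate signatures of variation."""
    # Tally every k-mer observed in the query reads.
    read_kmers = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            read_kmers[r[i:i + k]] += 1
    # Landscape value at position i = support for the reference k-mer at i.
    return [read_kmers[reference[i:i + k]]
            for i in range(len(reference) - k + 1)]

# Example: a single-nucleotide variant in the query depresses the landscape
# in a window of width k around the variant position.
ref = "ACGTACGTTGCAACGGTTACCGTA" * 3
qry = ref[:30] + "A" + ref[31:]          # introduce one mismatch
reads = [qry[i:i + 12] for i in range(0, len(qry) - 12)]
print(perfect_match_landscape(ref, reads, k=12)[:40])
```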

  1. A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region

    PubMed Central

    MacDonald, Christopher J.; Tiganj, Zoran; Shankar, Karthik H.; Du, Qian; Hasselmo, Michael E.; Eichenbaum, Howard

    2014-01-01

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1. PMID:24672015
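
    The core computation of this framework is compact enough to sketch numerically. Below is a toy illustration under my own simplifying assumptions, not the authors' implementation: a bank of leaky integrators dF/dt = -s F + f(t) is driven by an input signal, so at time T each unit approximates the Laplace transform of the input history evaluated at its decay rate s.

```python
import numpy as np

def leaky_integrator_bank(signal, s_values, dt=0.01):
    """Euler-integrate dF/dt = -s*F + f(t) for a bank of decay rates s.
    After the last step, F[j] approximates the Laplace transform of the
    input history evaluated at s_values[j]."""
    F = np.zeros(len(s_values))
    for f_t in signal:
        F += dt * (-s_values * F + f_t)
    return F

# A brief pulse presented roughly 2 s in the past, observed after 5 s in total.
dt = 0.01
t = np.arange(0, 5, dt)
signal = ((t > 2.9) & (t < 3.1)).astype(float)     # pulse from t = 2.9 to 3.1
s_values = np.array([0.5, 1.0, 2.0, 4.0])
F = leaky_integrator_bank(signal, s_values, dt)

# Compare with the direct integral  ∫ f(t') exp(-s (T - t')) dt'.
T = t[-1]
direct = np.array([np.sum(signal * np.exp(-s * (T - t))) * dt for s in s_values])
print(F, direct)   # the two agree up to Euler discretization error
```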

  2. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    ERIC Educational Resources Information Center

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  3. Renormalization group, normal form theory and the Ising model

    NASA Astrophysics Data System (ADS)

    Raju, Archishman; Hayden, Lorien; Clement, Colin; Liarte, Danilo; Sethna, James

    The results of the renormalization group are commonly advertised as the existence of power law singularities at critical points. Logarithmic and exponential corrections are seen as special cases and dealt with on a case-by-case basis. We propose to systematize computing the singularities in the renormalization group using perturbative normal form theory. This gives us a way to classify all such singularities in a unified framework and to generate a systematic machinery to do scaling collapses. We show that this procedure leads to some new results even in classic cases like the Ising model and has general applicability.

  4. A Unified Approach to Modeling Multidisciplinary Interactions

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Bhatia, Kumar G.

    2000-01-01

    There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n² - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n² - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
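
    As a back-of-the-envelope illustration of the scaling claim (not code from the paper): with a shared geometry hub, each discipline registers one mapping to and one mapping from the common CAD model, so the number of interfaces grows linearly rather than quadratically. The function names below are invented for the example.

```python
def pairwise_interfaces(n):
    """Traditional approach: every discipline exchanges data directly
    with every other discipline."""
    return n * n - n

def hub_interfaces(n):
    """Unified approach: each discipline maps to and from one shared
    CAD-based geometry model."""
    return 2 * n

for n in (3, 5, 10):
    print(n, pairwise_interfaces(n), hub_interfaces(n))
# e.g. n = 10 disciplines: 90 direct interfaces vs. 20 via the shared model
```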

  5. Contextuality supplies the 'magic' for quantum computation.

    PubMed

    Howard, Mark; Wallman, Joel; Veitch, Victor; Emerson, Joseph

    2014-06-19

    Quantum computers promise dramatic advantages over their classical counterparts, but the source of the power in quantum computing has remained elusive. Here we prove a remarkable equivalence between the onset of contextuality and the possibility of universal quantum computation via 'magic state' distillation, which is the leading model for experimentally realizing a fault-tolerant quantum computer. This is a conceptually satisfying link, because contextuality, which precludes a simple 'hidden variable' model of quantum mechanics, provides one of the fundamental characterizations of uniquely quantum phenomena. Furthermore, this connection suggests a unifying paradigm for the resources of quantum information: the non-locality of quantum theory is a particular kind of contextuality, and non-locality is already known to be a critical resource for achieving advantages with quantum communication. In addition to clarifying these fundamental issues, this work advances the resource framework for quantum computation, which has a number of practical applications, such as characterizing the efficiency and trade-offs between distinct theoretical and experimental schemes for achieving robust quantum computation, and putting bounds on the overhead cost for the classical simulation of quantum algorithms.

  6. Heterogeneous compute in computer vision: OpenCL in OpenCV

    NASA Astrophysics Data System (ADS)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.
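
    Since the record describes OpenCV's Transparent API, a minimal usage sketch may help; it assumes OpenCV 3.0+ Python bindings built with OpenCL support, and the image filename is purely illustrative.

```python
import cv2

# Ask OpenCV to use OpenCL when a capable device is present; otherwise the
# same calls silently fall back to the native CPU path.
cv2.ocl.setUseOpenCL(True)
print("OpenCL available:", cv2.ocl.haveOpenCL())

img = cv2.imread("frame.png")            # ordinary numpy-backed Mat
u_img = cv2.UMat(img)                    # UMat routes work through the T-API

# Identical API on either path: the filters run on the OpenCL device if one
# is enabled, on the CPU otherwise.
u_blur = cv2.GaussianBlur(u_img, (7, 7), 1.5)
u_edges = cv2.Canny(u_blur, 50, 150)

edges = u_edges.get()                    # download back to a numpy array
cv2.imwrite("edges.png", edges)
```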

  7. False Discovery Control in Large-Scale Spatial Multiple Testing

    PubMed Central

    Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin

    2014-01-01

    This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods by analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
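
    For readers unfamiliar with false discovery rate control, here is a standard point-wise Benjamini-Hochberg procedure in a few lines. It is not the oracle or Bayesian procedure the article develops, only a conventional baseline of the kind such spatial methods are compared against; the simulated p-values are illustrative.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean rejection mask controlling the FDR at level alpha."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= (k/m) * alpha; everything up to it is rejected.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

rng = np.random.default_rng(0)
p_null = rng.uniform(size=900)                  # true nulls
p_signal = rng.uniform(high=0.001, size=100)    # toy stand-in for clustered signals
mask = benjamini_hochberg(np.concatenate([p_null, p_signal]), alpha=0.05)
print("rejections:", mask.sum())
```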

  8. Biomedical Informatics on the Cloud: A Treasure Hunt for Advancing Cardiovascular Medicine.

    PubMed

    Ping, Peipei; Hermjakob, Henning; Polson, Jennifer S; Benos, Panagiotis V; Wang, Wei

    2018-04-27

    In the digital age of cardiovascular medicine, the rate of biomedical discovery can be greatly accelerated by the guidance and resources required to unearth potential collections of knowledge. A unified computational platform leverages metadata to not only provide direction but also empower researchers to mine a wealth of biomedical information and forge novel mechanistic insights. This review takes the opportunity to present an overview of the cloud-based computational environment, including the functional roles of metadata, the architecture schema of indexing and search, and the practical scenarios of machine learning-supported molecular signature extraction. By introducing several established resources and state-of-the-art workflows, we share with our readers a broadly defined informatics framework to phenotype cardiovascular health and disease. © 2018 American Heart Association, Inc.

  9. Bridging online and offline social networks: Multiplex analysis

    NASA Astrophysics Data System (ADS)

    Filiposka, Sonja; Gajduk, Andrej; Dimitrova, Tamara; Kocarev, Ljupco

    2017-04-01

    We show that three basic actor characteristics, namely normalized reciprocity, three cycles, and triplets, can be expressed using a unified framework that is based on computing the similarity index between two sets associated with the actor: the set of her/his friends and the set of those considering her/him as a friend. These metrics are extended to multiplex networks and then computed for two friendship networks generated by collecting data from two groups of undergraduate students. We found that in offline communication strong and weak ties are (almost) equally present, while in online communication weak ties are dominant. Moreover, weak ties are much less reciprocal than strong ties. However, across different layers of the multiplex network reciprocities are preserved, while triads (measured with normalized three cycles and triplets) are not significant.
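
    The actor-level metric is stated abstractly above; a minimal sketch of the idea, assuming the Jaccard index as the similarity measure (the paper's exact normalization may differ), compares the set of accounts an actor names as friends with the set of accounts naming that actor as a friend. The names are invented for the example.

```python
def normalized_reciprocity(out_friends, in_friends):
    """Similarity (here: Jaccard index) between the set of people an actor
    calls friends and the set of people who call the actor a friend."""
    out_friends, in_friends = set(out_friends), set(in_friends)
    union = out_friends | in_friends
    if not union:
        return 0.0
    return len(out_friends & in_friends) / len(union)

# Toy multiplex example: the same actor on an offline and an online layer.
offline = normalized_reciprocity({"ana", "boris", "cvetan"},
                                 {"ana", "boris", "dana"})
online = normalized_reciprocity({"ana", "boris", "cvetan", "dana", "emil"},
                                {"ana"})
print(offline, online)   # the strong-tie layer is more reciprocal than the weak-tie layer
```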

  10. With or without you: predictive coding and Bayesian inference in the brain

    PubMed Central

    Aitchison, Laurence; Lengyel, Máté

    2018-01-01

    Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic / representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084
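
    To make the "algorithmic motif" concrete, here is a minimal single-layer predictive-coding sketch under a linear Gaussian generative model; the parameterization is my own toy construction, not taken from the review. The latent estimate is driven by precision-weighted prediction errors until it settles at the Bayesian posterior mean for the same model.

```python
import numpy as np

def predictive_coding_estimate(x, g=1.0, prior_mu=0.0,
                               sigma_x=1.0, sigma_mu=1.0,
                               lr=0.05, steps=2000):
    """Gradient descent on the Gaussian free energy
    F(mu) = (x - g*mu)^2 / (2*sigma_x^2) + (mu - prior_mu)^2 / (2*sigma_mu^2),
    driven by two prediction errors (sensory and prior)."""
    mu = prior_mu
    for _ in range(steps):
        eps_x = (x - g * mu) / sigma_x**2        # sensory prediction error
        eps_mu = (mu - prior_mu) / sigma_mu**2   # deviation from the prior
        mu += lr * (g * eps_x - eps_mu)
    return mu

x = 2.0
mu_pc = predictive_coding_estimate(x)
# Exact Bayesian posterior mean for the same linear Gaussian model:
mu_bayes = (x / 1.0**2 + 0.0 / 1.0**2) / (1 / 1.0**2 + 1 / 1.0**2)
print(mu_pc, mu_bayes)   # both approach 1.0
```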

  11. What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?

    ERIC Educational Resources Information Center

    Cushion, Steve

    2006-01-01

    We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…

  12. Structured sparse linear graph embedding.

    PubMed

    Wang, Haixian

    2012-03-01

    Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the projection bases learning into a regression-type optimization problem, and then the structured sparsity regularization is applied to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structure information of the data. The SSLGE technique provides a unified framework for discovering structured sparse subspace. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.
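
    To give a rough sense of how a structured sparsity-inducing norm acts on regression coefficients, here is a generic group-lasso proximal-gradient sketch. It is not the SSLGE algorithm itself (the paper's closed-form updates use a variational equality and the Procrustes transformation instead), and the data, group structure, and function names are made up for the example.

```python
import numpy as np

def group_soft_threshold(w, groups, tau):
    """Proximal step for the group-lasso penalty: shrink each predefined
    group of coefficients toward zero as a block, zeroing whole groups."""
    w = w.copy()
    for idx in groups:
        norm = np.linalg.norm(w[idx])
        w[idx] = 0.0 if norm <= tau else (1 - tau / norm) * w[idx]
    return w

def structured_sparse_regression(X, y, groups, lam=2.0, iters=500):
    """Proximal gradient descent on  ||y - X w||^2 / 2  +  lam * sum_g ||w_g||_2."""
    lr = 1.0 / np.linalg.norm(X, ord=2) ** 2      # step size from the Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = group_soft_threshold(w - lr * grad, groups, lr * lam)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
true_w = np.array([1.5, -2.0, 0.8, 0.0, 0.0, 0.0])   # only the first group matters
y = X @ true_w + 0.1 * rng.normal(size=100)
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]  # a priori structure
print(structured_sparse_regression(X, y, groups, lam=2.0))
```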

  13. Analysis of Flame Deflector Spray Nozzles in Rocket Engine Test Stands

    NASA Technical Reports Server (NTRS)

    Sachdev, Jai S.; Ahuja, Vineet; Hosangadi, Ashvin; Allgood, Daniel C.

    2010-01-01

    The development of a unified tightly coupled multi-phase computational framework is described for the analysis and design of cooling spray nozzle configurations on the flame deflector in rocket engine test stands. An Eulerian formulation is used to model the disperse phase and is coupled to the gas-phase equations through momentum and heat transfer as well as phase change. The phase change formulation is modeled according to a modified form of the Hertz-Knudsen equation. Various simple test cases are presented to verify the validity of the numerical framework. The ability of the methodology to accurately predict the temperature load on the flame deflector is demonstrated through application to an actual sub-scale test facility. The CFD simulation was able to reproduce the result of the test-firing, showing that the spray nozzle configuration provided an insufficient amount of cooling.
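
    For reference, one commonly quoted form of the Hertz-Knudsen evaporation/condensation flux underlying such phase-change models is shown below; the specific modified form used in this work is not given in the abstract, so this is only a standard textbook statement.

```latex
% Net mass flux across the liquid-vapor interface (one standard form):
%   alpha_e, alpha_c : evaporation / condensation coefficients
%   M, R             : molar mass and universal gas constant
%   p_sat(T_s)       : saturation pressure at the surface temperature T_s
%   p_v, T_v         : partial pressure and temperature of the vapor
\dot{m}'' \;=\; \sqrt{\frac{M}{2\pi R}}
   \left( \alpha_e \,\frac{p_{\mathrm{sat}}(T_s)}{\sqrt{T_s}}
        - \alpha_c \,\frac{p_v}{\sqrt{T_v}} \right)
```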

  14. From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation

    NASA Astrophysics Data System (ADS)

    Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj

    2007-09-01

    In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from rare 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performance of the proposed IDCTs along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4 is also demonstrated.
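
    To illustrate what a "cascade of dyadic lifting steps" means in fixed-point arithmetic, here is a generic two-step example with made-up constants, not the paper's Loeffler-based factorization: each step adds a shifted integer multiple of one value to the other, and is exactly undone by subtracting the same rounded quantity, which is why such integer transforms are reversible and drift-free.

```python
def lift_forward(a, b, c=3, d=5, shift=4):
    """Two dyadic lifting steps: multiplier / 2**shift approximates an
    irrational coefficient using only integer adds and shifts."""
    a = a + ((b * c) >> shift)
    b = b - ((a * d) >> shift)
    return a, b

def lift_inverse(a, b, c=3, d=5, shift=4):
    """Undo the steps in reverse order with the same rounded quantities,
    giving perfect reconstruction despite the rounding."""
    b = b + ((a * d) >> shift)
    a = a - ((b * c) >> shift)
    return a, b

for pair in [(100, -37), (7, 255), (-128, 64)]:
    assert lift_inverse(*lift_forward(*pair)) == pair
print("lifting steps are exactly reversible")
```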

  15. A Unified Approach to Model-Based Planning and Execution

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)

    2000-01-01

    Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.

  16. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model.

    PubMed

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, titled the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse, and then impacting on personal resilience and workplace outcomes, and its use has the potential to increase staff retention and quality of patient care.

  17. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit that intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof of concept implementation has demonstrated the viability of this approach on high end machines, grid systems and computing clouds.

  18. A unified momentum equation approach for computing thermal residual stresses during melting and solidification

    NASA Astrophysics Data System (ADS)

    Yeo, Haram; Ki, Hyungson

    2018-03-01

    In this article, we present a novel numerical method for computing thermal residual stresses from a viewpoint of fluid-structure interaction (FSI). In a thermal processing of a material, residual stresses are developed as the material undergoes melting and solidification, and liquid, solid, and a mixture of liquid and solid (or mushy state) coexist and interact with each other during the process. In order to accurately account for the stress development during phase changes, we derived a unified momentum equation from the momentum equations of incompressible fluids and elastoplastic solids. In this approach, the whole fluid-structure system is treated as a single continuum, and the interaction between fluid and solid phases across the mushy zone is naturally taken into account in a monolithic way. For thermal analysis, an enthalpy-based method was employed. As a numerical example, a two-dimensional laser heating problem was considered, where a carbon steel sheet was heated by a Gaussian laser beam. Momentum and energy equations were discretized on a uniform Cartesian grid in a finite volume framework, and temperature-dependent material properties were used. The austenite-martensite phase transformation of carbon steel was also considered. In this study, the effects of solid strains, fluid flow, mushy zone size, and laser heating time on residual stress formation were investigated.

  19. Conceptualising paediatric health disparities: a metanarrative systematic review and unified conceptual framework.

    PubMed

    Ridgeway, Jennifer L; Wang, Zhen; Finney Rutten, Lila J; van Ryn, Michelle; Griffin, Joan M; Murad, M Hassan; Asiedu, Gladys B; Egginton, Jason S; Beebe, Timothy J

    2017-08-04

    There exists a paucity of work in the development and testing of theoretical models specific to childhood health disparities even though they have been linked to the prevalence of adult health disparities including high rates of chronic disease. We conducted a systematic review and thematic analysis of existing models of health disparities specific to children to inform development of a unified conceptual framework. We systematically reviewed articles reporting theoretical or explanatory models of disparities on a range of outcomes related to child health. We searched Ovid Medline In-Process & Other Non-Indexed Citations, Ovid MEDLINE, Ovid Embase, Ovid Cochrane Central Register of Controlled Trials, Ovid Cochrane Database of Systematic Reviews, and Scopus (database inception to 9 July 2015). A metanarrative approach guided the analysis process. A total of 48 studies presenting 48 models were included. This systematic review found multiple models but no consensus on one approach. However, we did discover a fair amount of overlap, such that the 48 models reviewed converged into the unified conceptual framework. The majority of models included factors in three domains: individual characteristics and behaviours (88%), healthcare providers and systems (63%), and environment/community (56%). Only 38% of models included factors in the health and public policies domain. A disease-agnostic unified conceptual framework may inform integration of existing knowledge of child health disparities and guide future research. This multilevel framework can focus attention among clinical, basic and social science research on the relationships between policy, social factors, health systems and the physical environment that impact children's health outcomes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. A Unified Probabilistic Framework for Dose-Response Assessment of Human Health Effects.

    PubMed

    Chiu, Weihsueh A; Slob, Wout

    2015-12-01

    When chemical health hazards have been identified, probabilistic dose-response assessment ("hazard characterization") quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. We developed a unified framework for probabilistic dose-response assessment. We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose-response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, "effect metrics" can be specified to define "toxicologically equivalent" sizes for this underlying individual response; and d) dose-response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose-response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Probabilistically derived exposure limits are based on estimating a "target human dose" (HD_M^I), which requires risk management-informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HD_M^I values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%-10% effect sizes. Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions.

  1. Consolidation and development roadmap of the EMI middleware

    NASA Astrophysics Data System (ADS)

    Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.

    2012-12-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.

  2. A unified framework for approximation in inverse problems for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1988-01-01

    A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.

  3. GFDL's unified regional-global weather-climate modeling system with variable resolution capability for severe weather predictions and regional climate simulations

    NASA Astrophysics Data System (ADS)

    Lin, S. J.

    2015-12-01

    The NOAA/Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable resolution capabilities that can be used for severe weather predictions (e.g., tornado outbreak events and cat-5 hurricanes) and ultra-high-resolution (1-km) regional climate simulations within a consistent global modeling framework. The foundation of this flexible regional-global modeling system is the non-hydrostatic extension of the vertically Lagrangian dynamical core (Lin 2004, Monthly Weather Review) known in the community as FV3 (finite-volume on the cubed-sphere). Because of its flexibility and computational efficiency, the FV3 is one of the final candidates for NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched (single) grid capability, a two-way (regional-global) multiple nested grid capability, and the combination of the stretched and two-way nests, so as to make convection-resolving regional climate simulation within a consistent global modeling system feasible using today's high-performance computing systems. One of our main scientific goals is to enable simulations of high impact weather phenomena (such as tornadoes, thunderstorms, category-5 hurricanes) within an IPCC-class climate modeling system previously regarded as impossible. In this presentation I will demonstrate that it is computationally feasible to simulate not only super-cell thunderstorms, but also the subsequent genesis of tornadoes using a global model that was originally designed for century-long climate simulations. As a unified weather-climate modeling system, we evaluated the performance of the model with horizontal resolution ranging from 1 km to as low as 200 km. In particular, for downscaling studies, we have developed various tests to ensure that the large-scale circulation within the global variable resolution system is well simulated while at the same time the small scales can be accurately captured within the targeted high resolution region.

  4. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  5. In Search of a Unified Model of Language Contact

    ERIC Educational Resources Information Center

    Winford, Donald

    2013-01-01

    Much previous research has pointed to the need for a unified framework for language contact phenomena -- one that would include social factors and motivations, structural factors and linguistic constraints, and psycholinguistic factors involved in processes of language processing and production. While Contact Linguistics has devoted a great deal…

  6. A Perfect Match Genomic Landscape Provides a Unified Framework for the Precise Detection of Variation in Natural and Synthetic Haploid Genomes.

    PubMed

    Palacios-Flores, Kim; García-Sotelo, Jair; Castillo, Alejandra; Uribe, Carina; Aguilar, Luis; Morales, Lucía; Gómez-Romero, Laura; Reyes, José; Garciarubio, Alejandro; Boege, Margareta; Dávila, Guillermo

    2018-04-01

    We present a conceptually simple, sensitive, precise, and essentially nonstatistical solution for the analysis of genome variation in haploid organisms. The generation of a Perfect Match Genomic Landscape (PMGL), which computes intergenome identity with single nucleotide resolution, reveals signatures of variation wherever a query genome differs from a reference genome. Such signatures encode the precise location of different types of variants, including single nucleotide variants, deletions, insertions, and amplifications, effectively introducing the concept of a general signature of variation. The precise nature of variants is then resolved through the generation of targeted alignments between specific sets of sequence reads and known regions of the reference genome. Thus, the perfect match logic decouples the identification of the location of variants from the characterization of their nature, providing a unified framework for the detection of genome variation. We assessed the performance of the PMGL strategy via simulation experiments. We determined the variation profiles of natural genomes and of a synthetic chromosome, both in the context of haploid yeast strains. Our approach uncovered variants that have previously escaped detection. Moreover, our strategy is ideally suited for further refining high-quality reference genomes. The source codes for the automated PMGL pipeline have been deposited in a public repository. Copyright © 2018 by the Genetics Society of America.

  7. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, longwave and shortwave radiative transfer, land processes, and the explicit cloud-radiation and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.

  8. Aspects, Wrappers and Events

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.

    2003-01-01

    This viewgraph presentation provides information on Object Infrastructure Framework (OIF), an Aspect-Oriented Programming (AOP) system. The presentation begins with an introduction to the difficulties and requirements of distributed computing, including functional and non-functional requirements (ilities). The architecture of Distributed Object Technology includes stubs, proxies for implementation objects, and skeletons, proxies for client applications. The key OIF ideas (injecting behavior, annotated communications, thread contexts, and pragma) are discussed. OIF is an AOP mechanism; AOP is centered on: 1) Separate expression of crosscutting concerns; 2) Mechanisms to weave the separate expressions into a unified system. AOP is software engineering technology for separately expressing systematic properties while nevertheless producing running systems that embody these properties.
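
    The "injecting behavior" idea is easy to sketch in ordinary Python: a crosscutting concern (here, tracing of proxy calls) is expressed once and woven onto stub methods, instead of being scattered through every module. This is an illustration of the general AOP concept only, not the OIF mechanism itself, and all names below are invented for the example.

```python
import functools, time

def with_call_logging(func):
    """A crosscutting 'ility' (here, tracing) expressed once and woven onto
    any proxy method, rather than hand-coded in every module."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{func.__name__} args={args[1:]} took {elapsed_ms:.2f} ms")
        return result
    return wrapper

class SensorProxy:
    """Stand-in for a client-side stub talking to a remote implementation object."""
    @with_call_logging
    def read_temperature(self, sensor_id):
        return {"sensor": sensor_id, "value": 21.5}   # pretend remote call

print(SensorProxy().read_temperature("lab-42"))
```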

  9. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  10. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    PubMed

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  11. High-Performance I/O: HDF5 for Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
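
    The abstract's remaining tuning knob is dataset chunking. The following minimal h5py sketch (not the USQCD/SciDAC implementation; file name, lattice shape, and attributes are illustrative assumptions) shows how a lattice-sized dataset can be chunked along one axis so that slices are read and written independently; truly parallel HDF5 additionally requires an MPI-enabled h5py build opened with the "mpio" driver.

```python
# Minimal h5py sketch of chunked dataset layout (illustrative, not the USQCD code).
import numpy as np
import h5py

lattice_shape = (32, 16, 16, 16, 3, 3, 2)   # (t, z, y, x, color, color, re/im) -- toy sizes

with h5py.File("gauge_field.h5", "w") as f:
    dset = f.create_dataset(
        "configurations/cfg_0001",
        shape=lattice_shape,
        dtype="f8",
        chunks=(1,) + lattice_shape[1:],    # one chunk per time slice
        compression="gzip",
    )
    for t in range(lattice_shape[0]):
        dset[t] = np.random.rand(*lattice_shape[1:])   # stand-in for real field data
    dset.attrs["beta"] = 6.0                # metadata travels with the data

with h5py.File("gauge_field.h5", "r") as f:
    slice_7 = f["configurations/cfg_0001"][7]          # reads exactly one chunk
    print(slice_7.shape)
```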

  12. Programming chemistry in DNA-addressable bioreactors

    PubMed Central

    Fellermann, Harold; Cardelli, Luca

    2014-01-01

    We present a formal calculus, termed the chemtainer calculus, able to capture the complexity of compartmentalized reaction systems such as populations of possibly nested vesicular compartments. Compartments contain molecular cargo as well as surface markers in the form of DNA single strands. These markers serve as compartment addresses and allow for their targeted transport and fusion, thereby enabling reactions of previously separated chemicals. The overall system organization allows for the set-up of programmable chemistry in microfluidic or other automated environments. We introduce a simple sequential programming language whose instructions are motivated by state-of-the-art microfluidic technology. Our approach integrates electronic control, chemical computing and material production in a unified formal framework that is able to mimic the integrated computational and constructive capabilities of the subcellular matrix. We provide a non-deterministic semantics of our programming language that enables us to analytically derive the computational and constructive power of our machinery. This semantics is used to derive the sets of all constructable chemicals and supermolecular structures that emerge from different underlying instruction sets. Because our proofs are constructive, they can be used to automatically infer control programs for the construction of target structures from a limited set of resource molecules. Finally, we present an example of our framework from the area of oligosaccharide synthesis. PMID:25121647
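
    A very small toy interpretation of the idea (not the formal chemtainer calculus or its semantics; all names are ours) is sketched below: compartments carry DNA-strand address tags, and a fuse step merges compartments with complementary tags so that their cargo can finally react.

```python
# Toy sketch of DNA-addressed compartments and a "fuse" instruction (illustrative only).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def complementary(a, b):
    """True if strand b is the reverse complement of strand a."""
    return a.translate(COMPLEMENT)[::-1] == b

class Compartment:
    def __init__(self, tag, cargo):
        self.tag, self.cargo = tag, set(cargo)

def fuse_step(system):
    """One program instruction: fuse the first pair of address-matched compartments."""
    for i, ca in enumerate(system):
        for cb in system[i + 1:]:
            if complementary(ca.tag, cb.tag):
                ca.cargo |= cb.cargo          # merged cargo can now react
                system.remove(cb)
                return system
    return system

system = [Compartment("AACC", {"glucose"}), Compartment("GGTT", {"hexokinase"})]
fuse_step(system)
print(system[0].cargo)                        # {'glucose', 'hexokinase'}
```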

  13. Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-11-01

    In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in the smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as a function of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.
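
    The dispersion/dissipation split that the unified framework optimizes can be made concrete with the standard modified-wavenumber diagnostic. The sketch below (a generic illustration using our own choice of classical stencils, not the TENO8 construction) evaluates the real part (dispersion) and imaginary part (dissipation) of the modified wavenumber of two first-derivative stencils.

```python
# Modified-wavenumber diagnostic: for a linear stencil u'_j ~ (1/h) * sum_m c_m u_{j+m},
# the modified wavenumber is  k'h = -i * sum_m c_m * exp(i*m*kh);
# its real part measures dispersion error, its imaginary part dissipation.
import numpy as np

def modified_wavenumber(coeffs, kh):
    """coeffs: dict mapping stencil offset -> coefficient of a first-derivative stencil."""
    return sum(-1j * c * np.exp(1j * m * kh) for m, c in coeffs.items())

central4 = {-2: 1/12, -1: -8/12, 1: 8/12, 2: -1/12}   # 4th-order central: dispersive only
upwind1  = {-1: -1.0, 0: 1.0}                         # 1st-order upwind: dispersive and dissipative

kh = np.linspace(1e-3, np.pi, 200)
for name, c in [("central-4", central4), ("upwind-1", upwind1)]:
    kmod = modified_wavenumber(c, kh)
    print(f"{name:9s}  max dispersion err = {np.max(np.abs(kmod.real - kh)):.3f}"
          f"  max |dissipation| = {np.max(np.abs(kmod.imag)):.3f}")
```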

  14. Bayesian Group Bridge for Bi-level Variable Selection.

    PubMed

    Mallick, Himel; Yi, Nengjun

    2017-06-01

    A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.

  15. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model

    PubMed Central

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S.; Breen, Lauren J.; Witt, Regina R.; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework, titled the Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse, and then impacting on personal resilience and workplace outcomes; its use has the potential to increase staff retention and quality of patient care. PMID:27242567

  16. The neurosciences and the search for a unified psychology: the science and esthetics of a single framework

    PubMed Central

    Stam, Henderikus J.

    2015-01-01

    The search for a so-called unified or integrated theory has long served as a goal for some psychologists, even if the search is often implicit. But if the established sciences do not have an explicitly unified set of theories, then why should psychology? After examining this question again I argue that psychology is in fact reasonably unified around its methods and its commitment to functional explanations, an indeterminate functionalism. The question of the place of the neurosciences in this framework is complex. On the one hand, the neuroscientific project will not likely renew and synthesize the disparate arms of psychology. On the other hand, their reformulation of what it means to be human will exert an influence in multiple ways. One way to capture that influence is to conceptualize the brain in terms of a technology that we interact with in a manner that we do not yet fully understand. In this way we maintain both a distance from neuro-reductionism and refrain from committing to an unfettered subjectivity. PMID:26500571

  17. Efficient construction of unified continuous and discontinuous Galerkin formulations for the 3D Euler equations

    NASA Astrophysics Data System (ADS)

    Abdi, Daniel S.; Giraldo, Francis X.

    2016-09-01

    A unified approach for the numerical solution of the 3D hyperbolic Euler equations using high order methods, namely continuous Galerkin (CG) and discontinuous Galerkin (DG) methods, is presented. First, we examine how classical CG that uses a global storage scheme can be constructed within the DG framework using constraint imposition techniques commonly used in the finite element literature. Then, we implement and test a simplified version in the Non-hydrostatic Unified Model of the Atmosphere (NUMA) for the case of explicit time integration and a diagonal mass matrix. Constructing CG within the DG framework allows CG to benefit from the desirable properties of DG such as easier hp-refinement and better stability. Moreover, this representation allows for regional mixing of CG and DG depending on the flow regime in an area. The different flavors of CG and DG in the unified implementation are then tested for accuracy and performance using a suite of benchmark problems representative of cloud-resolving scale, meso-scale and global-scale atmospheric dynamics. The value of our unified approach is that we are able to show how to carry both CG and DG methods within the same code and also offer a simple recipe for modifying an existing CG code to DG and vice versa.

  18. U.S. History Framework for the 2010 National Assessment of Educational Progress

    ERIC Educational Resources Information Center

    National Assessment Governing Board, 2009

    2009-01-01

    This framework identifies the main ideas, major events, key individuals, and unifying themes of American history as a basis for preparing the 2010 assessment. The framework recognizes that U.S. history includes powerful ideas, common and diverse traditions, economic developments, technological and scientific innovations, philosophical debates,…

  19. Applying Laban's Movement Framework in Elementary Physical Education

    ERIC Educational Resources Information Center

    Langton, Terence W.

    2007-01-01

    This article recommends raising the bar in elementary physical education by using Laban's movement framework to develop curriculum content in the areas of games, gymnastics, and dance (with physical fitness concepts blended in) in order to help students achieve the NASPE content standards. The movement framework can permeate and unify an…

  20. UNITY: Confronting Supernova Cosmology's Statistical and Systematic Uncertainties in a Unified Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The

    2015-11-01

    While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.

  1. Sheldon Glashow, the Electroweak Theory, and the Grand Unified Theory

    Science.gov Websites

    Glashow shared the 1979 Nobel Prize for Physics with Steven Weinberg and Abdus Salam for unifying two of the fundamental interactions of particle physics. The resulting theory "provides a framework for understanding how the early universe evolved and how our universe came into being," says Lawrence R. Sulak, chairman of the Boston University physics department.

  2. "UNICERT," or: Towards the Development of a Unified Language Certificate for German Universities.

    ERIC Educational Resources Information Center

    Voss, Bernd

    The standardization of second language proficiency levels for university students in Germany is discussed. Problems with the current system, in which each university has developed its own program of study and proficiency certification, are examined and a framework for development of a unified language certificate for all universities is outlined.…

  3. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    PubMed

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exascale data dimensions. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
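
    A quick numerical illustration of the sample-starved regime emphasized above (our own toy experiment, not the authors' analysis): with the sample size n held fixed, the largest spurious sample correlation among completely independent variables grows with p, so naive thresholding at a fixed level eventually reports false discoveries.

```python
# Fixed-n, growing-p illustration: spurious correlations among independent variables.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # samples (statistical replicates), held fixed
for p in (10, 100, 1000, 2000):           # number of observed variables
    X = rng.standard_normal((n, p))
    R = np.corrcoef(X, rowvar=False)      # p x p sample correlation matrix
    off_diag = np.abs(R[np.triu_indices(p, k=1)])
    print(f"p={p:5d}  max |corr| = {off_diag.max():.3f}  "
          f"fraction above 0.5 = {(off_diag > 0.5).mean():.4f}")
```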

  4. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science, and Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.

  5. A versatile model for soft patchy particles with various patch arrangements.

    PubMed

    Li, Zhan-Wei; Zhu, You-Liang; Lu, Zhong-Yuan; Sun, Zhao-Yan

    2016-01-21

    We propose a simple and general mesoscale soft patchy particle model, which can felicitously describe the deformable and surface-anisotropic characteristics of soft patchy particles. This model can be used in dynamics simulations to investigate the aggregation behavior and mechanism of various types of soft patchy particles with tunable number, size, direction, and geometrical arrangement of the patches. To improve the computational efficiency of this mesoscale model in dynamics simulations, we give the simulation algorithm that fits the compute unified device architecture (CUDA) framework of NVIDIA graphics processing units (GPUs). The validation of the model and the performance of the simulations using GPUs are demonstrated by simulating several benchmark systems of soft patchy particles with 1 to 4 patches in a regular geometrical arrangement. Because of its simplicity and computational efficiency, the soft patchy particle model will provide a powerful tool to investigate the aggregation behavior of soft patchy particles, such as patchy micelles, patchy microgels, and patchy dendrimers, over larger spatial and temporal scales.

  6. A unified theoretical framework for mapping models for the multi-state Hamiltonian.

    PubMed

    Liu, Jian

    2016-11-28

    We propose a new unified theoretical framework to construct equivalent representations of the multi-state Hamiltonian operator and present several approaches for the mapping onto the Cartesian phase space. After mapping an F-dimensional Hamiltonian onto an F+1 dimensional space, creation and annihilation operators are defined such that the F+1 dimensional space is complete for any combined excitation. Commutation and anti-commutation relations are then naturally derived, which show that the underlying degrees of freedom are neither bosons nor fermions. This sets the scene for developing equivalent expressions of the Hamiltonian operator in quantum mechanics and their classical/semiclassical counterparts. Six mapping models are presented as examples. The framework also offers a novel way to derive mapping models such as the well-known Meyer-Miller model.
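
    For reference, the Meyer-Miller mapping Hamiltonian mentioned above is commonly written as follows (standard literature form; the unified framework recovers it as one of its mapping models):

```latex
H_{\mathrm{MM}}(\mathbf{R},\mathbf{P},\mathbf{x},\mathbf{p})
  = \frac{\mathbf{P}^{2}}{2M}
  + \sum_{n=1}^{F}\sum_{m=1}^{F}
      \tfrac{1}{2}\left(x_{n}x_{m}+p_{n}p_{m}-\gamma\,\delta_{nm}\right) H_{nm}(\mathbf{R}),
```

    where (x_n, p_n) are the Cartesian mapping variables of electronic state n and γ is the zero-point-energy parameter (γ = 1 in the original Meyer-Miller form).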

  7. P21 Framework Definitions

    ERIC Educational Resources Information Center

    Partnership for 21st Century Skills, 2009

    2009-01-01

    To help practitioners integrate skills into the teaching of core academic subjects, the Partnership for 21st Century Skills has developed a unified, collective vision for learning known as the Framework for 21st Century Learning. This Framework describes the skills, knowledge and expertise students must master to succeed in work and life; it is a…

  8. Toward a Unified Validation Framework in Mixed Methods Research

    ERIC Educational Resources Information Center

    Dellinger, Amy B.; Leech, Nancy L.

    2007-01-01

    The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Yinan; Shi Handuo; Xiong Zhaoxi

    We present a unified universal quantum cloning machine, which combines several different existing universal cloning machines together, including the asymmetric case. In this unified framework, the identical pure states are projected equally into each copy initially constituted by input and one half of the maximally entangled states. We show explicitly that the output states of those universal cloning machines are the same. One important feature of this unified cloning machine is that the cloning process is always the symmetric projection, which reduces dramatically the difficulties for implementation. Also, it is found that this unified cloning machine can be directly modified to the general asymmetric case. Besides the global fidelity and the single-copy fidelity, we also present all possible arbitrary-copy fidelities.

  10. [Arabian food pyramid: unified framework for nutritional health messages].

    PubMed

    Shokr, Adel M

    2008-01-01

    There are several ways to present nutritional health messages, particularly pyramidic indices, but they have many deficiencies such as lack of agreement on a unified or clear methodology for food grouping and ignoring nutritional group inter-relation and integration. This causes confusion for health educators and target individuals. This paper presents an Arabian food pyramid that aims to unify the bases of nutritional health messages, bringing together the function, contents, source and nutritional group servings and indicating the inter-relation and integration of nutritional groups. This provides comprehensive, integrated, simple and flexible health messages.

  11. The Unified English Braille Code: Examination by Science, Mathematics, and Computer Science Technical Expert Braille Readers

    ERIC Educational Resources Information Center

    Holbrook, M. Cay; MacCuspie, P. Ann

    2010-01-01

    Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…

  12. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions

    PubMed Central

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which allow complex statistical distributions involving a large number of interacting variables to be described efficiently. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262
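
    As a compact, self-contained example of the stochastic generative networks discussed here, the sketch below trains a generic Bernoulli restricted Boltzmann machine with one-step contrastive divergence; the sizes and toy data are our own assumptions, not the authors' specific cognitive models.

```python
# Minimal Bernoulli RBM trained with 1-step contrastive divergence (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

# Tiny "dataset": two prototype binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(200):
    v0 = data
    ph0 = sigmoid(v0 @ W + b_h)                       # P(h=1 | v), positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # stochastic hidden sample
    pv1 = sigmoid(h0 @ W.T + b_v)                     # reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)                      # negative phase
    W   += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(data)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

print("reconstruction error:", np.mean((data - pv1) ** 2))
```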

  13. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  14. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions.

    PubMed

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which allow complex statistical distributions involving a large number of interacting variables to be described efficiently. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage.

  15. "Machine" consciousness and "artificial" thought: an operational architectonics model guided approach.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H

    2012-01-05

    Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal

    Modern hardware contains parallel execution resources that are well suited for data parallelism (vector units) and for task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task- and data-parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently on multiple cores. Our framework allows us to define schedulers that can dynamically select between executing task blocks on vector units or multicores. We show that these schedulers are asymptotically optimal, and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel programs into task block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.

  17. Attention in a Bayesian Framework

    PubMed Central

    Whiteley, Louise; Sahani, Maneesh

    2012-01-01

    The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of perception, and use this observation to frame a new computational account of the need for, and action of, attention – unifying diverse attentional phenomena in a way that goes beyond previous inferential, probabilistic and Bayesian models. Attentional effects are most evident in cluttered environments, and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental settings, where cues shape expectations about a small number of upcoming stimuli and thus convey “prior” information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its selective and integrative roles, and thus cannot be easily extended to complex environments. We suggest that the resource bottleneck stems from the computational intractability of exact perceptual inference in complex settings, and that attention reflects an evolved mechanism for approximate inference which can be shaped to refine the local accuracy of perception. We show that this approach extends the simple picture of attention as prior, so as to provide a unified and computationally driven account of both selective and integrative attentional phenomena. PMID:22712010

  18. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing.

    PubMed

    Sillin, Henry O; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V; Aono, Masakazu; Stieg, Adam Z; Gimzewski, James K

    2013-09-27

    Atomic switch networks (ASNs) have been shown to generate network level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results highlighting the possibility to functionalize the network plasticity and the differences between an atomic switch in isolation and its behaviors in a network. The effects of changing connectivity density on the nonlinear dynamics were examined as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.
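
    A purely software analogue of this setup is sketched below, with a random recurrent network standing in for the physical atomic-switch reservoir and only a linear readout being trained; the reservoir size, input gain, and the one-step prediction task are our illustrative assumptions, not the authors' simulations.

```python
# Echo-state-network sketch of the reservoir-computing training loop described above.
import numpy as np

rng = np.random.default_rng(2)
n_res, T = 200, 2000
u = np.sin(0.2 * np.arange(T + 1))             # input signal
target = u[1:]                                 # task: predict the next input sample
u = u[:-1]

W_in = 0.5 * rng.standard_normal(n_res)        # input gain times random input weights
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1 (echo-state property)

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])           # nonlinear reservoir dynamics
    states[t] = x

# Train only the linear readout by ridge regression; the first half is discarded as washout.
S, y = states[T // 2:], target[T // 2:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
print("readout NRMSE:", np.sqrt(np.mean((S @ W_out - y) ** 2)) / np.std(y))
```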

  19. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing

    NASA Astrophysics Data System (ADS)

    Sillin, Henry O.; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V.; Aono, Masakazu; Stieg, Adam Z.; Gimzewski, James K.

    2013-09-01

    Atomic switch networks (ASNs) have been shown to generate network level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results highlighting the possibility to functionalize the network plasticity and the differences between an atomic switch in isolation and its behaviors in a network. The effects of changing connectivity density on the nonlinear dynamics were examined as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.

  20. Enhancing the Resilience of Interdependent Critical Infrastructure Systems Using a Common Computational Framework

    NASA Astrophysics Data System (ADS)

    Little, J. C.; Filz, G. M.

    2016-12-01

    As modern societies become more complex, critical interdependent infrastructure systems become more likely to fail under stress unless they are designed and implemented to be resilient. Hurricane Katrina clearly demonstrated the catastrophic and as yet unpredictable consequences of such failures. Resilient infrastructure systems maintain the flow of goods and services in the face of a broad range of natural and manmade hazards. In this presentation, we illustrate a generic computational framework to facilitate high-level decision-making about how to invest scarce resources most effectively to enhance resilience in coastal protection, transportation, and the economy of a region. Coastal Louisiana, our study area, has experienced the catastrophic effects of several land-falling hurricanes in recent years. In this project, we implement and further refine three process models (a coastal protection model, a transportation model, and an economic model) for the coastal Louisiana region. We upscale essential mechanistic features of the three detailed process models to the systems level and integrate the three reduced-order systems models in a modular fashion. We also evaluate the proposed approach in annual workshops with input from stakeholders. Based on stakeholder inputs, we derive a suite of goals, targets, and indicators for evaluating resilience at the systems level, and assess and enhance resilience using several deterministic scenarios. The unifying framework will be able to accommodate the different spatial and temporal scales that are appropriate for each model. We combine our generic computational framework, which encompasses the entire system of systems, with the targets, and indicators needed to systematically meet our chosen resilience goals. We will start with targets that focus on technical and economic systems, but future work will ensure that targets and indicators are extended to other dimensions of resilience including those in the environmental and social systems. The overall model can be used to optimize decision making in a probabilistic risk-based framework.

  1. High-Performance I/O: HDF5 for Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav

    2015-01-01

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.

  2. High-Performance I/O: HDF5 for Lattice QCD

    DOE PAGES

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav; ...

    2017-05-09

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.

  3. Algorithms for computing the time-corrected instantaneous frequency (reassigned) spectrogram, with applications.

    PubMed

    Fulop, Sean A; Fitz, Kelly

    2006-01-01

    A modification of the spectrogram (log magnitude of the short-time Fourier transform) to more accurately show the instantaneous frequencies of signal components was first proposed in 1976 [Kodera et al., Phys. Earth Planet. Inter. 12, 142-150 (1976)], and has been considered or reinvented a few times since but never widely adopted. This paper presents a unified theoretical picture of this time-frequency analysis method, the time-corrected instantaneous frequency spectrogram, together with detailed implementable algorithms comparing three published techniques for its computation. The new representation is evaluated against the conventional spectrogram for its superior ability to track signal components. The lack of a uniform framework for either mathematics or implementation details which has characterized the disparate literature on the schemes has been remedied here. Fruitful application of the method is shown in the realms of speech phonation analysis, whale song pitch tracking, and additive sound modeling.
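
    The underlying correction can be illustrated with the familiar phase-vocoder estimate of channelized instantaneous frequency (a generic sketch, not the three published algorithms compared in the paper; the signal and STFT parameters are arbitrary): the phase advance between successive STFT frames refines each bin's nominal center frequency.

```python
# Phase-vocoder style instantaneous-frequency correction of an ordinary spectrogram.
import numpy as np
from scipy.signal import stft

fs, hop, nfft = 8000, 64, 512
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * (500 * t + 300 * t ** 2))        # linear chirp starting at 500 Hz

f, tt, Z = stft(x, fs=fs, nperseg=nfft, noverlap=nfft - hop, return_onesided=True)
phase = np.angle(Z)
dphi = np.diff(phase, axis=1)                           # phase advance per hop, per bin
expected = 2 * np.pi * f[:, None] * hop / fs            # advance expected at each bin center
deviation = np.angle(np.exp(1j * (dphi - expected)))    # wrap to (-pi, pi]
f_inst = f[:, None] + deviation * fs / (2 * np.pi * hop)

k = np.argmax(np.abs(Z[:, 1:]), axis=0)                 # strongest bin per frame
print(np.round(f_inst[k, np.arange(len(k))][:5], 1))    # tracks the chirp's true frequency
```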

  4. Decision making in recurrent neuronal circuits.

    PubMed

    Wang, Xiao-Jing

    2008-10-23

    Decision making has recently emerged as a central theme in neurophysiological studies of cognition, and experimental and computational work has led to the proposal of a cortical circuit mechanism of elemental decision computations. This mechanism depends on slow recurrent synaptic excitation balanced by fast feedback inhibition, which not only instantiates attractor states for forming categorical choices but also long transients for gradually accumulating evidence in favor of or against alternative options. Such a circuit endowed with reward-dependent synaptic plasticity is able to produce adaptive choice behavior. While decision threshold is a core concept for reaction time tasks, it can be dissociated from a general decision rule. Moreover, perceptual decisions and value-based economic choices are described within a unified framework in which probabilistic choices result from irregular neuronal activity as well as iterative interactions of a decision maker with an uncertain environment or other unpredictable decision makers in a social group.

  5. Collusion-resistant multimedia fingerprinting: a unified framework

    NASA Astrophysics Data System (ADS)

    Wu, Min; Trappe, Wade; Wang, Z. Jane; Liu, K. J. Ray

    2004-06-01

    Digital fingerprints are unique labels inserted in different copies of the same content before distribution. Each digital fingerprint is assigned to an intended recipient, and can be used to trace the culprits who use their content for unintended purposes. Attacks mounted by multiple users, known as collusion attacks, provide a cost-effective method for attenuating the identifying fingerprint of each colluder; thus, collusion poses a real challenge to protecting digital media data and enforcing usage policies. This paper examines a few major design methodologies for collusion-resistant fingerprinting of multimedia, and presents a unified framework that helps highlight the common issues and the uniqueness of different fingerprinting techniques.
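
    A toy spread-spectrum example of the threat model described above (our illustration, not the paper's constructions; all parameters are arbitrary): colluders average their marked copies, which attenuates each individual fingerprint, and a non-blind correlation detector then tries to identify them.

```python
# Toy spread-spectrum fingerprinting with an averaging collusion attack.
import numpy as np

rng = np.random.default_rng(3)
n_users, n_samples, strength = 20, 100_000, 1.0
host = 10 * rng.standard_normal(n_samples)          # host signal (image/audio features)
fingerprints = rng.standard_normal((n_users, n_samples))
copies = host + strength * fingerprints             # one marked copy per user

colluders = [0, 1, 2, 3, 4]
pirated = copies[colluders].mean(axis=0)            # averaging collusion attack

residual = pirated - host                           # non-blind detection
scores = fingerprints @ residual / n_samples        # correlation with each user's fingerprint
accused = np.argsort(scores)[::-1][:len(colluders)]
print("scores (colluders ~ strength/5, innocents ~ 0):", np.round(scores, 3))
print("accused:", sorted(accused.tolist()))
```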

  6. Computational Design for Multifunctional Microstructural Composites

    NASA Astrophysics Data System (ADS)

    Chen, Yuhang; Zhou, Shiwei; Li, Qing

    As an important class of natural and engineered materials, periodic microstructural composites have drawn substantial attention from the material research community for their excellent flexibility in tailoring various desirable physical behaviors. To develop periodic cellular composites for multifunctional applications, this paper presents a unified design framework for combining stiffness and a range of physical properties governed by quasi-harmonic partial differential equations. A multiphase microstructural configuration is sought within a periodic base-cell design domain using topology optimization. To deal with conflicting properties, e.g. conductivity/permeability versus bulk modulus, the optimum is sought in a Pareto sense. Illustrative examples demonstrate the capability of the presented procedure for the design of multiphysical composites and tissue scaffolds.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lue Xing; Sun Kun; Wang Pan

    In the framework of Bell-polynomial manipulations, under investigation hereby are three single-field bilinearizable equations: the (1+1)-dimensional shallow water wave model, the Boiti-Leon-Manna-Pempinelli model, and the (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to achieve the Baecklund transformations and Lax pairs associated with those three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for those three soliton equations can be, respectively, cast into the bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.

  8. General Multivariate Linear Modeling of Surface Shapes Using SurfStat

    PubMed Central

    Chung, Moo K.; Worsley, Keith J.; Nacewicz, Brendon M.; Dalton, Kim M.; Davidson, Richard J.

    2010-01-01

    Although there are many imaging studies on traditional ROI-based amygdala volumetry, there are very few studies on modeling amygdala shape variations. This paper presents a unified computational and statistical framework for modeling amygdala shape variations in a clinical population. The weighted spherical harmonic representation is used to parameterize, smooth, and normalize amygdala surfaces. The representation is subsequently used as an input for multivariate linear models accounting for nuisance covariates such as age and brain size differences, using the SurfStat package, which completely avoids the complexity of specifying design matrices. The methodology has been applied to quantify abnormal local amygdala shape variations in 22 high-functioning autistic subjects. PMID:20620211

  9. Stochastic correlative firing for figure-ground segregation.

    PubMed

    Chen, Zhe

    2005-03-01

    Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.

  10. Consistent multiphysics simulation of a central tower CSP plant as applied to ISTORE

    NASA Astrophysics Data System (ADS)

    Votyakov, Evgeny V.; Papanicolas, Costas N.

    2017-06-01

    We present a unified, consistent multiphysics approach to model a central tower CSP plant. The framework for the model includes Monte Carlo ray tracing (RT) and computational fluid dynamics (CFD) components utilizing the OpenFOAM C++ software library. The RT part works effectively with the complex surfaces of engineering designs given in CAD formats. The CFD simulation, which is based on 3D Navier-Stokes equations, takes into account all possible heat transfer mechanisms: radiation, conduction, and convection. This package is used to examine the solar field of the experimental Platform for Research, Observation, and TEchnological Applications in Solar Energy (PROTEAS) and the Integrated STOrage and Receiver (ISTORE) developed at the Cyprus Institute.

  11. Gauge Gravity and Electroweak Theory

    NASA Astrophysics Data System (ADS)

    Hestenes, David

    2008-09-01

    Reformulation of the Dirac equation in terms of the real Spacetime Algebra (STA) reveals hidden geometric structure, including a geometric role for the unit imaginary as generator of rotations in a spacelike plane. The STA and the real Dirac equation play essential roles in a new Gauge Theory Gravity (GTG) version of General Relativity (GR). Besides clarifying the conceptual foundations of GR and facilitating complex computations, GTG opens up new possibilities for a unified gauge theory of gravity and quantum mechanics, including spacetime geometry of electroweak interactions. The Weinberg-Salam model fits perfectly into this geometric framework, and a promising variant that replaces chiral states with Majorana states is formulated to incorporate zitterbewegung in electron states.

  12. Efficient optimization of the quantum relative entropy

    NASA Astrophysics Data System (ADS)

    Fawzi, Hamza; Fawzi, Omar

    2018-04-01

    Many quantum information measures can be written as an optimization of the quantum relative entropy between sets of states. For example, the relative entropy of entanglement of a state is the minimum relative entropy to the set of separable states. The various capacities of quantum channels can also be written in this way. We propose a unified framework to numerically compute these quantities using off-the-shelf semidefinite programming solvers, exploiting the approximation method proposed in Fawzi, Saunderson and Parrilo (2017 arXiv: 1705.00812). As a notable application, this method allows us to provide numerical counterexamples for a proposed lower bound on the quantum conditional mutual information in terms of the relative entropy of recovery.
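
    For orientation, the quantity being optimized can be evaluated directly for given states by eigendecomposition, as in the sketch below; this computes the objective only, and the paper's contribution, approximating its optimization with off-the-shelf semidefinite programming, is not reproduced here.

```python
# Direct evaluation of the quantum relative entropy D(rho || sigma) = Tr[rho (log rho - log sigma)].
import numpy as np
from scipy.linalg import logm

def random_density_matrix(d, rng):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = A @ A.conj().T                       # positive semidefinite, full rank a.s.
    return rho / np.trace(rho).real

def quantum_relative_entropy(rho, sigma):
    return np.trace(rho @ (logm(rho) - logm(sigma))).real   # in nats

rng = np.random.default_rng(4)
rho, sigma = random_density_matrix(4, rng), random_density_matrix(4, rng)
print(quantum_relative_entropy(rho, sigma))    # >= 0
print(quantum_relative_entropy(rho, rho))      # ~ 0
```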

  13. A development framework for semantically interoperable health information systems.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2009-02-01

    Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state of the art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported in formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.

  14. The unified model of vegetarian identity: A conceptual framework for understanding plant-based food choices.

    PubMed

    Rosenfeld, Daniel L; Burrow, Anthony L

    2017-05-01

    By departing from social norms regarding food behaviors, vegetarians acquire membership in a distinct social group and can develop a salient vegetarian identity. However, vegetarian identities are diverse, multidimensional, and unique to each individual. Much research has identified fundamental psychological aspects of vegetarianism, and an identity framework that unifies these findings into common constructs and conceptually defines variables is needed. Integrating psychological theories of identity with research on food choices and vegetarianism, this paper proposes a conceptual model for studying vegetarianism: The Unified Model of Vegetarian Identity (UMVI). The UMVI encompasses ten dimensions-organized into three levels (contextual, internalized, and externalized)-that capture the role of vegetarianism in an individual's self-concept. Contextual dimensions situate vegetarianism within contexts; internalized dimensions outline self-evaluations; and externalized dimensions describe enactments of identity through behavior. Together, these dimensions form a coherent vegetarian identity, characterizing one's thoughts, feelings, and behaviors regarding being vegetarian. By unifying dimensions that capture psychological constructs universally, the UMVI can prevent discrepancies in operationalization, capture the inherent diversity of vegetarian identities, and enable future research to generate greater insight into how people understand themselves and their food choices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. The Pursuit of a "Better" Explanation as an Organizing Framework for Science Teaching and Learning

    ERIC Educational Resources Information Center

    Papadouris, Nicos; Vokos, Stamatis; Constantinou, Constantinos P.

    2018-01-01

    This article seeks to make the case for the pursuit of a "better" explanation being a productive organizing framework for science teaching and learning. Underlying this position is the idea that this framework allows promoting, in a unified manner, facility with the scientific practice of constructing explanations, appreciation of its…

  16. Modelling Trial-by-Trial Changes in the Mismatch Negativity

    PubMed Central

    Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.

    2013-01-01

    The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
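
    A minimal single-trial learner in this spirit (a toy Beta-Bernoulli model of deviant probability, not one of the five models formalized in the paper) shows the kind of trial-wise quantity, here Shannon surprise, that can be regressed against single-trial MMN amplitudes.

```python
# Toy Beta-Bernoulli learner of an oddball sequence with trial-wise Bayesian surprise.
import numpy as np

rng = np.random.default_rng(5)
tones = (rng.random(300) < 0.15).astype(int)     # 1 = deviant, 0 = standard (simplified oddball)

a, b = 1.0, 1.0                                  # Beta prior over P(deviant)
surprise = np.empty(len(tones))
for t, y in enumerate(tones):
    p_deviant = a / (a + b)                      # predictive probability before the tone
    p_obs = p_deviant if y == 1 else 1.0 - p_deviant
    surprise[t] = -np.log(p_obs)                 # Shannon surprise of this trial
    a, b = a + y, b + (1 - y)                    # Bayesian update of the learned regularity

print("mean surprise, deviants vs standards:",
      surprise[tones == 1].mean().round(3), surprise[tones == 0].mean().round(3))
```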

  17. Programming chemistry in DNA-addressable bioreactors.

    PubMed

    Fellermann, Harold; Cardelli, Luca

    2014-10-06

    We present a formal calculus, termed the chemtainer calculus, able to capture the complexity of compartmentalized reaction systems such as populations of possibly nested vesicular compartments. Compartments contain molecular cargo as well as surface markers in the form of DNA single strands. These markers serve as compartment addresses and allow for their targeted transport and fusion, thereby enabling reactions of previously separated chemicals. The overall system organization allows for the set-up of programmable chemistry in microfluidic or other automated environments. We introduce a simple sequential programming language whose instructions are motivated by state-of-the-art microfluidic technology. Our approach integrates electronic control, chemical computing and material production in a unified formal framework that is able to mimic the integrated computational and constructive capabilities of the subcellular matrix. We provide a non-deterministic semantics of our programming language that enables us to analytically derive the computational and constructive power of our machinery. This semantics is used to derive the sets of all constructable chemicals and supermolecular structures that emerge from different underlying instruction sets. Because our proofs are constructive, they can be used to automatically infer control programs for the construction of target structures from a limited set of resource molecules. Finally, we present an example of our framework from the area of oligosaccharide synthesis. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  18. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
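
    For readers unfamiliar with how a single formulation can unify derivative computation, the standard direct and adjoint identities for a model written as residuals R(x, u) = 0 with outputs F(x, u) illustrate the idea; this is a textbook sketch, not necessarily the exact unified equation developed in the thesis.

```latex
% Total derivative of outputs F(x,u) subject to residual constraints R(x,u) = 0
\frac{dF}{dx} = \frac{\partial F}{\partial x}
              - \frac{\partial F}{\partial u}
                \left(\frac{\partial R}{\partial u}\right)^{-1}
                \frac{\partial R}{\partial x}
% Direct (forward) method: one linear solve per design variable
\frac{\partial R}{\partial u}\,\phi = \frac{\partial R}{\partial x},
\qquad \frac{dF}{dx} = \frac{\partial F}{\partial x} - \frac{\partial F}{\partial u}\,\phi
% Adjoint (reverse) method: one linear solve per output, independent of the
% number of design variables -- the key to handling tens of thousands of them
\left(\frac{\partial R}{\partial u}\right)^{\!T}\psi = \left(\frac{\partial F}{\partial u}\right)^{\!T},
\qquad \frac{dF}{dx} = \frac{\partial F}{\partial x} - \psi^{T}\frac{\partial R}{\partial x}
```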

  19. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    PubMed Central

    2012-01-01

    Background The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. Results The following related ontologies have been developed for OpenTox: a) Toxicological ontology – listing the toxicological endpoints; b) Organs system and Effects ontology – addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology – representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology – representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink–ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines and provide a flexible framework, which allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists). Availability The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1 1/opentox.owl, the ToxML - OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/ PMID:22541598

  20. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia.

    PubMed

    Tcheremenskaia, Olga; Benigni, Romualdo; Nikolova, Ivelina; Jeliazkova, Nina; Escher, Sylvia E; Batke, Monika; Baier, Thomas; Poroikov, Vladimir; Lagunin, Alexey; Rautenberg, Micha; Hardy, Barry

    2012-04-24

    The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology - representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink-ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines and provide a flexible framework, which allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists). The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1 1/opentox.owl, the ToxML - OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/
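
    As a concrete illustration of the REST pattern described in these two records (resolvable URIs whose RDF representation can be retrieved by content negotiation), the sketch below fetches a resource and iterates over its triples; the compound URI is hypothetical, and only the general pattern is taken from the abstract.

```python
import requests
from rdflib import Graph

# Hypothetical resource URI; a real OpenTox service exposes its own resolvable URIs.
compound_uri = "https://example.org/opentox/compound/123"

# REST content negotiation: ask the service for the RDF representation of the resource.
resp = requests.get(compound_uri, headers={"Accept": "application/rdf+xml"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="xml")   # load the RDF/XML payload into an in-memory graph
for s, p, o in g:                       # each triple describes the compound or its data
    print(s, p, o)
```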

  1. Unified formalism for higher order non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso

    2012-03-01

    This work is devoted to giving a geometric framework for describing higher order non-autonomous mechanical systems. The starting point is to extend the Lagrangian-Hamiltonian unified formalism of Skinner and Rusk for these kinds of systems, generalizing previous developments for higher order autonomous mechanical systems and first-order non-autonomous mechanical systems. Then, we use this unified formulation to derive the standard Lagrangian and Hamiltonian formalisms, including the Legendre-Ostrogradsky map and the Euler-Lagrange and the Hamilton equations, both for regular and singular systems. As applications of our model, two examples of regular and singular physical systems are studied.

  2. Colaborated Architechture Framework for Composition UML 2.0 in Zachman Framework

    NASA Astrophysics Data System (ADS)

    Hermawan; Hastarista, Fika

    2016-01-01

    The Zachman Framework (ZF) is the enterprise architecture framework most widely adopted in Enterprise Information System (EIS) development. In this study, a Collaborated Architecture Framework (CAF) has been developed to combine ZF with Unified Modeling Language (UML) 2.0 modeling. The CAF provides a composition of the ZF matrix in which each cell consists of Model Driven Architecture (MDA) artifacts drawn from various UML models and Software Requirement Specification (SRS) documents. This modeling is used to develop an Enterprise Resource Planning (ERP) system. Because ERP covers a large number of applications with complex relations, an Agile Model Driven Design (AMDD) approach is needed as an advanced method to transform the MDA into application-module components efficiently and accurately. Finally, using the CAF satisfied the needs of all stakeholders involved across the process stages of the Rational Unified Process (RUP) and achieved high satisfaction with the functionality of the ERP software at PT. Iglas (Persero) Gresik.

  3. Family Systems Theory: A Unifying Framework for Codependence.

    ERIC Educational Resources Information Center

    Prest, Layne A.; Protinsky, Howard

    1993-01-01

    Considers addictions and construct of codependence. Offers critical review and synthesis of codependency literature, along with an intergenerational family systems framework for conceptualizing the relationship of the dysfunctional family to the construct of codependence. Presents theoretical basis for systemic clinical work and research in this…

  4. [Research on tumor information grid framework].

    PubMed

    Zhang, Haowei; Qin, Zhu; Liu, Ying; Tan, Jianghao; Cao, Haitao; Chen, Youping; Zhang, Ke; Ding, Yuqing

    2013-10-01

    In order to realize tumor disease information sharing and unified management, we used grid technology to integrate the data and software resources distributed across various medical institutions, making these heterogeneous resources consistent and interoperable in both semantics and syntax. This article describes the tumor grid framework: service types are packaged in the Web Service Description Language (WSDL) and XML Schema Definition (XSD), and clients use the serialized documents to operate on the distributed resources. The service objects can be modeled in the Unified Modeling Language (UML) as middleware to create application programming interfaces. All grid resources are registered in the index and published as Web Services based on the Web Services Resource Framework (WSRF). Using this system, a multi-center, large-sample, networked tumor disease resource-sharing framework can be built to improve the research capacity of medical institutions and patients' quality of life.

  5. Unified Engineering Software System

    NASA Technical Reports Server (NTRS)

    Purves, L. R.; Gordon, S.; Peltzman, A.; Dube, M.

    1989-01-01

    Collection of computer programs performs diverse functions in prototype engineering. NEXUS, NASA Engineering Extendible Unified Software system, is research set of computer programs designed to support full sequence of activities encountered in NASA engineering projects. Sequence spans preliminary design, design analysis, detailed design, manufacturing, assembly, and testing. Primarily addresses process of prototype engineering, task of getting single or small number of copies of product to work. Written in FORTRAN 77 and PROLOG.

  6. Concentration-driven models revisited: towards a unified framework to model settling tanks in water resource recovery facilities.

    PubMed

    Torfs, Elena; Martí, M Carmen; Locatelli, Florent; Balemans, Sophie; Bürger, Raimund; Diehl, Stefan; Laurent, Julien; Vanrolleghem, Peter A; François, Pierre; Nopens, Ingmar

    2017-02-01

    A new perspective on the modelling of settling behaviour in water resource recovery facilities is introduced. The ultimate goal is to describe in a unified way the processes taking place both in primary settling tanks (PSTs) and secondary settling tanks (SSTs) for a more detailed operation and control. First, experimental evidence is provided, pointing out distributed particle properties (such as size, shape, density, porosity, and flocculation state) as an important common source of distributed settling behaviour in different settling unit processes and throughout different settling regimes (discrete, hindered and compression settling). Subsequently, a unified model framework that considers several particle classes is proposed in order to describe distributions in settling behaviour as well as the effect of variations in particle properties on the settling process. The result is a set of partial differential equations (PDEs) that are valid from dilute concentrations, where they correspond to discrete settling, to concentrated suspensions, where they correspond to compression settling. Consequently, these PDEs model both PSTs and SSTs.
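
    A schematic one-dimensional form of such a multi-class settling model is given below; this is a generic sketch consistent with the description above, not the paper's exact equations.

```latex
% Concentration C_k of particle class k at depth z; C is the total concentration
\frac{\partial C_k}{\partial t}
  + \frac{\partial}{\partial z}\bigl(v_k(C)\,C_k\bigr)
  = \frac{\partial}{\partial z}\!\left(d_{\mathrm{comp}}(C)\,\frac{\partial C_k}{\partial z}\right),
\qquad C = \sum_k C_k
% v_k(C): class-specific hindered-settling velocity (reducing to discrete settling
% at dilute C); d_comp(C): compression term, active only above a critical concentration
```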

  7. A quasi-likelihood approach to non-negative matrix factorization

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
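
    One member of this family is NMF under the generalized Kullback-Leibler divergence, which corresponds to a Poisson (signal-dependent) noise model; the multiplicative-update sketch below illustrates the idea and is not the authors' algorithm.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, eps=1e-9, seed=0):
    """NMF with the generalized Kullback-Leibler divergence via multiplicative
    updates (Poisson noise model). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)   # update H
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)   # update W
    return W, H

# Example: factorize a small non-negative matrix (e.g. rectified EMG envelopes)
V = np.abs(np.random.default_rng(1).normal(size=(20, 100)))
W, H = nmf_kl(V, rank=3)
```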

  8. Groundwater modelling in decision support: reflections on a unified conceptual framework

    NASA Astrophysics Data System (ADS)

    Doherty, John; Simmons, Craig T.

    2013-11-01

    Groundwater models are commonly used as basis for environmental decision-making. There has been discussion and debate in recent times regarding the issue of model simplicity and complexity. This paper contributes to this ongoing discourse. The selection of an appropriate level of model structural and parameterization complexity is not a simple matter. Although the metrics on which such selection should be based are simple, there are many competing, and often unquantifiable, considerations which must be taken into account as these metrics are applied. A unified conceptual framework is introduced and described which is intended to underpin groundwater modelling in decision support with a direct focus on matters regarding model simplicity and complexity.

  9. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology

    USGS Publications Warehouse

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-01-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  10. Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology

    NASA Astrophysics Data System (ADS)

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-11-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  11. A Unified Model of Geostrophic Adjustment and Frontogenesis

    NASA Astrophysics Data System (ADS)

    Taylor, John; Shakespeare, Callum

    2013-11-01

    Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we describe a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.

  12. Integrating diverse databases into an unified analysis framework: a Galaxy approach

    PubMed Central

    Blankenberg, Daniel; Coraor, Nathan; Von Kuster, Gregory; Taylor, James; Nekrutenko, Anton

    2011-01-01

    Recent technological advances have led to the ability to generate large amounts of data for model and non-model organisms. Whereas in the past there were a relatively small number of central repositories serving genomic data, an increasing number of distinct specialized data repositories and resources have been established. Here, we describe a generic approach that provides for the integration of a diverse spectrum of data resources into a unified analysis framework, Galaxy (http://usegalaxy.org). This approach allows the simplified coupling of external data resources with the data analysis tools available to Galaxy users, while leveraging the native data mining facilities of the external data resources. Database URL: http://usegalaxy.org PMID:21531983

  13. ConnectViz: Accelerated Approach for Brain Structural Connectivity Using Delaunay Triangulation.

    PubMed

    Adeshina, A M; Hashim, R

    2016-03-01

    Stroke is a cardiovascular disease with high mortality and long-term disability worldwide. Normal functioning of the brain depends on an adequate supply of oxygen and nutrients to the brain's complex network through the blood vessels. Hemorrhagic stroke, ischemia or other blood vessel dysfunctions can affect patients during a cerebrovascular incident. Structurally, the left and right carotid arteries and the right and left vertebral arteries are responsible for supplying blood to the brain, scalp and face. However, a number of impairments in the function of the frontal lobes may occur as a result of any decrease in the flow of blood through one of the internal carotid arteries. Such impairment commonly results in numbness, weakness or paralysis. Recently, the concept of the brain's wiring representation, the connectome, was introduced. However, construction and visualization of such a brain network requires tremendous computation; consequently, previously proposed approaches suffer from common problems of high memory consumption and slow execution, and interactivity in these frameworks remains an outstanding issue. This study proposes an accelerated approach for brain connectomic visualization based on the graph theory paradigm using compute unified device architecture, extending the previously proposed SurLens Visualization and computer aided hepatocellular carcinoma frameworks. The accelerated brain structural connectivity framework was evaluated with stripped brain datasets from the Department of Surgery, University of North Carolina, Chapel Hill, USA. Significantly, our proposed framework is able to generate and extract points and edges from datasets, display nodes and edges as a network, and clearly map data volumes to the corresponding brain surface. Moreover, surfaces of the dataset were displayed simultaneously with the nodes and edges. The framework provides a high degree of interactivity, representing the nodes and edges intuitively at interactive speeds for instantaneous mapping of the datasets' features. Uniquely, the connectomic algorithm performed remarkably fast with normal hardware requirements.

  14. ConnectViz: Accelerated approach for brain structural connectivity using Delaunay triangulation.

    PubMed

    Adeshina, A M; Hashim, R

    2015-02-06

    Stroke is a cardiovascular disease with high mortality and long-term disability worldwide. Normal functioning of the brain depends on an adequate supply of oxygen and nutrients to the brain's complex network through the blood vessels. Hemorrhagic stroke, ischemia or other blood vessel dysfunctions can affect patients during a cerebrovascular incident. Structurally, the left and right carotid arteries and the right and left vertebral arteries are responsible for supplying blood to the brain, scalp and face. However, a number of impairments in the function of the frontal lobes may occur as a result of any decrease in the flow of blood through one of the internal carotid arteries. Such impairment commonly results in numbness, weakness or paralysis. Recently, the concept of the brain's wiring representation, the connectome, was introduced. However, construction and visualization of such a brain network requires tremendous computation; consequently, previously proposed approaches suffer from common problems of high memory consumption and slow execution, and interactivity in these frameworks remains an outstanding issue. This study proposes an accelerated approach for brain connectomic visualization based on the graph theory paradigm using Compute Unified Device Architecture (CUDA), extending the previously proposed SurLens Visualization and Computer Aided Hepatocellular Carcinoma (CAHECA) frameworks. The accelerated brain structural connectivity framework was evaluated with stripped brain datasets from the Department of Surgery, University of North Carolina, Chapel Hill, United States. Significantly, our proposed framework is able to generate and extract points and edges from datasets, display nodes and edges as a network, and clearly map data volumes to the corresponding brain surface. Moreover, surfaces of the dataset were displayed simultaneously with the nodes and edges. The framework provides a high degree of interactivity, representing the nodes and edges intuitively at interactive speeds for instantaneous mapping of the datasets' features. Uniquely, the connectomic algorithm performed remarkably fast with normal hardware requirements.
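
    The graph-construction step described in these two records (building connectivity from node coordinates via Delaunay triangulation) can be sketched on the CPU as follows; the published framework performs this on the GPU via CUDA, and the node coordinates used here are stand-ins.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points):
    """Build an undirected edge set from the Delaunay tetrahedralization of 3-D
    node coordinates. CPU sketch of the graph-construction idea only."""
    tri = Delaunay(points)                       # tetrahedra for 3-D input
    edges = set()
    for simplex in tri.simplices:                # each simplex has 4 vertices in 3-D
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                a, b = sorted((int(simplex[i]), int(simplex[j])))
                edges.add((a, b))
    return np.array(sorted(edges))

nodes = np.random.default_rng(0).random((200, 3))   # e.g. parcel centroids (placeholder)
edges = delaunay_edges(nodes)
```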

  15. Putting the School Interoperability Framework to the Test

    ERIC Educational Resources Information Center

    Mercurius, Neil; Burton, Glenn; Hopkins, Bill; Larsen, Hans

    2004-01-01

    The Jurupa Unified School District in Southern California recently partnered with Microsoft, Dell and the Zone Integration Group for the implementation of a School Interoperability Framework (SIF) database repository model throughout the district (Magner 2002). A two-week project--the Integrated District Education Applications System, better known…

  16. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  17. Synergy of the Developed 6D BIM Framework and Conception of the nD BIM Framework and nD BIM Process Ontology

    ERIC Educational Resources Information Center

    O'Keeffe, Shawn Edward

    2013-01-01

    The author developed a unified nD framework and process ontology for Building Information Modeling (BIM). The research includes a framework developed for 6D BIM, nD BIM, and nD ontology that defines the domain and sub-domain constructs for future nD BIM dimensions. The nD ontology defines the relationships of kinds within any new proposed…

  18. Failure to Visualize and Describe Operations: The Evolution and Implementation of the Operational Framework

    DTIC Science & Technology

    2017-05-25

    Operations, and Unified Land Operations) and the US Army’s leader development model identifies how the education, training, and experience of field-grade...officers have failed in their incorporation of the framework because they lack the education, training, and experience for the use of the framework... education, training, and experience of field-grade officers at the division level have influenced their use of the operational framework. The cause for

  19. A cell-based computational model of early embryogenesis coupling mechanical behaviour and gene regulation

    NASA Astrophysics Data System (ADS)

    Delile, Julien; Herrmann, Matthieu; Peyriéras, Nadine; Doursat, René

    2017-01-01

    The study of multicellular development is grounded in two complementary domains: cell biomechanics, which examines how physical forces shape the embryo, and genetic regulation and molecular signalling, which concern how cells determine their states and behaviours. Integrating both sides into a unified framework is crucial to fully understand the self-organized dynamics of morphogenesis. Here we introduce MecaGen, an integrative modelling platform enabling the hypothesis-driven simulation of these dual processes via the coupling between mechanical and chemical variables. Our approach relies upon a minimal `cell behaviour ontology' comprising mesenchymal and epithelial cells and their associated behaviours. MecaGen enables the specification and control of complex collective movements in 3D space through a biologically relevant gene regulatory network and parameter space exploration. Three case studies investigating pattern formation, epithelial differentiation and tissue tectonics in zebrafish early embryogenesis, the latter with quantitative comparison to live imaging data, demonstrate the validity and usefulness of our framework.

  20. Next Generation Extended Lagrangian Quantum-based Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Negre, Christian

    2017-06-01

    A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which are necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided and normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach for simulations of 100,000+ atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
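
    One common schematic of an extended Lagrangian of this kind, written from general knowledge of extended Lagrangian Born-Oppenheimer molecular dynamics rather than taken from this talk, adds auxiliary electronic degrees of freedom n that follow the optimized density ρ through a harmonic coupling:

```latex
\mathcal{L}\left(\mathbf{R},\dot{\mathbf{R}},n,\dot{n}\right)
  = \tfrac{1}{2}\sum_I M_I \dot{\mathbf{R}}_I^{\,2}
  - U(\mathbf{R};\rho)
  + \tfrac{\mu}{2}\,\mathrm{Tr}\!\left[\dot{n}^{2}\right]
  - \tfrac{\mu\omega^{2}}{2}\,\mathrm{Tr}\!\left[(\rho - n)^{2}\right]
% mu: fictitious mass of the auxiliary degrees of freedom; omega: curvature of the
% harmonic well. Propagating n alongside the nuclei keeps the electronic starting
% guess close to the ground state, avoiding tight self-consistent-field
% optimization at every time step.
```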

  1. Model Checking Degrees of Belief in a System of Agents

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Primero, Giuseppe; Rungta, Neha

    2014-01-01

    Reasoning about degrees of belief has been investigated in the past by a number of authors and has a number of practical applications in real life. In this paper we present a unified framework to model and verify degrees of belief in a system of agents. In particular, we describe an extension of the temporal-epistemic logic CTLK and we introduce a semantics based on interpreted systems for this extension. In this way, degrees of beliefs do not need to be provided externally, but can be derived automatically from the possible executions of the system, thereby providing a computationally grounded formalism. We leverage the semantics to (a) construct a model checking algorithm, (b) investigate its complexity, (c) provide a Java implementation of the model checking algorithm, and (d) evaluate our approach using the standard benchmark of the dining cryptographers. Finally, we provide a detailed case study: using our framework and our implementation, we assess and verify the situational awareness of the pilot of Air France 447 flying in off-nominal conditions.

  2. Noise Production of an Idealized Two-Dimensional Fish School

    NASA Astrophysics Data System (ADS)

    Wagenhoffer, Nathan; Moored, Keith; Jaworski, Justin

    2017-11-01

    The analysis of quiet bio-inspired propulsive concepts requires a rapid, unified computational framework that integrates the coupled fluid-solid dynamics of swimmers and their wakes with the resulting noise generation. Such a framework is presented for two-dimensional flows, where the fluid motion is modeled by an unsteady boundary element method with a vortex-particle wake. The unsteady surface forces from the potential flow solver are then passed to an acoustic boundary element solver to predict the radiated sound in low-Mach-number flows. The coupled flow-acoustic solver is validated against canonical vortex-sound problems. A diamond arrangement of four airfoils is subjected to traveling-wave kinematics representing a known idealized pattern for a school of fish, and the airfoil motion and inflow values are derived from the range of Strouhal values common to many natural swimmers. The coupled flow-acoustic solver estimates and analyzes the hydrodynamic performance and noise production of the idealized school of swimmers.

  3. Brain mechanisms controlling decision making and motor planning.

    PubMed

    Ramakrishnan, Arjun; Murthy, Aditya

    2013-01-01

    Accumulator models of decision making provide a unified framework to understand decision making and motor planning. In these models, the evolution of a decision is reflected in the accumulation of sensory information into a motor plan that reaches a threshold, leading to choice behavior. While these models provide an elegant framework to understand performance and reaction times, their ability to explain complex behaviors such as decision making and motor control of sequential movements in dynamic environments is unclear. To examine and probe the limits of online modification of decision making and motor planning, an oculomotor "redirect" task was used. Here, subjects were expected to change their eye movement plan when a new saccade target appeared. Based on task performance, saccade reaction time distributions, computational models of behavior, and intracortical microstimulation of monkey frontal eye fields, we show how accumulator models can be tested and extended to study dynamic aspects of decision making and motor control. Copyright © 2013 Elsevier B.V. All rights reserved.
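
    As a minimal illustration of the accumulator idea described above (illustrative parameters only, not the models fitted in the study), independent accumulators integrate noisy evidence until one reaches a threshold, yielding both the choice and the reaction time:

```python
import numpy as np

rng = np.random.default_rng(0)

def race_accumulator(drifts, threshold=1.0, noise=0.1, dt=0.001, max_t=2.0):
    """Simulate one trial of a simple race model: independent accumulators
    integrate noisy evidence until one crosses the threshold."""
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros(len(drifts))            # accumulated evidence per option
    t = 0.0
    while t < max_t:
        x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        t += dt
        if (x >= threshold).any():
            return int(np.argmax(x)), t  # winning accumulator and its reaction time
    return None, max_t                   # no decision before the deadline

choice, rt = race_accumulator([1.2, 0.8])   # e.g. two competing saccade plans
```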

  4. Parametric models to relate spike train and LFP dynamics with neural information processing.

    PubMed

    Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial-by-trial behavioral performance than existing models of neural information processing. Our results highlight the utility of the unified modeling framework for characterizing spike-LFP recordings obtained during behavioral performance.

  5. A unifying framework for systems modeling, control systems design, and system operation

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.

    2005-01-01

    Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition (whether functional, physical, or discipline-based) that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.

  6. Toward a unified approach to dose-response modeling in ecotoxicology.

    PubMed

    Ritz, Christian

    2010-01-01

    This study reviews dose-response models that are used in ecotoxicology. The focus lies on clarification of differences and similarities between models, and as a side effect, their different guises in ecotoxicology are unravelled. A look at frequently used dose-response models reveals major discrepancies, among other things in naming conventions. Therefore, there is a need for a unified view on dose-response modeling in order to improve the understanding of it and to facilitate communication and comparison of findings across studies, thus realizing its full potential. This study attempts to establish a general framework that encompasses most dose-response models that are of interest to ecotoxicologists in practice. The framework includes commonly used models such as the log-logistic and Weibull models, but also features entire suites of models as found in various guidance documents. An outline on how the proposed framework can be implemented in statistical software systems is also provided.
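
    The log-logistic model mentioned above is a typical member of the unified family; a common four-parameter form, written from general ecotoxicology practice rather than the paper's own notation, is sketched below.

```python
import numpy as np

def log_logistic_4p(x, b, c, d, e):
    """Four-parameter log-logistic dose-response curve (c = lower limit,
    d = upper limit, e = ED50, b = slope). Illustrative sketch only."""
    x = np.asarray(x, dtype=float)
    return c + (d - c) / (1.0 + np.exp(b * (np.log(x) - np.log(e))))

# Example: control-normalized response across positive doses
doses = np.array([0.1, 1.0, 10.0, 100.0])
resp = log_logistic_4p(doses, b=1.5, c=0.05, d=1.0, e=5.0)
```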

  7. Free Energy Computations by Minimization of Kullback-Leibler Divergence: An Efficient Adaptive Biasing Potential Method for Sparse Representations

    DTIC Science & Technology

    2011-10-14

    landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy...experimentally, to characterize global changes as well as investigate relative stabilities. In most applications, a brute-force computation based on

  8. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    NASA Astrophysics Data System (ADS)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land-surface processes, and explicit cloud-radiation and cloud-land surface interactions are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented, and the use of the multi-satellite simulator to improve simulated precipitation processes will be discussed.

  9. Theoretical Foundation of Copernicus: A Unified System for Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Ocampo, Cesar; Senent, Juan S.; Williams, Jacob

    2010-01-01

    The fundamental methods are described for the general spacecraft trajectory design and optimization software system called Copernicus. The methods rely on a unified framework that is used to model, design, and optimize spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The trajectory model, with its associated equations of motion and maneuver models, are discussed.

  10. A Unified Framework for Monetary Theory and Policy Analysis.

    ERIC Educational Resources Information Center

    Lagos, Ricardo; Wright, Randall

    2005-01-01

    Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…

  11. Generalizability Theory as a Unifying Framework of Measurement Reliability in Adolescent Research

    ERIC Educational Resources Information Center

    Fan, Xitao; Sun, Shaojing

    2014-01-01

    In adolescence research, the treatment of measurement reliability is often fragmented, and it is not always clear how different reliability coefficients are related. We show that generalizability theory (G-theory) is a comprehensive framework of measurement reliability, encompassing all other reliability methods (e.g., Pearson "r,"…

  12. Rapid development of entity-based data models for bioinformatics with persistence object-oriented design and structured interfaces.

    PubMed

    Ezra Tsur, Elishai

    2017-01-01

    Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency and structured interfaces to local and external data sources such as MalaCards, Biomodels and the National Centre for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysms' risk of rupture. The framework is available at http://nbel-lab.com.

  13. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
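
    The particle-sorting idea highlighted above can be sketched on the CPU as follows: sorting particles by cell index before the grid-interpolation (deposition) phase makes memory access contiguous, which is what makes this phase map well to a GPU. This is a 1-D NumPy sketch of the concept, not the thesis's CUDA implementation.

```python
import numpy as np

def deposit_charge_sorted(x, q, grid_n, dx):
    """Deposit particle charge onto a 1-D grid with nearest-grid-point weighting,
    after sorting particles by cell index (illustrating the sorting idea only)."""
    cell = np.clip((x / dx).astype(int), 0, grid_n - 1)  # cell index per particle
    order = np.argsort(cell, kind="stable")              # sort particles by cell
    cell_sorted, q_sorted = cell[order], q[order]
    # Contiguous particles now touch contiguous grid cells (coalesced access on a GPU).
    rho = np.bincount(cell_sorted, weights=q_sorted, minlength=grid_n)
    return rho / dx

x = np.random.default_rng(1).uniform(0.0, 1.0, 100_000)   # particle positions
rho = deposit_charge_sorted(x, q=np.ones_like(x), grid_n=64, dx=1.0 / 64)
```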

  14. Unified Bayesian Estimator of EEG Reference at Infinity: rREST (Regularized Reference Electrode Standardization Technique).

    PubMed

    Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A

    2018-01-01

    The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. It is also revealed that realistic volume conductor models improve the performances of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
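
    In generic terms, the unified estimator described above is a maximum a posteriori solution of a regularized linear inverse problem; the sketch below uses generic notation rather than the paper's and only illustrates the form such an estimator takes.

```latex
\hat{\mathbf{x}}
  = \arg\min_{\mathbf{x}} \;\|\mathbf{y}-\mathbf{A}\mathbf{x}\|^{2} + \lambda\,\|\mathbf{x}\|^{2}
  = \left(\mathbf{A}^{T}\mathbf{A} + \lambda\,\mathbf{I}\right)^{-1}\mathbf{A}^{T}\mathbf{y}
% y: referenced recordings; A: reference operator (and, for rREST, the lead field);
% x: potentials referenced at infinity; lambda: noise-to-signal variance ratio,
% selected in the paper by generalized cross-validation (GCV).
```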

  15. Towards a behavioral-matching based compilation of synthetic biology functions.

    PubMed

    Basso-Blandin, Adrien; Delaplace, Franck

    2015-09-01

    The field of synthetic biology is looking forward to an engineering framework for safely designing reliable de-novo biological functions. In this undertaking, Computer-Aided-Design (CAD) environments should play a central role in facilitating the design. Although CAD environments are widely used to engineer artificial systems, their application in synthetic biology is still in its infancy. In this article we address the design of a high-level language that sits at the core of a CAD environment. More specifically, the Gubs (Genomic Unified Behavioural Specification) language is a specification language used to describe observations of the expected behaviour. The compiler appropriately selects components such that the observed behaviour of the synthetic biological function resulting from their assembly complies with the programmed behaviour.

  16. Quantum approach to classical statistical mechanics.

    PubMed

    Somma, R D; Batista, C D; Ortiz, G

    2007-07-20

    We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates to assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(-c/N), for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.

  17. State estimation applications in aircraft flight-data analysis: A user's manual for SMACK

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.

    1991-01-01

    The evolution in the use of state estimation is traced for the analysis of aircraft flight data. A unifying mathematical framework for state estimation is reviewed, and several examples are presented that illustrate a general approach for checking instrument accuracy and data consistency, and for estimating variables that are difficult to measure. Recent applications associated with research aircraft flight tests and airline turbulence upsets are described. A computer program for aircraft state estimation is discussed in some detail. This document is intended to serve as a user's manual for the program called SMACK (SMoothing for AirCraft Kinematics). The diversity of the applications described emphasizes the potential advantages in using SMACK for flight-data analysis.

  18. Answering Schrödinger's question: A free-energy formulation

    NASA Astrophysics Data System (ADS)

    Ramstead, Maxwell James Désormeau; Badcock, Paul Benjamin; Friston, Karl John

    2018-03-01

    The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour. More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay. The aim of this review is to synthesise these advances with a meta-theoretical ontology of biological systems called variational neuroethology, which integrates the FEP with Tinbergen's four research questions to explain biological systems across spatial and temporal scales. We exemplify this framework by applying it to Homo sapiens, before translating variational neuroethology into a systematic research heuristic that supplies the biological, cognitive, and social sciences with a computationally tractable guide to discovery.

  19. CoCoNUT: an efficient system for the comparison and analysis of genomes

    PubMed Central

    2008-01-01

    Background Comparative genomics is the analysis and comparison of genomes from different species. This area of research is driven by the large number of sequenced genomes and heavily relies on efficient algorithms and software to perform pairwise and multiple genome comparisons. Results Most of the software tools available are tailored for one specific task. In contrast, we have developed a novel system CoCoNUT (Computational Comparative geNomics Utility Toolkit) that allows solving several different tasks in a unified framework: (1) finding regions of high similarity among multiple genomic sequences and aligning them, (2) comparing two draft or multi-chromosomal genomes, (3) locating large segmental duplications in large genomic sequences, and (4) mapping cDNA/EST to genomic sequences. Conclusion CoCoNUT is competitive with other software tools w.r.t. the quality of the results. The use of state of the art algorithms and data structures allows CoCoNUT to solve comparative genomics tasks more efficiently than previous tools. With the improved user interface (including an interactive visualization component), CoCoNUT provides a unified, versatile, and easy-to-use software tool for large scale studies in comparative genomics. PMID:19014477

  20. RT-18: Value of Flexibility. Phase 1

    DTIC Science & Technology

    2010-09-25

    an analytical framework based on sound mathematical constructs. A review of the current state-of-the-art showed that there is little unifying theory...framework that is mathematically consistent, domain independent and applicable under varying information levels. This report presents our advances in...

  1. Toxicology ontology perspectives.

    PubMed

    Hardy, Barry; Apic, Gordana; Carthew, Philip; Clark, Dominic; Cook, David; Dix, Ian; Escher, Sylvia; Hastings, Janna; Heard, David J; Jeliazkova, Nina; Judson, Philip; Matis-Mitchell, Sherri; Mitic, Dragana; Myatt, Glenn; Shah, Imran; Spjuth, Ola; Tcheremenskaia, Olga; Toldo, Luca; Watson, David; White, Andrew; Yang, Chihae

    2012-01-01

    The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments based on a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. Perspectives on resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st century mechanistic-based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated in an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies into software applications and linked data resources. Bioclipse developers have shown the benefit of interoperability obtained through ontology by being able to link their workbench application with remote OpenTox web services. Although these developments are promising, an increased international coordination of efforts is greatly needed to develop a more unified, standardized, and open toxicology ontology framework.

  2. Framework Design of Unified Cross-Authentication Based on the Fourth Platform Integrated Payment

    NASA Astrophysics Data System (ADS)

    Yong, Xu; Yujin, He

    This essay proposes a unified authentication scheme based on the fourth integrated payment platform. The research aims to improve the compatibility of authentication in electronic business and to provide a reference for the establishment of a credit system by seeking a way to carry out standard unified authentication on an integrated payment platform. The essay introduces the concept of the fourth integrated payment platform and then presents its overall structure and components. The main focus of the essay is the design of the credit system of the fourth integrated payment platform and the PKI/CA structure design.

  3. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture tends to provide centralized solutions to end users, with all required resources offered by large enterprises or special agencies; from the perspective of resource utilization it is therefore a closed framework. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can conveniently package and deploy their models into the cloud, while model users can search, access, and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies (a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services) are discussed in detail, and related experiments are conducted for further verification.

  4. On the equivalence of spherical splines with least-squares collocation and Stokes's formula for regional geoid computation

    NASA Astrophysics Data System (ADS)

    Ophaug, Vegard; Gerlach, Christian

    2017-11-01

    This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree on the millimeter level.
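    As a reference point for the comparison, Stokes's formula expresses the geoid height N from gravity anomalies in its standard global form (the regional modification discussed above alters the kernel):

      N(P) = \frac{R}{4\pi\gamma} \iint_{\sigma} S(\psi)\, \Delta g \, d\sigma,

    where R is the mean Earth radius, \gamma the normal gravity, \psi the spherical distance between the computation point P and the integration element, \Delta g the gravity anomaly, and S(\psi) the Stokes kernel; LSC and the spline-kernel RBFs reproduce the same functional when applied globally, which is why millimeter-level agreement is expected in the closed-loop tests.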

  5. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework

    PubMed Central

    Wang, Xiao-Jing

    2016-01-01

    The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity patterns and behavior that can be modeled, and suggest a unified setting in which diverse cognitive computations and mechanisms can be studied. PMID:26928718
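    A minimal sketch of how Dale's principle can be imposed during gradient-based training, assuming a plain NumPy parameterization rather than the authors' Theano implementation: each unit is assigned a fixed sign, and the effective recurrent weights are a nonnegative matrix multiplied by that sign.

      # Sign-constrained recurrent weights (Dale's principle), illustrative only.
      import numpy as np

      n_units = 100
      frac_exc = 0.8                                  # assumed 80% excitatory units
      signs = np.ones(n_units)
      signs[int(frac_exc * n_units):] = -1.0          # remaining units are inhibitory
      D = np.diag(signs)

      W_plus = np.abs(np.random.randn(n_units, n_units)) * 0.1  # trainable, kept >= 0

      def effective_weights(W_plus):
          # Column j of the effective matrix carries the sign of presynaptic unit j.
          return W_plus @ D

      def rectify(W_plus):
          # After each gradient step, project back onto the nonnegative orthant
          # so excitatory/inhibitory identities never flip during training.
          return np.maximum(W_plus, 0.0)

      W = effective_weights(W_plus)
      r = np.tanh(np.random.randn(n_units))           # toy firing rates
      print((W @ r).shape)

    The projection step after each parameter update is what keeps the excitatory/inhibitory structure intact while still allowing ordinary gradient descent on the unconstrained magnitudes.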

  6. An information model for managing multi-dimensional gridded data in a GIS

    NASA Astrophysics Data System (ADS)

    Xu, H.; Abdul-Kadar, F.; Gao, P.

    2016-04-01

    Earth observation agencies like NASA and NOAA produce huge volumes of historical, near real-time, and forecasting data representing terrestrial, atmospheric, and oceanic phenomena. The data drives climatological and meteorological studies, and underpins operations ranging from weather pattern prediction and forest fire monitoring to global vegetation analysis. These gridded data sets are distributed mostly as files in HDF, GRIB, or netCDF format and quantify variables like precipitation, soil moisture, or sea surface temperature, along one or more dimensions like time and depth. Although the data cube is a well-studied model for storing and analyzing multi-dimensional data, the GIS community remains in need of a solution that simplifies interactions with the data, and elegantly fits with existing database schemas and dissemination protocols. This paper presents an information model that enables Geographic Information Systems (GIS) to efficiently catalog very large heterogeneous collections of geospatially-referenced multi-dimensional rasters—towards providing unified access to the resulting multivariate hypercubes. We show how the implementation of the model encapsulates format-specific variations and provides unified access to data along any dimension. We discuss how this framework lends itself to familiar GIS concepts like image mosaics, vector field visualization, layer animation, distributed data access via web services, and scientific computing. Global data sources like MODIS from USGS and HYCOM from NOAA illustrate how one would employ this framework for cataloging, querying, and intuitively visualizing such hypercubes. ArcGIS—an established platform for processing, analyzing, and visualizing geospatial data—serves to demonstrate how this integration brings the full power of GIS to the scientific community.
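    A sketch of the kind of unified, dimension-agnostic access the information model aims to provide, here using the open-source xarray library with a hypothetical netCDF file and variable name (not the ArcGIS implementation described above):

      # Hypothetical example: slicing a multidimensional sea-surface-temperature cube.
      import xarray as xr

      ds = xr.open_dataset("sst_example.nc")          # hypothetical netCDF file
      sst = ds["sst"]                                 # dims assumed: (time, lat, lon)

      # The same labeled-selection pattern works along any dimension: a time slice,
      # a point time series, or a spatial subset.
      snapshot = sst.sel(time="2016-04-01", method="nearest")
      series = sst.sel(lat=40.0, lon=-70.0, method="nearest")
      subset = sst.sel(lat=slice(30, 50), lon=slice(-80, -60))

      print(snapshot.shape, series.sizes["time"], subset.dims)

    Encapsulating HDF, GRIB, and netCDF behind one such access pattern is precisely the format-independence the information model targets.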

  7. A unified algorithm for predicting partition coefficients for PBPK modeling of drugs and environmental chemicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peyret, Thomas; Poulin, Patrick; Krishnan, Kannan, E-mail: kannan.krishnan@umontreal.ca

    The algorithms in the literature that focus on predicting tissue:blood partition coefficients (P_tb) for environmental chemicals and tissue:plasma partition coefficients based on total (K_p) or unbound concentration (K_pu) for drugs differ in their consideration of binding to hemoglobin, plasma proteins, and charged phospholipids. The objective of the present study was to develop a unified algorithm such that P_tb, K_p, and K_pu for both drugs and environmental chemicals could be predicted. The development of the unified algorithm was accomplished by integrating all mechanistic algorithms previously published to compute the PCs. Furthermore, the algorithm was structured in such a way as to facilitate predictions of the distribution of organic compounds at the macro (i.e., whole tissue) and micro (i.e., cells and fluids) levels. The resulting unified algorithm was applied to compute the rat P_tb, K_p, or K_pu of muscle (n = 174), liver (n = 139), and adipose tissue (n = 141) for acidic, neutral, zwitterionic, and basic drugs as well as ketones, acetate esters, alcohols, aliphatic hydrocarbons, aromatic hydrocarbons, and ethers. The unified algorithm adequately reproduced the values predicted previously by the published algorithms for a total of 142 drugs and chemicals. The sensitivity analysis demonstrated the relative importance of the various compound properties reflective of specific mechanistic determinants relevant to prediction of PC values of drugs and environmental chemicals. Overall, the present unified algorithm uniquely facilitates the computation of macro- and micro-level PCs for developing organ- and cellular-level PBPK models for both chemicals and drugs.
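    To make the notion of a mechanistic partition-coefficient calculation concrete, here is a deliberately simplified neutral-lipid/water partitioning sketch with hypothetical composition values; it omits the hemoglobin, plasma-protein, and charged-phospholipid binding terms that the unified algorithm integrates.

      # Highly simplified tissue:plasma partition coefficient from tissue composition.
      # p_ow: octanol:water partition coefficient of a neutral compound (assumed).
      def simple_kp(p_ow, f_lipid_t, f_water_t, f_lipid_p, f_water_p):
          tissue = f_lipid_t * p_ow + f_water_t
          plasma = f_lipid_p * p_ow + f_water_p
          return tissue / plasma

      # Hypothetical fractional compositions for a lean tissue and for plasma.
      print(simple_kp(p_ow=100.0, f_lipid_t=0.02, f_water_t=0.75,
                      f_lipid_p=0.005, f_water_p=0.94))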

  8. A unified framework for building high performance DVEs

    NASA Astrophysics Data System (ADS)

    Lei, Kaibin; Ma, Zhixia; Xiong, Hua

    2011-10-01

    A unified framework for integrating PC cluster based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration of different scene graphs. This paper proposes a technique for non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability. But general scene graphs are inefficient in parallel rendering. The paper also proposes a technique to connect a DVE and a PC cluster based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.

  9. Unified Behavior Framework for Discrete Event Simulation Systems

    DTIC Science & Technology

    2015-03-26

    SPA: Sense-Plan-Act; SSL: System Service Layer; TCA: Task Control Architecture; TRP: Teleo-Reactive Program; UAV: Unmanned Aerial Vehicle; UBF: Unified Behavior Framework. ... a teleo-reactive architecture [11]. Teleo-Reactive Programs (TRPs) are composed of a list of rules, where each has a condition and an action. ...

  10. Evolutionary game theory meets social science: is there a unifying rule for human cooperation?

    PubMed

    Rosas, Alejandro

    2010-05-21

    Evolutionary game theory has shown that human cooperation thrives in different types of social interactions with a prisoner's dilemma (PD) structure. Models treat the cooperative strategies within the different frameworks as discrete entities and sometimes even as contenders. Whereas strong reciprocity was acclaimed as superior to classic reciprocity for its ability to defeat defectors in public goods games, recent experiments and simulations show that costly punishment fails to promote cooperation in the indirect reciprocity (IR) and direct reciprocity (DR) games, where classic reciprocity succeeds. My aim is to show that cooperative strategies across frameworks are capable of a unified treatment, for they are governed by a common underlying rule or norm. An analysis of the reputation and action rules that govern some representative cooperative strategies, both in models and in economic experiments, confirms that the different frameworks share a conditional action rule and several reputation rules. The common conditional rule contains an option between costly punishment and withholding benefits that provides alternative enforcement methods against defectors. Depending on the framework, individuals can switch to the appropriate strategy and method of enforcement. The stability of human cooperation looks more promising if one mechanism controls successful strategies across frameworks. Published by Elsevier Ltd.
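    A toy simulation of the conditional rule discussed above, contrasting a withholding-benefits reciprocator with an unconditional defector in a repeated prisoner's dilemma; the payoff values and strategy details are illustrative assumptions, not the models reviewed in the paper.

      # Iterated prisoner's dilemma: conditional cooperation by withholding benefits.
      PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

      def reciprocator(history):
          # Cooperate first, then withhold benefits from a partner who defected last round.
          return "C" if not history or history[-1][1] == "C" else "D"

      def defector(history):
          return "D"

      def play(strategy_a, strategy_b, rounds=50):
          history, score_a, score_b = [], 0, 0
          for _ in range(rounds):
              a = strategy_a(history)
              b = strategy_b([(y, x) for x, y in history])  # partner's view of history
              pa, pb = PAYOFF[(a, b)]
              score_a, score_b = score_a + pa, score_b + pb
              history.append((a, b))
          return score_a, score_b

      print(play(reciprocator, defector))      # the conditional rule limits exploitation
      print(play(reciprocator, reciprocator))  # mutual cooperation is sustained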

  11. General System Theory: Toward a Conceptual Framework for Science and Technology Education for All.

    ERIC Educational Resources Information Center

    Chen, David; Stroup, Walter

    1993-01-01

    Suggests using general system theory as a unifying theoretical framework for science and technology education for all. Five reasons are articulated: the multidisciplinary nature of systems theory, the ability to engage complexity, the capacity to describe system dynamics, the ability to represent the relationship between microlevel and…

  12. Making Learning Personally Meaningful: A New Framework for Relevance Research

    ERIC Educational Resources Information Center

    Priniski, Stacy J.; Hecht, Cameron A.; Harackiewicz, Judith M.

    2018-01-01

    Personal relevance goes by many names in the motivation literature, stemming from a number of theoretical frameworks. Currently these lines of research are being conducted in parallel with little synthesis across them, perhaps because there is no unifying definition of the relevance construct within which this research can be situated. In this…

  13. Unifying Different Theories of Learning: Theoretical Framework and Empirical Evidence

    ERIC Educational Resources Information Center

    Phan, Huy Phuong

    2008-01-01

    The main aim of this research study was to test out a conceptual model encompassing the theoretical frameworks of achievement goals, study processing strategies, effort, and reflective thinking practice. In particular, it was postulated that the causal influences of achievement goals on academic performance are direct and indirect through study…

  14. Enabling Curriculum Change in Physical Education: The Interplay between Policy Constructors and Practitioners

    ERIC Educational Resources Information Center

    MacLean, Justine; Mulholland, Rosemary; Gray, Shirley; Horrell, Andrew

    2015-01-01

    Background: Curriculum for Excellence, a new national policy initiative in Scottish Schools, provides a unified curricular framework for children aged 3-18. Within this framework, Physical Education (PE) now forms part of a collective alongside physical activity and sport, subsumed by the newly created curriculum area of "Health and…

  15. Trichotomous processes in early memory development, aging, and neurocognitive impairment: a unified theory.

    PubMed

    Brainerd, C J; Reyna, V F; Howe, M L

    2009-10-01

    One of the most extensively investigated topics in the adult memory literature, dual memory processes, has had virtually no impact on the study of early memory development. The authors remove the key obstacles to such research by formulating a trichotomous theory of recall that combines the traditional dual processes of recollection and familiarity with a reconstruction process. The theory is then embedded in a hidden Markov model that measures all 3 processes with low-burden tasks that are appropriate for even young children. These techniques are applied to a large corpus of developmental studies of recall, yielding stable findings about the emergence of dual memory processes between childhood and young adulthood and generating tests of many theoretical predictions. The techniques are extended to the study of healthy aging and to the memory sequelae of common forms of neurocognitive impairment, resulting in a theoretical framework that is unified over 4 major domains of memory research: early development, mainstream adult research, aging, and neurocognitive impairment. The techniques are also extended to recognition, creating a unified dual process framework for recall and recognition.

  16. Functional dissociation of stimulus intensity encoding and predictive coding of pain in the insula

    PubMed Central

    Geuter, Stephan; Boll, Sabrina; Eippert, Falk; Büchel, Christian

    2017-01-01

    The computational principles by which the brain creates a painful experience from nociception are still unknown. Classic theories suggest that cortical regions either reflect stimulus intensity or additive effects of intensity and expectations, respectively. By contrast, predictive coding theories provide a unified framework explaining how perception is shaped by the integration of beliefs about the world with mismatches resulting from the comparison of these beliefs against sensory input. Using functional magnetic resonance imaging during a probabilistic heat pain paradigm, we investigated which computations underlie pain perception. Skin conductance, pupil dilation, and anterior insula responses to cued pain stimuli strictly followed the response patterns hypothesized by the predictive coding model, whereas the posterior insula encoded stimulus intensity. This functional dissociation of pain processing within the insula, together with previously observed alterations in chronic pain, offers a novel interpretation of aberrant pain processing as a disturbed weighting of predictions and prediction errors. DOI: http://dx.doi.org/10.7554/eLife.24770.001 PMID:28524817
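    In its simplest Gaussian form (a textbook formulation, not the specific model fitted in this study), predictive coding combines a prior expectation with sensory input according to their precisions \pi:

      \hat{x} = \mu_{\text{prior}} + \frac{\pi_{\text{sens}}}{\pi_{\text{prior}} + \pi_{\text{sens}}}\,(s - \mu_{\text{prior}}),

    so a percept is the prediction corrected by a precision-weighted prediction error (s - \mu_{\text{prior}}); the "disturbed weighting" interpretation above corresponds to an aberrant value of that precision ratio.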

  17. Time-variant random interval natural frequency analysis of structures

    NASA Astrophysics Data System (ADS)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random-interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the advantages of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertain variables are presented to demonstrate the applicability, accuracy, and efficiency of the proposed method.
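    A toy version of the interval part of the idea for a single-degree-of-freedom oscillator, assuming the stiffness is an interval variable: a Chebyshev surrogate of the natural frequency is fitted over the interval and then scanned for its extreme bounds. This is illustrative only; the paper treats hybrid random and interval variables together.

      # Chebyshev surrogate of a natural frequency over an interval stiffness.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      m = 10.0                         # mass [kg], assumed deterministic here
      k_lo, k_hi = 900.0, 1100.0       # interval stiffness [N/m]

      def natural_freq(k):
          return np.sqrt(k / m)        # rad/s for a single-degree-of-freedom system

      # Fit a low-order Chebyshev surrogate on Chebyshev nodes mapped to [k_lo, k_hi].
      nodes = C.chebpts1(8)
      k_nodes = 0.5 * (k_hi - k_lo) * nodes + 0.5 * (k_hi + k_lo)
      coeffs = C.chebfit(nodes, natural_freq(k_nodes), deg=5)

      # Scan the surrogate to capture the extreme bounds of the frequency.
      grid = np.linspace(-1.0, 1.0, 2001)
      vals = C.chebval(grid, coeffs)
      print("frequency bounds [rad/s]:", vals.min(), vals.max())

    The surrogate replaces repeated evaluations of an expensive structural model, which is where the reported reduction in computational cost comes from.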

  18. The software/wetware distinction. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    NASA Astrophysics Data System (ADS)

    Dennett, Daniel

    2014-09-01

    Fitch [5] has not only articulated a growing consensus, after decades of ideological quarreling, about how to put cognitive science together, but in the process has attempted to advance the unification process with some bold strokes of his own. His proposal [4] that we take seriously the perspective which replaces "spherical neurons" (McCulloch-Pitts logical neurons and their close kin) with neurons that are micro-agents with agendas and computational talents of their own, has been taken up by a variety of theorists, including myself [2,3]. Now his dendrophilia hypothesis promises to distill the core truths energizing the heated debates about the innate equipment that distinguishes the cognitive competences of our species from all others. Whether this promise can be kept is a wide-open empirical question, but Fitch has given us enough specification to justify a serious investment in answering it.

  19. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  20. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hero, Alfred O.; Rajaratnam, Bala

    When can reliable inference be drawn in the "Big Data" context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
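    A small numerical illustration of the sample-starved regime, assuming pure-noise data with n far smaller than p: even with no true correlations, many large sample correlations appear unless the screening threshold is chosen with the high-dimensional rates in mind.

      # Spurious correlations in the n << p regime (pure-noise illustration).
      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 20, 1000                        # 20 samples, 1000 variables, no true signal
      X = rng.standard_normal((n, p))

      R = np.corrcoef(X, rowvar=False)       # p x p sample correlation matrix
      upper = np.abs(R[np.triu_indices(p, k=1)])

      for thresh in (0.5, 0.6, 0.7):
          print(f"|r| > {thresh}: {int((upper > thresh).sum())} variable pairs")
      # All of these discoveries are false; their count grows rapidly with p at fixed n.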

  1. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    PubMed Central

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  2. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE PAGES

    Hero, Alfred O.; Rajaratnam, Bala

    2015-12-09

    When can reliable inference be drawn in the "Big Data" context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  3. A Systematic Framework and Nanoperiodic Concept for Unifying Nanoscience: Hard/Soft Nanoelements, Superatoms, Meta-Atoms, New Emerging Properties, Periodic Property Patterns, and Predictive Mendeleev-like Nanoperiodic Tables.

    PubMed

    Tomalia, Donald A; Khanna, Shiv N

    2016-02-24

    Development of a central paradigm is undoubtedly the single most influential force responsible for advancing Dalton's 19th century atomic/molecular chemistry concepts to the current maturity enjoyed by traditional chemistry. A similar central dogma for guiding and unifying nanoscience has been missing. This review traces the origins, evolution, and current status of such a critical nanoperiodic concept/framework for defining and unifying nanoscience. Based on parallel efforts and a mutual consensus now shared by both chemists and physicists, a nanoperiodic/systematic framework concept has emerged. This concept is based on the well-documented existence of discrete, nanoscale collections of traditional inorganic/organic atoms referred to as hard and soft superatoms (i.e., nanoelement categories). These nanometric entities are widely recognized to exhibit nanoscale atom mimicry features reminiscent of traditional picoscale atoms. All unique superatom/nanoelement physicochemical features are derived from quantized structural control defined by six critical nanoscale design parameters (CNDPs), namely, size, shape, surface chemistry, flexibility/rigidity, architecture, and elemental composition. These CNDPs determine all intrinsic superatom properties, their combining behavior to form stoichiometric nanocompounds/assemblies as well as to exhibit nanoperiodic properties leading to new nanoperiodic rules and predictive Mendeleev-like nanoperiodic tables, and they portend possible extension of these principles to larger quantized building blocks including meta-atoms.

  4. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-off on Phenotype Robustness in Biological Networks Part I: Gene Regulatory Networks in Systems and Evolutionary Biology

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties observed in biological systems at different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be enough to confer intrinsic robustness in order to tolerate intrinsic parameter fluctuations, genetic robustness for buffering genetic variations, and environmental robustness for resisting environmental disturbances. With this, the phenotypic stability of biological network can be maintained, thus guaranteeing phenotype robustness. This paper presents a survey on biological systems and then develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation in systems and evolutionary biology. Further, from the unifying mathematical framework, it was discovered that the phenotype robustness criterion for biological networks at different levels relies upon intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness. When this is true, the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in systems and evolutionary biology can also be investigated through their corresponding phenotype robustness criterion from the systematic point of view. PMID:23515240
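    The phenotype robustness criterion stated above can be written compactly as

      r_{\text{intrinsic}} + r_{\text{genetic}} + r_{\text{environmental}} \;\le\; r_{\text{network}},

    i.e., a biological network maintains its phenotype only when its network robustness is large enough to absorb the combined demands of intrinsic parameter fluctuations, genetic variations, and environmental disturbances.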

  5. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-off on Phenotype Robustness in Biological Networks Part I: Gene Regulatory Networks in Systems and Evolutionary Biology.

    PubMed

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties observed in biological systems at different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be enough to confer intrinsic robustness in order to tolerate intrinsic parameter fluctuations, genetic robustness for buffering genetic variations, and environmental robustness for resisting environmental disturbances. With this, the phenotypic stability of biological network can be maintained, thus guaranteeing phenotype robustness. This paper presents a survey on biological systems and then develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation in systems and evolutionary biology. Further, from the unifying mathematical framework, it was discovered that the phenotype robustness criterion for biological networks at different levels relies upon intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness. When this is true, the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in systems and evolutionary biology can also be investigated through their corresponding phenotype robustness criterion from the systematic point of view.

  6. A new view of Baryon symmetric cosmology based on grand unified theories

    NASA Technical Reports Server (NTRS)

    Stecker, F. W.

    1981-01-01

    Within the framework of grand unified theories, it is shown how spontaneous CP violation leads to a domain structure in the universe with the domains evolving into separate regions of matter and antimatter excesses. Subsequent to exponential horizon growth, this can result in a universe of matter galaxies and antimatter galaxies. Various astrophysical data appear to favor this form of big bang cosmology. Future direct tests for cosmologically significant antimatter are discussed.

  7. A model for calculating expected performance of the Apollo unified S-band (USB) communication system

    NASA Technical Reports Server (NTRS)

    Schroeder, N. W.

    1971-01-01

    A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.
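    Although the report's model is specific to the Apollo USB, the general form of such an expected-performance calculation is a link budget; the sketch below uses generic, hypothetical values in decibel units and is not taken from the Apollo figures.

      # Generic S-band downlink budget sketch (hypothetical values, not Apollo data).
      import math

      BOLTZMANN_DBW = -228.6          # Boltzmann's constant in dBW/(K*Hz)

      def received_margin_db(eirp_dbw, path_loss_db, rx_gain_dbi,
                             system_noise_temp_k, data_rate_bps, required_ebn0_db):
          carrier_dbw = eirp_dbw - path_loss_db + rx_gain_dbi
          noise_density_dbw_hz = BOLTZMANN_DBW + 10 * math.log10(system_noise_temp_k)
          ebn0_db = carrier_dbw - noise_density_dbw_hz - 10 * math.log10(data_rate_bps)
          return ebn0_db - required_ebn0_db      # positive margin means the link closes

      print(received_margin_db(eirp_dbw=20.0, path_loss_db=211.0, rx_gain_dbi=52.0,
                               system_noise_temp_k=200.0, data_rate_bps=51_200,
                               required_ebn0_db=9.6))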

  8. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies

    PubMed Central

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A.

    2016-01-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving “live partial-area taxonomies” is demonstrated. PMID:27345947
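    A rough sketch of the grouping idea behind partial-area taxonomies, assuming a toy concept set in which each concept is summarized by its set of relationship types; this is a simplification of the OAF derivation, shown only to convey the summarization step.

      # Group concepts by their set of relationship types (toy partial-area-style summary).
      from collections import defaultdict

      # Hypothetical concepts mapped to the relationship types they use.
      concepts = {
          "Appendicitis": {"finding-site", "associated-morphology"},
          "Gastritis": {"finding-site", "associated-morphology"},
          "Fracture of femur": {"finding-site", "associated-morphology", "severity"},
          "Hypertension": {"finding-site"},
      }

      areas = defaultdict(list)
      for name, rel_types in concepts.items():
          areas[frozenset(rel_types)].append(name)

      for rel_types, members in areas.items():
          print(sorted(rel_types), "->", members)

    Concepts that share the same relationship-type signature land in the same group, which is the kind of structural summary a quality-assurance reviewer can audit far more quickly than the full ontology.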

  9. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies.

    PubMed

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A

    2016-08-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving "live partial-area taxonomies" is demonstrated. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. North American Science Symposium: Toward a unified framework for inventorying and monitoring forest ecosystem resources

    Treesearch

    Celedonio Aguirre-Bravo; Carlos Rodriguez Franco

    1999-01-01

    The general objective of this Symposium was to build on the best science and technology available to assure that the data and information produced in future inventory and monitoring programs are comparable, quality assured, available, and adequate for their intended purposes, thereby providing a reliable framework for characterization, assessment, and management of...

  11. Students and Teacher Academic Evaluation Perceptions: Methodology to Construct a Representation Based on Actionable Knowledge Discovery Framework

    ERIC Educational Resources Information Center

    Molina, Otilia Alejandro; Ratté, Sylvie

    2017-01-01

    This research introduces a method to construct a unified representation of teachers and students perspectives based on the actionable knowledge discovery (AKD) and delivery framework. The representation is constructed using two models: one obtained from student evaluations and the other obtained from teachers' reflections about their teaching…

  12. A high-resolution bioclimate map of the world: a unifying framework for global biodiversity research and monitoring

    USGS Publications Warehouse

    Metzger, Marc J.; Bunce, Robert G.H.; Jongman, Rob H.G.; Sayre, Roger G.; Trabucco, Antonio; Zomer, Robert

    2013-01-01

    Main conclusions: The GEnS provides a robust spatial analytical framework for the aggregation of local observations, identification of gaps in current monitoring efforts and systematic design of complementary and new monitoring and research. The dataset is available for non-commercial use through the GEO portal (http://www.geoportal.org).

  13. The Nation's Report Card Science 2009 Trial Urban District Snapshot Report. Fresno Unified School District. Grade 8, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2011

    2011-01-01

    Guided by a new framework, the National Assessment of Educational Progress (NAEP) science assessment was updated in 2009 to keep the content current with key developments in science, curriculum standards, assessments, and research. The 2009 framework organizes science content into three broad content areas. Physical science includes concepts…

  14. Teaching Introductory Business Statistics Using the DCOVA Framework

    ERIC Educational Resources Information Center

    Levine, David M.; Stephan, David F.

    2011-01-01

    Introductory business statistics students often receive little guidance on how to apply the methods they learn to further business objectives they may one day face. And those students may fail to see the continuity among the topics taught in an introductory course if they learn those methods outside a context that provides a unifying framework.…

  15. The Nation's Report Card Science 2009 Trial Urban District Snapshot Report. Fresno Unified School District. Grade 4, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2011

    2011-01-01

    Guided by a new framework, the National Assessment of Educational Progress (NAEP) science assessment was updated in 2009 to keep the content current with key developments in science, curriculum standards, assessments, and research. The 2009 framework organizes science content into three broad content areas. Physical science includes concepts…

  16. Evaluating Health Information Systems Using Ontologies

    PubMed Central

    Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-01-01

    Background There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. Objectives The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems—whether similar or heterogeneous—by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. Methods On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. Results The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. Conclusions The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems. PMID:27311735
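    A toy sketch of the aggregation idea, with hypothetical quality-attribute categories: attributes from several systems are organized into one shared tree, and evaluation aspects are extracted by cutting the tree at a chosen depth. This is only an illustration of the principle, not the UVON implementation.

      # Toy UVON-style idea: a shared category tree of quality attributes, cut at a depth.
      tree = {
          "Usability": {"Learnability": {}, "Satisfaction": {}},
          "Safety": {"Data protection": {}, "Clinical risk": {}},
          "Effectiveness": {"Health outcome": {}},
      }

      def aspects_at_depth(node, depth):
          # Depth 0 returns the broad aspects; deeper cuts return more specific ones.
          if depth == 0:
              return list(node.keys())
          found = []
          for child in node.values():
              found += aspects_at_depth(child, depth - 1)
          return found

      print(aspects_at_depth(tree, 0))   # broad aspects, comparable across systems
      print(aspects_at_depth(tree, 1))   # more specific aspects for detailed evaluation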

  17. Evaluating Health Information Systems Using Ontologies.

    PubMed

    Eivazzadeh, Shahryar; Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-06-16

    There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems-whether similar or heterogeneous-by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems.

  18. Integration of Multidisciplinary Sensory Data:

    PubMed Central

    Miller, Perry L.; Nadkarni, Prakash; Singer, Michael; Marenco, Luis; Hines, Michael; Shepherd, Gordon

    2001-01-01

    The paper provides an overview of neuroinformatics research at Yale University being performed as part of the national Human Brain Project. This research is exploring the integration of multidisciplinary sensory data, using the olfactory system as a model domain. The neuroinformatics activities fall into three main areas: 1) building databases and related tools that support experimental olfactory research at Yale and can also serve as resources for the field as a whole, 2) using computer models (molecular models and neuronal models) to help understand data being collected experimentally and to help guide further laboratory experiments, 3) performing basic neuroinformatics research to develop new informatics technologies, including a flexible data model (EAV/CR, entity-attribute-value with classes and relationships) designed to facilitate the integration of diverse heterogeneous data within a single unifying framework. PMID:11141511
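    The EAV idea of storing heterogeneous attributes in a single schema can be sketched with a minimal entity-attribute-value table; this is a generic illustration of the EAV part of EAV/CR, not the actual Yale schema.

      # Minimal entity-attribute-value store, illustrating the EAV part of EAV/CR.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("""
          CREATE TABLE eav (
              entity_id  INTEGER,
              attribute  TEXT,
              value      TEXT
          )
      """)
      rows = [
          (1, "class", "olfactory_receptor_neuron"),
          (1, "resting_potential_mV", "-65"),
          (2, "class", "odorant_molecule"),
          (2, "molecular_weight", "136.23"),
      ]
      con.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

      # Heterogeneous data types share one schema; queries pivot on attribute names.
      for row in con.execute("SELECT entity_id, value FROM eav WHERE attribute = 'class'"):
          print(row)

    The "classes and relationships" extension adds typing and links on top of this flat triple store, which is what makes diverse experimental data integrable within one framework.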

  19. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
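    A stripped-down sketch of the setting: two kernels are combined with fixed weights and the LS-SVM dual system (function-estimation form) is solved directly. The paper optimizes the kernel weights and regularization parameter jointly via semidefinite programming; here the weights are simply assumed, to show where they enter.

      # LS-SVM with a fixed convex combination of two kernels (illustrative only).
      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.standard_normal((60, 2))
      y = np.sign(X[:, 0] + X[:, 1])                 # toy labels in {-1, +1}

      def rbf(A, B, gamma=1.0):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def linear(A, B):
          return A @ B.T

      mu = (0.7, 0.3)                                # assumed kernel weights
      C_reg = 10.0                                   # regularization parameter

      K = mu[0] * rbf(X, X) + mu[1] * linear(X, X)
      n = len(y)

      # LS-SVM dual: solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
      A = np.zeros((n + 1, n + 1))
      A[0, 1:] = 1.0
      A[1:, 0] = 1.0
      A[1:, 1:] = K + np.eye(n) / C_reg
      rhs = np.concatenate(([0.0], y))
      sol = np.linalg.solve(A, rhs)
      b, alpha = sol[0], sol[1:]

      pred = np.sign(K @ alpha + b)
      print("training accuracy:", (pred == y).mean())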

  20. Gradient calculations for dynamic recurrent neural networks: a survey.

    PubMed

    Pearlmutter, B A

    1995-01-01

    Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The authors discuss fixed point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and nonfixed point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continues with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks. The author presents some simulations and, at the end, addresses issues of computational complexity and learning speed.
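    A bare-bones backpropagation-through-time sketch for a tiny clocked recurrent network, included to make the unrolled gradient computation concrete; this is a generic tanh RNN, not any specific architecture from the survey.

      # Backpropagation through time for a tiny tanh RNN (illustrative).
      import numpy as np

      rng = np.random.default_rng(0)
      T, n_in, n_hid = 20, 3, 5
      Wx = rng.standard_normal((n_hid, n_in)) * 0.1
      Wh = rng.standard_normal((n_hid, n_hid)) * 0.1
      wo = rng.standard_normal(n_hid) * 0.1

      xs = rng.standard_normal((T, n_in))
      target = 1.0                                   # scalar target on the last output

      # Forward pass, storing hidden states for the backward sweep.
      hs = [np.zeros(n_hid)]
      for t in range(T):
          hs.append(np.tanh(Wx @ xs[t] + Wh @ hs[-1]))
      y = wo @ hs[-1]
      loss = 0.5 * (y - target) ** 2

      # Backward pass: propagate dL/dh back through the unrolled time steps.
      dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
      dwo = (y - target) * hs[-1]
      dh = (y - target) * wo
      for t in reversed(range(T)):
          dpre = dh * (1.0 - hs[t + 1] ** 2)         # derivative of tanh
          dWx += np.outer(dpre, xs[t])
          dWh += np.outer(dpre, hs[t])
          dh = Wh.T @ dpre                           # carry gradient to the previous step

      print("loss:", float(loss), "||dWh||:", float(np.linalg.norm(dWh)))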

  1. Towards a general object-oriented software development methodology

    NASA Technical Reports Server (NTRS)

    Seidewitz, ED; Stark, Mike

    1986-01-01

    Object diagrams were used to design a 5000 statement team training exercise and to design the entire dynamics simulator. The object diagrams are also being used to design another 50,000 statement Ada system and a personal computer based system that will be written in Modula II. The design methodology evolves out of these experiences as well as the limitations of other methods that were studied. Object diagrams, abstraction analysis, and associated principles provide a unified framework which encompasses concepts from Yourdon, Booch, and Cherry. This general object-oriented approach handles high level system design, possibly with concurrency, through object-oriented decomposition down to a completely functional level. How object-oriented concepts can be used in other phases of the software life-cycle, such as specification and testing, is being studied concurrently.

  2. Scoring functions for protein-protein interactions.

    PubMed

    Moal, Iain H; Moretti, Rocco; Baker, David; Fernández-Recio, Juan

    2013-12-01

    The computational evaluation of protein-protein interactions will play an important role in organising the wealth of data being generated by high-throughput initiatives. Here we discuss future applications, report recent developments and identify areas requiring further investigation. Many functions have been developed to quantify the structural and energetic properties of interacting proteins, finding use in interrelated challenges revolving around the relationship between sequence, structure and binding free energy. These include loop modelling, side-chain refinement, docking, multimer assembly, affinity prediction, affinity change upon mutation, hotspots location and interface design. Information derived from models optimised for one of these challenges can be used to benefit the others, and can be unified within the theoretical frameworks of multi-task learning and Pareto-optimal multi-objective learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Representing nursing guideline with unified modeling language to facilitate development of a computer system: a case study.

    PubMed

    Choi, Jeeyae; Choi, Jeungok E

    2014-01-01

    To provide best recommendations at the point of care, guidelines have been implemented in computer systems. As a prerequisite, guidelines are translated into a computer-interpretable guideline format. Since there are no specific tools to translate nursing guidelines, only a few nursing guidelines are translated and implemented in computer systems. Unified modeling language (UML) is a software writing language and is known to well and accurately represent end-users' perspective, due to the expressive characteristics of the UML. In order to facilitate the development of computer systems for nurses' use, the UML was used to translate a paper-based nursing guideline, and its ease of use and the usefulness were tested through a case study of a genetic counseling guideline. The UML was found to be a useful tool to nurse informaticians and a sufficient tool to model a guideline in a computer program.

  4. Beyond Containment and Deterrence: A Security Framework for Europe in the 21st Century

    DTIC Science & Technology

    1990-04-02

    decades of the 21st Century in Europe, and examines ... Poland, and parts of France and Russia, but it did not truly unify Germany. Bismarck unified only parts of Germany which he could constrain under ... Europe, Central Europe, the Balkans, and the Soviet Union. Central Europe includes West Germany, East Germany, Austria, Czechoslovakia, Poland, and

  5. Towards a Unified Description of the Electroweak Nuclear Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benhar, Omar; Lovato, Alessandro

    2015-06-01

    We briefly review the growing efforts to set up a unified framework for the description of neutrino interactions with atomic nuclei and nuclear matter, applicable in the broad kinematical region corresponding to neutrino energies ranging between a few MeV and a few GeV. The emerging picture suggests that the formalism of nuclear many-body theory (NMBT) can be exploited to obtain the neutrino-nucleus cross-sections needed for both the interpretation of oscillation signals and simulations of neutrino transport in compact stars.

  6. A theoretical formulation of wave-vortex interactions

    NASA Technical Reports Server (NTRS)

    Wu, J. Z.; Wu, J. M.

    1989-01-01

    A unified theoretical formulation for wave-vortex interaction, designated the '(omega, Pi) framework,' is presented. Based on the orthogonal decomposition of fluid dynamic interactions, the formulation can be used to study a variety of problems, including the interaction of a longitudinal (acoustic) wave and/or transverse (vortical) wave with a main vortex flow. Moreover, the formulation permits a unified treatment of wave-vortex interaction at various levels of approximation, where the normal 'piston' process and tangential 'rubbing' process can be approximated differently.

  7. Heuristics for the inversion median problem

    PubMed Central

    2010-01-01

    Background: The study of genome rearrangements has become a mainstay of phylogenetics and comparative genomics. Fundamental in such a study is the median problem: given three genomes find a fourth that minimizes the sum of the evolutionary distances between itself and the given three. Many exact algorithms and heuristics have been developed for the inversion median problem, of which the best known is MGR. Results: We present a unifying framework for median heuristics, which enables us to clarify existing strategies and to place them in a partial ordering. Analysis of this framework leads to a new insight: the best strategies continue to refer to the input data rather than reducing the problem to smaller instances. Using this insight, we develop a new heuristic for inversion medians that uses input data to the end of its computation and leverages our previous work with DCJ medians. Finally, we present the results of extensive experimentation showing that our new heuristic outperforms all others in accuracy and, especially, in running time: the heuristic typically returns solutions within 1% of optimal and runs in seconds to minutes even on genomes with 25,000 genes--in contrast, MGR can take days on instances of 200 genes and cannot be used beyond 1,000 genes. Conclusion: Finding good rearrangement medians, in particular inversion medians, had long been regarded as the computational bottleneck in whole-genome studies. Our new heuristic for inversion medians, ASM, which dominates all others in our framework, puts that issue to rest by providing near-optimal solutions within seconds to minutes on even the largest genomes. PMID:20122203
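
    The median objective itself is easy to state in code. The toy sketch below scores a candidate median as the sum of its distances to the three input genomes and improves it by greedy single-inversion moves, using breakpoint distance on signed permutations as a cheap stand-in for true inversion distance; the genomes, the distance proxy, and the search strategy are illustrative only and are not the ASM heuristic described above.

```python
def adjacencies(g):
    """Oriented adjacencies of a signed permutation, with end caps 0 and n+1."""
    ext = [0] + list(g) + [len(g) + 1]
    # (a, b) and (-b, -a) describe the same adjacency read in opposite directions
    return {min((a, b), (-b, -a)) for a, b in zip(ext, ext[1:])}

def breakpoint_distance(p, q):
    """Number of adjacencies of p absent from q (a crude proxy for inversion distance)."""
    return len(adjacencies(p) - adjacencies(q))

def invert(g, i, j):
    """Apply a signed inversion: reverse and negate the segment g[i..j]."""
    return g[:i] + [-x for x in reversed(g[i:j + 1])] + g[j + 1:]

def median_score(m, genomes):
    return sum(breakpoint_distance(m, g) for g in genomes)

def greedy_median(genomes):
    """Hill-climb from one input genome using single-inversion moves."""
    m = list(genomes[0])
    best = median_score(m, genomes)
    improved = True
    while improved:
        improved = False
        for i in range(len(m)):
            for j in range(i, len(m)):
                cand = invert(m, i, j)
                s = median_score(cand, genomes)
                if s < best:
                    m, best, improved = cand, s, True
    return m, best

g1 = [1, 2, 3, 4, 5]
g2 = [1, -3, -2, 4, 5]           # g1 with the segment 2..3 inverted
g3 = [1, 2, -5, -4, -3]          # g1 with the segment 3..5 inverted
print(greedy_median([g1, g2, g3]))   # candidate median and its breakpoint score
```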

  8. Semiotics, Information Science, Documents and Computers.

    ERIC Educational Resources Information Center

    Warner, Julian

    1990-01-01

    Discusses the relationship and value of semiotics to the established domains of information science. Highlights include documentation; computer operations; the language of computing; automata theory; linguistics; speech and writing; and the written language as a unifying principle for the document and the computer. (93 references) (LRW)

  9. Dual-Arm Generalized Compliant Motion With Shared Control

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1994-01-01

    Dual-Arm Generalized Compliant Motion (DAGCM) primitive computer program implementing improved unified control scheme for two manipulator arms cooperating in task in which both grasp same object. Provides capabilities for autonomous, teleoperation, and shared control of two robot arms. Unifies cooperative dual-arm control with multi-sensor-based task control and makes complete task-control capability available to higher-level task-planning computer system via large set of input parameters used to describe desired force and position trajectories followed by manipulator arms. Some concepts discussed in "A Generalized-Compliant-Motion Primitive" (NPO-18134).

  10. Unified Bayesian Estimator of EEG Reference at Infinity: rREST (Regularized Reference Electrode Standardization Technique)

    PubMed Central

    Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A.

    2018-01-01

    The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. It also reveals that realistic volume conductor models improve the performances of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance. PMID:29780302
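
    The flavour of the unified estimator can be conveyed with a generic regularized linear inverse problem solved by maximum a posteriori estimation, with the regularization parameter chosen by Generalized Cross-Validation (GCV); the forward matrix and data below are synthetic placeholders, not the EEG lead fields or reference operators of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_src = 30, 20
A = rng.standard_normal((n_obs, n_src))            # stand-in forward operator
v_true = rng.standard_normal(n_src)
y = A @ v_true + 0.3 * rng.standard_normal(n_obs)  # noisy "recordings"

def map_estimate(A, y, lam):
    """Ridge-type MAP solution v = (A'A + lam*I)^(-1) A'y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

def gcv(A, y, lam):
    """Generalized Cross-Validation score for a given regularization parameter."""
    n = A.shape[0]
    S = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)   # hat matrix
    resid = y - S @ y
    return (resid @ resid / n) / (1 - np.trace(S) / n) ** 2

lams = np.logspace(-3, 2, 50)
lam_best = min(lams, key=lambda l: gcv(A, y, l))   # lambda minimizing GCV
v_hat = map_estimate(A, y, lam_best)
print("selected lambda:", lam_best,
      "relative error:", np.linalg.norm(v_hat - v_true) / np.linalg.norm(v_true))
```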

  11. In quest of a systematic framework for unifying and defining nanoscience

    PubMed Central

    2009-01-01

    This article proposes a systematic framework for unifying and defining nanoscience based on historic first principles and step logic that led to a “central paradigm” (i.e., unifying framework) for traditional elemental/small-molecule chemistry. As such, a Nanomaterials classification roadmap is proposed, which divides all nanomatter into Category I: discrete, well-defined and Category II: statistical, undefined nanoparticles. We consider only Category I, well-defined nanoparticles which are >90% monodisperse as a function of Critical Nanoscale Design Parameters (CNDPs) defined according to: (a) size, (b) shape, (c) surface chemistry, (d) flexibility, and (e) elemental composition. Classified as either hard (H) (i.e., inorganic-based) or soft (S) (i.e., organic-based) categories, these nanoparticles were found to manifest pervasive atom mimicry features that included: (1) a dominance of zero-dimensional (0D) core–shell nanoarchitectures, (2) the ability to self-assemble or chemically bond as discrete, quantized nanounits, and (3) exhibited well-defined nanoscale valencies and stoichiometries reminiscent of atom-based elements. These discrete nanoparticle categories are referred to as hard or soft particle nanoelements. Many examples describing chemical bonding/assembly of these nanoelements have been reported in the literature. We refer to these hard:hard (H-n:H-n), soft:soft (S-n:S-n), or hard:soft (H-n:S-n) nanoelement combinations as nanocompounds. Due to their quantized features, many nanoelement and nanocompound categories are reported to exhibit well-defined nanoperiodic property patterns. These periodic property patterns are dependent on their quantized nanofeatures (CNDPs) and dramatically influence intrinsic physicochemical properties (i.e., melting points, reactivity/self-assembly, sterics, and nanoencapsulation), as well as important functional/performance properties (i.e., magnetic, photonic, electronic, and toxicologic properties). We propose this perspective as a modest first step toward more clearly defining synthetic nanochemistry as well as providing a systematic framework for unifying nanoscience. With further progress, one should anticipate the evolution of future nanoperiodic table(s) suitable for predicting important risk/benefit boundaries in the field of nanoscience. Electronic supplementary material The online version of this article (doi:10.1007/s11051-009-9632-z) contains supplementary material, which is available to authorized users. PMID:21170133

  12. Steepest entropy ascent model for far-nonequilibrium thermodynamics: Unified implementation of the maximum entropy production principle

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo

    2014-10-01

    By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which to measure the length of a trajectory in state space. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to the Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent, numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium states. The mathematical frameworks we consider are the following: (A) statistical or information-theoretic models of relaxation; (B) small-scale and rarefied gas dynamics (i.e., kinetic models for the Boltzmann equation); (C) rational extended thermodynamics, macroscopic nonequilibrium thermodynamics, and chemical kinetics; (D) mesoscopic nonequilibrium thermodynamics, continuum mechanics with fluctuations; and (E) quantum statistical mechanics, quantum thermodynamics, mesoscopic nonequilibrium quantum thermodynamics, and intrinsic quantum thermodynamics.
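
    A numerical toy of the steepest-entropy-ascent idea for a discrete probability distribution is sketched below, using the identity metric and Euclidean projection of the entropy gradient onto the conservation constraints (normalization and mean energy); it is a schematic illustration only, not any of the specific frameworks (A)-(E) treated in the paper.

```python
import numpy as np

def sea_step(p, e, dt):
    """One explicit step of steepest entropy ascent for a discrete distribution,
    with the identity metric, conserving sum(p) = 1 and the mean energy e.p."""
    g = -(np.log(p) + 1.0)                  # gradient of S(p) = -sum(p log p)
    C = np.vstack([np.ones_like(p), e])     # constraint Jacobian (rows)
    # project the entropy gradient onto the null space of C; because the update
    # lies exactly in that null space, both constraints are conserved exactly
    proj = g - C.T @ np.linalg.solve(C @ C.T, C @ g)
    return p + dt * proj

e = np.array([0.0, 1.0, 2.0, 3.0])          # energy levels
p = np.array([0.7, 0.1, 0.1, 0.1])          # far-from-equilibrium initial state
E0 = e @ p

for _ in range(2000):
    p = sea_step(p, e, dt=1e-3)

S = -(p * np.log(p)).sum()
print("entropy:", S, "norm drift:", p.sum() - 1.0, "energy drift:", e @ p - E0)
# p relaxes toward the maximum-entropy (Gibbs-like) state with the same mean energy
```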

  13. Improving the accuracy in detection of clustered microcalcifications with a context-sensitive classification model.

    PubMed

    Wang, Juan; Nishikawa, Robert M; Yang, Yongyi

    2016-01-01

    In computer-aided detection of microcalcifications (MCs), the detection accuracy is often compromised by frequent occurrence of false positives (FPs), which can be attributed to a number of factors, including imaging noise, inhomogeneity in tissue background, linear structures, and artifacts in mammograms. In this study, the authors investigated a unified classification approach for combating the adverse effects of these heterogeneous factors for accurate MC detection. To accommodate FPs caused by different factors in a mammogram image, the authors developed a classification model to which the input features were adapted according to the image context at a detection location. For this purpose, the input features were defined in two groups, of which one group was derived from the image intensity pattern in a local neighborhood of a detection location, and the other group was used to characterize how a MC is different from its structural background. Owing to the distinctive effect of linear structures in the detector response, the authors introduced a dummy variable into the unified classifier model, which allowed the input features to be adapted according to the image context at a detection location (i.e., presence or absence of linear structures). To suppress the effect of inhomogeneity in tissue background, the input features were extracted from different domains aimed for enhancing MCs in a mammogram image. To demonstrate the flexibility of the proposed approach, the authors implemented the unified classifier model by two widely used machine learning algorithms, namely, a support vector machine (SVM) classifier and an Adaboost classifier. In the experiment, the proposed approach was tested for two representative MC detectors in the literature [difference-of-Gaussians (DoG) detector and SVM detector]. The detection performance was assessed using free-response receiver operating characteristic (FROC) analysis on a set of 141 screen-film mammogram (SFM) images (66 cases) and a set of 188 full-field digital mammogram (FFDM) images (95 cases). The FROC analysis results show that the proposed unified classification approach can significantly improve the detection accuracy of two MC detectors on both SFM and FFDM images. Despite the difference in performance between the two detectors, the unified classifiers can reduce their FP rate to a similar level in the output of the two detectors. In particular, with true-positive rate at 85%, the FP rate on SFM images for the DoG detector was reduced from 1.16 to 0.33 clusters/image (unified SVM) and 0.36 clusters/image (unified Adaboost), respectively; similarly, for the SVM detector, the FP rate was reduced from 0.45 clusters/image to 0.30 clusters/image (unified SVM) and 0.25 clusters/image (unified Adaboost), respectively. Similar FP reduction results were also achieved on FFDM images for the two MC detectors. The proposed unified classification approach can be effective for discriminating MCs from FPs caused by different factors (such as MC-like noise patterns and linear structures) in MC detection. The framework is general and can be applicable for further improving the detection accuracy of existing MC detectors.
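
    The context-adaptive idea of folding a binary context indicator (here, presence of a linear structure) and its interactions with the other features into a single classifier can be sketched as follows; the synthetic data and the scikit-learn SVM are illustrative stand-ins for the detectors and features used in the study.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 400
X = rng.standard_normal((n, 4))              # generic intensity/enhancement features
ctx = rng.integers(0, 2, size=n)             # 1 if a linear structure is present
# synthetic labels whose decision rule changes with the context
y = np.where(ctx == 1, X[:, 0] - 0.8 * X[:, 1] > 0, X[:, 0] + X[:, 2] > 0).astype(int)

# unified model: raw features, the context dummy variable, and their interactions
X_unified = np.hstack([X, ctx[:, None], X * ctx[:, None]])

clf = SVC(kernel="rbf", C=1.0).fit(X_unified[:300], y[:300])
print("held-out accuracy:", clf.score(X_unified[300:], y[300:]))
```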

  14. Unified Quest 2004 Revisits Future War, Volume 6, Issue 3, April-June 2004

    DTIC Science & Technology

    2004-06-01

    fought campaign plans with students from the other Senior Level Colleges in a free-play computer-assisted war game. ... dynamic free-play environment. The exercise, guided by the participants' own goals and objectives, and not by scripts or the Master Scenario Event

  15. Configurational forces in electronic structure calculations using Kohn-Sham density functional theory

    NASA Astrophysics Data System (ADS)

    Motamarri, Phani; Gavini, Vikram

    2018-04-01

    We derive the expressions for configurational forces in Kohn-Sham density functional theory, which correspond to the generalized variational force computed as the derivative of the Kohn-Sham energy functional with respect to the position of a material point x . These configurational forces that result from the inner variations of the Kohn-Sham energy functional provide a unified framework to compute atomic forces as well as stress tensor for geometry optimization. Importantly, owing to the variational nature of the formulation, these configurational forces inherently account for the Pulay corrections. The formulation presented in this work treats both pseudopotential and all-electron calculations in a single framework, and employs a local variational real-space formulation of Kohn-Sham density functional theory (DFT) expressed in terms of the nonorthogonal wave functions that is amenable to reduced-order scaling techniques. We demonstrate the accuracy and performance of the proposed configurational force approach on benchmark all-electron and pseudopotential calculations conducted using higher-order finite-element discretization. To this end, we examine the rates of convergence of the finite-element discretization in the computed forces and stresses for various materials systems, and, further, verify the accuracy from finite differencing the energy. Wherever applicable, we also compare the forces and stresses with those obtained from Kohn-Sham DFT calculations employing plane-wave basis (pseudopotential calculations) and Gaussian basis (all-electron calculations). Finally, we verify the accuracy of the forces on large materials systems involving a metallic aluminum nanocluster containing 666 atoms and an alkane chain containing 902 atoms, where the Kohn-Sham electronic ground state is computed using a reduced-order scaling subspace projection technique [P. Motamarri and V. Gavini, Phys. Rev. B 90, 115127 (2014), 10.1103/PhysRevB.90.115127].

  16. The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields

    PubMed Central

    Deco, Gustavo; Jirsa, Viktor K.; Robinson, Peter A.; Breakspear, Michael; Friston, Karl

    2008-01-01

    The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space–time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences. PMID:18769680

  17. Quantum Behavior of an Autonomous Maxwell Demon

    NASA Astrophysics Data System (ADS)

    Chapman, Adrian; Miyake, Akimasa

    2015-03-01

    A Maxwell Demon is an agent that can exploit knowledge of a system's microstate to perform useful work. The second law of thermodynamics is only recovered upon taking into account the work required to irreversibly update the demon's memory, bringing information theoretic concepts into a thermodynamic framework. Recently, there has been interest in modeling a classical Maxwell demon as an autonomous physical system to study this information-work tradeoff explicitly. Motivated by the idea that states with non-local entanglement structure can be used as a computational resource, we ask whether these states have thermodynamic resource quality as well by generalizing a particular classical autonomous Maxwell demon to the quantum regime. We treat the full quantum description using a matrix product operator formalism, which allows us to handle quantum and classical correlations in a unified framework. Applying this, together with techniques from statistical mechanics, we are able to approximate nonlocal quantities such as the erasure performed on the demon's memory register when correlations are present. Finally, we examine how the demon may use these correlations as a resource to outperform its classical counterpart.

  18. Compact localized states and flat bands from local symmetry partitioning

    NASA Astrophysics Data System (ADS)

    Röntgen, M.; Morfonios, C. V.; Schmelcher, P.

    2018-01-01

    We propose a framework for the connection between local symmetries of discrete Hamiltonians and the design of compact localized states. Such compact localized states are used for the creation of tunable, local symmetry-induced bound states in an energy continuum and flat energy bands for periodically repeated local symmetries in one- and two-dimensional lattices. The framework is based on very recent theorems in graph theory which are here employed to obtain a block partitioning of the Hamiltonian induced by the symmetry of a given system under local site permutations. The diagonalization of the Hamiltonian is thereby reduced to finding the eigenspectra of smaller matrices, with eigenvectors automatically divided into compact localized and extended states. We distinguish between local symmetry operations which commute with the Hamiltonian, and those which do not commute due to an asymmetric coupling to the surrounding sites. While valuable as a computational tool for versatile discrete systems with locally symmetric structures, the approach provides in particular a unified, intuitive, and efficient route to the flexible design of compact localized states at desired energies.
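
    A minimal tight-binding example of the mechanism is given below: two sites related by a local permutation symmetry couple identically to the rest of a chain, so their antisymmetric combination decouples as a compact localized eigenstate. The lattice and parameters are illustrative and not taken from the paper.

```python
import numpy as np

# sites 0 and 1 are exchange-symmetric: each couples only to site 2 with hopping t,
# and site 2 continues into a short chain (sites 3 and 4)
t = 1.0
H = np.zeros((5, 5))
for i, j in [(0, 2), (1, 2), (2, 3), (3, 4)]:
    H[i, j] = H[j, i] = -t

# the antisymmetric combination (|0> - |1>)/sqrt(2) is annihilated by H:
# a compact localized eigenstate with eigenvalue 0, confined to sites 0 and 1
cls = np.zeros(5)
cls[[0, 1]] = np.array([1.0, -1.0]) / np.sqrt(2)
print("H @ cls:", H @ cls)                  # zero vector, so cls is an eigenstate

# the same state appears in the numerical spectrum
vals, vecs = np.linalg.eigh(H)
idx = np.argmin(np.abs(vals))
print("eigenvalue:", vals[idx], "amplitudes:", np.round(vecs[:, idx], 3))
```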

  19. De-Aliasing Through Over-Integration Applied to the Flux Reconstruction and Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.

    2015-01-01

    High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).

  20. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  1. A comparative study of the Unified System for Orbit Computation and the Flight Design System. [computer programs for mission planning tasks associated with space shuttle

    NASA Technical Reports Server (NTRS)

    Maag, W.

    1977-01-01

    The Flight Design System (FDS) and the Unified System for Orbit Computation (USOC) are compared and described in relation to mission planning for the shuttle transportation system (STS). The FDS is designed to meet the requirements of a standardized production tool and the USOC is designed for rapid generation of particular application programs. The main emphasis in USOC is put on adaptability to new types of missions. It is concluded that a software system having a USOC-like structure, adapted to the specific needs of MPAD, would be appropriate to support planning tasks in the area unique to STS missions.

  2. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  3. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

    PubMed Central

    Yang, Kailun; Wang, Kaiwei; Romera, Eduardo; Hu, Weijian; Sun, Dongming; Sun, Junwei; Cheng, Ruiqi; Chen, Tianxue; López, Elena

    2018-01-01

    Navigational assistance aims to help visually-impaired people move through the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for the terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy that compares favourably with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework. PMID:29748508

  4. A Graph-Embedding Approach to Hierarchical Visual Word Mergence.

    PubMed

    Wang, Lei; Liu, Lingqiao; Zhou, Luping

    2017-02-01

    Appropriately merging visual words is an effective dimension-reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed, because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario where a preferred structure and an undesired structure are defined, and, therefore, can effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and thus maintain state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first one that addresses hierarchical visual word mergence in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and outperform all the existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.

  5. Re-engineering the Federal planning process: A total Federal planning strategy, integrating NEPA with modern management tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eccleston, C.H.

    1997-09-05

    The National Environmental Policy Act (NEPA) of 1969 was established by Congress more than a quarter of a century ago, yet there is a surprising lack of specific tools, techniques, and methodologies for effectively implementing these regulatory requirements. Lack of professionally accepted techniques is a principal factor responsible for many inefficiencies. Often, decision makers do not fully appreciate or capitalize on the true potential which NEPA provides as a platform for planning future actions. New approaches and modern management tools must be adopted to fully achieve NEPA's mandate. A new strategy, referred to as Total Federal Planning, is proposed for unifying large-scale federal planning efforts under a single, systematic, structured, and holistic process. Under this approach, the NEPA planning process provides a unifying framework for integrating all early environmental and nonenvironmental decision-making factors into a single comprehensive planning process. To promote effectiveness and efficiency, modern tools and principles from the disciplines of Value Engineering, Systems Engineering, and Total Quality Management are incorporated. Properly integrated and implemented, these planning tools provide the rigorous, structured, and disciplined framework essential in achieving effective planning. Ultimately, the goal of a Total Federal Planning strategy is to construct a unified and interdisciplinary framework that substantially improves decision-making, while reducing the time, cost, redundancy, and effort necessary to comply with environmental and other planning requirements. At a time when Congress is striving to re-engineer the governmental framework, apparatus, and process, a Total Federal Planning philosophy offers a systematic approach for uniting the disjointed and often convoluted planning process currently used by most federal agencies. Potentially this approach has widespread implications in the way federal planning is approached.

  6. The Nation's Report Card Science 2009 Trial Urban District Snapshot Report. San Diego Unified School District. Grade 4, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2011

    2011-01-01

    Guided by a new framework, the National Assessment of Educational Progress (NAEP) science assessment was updated in 2009 to keep the content current with key developments in science, curriculum standards, assessments, and research. The 2009 framework organizes science content into three broad content areas. Physical science includes concepts…

  7. The Nation's Report Card Science 2009 Trial Urban District Snapshot Report. San Diego Unified School District. Grade 8, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2011

    2011-01-01

    Guided by a new framework, the National Assessment of Educational Progress (NAEP) science assessment was updated in 2009 to keep the content current with key developments in science, curriculum standards, assessments, and research. The 2009 framework organizes science content into three broad content areas. Physical science includes concepts…

  8. The Nation's Report Card Science 2009 Trial Urban District Snapshot Report. Los Angeles Unified School District. Grade 4, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2011

    2011-01-01

    Guided by a new framework, the National Assessment of Educational Progress (NAEP) science assessment was updated in 2009 to keep the content current with key developments in science, curriculum standards, assessments, and research. The 2009 framework organizes science content into three broad content areas. Physical science includes concepts…

  9. The Nation's Report Card Science 2009 Trial Urban District Snapshot Report. Los Angeles Unified School District. Grade 8, Public Schools

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2011

    2011-01-01

    Guided by a new framework, the National Assessment of Educational Progress (NAEP) science assessment was updated in 2009 to keep the content current with key developments in science, curriculum standards, assessments, and research. The 2009 framework organizes science content into three broad content areas. Physical science includes concepts…

  10. A unifying framework for quantifying the nature of animal interactions.

    PubMed

    Potts, Jonathan R; Mokross, Karl; Lewis, Mark A

    2014-07-06

    Collective phenomena, whereby agent-agent interactions determine spatial patterns, are ubiquitous in the animal kingdom. On the other hand, movement and space use are also greatly influenced by the interactions between animals and their environment. Despite both types of interaction fundamentally influencing animal behaviour, there has hitherto been no unifying framework for the models proposed in both areas. Here, we construct a general method for inferring population-level spatial patterns from underlying individual movement and interaction processes, a key ingredient in building a statistical mechanics for ecological systems. We show that resource selection functions, as well as several examples of collective motion models, arise as special cases of our framework, thus bringing together resource selection analysis and collective animal behaviour into a single theory. In particular, we focus on combining the various mechanistic models of territorial interactions in the literature with step selection functions, by incorporating interactions into the step selection framework and demonstrating how to derive territorial patterns from the resulting models. We demonstrate the efficacy of our model by application to a population of insectivore birds in the Amazon rainforest. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. A unified framework for image retrieval using keyword and visual features.

    PubMed

    Jing, Feng; Li, Mingling; Zhang, Hong-Jiang; Zhang, Bo

    2005-07-01

    In this paper, a unified image retrieval framework based on both keyword annotations and visual features is proposed. In this framework, a set of statistical models are built based on visual features of a small set of manually labeled images to represent semantic concepts and used to propagate keywords to other unlabeled images. These models are updated periodically when more images implicitly labeled by users become available through relevance feedback. In this sense, the keyword models serve the function of accumulation and memorization of knowledge learned from user-provided relevance feedback. Furthermore, two sets of effective and efficient similarity measures and relevance feedback schemes are proposed for query by keyword scenario and query by image example scenario, respectively. Keyword models are combined with visual features in these schemes. In particular, a new, entropy-based active learning strategy is introduced to improve the efficiency of relevance feedback for query by keyword. Furthermore, a new algorithm is proposed to estimate the keyword features of the search concept for query by image example. It is shown to be more appropriate than two existing relevance feedback algorithms. Experimental results demonstrate the effectiveness of the proposed framework.

  12. Linear models of coregionalization for multivariate lattice data: Order-dependent and order-free cMCARs.

    PubMed

    MacNab, Ying C

    2016-08-01

    This paper is concerned with multivariate conditional autoregressive models defined by linear combinations of independent or correlated underlying spatial processes. Known as linear models of coregionalization, the method offers a systematic and unified approach for formulating multivariate extensions to a broad range of univariate conditional autoregressive models. The resulting multivariate spatial models represent classes of coregionalized multivariate conditional autoregressive models that enable flexible modelling of multivariate spatial interactions, yielding coregionalization models with symmetric or asymmetric cross-covariances of different spatial variation and smoothness. In the context of multivariate disease mapping, for example, they facilitate borrowing strength both over space and across variables, allowing for more flexible multivariate spatial smoothing. Specifically, we present a broadened coregionalization framework to include order-dependent, order-free, and order-robust multivariate models; a new class of order-free coregionalized multivariate conditional autoregressives is introduced. We tackle computational challenges and present solutions that are integral for Bayesian analysis of these models. We also discuss two ways of computing the deviance information criterion for comparison among competing hierarchical models with or without unidentifiable prior parameters. The models and related methodology are developed in the broad context of modelling multivariate data on a spatial lattice and illustrated in the context of multivariate disease mapping. The coregionalization framework and related methods also present a general approach for building spatially structured cross-covariance functions for multivariate geostatistics. © The Author(s) 2016.
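
    The coregionalization construction itself can be illustrated with a small simulation: two dependent spatial fields are built as linear combinations of independent latent spatial processes, so their cross-covariance is controlled by the mixing matrix. The lattice, covariance functions, and mixing matrix below are generic placeholders, not the CAR specifications developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# a small regular lattice; each latent process gets an exponential covariance
coords = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
ranges = [1.0, 3.0]                                   # different spatial ranges
L = [np.linalg.cholesky(np.exp(-d / r) + 1e-8 * np.eye(len(coords))) for r in ranges]

# linear model of coregionalization: w(s) = A z(s) with independent latents z
A = np.array([[1.0, 0.0],
              [0.6, 0.8]])

def sample_fields():
    z = np.stack([Lk @ rng.standard_normal(len(coords)) for Lk in L])
    return A @ z                                      # two dependent spatial fields

reps = [sample_fields() for _ in range(200)]
w1 = np.array([r[0] for r in reps]).ravel()
w2 = np.array([r[1] for r in reps]).ravel()
print("empirical cross-correlation:", np.corrcoef(w1, w2)[0, 1])
# pointwise theoretical correlation = 0.6 / sqrt(0.6**2 + 0.8**2) = 0.6
```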

  13. 40 CFR 300.105 - General organization concepts.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... capabilities. (b) Three fundamental kinds of activities are performed pursuant to the NCP: (1) Preparedness....205(c). (d) The basic framework for the response management structure is a system (e.g., a unified...

  14. Dissecting Embryonic Stem Cell Self-Renewal and Differentiation Commitment from Quantitative Models.

    PubMed

    Hu, Rong; Dai, Xianhua; Dai, Zhiming; Xiang, Qian; Cai, Yanning

    2016-10-01

    To model quantitatively embryonic stem cell (ESC) self-renewal and differentiation by computational approaches, we developed a unified mathematical model for gene expression involved in cell fate choices. Our quantitative model comprised ESC master regulators and lineage-specific pivotal genes. It took the factors of multiple pathways as input and computed expression as a function of intrinsic transcription factors, extrinsic cues, epigenetic modifications, and antagonism between ESC master regulators and lineage-specific pivotal genes. In the model, differential equations for the expression of genes involved in cell fate choices were established from their regulatory relationships, according to transcription and degradation rates. We applied this model to murine ESC self-renewal and differentiation commitment and found that it modeled the expression patterns with good accuracy. Our model analysis revealed that the murine ESC state was an attractor in culture and that differentiation was predominantly caused by antagonism between ESC master regulators and lineage-specific pivotal genes. Moreover, antagonism among lineages played a critical role in lineage reprogramming. Our results also showed that the ordered alteration of ESC master regulator expression over time had a central role in ESC differentiation fates. Our computational framework was generally applicable to most cell-type maintenance and lineage reprogramming.
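
    The kind of antagonism term such models build on can be sketched with a generic mutual-repression motif, in which two regulators repress each other and each equation balances transcription against degradation; the equations and parameters below are illustrative only and are not the authors' fitted model.

```python
def step(x, y, dt, alpha=3.0, n=2, delta=1.0):
    """Euler step for a mutual-antagonism pair:
    dx/dt = alpha/(1 + y**n) - delta*x   (transcription minus degradation)
    dy/dt = alpha/(1 + x**n) - delta*y
    """
    dx = alpha / (1 + y ** n) - delta * x
    dy = alpha / (1 + x ** n) - delta * y
    return x + dt * dx, y + dt * dy

# a small bias in the initial conditions decides which "lineage" wins
for x0, y0 in [(1.1, 1.0), (1.0, 1.1)]:
    x, y = x0, y0
    for _ in range(20000):
        x, y = step(x, y, dt=1e-3)
    print(f"start ({x0}, {y0}) -> steady state ({x:.2f}, {y:.2f})")
```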

  15. Pharmacological Fingerprints of Contextual Uncertainty

    PubMed Central

    Ruge, Diane; Stephan, Klaas E.

    2016-01-01

    Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses. PMID:27846219

  16. Using IKAROS as a data transfer and management utility within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. IKAROS is a framework that enables creating scalable storage formations on demand and helps address several limitations that current file systems face when dealing with very large-scale infrastructures. It enables creating ad-hoc nearby storage formations and can use a huge number of I/O nodes in order to increase the available bandwidth (I/O and network). IKAROS unifies remote and local access in the overall data flow, by permitting direct access to each I/O node. In this way we can handle the overall data flow at the network layer, limiting the interaction with the operating system. This approach allows virtually connecting, at the user level, the several different computing facilities used (Grids, Clouds, HPCs, data centers, local computing clusters, and personal storage devices), on demand, based on need, using well-known standards and protocols such as HTTP.

  17. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable for more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection performance is about 90%.

  18. A Textbook for a First Course in Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zingg, D. W.; Pulliam, T. H.; Nixon, David (Technical Monitor)

    1999-01-01

    This paper describes and discusses the textbook, Fundamentals of Computational Fluid Dynamics by Lomax, Pulliam, and Zingg, which is intended for a graduate level first course in computational fluid dynamics. This textbook emphasizes fundamental concepts in developing, analyzing, and understanding numerical methods for the partial differential equations governing the physics of fluid flow. Its underlying philosophy is that the theory of linear algebra and the attendant eigenanalysis of linear systems provides a mathematical framework to describe and unify most numerical methods in common use in the field of fluid dynamics. Two linear model equations, the linear convection and diffusion equations, are used to illustrate concepts throughout. Emphasis is on the semi-discrete approach, in which the governing partial differential equations (PDE's) are reduced to systems of ordinary differential equations (ODE's) through a discretization of the spatial derivatives. The ordinary differential equations are then reduced to ordinary difference equations (OΔE's) using a time-marching method. This methodology, using the progression from PDE through ODE's to OΔE's, together with the use of the eigensystems of tridiagonal matrices and the theory of OΔE's, gives the book its distinctiveness and provides a sound basis for a deep understanding of fundamental concepts in computational fluid dynamics.
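
    The semi-discrete approach is easy to demonstrate on the linear convection equation u_t + a u_x = 0 with periodic boundaries: second-order central differences reduce the PDE to the ODE system du/dt = Au, whose purely imaginary eigenvalues guide the choice of time-marching method (here classical RK4). The sketch follows the book's general methodology, but the grid and parameters are arbitrary.

```python
import numpy as np

a, nx = 1.0, 40
dx = 2 * np.pi / nx
x = np.arange(nx) * dx

# semi-discrete operator for u_t = -a*u_x with periodic central differences
A = np.zeros((nx, nx))
for j in range(nx):
    A[j, (j + 1) % nx] = -a / (2 * dx)
    A[j, (j - 1) % nx] = a / (2 * dx)

lam = np.linalg.eigvals(A)
print("max |Re(lambda)|:", np.abs(lam.real).max())   # ~0: spectrum is purely imaginary

def rk4_step(u, dt):
    """Classical RK4 time marching of the ODE system du/dt = A u."""
    k1 = A @ u
    k2 = A @ (u + 0.5 * dt * k1)
    k3 = A @ (u + 0.5 * dt * k2)
    k4 = A @ (u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.sin(x)                       # initial condition; exact solution is sin(x - a*t)
dt, nsteps = 0.05, 200
for _ in range(nsteps):
    u = rk4_step(u, dt)
print("max error vs exact:", np.abs(u - np.sin(x - a * dt * nsteps)).max())
```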

  19. On the neural implementation of the speed-accuracy trade-off

    PubMed Central

    Standage, Dominic; Blohm, Gunnar; Dorris, Michael C.

    2014-01-01

    Decisions are faster and less accurate when conditions favor speed, and are slower and more accurate when they favor accuracy. This phenomenon is referred to as the speed-accuracy trade-off (SAT). Behavioral studies of the SAT have a long history, and the data from these studies are well characterized within the framework of bounded integration. According to this framework, decision makers accumulate noisy evidence until the running total for one of the alternatives reaches a bound. Lower and higher bounds favor speed and accuracy respectively, each at the expense of the other. Studies addressing the neural implementation of these computations are a recent development in neuroscience. In this review, we describe the experimental and theoretical evidence provided by these studies. We structure the review according to the framework of bounded integration, describing evidence for (1) the modulation of the encoding of evidence under conditions favoring speed or accuracy, (2) the modulation of the integration of encoded evidence, and (3) the modulation of the amount of integrated evidence sufficient to make a choice. We discuss commonalities and differences between the proposed neural mechanisms, some of their assumptions and simplifications, and open questions for future work. We close by offering a unifying hypothesis on the present state of play in this nascent research field. PMID:25165430
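
    A small simulation of bounded integration makes the trade-off concrete: noisy evidence is accumulated until it reaches a +/- bound, and raising the bound slows decisions while improving accuracy. The drift, noise, and bound values below are arbitrary illustrations of the framework, not parameters from any study reviewed here.

```python
import numpy as np

def simulate(bound, drift=0.2, noise=1.0, dt=0.01, n_trials=2000, seed=0):
    """Accumulate noisy evidence until it hits +bound or -bound;
    return the mean decision time and the fraction of correct choices."""
    rng = np.random.default_rng(seed)
    times, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
        correct.append(x > 0)        # drift is positive, so the upper bound is "correct"
    return np.mean(times), np.mean(correct)

for bound in (0.5, 1.0, 2.0):        # a low bound favors speed, a high bound accuracy
    rt, acc = simulate(bound)
    print(f"bound={bound}: mean decision time={rt:.2f}s, accuracy={acc:.2f}")
```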

  20. On the neural implementation of the speed-accuracy trade-off.

    PubMed

    Standage, Dominic; Blohm, Gunnar; Dorris, Michael C

    2014-01-01

    Decisions are faster and less accurate when conditions favor speed, and are slower and more accurate when they favor accuracy. This phenomenon is referred to as the speed-accuracy trade-off (SAT). Behavioral studies of the SAT have a long history, and the data from these studies are well characterized within the framework of bounded integration. According to this framework, decision makers accumulate noisy evidence until the running total for one of the alternatives reaches a bound. Lower and higher bounds favor speed and accuracy respectively, each at the expense of the other. Studies addressing the neural implementation of these computations are a recent development in neuroscience. In this review, we describe the experimental and theoretical evidence provided by these studies. We structure the review according to the framework of bounded integration, describing evidence for (1) the modulation of the encoding of evidence under conditions favoring speed or accuracy, (2) the modulation of the integration of encoded evidence, and (3) the modulation of the amount of integrated evidence sufficient to make a choice. We discuss commonalities and differences between the proposed neural mechanisms, some of their assumptions and simplifications, and open questions for future work. We close by offering a unifying hypothesis on the present state of play in this nascent research field.

  1. Word-level language modeling for P300 spellers based on discriminative graphical models

    NASA Astrophysics Data System (ADS)

    Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2015-04-01

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
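
    A toy version of the word-level idea: per-letter classifier scores are treated as likelihoods and fused with a prior over a limited vocabulary, so evidence from a later letter can correct the decision about an earlier one. The vocabulary, scores, and independence assumptions below are illustrative and far simpler than the discriminative graphical model proposed in the paper.

```python
import numpy as np

vocab = ["CAT", "BAT", "CAR"]                      # limited vocabulary
prior = {w: 1.0 / len(vocab) for w in vocab}       # uniform word prior

# per-slot classifier scores, treated as P(evidence | letter)
evidence = [
    {"B": 0.55, "C": 0.45},    # slot 0: letter-by-letter decoding would pick 'B'
    {"A": 0.90, "O": 0.10},
    {"R": 0.60, "T": 0.40},    # slot 2: favours 'R'
]

def letter_likelihood(scores, letter):
    return scores.get(letter, 1e-6)                # tiny floor for unscored letters

posterior = {}
for w in vocab:
    lik = np.prod([letter_likelihood(evidence[i], w[i]) for i in range(len(w))])
    posterior[w] = prior[w] * lik
norm = sum(posterior.values())
posterior = {w: p / norm for w, p in posterior.items()}
print(posterior)
# "BAR" is not in the vocabulary, so the evidence for 'R' at slot 2 pushes the word
# posterior toward "CAR", correcting the first letter from 'B' to 'C'.
```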

  2. Cloud Computing - A Unified Approach for Surveillance Issues

    NASA Astrophysics Data System (ADS)

    Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.

    2017-08-01

    Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attractiveness of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location through networks. Cloud computing is gradually replacing the traditional Information Technology Infrastructure. Securing data is one of the leading concerns and biggest issues for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal information or sensitive information is being stored in the organization. It is indeed true that today, cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and techniques to overcome the challenges in the cloud environment.

  3. A unified account of gloss and lightness perception in terms of gamut relativity.

    PubMed

    Vladusich, Tony

    2013-08-01

    A recently introduced computational theory of visual surface representation, termed gamut relativity, overturns the classical assumption that brightness, lightness, and transparency constitute perceptual dimensions corresponding to the physical dimensions of luminance, diffuse reflectance, and transmittance, respectively. Here I extend the theory to show how surface gloss and lightness can be understood in a unified manner in terms of the vector computation of "layered representations" of surface and illumination properties, rather than as perceptual dimensions corresponding to diffuse and specular reflectance, respectively. The theory simulates the effects of image histogram skewness on surface gloss/lightness and lightness constancy as a function of specular highlight intensity. More generally, gamut relativity clarifies, unifies, and generalizes a wide body of previous theoretical and experimental work aimed at understanding how the visual system parses the retinal image into layered representations of surface and illumination properties.

  4. LIFE CYCLE ENGINEERING GUIDELINES

    EPA Science Inventory

    This document provides guidelines for the implementation of LCE concepts, information, and techniques in engineering products, systems, processes, and facilities. To make this document as practical and useable as possible, a unifying LCE framework is presented. Subsequent topics ...

  5. Value of Flexibility - Phase 1

    DTIC Science & Technology

    2010-09-25

    weaknesses of each approach. During this period, we also explored the development of an analytical framework based on sound mathematical constructs... mathematical constructs. A review of the current state-of-the-art showed that there is little unifying theory or guidance on best approaches to...research activities is in developing a coherent value based definition of flexibility that is based on an analytical framework that is mathematically

  6. The Unified Levelling Network of Sarawak and its Adjustment

    NASA Astrophysics Data System (ADS)

    Som, Z. A. M.; Yazid, A. M.; Ming, T. K.; Yazid, N. M.

    2016-09-01

    The height reference network of Sarawak has seen major improvement over the past two decades. The most significant improvement was the establishment of an extended precise levelling network, which is now able to connect all three major datum points at Pulau Lakei, Original Miri and Bintulu by following the major accessible routes across Sarawak. This means the levelling network in Sarawak is now inter-connected and unified. Such a unified network makes it possible, for the first time, to perform a single common least squares adjustment. The least squares adjustment of this unified levelling network was attempted in order to compute the heights of all bench marks established in the entire network. The adjustment was done using the MoreFix levelling adjustment package developed at FGHT UTM. The computational procedure adopted is a linear parametric adjustment with minimum constraint. Since Sarawak has three separate datums, three separate adjustments were implemented, utilizing the Pulau Lakei, Original Miri and Bintulu datums respectively. Results of the MoreFix unified adjustment agreed very well with the adjustment repeated using Starnet. The results were further compared with the solution given by JUPEM and are in good agreement as well. The differences in height analysed were within 10 mm for the case of minimum constraint at the Pulau Lakei datum, with much better agreement in the case of the Original Miri datum.
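
    A parametric least-squares adjustment with a minimum constraint can be sketched in a few lines of NumPy. This is a generic illustration with a hypothetical four-bench-mark network and made-up observations, not the MoreFix or Starnet computation described above; weights are taken inversely proportional to line length, a common but assumed choice.

    ```python
    import numpy as np

    # Hypothetical network: one fixed datum bench mark (minimum constraint) and
    # three unknown bench marks connected by observed height differences.
    bench_marks = ["DATUM", "BM1", "BM2", "BM3"]
    fixed = {"DATUM": 10.000}                       # datum height in metres (assumed)

    # Observations: (from, to, observed height difference in m, line length in km)
    obs = [
        ("DATUM", "BM1", 1.503, 2.0),
        ("BM1",   "BM2", 0.998, 1.5),
        ("BM2",   "BM3", -0.502, 1.0),
        ("DATUM", "BM3", 2.001, 3.5),
    ]

    unknowns = [bm for bm in bench_marks if bm not in fixed]
    A = np.zeros((len(obs), len(unknowns)))         # design matrix
    l = np.zeros(len(obs))                          # reduced observation vector
    w = np.zeros(len(obs))                          # weights ~ 1 / line length

    for i, (frm, to, dh, dist) in enumerate(obs):   # model: H_to - H_from = dh
        l[i] = dh
        w[i] = 1.0 / dist
        if frm in fixed:
            l[i] += fixed[frm]
        else:
            A[i, unknowns.index(frm)] = -1.0
        if to in fixed:
            l[i] -= fixed[to]
        else:
            A[i, unknowns.index(to)] = 1.0

    W = np.diag(w)
    N = A.T @ W @ A                                  # normal equations
    x = np.linalg.solve(N, A.T @ W @ l)              # adjusted heights of free bench marks
    residuals = l - A @ x

    for bm, h in zip(unknowns, x):
        print(f"{bm}: {h:.4f} m")
    print("residuals (m):", residuals.round(4))
    ```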

  7. Next-to-leading order QCD predictions for top-quark pair production with up to two jets merged with a parton shower

    DOE PAGES

    Höche, Stefan; Krauss, Frank; Maierhöfer, Philipp; ...

    2015-06-26

    We present differential cross sections for the production of top-quark pairs in conjunction with up to two jets, computed at next-to-leading order in perturbative QCD and consistently merged with a parton shower in the SHERPA+OPENLOOPS framework. Top quark decays including spin correlation effects are taken into account at leading order accuracy. The calculation yields a unified description of top-pair plus multi-jet production, and detailed results are presented for various key observables at the Large Hadron Collider. As a result, a large improvement with respect to the multi-jet merging approach at leading order is found for the total transverse energy spectrum, which plays a prominent role in searches for physics beyond the Standard Model.

  8. Electric Fields and Enzyme Catalysis

    PubMed Central

    Fried, Stephen D.; Boxer, Steven G.

    2017-01-01

    What happens inside an enzyme’s active site to allow slow and difficult chemical reactions to occur so rapidly? This question has occupied biochemists’ attention for a long time. Computer models of increasing sophistication have predicted an important role for electrostatic interactions in enzymatic reactions, yet this hypothesis has proved vexingly difficult to test experimentally. Recent experiments utilizing the vibrational Stark effect make it possible to measure the electric field a substrate molecule experiences when bound inside its enzyme’s active site. These experiments have provided compelling evidence supporting a major electrostatic contribution to enzymatic catalysis. Here, we review these results and develop a simple model for electrostatic catalysis that enables us to incorporate disparate concepts introduced by many investigators to describe how enzymes work into a more unified framework stressing the importance of electric fields at the active site. PMID:28375745

  9. Particle-based solid for nonsmooth multidomain dynamics

    NASA Astrophysics Data System (ADS)

    Nordberg, John; Servin, Martin

    2018-04-01

    A method for simulation of elastoplastic solids in multibody systems with nonsmooth and multidomain dynamics is developed. The solid is discretised into pseudo-particles using the meshfree moving least squares method for computing the strain tensor. Each particle's strain and stress tensor variables are mapped to a compliant deformation constraint. The discretised solid model thus fits a unified framework for nonsmooth multidomain dynamics simulations, including rigid multibodies with complex kinematic constraints such as articulation joints, unilateral contacts with dry friction, drivelines, and hydraulics. The nonsmooth formulation allows impact impulses to propagate instantly between the rigid multibody and the solid. Plasticity is introduced through an associative, perfectly plastic, modified Drucker-Prager model. The elastic and plastic dynamics are verified for simple test systems, and the capability of simulating tracked terrain vehicles driving on deformable terrain is demonstrated.

  10. Dielectric and diamagnetic susceptibilities near percolative superconductor-insulator transitions.

    PubMed

    Loh, Yen Lee; Karki, Pragalv

    2017-10-25

    Coarse-grained superconductor-insulator composites exhibit a superconductor-insulator transition governed by classical percolation, which should be describable by networks of inductors and capacitors. We study several classes of random inductor-capacitor networks on square lattices. We present a unifying framework for defining electric and magnetic response functions, and we extend the Frank-Lobb bond-propagation algorithm to compute these quantities by network reduction. We confirm that the superfluid stiffness scales approximately as [Formula: see text] as the superconducting bond fraction p approaches the percolation threshold p_c. We find that the diamagnetic susceptibility scales as [Formula: see text] below percolation, and as [Formula: see text] above percolation. For models lacking self-capacitances, the electric susceptibility scales as [Formula: see text]. Including a self-capacitance on each node changes the critical behavior to approximately [Formula: see text].

  11. Conic section function neural network circuitry for offline signature recognition.

    PubMed

    Erkmen, Burcu; Kahraman, Nihan; Vural, Revna A; Yildirim, Tulay

    2010-04-01

    In this brief, conic section function neural network (CSFNN) circuitry was designed for offline signature recognition. CSFNN is a unified framework for multilayer perceptron (MLP) and radial basis function (RBF) networks that makes simultaneous use of the advantages of both. The CSFNN circuitry architecture was developed using a mixed-mode circuit implementation. The designed circuit system is problem independent; hence, the general-purpose neural network circuit system can be applied to various pattern recognition problems with different network sizes, up to a maximum network size of 16-16-8. In this brief, the CSFNN circuitry has been applied to two different signature recognition problems. The CSFNN circuitry was trained with the chip-in-the-loop learning technique in order to compensate for typical analog process variations. The CSFNN hardware achieved computational performance highly comparable to the CSFNN software for nonlinear signature recognition problems.
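
    The way a conic-section unit unifies MLP and RBF behaviour can be sketched with one commonly used propagation rule in which an opening-angle parameter interpolates between a hyperplane-like and a hypersphere-like decision border. This is an assumed textbook formulation for illustration only, not a model of the authors' circuitry, and all values below are made up.

    ```python
    import numpy as np

    def conic_section_unit(x, w, c, omega):
        """net = w . (x - c) - cos(omega) * ||x - c||, followed by a sigmoid.

        cos(omega) = 0 gives a hyperplane-like (MLP-style) border; as |cos(omega)|
        approaches 1 the distance term dominates and the border becomes sphere-like
        (RBF-style). Formulation assumed here for illustration.
        """
        x, w, c = map(np.asarray, (x, w, c))
        net = w @ (x - c) - np.cos(omega) * np.linalg.norm(x - c)
        return 1.0 / (1.0 + np.exp(-net))

    x = np.array([0.2, -0.4, 0.9])      # input pattern (hypothetical)
    w = np.array([0.5, 1.0, -0.3])      # weights (hypothetical)
    c = np.array([0.0, 0.0, 0.0])       # centre (hypothetical)

    print(conic_section_unit(x, w, c, omega=np.pi / 2))  # hyperplane-like behaviour
    print(conic_section_unit(x, w, c, omega=0.0))        # sphere-like behaviour
    ```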

  12. Food-web based unified model of macro- and microevolution.

    PubMed

    Chowdhury, Debashish; Stauffer, Dietrich

    2003-10-01

    We incorporate the generic hierarchical architecture of foodwebs into a "unified" model that describes both micro- and macroevolutions within a single theoretical framework. This model describes the microevolution in detail by accounting for the birth, ageing, and natural death of individual organisms as well as prey-predator interactions on a hierarchical dynamic food web. It also provides a natural description of random mutations and speciation (origination) of species as well as their extinctions. The distribution of lifetimes of species follows an approximate power law only over a limited regime.

  13. Unified approach to redshift in cosmological/black hole spacetimes and synchronous frame

    NASA Astrophysics Data System (ADS)

    Toporensky, A. V.; Zaslavskii, O. B.; Popov, S. B.

    2018-01-01

    Usually, interpretation of redshift in static spacetimes (for example, near black holes) is opposed to that in cosmology. In this methodological note, we show that both explanations are unified in a natural picture. This is achieved if, considering the static spacetime, one (i) makes a transition to a synchronous frame, and (ii) returns to the original frame by means of a local Lorentz boost. To reach our goal, we consider a rather general class of spherically symmetric spacetimes. In doing so, we construct frames that generalize the well-known Lemaître and Painlevé-Gullstrand ones and elucidate the relation between them. This helps us to understand, in a unifying approach, how gravitation reveals itself in different branches of general relativity. This framework can be useful for general relativity university courses.

  14. Impact of Beads and Drops on a Repellent Solid Surface: A Unified Description

    NASA Astrophysics Data System (ADS)

    Arora, S.; Fromental, J.-M.; Mora, S.; Phou, Ty; Ramos, L.; Ligoure, C.

    2018-04-01

    We investigate freely expanding sheets formed by ultrasoft gel beads, and liquid and viscoelastic drops, produced by the impact of the bead or drop on a silicon wafer covered with a thin layer of liquid nitrogen that suppresses viscous dissipation thanks to an inverse Leidenfrost effect. Our experiments show a unified behavior for the impact dynamics that holds for solids, liquids, and viscoelastic fluids and that we rationalize by properly taking into account elastocapillary effects. In this framework, the classical impact dynamics of solids and liquids, as far as viscous dissipation is negligible, appears as the asymptotic limits of a universal theoretical description. A novel material-dependent characteristic velocity that includes both capillary and bulk elasticity emerges from this unified description of the physics of impact.

  15. SCIFIO: an extensible framework to support scientific image formats.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-12-07

    No gold standard exists in the world of scientific image acquisition; a proliferation of instruments each with its own proprietary data format has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community was lacking a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.

  16. Evolutionary dynamics of tree invasions: complementing the unified framework for biological invasions.

    PubMed

    Zenni, Rafael Dudeque; Dickie, Ian A; Wingfield, Michael J; Hirsch, Heidi; Crous, Casparus J; Meyerson, Laura A; Burgess, Treena I; Zimmermann, Thalita G; Klock, Metha M; Siemann, Evan; Erfmeier, Alexandra; Aragon, Roxana; Montti, Lia; Le Roux, Johannes J

    2016-12-30

    Evolutionary processes greatly impact the outcomes of biological invasions. An extensive body of research suggests that invasive populations often undergo phenotypic and ecological divergence from their native sources. Evolution also operates at different and distinct stages during the invasion process. Thus, it is important to incorporate evolutionary change into frameworks of biological invasions because it allows us to conceptualize how these processes may facilitate or hinder invasion success. Here, we review such processes, with an emphasis on tree invasions, and place them in the context of the unified framework for biological invasions. The processes and mechanisms described are pre-introduction evolutionary history, sampling effect, founder effect, genotype-by-environment interactions, admixture, hybridization, polyploidization, rapid evolution, epigenetics, and second-genomes. For the last, we propose that co-evolved symbionts, both beneficial and harmful, which are closely physiologically associated with invasive species, contain critical genetic traits that affect the evolutionary dynamics of biological invasions. By understanding the mechanisms underlying invasion success, researchers will be better equipped to predict, understand, and manage biological invasions. Published by Oxford University Press on behalf of the Annals of Botany Company.

  17. Evolutionary dynamics of tree invasions: complementing the unified framework for biological invasions

    PubMed Central

    Dickie, Ian A.; Wingfield, Michael J.; Hirsch, Heidi; Crous, Casparus J.; Meyerson, Laura A.; Burgess, Treena I.; Zimmermann, Thalita G.; Klock, Metha M.; Siemann, Evan; Erfmeier, Alexandra; Aragon, Roxana; Montti, Lia; Le Roux, Johannes J.

    2017-01-01

    Abstract Evolutionary processes greatly impact the outcomes of biological invasions. An extensive body of research suggests that invasive populations often undergo phenotypic and ecological divergence from their native sources. Evolution also operates at different and distinct stages during the invasion process. Thus, it is important to incorporate evolutionary change into frameworks of biological invasions because it allows us to conceptualize how these processes may facilitate or hinder invasion success. Here, we review such processes, with an emphasis on tree invasions, and place them in the context of the unified framework for biological invasions. The processes and mechanisms described are pre-introduction evolutionary history, sampling effect, founder effect, genotype-by-environment interactions, admixture, hybridization, polyploidization, rapid evolution, epigenetics and second-genomes. For the last, we propose that co-evolved symbionts, both beneficial and harmful, which are closely physiologically associated with invasive species, contain critical genetic traits that affect the evolutionary dynamics of biological invasions. By understanding the mechanisms underlying invasion success, researchers will be better equipped to predict, understand and manage biological invasions. PMID:28039118

  18. Managing urban water systems with significant adaptation deficits - a unified framework for secondary cities

    NASA Astrophysics Data System (ADS)

    Pathirana, A.; Radhakrishnan, M.; Zevenbergen, C.; Quan, N. H.

    2016-12-01

    The need to address the shortcomings of urban systems - the adaptation deficit - and shortcomings in the response to climate change - the 'adaptation gap' - are both major challenges in maintaining the livability and sustainability of cities. However, adaptation actions defined as type I (addressing adaptation deficits) and type II (addressing adaptation gaps) often compete and conflict with each other in the secondary cities of the global south. Extending the concept of the environmental Kuznets curve, this paper argues that a unified framework calling for synergistic action on type I and type II adaptation is essential for these cities to maintain their livability, sustainability and resilience in the face of extreme rates of urbanization and the rapid onset of climate change. The proposed framework has been demonstrated in Can Tho, Vietnam, where there are significant adaptation deficits due to rapid urbanisation and adaptation gaps due to climate change and socio-economic changes. The analysis in Can Tho reveals a lack of integration between type I and type II measures that could be overcome by closer integration between the various stakeholders in planning, prioritising and implementing adaptation measures.

  19. Unified framework for automated iris segmentation using distantly acquired face images.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2012-09-01

    Remote human identification using iris biometrics has high civilian and surveillance applications, and its success requires the development of robust segmentation algorithms to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment iris images acquired under near-infrared or visible illumination. The proposed approach exploits multiple higher-order local pixel dependencies to robustly classify the eye-region pixels into iris or non-iris regions. Face and eye detection modules have been incorporated in the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We develop a robust post-processing algorithm to effectively mitigate the noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvement in the average segmentation errors over previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on the UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.

  20. Application of the SP theory of intelligence to the understanding of natural vision and the development of computer vision.

    PubMed

    Wolff, J Gerard

    2014-01-01

    The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.
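
    The run-length encoding idea mentioned above, where uniform areas are treated as redundancy and only the boundaries carry edge-like information, can be shown in a few lines. This is a generic RLE sketch on a made-up scanline, not code from the SP system.

    ```python
    # Minimal run-length encoding sketch: uniform runs are the redundancy that can be
    # compressed away; the positions where runs change are where edge-like information survives.

    def run_length_encode(row):
        """Encode an iterable of pixel values as (value, run_length) pairs."""
        runs = []
        for value in row:
            if runs and runs[-1][0] == value:
                runs[-1][1] += 1
            else:
                runs.append([value, 1])
        return [tuple(r) for r in runs]

    scanline = [0, 0, 0, 0, 1, 1, 1, 0, 0, 2, 2, 2, 2]   # hypothetical pixel row
    encoded = run_length_encode(scanline)
    print(encoded)                      # [(0, 4), (1, 3), (0, 2), (2, 4)]

    # Edge positions are the cumulative run boundaries (everything else is redundant).
    edges, pos = [], 0
    for value, length in encoded:
        pos += length
        edges.append(pos)
    print(edges[:-1])                   # [4, 7, 9]
    ```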

  1. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In this paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme with multiple streams to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFUs), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
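
    The asynchronous double-buffering pattern can be sketched on the host side: while the current block is being processed, the next block is already being loaded. This is only a Python analogue of the idea; the paper's implementation uses CUDA streams and device memory, and the load_chunk/process_chunk helpers below are hypothetical stand-ins for a data transfer and a migration kernel.

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def load_chunk(i, size=1_000_000):
        """Stand-in for transferring one block of traces (hypothetical)."""
        rng = np.random.default_rng(i)
        return rng.standard_normal(size)

    def process_chunk(chunk):
        """Stand-in for the imaging kernel applied to one block (hypothetical)."""
        return float(np.sum(chunk * chunk))

    def pipeline(n_chunks):
        results = []
        with ThreadPoolExecutor(max_workers=1) as copier:   # plays the role of the copy engine
            next_chunk = copier.submit(load_chunk, 0)       # fill buffer A
            for i in range(n_chunks):
                chunk = next_chunk.result()                 # wait for the current buffer
                if i + 1 < n_chunks:
                    next_chunk = copier.submit(load_chunk, i + 1)  # start filling buffer B
                results.append(process_chunk(chunk))        # compute overlaps with the copy
        return results

    print(pipeline(4))
    ```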

  2. Motor symptoms in Parkinson's disease: A unified framework.

    PubMed

    Moustafa, Ahmed A; Chakravarthy, Srinivasa; Phillips, Joseph R; Gupta, Ankur; Keri, Szabolcs; Polner, Bertalan; Frank, Michael J; Jahanshahi, Marjan

    2016-09-01

    Parkinson's disease (PD) is characterized by a range of motor symptoms. Besides the cardinal symptoms (akinesia and bradykinesia, tremor and rigidity), PD patients show additional motor deficits, including: gait disturbance, impaired handwriting, grip force and speech deficits, among others. Some of these motor symptoms (e.g., deficits of gait, speech, and handwriting) have similar clinical profiles, neural substrates, and respond similarly to dopaminergic medication and deep brain stimulation (DBS). Here, we provide an extensive review of the clinical characteristics and neural substrates of each of these motor symptoms, to highlight precisely how PD and its medical and surgical treatments impact motor symptoms. In conclusion, we offer a unified framework for understanding the range of motor symptoms in PD. We argue that various motor symptoms in PD reflect dysfunction of neural structures responsible for action selection, motor sequencing, and coordination and execution of movement. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Benefits of a Unified LaSRS++ Simulation for NAS-Wide and High-Fidelity Modeling

    NASA Technical Reports Server (NTRS)

    Glaab, Patricia; Madden, Michael

    2014-01-01

    The LaSRS++ high-fidelity vehicle simulation was extended in 2012 to support a NAS-wide simulation mode. Since the initial proof-of-concept, the LaSRS++ NAS-wide simulation is maturing into a research-ready tool. A primary benefit of this new capability is the consolidation of the two modeling paradigms under a single framework to save cost, facilitate iterative concept testing between the two tools, and to promote communication and model sharing between user communities at Langley. Specific benefits of each type of modeling are discussed along with the expected benefits of the unified framework. Current capability details of the LaSRS++ NAS-wide simulations are provided, including the visualization tool, live data interface, trajectory generators, terminal routing for arrivals and departures, maneuvering, re-routing, navigation, winds, and turbulence. The plan for future development is also described.

  4. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Conditional Random Field Model.

    PubMed

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with the semantic information of the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.

  5. Robust nonlinear control of vectored thrust aircraft

    NASA Technical Reports Server (NTRS)

    Doyle, John C.; Murray, Richard; Morris, John

    1993-01-01

    An interdisciplinary program in robust control for nonlinear systems with applications to a variety of engineering problems is outlined. Major emphasis will be placed on flight control, with both experimental and analytical studies. This program builds on recent new results in control theory for stability, stabilization, robust stability, robust performance, synthesis, and model reduction in a unified framework using Linear Fractional Transformations (LFTs), Linear Matrix Inequalities (LMIs), and the structured singular value mu. Most of these new advances have been accomplished by the Caltech controls group independently or in collaboration with researchers in other institutions. These recent results offer a new and remarkably unified framework for all aspects of robust control, but what is particularly important for this program is that they also have important implications for system identification and control of nonlinear systems. This combines well with Caltech's expertise in nonlinear control theory, both in geometric methods and methods for systems with constraints and saturations.

  6. Discrete shearlet transform: faithful digitization concept and its applications

    NASA Astrophysics Data System (ADS)

    Lim, Wang-Q.

    2011-09-01

    Over the past years, various representation systems which sparsely approximate functions governed by anisotropic features such as edges in images have been proposed. Alongside the theoretical development of these systems, algorithmic realizations of the associated transforms were provided. However, one of the most common short-comings of these frameworks is the lack of providing a unified treatment of the continuum and digital world, i.e., allowing a digital theory to be a natural digitization of the continuum theory. Shearlets were introduced as means to sparsely encode anisotropic singularities of multivariate data while providing a unified treatment of the continuous and digital realm. In this paper, we introduce a discrete framework which allows a faithful digitization of the continuum domain shearlet transform based on compactly supported shearlets. Finally, we show numerical experiments demonstrating the potential of the discrete shearlet transform in several image processing applications.

  7. Some characteristics of supernetworks based on unified hybrid network theory framework

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Fang, Jin-Qing; Li, Yong

    Compared with single complex networks, supernetworks are in some ways closer to the real world, and they have recently become a research hot spot in network science. Some progress has been made in the study of supernetworks, but the theoretical methods and the complex-network characteristics of supernetwork models still need further exploration. In this paper, we propose three kinds of three-layer supernetwork models based on the unified hybrid network theory framework (UHNTF), and introduce preferential and random linking, respectively, between the upper and lower layers. We then compare the topological characteristics of single networks with those of the supernetwork models. In order to analyze the influence of the interlayer edges on network characteristics, the cross-degree is defined as a new important parameter. Some interesting new phenomena are found, and the results imply that this supernetwork model has reference value and application potential.

  8. Snoopy--a unifying Petri net framework to investigate biomolecular networks.

    PubMed

    Rohr, Christian; Marwan, Wolfgang; Heiner, Monika

    2010-04-01

    To investigate biomolecular networks, Snoopy provides a unifying Petri net framework comprising a family of related Petri net classes. Models can be hierarchically structured, allowing for the mastering of larger networks. To move easily between the qualitative, stochastic and continuous modelling paradigms, models can be converted into each other. We get models sharing structure, but specialized by their kinetic information. The analysis and iterative reverse engineering of biomolecular networks is supported by the simultaneous use of several Petri net classes, while the graphical user interface adapts dynamically to the active one. Built-in animation and simulation are complemented by exports to various analysis tools. Snoopy facilitates the addition of new Petri net classes thanks to its generic design. Our tool with Petri net samples is available free of charge for non-commercial use at http://www-dssz.informatik.tu-cottbus.de/snoopy.html; supported operating systems: Mac OS X, Windows and Linux (selected distributions).

  9. An Application of Artificial Intelligence to the Implementation of Electronic Commerce

    NASA Astrophysics Data System (ADS)

    Srivastava, Anoop Kumar

    In this paper, we present an application of Artificial Intelligence (AI) to the implementation of Electronic Commerce. We provide a multi-autonomous-agent based framework. Our agent-based architecture leads to the flexible design of a spectrum of multiagent systems (MAS) by distributing computation and by providing a unified interface to data and programs. The autonomous agents are intelligent and provide autonomy, simplicity of communication and computation, and well-developed semantics. The steps of design and implementation are discussed in depth, and the structure of the Electronic Marketplace, an ontology, the agent model, and the interaction pattern between agents are given. We have developed mechanisms for coordination between agents using a language called the Virtual Enterprise Modeling Language (VEML). VEML is an integration of Java and the Knowledge Query and Manipulation Language (KQML). VEML provides application programmers with the potential to develop different kinds of MAS based on their requirements and applications. We have implemented a multi-autonomous-agent based system called the VE System. We demonstrate the efficacy of our system by discussing experimental results and its salient features.

  10. Simulation of the M13 life cycle I: Assembly of a genetically-structured deterministic chemical kinetic simulation.

    PubMed

    Smeal, Steven W; Schmitt, Margaret A; Pereira, Ronnie Rodrigues; Prasad, Ashok; Fisk, John D

    2017-01-01

    To expand the quantitative, systems level understanding and foster the expansion of the biotechnological applications of the filamentous bacteriophage M13, we have unified the accumulated quantitative information on M13 biology into a genetically-structured, experimentally-based computational simulation of the entire phage life cycle. The deterministic chemical kinetic simulation explicitly includes the molecular details of DNA replication, mRNA transcription, protein translation and particle assembly, as well as the competing protein-protein and protein-nucleic acid interactions that control the timing and extent of phage production. The simulation reproduces the holistic behavior of M13, closely matching experimentally reported values of the intracellular levels of phage species and the timing of events in the M13 life cycle. The computational model provides a quantitative description of phage biology, highlights gaps in the present understanding of M13, and offers a framework for exploring alternative mechanisms of regulation in the context of the complete M13 life cycle. Copyright © 2016 Elsevier Inc. All rights reserved.
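
    A deterministic chemical-kinetic simulation of the kind summarised above boils down to integrating a system of rate equations. The sketch below is a toy two-species transcription/translation module with made-up rate constants, intended only to show the ODE structure; it is not the authors' genetically structured M13 model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical rates (1/min): transcription, translation, mRNA and protein turnover.
    k_tx, k_tl = 2.0, 5.0
    d_m, d_p = 0.2, 0.05

    def rhs(t, y):
        m, p = y
        dm = k_tx - d_m * m                  # mRNA balance
        dp = k_tl * m - d_p * p              # protein balance
        return [dm, dp]

    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], t_eval=np.linspace(0.0, 60.0, 7))
    for t, m, p in zip(sol.t, sol.y[0], sol.y[1]):
        print(f"t={t:5.1f} min  mRNA={m:6.2f}  protein={p:8.2f}")
    ```

    The full model adds one such balance per species and couples them through the protein-protein and protein-nucleic-acid interactions described in the abstract.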

  11. Subspace Clustering via Learning an Adaptive Low-Rank Graph.

    PubMed

    Yin, Ming; Xie, Shengli; Wu, Zongze; Zhang, Yun; Gao, Junbin

    2018-08-01

    By using a sparse or low-rank representation of data, graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built from the representation coefficients are not exact in the sense of the traditional, deterministic definition. Moreover, the two steps of representation and clustering are conducted independently, so an overall optimal result cannot be guaranteed, and it is unclear how the clustering performance is affected by the resulting graph. For example, the graph parameters, i.e., the weights on edges, have to be pre-specified artificially, while it is very difficult to choose the optimum. To this end, this paper proposes a novel subspace clustering method that learns an adaptive low-rank graph affinity matrix, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several well-known databases demonstrate that the proposed method outperforms state-of-the-art clustering approaches.

  12. Haptic simulation framework for determining virtual dental occlusion.

    PubMed

    Wu, Wen; Chen, Hui; Cen, Yuhai; Hong, Yang; Khambay, Balvinder; Heng, Pheng Ann

    2017-04-01

    The surgical treatment of many dentofacial deformities is often complex due to its three-dimensional nature. To determine the dental occlusion in the most stable position is essential for the success of the treatment. Computer-aided virtual planning on individualized patient-specific 3D model can help formulate the surgical plan and predict the surgical change. However, in current computer-aided planning systems, it is not possible to determine the dental occlusion of the digital models in the intuitive way during virtual surgical planning because of absence of haptic feedback. In this paper, a physically based haptic simulation framework is proposed, which can provide surgeons with the intuitive haptic feedback to determine the dental occlusion of the digital models in their most stable position. To provide the physically realistic force feedback when the dental models contact each other during the searching process, the contact model is proposed to describe the dynamic and collision properties of the dental models during the alignment. The simulated impulse/contact-based forces are integrated into the unified simulation framework. A validation study has been conducted on fifteen sets of virtual dental models chosen at random and covering a wide range of the dental relationships found clinically. The dental occlusions obtained by an expert were employed as a benchmark to compare the virtual occlusion results. The mean translational and angular deviations of the virtual occlusion results from the benchmark were small. The experimental results show the validity of our method. The simulated forces can provide valuable insights to determine the virtual dental occlusion. The findings of this work and the validation of proposed concept lead the way for full virtual surgical planning on patient-specific virtual models allowing fully customized treatment plans for the surgical correction of dentofacial deformities.

  13. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula

    PubMed Central

    Giordano, Bruno L.; Kayser, Christoph; Rousselet, Guillaume A.; Gross, Joachim; Schyns, Philippe G.

    2016-01-01

    Abstract We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open‐source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541–1573, 2017. © 2016 Wiley Periodicals, Inc. PMID:27860095
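
    The core estimation step described above, combining the copula transform with the closed-form entropy of Gaussian variables, can be sketched for the simplest bivariate case. This is a minimal illustration of the idea rather than the accompanying toolbox, which also handles multivariate responses and bias correction; the test signals below are made up.

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def copula_normalise(x):
        """Map samples to standard-normal scores via their empirical CDF."""
        x = np.asarray(x, dtype=float)
        u = rankdata(x) / (len(x) + 1.0)     # empirical CDF values in (0, 1)
        return norm.ppf(u)

    def gcmi_bivariate(x, y):
        """Gaussian-copula MI estimate (a lower bound, in bits) between two 1-D variables."""
        gx, gy = copula_normalise(x), copula_normalise(y)
        r = np.corrcoef(gx, gy)[0, 1]
        return -0.5 * np.log2(1.0 - r ** 2)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(5000)
    y = np.tanh(x) + 0.5 * rng.standard_normal(5000)   # nonlinear but monotone relationship
    print(f"GCMI estimate: {gcmi_bivariate(x, y):.3f} bits")
    ```

    Because the copula step discards the marginal distributions, the estimate depends only on the dependence structure, which is what makes it robust to the heavy-tailed margins typical of neuroimaging signals.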

  14. Unified framework for information integration based on information geometry

    PubMed Central

    Oizumi, Masafumi; Amari, Shun-ichi

    2016-01-01

    Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner. PMID:27930289
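
    The geometric picture summarised above can be stated compactly; the notation here is assumed for illustration rather than taken from the paper.

    ```latex
    % Sketch (assumed notation): p is the actual joint distribution of the system's
    % past and present states, and M_D is the manifold of "disconnected" models in
    % which causal influences among elements are statistically severed.
    \[
      \Phi \;=\; \min_{q \in M_D}
        D_{\mathrm{KL}}\!\bigl( p(\mathbf{x}_t, \mathbf{x}_{t+1}) \,\big\|\, q(\mathbf{x}_t, \mathbf{x}_{t+1}) \bigr),
      \qquad
      D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{\mathbf{x}_t,\,\mathbf{x}_{t+1}} p \,\log\frac{p}{q}.
    \]
    ```

    Different choices of the disconnected manifold M_D recover the different measures listed in the abstract (mutual information, transfer entropy, stochastic interaction, integrated information).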

  15. Towards a unified model of passive drug permeation I: origins of the unstirred water layer with applications to ionic permeation.

    PubMed

    Ghosh, Avijit; Scott, Dennis O; Maurer, Tristan S

    2014-02-14

    In this work, we provide a unified theoretical framework describing how drug molecules can permeate across membranes in neutral and ionized forms for unstirred in vitro systems. The analysis provides a self-consistent basis for the origin of the unstirred water layer (UWL) within the Nernst-Planck framework in the fully unstirred limit and further provides an accounting mechanism based simply on the bulk aqueous solvent diffusion constant of the drug molecule. Our framework makes no new assumptions about the underlying physics of molecular permeation. We hold simply that Nernst-Planck is a reasonable approximation at low concentrations and all physical systems must conserve mass. The applicability of the derived framework has been examined both with respect to the effect of stirring and externally applied voltages to measured permeability. The analysis contains data for 9 compounds extracted from the literature representing a range of permeabilities and aqueous diffusion coefficients. Applicability with respect to ionized permeation is examined using literature data for the permanently charged cation, crystal violet, providing a basis for the underlying mechanism for ionized drug permeation for this molecule as being due to mobile counter-current flow. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. A Unified Framework for Street-View Panorama Stitching

    PubMed Central

    Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei

    2016-01-01

    In this paper, we propose a unified framework to generate a pleasant and high-quality street-view panorama by stitching multiple panoramic images captured from the cameras mounted on the mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Since the input images are captured without a precisely common projection center from scenes with varying depth with respect to the cameras, such images cannot be precisely aligned in geometry. Therefore, an efficient image warping method based on the dense optical flow field is first proposed to greatly suppress the influence of large geometric misalignment. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm that matches extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via the graph cut energy minimization framework. At last, the Laplacian pyramid blending algorithm is applied to further eliminate the stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481
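
    The final blending step mentioned above is standard Laplacian-pyramid blending, which can be sketched with OpenCV. This is a toy two-image example on synthetic single-channel data with a soft left/right mask, not the full pipeline with optical-flow warping, colour correction and graph-cut seams.

    ```python
    import numpy as np
    import cv2

    def gaussian_pyramid(img, levels):
        pyr = [img]
        for _ in range(levels):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def laplacian_pyramid(img, levels):
        gp = gaussian_pyramid(img, levels)
        lp = [gp[-1]]
        for i in range(levels, 0, -1):
            up = cv2.pyrUp(gp[i], dstsize=(gp[i - 1].shape[1], gp[i - 1].shape[0]))
            lp.append(gp[i - 1] - up)
        return lp[::-1]                      # finest level first

    def blend(img_a, img_b, mask, levels=4):
        la = laplacian_pyramid(img_a.astype(np.float32), levels)
        lb = laplacian_pyramid(img_b.astype(np.float32), levels)
        gm = gaussian_pyramid(mask.astype(np.float32), levels)
        blended = [m * a + (1.0 - m) * b for a, b, m in zip(la, lb, gm)]
        out = blended[-1]                    # start from the coarsest level
        for lap in blended[-2::-1]:          # up-sample and add finer details
            out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
        return np.clip(out, 0, 255).astype(np.uint8)

    # Synthetic 256x256 "images" and a soft mask fading from left (image a) to right (image b).
    a = np.full((256, 256), 200, np.uint8)
    b = np.full((256, 256), 60, np.uint8)
    mask = np.tile(np.linspace(1.0, 0.0, 256, dtype=np.float32), (256, 1))
    result = blend(a, b, mask)
    print(result.shape, result[:, :8].mean(), result[:, -8:].mean())
    ```

    Blending each pyramid level separately hides low-frequency exposure differences across the seam while keeping high-frequency detail sharp, which is why it removes the residual stitching artifacts.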

  17. A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.

    PubMed

    Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf

    2018-01-01

    Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.

  18. A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models

    PubMed Central

    Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S.; Wu, Xiaowei; Müller, Rolf

    2017-01-01

    Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design. PMID:29749977

  19. Conservative boundary conditions for 3D gas dynamics problems

    NASA Technical Reports Server (NTRS)

    Gerasimov, B. P.; Karagichev, A. B.; Semushin, S. A.

    1986-01-01

    A method is described for 3D-gas dynamics computer simulation in regions of complicated shape by means of nonadjusted rectangular grids providing unified treatment of various problems. Some test problem computation results are given.

  20. Brain-Mind Operational Architectonics Imaging: Technical and Methodological Aspects

    PubMed Central

    Fingelkurts, Andrew A; Fingelkurts, Alexander A

    2008-01-01

    This review paper deals with the methodological and technical foundations of the Operational Architectonics framework of brain and mind functioning. This theory provides a framework for mapping and understanding important aspects of the brain mechanisms that constitute perception, cognition, and eventually consciousness. The methods utilized within the Operational Architectonics framework allow detailed analysis of the operational behavior of local neuronal assemblies and their joint activity in the form of unified and metastable operational modules, which constitute the whole hierarchy of brain operations, operations of cognition and phenomenal consciousness. PMID:19526071

  1. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
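
    The isolation logic described above, where the estimator matched to the actual fault keeps all residual components below threshold while every other estimator violates at least one, can be sketched as a simple consistency check. The residuals, thresholds and fault labels below are hypothetical placeholders for the paper's nonlinear adaptive estimators and adaptive thresholds.

    ```python
    import numpy as np

    FAULT_HYPOTHESES = ["sensor fault", "actuator fault", "component fault"]

    def isolate(residuals, thresholds):
        """Declare the unique hypothesis whose residual components all stay below threshold."""
        consistent = [h for h in FAULT_HYPOTHESES
                      if np.all(np.abs(residuals[h]) <= thresholds[h])]
        return consistent[0] if len(consistent) == 1 else "undecided"

    # Hypothetical snapshot of the residual bank and (here constant) thresholds.
    thresholds = {h: np.array([0.5, 0.5, 0.5]) for h in FAULT_HYPOTHESES}
    residuals = {
        "sensor fault":    np.array([0.1, 0.2, 0.3]),   # all components below threshold
        "actuator fault":  np.array([0.9, 0.1, 0.2]),   # one component exceeds threshold
        "component fault": np.array([0.2, 1.4, 0.6]),
    }
    print(isolate(residuals, thresholds))                # -> "sensor fault"
    ```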

  2. A High Performance Bayesian Computing Framework for Spatiotemporal Uncertainty Modeling

    NASA Astrophysics Data System (ADS)

    Cao, G.

    2015-12-01

    All types of spatiotemporal measurements are subject to uncertainty. As spatiotemporal data become increasingly involved in scientific research and decision making, it is important to model the impact of this uncertainty appropriately. Quantitatively modeling spatiotemporal uncertainty, however, is a challenging problem given the complex dependence structures and data heterogeneities. State-space models provide a unifying and intuitive framework for dynamic systems modeling. In this paper, we aim to extend conventional state-space models to uncertainty modeling in space-time contexts while accounting for spatiotemporal effects and data heterogeneities. Gaussian Markov Random Field (GMRF) models, also known as conditional autoregressive models, are arguably the most commonly used methods for modeling spatially dependent data. GMRF models assume that a geo-referenced variable depends primarily on its neighborhood (the Markov property), with the spatial dependence structure described via a precision matrix. Recent studies have shown that GMRFs are efficient approximations to the commonly used Gaussian fields (e.g., Kriging), and compared with Gaussian fields, GMRFs enjoy a series of appealing features, such as fast computation and easy accommodation of heterogeneities in spatial data (e.g., point and areal). This paper represents each spatial dataset as a GMRF and integrates them into a state-space form to statistically model the temporal dynamics. Different types of spatial measurements (e.g., categorical, count or continuous) can be accounted for by appropriate link functions. A fast alternative to the MCMC framework, the Integrated Nested Laplace Approximation (INLA), is adopted for model inference. Preliminary case studies will be conducted to showcase the advantages of the described framework. In the first case, we apply the proposed method to modeling the water table elevation of the Ogallala aquifer over the past decades. In the second case, we analyze drought impacts in Texas counties in recent years, where the spatiotemporal dynamics are represented in areal data.
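
    The Markov structure of a GMRF shows up directly in the sparsity of its precision matrix. The sketch below builds a generic first-order conditional autoregressive precision on a small lattice and draws one sample; it is a toy illustration under an assumed CAR specification, not the paper's state-space/INLA model, and a real application would use sparse matrices throughout.

    ```python
    import numpy as np

    def car_precision(nrow, ncol, rho=0.95, tau=1.0):
        """Q = tau * (D - rho * W) for a rook-neighbourhood lattice (assumed CAR form)."""
        n = nrow * ncol
        W = np.zeros((n, n))
        for r in range(nrow):
            for c in range(ncol):
                i = r * ncol + c
                for dr, dc in ((1, 0), (0, 1)):          # right and down neighbours
                    rr, cc = r + dr, c + dc
                    if rr < nrow and cc < ncol:
                        j = rr * ncol + cc
                        W[i, j] = W[j, i] = 1.0
        D = np.diag(W.sum(axis=1))
        return tau * (D - rho * W)                        # positive definite for |rho| < 1

    Q = car_precision(4, 4)
    print(Q.shape)                              # (16, 16) precision matrix
    print(np.count_nonzero(Q) / Q.size)         # sparse: only neighbours interact

    # One draw from the GMRF (dense linear algebra is fine for this 16-node toy field;
    # in practice one would use a sparse Cholesky factorization of Q instead).
    L = np.linalg.cholesky(np.linalg.inv(Q))
    sample = L @ np.random.default_rng(0).standard_normal(16)
    print(sample.reshape(4, 4).round(2))
    ```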

  3. Reframing Information Literacy as a Metaliteracy

    ERIC Educational Resources Information Center

    Mackey, Thomas P.; Jacobson, Trudi E.

    2011-01-01

    Social media environments and online communities are innovative collaborative technologies that challenge traditional definitions of information literacy. Metaliteracy is an overarching and self-referential framework that integrates emerging technologies and unifies multiple literacy types. This redefinition of information literacy expands the…

  4. Implicit gas-kinetic unified algorithm based on multi-block docking grid for multi-body reentry flows covering all flow regimes

    NASA Astrophysics Data System (ADS)

    Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu

    2016-12-01

    Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from the highly rarefied free-molecule and transition regimes to the continuum, a new implicit, cell-centered finite-volume scheme is presented for directly solving the unified Boltzmann model equation across flow regimes. In view of the difficulty of generating a high-quality single-block grid for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, with a data structure that handles arbitrary connection relations between blocks efficiently and reliably. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid is established for the first time and used to solve reentry flow problems around multiple bodies covering all flow regimes, with Knudsen numbers ranging from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing supersonic flows in the near-continuum and continuum regimes around a circular cylinder and are compared carefully with each other, showing that the present algorithm and modelling possess much higher computational efficiency and faster convergence. Flow problems involving two and three side-by-side cylinders are simulated from highly rarefied to near-continuum regimes, and the computed results agree well with related DSMC simulations and theoretical solutions, verifying the accuracy and reliability of the present method. It is observed that as the spacing between the bodies decreases, the obstruction at the cylindrical throat increases, the flow field around each body becomes more obviously asymmetric, and the normal force coefficient grows; in the near-continuum transitional regime typical of near-space flight, once the spacing increases to six times the diameter of a single body, the interference effects between the bodies become negligible. This computing practice confirms that the present method is feasible for computing the aerodynamics and revealing the flow mechanisms around complex multi-body vehicles covering all flow regimes, from the gas-kinetic point of view of solving the unified Boltzmann model velocity-distribution-function equation.

  5. NeuCube: a spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data.

    PubMed

    Kasabov, Nikola K

    2014-04-01

    The brain functions as a spatio-temporal information processing machine. Spatio- and spectro-temporal brain data (STBD) are the most commonly collected data for measuring brain response to external stimuli. An enormous amount of such data has already been collected, including brain structural and functional data under different conditions and molecular and genetic data, in an attempt to make progress in medicine, health, cognitive science, engineering, education, neuro-economics, Brain-Computer Interfaces (BCI), and games. Yet there is no unifying computational framework to deal with all these types of data in order to better understand the data and the processes that generated it. Standard machine learning techniques have only partially succeeded, and they were not designed in the first instance to deal with such complex data. Therefore, there is a need for a new paradigm to deal with STBD. This paper reviews some methods of spiking neural networks (SNN) and argues that SNN are suitable for the creation of a unifying computational framework for learning and understanding of various STBD, such as EEG, fMRI, genetic, DTI, MEG, and NIRS, in their integration and interaction. One of the reasons is that SNN use the same computational principle that generates STBD, namely spiking information processing. This paper introduces a new SNN architecture, called NeuCube, for the creation of concrete models to map, learn and understand STBD. A NeuCube model is based on a 3D evolving SNN that is an approximate map of the structural and functional areas of interest of the brain related to the modeled STBD. Gene information is included optionally, in the form of gene regulatory networks (GRN), if this is relevant to the problem and the data. A NeuCube model learns from STBD and creates connections between clusters of neurons that manifest chains (trajectories) of neuronal activity. Once learning is applied, a NeuCube model can reproduce these trajectories even if only part of the input STBD or the stimuli data is presented, thus acting as an associative memory. The NeuCube framework can be used not only to discover functional pathways from data, but also as a predictive system of brain activities, to predict and possibly prevent certain events. Analysis of the internal structure of a model after training can reveal important spatio-temporal relationships 'hidden' in the data. NeuCube will allow the integration in one model of various brain data, information and knowledge related to a single subject (personalized modeling) or to a population of subjects. The use of NeuCube for classification of STBD is illustrated in a case study problem of EEG data. NeuCube models result in better accuracy of STBD classification than standard machine learning techniques. They are robust to noise (so typical in brain data) and facilitate a better interpretation of the results and understanding of the STBD and the brain conditions under which the data were collected. Future directions for the use of SNN for STBD are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Unified sensor management in unknown dynamic clutter

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2010-04-01

    In recent years the first author has developed a unified, computationally tractable approach to multisensor-multitarget sensor management. This approach consists of closed-loop recursion of a PHD or CPHD filter with maximization of a "natural" sensor management objective function called PENT (posterior expected number of targets). In this paper we extend this approach so that it can be used in unknown, dynamic clutter backgrounds.

  7. Statistical Mechanics of the Cytoskeleton

    NASA Astrophysics Data System (ADS)

    Wang, Shenshen

    The mechanical integrity of eukaryotic cells along with their capability of dynamic remodeling depends on their cytoskeleton, a structural scaffold made up of a complex and dense network of filamentous proteins spanning the cytoplasm. Active force generation within the cytoskeletal networks by molecular motors is ultimately powered by the consumption of chemical energy and conversion of that energy into mechanical work. The resulting functional movements range from the collective cell migration in epithelial tissues responsible for wound healing to the changes of cell shape that occur during muscle contraction, as well as all the internal structural rearrangements essential for cell division. The role of the cytoskeleton as a dynamic versatile mesoscale "muscle", whose passive and active performance is both highly heterogeneous in space and time and intimately linked to diverse biological functions, allows it to serve as a sensitive indicator for the health and developmental state of the cell. By approaching this natural nonequilibrium many-body system from a variety of perspectives, researchers have made major progress toward understanding the cytoskeleton's unusual mechanical, dynamical and structural properties. Yet a unifying framework capable of capturing both the dynamics of active pattern formation and the emergence of spontaneous collective motion, that allows one to predict the dependence of the model's control parameters on motor properties, is still needed. In the following we construct a microscopic model and provide a theoretical framework to investigate the intricate interplay between local force generation, network architecture and collective motor action. This framework is able to accommodate both regular and heterogeneous pattern formation, as well as arrested coarsening and macroscopic contraction in a unified manner, through the notion of motor-driven effective interactions. Moreover a systematic expansion scheme combined with a variational stability analysis yields a threshold strength of motor kicking noise, below which the motorized system behaves as if it were at an effective equilibrium, but with a nontrivial effective temperature. Above the threshold, however, collective directed motion emerges spontaneously. Computer simulations support the theoretical predictions and highlight the essential role played in large-scale contraction by spatial correlation in motor kicking events.

  8. Modeling and control of operator functional state in a unified framework of fuzzy inference petri nets.

    PubMed

    Zhang, Jian-Hua; Xia, Jia-Jun; Garibaldi, Jonathan M; Groumpos, Petros P; Wang, Ru-Bin

    2017-06-01

    In human-machine (HM) hybrid control systems, the human operator and the machine cooperate to achieve the control objectives. To enhance the overall HM system performance, the discrete manual control task-load of the operator must be dynamically allocated in accordance with the continuous-time fluctuation of the operator's psychophysiological functional status, the so-called operator functional state (OFS). The behavior of the HM system is hybrid in nature due to the co-existence of a discrete task-load (control) variable and a continuous operator performance (system output) variable. Petri nets are an effective tool for modeling discrete event systems, but for hybrid systems involving both discrete and continuous dynamics the Petri net model generally has to be extended. Instead of using different tools to represent the continuous and discrete components of a hybrid system, this paper proposes a fuzzy inference Petri net (FIPN) method to represent the HM hybrid system, comprising a Mamdani-type fuzzy model of OFS and a logical switching controller in a unified framework, in which the task-load level is dynamically reallocated between the operator and the machine based on the model-predicted OFS. Furthermore, this paper uses a multi-model approach to predict operator performance based on three electroencephalographic (EEG) input variables (features) via the Wang-Mendel (WM) fuzzy modeling method. The membership function parameters of the fuzzy OFS model for each experimental participant were optimized using the artificial bee colony (ABC) evolutionary algorithm. Three performance indices, RMSE, MRE, and EPR, were computed to evaluate the overall modeling accuracy. Experimental data from six participants were analyzed. The results show that the proposed method (FIPN with adaptive task allocation) yields a lower breakdown rate (reduced from 14.8% to 3.27%) and higher human performance (improved from 90.30% to 91.99%). The simulation results of the FIPN-based adaptive HM (AHM) system on six experimental participants demonstrate that the FIPN framework provides an effective way to model and regulate/optimize the OFS in HM hybrid systems composed of a continuous-time OFS model and a discrete-event switching controller. Copyright © 2017 Elsevier B.V. All rights reserved.
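
    Of the three performance indices named above, RMSE and MRE admit a short sketch; EPR is omitted because its definition is not given here. The series below are hypothetical, and this is not the authors' code.

```python
# Two of the modelling-accuracy indices named in the abstract, computed for a
# hypothetical predicted-vs-observed operator-performance series. EPR is omitted
# because its definition is not given here; this is an illustrative sketch only.
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mre(y_true, y_pred):
    """Mean relative error (assumed definition: mean of |error| / |observed|)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred) / np.abs(y_true)))

y_obs = [0.82, 0.88, 0.91, 0.86, 0.90]   # hypothetical observed performance
y_hat = [0.80, 0.89, 0.93, 0.84, 0.91]   # hypothetical fuzzy-model predictions
print(rmse(y_obs, y_hat), mre(y_obs, y_hat))
```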

  9. Unified treatment of microscopic boundary conditions and efficient algorithms for estimating tangent operators of the homogenized behavior in the computational homogenization method

    NASA Astrophysics Data System (ADS)

    Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic

    2017-03-01

    This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition, either by direct constraint elimination or by Lagrange multiplier elimination. The macroscopic tangent operators are computed efficiently from a linear system with multiple right-hand sides, whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of right-hand-side vectors equals the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. Since the resolution of the microscopic linearized system often relies on a direct factorization procedure, the computation of the macroscopic tangent operators can reuse this factorized matrix at a reduced computational cost.
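
    The tangent-operator step described above can be sketched as follows: factorize a (stand-in) microscopic stiffness once and solve one right-hand side per macroscopic kinematic variable, reusing the factorization. Matrix sizes and data are assumed purely for illustration; this is not the authors' implementation.

```python
# Sketch of the tangent-operator step described above: factorize the converged
# microscopic stiffness once, then solve one linear system per macroscopic
# kinematic variable (multiple right-hand sides) and reuse the factorization.
# The matrices here are random stand-ins, not an actual homogenization problem.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_dof, n_macro = 200, 6           # microscopic DOFs, macroscopic kinematic variables (assumed)
rng = np.random.default_rng(1)

# Symmetric positive-definite stand-in for the converged microscopic stiffness.
A = sp.random(n_dof, n_dof, density=0.05, random_state=1)
K = (A @ A.T + sp.identity(n_dof) * n_dof).tocsc()

B = rng.standard_normal((n_dof, n_macro))   # stand-in right-hand sides

lu = spla.splu(K)                 # direct factorization, done once
X = lu.solve(B)                   # all right-hand sides reuse the same factors

# A (stand-in) macroscopic tangent then follows from contractions like B^T X.
C_macro = B.T @ X
print(C_macro.shape)              # (6, 6)
```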

  10. A UML profile for framework modeling.

    PubMed

    Xu, Xiao-liang; Wang, Le-yu; Zhou, Hong

    2004-01-01

    The current standard Unified Modeling Language (UML) could not model framework flexibility and extendability adequately due to a lack of appropriate constructs to distinguish framework hot-spots from kernel elements. A new UML profile that customizes UML for framework modeling was presented using the extension mechanisms of UML, providing a group of UML extensions to meet the needs of framework modeling. In this profile, extended class diagrams and sequence diagrams were defined to straightforwardly identify the hot-spots and describe their instantiation restrictions. A transformation model based on design patterns was also put forward, such that the profile-based framework design diagrams could be automatically mapped to the corresponding implementation diagrams. It was shown that the presented profile makes framework modeling more straightforward and therefore easier to understand and instantiate.

  11. Partly cloudy with a chance of migration: Weather, radars, and aeroecology

    USDA-ARS?s Scientific Manuscript database

    Aeroecology is an emerging scientific discipline that integrates atmospheric science, terrestrial science, geography, ecology, computer science, computational biology, and engineering to further the understanding of ecological patterns and processes. The unifying concept underlying this new transdis...

  12. XFEM modeling of hydraulic fracture in porous rocks with natural fractures

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Liu, ZhanLi; Zeng, QingLei; Gao, Yue; Zhuang, Zhuo

    2017-08-01

    Hydraulic fracture (HF) in porous rocks is a complex multi-physics coupling process which involves fluid flow, diffusion and solid deformation. In this paper, the extended finite element method (XFEM), coupled with Biot theory, is developed to study HF in permeable rocks with natural fractures (NFs). In recent XFEM-based computational HF models, the fluid flow in the fractures and in the interstices of the porous media is mostly solved separately, which brings difficulties in dealing with complex fracture morphology. In our new model the fluid flow is solved in a unified framework by treating the fractures as a special kind of porous media and introducing Poiseuille-type flow inside them instead of Darcy-type flow. The main advantage is that it becomes very convenient to deal with fluid flow inside a complex fracture network, which is important in shale gas extraction. The weak formulation for the new coupled model is derived based on the virtual work principle, which includes the XFEM formulation for multiple fractures and fracture intersections in porous media and the finite element formulation for the unified fluid flow. Then the plane strain Kristianovic-Geertsma-de Klerk (KGD) model and the fluid flow inside a fracture network are simulated to validate the accuracy and applicability of this method. The numerical results show that a large injection rate, low rock permeability and isotropic in-situ stresses tend to lead to a more uniform and productive fracture network.

  13. A Unified Approach to Genotype Imputation and Haplotype-Phase Inference for Large Data Sets of Trios and Unrelated Individuals

    PubMed Central

    Browning, Brian L.; Browning, Sharon R.

    2009-01-01

    We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
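
    As a rough illustration of the allelic R2 accuracy measure (the squared correlation between true and imputed allele dosages), the sketch below computes it directly when the truth is available; the paper's point, estimating it from posterior genotype probabilities alone, is not reproduced here. All data are simulated stand-ins.

```python
# Illustrative sketch of allelic R-squared as a squared correlation between true
# allele dosages and dosages derived from posterior genotype probabilities.
# The paper's contribution is estimating this quantity *without* the true
# genotypes, from the posteriors alone; that estimator is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
true_dosage = rng.binomial(2, 0.2, size=n).astype(float)      # hypothetical true alt-allele counts

# Hypothetical posterior-mean dosages, here generated by adding noise around the
# truth purely for illustration (a real imputation method supplies these).
noise = rng.normal(0, 0.4, size=n)
expected_dosage = np.clip(true_dosage + noise, 0, 2)

r = np.corrcoef(true_dosage, expected_dosage)[0, 1]
allelic_r2 = r ** 2
print(f"allelic R^2 ~ {allelic_r2:.3f}")
```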

  14. Intuitive and deliberate judgments are based on common principles.

    PubMed

    Kruglanski, Arie W; Gigerenzer, Gerd

    2011-01-01

    A popular distinction in cognitive and social psychology has been between intuitive and deliberate judgments. This juxtaposition has aligned in dual-process theories of reasoning associative, unconscious, effortless, heuristic, and suboptimal processes (assumed to foster intuitive judgments) versus rule-based, conscious, effortful, analytic, and rational processes (assumed to characterize deliberate judgments). In contrast, we provide convergent arguments and evidence for a unified theoretical approach to both intuitive and deliberative judgments. Both are rule-based, and in fact, the very same rules can underlie both intuitive and deliberate judgments. The important open question is that of rule selection, and we propose a 2-step process in which the task itself and the individual's memory constrain the set of applicable rules, whereas the individual's processing potential and the (perceived) ecological rationality of the rule for the task guide the final selection from that set. Deliberate judgments are not generally more accurate than intuitive judgments; in both cases, accuracy depends on the match between rule and environment: the rules' ecological rationality. Heuristics that are less effortful and in which parts of the information are ignored can be more accurate than cognitive strategies that have more information and computation. The proposed framework adumbrates a unified approach that specifies the critical dimensions on which judgmental situations may vary and the environmental conditions under which rules can be expected to be successful.

  15. PyMVPA: A Unifying Approach to the Analysis of Neuroscientific Data

    PubMed Central

    Hanke, Michael; Halchenko, Yaroslav O.; Sederberg, Per B.; Olivetti, Emanuele; Fründ, Ingo; Rieger, Jochem W.; Herrmann, Christoph S.; Haxby, James V.; Hanson, Stephen José; Pollmann, Stefan

    2008-01-01

    The Python programming language is steadily increasing in popularity as the language of choice for scientific computing. The ability of this scripting environment to access a huge code base in various languages, combined with its syntactical simplicity, make it the ideal tool for implementing and sharing ideas among scientists from numerous fields and with heterogeneous methodological backgrounds. The recent rise of reciprocal interest between the machine learning (ML) and neuroscience communities is an example of the desire for an inter-disciplinary transfer of computational methods that can benefit from a Python-based framework. For many years, a large fraction of both research communities have addressed, almost independently, very high-dimensional problems with almost completely non-overlapping methods. However, a number of recently published studies that applied ML methods to neuroscience research questions attracted a lot of attention from researchers from both fields, as well as the general public, and showed that this approach can provide novel and fruitful insights into the functioning of the brain. In this article we show how PyMVPA, a specialized Python framework for machine learning based data analysis, can help to facilitate this inter-disciplinary technology transfer by providing a single interface to a wide array of machine learning libraries and neural data-processing methods. We demonstrate the general applicability and power of PyMVPA via analyses of a number of neural data modalities, including fMRI, EEG, MEG, and extracellular recordings. PMID:19212459

  16. Self-Efficacy: Toward a Unifying Theory of Behavioral Change

    ERIC Educational Resources Information Center

    Bandura, Albert

    1977-01-01

    This research presents an integrative theoretical framework to explain and to predict psychological changes achieved by different modes of treatment. This theory states that psychological procedures, whatever their form, alter the level and strength of "self-efficacy". (Editor/RK)

  17. COMPLEMENTARITY OF ECOLOGICAL GOAL FUNCTIONS

    EPA Science Inventory

    This paper summarizes, in the framework of network environ analysis, a set of analyses of energy-matter flow and storage in steady state systems. The network perspective is used to codify and unify ten ecological orientors or external principles: maximum power (Lotka), maximum st...

  18. Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.

    PubMed

    Calvin, Nicholas T; J McDowell, J

    2015-11-01

    For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account for behavior. The unified theory of reinforcement states that operant and respondent learning occurs via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this project was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of the inability of these networks to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluate the unified theory of reinforcement via simulation is critiqued and alternatives are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Rapid indirect trajectory optimization on highly parallel computing architectures

    NASA Astrophysics Data System (ADS)

    Antony, Thomas

    Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions, while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method, in which complex trajectory solutions are obtained by solving a sequence of progressively more difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPU), which were originally developed for 3D graphics rendering, have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by the construction of maximum terminal velocity trajectories for a hypothetical long range weapon system. The techniques used to construct an initial guess from an analytic near-ballistic trajectory, and the methods used to formulate the necessary conditions of optimality in a manner that is transparent to the designer, are discussed. Various hypothetical mission scenarios that enforce different combinations of initial, terminal, interior point and path constraints demonstrate the rapid construction of complex trajectories without requiring any a priori insight into the structure of the solutions. Trajectory problems of this kind were previously considered impractical to solve using indirect methods. The performance of the GPU-accelerated solver is found to be 2x-4x faster than MATLAB's bvp4c, even while running on GPU hardware that is five years behind the state-of-the-art.
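
    A minimal CPU-only sketch of the indirect multiple-shooting idea is given below for a standard test problem (minimum-effort control of a double integrator); the continuation strategy and CUDA acceleration described above are not reproduced, and the problem setup is assumed for illustration.

```python
# A minimal CPU sketch of indirect multiple shooting for a classic test problem:
# minimum-effort control of a double integrator (x' = v, v' = u, with u = -lambda_v
# from the necessary conditions). Two shooting segments are patched together by
# continuity constraints. The GPU/CUDA acceleration and continuation strategy
# described above are not reproduced here.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

def odes(t, z):
    x, v, lx, lv = z
    return [v, -lv, 0.0, -lx]          # state + costate dynamics, u* = -lambda_v

def propagate(z0, t0, t1):
    sol = solve_ivp(odes, (t0, t1), z0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

def residuals(p):
    lx0, lv0 = p[:2]                   # unknown initial costates
    z_mid = p[2:]                      # unknown full state at the segment boundary
    z_half = propagate([0.0, 0.0, lx0, lv0], 0.0, 0.5)   # segment 1: fixed x(0) = v(0) = 0
    z_end = propagate(z_mid, 0.5, 1.0)                   # segment 2
    cont = z_half - z_mid                                # 4 continuity conditions
    bc = [z_end[0] - 1.0, z_end[1]]                      # terminal conditions x(1) = 1, v(1) = 0
    return np.concatenate([cont, bc])

guess = np.zeros(6)
sol = root(residuals, guess, tol=1e-10)
print("converged:", sol.success, "initial costates:", sol.x[:2])
```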

  20. Chimaera simulation of complex states of flowing matter

    PubMed Central

    2016-01-01

    We discuss a unified mesoscale framework (chimaera) for the simulation of complex states of flowing matter across scales of motion. The chimaera framework can deal with each of the three macro–meso–micro levels through suitable ‘mutations’ of the basic mesoscale formulation. The idea is illustrated through selected simulations of complex micro- and nanoscale flows. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698031

  1. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    PubMed

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency.

  2. Computational motor control: feedback and accuracy.

    PubMed

    Guigon, Emmanuel; Baraduc, Pierre; Desmurget, Michel

    2008-02-01

    Speed/accuracy trade-off is a ubiquitous phenomenon in motor behaviour, which has been ascribed to the presence of signal-dependent noise (SDN) in motor commands. Although this explanation can provide a quantitative account of many aspects of motor variability, including Fitts' law, the fact that this law is frequently violated, e.g. during the acquisition of new motor skills, remains unexplained. Here, we describe a principled approach to the influence of noise on motor behaviour, in which motor variability results from the interplay between sensory and motor execution noises in an optimal feedback-controlled system. In this framework, we first show that Fitts' law arises due to signal-dependent motor noise (SDN(m)) when sensory (proprioceptive) noise is low, e.g. under visual feedback. Then we show that the terminal variability of non-visually guided movement can be explained by the presence of signal-dependent proprioceptive noise. Finally, we show that movement accuracy can be controlled by opposite changes in signal-dependent sensory (SDN(s)) and SDN(m), a phenomenon that could be ascribed to muscular co-contraction. As the model also explains kinematics, kinetics, muscular and neural characteristics of reaching movements, it provides a unified framework to address motor variability.
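
    The signal-dependent-noise ingredient of this account can be shown with a toy simulation (not the authors' optimal-feedback-control model): command noise whose standard deviation scales with command magnitude makes endpoint scatter grow with movement amplitude, the raw material of Fitts-like speed-accuracy trade-offs. All parameters are assumed.

```python
# Toy illustration of signal-dependent motor noise (SDN): the standard deviation
# of each motor command is proportional to its magnitude, so larger/faster
# movements end with larger endpoint scatter. This shows only the noise
# ingredient behind Fitts-like trade-offs, not the optimal feedback-control
# model described above.
import numpy as np

rng = np.random.default_rng(3)
k_sdn = 0.2            # assumed noise-to-signal ratio
n_trials = 5000
dt, T = 0.01, 0.5
n_steps = int(T / dt)

for amplitude in (5.0, 10.0, 20.0):        # larger amplitude in the same time = faster movement
    u = np.full(n_steps, amplitude / T)    # constant-velocity command profile (simplistic)
    noise = rng.normal(0.0, k_sdn * np.abs(u), size=(n_trials, n_steps))
    endpoints = ((u + noise) * dt).sum(axis=1)
    print(f"amplitude {amplitude:5.1f}: endpoint sd = {endpoints.std():.3f}")
```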

  3. Probing solvation decay length in order to characterize hydrophobicity-induced bead-bead attractive interactions in polymer chains.

    PubMed

    Das, Siddhartha; Chakraborty, Suman

    2011-08-01

    In this paper, we quantitatively demonstrate that exponentially decaying attractive potentials can effectively mimic strong hydrophobic interactions between monomer units of a polymer chain dissolved in aqueous solvent. Classical approaches to modeling hydrophobic solvation interactions are based on invariant attractive length scales. However, we demonstrate here that the solvation interaction decay length may need to be posed as a function of the relative separation distances and the sizes of the interacting species (or beads or monomers) to replicate the necessary physical interactions. As an illustrative example, we derive a universal scaling relationship for a given solute-solvent combination between the solvation decay length, the bead radius, and the distance between the interacting beads. With our formalism, the hydrophobic component of the net attractive interaction between monomer units can be synergistically accounted for within the unified framework of a simple exponentially decaying potential law, where the characteristic decay length incorporates the distinctive and critical physical features of the underlying interaction. The present formalism, even in a mesoscopic computational framework, is capable of incorporating the essential physics of the appropriate solute-size dependence and solvent-interaction dependence in the hydrophobic force estimation, without explicitly resolving the underlying molecular level details.

  4. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The proposed approach, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
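
    The range-bounding property that underlies the Bernstein approach can be sketched briefly: on [0, 1] a polynomial lies between the smallest and largest of its Bernstein coefficients. The sketch below converts power-basis coefficients of a hypothetical response metric and checks the enclosure by brute force; moments, failure probabilities, and epistemic supersets are not reproduced.

```python
# Minimal sketch of the range-bounding property behind the Bernstein approach:
# on [0, 1], a polynomial always lies between the smallest and largest of its
# Bernstein coefficients, so those coefficients give rigorous (if conservative)
# bounds without sampling.
import numpy as np
from math import comb

def bernstein_coeffs(a):
    """Power-basis coefficients a[i] (p(x) = sum a[i] x^i) -> Bernstein coefficients on [0, 1]."""
    n = len(a) - 1
    return np.array([sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
                     for k in range(n + 1)])

a = [1.0, -3.0, 2.5]                # p(x) = 1 - 3x + 2.5x^2 (hypothetical response metric)
b = bernstein_coeffs(a)
lo, hi = b.min(), b.max()

xs = np.linspace(0.0, 1.0, 1001)    # brute-force check of the enclosure
p = np.polyval(a[::-1], xs)
print(f"Bernstein bounds: [{lo:.3f}, {hi:.3f}]  true range: [{p.min():.3f}, {p.max():.3f}]")
```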

  5. Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?

    PubMed

    Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend

    2011-10-11

    In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
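
    The waste-recycling idea can be illustrated with a single-trial CPU toy (far simpler than the multi-trial GPU algorithms above): at each Metropolis step the rejected trial also contributes to the estimator, weighted by its acceptance probability. The harmonic-oscillator target and all parameters are assumed for illustration.

```python
# Toy CPU illustration of waste recycling: at every Metropolis step the trial
# state contributes to the estimator weighted by its acceptance probability,
# and the current state contributes with the complementary weight, instead of
# discarding rejected trials. Target: 1-D harmonic potential U(x) = x^2 / 2 at
# kT = 1, so <x^2> should approach 1. The multi-trial, GPU/CUDA zeolite
# implementation described above is far more elaborate than this sketch.
import numpy as np

rng = np.random.default_rng(4)
beta, step, n_steps = 1.0, 1.5, 100_000
U = lambda x: 0.5 * x * x

x = 0.0
plain_sum = wr_sum = 0.0
for _ in range(n_steps):
    y = x + rng.uniform(-step, step)
    p_acc = min(1.0, np.exp(-beta * (U(y) - U(x))))
    # Waste-recycling estimator: both the trial y and the current state x contribute.
    wr_sum += p_acc * y * y + (1.0 - p_acc) * x * x
    if rng.random() < p_acc:
        x = y
    plain_sum += x * x                     # conventional estimator for comparison

print(f"<x^2> conventional = {plain_sum / n_steps:.4f}, waste-recycled = {wr_sum / n_steps:.4f} (exact 1.0)")
```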

  6. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
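
    The kind of contraction these libraries accelerate can be illustrated with a plain numpy sketch (no symmetry blocking, distributed memory, or GPU backend, and not Libtensor's API); array sizes and contents are stand-ins.

```python
# Minimal illustration of the kind of contraction that dominates coupled-cluster
# codes: a particle-particle ladder-like term
#   t_new[i,j,a,b] = sum_{c,d} V[a,b,c,d] * t[i,j,c,d].
# Real codes (e.g. the Libtensor extension above) exploit spin/point-group
# symmetry, blocking, and distributed or GPU backends; none of that appears here.
import numpy as np

n_occ, n_virt = 8, 20                      # hypothetical orbital-space sizes
rng = np.random.default_rng(5)
V = rng.standard_normal((n_virt, n_virt, n_virt, n_virt))   # two-electron integrals (stand-in)
t = rng.standard_normal((n_occ, n_occ, n_virt, n_virt))     # doubles amplitudes (stand-in)

# einsum maps this contraction onto batched matrix-matrix multiplication internally.
t_new = np.einsum("abcd,ijcd->ijab", V, t, optimize=True)
print(t_new.shape)                         # (8, 8, 20, 20)
```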

  7. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE PAGES

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...

    2017-03-08

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.

  8. Implementation is crucial but must be neurobiologically grounded. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    NASA Astrophysics Data System (ADS)

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L.

    2014-09-01

    From the perspective of language, Fitch's [1] claim that theories of cognitive computation should not be separated from those of implementation surely deserves applauding. Recent developments in the Cognitive Neuroscience of Language, leading to the new field of the Neurobiology of Language [2-4], emphasise precisely this point: rather than attempting to simply map cognitive theories of language onto the brain, we should aspire to understand how the brain implements language. This perspective resonates with many of the points raised by Fitch in his review, such as the discussion of unhelpful dichotomies (e.g., Nature versus Nurture). Cognitive dichotomies and debates have repeatedly turned out to be of limited usefulness when it comes to understanding language in the brain. The famous modularity-versus-interactivity and dual route-versus-connectionist debates are cases in point: in spite of hundreds of experiments using neuroimaging (or other techniques), or the construction of myriad computer models, little progress has been made in their resolution. This suggests that dichotomies proposed at a purely cognitive (or computational) level without consideration of biological grounding appear to be "asking the wrong questions" about the neurobiology of language. In accordance with these developments, several recent proposals explicitly consider neurobiological constraints while seeking to explain language processing at a cognitive level (e.g. [5-7]).

  9. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  10. Synaptic State Matching: A Dynamical Architecture for Predictive Internal Representation and Feature Detection

    PubMed Central

    Tavazoie, Saeed

    2013-01-01

    Here we explore the possibility that a core function of sensory cortex is the generation of an internal simulation of sensory environment in real-time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single unifying computational framework. PMID:23991161

  11. Understanding trends in C-H bond activation in heterogeneous catalysis.

    PubMed

    Latimer, Allegra A; Kulkarni, Ambarish R; Aljama, Hassan; Montoya, Joseph H; Yoo, Jong Suk; Tsai, Charlie; Abild-Pedersen, Frank; Studt, Felix; Nørskov, Jens K

    2017-02-01

    While the search for catalysts capable of directly converting methane to higher value commodity chemicals and liquid fuels has been active for over a century, a viable industrial process for selective methane activation has yet to be developed. Electronic structure calculations are playing an increasingly relevant role in this search, but large-scale materials screening efforts are hindered by computationally expensive transition state barrier calculations. The purpose of the present letter is twofold. First, we show that, for the wide range of catalysts that proceed via a radical intermediate, a unifying framework for predicting C-H activation barriers using a single universal descriptor can be established. Second, we combine this scaling approach with a thermodynamic analysis of active site formation to provide a map of methane activation rates. Our model successfully rationalizes the available empirical data and lays the foundation for future catalyst design strategies that transcend different catalyst classes.
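
    A single-descriptor scaling relation of the kind described can be sketched as a one-line linear model; the slope and intercept below are placeholders rather than the fitted values from the paper.

```python
# Minimal sketch of a single-descriptor scaling relation of the kind described
# above: the C-H activation barrier is predicted as a linear function of one
# descriptor (e.g. the hydrogen-abstraction energy of the active site). The slope
# and intercept are placeholders, not the fitted values from the paper.
import numpy as np

alpha, beta = 0.75, 1.10      # hypothetical slope (dimensionless) and intercept (eV)

def predicted_barrier(descriptor_energy_ev):
    """Estimate the C-H activation barrier (eV) from the universal descriptor."""
    return alpha * np.asarray(descriptor_energy_ev, float) + beta

descriptors = np.array([-0.8, -0.3, 0.2, 0.6])   # hypothetical descriptor values (eV)
print(predicted_barrier(descriptors))
```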

  12. Understanding trends in C–H bond activation in heterogeneous catalysis

    DOE PAGES

    Latimer, Allegra A.; Kulkarni, Ambarish R.; Aljama, Hassan; ...

    2016-10-10

    While the search for catalysts capable of directly converting methane to higher value commodity chemicals and liquid fuels has been active for over a century, a viable industrial process for selective methane activation has yet to be developed. Electronic structure calculations are playing an increasingly relevant role in this search, but large-scale materials screening efforts are hindered by computationally expensive transition state barrier calculations. The purpose of the present letter is twofold. First, we show that, for the wide range of catalysts that proceed via a radical intermediate, a unifying framework for predicting C–H activation barriers using a single universal descriptor can be established. Second, we combine this scaling approach with a thermodynamic analysis of active site formation to provide a map of methane activation rates. Lastly, our model successfully rationalizes the available empirical data and lays the foundation for future catalyst design strategies that transcend different catalyst classes.

  13. Higher Curvature Gravity from Entanglement in Conformal Field Theories.

    PubMed

    Haehl, Felix M; Hijano, Eliot; Parrikar, Onkar; Rabideau, Charles

    2018-05-18

    By generalizing different recent works to the context of higher curvature gravity, we provide a unifying framework for three related results: (i) If an asymptotically anti-de Sitter (AdS) spacetime computes the entanglement entropies of ball-shaped regions in a conformal field theory using a generalized Ryu-Takayanagi formula up to second order in state deformations around the vacuum, then the spacetime satisfies the correct gravitational equations of motion up to second order around the AdS background. (ii) The holographic dual of entanglement entropy in higher curvature theories of gravity is given by the Wald entropy plus a particular correction term involving extrinsic curvatures. (iii) Conformal field theory relative entropy is dual to gravitational canonical energy (also in higher curvature theories of gravity). Especially for the second point, our novel derivation of this previously known statement does not involve the Euclidean replica trick.

  14. Higher Curvature Gravity from Entanglement in Conformal Field Theories

    NASA Astrophysics Data System (ADS)

    Haehl, Felix M.; Hijano, Eliot; Parrikar, Onkar; Rabideau, Charles

    2018-05-01

    By generalizing different recent works to the context of higher curvature gravity, we provide a unifying framework for three related results: (i) If an asymptotically anti-de Sitter (AdS) spacetime computes the entanglement entropies of ball-shaped regions in a conformal field theory using a generalized Ryu-Takayanagi formula up to second order in state deformations around the vacuum, then the spacetime satisfies the correct gravitational equations of motion up to second order around the AdS background. (ii) The holographic dual of entanglement entropy in higher curvature theories of gravity is given by the Wald entropy plus a particular correction term involving extrinsic curvatures. (iii) Conformal field theory relative entropy is dual to gravitational canonical energy (also in higher curvature theories of gravity). Especially for the second point, our novel derivation of this previously known statement does not involve the Euclidean replica trick.

  15. Understanding trends in C-H bond activation in heterogeneous catalysis

    NASA Astrophysics Data System (ADS)

    Latimer, Allegra A.; Kulkarni, Ambarish R.; Aljama, Hassan; Montoya, Joseph H.; Yoo, Jong Suk; Tsai, Charlie; Abild-Pedersen, Frank; Studt, Felix; Nørskov, Jens K.

    2017-02-01

    While the search for catalysts capable of directly converting methane to higher value commodity chemicals and liquid fuels has been active for over a century, a viable industrial process for selective methane activation has yet to be developed. Electronic structure calculations are playing an increasingly relevant role in this search, but large-scale materials screening efforts are hindered by computationally expensive transition state barrier calculations. The purpose of the present letter is twofold. First, we show that, for the wide range of catalysts that proceed via a radical intermediate, a unifying framework for predicting C-H activation barriers using a single universal descriptor can be established. Second, we combine this scaling approach with a thermodynamic analysis of active site formation to provide a map of methane activation rates. Our model successfully rationalizes the available empirical data and lays the foundation for future catalyst design strategies that transcend different catalyst classes.

  16. Finding order in complexity: themes from the career of Dr. Robert F. Wagner

    NASA Astrophysics Data System (ADS)

    Myers, Kyle J.

    2009-02-01

    Over the course of his long and productive career, Dr. Robert F. Wagner built a framework for the evaluation of imaging systems based on a task-based, decision theoretic approach. His most recent contributions involved the consideration of the random effects associated with multiple readers of medical images and the logical extension of this work to the problem of the evaluation of multiple competing classifiers in statistical pattern recognition. This contemporary work expanded on familiar themes from Bob's many SPIE presentations in earlier years. It was driven by the need for practical solutions to current problems facing FDA's Center for Devices and Radiological Health and the medical imaging community regarding the assessment of new computer-aided diagnosis tools, and by Bob's unique ability to unify concepts across a range of disciplines as he gave order to increasingly complex problems in our field.

  17. Genetic mixed linear models for twin survival data.

    PubMed

    Ha, Il Do; Lee, Youngjo; Pawitan, Yudi

    2007-07-01

    Twin studies are useful for assessing the relative importance of genetic or heritable component from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to analysis of twin survival data. Due to limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical-likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by the survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.

  18. Model-based Executive Control through Reactive Planning for Autonomous Rovers

    NASA Technical Reports Server (NTRS)

    Finzi, Alberto; Ingrand, Felix; Muscettola, Nicola

    2004-01-01

    This paper reports on the design and implementation of a real-time executive for a mobile rover that uses a model-based, declarative approach. The control system is based on the Intelligent Distributed Execution Architecture (IDEA), an approach to planning and execution that provides a unified representational and computational framework for an autonomous agent. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting agents, each with the same fundamental structure. We show that planning and real-time response are compatible if the executive minimizes the size of the planning problem. We detail the implementation of this approach on an exploration rover (Gromit an RWI ATRV Junior at NASA Ames) presenting different IDEA controllers of the same domain and comparing them with more classical approaches. We demonstrate that the approach is scalable to complex coordination of functional modules needed for autonomous navigation and exploration.

  19. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    NASA Astrophysics Data System (ADS)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
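
    Basic (non-adaptive) pulse compression reduces to a matched filter, which the CPU sketch below implements with numpy FFTs on a simulated chirp return; the cuFFT/cuBLAS implementation and the adaptive kernels discussed above are not reproduced, and all signal parameters are assumed.

```python
# CPU sketch of basic FFT-based pulse compression: matched-filter an LFM chirp
# against a noisy return containing a delayed copy of it. The CUDA (cuFFT/cuBLAS)
# implementation and the adaptive pulse-compression kernels described above are
# not reproduced here.
import numpy as np

fs, T, B = 10e6, 20e-6, 2e6                 # sample rate, pulse width, chirp bandwidth (assumed)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2) # linear-FM reference pulse

n_rx, delay = 4096, 1000                    # receive-window length and target delay in samples
rng = np.random.default_rng(6)
rx = 0.05 * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
rx[delay:delay + chirp.size] += chirp       # echo buried in noise

# Fast correlation: multiply the return spectrum by the conjugate reference spectrum.
n_fft = int(2 ** np.ceil(np.log2(n_rx + chirp.size)))
compressed = np.fft.ifft(np.fft.fft(rx, n_fft) * np.conj(np.fft.fft(chirp, n_fft)))
print("peak at sample", int(np.argmax(np.abs(compressed))), "(expected", delay, ")")
```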

  20. Diverse types of genetic variation converge on functional gene networks involved in schizophrenia.

    PubMed

    Gilman, Sarah R; Chang, Jonathan; Xu, Bin; Bawa, Tejdeep S; Gogos, Joseph A; Karayiorgou, Maria; Vitkup, Dennis

    2012-12-01

    Despite the successful identification of several relevant genomic loci, the underlying molecular mechanisms of schizophrenia remain largely unclear. We developed a computational approach (NETBAG+) that allows an integrated analysis of diverse disease-related genetic data using a unified statistical framework. The application of this approach to schizophrenia-associated genetic variations, obtained using unbiased whole-genome methods, allowed us to identify several cohesive gene networks related to axon guidance, neuronal cell mobility, synaptic function and chromosomal remodeling. The genes forming the networks are highly expressed in the brain, with higher brain expression during prenatal development. The identified networks are functionally related to genes previously implicated in schizophrenia, autism and intellectual disability. A comparative analysis of copy number variants associated with autism and schizophrenia suggests that although the molecular networks implicated in these distinct disorders may be related, the mutations associated with each disease are likely to lead, at least on average, to different functional consequences.

  1. Probabilistic self-organizing maps for continuous data.

    PubMed

    Lopez-Rubio, Ezequiel

    2010-10-01

    The original self-organizing feature map did not define any probability distribution on the input space. However, the advantages of introducing probabilistic methodologies into self-organizing map models were soon evident. This has led to a wide range of proposals which reflect the current emergence of probabilistic approaches to computational intelligence. The underlying estimation theories behind them derive from two main lines of thought: the expectation maximization methodology and stochastic approximation methods. Here, we present a comprehensive view of the state of the art, with a unifying perspective of the involved theoretical frameworks. In particular, we examine the most commonly used continuous probability distributions, self-organization mechanisms, and learning schemes. Special emphasis is given to the connections among them and their relative advantages depending on the characteristics of the problem at hand. Furthermore, we evaluate their performance in two typical applications of self-organizing maps: classification and visualization.

  2. Model-based object classification using unification grammars and abstract representations

    NASA Astrophysics Data System (ADS)

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  3. Unifying Complexity and Information

    NASA Astrophysics Data System (ADS)

    Ke, Da-Guan

    2013-04-01

    Complex systems, arising in many contexts in the computer, life, social, and physical sciences, have not shared a generally-accepted complexity measure playing a fundamental role like that of the Shannon entropy H in statistical mechanics. Superficially-conflicting criteria of complexity measurement, i.e. complexity-randomness (C-R) relations, have given rise to a special measure intrinsically adaptable to more than one criterion. However, the deep causes of the conflict and of the adaptability are not very clear. Here I trace the root of each representative or adaptable measure to its particular universal data-generating or -regenerating model (UDGM or UDRM). A representative measure for deterministic dynamical systems is found as a counterpart of the H for random processes, clearly redefining the boundary between different criteria. And a specific UDRM achieving the intrinsic adaptability enables a general information measure that ultimately resolves all major disputes. This work encourages a single framework covering deterministic systems, statistical mechanics and real-world living organisms.

  4. Polynomial algebra of discrete models in systems biology.

    PubMed

    Veliz-Cuba, Alan; Jarrah, Abdul Salam; Laubenbacher, Reinhard

    2010-07-01

    An increasing number of discrete mathematical models are being published in Systems Biology, ranging from Boolean network models to logical models and Petri nets. They are used to model a variety of biochemical networks, such as metabolic networks, gene regulatory networks and signal transduction networks. There is increasing evidence that such models can capture key dynamic features of biological networks and can be used successfully for hypothesis generation. This article provides a unified framework that can aid the mathematical analysis of Boolean network models, logical models and Petri nets. They can be represented as polynomial dynamical systems, which allows the use of a variety of mathematical tools from computer algebra for their analysis. Algorithms are presented for the translation into polynomial dynamical systems. Examples are given of how polynomial algebra can be used for the model analysis. alanavc@vt.edu Supplementary data are available at Bioinformatics online.
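    The translation the article describes rests on the fact that any Boolean function can be written as a polynomial over the field F2 (AND maps to xy, OR to x + y + xy, NOT to x + 1). The sketch below is a toy illustration of that encoding, not the authors' software; the three-node network and its update rules are invented for the example.

      from itertools import product

      # Toy 3-node Boolean network, written directly as polynomials over F2:
      #   AND -> x*y, OR -> x + y + x*y, NOT -> 1 + x   (all arithmetic mod 2)
      def f1(x1, x2, x3):          # x1' = x2 AND (NOT x3)
          return (x2 * (1 + x3)) % 2

      def f2(x1, x2, x3):          # x2' = x1 OR x3
          return (x1 + x3 + x1 * x3) % 2

      def f3(x1, x2, x3):          # x3' = NOT x1
          return (1 + x1) % 2

      def step(state):
          return (f1(*state), f2(*state), f3(*state))

      # Exhaustive state-transition table and fixed points of the dynamical system.
      for state in product((0, 1), repeat=3):
          nxt = step(state)
          marker = "  <- fixed point" if nxt == state else ""
          print(state, "->", nxt, marker)

    Once in polynomial form, fixed points become solutions of the system f_i(x) = x_i over F2, which is the kind of question the computer-algebra tools mentioned in the abstract (e.g., Groebner basis methods) are designed to answer at scale.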

  5. Critical length scale controls adhesive wear mechanisms

    PubMed Central

    Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois

    2016-01-01

    The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand the use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270

  6. A unified framework for evaluating the risk of re-identification of text de-identification tools.

    PubMed

    Scaiano, Martin; Middleton, Grant; Arbuckle, Luk; Kolhatkar, Varada; Peyton, Liam; Dowling, Moira; Gipson, Debbie S; El Emam, Khaled

    2016-10-01

    It has become regular practice to de-identify unstructured medical text for use in research using automatic methods, the goal of which is to remove patient-identifying information to minimize re-identification risk. The metrics commonly used to determine if these systems are performing well do not accurately reflect the risk of a patient being re-identified. We therefore developed a framework for measuring the risk of re-identification associated with textual data releases. We apply the proposed evaluation framework to a data set from the University of Michigan Medical School. Our risk assessment results are then compared with those that would be obtained using a typical contemporary micro-average evaluation of recall in order to illustrate the difference between the proposed evaluation framework and the current baseline method. We demonstrate how this framework compares against common measures of the re-identification risk associated with an automated text de-identification process. For the probability of re-identification using our evaluation framework we obtained a mean value for direct identifiers of 0.0074 and a mean value for quasi-identifiers of 0.0022. The 95% confidence intervals for these estimates were below the relevant thresholds. The threshold for direct identifier risk was based on previously used approaches in the literature. The threshold for quasi-identifiers was determined based on the context of the data release following commonly used de-identification criteria for structured data. Our framework attempts to correct for poorly distributed evaluation corpora, accounts for the data release context, and avoids the often optimistic assumptions that are made using the more traditional evaluation approach. It therefore provides a more realistic estimate of the true probability of re-identification. This framework should be used as a basis for computing re-identification risk in order to more realistically evaluate future text de-identification tools. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
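    To make the contrast between a corpus-level recall figure and a record-level view of risk concrete, here is a deliberately simplified toy calculation. The leak counts, and the rule that any missed direct identifier puts a record at risk, are illustrative assumptions of this sketch and not the paper's estimator, which also accounts for the release context and quasi-identifiers.

      # Toy contrast between micro-averaged recall and a per-record risk view.
      # Each record lists (number of direct identifiers, number missed by the
      # de-identification tool); the numbers below are invented for illustration.
      records = [(4, 0), (3, 0), (5, 0), (2, 0), (6, 3), (4, 0), (3, 0), (5, 0)]

      total = sum(n for n, _ in records)
      missed = sum(m for _, m in records)
      micro_recall = (total - missed) / total

      # Treat a record as exposed if any direct identifier leaked,
      # a simplistic stand-in for a record-level re-identification view.
      exposed = sum(1 for _, m in records if m > 0)
      fraction_exposed = exposed / len(records)

      print(f"micro-averaged recall: {micro_recall:.3f}")      # looks reassuring
      print(f"records with any leak: {fraction_exposed:.3f}")  # risk is concentrated

    In this toy example the micro-averaged recall is above 0.9 while one record in eight still leaks identifying information, which is the kind of mismatch the proposed framework is designed to expose.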

  7. Current and Future Development of a Non-hydrostatic Unified Atmospheric Model (NUMA)

    DTIC Science & Technology

    2010-09-09

    Excerpts from the briefing slides: "Highly scalable on current and future computer architectures (exascale computing and beyond, and GPUs)"; "Flexibility..."; "Exascale Computing: 10 of Top 500 are already in the Petascale range; should also keep our eyes on GPUs (e.g., Mare Nostrum)"; "Numerical..."

  8. Adaptive Allocation of Decision Making Responsibility Between Human and Computer in Multi-Task Situations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chu, Y. Y.

    1978-01-01

    A unified formulation of computer-aided, multi-task decision making is presented. A strategy for the allocation of decision-making responsibility between human and computer is developed. The plans of a flight management system are studied. A model based on queueing theory was implemented.

  9. Community health workers in Brazil's Unified Health System: a framework of their praxis and contributions to patient health behaviors.

    PubMed

    Pinto, Rogério M; da Silva, Sueli Bulhões; Soriano, Rafaela

    2012-03-01

    Community health workers (CHWs) play a pivotal role in primary care, serving as liaisons between community members and medical providers. However, the growing reliance of health care systems worldwide on CHWs has outpaced research explaining their praxis - how they combine indigenous and technical knowledge, overcome challenges and impact patient outcomes. This paper thus articulates the CHW Praxis and Patient Health Behavior Framework. Such a framework is needed to advance research on CHW impact on patient outcomes and to advance CHW training. The project that originated this framework followed community-based participatory research principles. A team of U.S.-Brazil research partners, including CHWs, worked together from conceptualization of the study to dissemination of its findings. The framework is built on an integrated conceptual foundation including learning/teaching and individual behavior theories. The empirical base of the framework comprises in-depth interviews with 30 CHWs in Brazil's Unified Health System, Mesquita, Rio de Janeiro. Data collection for the project which originated this report occurred in 2008-10. Semi-structured questions examined how CHWs used their knowledge/skills; addressed personal and environmental challenges; and how they promoted patient health behaviors. This study advances an explanation of how CHWs use self-identified strategies--i.e., empathic communication and perseverance--to help patients engage in health behaviors. Grounded in our proposed framework, survey measures can be developed and used in predictive models testing the effects of CHW praxis on health behaviors. Training for CHWs can explicitly integrate indigenous and technical knowledge in order for CHWs to overcome contextual challenges and enhance service delivery. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Community Health Workers in Brazil's Unified Health System: A Framework of their Praxis and Contributions to Patient Health Behaviors

    PubMed Central

    Pinto, Rogério M.; da Silva, Sueli Bulhões; Soriano, Rafaela

    2012-01-01

    Community Health Workers (CHWs) play a pivotal role in primary care, serving as liaisons between community members and medical providers. However, the growing reliance of health care systems worldwide on CHWs has outpaced research explaining their praxis – how they combine indigenous and technical knowledge, overcome challenges and impact patient outcomes. This paper thus articulates the CHW Praxis and Patient Health Behavior Framework. Such a framework is needed to advance research on CHW impact on patient outcomes and to advance CHW training. The project that originated this framework followed Community-Based Participatory Research principles. A team of U.S.-Brazil research partners, including CHWs, worked together from conceptualization of the study to dissemination of its findings. The framework is built on an integrated conceptual foundation including learning/teaching and individual behavior theories. The empirical base of the framework comprises in-depth interviews with 30 CHWs in Brazil's Unified Health System, Mesquita, Rio de Janeiro. Data collection for the project which originated this report occurred in 2008–10. Semi-structured questions examined how CHWs used their knowledge/skills; addressed personal and environmental challenges; and how they promoted patient health behaviors. This study advances an explanation of how CHWs use self-identified strategies – i.e., empathic communication and perseverance – to help patients engage in health behaviors. Grounded in our proposed framework, survey measures can be developed and used in predictive models testing the effects of CHW praxis on health behaviors. Training for CHWs can explicitly integrate indigenous and technical knowledge in order for CHWs to overcome contextual challenges and enhance service delivery. PMID:22305469

  11. MELCOR computer code manuals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  12. Neuromorphic log-domain silicon synapse circuits obey bernoulli dynamics: a unifying tutorial analysis

    PubMed Central

    Papadimitriou, Konstantinos I.; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M.

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology. PMID:25653579

  13. Neuromorphic log-domain silicon synapse circuits obey bernoulli dynamics: a unifying tutorial analysis.

    PubMed

    Papadimitriou, Konstantinos I; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4(th)) order topology.

  14. Computational Unification: a Vision for Connecting Researchers

    NASA Astrophysics Data System (ADS)

    Troy, R. M.; Kingrey, O. J.

    2002-12-01

    Computational Unification of science, once only a vision, is becoming a reality. This technology is based upon a scientifically defensible, general solution for Earth Science data management and processing. The computational unification of science offers a real opportunity to foster inter and intra-discipline cooperation, and the end of 're-inventing the wheel'. As we move forward using computers as tools, it is past time to move from computationally isolating, "one-off" or discipline-specific solutions into a unified framework where research can be more easily shared, especially with researchers in other disciplines. The author will discuss how distributed meta-data, distributed processing and distributed data objects are structured to constitute a working interdisciplinary system, including how these resources lead to scientific defensibility through known lineage of all data products. Illustration of how scientific processes are encapsulated and executed illuminates how previously written processes and functions are integrated into the system efficiently and with minimal effort. Meta-data basics will illustrate how intricate relationships may easily be represented and used to good advantage. Retrieval techniques will be discussed including trade-offs of using meta-data versus embedded data, how the two may be integrated, and how simplifying assumptions may or may not help. This system is based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, whose goals were to find an alternative to the Hughes EOS-DIS system and is presently offered by Science Tools corporation, of which the author is a principal.

  15. Classical Markov Chains: A Unifying Framework for Understanding Avian Reproductive Success

    EPA Science Inventory

    Traditional methods for monitoring and analysis of avian nesting success have several important shortcomings, including 1) inability to handle multiple classes of nest failure, and 2) inability to provide estimates of annual reproductive success (because birds can, and typically ...

  16. Do changes in connectivity explain desertification?

    USDA-ARS?s Scientific Manuscript database

    Desertification, broad-scale land degradation in drylands, is a major environmental hazard facing inhabitants of the world’s deserts as well as an important component of global change. There is no unifying framework that simply and effectively explains different forms of desertification. Here we arg...

  17. An Assessment of Security Vulnerabilities Comprehension of Cloud Computing Environments: A Quantitative Study Using the Unified Theory of Acceptance and Use

    ERIC Educational Resources Information Center

    Venkatesh, Vijay P.

    2013-01-01

    The current computing landscape owes its roots to the birth of hardware and software technologies from the 1940s and 1950s. Since then, the advent of mainframes, miniaturized computing, and internetworking has given rise to the now prevalent cloud computing era. In the past few months just after 2010, cloud computing adoption has picked up pace…

  18. A unified framework for unraveling the functional interaction structure of a biomolecular network based on stimulus-response experimental data.

    PubMed

    Cho, Kwang-Hyun; Choo, Sang-Mok; Wellstead, Peter; Wolkenhauer, Olaf

    2005-08-15

    We propose a unified framework for the identification of functional interaction structures of biomolecular networks in a way that leads to a new experimental design procedure. In developing our approach, we have built upon previous work. Thus we begin by pointing out some of the restrictions associated with existing structure identification methods and indicate how these restrictions may be eased. In particular, existing methods use specific forms of experimental algebraic equations with which to identify the functional interaction structure of a biomolecular network. In our work, we employ an extended form of these experimental algebraic equations which, while retaining their merits, also overcomes some of their disadvantages. Experimental data are required in order to estimate the coefficients of the experimental algebraic equation set associated with the structure identification task. However, experimentalists are rarely provided with guidance on which parameters to perturb and to what extent to perturb them. When a model of network dynamics is required then there is also the vexed question of sample rate and sample time selection to be resolved. Supplying some answers to these questions is the main motivation of this paper. The approach is based on stationary and/or temporal data obtained from parameter perturbations, and unifies the previous approaches of Kholodenko et al. (PNAS 99 (2002) 12841-12846) and Sontag et al. (Bioinformatics 20 (2004) 1877-1886). By way of demonstration, we apply our unified approach to a network model which cannot be properly identified by existing methods. Finally, we propose an experiment design methodology, which is not limited by the number of parameter perturbations, and illustrate its use with an in numero example.
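    The kind of experimental algebraic relation the authors build on can be made concrete with a small numerical sketch in the spirit of Kholodenko-style modular response analysis: if each perturbed parameter acts on exactly one node, the steady-state global response matrix R can be inverted and row-normalized to recover the sign pattern of the local interaction structure. The three-node Jacobian and parameter sensitivities below are invented for illustration; this is not the authors' extended equation set.

      import numpy as np

      # Invented 3-node example: Jacobian J encodes the "true" local interactions
      # (df_i/dx_j at steady state); each parameter p_j acts only on node j.
      J = np.array([[-1.0,  0.0,  0.0],    # node 1: self-decay only
                    [ 0.8, -1.0,  0.0],    # node 2 activated by node 1
                    [ 0.0, -0.5, -1.0]])   # node 3 inhibited by node 2
      P = np.diag([0.3, 0.4, 0.2])         # df_i/dp_i (diagonal by assumption)

      # Steady-state global responses to the parameter perturbations:
      #   J dx + P dp = 0  =>  dx/dp = -J^{-1} P
      R = -np.linalg.inv(J) @ P

      # Recover the normalized local response matrix from R alone:
      #   r_ij = -(R^{-1})_ij / (R^{-1})_ii, so that diag(r) = -1.
      Rinv = np.linalg.inv(R)
      r = -Rinv / np.diag(Rinv)[:, None]

      print(np.round(r, 3))   # off-diagonal signs match the interactions in J

    In an experiment R would come from measured steady-state shifts under perturbations rather than from a known Jacobian, which is exactly where the paper's guidance on which parameters to perturb, and by how much, becomes important.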

  19. Unified Ultrasonic/Eddy-Current Data Acquisition

    NASA Technical Reports Server (NTRS)

    Chern, E. James; Butler, David W.

    1993-01-01

    Imaging station for detecting cracks and flaws in solid materials developed combining both ultrasonic C-scan and eddy-current imaging. Incorporation of both techniques into one system eliminates duplication of computers and of mechanical scanners; unifies acquisition, processing, and storage of data; reduces setup time for repetitious ultrasonic and eddy-current scans; and increases efficiency of system. Same mechanical scanner used to maneuver either ultrasonic or eddy-current probe over specimen and acquire point-by-point data. For ultrasonic scanning, probe linked to ultrasonic pulser/receiver circuit card, while, for eddy-current imaging, probe linked to impedance-analyzer circuit card. Both ultrasonic and eddy-current imaging subsystems share same desktop-computer controller, containing dedicated plug-in circuit boards for each.

  20. ATP3 Unified Field Study Data

    DOE Data Explorer

    Wolfrum, Ed (ORCID:0000000273618931); Knoshug, Eric (ORCID:000000025709914X); Laurens, Lieve (ORCID:0000000349303267); Harmon, Valerie; Dempster, Thomas (ORCID:000000029550488X); McGowan, John (ORCID:0000000266920518); Rosov, Theresa; Cardello, David; Arrowsmith, Sarah; Kempkes, Sarah; Bautista, Maria; Lundquist, Tryg; Crowe, Brandon; Murawsky, Garrett; Nicolai, Eric; Rowe, Egan; Knurek, Emily; Javar, Reyna; Saracco Alvarez, Marcela; Schlosser, Steve; Riddle, Mary; Withstandley, Chris; Chen, Yongsheng; Van Ginkel, Steven; Igou, Thomas; Xu, Chunyan; Hu, Zixuan

    2017-10-20

    ATP3 Unified Field Study Data The Algae Testbed Public-Private Partnership (ATP3) was established with the goal of investigating open pond algae cultivation across different geographic, climatic, seasonal, and operational conditions while setting the benchmark for quality data collection, analysis, and dissemination. Identical algae cultivation systems and data analysis methodologies were established at testbed sites across the continental United States and Hawaii. Within this framework, the Unified Field Studies (UFS) were designed to characterize the cultivation of different algal strains during all 4 seasons across this testbed network. The dataset presented here is the complete, curated, climatic, cultivation, harvest, and biomass composition data for each season at each site. These data enable others to do in-depth cultivation, harvest, techno-economic, life cycle, resource, and predictive growth modeling analysis, as well as develop crop protection strategies for the nascent algae industry. NREL Sub award Number: DE-AC36-08-GO28308

  1. A unified universe

    NASA Astrophysics Data System (ADS)

    Codello, Alessandro; Jain, Rajeev Kumar

    2018-05-01

    We present a unified evolution of the universe from very early times until the present epoch by including both the leading local correction R^2 and the leading non-local term R (1/□^2) R in the classical gravitational action. We find that the inflationary phase driven by the R^2 term gracefully exits in a transitory regime characterized by coherent oscillations of the Hubble parameter. The universe then naturally enters a radiation dominated epoch followed by a matter dominated era. At sufficiently late times after radiation-matter equality, the non-local term starts to dominate, inducing an accelerated expansion of the universe at the present epoch. We further exhibit the fact that both the leading local and non-local terms can be obtained within the covariant effective field theory of gravity. This scenario thus provides a unified picture of inflation and dark energy in a single framework by means of a purely gravitational action, without the usual need of a scalar field.

  2. Multilayer network of language: A unified framework for structural analysis of linguistic subsystems

    NASA Astrophysics Data System (ADS)

    Martinčić-Ipšić, Sanda; Margan, Domagoj; Meštrović, Ana

    2016-09-01

    Recently, the focus of complex networks' research has shifted from the analysis of isolated properties of a system toward a more realistic modeling of multiple phenomena - multilayer networks. Motivated by the prosperity of multilayer approach in social, transport or trade systems, we introduce the multilayer networks for language. The multilayer network of language is a unified framework for modeling linguistic subsystems and their structural properties enabling the exploration of their mutual interactions. Various aspects of natural language systems can be represented as complex networks, whose vertices depict linguistic units, while links model their relations. The multilayer network of language is defined by three aspects: the network construction principle, the linguistic subsystem and the language of interest. More precisely, we construct a word-level (syntax and co-occurrence) and a subword-level (syllables and graphemes) network layers, from four variations of original text (in the modeled language). The analysis and comparison of layers at the word and subword-levels are employed in order to determine the mechanism of the structural influences between linguistic units and subsystems. The obtained results suggest that there are substantial differences between the networks' structures of different language subsystems, which are hidden during the exploration of an isolated layer. The word-level layers share structural properties regardless of the language (e.g. Croatian or English), while the syllabic subword-level expresses more language dependent structural properties. The preserved weighted overlap quantifies the similarity of word-level layers in weighted and directed networks. Moreover, the analysis of motifs reveals a close topological structure of the syntactic and syllabic layers for both languages. The findings corroborate that the multilayer network framework is a powerful, consistent and systematic approach to model several linguistic subsystems simultaneously and hence to provide a more unified view on language.
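    As a toy illustration of the construction principle (not the authors' corpora or measures), the sketch below builds two of the layers described above for a snippet of English text: a word co-occurrence layer linking adjacent words, and a grapheme subword layer linking adjacent characters within each word. The layer names and the toy sentence are assumptions of the example.

      import networkx as nx

      text = "complex networks model language and language models complex systems"
      tokens = text.split()

      # Word-level layer: link words that co-occur within a window of one.
      word_layer = nx.Graph(layer="word_cooccurrence")
      for a, b in zip(tokens, tokens[1:]):
          w = word_layer.get_edge_data(a, b, {}).get("weight", 0)
          word_layer.add_edge(a, b, weight=w + 1)

      # Subword (grapheme) layer: link adjacent characters inside each word.
      grapheme_layer = nx.Graph(layer="grapheme")
      for word in set(tokens):
          for a, b in zip(word, word[1:]):
              w = grapheme_layer.get_edge_data(a, b, {}).get("weight", 0)
              grapheme_layer.add_edge(a, b, weight=w + 1)

      for g in (word_layer, grapheme_layer):
          print(g.graph["layer"], g.number_of_nodes(), "nodes",
                g.number_of_edges(), "edges")

    Comparing degree distributions, weighted overlap, or motif counts across such layers is then a matter of standard network statistics, which is the kind of cross-layer comparison the abstract reports for Croatian and English.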

  3. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so phenotype stability of biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental disturbances, is also proposed, together with a simulation example. PMID:23515190

  4. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology.

    PubMed

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so phenotype stability of biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental disturbances, is also proposed, together with a simulation example.

  5. Chimaera simulation of complex states of flowing matter.

    PubMed

    Succi, S

    2016-11-13

    We discuss a unified mesoscale framework (chimaera) for the simulation of complex states of flowing matter across scales of motion. The chimaera framework can deal with each of the three macro-meso-micro levels through suitable 'mutations' of the basic mesoscale formulation. The idea is illustrated through selected simulations of complex micro- and nanoscale flows.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).

  6. A unified framework for gesture recognition and spatiotemporal gesture segmentation.

    PubMed

    Alon, Jonathan; Athitsos, Vassilis; Yuan, Quan; Sclaroff, Stan

    2009-09-01

    Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American Sign Language (ASL).
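    A much-reduced sketch of the core idea, spatiotemporal matching that tolerates several candidate hand detections per frame, is given below: a dynamic-time-warping recurrence in which the frame-to-model cost is the best cost over that frame's candidates. The feature vectors and the plain Euclidean cost are assumptions of the example, and the paper's pruning classifier and subgesture reasoning stages are not reproduced.

      import numpy as np

      def match_gesture(model, candidates):
          """DTW-style score between a gesture model (list of feature vectors)
          and a video in which every frame carries several candidate hand
          detections; the local cost uses the best candidate in each frame."""
          S, T = len(model), len(candidates)
          # Local cost: best candidate match per (model state, frame) pair.
          local = np.array([[min(np.linalg.norm(np.asarray(m) - np.asarray(c))
                                 for c in frame)
                             for frame in candidates] for m in model])
          D = np.full((S, T), np.inf)
          D[0, 0] = local[0, 0]
          for t in range(1, T):
              D[0, t] = local[0, t] + D[0, t - 1]
          for s in range(1, S):
              D[s, 0] = local[s, 0] + D[s - 1, 0]
          for s in range(1, S):
              for t in range(1, T):
                  D[s, t] = local[s, t] + min(D[s - 1, t - 1], D[s - 1, t], D[s, t - 1])
          return D[S - 1, T - 1]

      # Toy usage: a 3-state model matched against 4 frames with 2 candidates each.
      model = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
      frames = [[(0.1, 0.0), (5.0, 5.0)], [(0.9, 0.1), (4.0, 4.0)],
                [(1.5, 0.0), (6.0, 6.0)], [(2.1, 0.1), (3.0, 3.0)]]
      print(round(match_gesture(model, frames), 3))

    Keeping multiple candidates in the local cost is what lets a low matching score emerge even when the single best hand detection per frame would be wrong, which is the ambiguity the unified framework is designed to handle.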

  7. Predicting binary, discrete and continued lncRNA-disease associations via a unified framework based on graph regression.

    PubMed

    Shi, Jian-Yu; Huang, Hua; Zhang, Yan-Ning; Long, Yu-Xi; Yiu, Siu-Ming

    2017-12-21

    In human genomes, long non-coding RNAs (lncRNAs) have attracted more and more attention because their dysfunctions are involved in many diseases. However, the associations between lncRNAs and diseases (LDA) still remain unknown in most cases. While identifying disease-related lncRNAs in vivo is costly, computational approaches are promising to not only accelerate the possible identification of associations but also provide clues on the underlying mechanism of various lncRNA-caused diseases. Former computational approaches usually only focus on predicting new associations between lncRNAs having known associations with diseases and other lncRNA-associated diseases. They also only work on binary lncRNA-disease associations (whether the pair has an association or not), which cannot reflect and reveal other biological facts, such as the number of proteins involved in LDA or how strong the association is (i.e., the intensity of LDA). To address abovementioned issues, we propose a graph regression-based unified framework (GRUF). In particular, our method can work on lncRNAs, which have no previously known disease association and diseases that have no known association with any lncRNAs. Also, instead of only a binary answer for the association, our method tries to uncover more biological relationship between a pair of lncRNA and disease, which may provide better clues for researchers. We compared GRUF with three state-of-the-art approaches and demonstrated the superiority of GRUF, which achieves 5%~16% improvement in terms of the area under the receiver operating characteristic curve (AUC). GRUF also provides a predicted confidence score for the predicted LDA, which reveals the significant correlation between the score and the number of RNA-Binding Proteins involved in LDAs. Lastly, three out of top-5 LDA candidates generated by GRUF in novel prediction are verified indirectly by medical literature and known biological facts. The proposed GRUF has two advantages over existing approaches. Firstly, it can be used to work on lncRNAs that have no known disease association and diseases that have no known association with any lncRNAs. Secondly, instead of providing a binary answer (with or without association), GRUF works for both discrete and continued LDA, which help revealing the pathological implications between lncRNAs and diseases.

  8. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes

    PubMed Central

    2017-01-01

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926–1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745–2750; Thiessen & Yee 2010 Child Development 81, 1287–1303; Saffran 2002 Journal of Memory and Language 47, 172–196; Misyak & Christiansen 2012 Language Learning 62, 302–331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246–263; Thiessen et al. 2013 Psychological Bulletin 139, 792–814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik 2013 Cognitive Science 37, 310–343). This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences'. PMID:27872374

  9. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes.

    PubMed

    Thiessen, Erik D

    2017-01-05

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274: , 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105: , 2745-2750; Thiessen & Yee 2010 Child Development 81: , 1287-1303; Saffran 2002 Journal of Memory and Language 47: , 172-196; Misyak & Christiansen 2012 Language Learning 62: , 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39: , 246-263; Thiessen et al. 2013 Psychological Bulletin 139: , 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik 2013 Cognitive Science 37: , 310-343).This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  10. The Development of the Non-hydrostatic Unified Model of the Atmosphere (NUMA)

    DTIC Science & Technology

    2011-09-19

    Excerpts from the briefing slides: "Highly scalable on current and future computer architectures (exascale computing: this means CPUs and GPUs)"; "Flexibility to use a..."; "From Terascale to Petascale/Exascale Computing: 10 of Top 500 are already in the Petascale range; 3 of top 10 are GPU-based machines"

  11. Cognitive Computational Neuroscience: A New Conference for an Emerging Discipline.

    PubMed

    Naselaris, Thomas; Bassett, Danielle S; Fletcher, Alyson K; Kording, Konrad; Kriegeskorte, Nikolaus; Nienborg, Hendrikje; Poldrack, Russell A; Shohamy, Daphna; Kay, Kendrick

    2018-05-01

    Understanding the computational principles that underlie complex behavior is a central goal in cognitive science, artificial intelligence, and neuroscience. In an attempt to unify these disconnected communities, we created a new conference called Cognitive Computational Neuroscience (CCN). The inaugural meeting revealed considerable enthusiasm but significant obstacles remain. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Tablet Personal Computer Integration in Higher Education: Applying the Unified Theory of Acceptance and Use Technology Model to Understand Supporting Factors

    ERIC Educational Resources Information Center

    Moran, Mark; Hawkes, Mark; El Gayar, Omar

    2010-01-01

    Many educational institutions have implemented ubiquitous or required laptop, notebook, or tablet personal computing programs for their students. Yet, limited evidence exists to validate integration and acceptance of the technology among student populations. This research examines student acceptance of mobile computing devices using a modification…

  13. Toward a unified account of comprehension and production in language development.

    PubMed

    McCauley, Stewart M; Christiansen, Morten H

    2013-08-01

    Although Pickering & Garrod (P&G) argue convincingly for a unified system for language comprehension and production, they fail to explain how such a system might develop. Using a recent computational model of language acquisition as an example, we sketch a developmental perspective on the integration of comprehension and production. We conclude that only through development can we fully understand the intertwined nature of comprehension and production in adult processing.

  14. Data Centric Development Methodology

    ERIC Educational Resources Information Center

    Khoury, Fadi E.

    2012-01-01

    Data-centric applications, an important effort of software development in large organizations, have mostly adopted a software methodology, such as waterfall or the Rational Unified Process, as the framework for their development. These methodologies could work on structural, procedural, or object-oriented applications, but fail to capture…

  15. The thermodynamics of dense granular flow and jamming

    NASA Astrophysics Data System (ADS)

    Lu, Shih Yu

    The scope of the thesis is to propose, based on experimental evidence and theoretical validation, a quantifiable connection between systems that exhibit the jamming phenomenon. When jammed, some materials that flow are able to resist deformation so that they appear solid-like on the laboratory scale. But unlike ordinary fusion, which has a critically defined criterion in pressure and temperature, jamming occurs under a wide range of conditions. These conditions have been rigorously investigated, but at the moment no self-consistent framework can apply to grains, foams, and colloids that may have suddenly ceased to flow. To quantify the jamming behavior, a constitutive model of dense granular flows is deduced from shear-flow experiments. The empirical equations are then generalized, via a thermodynamic approach, into an equation-of-state for jamming. Notably, the unifying theory also predicts the experimental data on the behavior of molecular glassy liquids. This analogy provides a crucial road map for a unifying theoretical framework in condensed matter, for example, ranging from sand to fire retardants to toothpaste.

  16. Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning.

    PubMed

    Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve

    2017-12-01

    In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (M²IL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag, (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification, and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of M²IL. Experiments and analyses in many practical applications demonstrate the effectiveness of M²IL.

  17. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Tradeoff on Phenotype Robustness in Biological Networks Part II: Ecological Networks

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    In ecological networks, network robustness should be large enough to confer intrinsic robustness for tolerating intrinsic parameter fluctuations, as well as environmental robustness for resisting environmental disturbances, so that the phenotype stability of ecological networks can be maintained, thus guaranteeing phenotype robustness. However, it is difficult to analyze the network robustness of ecological systems because they are complex nonlinear partial differential stochastic systems. This paper develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance sensitivity in ecological networks. We found that the phenotype robustness criterion for ecological networks is that if intrinsic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations and environmental disturbances. These results in robust ecological networks are similar to those in robust gene regulatory networks and evolutionary networks, even though they have different spatial and time scales. PMID:23515112

  18. Domain Anomaly Detection in Machine Perception: A System Architecture and Taxonomy.

    PubMed

    Kittler, Josef; Christmas, William; de Campos, Teófilo; Windridge, David; Yan, Fei; Illingworth, John; Osman, Magda

    2014-05-01

    We address the problem of anomaly detection in machine perception. The concept of domain anomaly is introduced as distinct from the conventional notion of anomaly used in the literature. We propose a unified framework for anomaly detection which exposes the multifaceted nature of anomalies and suggest effective mechanisms for identifying and distinguishing each facet as instruments for domain anomaly detection. The framework draws on the Bayesian probabilistic reasoning apparatus which clearly defines concepts such as outlier, noise, distribution drift, novelty detection (object, object primitive), rare events, and unexpected events. Based on these concepts we provide a taxonomy of domain anomaly events. One of the mechanisms helping to pinpoint the nature of anomaly is based on detecting incongruence between contextual and noncontextual sensor(y) data interpretation. The proposed methodology has wide applicability. It underpins in a unified way the anomaly detection applications found in the literature. To illustrate some of its distinguishing features, in here the domain anomaly detection methodology is applied to the problem of anomaly detection for a video annotation system.

  19. A novel VLSI processor architecture for supercomputing arrays

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify mapping of complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high performance mask programmable PAcube arrays.

  20. Constitutive modeling for isotropic materials (HOST)

    NASA Technical Reports Server (NTRS)

    Lindholm, Ulric S.; Chan, Kwai S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.; Cassenti, B. N.

    1984-01-01

    The results of the first year of work are reported for a program to validate unified constitutive models for isotropic materials utilized in high-temperature regions of gas turbine engines and to demonstrate their usefulness in computing stress-strain-time-temperature histories in complex three-dimensional structural components. The unified theories combine all inelastic strain-rate components in a single term avoiding, for example, treating plasticity and creep as separate response phenomena. An extensive review of existing unified theories is given and numerical methods for integrating these stiff time-temperature-dependent constitutive equations are discussed. Two particular models, those developed by Bodner and Partom and by Walker, were selected for more detailed development and evaluation against experimental tensile, creep and cyclic strain tests on specimens of a cast nickel base alloy, B1900+Hf. Initial results comparing computed and test results for tensile and cyclic straining for temperatures from ambient to 982 C and strain rates from 10^-7 to 10^-3 s^-1 are given. Some preliminary data correlations are also presented for highly non-proportional biaxial loading which demonstrate an increase in biaxial cyclic hardening rate over uniaxial or proportional loading conditions. Initial work has begun on the implementation of both constitutive models in the MARC finite element computer code.
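    Because the report stresses that unified models lump all inelastic strain into a single rate term and that the resulting equations are stiff, a minimal sketch of that structure may help: a one-dimensional power-law (Norton-type) inelastic strain rate integrated with a backward-Euler step solved by Newton iteration. The material constants and the power-law form are illustrative placeholders, not the Bodner-Partom or Walker formulations evaluated in the report.

      import numpy as np

      E, A, n, sigma0 = 200e3, 1e-8, 5.0, 100.0   # illustrative constants (MPa, 1/s)

      def inelastic_rate(sigma):
          # Single unified inelastic strain-rate term (no separate plasticity/creep).
          return A * abs(sigma / sigma0) ** n * np.sign(sigma)

      def backward_euler_step(eps_total, eps_in, dt):
          """Implicit update of the inelastic strain for one strain increment,
          solved by Newton iteration (stiffness makes explicit steps impractical)."""
          x = eps_in
          for _ in range(50):
              sigma = E * (eps_total - x)
              g = x - eps_in - dt * inelastic_rate(sigma)
              dg = 1.0 + dt * A * n * abs(sigma / sigma0) ** (n - 1) * E / sigma0
              x_new = x - g / dg
              if abs(x_new - x) < 1e-12:
                  return x_new
              x = x_new
          return x

      # Constant-strain-rate tension: ramp the total strain and record the stress.
      eps_dot, dt, steps = 1e-4, 0.1, 400
      eps_in, history = 0.0, []
      for k in range(1, steps + 1):
          eps_total = eps_dot * dt * k
          eps_in = backward_euler_step(eps_total, eps_in, dt)
          history.append((eps_total, E * (eps_total - eps_in)))

      print(f"stress at {history[-1][0]:.3%} strain: {history[-1][1]:.1f} MPa")

    The stress in this toy run rises elastically and then saturates where the imposed strain rate balances the power-law inelastic rate, the same qualitative behavior a unified model must capture over the much wider temperature and rate ranges tested in the program.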

  1. A unifying retinex model based on non-local differential operators

    NASA Astrophysics Data System (ADS)

    Zosso, Dominique; Tran, Giang; Osher, Stanley

    2013-02-01

    In this paper, we present a unifying framework for retinex that is able to reproduce many of the existing retinex implementations within a single model. The fundamental assumption, as shared with many retinex models, is that the observed image is a multiplication between the illumination and the true underlying reflectance of the object. Starting from Morel's 2010 PDE model for retinex, where illumination is supposed to vary smoothly and where the reflectance is thus recovered from a hard-thresholded Laplacian of the observed image in a Poisson equation, we define our retinex model in similar but more general two steps. First, look for a filtered gradient that is the solution of an optimization problem consisting of two terms: The first term is a sparsity prior of the reflectance, such as the TV or H1 norm, while the second term is a quadratic fidelity prior of the reflectance gradient with respect to the observed image gradients. In a second step, since this filtered gradient almost certainly is not a consistent image gradient, we then look for a reflectance whose actual gradient comes close. Beyond unifying existing models, we are able to derive entirely novel retinex formulations by using more interesting non-local versions for the sparsity and fidelity prior. Hence we define within a single framework new retinex instances particularly suited for texture-preserving shadow removal, cartoon-texture decomposition, color and hyperspectral image enhancement.
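    The first retinex instance mentioned above, Morel's hard-thresholded-Laplacian Poisson formulation, is compact enough to sketch directly. The FFT-based periodic Poisson solve and the threshold value are simplifying assumptions of this illustration, and the non-local variants proposed in the paper are not reproduced here.

      import numpy as np

      def poisson_retinex(image, threshold=0.05, eps=1e-6):
          """Morel-style retinex sketch: recover the reflectance by solving
          Laplacian(r) = T(Laplacian(log I)), where T zeroes out small values.
          Periodic boundary conditions via FFT keep the example short."""
          logI = np.log(image + eps)

          # Discrete 5-point Laplacian of the log-image.
          lap = (np.roll(logI, 1, 0) + np.roll(logI, -1, 0) +
                 np.roll(logI, 1, 1) + np.roll(logI, -1, 1) - 4.0 * logI)

          # Hard threshold: keep only strong variations (assumed due to reflectance).
          lap_t = np.where(np.abs(lap) > threshold, lap, 0.0)

          # Solve Laplacian(r) = lap_t in the Fourier domain.
          h, w = logI.shape
          ky = np.fft.fftfreq(h)[:, None]
          kx = np.fft.fftfreq(w)[None, :]
          denom = 2.0 * (np.cos(2 * np.pi * ky) - 1) + 2.0 * (np.cos(2 * np.pi * kx) - 1)
          denom[0, 0] = 1.0                       # avoid division by zero at DC
          r_hat = np.fft.fft2(lap_t) / denom
          r_hat[0, 0] = 0.0                       # fix the free constant (zero mean)
          r = np.real(np.fft.ifft2(r_hat))
          return np.exp(r - r.max())              # reflectance normalized to <= 1

      # Toy usage: a smooth illumination gradient over a two-tone reflectance.
      y, x = np.mgrid[0:128, 0:128]
      reflectance = np.where((x // 32 + y // 32) % 2 == 0, 0.9, 0.3)
      illumination = 0.2 + 0.8 * (x / 127.0)
      estimate = poisson_retinex(reflectance * illumination)
      print(round(float(estimate.min()), 2), round(float(estimate.max()), 2))

    In the paper's two-step generalization, the hard threshold is replaced by the solution of a sparsity-plus-fidelity optimization on the gradient field, and the Poisson solve by the search for a reflectance whose gradient best matches that filtered field.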

  2. Unifying a fragmented effort: a qualitative framework for improving international surgical teaching collaborations.

    PubMed

    Fallah, Parisa Nicole; Bernstein, Mark

    2017-09-07

    Access to adequate surgical care is limited globally, particularly in low- and middle-income countries (LMICs). To address this issue, surgeons are becoming increasingly involved in international surgical teaching collaborations (ISTCs), which include educational partnerships between surgical teams in high-income countries and those in LMICs. The purpose of this study is to determine a framework for unifying, systematizing, and improving the quality of ISTCs so that they can better address the global surgical need. A convenience sample of 68 surgeons, anesthesiologists, physicians, residents, nurses, academics, and administrators from the U.S., Canada, and Norway was used for the study. Participants all had some involvement in ISTCs and came from multiple specialties and institutions. Qualitative methodology was used, and participants were interviewed using a pre-determined set of open-ended questions. Data was gathered over two months either in-person, over the phone, or on Skype. Data was evaluated using thematic content analysis. To organize and systematize ISTCs, participants reported a need for a centralized/systematized process with designated leaders, a universal data bank of current efforts/progress, communication amongst involved parties, full-time administrative staff, dedicated funds, a scholarly approach, increased use of technology, and more research on needs and outcomes. By taking steps towards unifying and systematizing ISTCs, the quality of ISTCs can be improved. This could lead to an advancement in efforts to increase access to surgical care worldwide.

  3. Design Principles and Guidelines for Security

    DTIC Science & Technology

    2007-11-21

    Excerpt from the report's references: Bell, D. E. and La Padula, L. J., Secure Computer Systems: Unified Exposition and Multics Interpretation, ESD-TR-75-306, MTR-2997 Rev. 1, Electronic Systems Division, USAF, Hanscom AFB, MA, March 1976. [7] David Elliott Bell, "Looking Back at the Bell-La Padula Model," Proc. Annual Computer Security Applications Conference.

  4. Mechanic: The MPI/HDF code framework for dynamical astronomy

    NASA Astrophysics Data System (ADS)

    Słonina, Mariusz; Goździewski, Krzysztof; Migaszewski, Cezary

    2015-01-01

    We introduce the Mechanic, a new open-source code framework. It is designed to reduce the development effort of scientific applications by providing a unified API (Application Programming Interface) for configuration, data storage and task management. The communication layer is based on the well-established Message Passing Interface (MPI) standard, which is widely used on a variety of parallel computers and CPU clusters. The data storage is performed within the Hierarchical Data Format (HDF5). The design of the code follows a core-module approach, which reduces the user’s codebase and makes it portable for single- and multi-CPU environments. The framework may be used in a local user’s environment, without administrative access to the cluster, under the PBS or Slurm job schedulers. It may become a helper tool for a wide range of astronomical applications, particularly those focused on processing large data sets, such as dynamical studies of long-term orbital evolution of planetary systems with Monte Carlo methods, dynamical maps or evolutionary algorithms. It has already been applied in numerical experiments conducted for the Kepler-11 (Migaszewski et al., 2012) and ν Octantis (Goździewski et al., 2013) planetary systems. In this paper we describe the basics of the framework, including code listings for the implementation of a sample user’s module. The code is illustrated on a model Hamiltonian introduced by Froeschlé et al. (2000) that exhibits Arnold diffusion. The Arnold web is shown with the help of the MEGNO (Mean Exponential Growth of Nearby Orbits) fast indicator (Goździewski et al., 2008a) applied to the symplectic SABAn integrator family (Laskar and Robutel, 2001).
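    The task-farm-plus-HDF5 pattern that the framework automates can be illustrated with a generic mpi4py/h5py sketch. This is not the Mechanic API; the file name, dataset layout, and the toy per-task computation are assumptions of the example (run it with at least two MPI ranks).

      # Generic master-worker task farm writing results to HDF5 (run with mpiexec).
      # Illustrates the pattern Mechanic packages behind its core-module approach;
      # this is NOT Mechanic's own interface.
      from mpi4py import MPI
      import h5py
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      N_TASKS = 64                       # e.g. pixels of a dynamical map

      def compute(task_id):
          # Placeholder for the per-pixel numerical experiment (e.g. a MEGNO run).
          return float(np.sin(task_id) ** 2)

      if rank == 0:
          results = np.zeros(N_TASKS)
          next_task, done = 0, 0
          # Hand out initial work, then keep feeding workers as results arrive.
          for worker in range(1, size):
              comm.send(next_task if next_task < N_TASKS else -1, dest=worker)
              next_task += 1
          while done < N_TASKS:
              status = MPI.Status()
              task_id, value = comm.recv(source=MPI.ANY_SOURCE, status=status)
              results[task_id] = value
              done += 1
              comm.send(next_task if next_task < N_TASKS else -1,
                        dest=status.Get_source())
              next_task += 1
          with h5py.File("taskfarm.h5", "w") as f:
              f.create_dataset("results", data=results)
      else:
          while True:
              task_id = comm.recv(source=0)
              if task_id < 0:
                  break
              comm.send((task_id, compute(task_id)), dest=0)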

  5. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula.

    PubMed

    Ince, Robin A A; Giordano, Bruno L; Kayser, Christoph; Rousselet, Guillaume A; Gross, Joachim; Schyns, Philippe G

    2017-03-01

    We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.
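
    As a rough illustration of the estimator described above (rank-transform each variable to a standard-normal "copula" representation, then use the closed-form entropy of Gaussian variables), here is a minimal, bias-uncorrected Python sketch; the authors' released toolbox adds bias corrections and discrete-variable handling that are not reproduced here.

```python
# Minimal, bias-uncorrected sketch of a Gaussian-copula mutual information
# estimate, in the spirit of the framework described above.
import numpy as np
from scipy.stats import rankdata, norm

def copnorm(x):
    # Copula-normalize each row: ranks -> uniform (0,1) -> standard normal.
    x = np.atleast_2d(x)
    u = rankdata(x, axis=1) / (x.shape[1] + 1.0)
    return norm.ppf(u)

def gcmi(x, y):
    # MI (in nats) between two (possibly multivariate) continuous variables,
    # each given as an (n_dims, n_samples) array.
    cx, cy = copnorm(x), copnorm(y)
    joint = np.vstack([cx, cy])
    def logdet_cov(z):
        c = np.atleast_2d(np.cov(z))
        return np.linalg.slogdet(c)[1]
    return 0.5 * (logdet_cov(cx) + logdet_cov(cy) - logdet_cov(joint))

# usage: mi = gcmi(np.random.randn(1, 500), np.random.randn(2, 500))
```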

  6. Hilltop supernatural inflation and SUSY unified models

    NASA Astrophysics Data System (ADS)

    Kohri, Kazunori; Lim, C. S.; Lin, Chia-Min; Mimura, Yukihiro

    2014-01-01

    In this paper, we consider high-scale (100 TeV) supersymmetry (SUSY) breaking and realize the idea of hilltop supernatural inflation in concrete particle physics models based on flipped SU(5) and Pati-Salam models in the framework of supersymmetric grand unified theories (SUSY GUTs). The inflaton can be a flat direction including a right-handed sneutrino, and the waterfall field is a GUT Higgs. The spectral index is n_s = 0.96, which fits very well with recent data from the Planck satellite. There are neither thermal nor non-thermal gravitino problems. Non-thermal leptogenesis can result from the decay of the right-handed sneutrino, which plays (part of) the role of the inflaton.

  7. Emotion and the prefrontal cortex: An integrative review.

    PubMed

    Dixon, Matthew L; Thiruchselvam, Ravi; Todd, Rebecca; Christoff, Kalina

    2017-10-01

    The prefrontal cortex (PFC) plays a critical role in the generation and regulation of emotion. However, we lack an integrative framework for understanding how different emotion-related functions are organized across the entire expanse of the PFC, as prior reviews have generally focused on specific emotional processes (e.g., decision making) or specific anatomical regions (e.g., orbitofrontal cortex). Additionally, psychological theories and neuroscientific investigations have proceeded largely independently because of the lack of a common framework. Here, we provide a comprehensive review of functional neuroimaging, electrophysiological, lesion, and structural connectivity studies on the emotion-related functions of 8 subregions spanning the entire PFC. We introduce the appraisal-by-content model, which provides a new framework for integrating the diverse range of empirical findings. Within this framework, appraisal serves as a unifying principle for understanding the PFC's role in emotion, while relative content-specialization serves as a differentiating principle for understanding the role of each subregion. A synthesis of data from affective, social, and cognitive neuroscience studies suggests that different PFC subregions are preferentially involved in assigning value to specific types of inputs: exteroceptive sensations, episodic memories and imagined future events, viscero-sensory signals, viscero-motor signals, actions, others' mental states (e.g., intentions), self-related information, and ongoing emotions. We discuss the implications of this integrative framework for understanding emotion regulation, value-based decision making, emotional salience, and refining theoretical models of emotion. This framework provides a unified understanding of how emotional processes are organized across PFC subregions and generates new hypotheses about the mechanisms underlying adaptive and maladaptive emotional functioning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Interprofessional Care and Collaborative Practice.

    ERIC Educational Resources Information Center

    Casto, R. Michael; And Others

    This book provides materials for those learning about the dynamics, techniques, and potential of interprofessional collaboration in health care and human services professions. Eight case studies thread their way through most chapters to unify and illustrate the text. Part 1 addresses the theoretical framework that forms the basis for…

  9. Mean Comparison: Manifest Variable versus Latent Variable

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2006-01-01

    An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…

  10. Unified Framework for Deriving Simultaneous Equation Algorithms for Water Distribution Networks

    EPA Science Inventory

    The known formulations for steady state hydraulics within looped water distribution networks are re-derived in terms of linear and non-linear transformations of the original set of partly linear and partly non-linear equations that express conservation of mass and energy. All of ...

  11. Reconciling Time, Space and Function: A New Dorsal-Ventral Stream Model of Sentence Comprehension

    ERIC Educational Resources Information Center

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2013-01-01

    We present a new dorsal-ventral stream framework for language comprehension which unifies basic neurobiological assumptions (Rauschecker & Scott, 2009) with a cross-linguistic neurocognitive sentence comprehension model (eADM; Bornkessel & Schlesewsky, 2006). The dissociation between (time-dependent) syntactic structure-building and…

  12. Finding the way with a noisy brain.

    PubMed

    Cheung, Allen; Vickerstaff, Robert

    2010-11-11

    Successful navigation is fundamental to the survival of nearly every animal on earth, and achieved by nervous systems of vastly different sizes and characteristics. Yet surprisingly little is known of the detailed neural circuitry, in any species, that can accurately represent space for navigation. Path integration is one of the oldest and most ubiquitous navigation strategies in the animal kingdom. Despite a plethora of computational models, from equational to neural network form, there is currently no consensus, even in principle, on how this important phenomenon occurs neurally. Recently, all path integration models were examined according to a novel, unifying classification system. Here we combine this theoretical framework with recent insights from directed walk theory, and develop an intuitive yet mathematically rigorous proof that only one class of neural representation of space can tolerate noise during path integration. This result suggests many existing models of path integration are not biologically plausible due to their intolerance to noise. This surprising result imposes significant computational limitations on the neurobiological spatial representation of all successfully navigating animals, irrespective of species. Indeed, noise-tolerance may be an important functional constraint on the evolution of neuroarchitectural plans in the animal kingdom.

  13. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
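
    To make the Gibbs energy minimization (GEM) idea concrete, here is a toy Python sketch for an ideal-gas H2/O2/H2O system using scipy. It is not Reaktoro's or GEMS3K's API, and the standard chemical potential values are placeholders chosen only so that water formation is strongly favoured.

```python
# Toy Gibbs-energy-minimization sketch: minimize the total Gibbs energy of an
# ideal-gas mixture subject to elemental mass-balance constraints.
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
mu0_RT = np.array([0.0, 0.0, -95.0])      # placeholder standard potentials mu_i0/RT
A = np.array([[2, 0, 2],                  # H atoms per species
              [0, 2, 1]])                 # O atoms per species
b = np.array([2.0, 1.0])                  # total H and O inventory (mol)

def gibbs(n):
    n = np.maximum(n, 1e-12)              # guard against log(0)
    return np.sum(n * (mu0_RT + np.log(n / n.sum())))   # ideal mixture at 1 bar

cons = {"type": "eq", "fun": lambda n: A @ n - b}        # elemental mass balance
res = minimize(gibbs, x0=np.array([0.5, 0.25, 0.5]),
               bounds=[(1e-12, None)] * 3, constraints=cons, method="SLSQP")
print(dict(zip(species, res.x)))          # equilibrium amounts; ~1 mol H2O expected
```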

  14. A Spiking Neural Network System for Robust Sequence Recognition.

    PubMed

    Yu, Qiang; Yan, Rui; Tang, Huajin; Tan, Kay Chen; Li, Haizhou

    2016-03-01

    This paper proposes a biologically plausible network architecture with spiking neurons for sequence recognition. This architecture is a unified and consistent system with functional parts of sensory encoding, learning, and decoding. This is the first systematic model attempting to reveal the neural mechanisms considering both the upstream and the downstream neurons together. The whole system is a consistent temporal framework, where the precise timing of spikes is employed for information processing and cognitive computing. Experimental results show that the system is competent to perform sequence recognition, being robust to noisy sensory inputs and invariant to changes in the intervals between input stimuli within a certain range. The classification ability of the temporal learning rule used in the system is investigated through two benchmark tasks, on which it outperforms two other widely used learning rules for classification. The results also demonstrate the computational power of spiking neurons over perceptrons for processing spatiotemporal patterns. In summary, the system provides a general way with spiking neurons to encode external stimuli into spatiotemporal spikes, to learn the encoded spike patterns with temporal learning rules, and to decode the sequence order with downstream neurons. The system structure would be beneficial for developments in both hardware and software.
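
    The encode-learn-decode pipeline described above can be caricatured in a few lines: a latency code turns analog intensities into spike times (stronger inputs fire earlier), and a leaky integrate-and-fire readout neuron integrates the resulting spikes. This is only a hedged sketch of the general idea; the temporal learning rule and the paper's actual architecture are not reproduced here.

```python
# Minimal sketch (not the paper's model): latency encoding of an analog stimulus
# into spike times, plus a leaky integrate-and-fire readout driven by those spikes.
import numpy as np

def latency_encode(values, t_max=100.0):
    # Stronger inputs fire earlier: spike time = t_max * (1 - value), value in [0, 1].
    return t_max * (1.0 - np.clip(values, 0.0, 1.0))

def lif_first_spike(spike_times, weights, tau=20.0, dt=1.0, t_end=120.0, threshold=1.0):
    # Integrate weighted input spikes with an exponentially leaking membrane;
    # return the time of the first output spike, or None if threshold is never reached.
    v = 0.0
    for t in np.arange(0.0, t_end, dt):
        v *= np.exp(-dt / tau)                                  # membrane leak
        arrived = (spike_times >= t) & (spike_times < t + dt)   # spikes in this time bin
        v += weights[arrived].sum()
        if v >= threshold:
            return t
    return None

stimulus = np.array([0.9, 0.2, 0.7])                 # toy 3-channel analog input
spikes = latency_encode(stimulus)                    # -> spike times [10, 80, 30]
print(lif_first_spike(spikes, weights=np.array([0.8, 0.8, 0.8])))
```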

  15. A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks.

    PubMed

    Wang, Changhan; Yan, Xinchen; Smith, Max; Kochhar, Kanika; Rubin, Marcie; Warren, Stephen M; Wrobel, James; Lee, Honglak

    2015-01-01

    Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of the tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians estimate wound size by estimating wound width and length using a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Different from previous segmentation techniques which rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term predictions of general wound healing progress. Our method is computationally efficient and takes less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system.
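
    For readers unfamiliar with the approach, the sketch below shows a toy fully-convolutional encoder-decoder in PyTorch that maps a 480 by 640 RGB wound photograph to a per-pixel wound-probability mask. It is an illustrative stand-in under assumed layer sizes, not the authors' architecture or training setup.

```python
# Tiny fully-convolutional segmentation sketch (illustrative only): encoder
# downsamples the image, decoder upsamples back to a 1-channel probability map.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),           # 1-channel logit map
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))   # per-pixel wound probability

model = TinySegNet()
image = torch.randn(1, 3, 480, 640)                           # dummy wound photo
mask = model(image)                                           # shape (1, 1, 480, 640)
# Dummy target; in practice one would train against annotated wound masks.
loss = nn.functional.binary_cross_entropy(mask, torch.zeros_like(mask))
```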

  16. RosettaRemodel: A Generalized Framework for Flexible Backbone Protein Design

    PubMed Central

    Huang, Po-Ssu; Ban, Yih-En Andrew; Richter, Florian; Andre, Ingemar; Vernon, Robert; Schief, William R.; Baker, David

    2011-01-01

    We describe RosettaRemodel, a generalized framework for flexible protein design that provides a versatile and convenient interface to the Rosetta modeling suite. RosettaRemodel employs a unified interface, called a blueprint, which allows detailed control over many aspects of flexible backbone protein design calculations. RosettaRemodel allows the construction and elaboration of customized protocols for a wide range of design problems ranging from loop insertion and deletion, disulfide engineering, domain assembly, loop remodeling, motif grafting, symmetrical units, to de novo structure modeling. PMID:21909381

  17. Pricing foreign equity option with stochastic volatility

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Xu, Weidong

    2015-11-01

    In this paper we propose a general foreign equity option pricing framework that unifies the vast foreign equity option pricing literature and incorporates stochastic volatility into foreign equity option pricing. Under our framework, time-changed Lévy processes are used to model the underlying asset prices of foreign equity options, and a closed-form pricing formula is obtained through the characteristic function methodology. Numerical tests indicate that stochastic volatility has a dramatic effect on foreign equity option prices.
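
    The characteristic-function machinery the paper relies on can be illustrated with a plain Black-Scholes characteristic function and Gil-Pelaez inversion; the paper's framework instead plugs in characteristic functions of time-changed Lévy processes (and exchange-rate dynamics), which the toy sketch below does not attempt to reproduce.

```python
# Hedged sketch: price a European call from the characteristic function of
# log(S_T) via Gil-Pelaez inversion, using the Black-Scholes cf as a stand-in.
import numpy as np
from scipy.integrate import quad

def bs_cf(u, S0, r, sigma, T):
    # Characteristic function of ln(S_T) under risk-neutral geometric Brownian motion.
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

def call_price_cf(cf, S0, K, r, T, **kw):
    k = np.log(K)
    def Pj(shift):
        norm = cf(-shift, S0, r, T=T, **kw)   # equals 1 for shift=0, E[S_T] for shift=i
        integrand = lambda u: (np.exp(-1j * u * k) * cf(u - shift, S0, r, T=T, **kw)
                               / (1j * u * norm)).real
        return 0.5 + quad(integrand, 1e-8, 200.0)[0] / np.pi
    P1, P2 = Pj(1j), Pj(0)                    # delta-like and exercise probabilities
    return S0 * P1 - K * np.exp(-r * T) * P2

price = call_price_cf(bs_cf, S0=100.0, K=100.0, r=0.02, T=1.0, sigma=0.2)
print(price)   # should match the Black-Scholes value (about 8.9)
```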

  18. RANZCR Body Systems Framework of diagnostic imaging examination descriptors.

    PubMed

    Pitman, Alexander G; Penlington, Lisa; Doromal, Darren; Slater, Gregory; Vukolova, Natalia

    2014-08-01

    A unified and logical system of descriptors for diagnostic imaging examinations and procedures is a desirable resource for radiology in Australia and New Zealand and is needed to support core activities of RANZCR. Existing descriptor systems available in Australia and New Zealand (including the Medicare DIST and the ACC Schedule) have significant limitations and are inappropriate for broader clinical application. An anatomically based grid was constructed, with anatomical structures arranged in rows and diagnostic imaging modalities arranged in columns (including nuclear medicine and positron emission tomography). The grid was segregated into five body systems. The cells at the intersection of an anatomical structure row and an imaging modality column were populated with short, formulaic descriptors of the applicable diagnostic imaging examinations. Clinically illogical or physically impossible combinations were 'greyed out'. Where the same examination applied to different anatomical structures, the descriptor was kept identical for the purposes of streamlining. The resulting Body Systems Framework of diagnostic imaging examination descriptors lists all the reasonably common diagnostic imaging examinations currently performed in Australia and New Zealand using a unified grid structure allowing navigation by both referrers and radiologists. The Framework has been placed on the RANZCR website and is available for access free of charge by registered users. The Body Systems Framework of diagnostic imaging examination descriptors is a system of descriptors based on relationships between anatomical structures and imaging modalities. The Framework is now available as a resource and reference point for the radiology profession and to support core College activities. © 2014 The Royal Australian and New Zealand College of Radiologists.
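
    The grid described above is essentially a sparse two-dimensional lookup table keyed by anatomical structure and imaging modality. The toy Python sketch below (with hypothetical example cells, not the actual RANZCR descriptors) illustrates the idea, with clinically illogical combinations simply left absent.

```python
# Toy anatomy-by-modality descriptor grid; cell contents are hypothetical examples.
descriptors = {
    ("brain", "CT"):  "CT brain",
    ("brain", "MRI"): "MRI brain",
    ("liver", "US"):  "US liver",
    # ("brain", "US") intentionally absent: a "greyed out" combination
}

def lookup(anatomy, modality):
    # Absent cells correspond to greyed-out (not applicable) combinations.
    return descriptors.get((anatomy, modality), "not applicable")

print(lookup("brain", "MRI"))   # -> "MRI brain"
print(lookup("brain", "US"))    # -> "not applicable"
```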

  19. Modelling biological behaviours with the unified modelling language: an immunological case study and critique.

    PubMed

    Read, Mark; Andrews, Paul S; Timmis, Jon; Kumar, Vipin

    2014-10-06

    We present a framework to assist the diagrammatic modelling of complex biological systems using the unified modelling language (UML). The framework comprises three levels of modelling, ranging in scope from the dynamics of individual model entities to system-level emergent properties. By way of an immunological case study of the mouse disease experimental autoimmune encephalomyelitis, we show how the framework can be used to produce models that capture and communicate the biological system, detailing how biological entities, interactions and behaviours lead to higher-level emergent properties observed in the real world. We demonstrate how the UML can be successfully applied within our framework, and provide a critique of UML's ability to capture concepts fundamental to immunology and biology more generally. We show how specialized, well-explained diagrams with less formal semantics can be used where no suitable UML formalism exists. We highlight UML's lack of expressive ability concerning cyclic feedbacks in cellular networks, and the compounding concurrency arising from huge numbers of stochastic, interacting agents. To compensate for this, we propose several additional relationships for expressing these concepts in UML's activity diagram. We also demonstrate the ambiguous nature of class diagrams when applied to complex biology, and question their utility in modelling such dynamic systems. Models created through our framework are non-executable, and expressly free of simulation implementation concerns. They are a valuable complement and precursor to simulation specifications and implementations, focusing purely on thoroughly exploring the biology, recording hypotheses and assumptions, and serve as a communication medium detailing exactly how a simulation relates to the real biology.

  20. Modelling biological behaviours with the unified modelling language: an immunological case study and critique

    PubMed Central

    Read, Mark; Andrews, Paul S.; Timmis, Jon; Kumar, Vipin

    2014-01-01

    We present a framework to assist the diagrammatic modelling of complex biological systems using the unified modelling language (UML). The framework comprises three levels of modelling, ranging in scope from the dynamics of individual model entities to system-level emergent properties. By way of an immunological case study of the mouse disease experimental autoimmune encephalomyelitis, we show how the framework can be used to produce models that capture and communicate the biological system, detailing how biological entities, interactions and behaviours lead to higher-level emergent properties observed in the real world. We demonstrate how the UML can be successfully applied within our framework, and provide a critique of UML's ability to capture concepts fundamental to immunology and biology more generally. We show how specialized, well-explained diagrams with less formal semantics can be used where no suitable UML formalism exists. We highlight UML's lack of expressive ability concerning cyclic feedbacks in cellular networks, and the compounding concurrency arising from huge numbers of stochastic, interacting agents. To compensate for this, we propose several additional relationships for expressing these concepts in UML's activity diagram. We also demonstrate the ambiguous nature of class diagrams when applied to complex biology, and question their utility in modelling such dynamic systems. Models created through our framework are non-executable, and expressly free of simulation implementation concerns. They are a valuable complement and precursor to simulation specifications and implementations, focusing purely on thoroughly exploring the biology, recording hypotheses and assumptions, and serve as a communication medium detailing exactly how a simulation relates to the real biology. PMID:25142524

  1. Qualitative insights into practice time management: does 'patient-centred time' in practice management offer a portal to improved access?

    PubMed

    Buetow, S; Adair, V; Coster, G; Hight, M; Gribben, B; Mitchell, E

    2002-12-01

    Different sets of literature suggest how aspects of practice time management can limit access to general practitioner (GP) care. Researchers have not organised this knowledge into a unified framework that can enhance understanding of barriers to, and opportunities for, improved access. To suggest a framework conceptualising how differences in professional and cultural understanding of practice time management in Auckland, New Zealand, influence access to GP care for children with chronic asthma. A qualitative study involving selective sampling, semi-structured interviews on barriers to access, and a general inductive approach. Twenty-nine key informants and ten mothers of children with chronic, moderate to severe asthma and poor access to GP care in Auckland. Development of a framework from themes describing barriers associated with, and needs for, practice time management. The themes were independently identified by two authors from transcribed interviews and confirmed through informant checking. Themes from key informant and patient interviews were triangulated with each other and with published literature. The framework distinguishes 'practice-centred time' from 'patient-centred time.' A predominance of 'practice-centred time' and an unmet opportunity for 'patient-centred time' are suggested by the persistence of five barriers to accessing GP care: limited hours of opening; traditional appointment systems; practice intolerance of missed appointments; long waiting times in the practice; and inadequate consultation lengths. None of the barriers is specific to asthmatic children. A unified framework was suggested for understanding how the organisation of practice work time can influence access to GP care by groups including asthmatic children.

  2. SSC San Diego Biennial Review 2003. Vol 2: Communication and Information Systems

    DTIC Science & Technology

    2003-01-01

    University, Department of Electrical and Computer Engineering) Michael Jablecki (Science and Technology Corporation) Stochastic Unified Multiple...wearable computers and cellular phones. The technology-transfer process involved a coalition of government and industrial partners, each providing...the design and fabrication of the coupler. SSC San Diego developed a computer-controlled fused fiber fabrication station to achieve the required

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, Andrew M.; Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Leung, Debbie W.

    We present unified, systematic derivations of schemes in the two known measurement-based models of quantum computation. The first model (introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)]) uses a fixed entangled state, adaptive measurements on single qubits, and feedforward of the measurement results. The second model (proposed by Nielsen [Phys. Lett. A 308, 96 (2003)] and further simplified by Leung [Int. J. Quant. Inf. 2, 33 (2004)]) uses adaptive two-qubit measurements that can be applied to arbitrary pairs of qubits, and feedforward of the measurement results. The underlying principle of our derivations is a variant of teleportation introduced by Zhou, Leung, and Chuang [Phys. Rev. A 62, 052316 (2000)]. Our derivations unify these two measurement-based models of quantum computation and provide significantly simpler schemes.
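
    The teleportation primitive underlying both schemes can be checked numerically in a few lines. The sketch below (an illustration, not the paper's derivation) simulates the one-qubit version: couple the input to |+> with a controlled-Z, measure the first qubit in the X basis, and apply a Pauli-X correction; either outcome yields H|psi> on the second qubit.

```python
# Tiny numpy check of one-qubit teleportation: CZ with |+>, X-basis measurement,
# Pauli correction; the output equals H|psi> (up to a global phase).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])
plus = np.array([1, 1]) / np.sqrt(2)

def one_bit_teleport(psi, outcome):
    # Project qubit 1 onto the X-basis state `outcome` (0 -> |+>, 1 -> |->).
    state = CZ @ np.kron(psi, plus)
    basis = np.array([1, 1 - 2 * outcome]) / np.sqrt(2)
    out = np.kron(basis.conj(), np.eye(2)) @ state           # unnormalized qubit-2 state
    out = out / np.linalg.norm(out)
    return np.linalg.matrix_power(X, outcome) @ out          # feedforward correction X^m

psi = np.array([0.6, 0.8j])                                  # arbitrary input state
for m in (0, 1):
    out = one_bit_teleport(psi, m)
    assert np.allclose(out, H @ psi) or np.allclose(out, -H @ psi)
```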

  4. Aerospace System Unified Life Cycle Engineering Producibility Measurement Issues

    DTIC Science & Technology

    1989-05-01

    ...in the development process; these computer-aided models offer clarity approaching that of a prototype model. Once a part geometry is represented...of part geometry, allowing manufacturability evaluation and possibly other computer-integrated manufacturing (CIM) tasks. (Other papers that discuss

  5. Computer Networking Strategies for Building Collaboration among Science Educators.

    ERIC Educational Resources Information Center

    Aust, Ronald

    The development and dissemination of science materials can be associated with technical delivery systems such as the Unified Network for Informatics in Teacher Education (UNITE). The UNITE project was designed to investigate ways for using computer networking to improve communications and collaboration among university schools of education and…

  6. A Model for Conducting and Assessing Interdisciplinary Undergraduate Dissertations

    ERIC Educational Resources Information Center

    Engström, Henrik

    2015-01-01

    This paper presents an effort to create a unified model for conducting and assessing undergraduate dissertations, shared by all disciplines involved in computer game development at a Swedish university. Computer game development includes technology-oriented disciplines as well as disciplines with aesthetical traditions. The challenge has been to…

  7. Computer Utilization by Schools: An Example.

    ERIC Educational Resources Information Center

    Tondow, Murray

    1968-01-01

    The Educational Data Services Department of the Palo Alto Unified School District is responsible for implementing data processing needs to improve the quality of education in Palo Alto, California. Information from the schools enters the Department data library to be scanned, coded, and corrected prior to IBM 1620 computer input. Operating 17…

  8. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. By treating the allocation among all feasible commuting-pruning modalities as a combinational hypothesis-testing optimization problem, we find the globally optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations as any other feasible modality. The DFTCOMM method therefore outperforms the existing competing pruning techniques reported in the literature in terms of attainable savings in the number of required arithmetic operations. Finally, we compare DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family and show that, in sensing scenarios with sparse or non-sparse data Fourier spectra, the DFTCOMM technique is robust against such model uncertainties, in the sense of being insensitive to sparsity/non-sparsity restrictions and to variability of the operating parameters.
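
    The core pruning idea, evaluating only the output bins that are actually needed rather than a full FFT, is easy to illustrate. The sketch below shows plain output pruning by direct summation; it does not reproduce the commutation logic or the recursive-filtering modality of DFTCOMM.

```python
# Minimal illustration of output pruning: compute only K requested DFT bins directly
# (roughly K*N operations), which can beat a full N log N FFT when K is very small.
import numpy as np

def pruned_dft(x, bins):
    # Direct evaluation of the selected DFT bins of x.
    n = np.arange(len(x))
    return np.array([np.dot(x, np.exp(-2j * np.pi * k * n / len(x))) for k in bins])

x = np.random.randn(4096)
bins = [0, 5, 17]                      # only these output frequencies are needed
assert np.allclose(pruned_dft(x, bins), np.fft.fft(x)[bins])
```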

  9. Teacher Preparation for Vocational Education and Training in Germany: A Potential Model for Canada?

    ERIC Educational Resources Information Center

    Barabasch, Antje; Watt-Malcolm, Bonnie

    2013-01-01

    Germany's vocational education and training (VET) and corresponding teacher-education programmes are known worldwide for their integrated framework. Government legislation unifies companies, unions and vocational schools, and specifies the education and training required for students as well as vocational teachers. Changing from the Diplom…

  10. The Unified Plant Growth Model (UPGM): software framework overview and model application

    USDA-ARS?s Scientific Manuscript database

    Since the Environmental Policy Integrated Climate (EPIC) model was developed in 1989, the EPIC plant growth component has been incorporated into other erosion and crop management models (e.g., WEPS, WEPP, SWAT, ALMANAC, and APEX) and modified to meet model developer research objectives. This has re...

  11. The Importance of Culture for Developmental Science

    ERIC Educational Resources Information Center

    Keller, Heidi

    2012-01-01

    In this essay, it is argued that a general understanding of human development needs a unified framework based on evolutionary theorizing and cross-cultural and cultural anthropological approaches. An eco-social model of development has been proposed that defines cultural milieus as adaptations to specific socio-demographic contexts. Ontogenetic…

  12. Software Hardware Asset Reuse Enterprise (SHARE) Repository Framework Final Report: Component Specification and Ontology

    DTIC Science & Technology

    2009-08-19

    SSDS Ship Self Defense System TSTS Total Ship Training System UDDI Universal Description, Discovery, and Integration UML Unified Modeling..."ContractorOrganization" type="ContractorOrganizationType"> <xs:annotation> <xs:documentation>Identifies a contractor organization responsible for the

  13. An Extension of Multiple Correspondence Analysis for Identifying Heterogeneous Subgroups of Respondents

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Montreal, Hec; Dillon, William R.; Takane, Yoshio

    2006-01-01

    An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…
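
    A rough sense of the two ingredients can be had from a sequential (not unified) sketch: one-hot encode the categorical responses, take a low-dimensional decomposition as a stand-in for multiple correspondence analysis, then cluster the scores with k-means. The proposed method optimizes both steps jointly, which this scikit-learn sketch does not do; the survey data below are synthetic.

```python
# Sequential stand-in for the MCA + k-means idea (the paper's method is a single
# unified optimization): one-hot encoding, SVD-based reduction, then k-means.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
responses = rng.choice(["agree", "neutral", "disagree"], size=(200, 6))  # toy survey data

indicator = OneHotEncoder().fit_transform(responses)           # sparse indicator matrix
scores = TruncatedSVD(n_components=2).fit_transform(indicator) # low-dim space (MCA stand-in)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(scores)   # respondent clusters
print(np.bincount(labels))                                     # cluster sizes
```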

  14. Diversity from genes to ecosystems: A unifying framework to study variation across biological metrics and scales

    USDA-ARS?s Scientific Manuscript database

    Biological diversity is a key concept in the life sciences and plays a fundamental role in many ecological and evolutionary processes. Although biodiversity is inherently a hierarchical concept covering different levels of organization (genes, population, species, ecological communities and ecosyst...

  15. The Theory behind the Theory in DCT and SCDT: A Response to Rigazio-DiGilio.

    ERIC Educational Resources Information Center

    Terry, Linda L.

    1994-01-01

    Responds to previous article by Rigazio-DiGilio on Developmental Counseling and Therapy and Systemic Cognitive-Developmental Therapy as two integrative models that unify individual, family, and network treatment within coconstructive-developmental framework. Discusses hidden complexities in cognitive-developmental ecosystemic integration and…

  16. Potential of DCT/SCDT in Addressing Two Elusive Themes of Mental Health Counseling.

    ERIC Educational Resources Information Center

    Borders, L. DiAnne

    1994-01-01

    Responds to previous article by Rigazio-DiGilio on Developmental Counseling and Therapy and Systemic Cognitive-Developmental Therapy as two integrative models that unify individual, family, and network treatment within coconstructive-developmental framework. Considers extent to which model breaks impasse in integrating development into counseling…

  17. Converging Instructional Technology and Critical Intercultural Pedagogy in Teacher Education

    ERIC Educational Resources Information Center

    Pittman, Joyce

    2007-01-01

    Purpose: This paper aims to postulate an emerging unified cultural-convergence framework to converge the delivery of instructional technology and intercultural education (ICE) that extends beyond web-learning technologies to inculcate inclusive pedagogy in teacher education. Design/methodology/approach: The paper explores the literature and a…

  18. Spending on School Infrastructure: Does Money Matter?

    ERIC Educational Resources Information Center

    Crampton, Faith E.

    2009-01-01

    Purpose: The purpose of this study is to further develop an emerging thread of quantitative research that grounds investment in school infrastructure in a unified theoretical framework of investment in human, social, and physical capital. Design/methodology/approach: To answer the research question, what is the impact of investment in human,…

  19. Simultaneous Two-Way Clustering of Multiple Correspondence Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Dillon, William R.

    2010-01-01

    A 2-way clustering approach to multiple correspondence analysis is proposed to account for cluster-level heterogeneity of both respondents and variable categories in multivariate categorical data. Specifically, in the proposed method, multiple correspondence analysis is combined with k-means in a unified framework in which "k"-means is…

  20. Darwinian Liberal Education

    ERIC Educational Resources Information Center

    Arnhart, Larry

    2006-01-01

    Be it metaphysics, theology, or some other unifying framework, humans have long sought to determine "first principles" underlying knowledge. Larry Arnhart continues in this vein, positing a Darwinian web of genetic, cultural, and cognitive evolution to explain our social behavior in terms of human nature as governed by biology. He leaves it to us…
