Sample records for explicit whole-system model

  1. Through the Immune Looking Glass: A Model for Brain Memory Strategies

    PubMed Central

    Sánchez-Ramón, Silvia; Faure, Florence

    2016-01-01

    The immune system (IS) and the central nervous system (CNS) are complex cognitive networks involved in defining the identity (self) of the individual through recognition and memory processes that enable one to anticipate responses to stimuli. Brain memory has traditionally been classified as either implicit or explicit on psychological and anatomical grounds, echoing the evolutionarily based innate-adaptive division of IS responses. Beyond the multineuronal networks of the CNS, we propose a theoretical model of brain memory integrating the CNS as a whole. This is achieved by analogical reasoning between the operational rules of recognition and memory processes in both systems, coupled to an evolutionary analysis. In this new model, the hippocampus is no longer specifically ascribed to explicit memory; rather, it both becomes part of the innate (implicit) memory system and tightly controls the explicit memory system. Like the antigen-presenting cells of the IS, the hippocampus would integrate transient and pseudo-specific (i.e., danger-fear) memories and would drive the formation of long-term and highly specific, explicit memories (i.e., the taste of Proust's madeleine) by the evolutionarily more complex and recent neocortex. Experimental and clinical evidence is provided to support the model. We believe that the singularity of this model's approach could help to gain a better understanding of the mechanisms operating in brain memory strategies from a large-scale network perspective. PMID:26869886

  2. Systemic risk in banking ecosystems.

    PubMed

    Haldane, Andrew G; May, Robert M

    2011-01-20

    In the run-up to the recent financial crisis, an increasingly elaborate set of financial instruments emerged, intended to optimize returns to individual institutions with seemingly minimal risk. Essentially no attention was given to their possible effects on the stability of the system as a whole. Drawing analogies with the dynamics of ecological food webs and with networks within which infectious diseases spread, we explore the interplay between complexity and stability in deliberately simplified models of financial networks. We suggest some policy lessons that can be drawn from such models, with the explicit aim of minimizing systemic risk.
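
    A minimal sketch in the spirit of the deliberately simplified network models the abstract describes (the ring topology, capital buffer, and exposure values below are our invented illustration, not Haldane and May's model): a bank fails when losses on claims against failed neighbors exceed its capital buffer, and a single failure may or may not cascade system-wide.

```python
# Toy threshold-contagion model on a ring of banks (illustrative only).
# Each bank lends `exposure` (as a fraction of assets) to each of its
# two ring neighbors and fails once interbank losses exceed `capital`.

def cascade(n_banks=20, capital=0.04, exposure=0.05, seed=0):
    """Seed one failure and return the set of failed banks once the
    contagion process settles."""
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for b in range(n_banks):
            if b in failed:
                continue
            neighbors = {(b - 1) % n_banks, (b + 1) % n_banks}
            loss = exposure * len(neighbors & failed)
            if loss > capital:
                failed.add(b)
                changed = True
    return failed

full = cascade(capital=0.04)       # buffer below single-neighbor loss
contained = cascade(capital=0.06)  # buffer above single-neighbor loss
```

    With a buffer below the single-neighbor loss (0.04 < 0.05) the seeded failure propagates around the whole ring; raising the buffer just above it (0.06) contains the shock, illustrating the interplay between network structure and systemic stability.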

  3. Whole-system carbon balance for a regional temperate forest in Northern Wisconsin, USA

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.; Gower, S. T.

    2010-12-01

    The whole-system (biological + industrial) carbon (C) balance was estimated for the Chequamegon-Nicolet National Forest (CNNF), a temperate forest covering 600,000 ha in Northern Wisconsin, USA. The biological system was modeled using a spatially-explicit version of the ecosystem process model Biome-BGC. The industrial system was modeled using life cycle inventory (LCI) models for wood and paper products. Biome-BGC was used to estimate net primary production, net ecosystem production (NEP), and timber harvest (H) over the entire CNNF. The industrial carbon budget (Ci) was estimated by applying LCI models of CO2 emissions resulting from timber harvest and production of specific wood and paper products in the CNNF region. In 2009, simulated NEP of the CNNF averaged 3.0 tC/ha and H averaged 0.1 tC/ha. Despite model uncertainty, the CNNF region is likely a carbon sink (NEP - Ci > 0), even when CO2 emissions from timber harvest and production of wood and paper products are included in the calculation of the entire forest system C budget.
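
    The sink criterion in the abstract, NEP - Ci > 0, can be made concrete with a back-of-envelope sketch using the reported 2009 simulation values; the LCI emission factor below is an invented placeholder, not a number from the study.

```python
# Whole-system (biological + industrial) carbon balance sketch.
AREA_HA = 600_000       # CNNF area, ha
NEP = 3.0               # simulated net ecosystem production, tC/ha/yr
HARVEST = 0.1           # simulated timber harvest, tC/ha/yr

# Hypothetical LCI emission factor: tC emitted per tC harvested during
# wood/paper product manufacture (placeholder value, not from the study).
EMISSION_FACTOR = 0.5
Ci = HARVEST * EMISSION_FACTOR      # industrial emissions, tC/ha/yr

net_per_ha = NEP - Ci               # whole-system balance per hectare
net_total = net_per_ha * AREA_HA    # tC/yr over the whole forest
is_sink = net_per_ha > 0            # the abstract's NEP - Ci > 0 test
```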

  4. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    PubMed

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

    Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. 
Even so, in this research we conducted a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the different approaches (continuous population modelling and Individual-based Modelling), which we hope will be helpful to current and future researchers.
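
    As a concrete instance of the whole-system continuous (primary) models the abstract contrasts with Individual-based Models, here is a hedged sketch of logistic population growth, a classic phenomenological growth model in predictive microbiology; all parameter values are illustrative.

```python
# Illustrative primary growth model (parameter values invented):
# logistic growth  dN/dt = mu * N * (1 - N / Nmax), a whole-system
# continuous description of a microbial population, integrated with
# forward Euler and sampled once per hour.
import math

def logistic_growth(n0=1e3, nmax=1e9, mu=0.8, t_end=40.0, dt=0.01):
    """Return log10 cell counts: initial value plus hourly samples."""
    n, t = n0, 0.0
    samples = [math.log10(n)]
    next_sample = 1.0
    while t < t_end:
        n += dt * mu * n * (1 - n / nmax)
        t += dt
        if t >= next_sample - 1e-9:
            samples.append(math.log10(n))
            next_sample += 1.0
    return samples

curve = logistic_growth()
# The curve rises sigmoidally from log10(N0) = 3 and saturates
# near log10(Nmax) = 9.
```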

  5. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    PubMed

    Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  6. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  7. Water: the bloodstream of the biosphere.

    PubMed

    Ripl, Wilhelm

    2003-12-29

    Water, the bloodstream of the biosphere, determines the sustainability of living systems. The essential role of water is expanded in a conceptual model of energy dissipation, based on the water balance of whole landscapes. In this model, the underlying role of water phase changes--and their energy-dissipative properties--in the function and the self-organized development of natural systems is explicitly recognized. The energy-dissipating processes regulate the ecological dynamics within the Earth's biosphere, in such a way that the development of natural systems is never allowed to proceed in an undirected or random way. A fundamental characteristic of self-organized development in natural systems is the increasing role of cyclic processes while loss processes are correspondingly reduced. This gives a coincidental increase in system efficiency, which is the basis of growing stability and sustainability. Growing sustainability can be seen as an increase of ecological efficiency, which is applicable at all levels up to whole landscapes. Criteria for necessary changes in society and for the design of the measures that are necessary to restore sustainable landscapes and waters are derived.

  8. Water: the bloodstream of the biosphere.

    PubMed Central

    Ripl, Wilhelm

    2003-01-01

    Water, the bloodstream of the biosphere, determines the sustainability of living systems. The essential role of water is expanded in a conceptual model of energy dissipation, based on the water balance of whole landscapes. In this model, the underlying role of water phase changes--and their energy-dissipative properties--in the function and the self-organized development of natural systems is explicitly recognized. The energy-dissipating processes regulate the ecological dynamics within the Earth's biosphere, in such a way that the development of natural systems is never allowed to proceed in an undirected or random way. A fundamental characteristic of self-organized development in natural systems is the increasing role of cyclic processes while loss processes are correspondingly reduced. This gives a coincidental increase in system efficiency, which is the basis of growing stability and sustainability. Growing sustainability can be seen as an increase of ecological efficiency, which is applicable at all levels up to whole landscapes. Criteria for necessary changes in society and for the design of the measures that are necessary to restore sustainable landscapes and waters are derived. PMID:14728789

  9. The economics of integrated electronic medical record systems.

    PubMed

    Chismar, William G; Thomas, Sean M

    2004-01-01

    The decision to adopt electronic medical record systems in private practices is usually based on factors specific to the practice--the cost, the potential cost and time savings, and the impact on quality of care. As evidenced by the low adoption rates, providers have not found these evaluations compelling. However, it is recognized that the widespread adoption of EMR systems would greatly benefit the health care system as a whole. One explanation for the lack of adoption is that the costs and benefits of EMR systems are misaligned across the health care system. In this paper we present an economic model of the adoption of EMR systems that explicitly represents the distribution of costs and benefits across stakeholders (physicians, hospitals, insurers, etc.). We discuss incentive systems for balancing the costs and benefits and thus promoting faster adoption of EMR systems. Finally, we describe our plan to extend the model and to use real-world data to evaluate it.

  10. Modelling the Implicit Learning of Phonological Decoding from Training on Whole-Word Spellings and Pronunciations

    ERIC Educational Resources Information Center

    Pritchard, Stephen C.; Coltheart, Max; Marinus, Eva; Castles, Anne

    2016-01-01

    Phonological decoding is central to learning to read, and deficits in its acquisition have been linked to reading disorders such as dyslexia. Understanding how this skill is acquired is therefore important for characterising reading difficulties. Decoding can be taught explicitly, or implicitly learned during instruction on whole word spellings…

  11. Feynman rules for a whole Abelian model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauca, J.; Doria, R.; Soares, W.

    2012-09-24

    Feynman rules for an abelian extension of gauge theories are discussed and explicitly derived. Vertices with three and four abelian gauge bosons are obtained. A discussion on an eventual structure for the photon is presented.

  12. The Effects of Explicit Spelling Instruction in the Spanish EFL Classroom: Diagnosis, Development and Durability

    ERIC Educational Resources Information Center

    Canado, Maria Luisa Perez

    2006-01-01

    This paper aims to shed light on the explicit-implicit paradigm contention in connection with the foreign language area of English spelling. To this end, it frames the subject against the backdrop of the prolonged dispute between implicit, whole language, top-down, or whole-to-part approaches and explicit, traditional, bottom-up, or part-to-whole…

  13. Nonlinear integrable model of Frenkel-like excitations on a ribbon of triangular lattice

    NASA Astrophysics Data System (ADS)

    Vakhnenko, Oleksiy O.

    2015-03-01

    Following the considerable progress in nanoribbon technology, we propose to model nonlinear Frenkel-like excitations on a triangular-lattice ribbon by an integrable nonlinear ladder system with background-controlled intersite resonant coupling. The system of interest arises as a proper reduction of the first general semidiscrete integrable system from an infinite hierarchy. The most significant local conservation laws related to the first general integrable system are found explicitly in the framework of a generalized recursive approach. The obtained general local densities are equally applicable to any general semidiscrete integrable system from the respective infinite hierarchy. Using the recovered second densities, the Hamiltonian formulation of the integrable nonlinear ladder system with background-controlled intersite resonant coupling is presented. In doing so, the relevant Poisson structure turns out to be essentially nontrivial. The Darboux transformation scheme as applied to the first general semidiscrete system is developed, and the key role of the Bäcklund transformation in justifying its self-consistency is pointed out. The spectral properties of the Darboux matrix allow one to restore the whole Darboux matrix, thus ensuring the generation of one more soliton as compared with an a priori known seed solution of the integrable nonlinear system. The power of the Darboux-dressing method is explicitly demonstrated by generating the multicomponent one-soliton solution to the integrable nonlinear ladder system with background-controlled intersite resonant coupling.

  14. Dynamical behaviors determined by the Lyapunov function in competitive Lotka-Volterra systems.

    PubMed

    Tang, Ying; Yuan, Ruoshi; Ma, Yian

    2013-01-01

    Dynamical behaviors of the competitive Lotka-Volterra system even for 3 species are not fully understood. In this paper, we study this problem from the perspective of the Lyapunov function. We construct explicitly the Lyapunov function using three examples of the competitive Lotka-Volterra system for the whole state space: (1) the general 2-species case, (2) a 3-species model, and (3) the model of May-Leonard. The basins of attraction for these examples are demonstrated, including cases with bistability and cyclical behavior. The first two examples are the generalized gradient system, where the energy dissipation may not follow the gradient of the Lyapunov function. In addition, under a new type of stochastic interpretation, the Lyapunov function also leads to the Boltzmann-Gibbs distribution on the final steady state when multiplicative noise is added.
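
    The May-Leonard model mentioned in the abstract can be sketched numerically; this is our toy forward-Euler illustration of its cyclical behavior, not the authors' Lyapunov-function construction.

```python
# Toy forward-Euler simulation (not the authors' code) of the
# 3-species May-Leonard competitive Lotka-Volterra system,
#   dx_i/dt = x_i * (1 - x_i - a*x_j - b*x_k)   (indices cyclic).
# With a + b = 2 and a < 1 < b the orbit cycles among states where
# each species dominates in turn.

def may_leonard(x, a=0.8, b=1.2):
    """Right-hand side of the May-Leonard system."""
    x1, x2, x3 = x
    return (
        x1 * (1 - x1 - a * x2 - b * x3),
        x2 * (1 - x2 - a * x3 - b * x1),
        x3 * (1 - x3 - a * x1 - b * x2),
    )

def simulate(x0=(0.5, 0.3, 0.2), steps=20000, dt=0.01):
    """Forward-Euler integration; returns the full trajectory."""
    x = list(x0)
    traj = [tuple(x)]
    for _ in range(steps):
        dx = may_leonard(x)
        x = [xi + dt * di for xi, di in zip(x, dx)]
        traj.append(tuple(x))
    return traj

traj = simulate()
# Populations remain positive and bounded while the orbit visits
# single-species-dominant states in turn.
```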

  15. Dynamical behaviors determined by the Lyapunov function in competitive Lotka-Volterra systems

    NASA Astrophysics Data System (ADS)

    Tang, Ying; Yuan, Ruoshi; Ma, Yian

    2013-01-01

    Dynamical behaviors of the competitive Lotka-Volterra system even for 3 species are not fully understood. In this paper, we study this problem from the perspective of the Lyapunov function. We construct explicitly the Lyapunov function using three examples of the competitive Lotka-Volterra system for the whole state space: (1) the general 2-species case, (2) a 3-species model, and (3) the model of May-Leonard. The basins of attraction for these examples are demonstrated, including cases with bistability and cyclical behavior. The first two examples are the generalized gradient system, where the energy dissipation may not follow the gradient of the Lyapunov function. In addition, under a new type of stochastic interpretation, the Lyapunov function also leads to the Boltzmann-Gibbs distribution on the final steady state when multiplicative noise is added.

  16. Different Mechanisms of Soil Microbial Response to Global Change Result in Different Outcomes in the MIMICS-CN Model

    NASA Astrophysics Data System (ADS)

    Kyker-Snowman, E.; Wieder, W. R.; Grandy, S.

    2017-12-01

    Microbial-explicit models of soil carbon (C) and nitrogen (N) cycling have improved upon simulations of C and N stocks and flows at site-to-global scales relative to traditional first-order linear models. However, the response of microbial-explicit soil models to global change factors depends upon which parameters and processes in a model are altered by those factors. We used the MIcrobial-MIneral Carbon Stabilization Model with coupled N cycling (MIMICS-CN) to compare modeled responses to changes in temperature and plant inputs at two previously-modeled sites (Harvard Forest and Kellogg Biological Station). We spun the model up to equilibrium, applied each perturbation, and evaluated 15 years of post-perturbation C and N pools and fluxes. To model the effect of increasing temperatures, we independently examined the impact of decreasing microbial C use efficiency (CUE), increasing the rate of microbial turnover, and increasing Michaelis-Menten kinetic rates of litter decomposition, plus several combinations of the three. For plant inputs, we ran simulations with stepwise increases in metabolic litter, structural litter, whole litter (structural and metabolic), or labile soil C. The cumulative change in soil C or N varied in both sign and magnitude across simulations. For example, increasing kinetic rates of litter decomposition resulted in net releases of both C and N from soil pools, while decreasing CUE produced short-term increases in respiration but long-term accumulation of C in litter pools and shifts in soil C:N as microbial demand for C increased and biomass declined. Given that soil N cycling constrains the response of plant productivity to global change and that soils generate a large amount of uncertainty in current earth system models, microbial-explicit models are a critical opportunity to advance the modeled representation of soils. 
However, microbial-explicit models must be refined through experiments that isolate the physiological and stoichiometric parameters of soil microbes that shift under global change.
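
    A highly simplified two-pool caricature (not MIMICS-CN; all parameters invented) reproduces the qualitative decreased-CUE response the abstract describes: more of each unit of uptake is respired, so equilibrium microbial biomass shrinks while substrate accumulates.

```python
# Two-pool substrate/microbe caricature (not MIMICS-CN; parameters
# invented) illustrating the decreased-CUE scenario:
#   dC/dt = I - U + k*B,   dB/dt = cue*U - k*B,
# with Michaelis-Menten uptake U = v*B*C/(K + C), respiration
# (1 - cue)*U, and necromass (k*B) returned to the substrate pool.

def run(cue, steps=100000, dt=0.01):
    """Integrate to (near) steady state; return (substrate, biomass)."""
    I, v, K, k = 1.0, 2.0, 10.0, 0.1  # input, max uptake, half-sat, turnover
    C, B = 20.0, 5.0
    for _ in range(steps):
        U = v * B * C / (K + C)
        C += dt * (I - U + k * B)
        B += dt * (cue * U - k * B)
    return C, B

C_hi, B_hi = run(cue=0.5)
C_lo, B_lo = run(cue=0.3)
# Analytically, steady-state biomass is B* = I*cue / (k*(1 - cue)),
# so lowering CUE shrinks biomass while substrate accumulates.
```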

  17. Predicting Pilot Behavior in Medium Scale Scenarios Using Game Theory and Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Yildiz, Yildiray; Agogino, Adrian; Brat, Guillaume

    2013-01-01

    Effective automation is critical in achieving the capacity and safety goals of the Next Generation Air Traffic System. Unfortunately, creating integration and validation tools for such automation is difficult, as the interactions between automation and its human counterparts are complex and unpredictable. This validation becomes even more difficult as we integrate wide-reaching technologies that affect the behavior of different decision makers in the system, such as pilots, controllers and airlines. While overt short-term behavior changes can be explicitly modeled with traditional agent modeling systems, subtle behavior changes caused by the integration of new technologies may snowball into larger problems and be very hard to detect. To overcome these obstacles, we show how the integration of new technologies can be validated by learning behavior models based on goals. In this framework, human participants are not modeled explicitly. Instead, their goals are modeled and, through reinforcement learning, their actions are predicted. The main advantage of this approach is that modeling is done within the context of the entire system, allowing for accurate modeling of all participants as they interact as a whole. In addition, such an approach allows for efficient trade studies and feasibility testing on a wide range of automation scenarios. The goal of this paper is to show that such an approach is feasible. To do this, we implement the approach using a simple discrete-state learning system on a scenario where 50 aircraft need to self-navigate using Automatic Dependent Surveillance-Broadcast (ADS-B) information. In this scenario, we show how the approach can be used to predict the ability of pilots to adequately balance aircraft separation and fly efficient paths. We present results with several levels of complexity and airspace congestion.

  18. A general theory of kinetics and thermodynamics of steady-state copolymerization.

    PubMed

    Shu, Yao-Gen; Song, Yong-Shun; Ou-Yang, Zhong-Can; Li, Ming

    2015-06-17

    Kinetics of steady-state copolymerization has been investigated since the 1940s. Irreversible terminal and penultimate models were successfully applied to a number of comonomer systems, but failed for systems where depropagation is significant. Although a general mathematical treatment of the terminal model with depropagation was established in the 1980s, a penultimate model and higher-order terminal models with depropagation have not been systematically studied, since depropagation leads to hierarchically-coupled and unclosed kinetic equations which are hard to solve analytically. In this work, we propose a truncation method to solve the steady-state kinetic equations of any-order terminal models with depropagation in a unified way, by reducing them into closed steady-state equations which give the exact solution of the original kinetic equations. Based on the steady-state equations, we also derive a general thermodynamic equality in which the Shannon entropy of the copolymer sequence is explicitly introduced as part of the free energy dissipation of the whole copolymerization system.
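
    The sequence-entropy term in the abstract's thermodynamic equality can be illustrated with a toy two-monomer terminal model: the copolymer sequence is then a Markov chain over monomers, and its Shannon entropy per added monomer is the chain's entropy rate. The transition probabilities below are invented for illustration.

```python
# Toy terminal-model sketch (numbers invented): sequence entropy rate
# of a binary copolymer,  h = -sum_i pi_i * sum_j P_ij * log2(P_ij),
# where P_ij is the conditional probability of adding monomer j after
# terminal monomer i and pi is the stationary distribution.
import math

# Terminal-model conditional addition probabilities P[last][next].
P = {"A": {"A": 0.7, "B": 0.3},
     "B": {"A": 0.4, "B": 0.6}}

# Stationary distribution of the 2-state chain: pi_A = P_BA / (P_AB + P_BA).
pi_A = P["B"]["A"] / (P["A"]["B"] + P["B"]["A"])
pi = {"A": pi_A, "B": 1 - pi_A}

entropy_rate = -sum(
    pi[i] * p * math.log2(p)
    for i in P
    for p in P[i].values()
)
# entropy_rate is in bits per added monomer, between 0 (deterministic
# sequence) and 1 (fully random binary sequence).
```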

  19. Calibrating the mental number line.

    PubMed

    Izard, Véronique; Dehaene, Stanislas

    2008-03-01

    Human adults are thought to possess two dissociable systems to represent numbers: an approximate quantity system akin to a mental number line, and a verbal system capable of representing numbers exactly. Here, we study the interface between these two systems using an estimation task. Observers were asked to estimate the approximate numerosity of dot arrays. We show that, in the absence of calibration, estimates are largely inaccurate: responses increase monotonically with numerosity, but underestimate the actual numerosity. However, insertion of a few inducer trials, in which participants are explicitly (and sometimes misleadingly) told that a given display contains 30 dots, is sufficient to calibrate their estimates on the whole range of stimuli. Based on these empirical results, we develop a model of the mapping between the numerical symbols and the representations of numerosity on the number line.
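
    A hedged sketch of the kind of symbol-to-numerosity mapping the abstract models (the functional form and exponent are our assumptions, not the authors' fitted model): uncalibrated estimates follow a compressive power function that underestimates numerosity, and a single inducer trial anchors the mapping at 30, rescaling the whole range.

```python
# Invented illustration: estimates follow est(n) = a * n**b with b < 1.
# An inducer trial ("this display has 30 dots") rescales the prefactor
# so the mapping passes through (30, 30).

def estimate(n, a, b=0.8):
    """Power-function estimate of numerosity n (hypothetical form)."""
    return a * n ** b

a0 = 1.0                     # uncalibrated prefactor: underestimates
a_cal = 30 / 30 ** 0.8       # calibrated so estimate(30) == 30

before = [estimate(n, a0) for n in (10, 30, 100)]
after = [estimate(n, a_cal) for n in (10, 30, 100)]
# Before calibration all three numerosities are underestimated; after
# the inducer, the estimate is exact at 30 and uniformly rescaled
# elsewhere on the range.
```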

  20. The Effects of Explicit-Strategy and Whole-Language Instruction on Students' Spelling Ability.

    ERIC Educational Resources Information Center

    Butyniec-Thomas, Jean; Woloshyn, Vera E.

    1997-01-01

    Whether explicit-strategy instruction combined with whole-language instruction would improve third graders' spelling more than using either approach alone was studied with 37 students. Findings suggest that young children learn to spell best when they are taught a repertoire of effective strategies in a meaningful context. (SLD)

  1. An integrative neural model of social perception, action observation, and theory of mind.

    PubMed

    Yang, Daniel Y-J; Rosenblau, Gabriela; Keifer, Cara; Pelphrey, Kevin A

    2015-04-01

    In the field of social neuroscience, major branches of research have been instrumental in describing independent components of typical and aberrant social information processing, but the field as a whole lacks a comprehensive model that integrates different branches. We review existing research related to the neural basis of three key neural systems underlying social information processing: social perception, action observation, and theory of mind. We propose an integrative model that unites these three processes and highlights the posterior superior temporal sulcus (pSTS), which plays a central role in all three systems. Furthermore, we integrate these neural systems with the dual system account of implicit and explicit social information processing. Large-scale meta-analyses based on Neurosynth confirmed that the pSTS is at the intersection of the three neural systems. Resting-state functional connectivity analysis with 1000 subjects confirmed that the pSTS is connected to all other regions in these systems. The findings presented in this review are specifically relevant for psychiatric research, especially for disorders characterized by social deficits, such as autism spectrum disorder.

  2. An integrative neural model of social perception, action observation, and theory of mind

    PubMed Central

    Yang, Daniel Y.-J.; Rosenblau, Gabriela; Keifer, Cara; Pelphrey, Kevin A.

    2016-01-01

    In the field of social neuroscience, major branches of research have been instrumental in describing independent components of typical and aberrant social information processing, but the field as a whole lacks a comprehensive model that integrates different branches. We review existing research related to the neural basis of three key neural systems underlying social information processing: social perception, action observation, and theory of mind. We propose an integrative model that unites these three processes and highlights the posterior superior temporal sulcus (pSTS), which plays a central role in all three systems. Furthermore, we integrate these neural systems with the dual system account of implicit and explicit social information processing. Large-scale meta-analyses based on Neurosynth confirmed that the pSTS is at the intersection of the three neural systems. Resting-state functional connectivity analysis with 1000 subjects confirmed that the pSTS is connected to all other regions in these systems. The findings presented in this review are specifically relevant for psychiatric research, especially for disorders characterized by social deficits, such as autism spectrum disorder. PMID:25660957

  3. Plant growth and architectural modelling and its applications

    PubMed Central

    Guo, Yan; Fourcaud, Thierry; Jaeger, Marc; Zhang, Xiaopeng; Li, Baoguo

    2011-01-01

    Over the last decade, a growing number of scientists around the world have invested in research on plant growth and architectural modelling and applications (often abbreviated to plant modelling and applications, PMA). By combining physical and biological processes, spatially explicit models have shown their ability to help in understanding plant–environment interactions. This Special Issue on plant growth modelling presents new information within this topic, which are summarized in this preface. Research results for a variety of plant species growing in the field, in greenhouses and in natural environments are presented. Various models and simulation platforms are developed in this field of research, opening new features to a wider community of researchers and end users. New modelling technologies relating to the structure and function of plant shoots and root systems are explored from the cellular to the whole-plant and plant-community levels. PMID:21638797

  4. Age-dependent Fourier model of the shape of the isolated ex vivo human crystalline lens.

    PubMed

    Urs, Raksha; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie

    2010-06-01

    To develop an age-dependent mathematical model of the zero-order shape of the isolated ex vivo human crystalline lens, using one mathematical function, that can subsequently be used to facilitate the development of other models for specific purposes such as optical modeling and analytical and numerical modeling of the lens. Profiles of whole isolated human lenses (n=30), aged 20-69 years, were measured from shadow-photogrammetric images. The profiles were fit to a 10th-order Fourier series consisting of cosine functions in a polar coordinate system that included terms for tilt and decentration. The profiles were corrected using these terms and processed in two ways. In the first, each lens was fit to a 10th-order Fourier series to obtain thickness and diameter, while in the second, all lenses were simultaneously fit to a Fourier series equation that explicitly included linear terms for age, to develop an age-dependent mathematical model of the whole lens shape. Thickness and diameter obtained from the Fourier series fits exhibited high correlation with manual measurements made from shadow-photogrammetric images. The root-mean-squared error of the age-dependent fit was 205 µm. The age-dependent equations provide a reliable lens model for ages 20-60 years. The contour of the whole human crystalline lens can be modeled with a Fourier series. The shape obtained from the age-dependent model described in this paper can be used to facilitate the development of other models for specific purposes such as optical modeling and analytical and numerical modeling of the lens.
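
    The cosine-series contour representation in the abstract can be sketched as follows. The contour below is synthetic, not measured lens data, and the coefficients are recovered by projection using the discrete orthogonality of cosines on a uniform angular grid rather than the authors' fitting procedure.

```python
# Hypothetical sketch: represent a closed lens-like contour r(theta)
# as a 10th-order cosine (Fourier) series in polar coordinates and
# recover the coefficients by projection on a uniform angular grid.
import math

N = 720                      # uniform angular samples over [0, 2*pi)
ORDER = 10                   # highest cosine harmonic retained
thetas = [2 * math.pi * i / N for i in range(N)]

# Synthetic "lens-like" contour: mean radius plus two even harmonics.
r = [4.5 + 0.6 * math.cos(2 * t) + 0.1 * math.cos(4 * t) for t in thetas]

# Projection: a0 = mean of r, ak = (2/N) * sum r * cos(k*theta).
coeffs = [sum(r) / N]
for k in range(1, ORDER + 1):
    coeffs.append(2 / N * sum(ri * math.cos(k * t)
                              for ri, t in zip(r, thetas)))

def contour(theta):
    """Evaluate the fitted Fourier-series contour at angle theta."""
    return coeffs[0] + sum(a * math.cos(k * theta)
                           for k, a in enumerate(coeffs[1:], start=1))

# Thickness and diameter as in the abstract: radii along the polar
# axis (theta = 0, pi) and across the equator (theta = pi/2, 3*pi/2).
thickness = contour(0.0) + contour(math.pi)
diameter = contour(math.pi / 2) + contour(3 * math.pi / 2)
```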

  5. A dynamic appearance descriptor approach to facial actions temporal modeling.

    PubMed

    Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja

    2014-02-01

    Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs)-onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, the GEMEP-FERA dataset in database-dependent experiments, in cross-database experiments using the Cohn-Kanade, and the SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.

  6. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
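    The central trick, integrating the equation error so that no derivatives of the data and no unknown initial conditions appear, can be sketched for a hypothetical first-order model y' = -a*y + b*u (not one of the paper's examples):

    ```python
    import numpy as np

    # Hedged sketch of explicit equation-error least squares for the toy
    # model y' = -a*y + b*u.  Integrating over a window [t_i, t_j] gives
    # y(t_j) - y(t_i) = -a*int(y) + b*int(u): linear in (a, b), with no
    # differentiation of the data and no unknown initial conditions.
    def trapezoid(f, dt):
        return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

    def identify(t, y, u, win=25):
        dt = t[1] - t[0]
        rows, rhs = [], []
        for i in range(0, len(t) - win, win):
            j = i + win
            rows.append([-trapezoid(y[i:j + 1], dt), trapezoid(u[i:j + 1], dt)])
            rhs.append(y[j] - y[i])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return theta  # estimates of (a, b)

    # Synthetic data: step response of y' = -2y + u with u = 1, y(0) = 0
    t = np.linspace(0.0, 5.0, 501)
    u = np.ones_like(t)
    y = 0.5 * (1.0 - np.exp(-2.0 * t))   # exact solution for a=2, b=1
    a_est, b_est = identify(t, y, u)
    ```

    Each window contributes one linear equation, so the whole identification is a single explicit least squares solve, in the spirit of the exact equation-error integration the abstract describes.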

  7. Explicit Instruction in Phonemic Awareness and Phonemically Based Decoding Skills as an Intervention Strategy for Struggling Readers in Whole Language Classrooms

    ERIC Educational Resources Information Center

    Ryder, Janice F.; Tunmer, William E.; Greaney, Keith T.

    2008-01-01

    The aim of this study was to determine whether explicit instruction in phonemic awareness and phonemically based decoding skills would be an effective intervention strategy for children with early reading difficulties in a whole language instructional environment. Twenty-four 6- and 7-year-old struggling readers were randomly assigned to an…

  8. Solving the Sea-Level Equation in an Explicit Time Differencing Scheme

    NASA Astrophysics Data System (ADS)

    Klemann, V.; Hagedoorn, J. M.; Thomas, M.

    2016-12-01

    In preparation for coupling the solid earth to an ice-sheet compartment in an earth-system model, the dependency of the initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously, in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. References: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure Appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int. 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x.
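    As a generic illustration of explicit time differencing (a toy relaxation law, not the VILMA field equations), forward Euler applied to ds/dt = -s/tau tracks the analytic decay for small steps and diverges once dt exceeds 2*tau:

    ```python
    import math

    # Toy forward-Euler (explicit) integration of ds/dt = -s/tau, a
    # stand-in for relaxation handled by explicit time differencing.
    def relax_explicit(s0, tau, dt, n_steps):
        s = [s0]
        for _ in range(n_steps):
            s.append(s[-1] - dt * s[-1] / tau)   # explicit update
        return s

    tau, dt = 1.0, 0.01
    s = relax_explicit(1.0, tau, dt, int(1.0 / dt))   # integrate to t = tau
    # stability requires |1 - dt/tau| < 1, i.e. dt < 2*tau; larger steps diverge
    ```

    This step-size restriction is the usual price of an explicit scheme, traded here against avoiding an iterative (implicit) solve at every time step.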

  9. EPR spectroscopy of complex biological iron-sulfur systems.

    PubMed

    Hagen, Wilfred R

    2018-02-21

    From the very first discovery of biological iron-sulfur clusters with EPR, the spectroscopy has been used to study not only purified proteins but also complex systems such as respiratory complexes, membrane particles and, later, whole cells. In recent times, the emphasis of iron-sulfur biochemistry has moved from the characterization of individual proteins to the systems biology of iron-sulfur biosynthesis, regulation, degradation, and implications for human health. Although this move would suggest a blossoming of System-EPR as a specific, non-invasive monitor of Fe/S (dys)homeostasis in whole cells, a review of the literature reveals limited success, possibly due to technical difficulties in adherence to EPR spectroscopic and biochemical standards. In an attempt to boost the application of System-EPR, the required boundary conditions and their practical applications are explicitly and comprehensively formulated.

  10. Analytically Solvable Model of Spreading Dynamics with Non-Poissonian Processes

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Perotti, Juan I.; Kaski, Kimmo; Kertész, János

    2014-01-01

    Non-Poissonian bursty processes are ubiquitous in natural and social phenomena, yet little is known about their effects on the large-scale spreading dynamics. In order to characterize these effects, we devise an analytically solvable model of susceptible-infected spreading dynamics in infinite systems for arbitrary inter-event time distributions and for the whole time range. Our model is stationary from the beginning, and the role of the lower bound of inter-event times is explicitly considered. The exact solution shows that for early and intermediate times, the burstiness accelerates the spreading as compared to a Poisson-like process with the same mean and same lower bound of inter-event times. Such behavior is opposite for late-time dynamics in finite systems, where the power-law distribution of inter-event times results in a slower and algebraic convergence to a fully infected state in contrast to the exponential decay of the Poisson-like process. We also provide an intuitive argument for the exponent characterizing algebraic convergence.
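    The key ingredient, an inter-event time distribution with an explicit lower bound, can be sampled by inverse transform. The sketch below compares a bursty power-law process with a Poisson-like (shifted-exponential) reference having the same mean and the same lower bound; parameters are illustrative, not the paper's analytic solution:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Power-law inter-event times p(t) ~ t^(-alpha) for t >= t0, sampled by
    # inverse transform; the mean is t0*(alpha-1)/(alpha-2) for alpha > 2.
    def powerlaw_times(t0, alpha, size):
        u = rng.random(size)
        return t0 * (1.0 - u) ** (-1.0 / (alpha - 1.0))

    # Poisson-like reference: shifted exponential with the same lower
    # bound t0 and the same mean.
    def shifted_exp_times(t0, mean, size):
        return t0 + rng.exponential(mean - t0, size)

    t0, alpha, n = 1.0, 3.5, 200_000
    mean_theory = t0 * (alpha - 1.0) / (alpha - 2.0)    # = 5/3 for alpha = 3.5
    bursty = powerlaw_times(t0, alpha, n)
    poisson_like = shifted_exp_times(t0, mean_theory, n)
    ```

    Both samplers respect the lower bound and share the mean, but the power-law sample has a much heavier tail, which is the burstiness whose effect on spreading the paper solves for exactly.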

  11. Requirement analysis for the one-stop logistics management of fresh agricultural products

    NASA Astrophysics Data System (ADS)

    Li, Jun; Gao, Hongmei; Liu, Yuchuan

    2017-08-01

    Issues and concerns regarding food safety, agro-processing, and the environmental and ecological impact of food production have attracted much research interest. Traceability and logistics management of fresh agricultural products faces technological challenges including food product labeling and identification, activity/process characterization, and information systems for the supply chain, i.e., from farm to table. The application of a one-stop logistics service focusing on whole-supply-chain process integration for fresh agricultural products is studied. A collaborative research project on the supply and logistics of fresh agricultural products in Tianjin was performed. Requirement analysis for the one-stop logistics management information system is presented. Model-driven business transformation, an approach that uses formal models to explicitly define the structure and behavior of a business, is applied in the review and analysis process. Specific requirements for the logistics management solutions are proposed. This research is crucial for developing an integrated one-stop logistics management information system platform for fresh agricultural products.

  12. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP), or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N=1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG(obs) within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate as more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.

  13. Electrostatic model for protein adsorption in ion-exchange chromatography and application to monoclonal antibodies, lysozyme and chymotrypsinogen A.

    PubMed

    Guélat, Bertrand; Ströhlein, Guido; Lattuada, Marco; Morbidelli, Massimo

    2010-08-27

    A model for the adsorption equilibrium of proteins in ion-exchange chromatography, explicitly accounting for the effects of pH and salt concentration in the limit of highly diluted systems, was developed. It is based on the use of DLVO theory to estimate the electrostatic interactions between the charged surface of the ion-exchanger and the proteins. The corresponding charge distributions were evaluated as a function of pH and salt concentration using a molecular approach. The model was verified for the adsorption equilibrium of lysozyme, chymotrypsinogen A and four industrial monoclonal antibodies on two strong cation-exchangers. The adsorption equilibrium constants of these proteins were determined experimentally at various pH values and salt concentrations, and the model was fitted with good agreement using three adjustable parameters for each protein over the whole range of experimental conditions. Despite the simplifications of the model regarding the geometry of the protein-ion-exchanger system, the physical meaning of the parameters was retained. 2010 Elsevier B.V. All rights reserved.
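    The salt-concentration dependence enters through electrostatic screening. As a small self-contained illustration (the textbook Debye length for a 1:1 electrolyte, not the paper's full DLVO adsorption model):

    ```python
    import math

    # Debye screening length for a 1:1 aqueous electrolyte at 25 C:
    # lambda_D = sqrt(eps_r*eps0*kB*T / (2*NA*e^2*I)), with I in mol/m^3.
    EPS0 = 8.854e-12      # vacuum permittivity, F/m
    KB = 1.381e-23        # Boltzmann constant, J/K
    NA = 6.022e23         # Avogadro constant, 1/mol
    E = 1.602e-19         # elementary charge, C

    def debye_length(ionic_strength_molar, eps_r=78.5, T=298.15):
        I = ionic_strength_molar * 1000.0          # mol/L -> mol/m^3
        return math.sqrt(eps_r * EPS0 * KB * T / (2.0 * NA * E**2 * I))

    # Screening strengthens with salt: ~0.96 nm at 0.1 M, ~0.30 nm at 1 M
    lam_01 = debye_length(0.1)
    lam_1 = debye_length(1.0)
    ```

    Because the protein-surface attraction decays over roughly this length, raising the salt concentration weakens adsorption, which is the qualitative trend the DLVO-based model captures quantitatively.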

  14. Does Teaching Students How to Explicitly Model the Causal Structure of Systems Improve Their Understanding of These Systems?

    ERIC Educational Resources Information Center

    Jensen, Eva

    2014-01-01

    If students really understand the systems they study, they would be able to tell how changes in the system would affect a result. This demands that the students understand the mechanisms that drive its behaviour. The study investigates potential merits of learning how to explicitly model the causal structure of systems. The approach and…

  15. A neurocomputational theory of how explicit learning bootstraps early procedural learning.

    PubMed

    Paul, Erick J; Ashby, F Gregory

    2013-01-01

    It is widely accepted that human learning and memory is mediated by multiple memory systems that are each best suited to different requirements and demands. Within the domain of categorization, at least two systems are thought to facilitate learning: an explicit (declarative) system depending largely on the prefrontal cortex, and a procedural (non-declarative) system depending on the basal ganglia. Substantial evidence suggests that each system is optimally suited to learn particular categorization tasks. However, it remains unknown precisely how these systems interact to produce optimal learning and behavior. In order to investigate this issue, the present research evaluated the progression of learning through simulation of categorization tasks using COVIS, a well-known model of human category learning that includes both explicit and procedural learning systems. Specifically, the model's parameter space was thoroughly explored in procedurally learned categorization tasks across a variety of conditions and architectures to identify plausible interaction architectures. The simulation results support the hypothesis that one-way interaction between the systems occurs such that the explicit system "bootstraps" learning early on in the procedural system. Thus, the procedural system initially learns a suboptimal strategy employed by the explicit system and later refines its strategy. This bootstrapping could be from cortical-striatal projections that originate in premotor or motor regions of cortex, or possibly by the explicit system's control of motor responses through basal ganglia-mediated loops.

  16. Age effects on explicit and implicit memory

    PubMed Central

    Ward, Emma V.; Berry, Christopher J.; Shanks, David R.

    2013-01-01

    It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed. PMID:24065942

  17. Mountain hydrology of the western United States

    USGS Publications Warehouse

    Bales, Roger C.; Molotch, Noah P.; Painter, Thomas H; Dettinger, Michael D.; Rice, Robert; Dozier, Jeff

    2006-01-01

    Climate change and climate variability, population growth, and land use change drive the need for new hydrologic knowledge and understanding. In the mountainous West and other similar areas worldwide, three pressing hydrologic needs stand out: first, to better understand the processes controlling the partitioning of energy and water fluxes within and out from these systems; second, to better understand feedbacks between hydrological fluxes and biogeochemical and ecological processes; and, third, to enhance our physical and empirical understanding with integrated measurement strategies and information systems. We envision an integrative approach to monitoring, modeling, and sensing the mountain environment that will improve understanding and prediction of hydrologic fluxes and processes. Here extensive monitoring of energy fluxes and hydrologic states are needed to supplement existing measurements, which are largely limited to streamflow and snow water equivalent. Ground‐based observing systems must be explicitly designed for integration with remotely sensed data and for scaling up to basins and whole ranges.

  18. Combining Model-driven and Schema-based Program Synthesis

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Whittle, John

    2004-01-01

    We describe ongoing work which aims to extend the schema-based program synthesis paradigm with explicit models. In this context, schemas can be considered as model-to-model transformations. The combination of schemas with explicit models offers a number of advantages, namely, that building synthesis systems becomes much easier since the models can be used in verification and in adaptation of the synthesis systems. We illustrate our approach using an example from signal processing.

  19. Robust model predictive control for optimal continuous drug administration.

    PubMed

    Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos

    2014-10-01

    In this paper, model predictive control (MPC) technology is used to tackle the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma concentrations are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in the presence of modelling errors and inaccurate measurements. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
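    The essence of constraint-aware dosing can be sketched with a toy one-compartment model and a brute-force receding-horizon search; all numbers are illustrative, and the paper itself uses a 7-compartment PBPK model with a proper constrained optimizer:

    ```python
    from itertools import product

    # Toy receding-horizon dosing sketch (hypothetical parameters): a
    # one-compartment plasma model x+ = a*x + b*u, with a hard toxicity
    # bound x <= MTC enforced by discarding violating input sequences.
    a, b = 0.9, 0.5
    MTC, target = 4.0, 3.0
    levels = [0.0, 0.5, 1.0, 1.5, 2.0]   # admissible dose rates
    H = 3                                 # prediction horizon

    def mpc_step(x):
        best_u, best_cost = 0.0, float("inf")
        for seq in product(levels, repeat=H):
            xs, cost, feasible = x, 0.0, True
            for u in seq:
                xs = a * xs + b * u
                if xs > MTC:              # predicted toxicity: reject sequence
                    feasible = False
                    break
                cost += (xs - target) ** 2
            if feasible and cost < best_cost:
                best_cost, best_u = cost, seq[0]   # apply only the first move
        return best_u

    x, traj = 0.0, []
    for _ in range(30):
        x = a * x + b * mpc_step(x)
        traj.append(x)
    ```

    Because every candidate sequence that would breach the MTC is discarded before costing, the applied trajectory can never exceed the toxicity bound while still being driven toward the therapeutic target, which is the constraint-handling property the abstract highlights.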

  20. Complete integrability of information processing by biochemical reactions

    PubMed Central

    Agliari, Elena; Barra, Adriano; Dello Schiavo, Lorenzo; Moro, Antonio

    2016-01-01

    Statistical mechanics provides an effective framework to investigate information processing in biochemical reactions. Within such framework far-reaching analogies are established among (anti-) cooperative collective behaviors in chemical kinetics, (anti-)ferromagnetic spin models in statistical mechanics and operational amplifiers/flip-flops in cybernetics. The underlying modeling – based on spin systems – has been proved to be accurate for a wide class of systems matching classical (e.g. Michaelis–Menten, Hill, Adair) scenarios in the infinite-size approximation. However, the current research in biochemical information processing has been focusing on systems involving a relatively small number of units, where this approximation is no longer valid. Here we show that the whole statistical mechanical description of reaction kinetics can be re-formulated via a mechanical analogy – based on completely integrable hydrodynamic-type systems of PDEs – which provides explicit finite-size solutions, matching recently investigated phenomena (e.g. noise-induced cooperativity, stochastic bi-stability, quorum sensing). The resulting picture, successfully tested against a broad spectrum of data, constitutes a neat rationale for a numerically effective and theoretically consistent description of collective behaviors in biochemical reactions. PMID:27812018

  1. Complete integrability of information processing by biochemical reactions

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Dello Schiavo, Lorenzo; Moro, Antonio

    2016-11-01

    Statistical mechanics provides an effective framework to investigate information processing in biochemical reactions. Within such framework far-reaching analogies are established among (anti-) cooperative collective behaviors in chemical kinetics, (anti-)ferromagnetic spin models in statistical mechanics and operational amplifiers/flip-flops in cybernetics. The underlying modeling - based on spin systems - has been proved to be accurate for a wide class of systems matching classical (e.g. Michaelis-Menten, Hill, Adair) scenarios in the infinite-size approximation. However, the current research in biochemical information processing has been focusing on systems involving a relatively small number of units, where this approximation is no longer valid. Here we show that the whole statistical mechanical description of reaction kinetics can be re-formulated via a mechanical analogy - based on completely integrable hydrodynamic-type systems of PDEs - which provides explicit finite-size solutions, matching recently investigated phenomena (e.g. noise-induced cooperativity, stochastic bi-stability, quorum sensing). The resulting picture, successfully tested against a broad spectrum of data, constitutes a neat rationale for a numerically effective and theoretically consistent description of collective behaviors in biochemical reactions.

  2. Complete integrability of information processing by biochemical reactions.

    PubMed

    Agliari, Elena; Barra, Adriano; Dello Schiavo, Lorenzo; Moro, Antonio

    2016-11-04

    Statistical mechanics provides an effective framework to investigate information processing in biochemical reactions. Within such framework far-reaching analogies are established among (anti-) cooperative collective behaviors in chemical kinetics, (anti-)ferromagnetic spin models in statistical mechanics and operational amplifiers/flip-flops in cybernetics. The underlying modeling - based on spin systems - has been proved to be accurate for a wide class of systems matching classical (e.g. Michaelis-Menten, Hill, Adair) scenarios in the infinite-size approximation. However, the current research in biochemical information processing has been focusing on systems involving a relatively small number of units, where this approximation is no longer valid. Here we show that the whole statistical mechanical description of reaction kinetics can be re-formulated via a mechanical analogy - based on completely integrable hydrodynamic-type systems of PDEs - which provides explicit finite-size solutions, matching recently investigated phenomena (e.g. noise-induced cooperativity, stochastic bi-stability, quorum sensing). The resulting picture, successfully tested against a broad spectrum of data, constitutes a neat rationale for a numerically effective and theoretically consistent description of collective behaviors in biochemical reactions.

  3. Axisymmetric whole pin life modelling of advanced gas-cooled reactor nuclear fuel

    NASA Astrophysics Data System (ADS)

    Mella, R.; Wenman, M. R.

    2013-06-01

    Thermo-mechanical contributions to pellet-clad interaction (PCI) in advanced gas-cooled reactors (AGRs) are modelled in the ABAQUS finite element (FE) code. User-supplied subroutines permit the modelling of the non-linear behaviour of AGR fuel through life. Through utilisation of ABAQUS's well-developed pre- and post-processing ability, the behaviour of the axially constrained steel-clad fuel was modelled. The 2D axisymmetric model includes thermo-mechanical behaviour of the fuel with time- and condition-dependent material properties. Pellet-cladding gap dynamics and thermal behaviour are also modelled. The model treats heat-up as a fully coupled temperature-displacement study. Dwell time and direct power cycling were applied to model the impact of online refuelling, a key feature of the AGR. The model includes the visco-plastic behaviour of the fuel under the stress and irradiation conditions within an AGR core and a non-linear heat transfer model. A multiscale fission gas release model is applied to compute pin pressure; this model is coupled to the PCI gap model through an explicit fission gas inventory code. Whole-pin, whole-life models are able to show the impact of the fuel on all segments of cladding, including weld end caps and cladding-pellet locking mechanisms (unique to AGR fuel). The development of this model in a commercial FE package shows that a potentially verified and future-proof fuel performance code can be created and used. The usability of an FE-based fuel performance code would be an enhancement over past codes: pre- and post-processors have lowered the entry barrier for developing a fuel performance model able to handle complicated systems. Typical runtimes for a 5-year axisymmetric model are less than one hour on a single-core workstation. The current model implements: non-linear fuel thermal behaviour, including a complex description of heat flow in the fuel coupled with a variety of different FE and finite difference models; non-linear mechanical behaviour of the fuel and cladding, including fuel creep and swelling and cladding creep and plasticity, each with dependencies on a variety of different properties; a fission gas release model which takes inputs from first-principles calculations; explicitly integrated inventory calculations performed in a coupled manner; and the freedom to model steady-state and transient behaviour using implicit time integration. The whole-pin geometry is considered over an entire typical fuel life. Examination of normal operation and a subsequent transient, chosen for software demonstration purposes, showed that ABAQUS may be a sufficiently flexible platform on which to develop a complete and verified fuel performance code. The importance and effectiveness of the geometry of the fuel spacer pellets was characterised. The fuel's performance under normal conditions (high friction, no power spikes) would not suggest serious degradation of the cladding during fuel life. Large plastic strains were found when pellet bonding was strong; these would appear at all pellet-cladding triple points and all pellet radial crack and cladding interfaces, showing a possible axial direction for cracks forming from ductility exhaustion.

  4. Substructure based modeling of nickel single crystals cycled at low plastic strain amplitudes

    NASA Astrophysics Data System (ADS)

    Zhou, Dong

    In this dissertation a meso-scale, substructure-based, composite single crystal model is fully developed, from a simple uniaxial model to a 3-D finite element method (FEM) model with explicit substructures and, further, with substructure evolution parameters, to simulate the completely reversed, strain-controlled, low plastic strain amplitude cyclic deformation of nickel single crystals. Rate-dependent viscoplasticity and Armstrong-Frederick type kinematic hardening rules are applied to substructures on slip systems in the model to describe the kinematic hardening behavior of crystals. Three explicit substructure components are assumed in the composite single crystal model, namely "loop patches" and "channels", which are aligned in parallel in a "vein matrix", and persistent slip bands (PSBs) connected in series with the vein matrix. A magnetic domain rotation model is presented to describe the reverse magnetostriction of single crystal nickel. Kinematic hardening parameters are obtained by fitting responses to experimental data in the uniaxial model, and the validity of the uniaxial assumption is verified in the 3-D FEM model with explicit substructures. With information gathered from experiments, all control parameters in the model, including hardening parameters, the volume fractions of loop patches and PSBs, the variation of Young's modulus, etc., are correlated to cumulative plastic strain and/or plastic strain amplitude, and the whole cyclic deformation history of single crystal nickel at low plastic strain amplitudes is simulated in the uniaxial model. These parameters are then implanted in the 3-D FEM model to simulate the formation of PSB bands. A resolved shear stress criterion is set to trigger the formation of PSBs, and the stress perturbation in the specimen is obtained from several elements assigned PSB material properties a priori. Displacement increment and plastic strain amplitude control, together with overall stress-strain monitoring and output, are carried out in the ABAQUS user subroutines DISP and URDFIL, respectively, while the constitutive formulations of the FEM model are coded and implemented in UMAT. The results of the simulations are compared to experiments. This model verified the validity of Winter's two-phase model and Taylor's uniform stress assumption, explored substructure evolution and "intrinsic" behavior in substructures, and successfully simulated the process of PSB band formation and propagation.

  5. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. 
Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
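    The transpose pattern, bringing each solve direction into the contiguous axis before running a batched line solver, can be sketched serially with numpy (an illustration of the data movement, not the CM-5/CMSSL implementation):

    ```python
    import numpy as np

    # Batched Thomas algorithm: solves independent tridiagonal systems
    # along the LAST axis of d, with constant sub/main/super diagonals.
    def thomas_last_axis(a, b, c, d):
        n = d.shape[-1]
        bp = np.full(d.shape, float(b))
        dp = d.astype(float).copy()
        for i in range(1, n):                       # forward elimination
            w = a / bp[..., i - 1]
            bp[..., i] -= w * c
            dp[..., i] -= w * dp[..., i - 1]
        x = np.empty_like(dp)
        x[..., -1] = dp[..., -1] / bp[..., -1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[..., i] = (dp[..., i] - c * x[..., i + 1]) / bp[..., i]
        return x

    # "Transpose" step: move the solve direction into the contiguous axis,
    # solve all lines at once, then move it back.
    def solve_lines(d, axis, a=-1.0, b=4.0, c=-1.0):
        d = np.moveaxis(d, axis, -1)
        x = thomas_last_axis(a, b, c, d)
        return np.moveaxis(x, -1, axis)
    ```

    Checking a recovered line against the dense tridiagonal matrix confirms the batched solve; on the CM-5 the `moveaxis` step corresponds to the global transpose, while the per-line elimination itself requires no inter-processor communication.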

  6. Multi-flexible-body analysis for application to wind turbine control design

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon

    The objective of the present research is to build a theoretical and computational framework for the aeroelastic analysis of flexible rotating systems, with specific application to wind turbine control design. The methodology is based on the integration of Kane's approach for the analysis of the multi-rigid-body subsystem with a mixed finite element method for the analysis of the flexible-body subsystem. The combined analysis is then strongly coupled with an aerodynamic model based on blade element momentum theory for the inflow. The unified framework is represented symbolically as a set of nonlinear ordinary differential equations with time-variant, periodic coefficients, which describe the aeroelastic behavior of the whole system. Because of this symbolic form, the framework can be applied directly to control design. Solution procedures for the equations are presented for nonlinear simulation, the periodic steady-state solution, and Floquet stability of the system linearized about the steady-state solution. Finally, the linear periodic system equations can be obtained with both system and control matrices as explicit functions of time, directly applicable to control design. The structural model is validated by comparing its results with those of existing software, some of it commercial. The stability of the system linearized about the periodic steady-state solution differs from that obtained about a constant steady-state solution, which has been the conventional approach in wind turbine dynamics. Parametric studies are performed on a wind turbine model with various pitch angles, precone angles, and rotor speeds. Their effects on wind turbine aeroelastic stability, in combination with composite materials, are investigated. Finally, it is suggested that aeroelastic stability analysis and control design for the whole system are crucial to the design of wind turbines, and the present research breaks new ground in the ability to treat this issue.
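
    The Floquet analysis mentioned above reduces, for a linear periodic system x' = A(t)x, to integrating the fundamental solution over one period and examining the eigenvalues (Floquet multipliers) of the resulting monodromy matrix. A hedged sketch, with a hypothetical 2x2 damped Mathieu-type system standing in for the linearized wind turbine equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # hypothetical periodic system matrix (a damped Mathieu-type oscillator),
    # standing in for the linearized wind turbine dynamics
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.2*np.cos(t)), -0.1]])

def monodromy(A, T, n=2):
    # integrate each unit initial condition over one period T to assemble
    # the monodromy matrix Phi(T)
    Phi = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, T), e,
                        rtol=1e-9, atol=1e-12)
        Phi[:, j] = sol.y[:, -1]
    return Phi

T = 2*np.pi
multipliers = np.linalg.eigvals(monodromy(A, T))
stable = bool(np.all(np.abs(multipliers) < 1.0))  # stable iff all inside unit circle
```

    This is the stability test about a periodic steady state; linearizing about a constant steady state instead would replace A(t) by a constant matrix, which is why the two analyses can disagree, as the abstract notes.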

  7. High Performance Programming Using Explicit Shared Memory Model on Cray T3D1

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors with CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented and illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times lower than with data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM SP1, etc. is presented.

  8. Hydro-geophysical observations integration in numerical model: case study in Mediterranean karstic unsaturated zone (Larzac, france)

    NASA Astrophysics Data System (ADS)

    Champollion, Cédric; Fores, Benjamin; Le Moigne, Nicolas; Chéry, Jean

    2016-04-01

    Karstic hydro-systems are highly non-linear and heterogeneous, yet they are one of the main water resources in the Mediterranean area. Neither local measurements in boreholes nor analysis at the spring can capture the variability of water storage. In recent years, ground-based geophysical measurements (such as gravity, electrical resistivity, or seismological data) have made it possible to follow water storage in heterogeneous hydrosystems at a scale intermediate between boreholes and the basin. Beyond classical rigorous monitoring, the integration of geophysical data into hydrological numerical models is needed for both process interpretation and quantification. A karstic geophysical observatory (GEK: Géodésie de l'Environnement Karstique, OSU OREME, SNO H+) has been operating for several years in the Mediterranean area in the south of France. The observatory sits on more than 250 m of karstified dolomite, with an unsaturated zone about 150 m thick. At the observatory, water levels in boreholes, evapotranspiration, and rainfall are classical hydro-meteorological observations, complemented by continuous gravity, resistivity, and seismological measurements. The main objective of the study is the modelling of the whole observation dataset with an explicit one-dimensional unsaturated numerical model. The Hydrus software is used for explicit modelling of water storage and transfer, and links the different observations (geophysics, water level, evapotranspiration) to the water saturation. Unknown hydrological parameters (permeability, porosity) are retrieved by stochastic inversion. The scales of investigation of the different observations are discussed in light of the modelling results. A sensitivity study of the measurements with respect to the model is performed, and the key hydro-geological processes of the site are presented.

  9. One-dimensional model and solutions for creeping gas flows in the approximation of uniform pressure

    NASA Astrophysics Data System (ADS)

    Vedernikov, A.; Balapanov, D.

    2016-11-01

    A model, along with analytical and numerical solutions, is presented to describe a wide variety of one-dimensional slow flows of compressible heat-conductive fluids. The model is based on the approximation of uniform pressure, valid for flows in which the sound propagation time is much shorter than the duration of any meaningful density variation in the system. The energy balance is described by the heat equation, which is solved independently. This approach enables an explicit solution for the fluid velocity to be obtained. Interfacial and volumetric heat and mass sources, as well as boundary motion, are considered as possible sources of density variation in the fluid. A set of particular cases is analyzed for different motion sources in planar, axial, and central symmetries in the quasistationary limit of heat conduction (i.e., for large Fourier number). The analytical solutions are in excellent agreement with corresponding numerical solutions of the full system of Navier-Stokes equations. This work deals with an ideal gas, but the approach is also valid for other equations of state.
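
    The key step, the explicit velocity, can be illustrated in one planar dimension. For a uniform-pressure ideal gas, rho = p/(RT), so continuity gives rho*v(x) = -int_0^x d(rho)/dt dx' with v(0) = 0. A minimal numerical sketch; the gas constant, pressure, and heating law below are assumed for illustration only:

```python
import numpy as np

R = 287.0        # specific gas constant of air, J/(kg K) -- assumed
p = 101325.0     # spatially uniform pressure, Pa -- assumed

def velocity(x, T, dTdt):
    # explicit 1D velocity for a uniform-pressure ideal gas, with v(0) = 0:
    #   rho = p/(R T),   rho * v(x) = -int_0^x d(rho)/dt dx'
    rho = p / (R * T)
    drho_dt = -p * dTdt / (R * T**2)
    # trapezoidal cumulative integral of d(rho)/dt from 0 to x
    flux = -np.concatenate(
        ([0.0], np.cumsum(0.5*(drho_dt[1:] + drho_dt[:-1])*np.diff(x))))
    return flux / rho

# uniform heating T(t) = T0 + alpha*t has the analytic solution v = x * T'/T
x = np.linspace(0.0, 1.0, 201)
T0, alpha = 300.0, 5.0
v = velocity(x, np.full_like(x, T0), np.full_like(x, alpha))
v_exact = x * alpha / T0   # thermal-expansion velocity field
```

    The numerical velocity reproduces the analytic thermal-expansion profile, the kind of agreement the abstract reports against full Navier-Stokes solutions.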

  10. Spatially explicit watershed modeling: tracking water, mercury and nitrogen in multiple systems under diverse conditions

    EPA Science Inventory

    Environmental decision-making and the influences of various stressors, such as landscape and climate changes on water quantity and quality, requires the application of environmental modeling. Spatially explicit environmental and watershed-scale models using GIS as a base framewor...

  11. Class of self-limiting growth models in the presence of nonlinear diffusion

    NASA Astrophysics Data System (ADS)

    Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar

    2002-06-01

    The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial populations in culture, on the other hand, is described by kinetics that are explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial fronts for these models.
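
    A minimal numerical illustration of front propagation with explicitly time-dependent, self-limiting kinetics. This uses a Fisher-type logistic source with an assumed oscillating growth rate, not the specific models of the paper:

```python
import numpy as np

def simulate_front(nx=200, nt=2000, dx=0.5, dt=0.05, D=1.0):
    # u_t = D u_xx + r(t) u (1 - u): self-limiting (logistic) growth with an
    # explicitly time-dependent rate r(t) -- the form of r(t) is assumed here
    r = lambda t: 1.0 + 0.5*np.sin(0.1*t)
    u = np.zeros(nx)
    u[:10] = 1.0                              # localized initial population
    for n in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
        u = u + dt*(D*lap + r(n*dt)*u*(1.0 - u))
        u[0], u[-1] = u[1], u[-2]             # no-flux boundaries
    return u
```

    With these parameters the explicit scheme is stable (D*dt/dx**2 = 0.2) and the front sweeps across the domain, saturating the population at the carrying capacity.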

  12. Theory of point contact spectroscopy in correlated materials

    DOE PAGES

    Lee, Wei-Cheng; Park, Wan Kyu; Arham, Hamood Z.; ...

    2015-01-05

    Here, we developed a microscopic theory for the point-contact conductance between a metallic electrode and a strongly correlated material using the nonequilibrium Schwinger-Kadanoff-Baym-Keldysh formalism. We explicitly show that, in the classical limit (contact size smaller than the scattering length of the system), the microscopic model can be reduced to an effective model with transfer matrix elements that conserve in-plane momentum. We found that the conductance dI/dV is proportional to the effective density of states, that is, the single-particle spectral function A(ω = eV) integrated over the whole Brillouin zone. From this conclusion, we are able to establish the conditions under which a non-Fermi liquid metal exhibits a zero-bias peak in the conductance. Lastly, this finding is discussed in the context of recent point-contact spectroscopy on the iron pnictides and chalcogenides, which has exhibited a zero-bias conductance peak.

  13. Two dimensional numerical prediction of deflagration-to-detonation transition in porous energetic materials.

    PubMed

    Narin, B; Ozyörük, Y; Ulas, A

    2014-05-30

    This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic solid explosives. The model is fully two-phase, based on a highly coupled system of partial differential equations comprising the basic flow conservation equations and constitutive relations drawn from one-dimensional studies in the open literature. The whole system is solved using an optimized high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique to augment the central differencing and prevent excessive dispersion. The source terms describing particle-gas interactions in terms of momentum and energy transfer make the equation system quite stiff, and hence difficult to integrate explicitly. To ease the difficulty, a time-split approach is used, allowing larger time steps. In the paper, the physical model for the source terms of the equation system is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects, including pore collapse, sublimation, pyrolysis, etc., are not taken into account for ignition and growth; a basic temperature switch is applied in the calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code, and the results are in good agreement with those of commercial software. Copyright © 2014 Elsevier B.V. All rights reserved.
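
    The time-split treatment of stiff interphase source terms can be sketched generically. In this hedged example (not the paper's equations), a scalar is advected explicitly under a CFL limit while a stiff relaxation source dq/dt = -k(q - q_eq) is integrated exactly in split half-steps, so the overall time step is not restricted by 1/k:

```python
import numpy as np

def step_transport(q, c, dt, dx):
    # explicit first-order upwind advection (c > 0), periodic boundaries
    return q - c*dt/dx * (q - np.roll(q, 1))

def step_source(q, q_eq, k, dt):
    # stiff relaxation dq/dt = -k (q - q_eq), integrated exactly over dt
    return q_eq + (q - q_eq)*np.exp(-k*dt)

def strang_step(q, c, k, q_eq, dt, dx):
    # Strang splitting: half source step, full transport step, half source step
    q = step_source(q, q_eq, k, 0.5*dt)
    q = step_transport(q, c, dt, dx)
    return step_source(q, q_eq, k, 0.5*dt)
```

    Here dt obeys only the advective CFL condition c*dt/dx <= 1; the source can be arbitrarily stiff without destabilizing the step.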

  14. Using Approximate Bayesian Computation to Probe Multiple Transiting Planet Systems

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.

    2015-08-01

    The large number of multiple transiting planet systems (MTPS) uncovered with Kepler suggests a population of well-aligned planetary systems. Previously, the distribution of transit duration ratios in MTPSs has been used to place constraints on the distributions of mutual orbital inclinations and orbital eccentricities in these systems. However, degeneracies with the underlying number of planets in these systems pose added challenges and make explicit likelihood functions intractable. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC draws proposals from a prior on the population parameters and produces synthetic datasets via a physically motivated model. Samples are accepted or rejected based on how closely they reproduce the actual observed dataset, to some tolerance. The accepted samples then form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We will demonstrate the utility of ABC in exoplanet populations by presenting new constraints on the mutual inclination and eccentricity distributions in the Kepler MTPSs. We will also introduce Simple-ABC, a new open-source Python package designed for ease of use and rapid specification of general models, suitable for a wide variety of applications in both exoplanet science and astrophysics as a whole.
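
    Rejection ABC as described above fits in a few lines. The sketch below uses a deliberately simple toy problem (inferring a Gaussian mean), not the transit-duration-ratio model, and is not based on the Simple-ABC package; the prior, tolerance, and summary statistic are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=20000):
    # basic rejection ABC: keep prior draws whose synthetic data fall
    # within eps of the observed summary statistic
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# toy model: summary statistic is the sample mean of 200 draws from N(mu, 1)
true_mu = 0.7
obs = rng.normal(true_mu, 1.0, size=200).mean()
posterior = abc_rejection(
    observed=obs,
    simulate=lambda mu: rng.normal(mu, 1.0, size=200).mean(),
    prior_sample=lambda: rng.uniform(-2.0, 2.0),
    distance=lambda a, b: abs(a - b),
    eps=0.05,
)
```

    The accepted `posterior` draws approximate the true posterior; shrinking `eps` tightens the approximation at the cost of a lower acceptance rate.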

  15. Power Analysis of Artificial Selection Experiments Using Efficient Whole Genome Simulation of Quantitative Traits

    PubMed Central

    Kessner, Darren; Novembre, John

    2015-01-01

    Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50–100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates. PMID:25672748
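
    The evolve-and-resequence design can be caricatured in a few lines: a quantitative trait is the sum of allele counts at several QTL plus noise, the top fraction of individuals is selected each generation, and the allele frequencies respond. This sketch assumes free recombination (genotypes redrawn from allele frequencies each generation) and equal locus effects, which is far simpler than the whole-genome simulation framework in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def truncation_selection(n_ind=500, n_loci=10, n_gen=20, top_frac=0.2):
    # diploid genotypes: 0/1/2 copies of the '+' allele per locus, start at 0.5
    freqs = np.full(n_loci, 0.5)
    for _ in range(n_gen):
        geno = rng.binomial(2, freqs, size=(n_ind, n_loci))
        trait = geno.sum(axis=1) + rng.normal(0.0, 1.0, n_ind)  # env. noise
        top = geno[np.argsort(trait)[-int(top_frac*n_ind):]]    # select extremes
        freqs = top.mean(axis=0) / 2.0                          # next generation
    return freqs
```

    Under this strong, sustained selection the '+' allele frequencies climb toward fixation within the 20 generations noted in the abstract; restricting recombination or adding linked variation, as the paper does, slows and blurs that response.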

  16. A Single-System Model Predicts Recognition Memory and Repetition Priming in Amnesia

    PubMed Central

    Kessels, Roy P.C.; Wester, Arie J.; Shanks, David R.

    2014-01-01

    We challenge the claim that there are distinct neural systems for explicit and implicit memory by demonstrating that a formal single-system model predicts the pattern of recognition memory (explicit) and repetition priming (implicit) in amnesia. In the current investigation, human participants with amnesia categorized pictures of objects at study and then, at test, identified fragmented versions of studied (old) and nonstudied (new) objects (providing a measure of priming), and made a recognition memory judgment (old vs new) for each object. Numerous results in the amnesic patients were predicted in advance by the single-system model, as follows: (1) deficits in recognition memory and priming were evident relative to a control group; (2) items judged as old were identified at greater levels of fragmentation than items judged new, regardless of whether the items were actually old or new; and (3) the magnitude of the priming effect (the identification advantage for old vs new items) was greater for items judged old than for items judged new. Model evidence measures also favored the single-system model over two formal multiple-systems models. The findings support the single-system model, which explains the pattern of recognition and priming in amnesia primarily as a reduction in the strength of a single dimension of memory strength, rather than as a selective deficit of an explicit memory system. PMID:25122896

  17. Investigating the predictive validity of implicit and explicit measures of motivation on condom use, physical activity and healthy eating.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2012-01-01

    The literature on health-related behaviours and motivation is replete with research involving explicit processes and their relations with intentions and behaviour. Recently, interest has been focused on the impact of implicit processes and measures on health-related behaviours. Dual-systems models have been proposed to provide a framework for understanding the effects of explicit or deliberative and implicit or impulsive processes on health behaviours. Informed by a dual-systems approach and self-determination theory, the aim of this study was to test the effects of implicit and explicit motivation on three health-related behaviours in a sample of undergraduate students (N = 162). Implicit motives were hypothesised to predict behaviour independent of intentions while explicit motives would be mediated by intentions. Regression analyses indicated that implicit motivation predicted physical activity behaviour only. Across all behaviours, intention mediated the effects of explicit motivational variables from self-determination theory. This study provides limited support for dual-systems models and the role of implicit motivation in the prediction of health-related behaviour. Suggestions for future research into the role of implicit processes in motivation are outlined.

  18. Relationships of Teachers' Language and Explicit Vocabulary Instruction to Students' Vocabulary Growth in Kindergarten

    ERIC Educational Resources Information Center

    Bowne, Jocelyn Bonnes; Yoshikawa, Hirokazu; Snow, Catherine E.

    2017-01-01

    This study evaluates the relationships between aspects of Chilean teachers' explicit vocabulary instruction and students' vocabulary development in kindergarten. Classroom videotapes of whole-class instruction gathered during a randomized experimental evaluation of a coaching-based professional development program were analyzed. The amount of…

  19. Systems Modeling at Multiple Levels of Regulation: Linking Systems and Genetic Networks to Spatially Explicit Plant Populations

    PubMed Central

    Kitchen, James L.; Allaby, Robin G.

    2013-01-01

    Selection and adaptation of individuals to their underlying environments are highly dynamical processes, encompassing interactions between the individual and its seasonally changing environment, synergistic or antagonistic interactions between individuals and interactions amongst the regulatory genes within the individual. Plants are useful organisms to study within systems modeling because their sedentary nature simplifies interactions between individuals and the environment, and many important plant processes such as germination or flowering are dependent on annual cycles which can be disrupted by climate behavior. Sedentism makes plants relevant candidates for spatially explicit modeling that is tied in with dynamical environments. We propose that in order to fully understand the complexities behind plant adaptation, a system that couples aspects from systems biology with population and landscape genetics is required. A suitable system could be represented by spatially explicit individual-based models where the virtual individuals are located within time-variable heterogeneous environments and contain mutable regulatory gene networks. These networks could directly interact with the environment, and should provide a useful approach to studying plant adaptation. PMID:27137364

  20. Solvent Reaction Field Potential inside an Uncharged Globular Protein: A Bridge between Implicit and Explicit Solvent Models?

    PubMed Central

    Baker, Nathan A.; McCammon, J. Andrew

    2008-01-01

    The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 kBT/ec (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å³, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, the correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and the correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217

  1. Solvent reaction field potential inside an uncharged globular protein: A bridge between implicit and explicit solvent models?

    NASA Astrophysics Data System (ADS)

    Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew

    2007-10-01

    The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 kBT/ec (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å³, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, the correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and the correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.

  2. Effect of explicit dimension instruction on speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Yi, Han-Gyol; Smayda, Kirsten E.; Maddox, W. Todd

    2015-01-01

    Learning non-native speech categories is often considered a challenging task in adulthood. This difficulty is driven by cross-language differences in the weighting of critical auditory dimensions that differentiate speech categories. For example, previous studies have shown that differentiating Mandarin tonal categories requires attending to dimensions related to pitch height and direction. Relative to native speakers of Mandarin, the pitch direction dimension is under-weighted by native English speakers. In the current study, we examined the effect of explicit instruction (dimension instruction) on native English speakers' Mandarin tone category learning within the framework of a dual-learning systems (DLS) model. This model predicts that successful speech category learning is initially mediated by an explicit, reflective learning system that frequently utilizes unidimensional rules, with an eventual switch to a more implicit, reflexive learning system that utilizes multidimensional rules. Participants were explicitly instructed to focus on and/or ignore the pitch height dimension or the pitch direction dimension, or were given no explicit prime. Our results show that instruction directing participants to focus on pitch direction, and instruction diverting attention away from pitch height, resulted in enhanced tone categorization. Computational modeling of participant responses suggested that instruction related to pitch direction led to faster and more frequent use of multidimensional reflexive strategies, and enhanced perceptual selectivity along the previously under-weighted pitch direction dimension. PMID:26542400

  3. The explicit and implicit dance in psychoanalytic change.

    PubMed

    Fosshage, James L

    2004-02-01

    How the implicit/non-declarative and explicit/declarative cognitive domains interact is centrally important in considering how change is effected within the psychoanalytic arena. Stern et al. (1998) declare that long-lasting change occurs in the domain of implicit relational knowledge. In the view of this author, the implicit and explicit domains are intricately intertwined in an interactive dance within the psychoanalytic process. The author holds that a spirit of inquiry (Lichtenberg, Lachmann & Fosshage 2002) serves as the foundation of the psychoanalytic process. Analyst and patient strive to explore, understand and communicate and, thereby, create a 'spirit' of interaction that contributes, through gradual incremental learning, to new implicit relational knowledge. This spirit, as part of the implicit relational interaction, is a cornerstone of the analytic relationship. The 'inquiry' more directly brings explicit/declarative processing to the foreground in the joint attempt to explore and understand. The spirit of inquiry in the psychoanalytic arena highlights both the autobiographical scenarios of the explicit memory system and the mental models of the implicit memory system as each contributes to a sense of self, other, and self with other. This process facilitates the extrication and suspension of old models, so that new models based on current relational experience can be gradually integrated into both memory systems for lasting change.

  4. Opinion Dynamics with Disagreement and Modulated Information

    NASA Astrophysics Data System (ADS)

    Sîrbu, Alina; Loreto, Vittorio; Servedio, Vito D. P.; Tria, Francesca

    2013-04-01

    Opinion dynamics concerns the social processes through which populations or groups of individuals agree or disagree on specific issues. As such, modelling opinion dynamics represents an important research area that has been progressively acquiring relevance in many different domains. Existing approaches have mostly represented opinions through discrete binary or continuous variables, exploring a whole panoply of cases: e.g., independence, noise, external effects, multiple issues. In most of these cases the crucial ingredient is an attractive dynamics through which similar, or similar enough, agents get closer. Only rarely has the possibility of explicit disagreement been taken into account (i.e., the possibility of a repulsive interaction among individuals' opinions), and mostly for discrete or one-dimensional opinions, through the introduction of additional model parameters. Here we introduce a new model of opinion formation, which focuses on the interplay between the possibility of explicit disagreement, modulated in a self-consistent way by the overlap between the opinions of the interacting individuals, and the effect of external information on the system. Opinions are modelled as a vector of continuous variables related to multiple possible choices for an issue. The information can be modulated to promote multiple possible choices. Numerical results show that extreme information leads to segregation and has a limited effect on the population, while milder messages have greater success and a cohesive effect. Additionally, the initial condition plays an important role, with the population forming one or multiple clusters based on the initial average similarity between individuals, with a transition point depending on the number of opinion choices.
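
    The core interaction rule can be sketched as follows. This is an illustrative caricature, not the authors' exact model: vector opinions attract when their overlap (here, cosine similarity) is positive and repel when it is negative, with an assumed step size and clipping to keep components bounded:

```python
import numpy as np

rng = np.random.default_rng(2)

def interact(opinions, mu=0.1):
    # one pairwise interaction: agreement (positive overlap) attracts,
    # disagreement (negative overlap) repels -- rule assumed for illustration
    n = len(opinions)
    i, j = rng.choice(n, size=2, replace=False)
    a, b = opinions[i], opinions[j]
    overlap = a @ b / (np.linalg.norm(a)*np.linalg.norm(b) + 1e-12)
    sign = 1.0 if overlap > 0.0 else -1.0
    opinions[i] = np.clip(a + sign*mu*(b - a), -1.0, 1.0)
    opinions[j] = np.clip(b + sign*mu*(a - b), -1.0, 1.0)

# 100 agents; each opinion is a vector over 3 possible choices for an issue
opinions = rng.uniform(-1.0, 1.0, size=(100, 3))
for _ in range(20000):
    interact(opinions)
```

    Depending on the initial average similarity, repeated interactions drive the population toward one or several clusters, which is the transition the abstract describes.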

  5. Decision support systems in health economics.

    PubMed

    Quaglini, S; Dazzi, L; Stefanelli, M; Barosi, G; Marchetti, M

    1999-08-01

    This article describes a system that allows different health care professionals to build, use, and share decision support systems for resource allocation. The system deals with selected areas, namely the choice of diagnostic tests, therapy planning, and instrumentation purchase. Decision support is based on decision-analytic models incorporating an explicit knowledge representation of both the medical domain knowledge and economic evaluation theory. Application models are built on top of meta-models, which are used as guidelines for making both the cost and effectiveness components explicit. This approach improves the transparency and soundness of the collaborative decision-making process and facilitates the interpretation of results.

  6. Biomass and fire dynamics in a temperate forest-grassland mosaic: Integrating multi-species herbivory, climate, and fire with the FireBGCv2/GrazeBGC system

    Treesearch

    Robert A. Riggs; Robert E. Keane; Norm Cimon; Rachel Cook; Lisa Holsinger; John Cook; Timothy DelCurto; L.Scott Baggett; Donald Justice; David Powell; Martin Vavra; Bridgett Naylor

    2015-01-01

    Landscape fire succession models (LFSMs) predict spatially-explicit interactions between vegetation succession and disturbance, but these models have yet to fully integrate ungulate herbivory as a driver of their processes. We modified a complex LFSM, FireBGCv2, to include a multi-species herbivory module, GrazeBGC. The system is novel in that it explicitly...

  7. Chapter 15: Using System Dynamics to Model Industry's Developmental Response to Energy Policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian; Inman, Daniel; Newes, Emily

    In this chapter we explore the potential development of the biofuels industry using the Biomass Scenario Model (BSM), a system dynamics model developed at the National Renewable Energy Laboratory with the support of the U.S. Department of Energy. The BSM is designed to analyze the implications of policy for the development of the biofuels supply chain in the United States. It explicitly represents the behavior of decision makers such as farmers, investors, fueling station owners, and consumers. We analyze several illustrative case studies that explore a range of policies and discuss how incentives interact with individual parts of the supply chain as well as the industry as a whole. The BSM represents specific incentives that are intended to approximate policy in the form of selected laws and regulations. By characterizing the decision-making behaviors of economic actors within the supply chain that critically influence the adoption rate of new biofuels production technologies, and by demonstrating synergies among policies, we find that incentives with coordinated impacts on each major element of the supply chain catalyze net effects of decision-maker behavior such that the combined incentives are greater than the summed effects of individual incentives in isolation.

  8. Geophysical fluid dynamics: whence, whither and why?

    PubMed Central

    2016-01-01

    This article discusses the role of geophysical fluid dynamics (GFD) in understanding the natural environment, and in particular the dynamics of atmospheres and oceans on Earth and elsewhere. GFD, as usually understood, is a branch of the geosciences that deals with fluid dynamics and that, by tradition, seeks to extract the bare essence of a phenomenon, omitting detail where possible. The geosciences in general deal with complex interacting systems and in some ways resemble condensed matter physics or aspects of biology, where we seek explanations of phenomena at a higher level than simply directly calculating the interactions of all the constituent parts. That is, we try to develop theories or make simple models of the behaviour of the system as a whole. However, these days in many geophysical systems of interest, we can also obtain information for how the system behaves by almost direct numerical simulation from the governing equations. The numerical model itself then explicitly predicts the emergent phenomena—the Gulf Stream, for example—something that is still usually impossible in biology or condensed matter physics. Such simulations, as manifested, for example, in complicated general circulation models, have in some ways been extremely successful and one may reasonably now ask whether understanding a complex geophysical system is necessary for predicting it. In what follows we discuss such issues and the roles that GFD has played in the past and will play in the future. PMID:27616918

  9. Quantum dynamics in continuum for proton transport II: Variational solvent-solute interface.

    PubMed

    Chen, Duan; Chen, Zhan; Wei, Guo-Wei

    2012-01-01

    Proton transport plays an important role in biological energy transduction and sensory systems, and has therefore attracted much attention in biological science and biomedical engineering over the past few decades. The present work proposes a multiscale/multiphysics model for understanding the molecular mechanism of proton transport in transmembrane proteins, involving continuum, atomic, and quantum descriptions, assisted with the evolution, formation, and visualization of membrane channel surfaces. We describe proton dynamics quantum mechanically via a new density functional theory based on Boltzmann statistics, while implicitly modeling the numerous solvent molecules as a dielectric continuum to reduce the number of degrees of freedom. The density of all other ions in the solvent is assumed to obey the Boltzmann distribution in a dynamic manner. The impact of the protein's molecular structure and its charge polarization on proton transport is considered explicitly at the atomic scale. A variational solute-solvent interface is designed to separate the explicit molecule and implicit solvent regions. We formulate a total free-energy functional that puts the proton kinetic and potential energies, the free energy of all other ions, and the polar and nonpolar energies of the whole system on an equal footing. The variational principle is employed to derive coupled governing equations for the proton transport system. A generalized Laplace-Beltrami equation, a generalized Poisson-Boltzmann equation, and a generalized Kohn-Sham equation are obtained from the present variational framework. The variational solvent-solute interface is generated and visualized to facilitate the multiscale discrete/continuum/quantum descriptions. Theoretical formulations for the proton density and conductance are constructed based on fundamental laws of physics.
A number of mathematical algorithms, including the Dirichlet-to-Neumann mapping, the matched interface and boundary method, Gummel iteration, and Krylov subspace techniques, are utilized to implement the proposed model in a computationally efficient manner. The gramicidin A channel is used to validate the performance of the proposed proton transport model and demonstrate the efficiency of the proposed mathematical algorithms. The proton channel conductances are studied over a number of applied voltages and reference concentrations. A comparison with experimental data verifies the present model predictions and confirms the proposed model. Copyright © 2011 John Wiley & Sons, Ltd.
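
    The Gummel iteration named above decouples the governing equations by freezing one field while solving for the other. As a minimal sketch (not the paper's implementation), the following pure-Python code applies a Gummel-style fixed-point iteration to a 1D nonlinear Poisson-Boltzmann problem phi'' = sinh(phi), alternating between a frozen Boltzmann source term and a linear tridiagonal Poisson solve:

```python
import math

def solve_tridiag(a, b, c, d):
    """Thomas algorithm for a tridiagonal system; a = sub-, b = main-,
    c = super-diagonal (a[0] and c[-1] are unused), d = right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def gummel_pb(phi0=1.0, n=99, iters=30):
    """Gummel-style decoupled iteration for phi'' = sinh(phi) on (0, 1)
    with phi(0) = phi0, phi(1) = 0, discretized on n interior points:
    freeze the Boltzmann source at the current potential, solve the
    resulting linear Poisson problem, and repeat."""
    h = 1.0 / (n + 1)
    phi = [0.0] * n
    for _ in range(iters):
        rhs = [h * h * math.sinh(p) for p in phi]
        rhs[0] -= phi0                      # Dirichlet value at x = 0
        phi = solve_tridiag([1.0] * n, [-2.0] * n, [1.0] * n, rhs)
    return phi
```

    For this weakly nonlinear boundary-value problem the plain fixed-point iteration contracts quickly; stiffer couplings typically require under-relaxation between the decoupled solves.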

  10. Strong disorder real-space renormalization for the many-body-localized phase of random Majorana models

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile

    2018-03-01

    For the many-body-localized phase of random Majorana models, a general strong-disorder real-space renormalization procedure known as RSRG-X (Pekker et al 2014 Phys. Rev. X 4 011052) is described to produce the whole set of excited states via the iterative construction of the local integrals of motion (LIOMs). The RG rules are then explicitly derived for arbitrary quadratic Hamiltonians (free-fermion models) and for the Kitaev chain with local interactions involving even numbers of consecutive Majorana fermions. The emphasis is put on the advantages of the Majorana language over the usual quantum spin language for formulating unified RSRG-X rules.
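
    The iterative decimation at the heart of strong-disorder RG can be illustrated with a toy bond-decimation scheme for a random chain. The sketch below uses the generic second-order rule J' = J_L J_R / Omega for the strongest bond Omega; this is the standard schematic rule for random chains and only a stand-in for the model-specific Majorana rules derived in the paper:

```python
def sdrg_chain(couplings, keep=2):
    """Toy strong-disorder RG: repeatedly decimate the strongest bond
    Omega, replacing it and its two neighbours by J' = J_L * J_R / Omega
    (always <= Omega, so the RG flows toward stronger disorder).
    Edge decimations simply drop the bond and its single neighbour."""
    J = list(couplings)
    while len(J) > keep:
        i = max(range(len(J)), key=lambda k: J[k])
        omega = J[i]
        if i == 0:
            J = J[2:]
        elif i == len(J) - 1:
            J = J[:-2]
        else:
            new_j = J[i - 1] * J[i + 1] / omega   # second-order rule
            J = J[:i - 1] + [new_j] + J[i + 2:]
    return J
```

    Since every generated coupling is bounded by the decimated one, the maximum coupling never increases along the flow, which is the defining feature exploited by RSRG-X.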

  11. Towards an Understanding of Atmospheric Balance

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    During a 35 year period I published more than 30 peer-reviewed papers and technical reports concerning, in part or in whole, the topic of atmospheric balance. Most used normal modes, either implicitly or explicitly, as the appropriate diagnostic tool. This included examination of nonlinear balance in several different global and regional models using a variety of novel metrics, as well as development of nonlinear normal-mode initialization schemes for particular global and regional models. Recent studies also included the use of adjoint models and OSSEs to answer some questions regarding balance. I will summarize what I learned through those many works, but also present what I see as remaining issues to be considered or investigated.

  12. Quantum memories with zero-energy Majorana modes and experimental constraints

    NASA Astrophysics Data System (ADS)

    Ippoliti, Matteo; Rizzi, Matteo; Giovannetti, Vittorio; Mazza, Leonardo

    2016-06-01

    In this work we address the problem of realizing a reliable quantum memory based on zero-energy Majorana modes in the presence of experimental constraints on the operations aimed at recovering the information. In particular, we characterize the best recovery operation acting only on the zero-energy Majorana modes and the memory fidelity that can be achieved with it. In order to understand the effect of this restriction, we discuss two examples of noise models acting on the topological system and compare the amount of information that can be recovered by accessing either the whole system or the zero modes only, with particular attention to the scaling with the size of the system and the energy gap. We explicitly discuss the case of a thermal bosonic environment inducing a parity-preserving Markovian dynamics, in which the memory fidelity achievable via a read-out of the zero modes decays exponentially in time, independent of system size. We argue, however, that even in the presence of these experimental limitations, the Hamiltonian gap is still beneficial to the storage of information.

  13. Comparison of explicit finite element and mechanical simulation of the proximal femur during dynamic drop-tower testing.

    PubMed

    Ariza, O; Gilchrist, S; Widmer, R P; Guy, P; Ferguson, S J; Cripton, P A; Helgason, B

    2015-01-21

    Current screening techniques based on areal bone mineral density (aBMD) measurements are unable to identify the majority of people who sustain hip fractures. Biomechanical examination of such events may help determine what predisposes a hip to be susceptible to fracture. Recently, drop-tower simulations of in-vitro sideways falls have allowed the study of the mechanical response of the proximal human femur at realistic impact speeds. This technique has created an opportunity to validate explicit finite element (FE) models against dynamic test data. This study compared the outcomes of 15 human femoral specimens fractured using a drop tower with complementary specimen-specific explicit FE analysis. The correlation coefficient and root mean square error (RMSE) for the whole-bone stiffness comparison were moderate (R² = 0.3476 and 22.85%, respectively). No correlation was found between experimentally and computationally predicted peak force; however, the energy absorption comparison produced a moderate correlation and RMSE (R² = 0.4781 and 29.14%, respectively). By comparing predicted strain maps to high-speed video data, we demonstrated the ability of the FE models to detect vulnerable portions of the bones. Based on our observations, we conclude that there is a need to extend the current apparent-level material models for bone to cover higher strain rates than previously tested experimentally. Copyright © 2014 Elsevier Ltd. All rights reserved.
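
    The R² and RMSE comparisons reported above follow from an ordinary least-squares regression between measured and FE-predicted quantities. A minimal sketch (the paper's exact RMSE normalization is not stated; here RMSE is expressed as a percentage of the mean measured value, one common convention):

```python
def linear_fit_stats(x, y):
    """Least-squares line y ~ a*x + b, returning slope, intercept, R^2,
    and RMSE as a percentage of the mean observed value."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    rmse_pct = 100.0 * (ss_res / n) ** 0.5 / my
    return a, b, r2, rmse_pct
```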

  14. Web information retrieval based on ontology

    NASA Astrophysics Data System (ADS)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional IR model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One major drawback of traditional IR is that it typically retrieves information without an explicitly defined domain of interest to the user, so that much irrelevant information is returned, burdening the user with picking useful answers out of irrelevant results. To tackle this issue, many semantic-web information retrieval models have been proposed recently. The main advantage of the Semantic Web is that it enhances search mechanisms through the use of ontologies. In this paper, we present our approach to personalizing a web search engine based on an ontology; key techniques are also discussed. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
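
    Ontology-based semantic similarity can be made concrete with a small example. The sketch below implements the classic Wu-Palmer path-based measure on a hypothetical is-a hierarchy; the abstract does not specify which similarity measure the authors use, so this is illustrative only:

```python
# Hypothetical is-a hierarchy, child -> parent (the root has no entry).
toy_parent = {
    "search engine": "software", "database": "software",
    "software": "artifact", "ontology": "artifact",
}

def depth(node, parent):
    """Depth counted from the root, root = 1 (the usual Wu-Palmer convention)."""
    d = 1
    while node in parent:
        node = parent[node]
        d += 1
    return d

def lcs(a, b, parent):
    """Lowest common subsumer: first ancestor of b on a's ancestor chain."""
    seen = {a}
    n = a
    while n in parent:
        n = parent[n]
        seen.add(n)
    n = b
    while n not in seen:
        n = parent[n]
    return n

def wu_palmer(a, b, parent):
    """sim(a, b) = 2*depth(LCS) / (depth(a) + depth(b)), in (0, 1]."""
    c = lcs(a, b, parent)
    return 2.0 * depth(c, parent) / (depth(a, parent) + depth(b, parent))
```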

  15. Validating spatiotemporal predictions of an important pest of small grains.

    PubMed

    Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J

    2015-01-01

    Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.
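
    The value of a spatially explicit predictor over a field-uniform baseline can be illustrated with leave-one-out validation on hypothetical transect data (all numbers below are invented for illustration; the paper's Russian wheat aphid model is far richer):

```python
def loo_rmse(xs, ys, fit, predict):
    """Leave-one-out RMSE for a model given fit/predict callables."""
    err2 = 0.0
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        p = fit(tx, ty)
        err2 += (predict(p, xs[i]) - ys[i]) ** 2
    return (err2 / len(xs)) ** 0.5

# Spatially implicit baseline: one density for the whole field.
fit_mean = lambda x, y: sum(y) / len(y)
pred_mean = lambda p, xi: p

# Spatially explicit model: linear trend in distance from the field edge.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx
pred_line = lambda p, xi: p[0] * xi + p[1]

edge_dist = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]   # hypothetical transect (m)
density   = [52.0, 41.0, 33.0, 24.0, 18.0, 11.0]  # hypothetical aphids/tiller
```

    When a within-field gradient exists, the trend model's cross-validated error falls well below the uniform baseline, which is the signal the validation in the paper is designed to detect.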

  16. Systems biology of the structural proteome.

    PubMed

    Brunk, Elizabeth; Mih, Nathan; Monk, Jonathan; Zhang, Zhen; O'Brien, Edward J; Bliven, Spencer E; Chen, Ke; Chang, Roger L; Bourne, Philip E; Palsson, Bernhard O

    2016-03-11

    The success of genome-scale models (GEMs) can be attributed to the high-quality, bottom-up reconstructions of metabolic, protein synthesis, and transcriptional regulatory networks on an organism-specific basis. Such reconstructions are biochemically, genetically, and genomically structured knowledge bases that can be converted into a mathematical format to enable a myriad of computational biological studies. In recent years, genome-scale reconstructions have been extended to include protein structural information, which has opened up new vistas in systems biology research and empowered applications in structural systems biology and systems pharmacology. Here, we present the generation, application, and dissemination of genome-scale models with protein structures (GEM-PRO) for Escherichia coli and Thermotoga maritima. We show the utility of integrating molecular scale analyses with systems biology approaches by discussing several comparative analyses on the temperature dependence of growth, the distribution of protein fold families, substrate specificity, and characteristic features of whole cell proteomes. Finally, to aid in the grand challenge of big data to knowledge, we provide several explicit tutorials of how protein-related information can be linked to genome-scale models in a public GitHub repository ( https://github.com/SBRG/GEMPro/tree/master/GEMPro_recon/). Translating genome-scale, protein-related information to structured data in the format of a GEM provides a direct mapping of gene to gene-product to protein structure to biochemical reaction to network states to phenotypic function. Integration of molecular-level details of individual proteins, such as their physical, chemical, and structural properties, further expands the description of biochemical network-level properties, and can ultimately influence how to model and predict whole cell phenotypes as well as perform comparative systems biology approaches to study differences between organisms. 
GEM-PRO offers insight into the physical embodiment of an organism's genotype, and its use in this comparative framework enables exploration of adaptive strategies for these organisms, opening the door to many new lines of research. With these provided tools, tutorials, and background, the reader will be in a position to run GEM-PRO for their own purposes.
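
    The gene-to-gene-product-to-structure-to-reaction mapping described above can be represented as a chain of lookup tables. A deliberately tiny sketch with purely hypothetical identifiers (not taken from the GEM-PRO repository):

```python
# All identifiers below are hypothetical placeholders illustrating the
# gene -> gene product -> protein structure -> reaction mapping of a GEM-PRO.
gene_to_product = {"g0001": "EnzA"}
product_to_pdb  = {"EnzA": "1ABC"}
product_to_rxn  = {"EnzA": "RXN1"}

def annotate(gene):
    """Traverse the mapping chain for one gene."""
    prod = gene_to_product[gene]
    return {"gene": gene, "protein": prod,
            "structure": product_to_pdb[prod],
            "reaction": product_to_rxn[prod]}
```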

  17. All is not lost: deriving a top-down mass budget of plastic at sea

    NASA Astrophysics Data System (ADS)

    Koelmans, Albert A.; Kooi, Merel; Lavender Law, Kara; van Sebille, Erik

    2017-11-01

    Understanding the global mass inventory is one of the main challenges in present research on plastic marine debris. In particular, the fragmentation and vertical transport processes of oceanic plastic are poorly understood. However, whereas fragmentation rates are unknown, information on plastic emissions, concentrations of plastics in the ocean surface layer (OSL), and fragmentation mechanisms is available. Here, we apply a systems-engineering analytical approach and propose a tentative ‘whole ocean’ mass balance model that combines emission data, surface-area-normalized plastic fragmentation rates, estimated concentrations in the OSL, and removal from the OSL by sinking. We simulate known plastic abundances in the OSL and calculate an average whole-ocean apparent surface-area-normalized plastic fragmentation rate constant, given representative radii for macroplastic and microplastic. Simulations show that 99.8% of the plastic that had entered the ocean since 1950 had settled below the OSL by 2016, with an additional 9.4 million tons settling per year. For 2016, the model predicts that of the 0.309 million tons in the OSL, an estimated 83.7% was macroplastic, 13.8% microplastic, and 2.5% was < 0.335 mm ‘nanoplastic’. A zero-future-emission simulation shows that almost all plastic in the OSL would be removed within three years, implying a fast response time of surface plastic abundance to changes in inputs. The model complements current spatially explicit models, points to future experiments that would inform critical model parameters, and allows for further validation when more experimental and field data become available.
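
    The ‘whole ocean’ budget described above can be caricatured as a two-box model with first-order fragmentation and sinking. The sketch below uses invented rate constants (the paper estimates these from data) and annual Euler steps; it illustrates the qualitative result that most emitted plastic ends up below the OSL:

```python
def surface_layer_budget(n_years=66, emission=10.0,
                         k_frag=0.1, k_sink_macro=0.2, k_sink_micro=0.5):
    """Annual-step Euler integration of a two-box ocean-surface-layer
    budget: emission into the macro box, first-order macro -> micro
    fragmentation, and first-order sinking out of both boxes.
    All rate constants (per year) are hypothetical placeholders."""
    macro = micro = settled = emitted = 0.0
    dt = 1.0
    for _ in range(n_years):
        emitted += emission * dt
        frag = k_frag * macro * dt            # macro -> micro conversion
        sink_macro = k_sink_macro * macro * dt  # removal below the OSL
        sink_micro = k_sink_micro * micro * dt
        macro += emission * dt - frag - sink_macro
        micro += frag - sink_micro
        settled += sink_macro + sink_micro
    return macro, micro, settled, emitted
```

    Mass is conserved by construction (every transfer leaves one box and enters another), so the model's books always close, which is the essential property of a top-down budget.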

  18. Predictive microbiology in a dynamic environment: a system theory approach.

    PubMed

    Van Impe, J F; Nicolaï, B M; Schellekens, M; Martens, T; De Baerdemaeker, J

    1995-05-01

    The main factors influencing the microbial stability of chilled prepared food products, for which there is increased consumer interest, are temperature, pH, and water activity. Unlike the pH and the water activity, the temperature may vary extensively throughout the complete production and distribution chain. The shelf life of these foods is usually limited by spoilage from common microorganisms and by the increased risk of food pathogens. In predicting the shelf life, mathematical models are a powerful tool to increase insight into the different subprocesses and their interactions. However, the predictive value of the sigmoidal functions reported in the literature to describe a bacterial growth curve as an explicit function of time is only guaranteed at a constant temperature within the temperature range of microbial growth. As a result, they are less appropriate in optimization studies of a whole production and distribution chain. In this paper a more general modeling approach, inspired by system theory concepts, is presented for cases where, for instance, time-varying temperature profiles are to be taken into account. As a case study, we discuss a recently proposed dynamic model to predict microbial growth and inactivation under time-varying temperature conditions from a system theory point of view. Further, the validity of this methodology is illustrated with experimental data for Brochothrix thermosphacta and Lactobacillus plantarum. Finally, we propose some possible refinements of this model inspired by experimental results.
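
    A dynamic (system-theoretic) growth model takes temperature as a time-varying input rather than a fixed parameter. The sketch below couples a Ratkowsky square-root secondary model to logistic growth with simple Euler stepping; the parameter values and temperature profile are hypothetical, not those of the paper:

```python
def mu_max(T, b=0.03, T_min=-3.0):
    """Ratkowsky square-root secondary model:
    sqrt(mu) = b * (T - T_min) for T > T_min, else no growth."""
    return (b * (T - T_min)) ** 2 if T > T_min else 0.0

def simulate_growth(temps, dt=1.0, n0=1e3, n_max=1e9):
    """Euler integration of logistic growth dN/dt = mu(T(t)) N (1 - N/Nmax)
    driven by one temperature reading per time step (hours)."""
    n = n0
    for T in temps:
        n += dt * mu_max(T) * n * (1.0 - n / n_max)
    return n

# Hypothetical chill chain: 48 h at 4 C, 6 h of abuse at 20 C, 48 h at 4 C.
profile = [4.0] * 48 + [20.0] * 6 + [4.0] * 48
```

    Because temperature enters as an input signal, the same model covers any storage history, which is exactly what a static sigmoidal curve fitted at one temperature cannot do.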

  19. An explicit mixed numerical method for mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H.-M.

    1981-01-01

    A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for the time tendency terms, an upstream scheme for the advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either shallow-water equations in one dimension or primitive equations in three dimensions. Since the technique is explicit and uses only two time levels, it conserves computer and programming resources.
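
    The advective part of the scheme (forward in time, upstream in space) is easy to sketch for the 1D advection equation with u > 0; conditional stability means the Courant number c = u*dt/dx must satisfy c <= 1. This toy version on a periodic grid (not the full mesoscale system) also exhibits the scheme's monotonicity:

```python
def upstream_step(q, c):
    """One forward-in-time, upstream-in-space step for q_t + u q_x = 0
    with u > 0 on a periodic grid; c = u*dt/dx must satisfy c <= 1.
    q[i-1] with i = 0 wraps around, giving periodic boundaries for free."""
    return [q[i] - c * (q[i] - q[i - 1]) for i in range(len(q))]

def advect(q, c, n_steps):
    for _ in range(n_steps):
        q = upstream_step(q, c)
    return q
```

    Each updated value is the convex combination (1 - c)*q[i] + c*q[i-1], so for 0 <= c <= 1 no new extrema are created and the total mass is conserved exactly on a periodic domain.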

  20. Chaotic processes using the two-parameter derivative with non-singular and non-local kernel: Basic theory and applications

    NASA Astrophysics Data System (ADS)

    Doungmo Goufo, Emile Franc

    2016-08-01

    After the issues of singularity and locality were recently addressed in mathematical modelling, another question regarding the description of natural phenomena was raised: how influential is the second parameter β of the two-parameter Mittag-Leffler function Eα,β(z), z ∈ ℂ? To answer this question, we generalize the newly introduced one-parameter derivative with non-singular and non-local kernel [A. Atangana and I. Koca, Chaos, Solitons Fractals 89, 447 (2016); A. Atangana and D. Bealeanu (e-print)] by developing a similar two-parameter derivative with non-singular and non-local kernel based on Eα,β(z). We exploit the Agarwal/Erdelyi higher transcendental functions together with their Laplace transforms to explicitly establish the Laplace-transform expressions of the two-parameter derivatives, which are necessary for solving the related fractional differential equations. An explicit expression for the associated two-parameter fractional integral is also established. Concrete applications are presented for the atmospheric convection process by using the Lorenz non-linear simple system. An existence result for the model is provided and a numerical scheme established. As expected, solutions exhibit chaotic behavior for α less than 0.55, and this chaos is not interrupted by the impact of β. Rather, this second parameter seems to indirectly squeeze and rotate the solutions, giving an impression of twisting. The whole graphic seems to have completely changed its orientation in a particular direction. This observation clearly shows the substantial impact of the second parameter of Eα,β(z), certainly opening new doors to modeling with two-parameter derivatives.
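
    For moderate arguments, Eα,β(z) can be evaluated directly from its defining series, Eα,β(z) = Σ_k z^k / Γ(αk + β). A simple sketch (series summation only; evaluation for large |z| requires asymptotic or integral-representation methods):

```python
import math

def mittag_leffler(z, alpha, beta, n_terms=200):
    """Two-parameter Mittag-Leffler function by direct series summation:
    E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta).
    Adequate for moderate |z| only."""
    s = 0.0
    term_z = 1.0
    for k in range(n_terms):
        x = alpha * k + beta
        if x > 170.0:          # math.gamma overflows beyond ~171
            break
        s += term_z / math.gamma(x)
        term_z *= z
    return s
```

    Sanity checks recover the classical special cases E1,1(z) = exp(z) and E2,1(z) = cosh(sqrt(z)).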

  1. Chaotic processes using the two-parameter derivative with non-singular and non-local kernel: Basic theory and applications.

    PubMed

    Doungmo Goufo, Emile Franc

    2016-08-01

    After the issues of singularity and locality were recently addressed in mathematical modelling, another question regarding the description of natural phenomena was raised: how influential is the second parameter β of the two-parameter Mittag-Leffler function Eα,β(z), z∈ℂ? To answer this question, we generalize the newly introduced one-parameter derivative with non-singular and non-local kernel [A. Atangana and I. Koca, Chaos, Solitons Fractals 89, 447 (2016); A. Atangana and D. Bealeanu (e-print)] by developing a similar two-parameter derivative with non-singular and non-local kernel based on Eα,β(z). We exploit the Agarwal/Erdelyi higher transcendental functions together with their Laplace transforms to explicitly establish the Laplace-transform expressions of the two-parameter derivatives, which are necessary for solving the related fractional differential equations. An explicit expression for the associated two-parameter fractional integral is also established. Concrete applications are presented for the atmospheric convection process by using the Lorenz non-linear simple system. An existence result for the model is provided and a numerical scheme established. As expected, solutions exhibit chaotic behavior for α less than 0.55, and this chaos is not interrupted by the impact of β. Rather, this second parameter seems to indirectly squeeze and rotate the solutions, giving an impression of twisting. The whole graphic seems to have completely changed its orientation in a particular direction. This observation clearly shows the substantial impact of the second parameter of Eα,β(z), certainly opening new doors to modeling with two-parameter derivatives.

  2. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Furthermore, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.

  3. Connecting Free Energy Surfaces in Implicit and Explicit Solvent: an Efficient Method to Compute Conformational and Solvation Free Energies

    PubMed Central

    Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.

    2015-01-01

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8 % of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the explicit/implicit thermodynamic cycle. PMID:26236174

  4. Connecting free energy surfaces in implicit and explicit solvent: an efficient method to compute conformational and solvation free energies.

    PubMed

    Deng, Nanjie; Zhang, Bin W; Levy, Ronald M

    2015-06-09

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
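
    The bookkeeping of the implicit/explicit thermodynamic cycle reduces to summing free energy legs around the cycle A_expl → A_impl → B_impl → B_expl. A minimal sketch with invented leg values (kcal/mol):

```python
def explicit_dG(dG_impl_AB, ddG_impl_to_expl_A, ddG_impl_to_expl_B):
    """Explicit-solvent free energy difference A -> B recovered from the
    cheap implicit-solvent leg plus two implicit->explicit coupling legs:
    dG_expl(A->B) = -ddG(A) + dG_impl(A->B) + ddG(B)."""
    return -ddG_impl_to_expl_A + dG_impl_AB + ddG_impl_to_expl_B

# Hypothetical leg values (kcal/mol), for illustration only:
dg = explicit_dG(2.0, -5.2, -4.1)
```

    The point of the construction is that only the two vertical coupling legs need explicit-solvent sampling, and each is localized within a single free energy basin, avoiding the slow barrier crossings.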

  5. Localized 2D COSY sequences: Method and experimental evaluation for a whole metabolite quantification approach

    NASA Astrophysics Data System (ADS)

    Martel, Dimitri; Tse Ve Koon, K.; Le Fur, Yann; Ratiney, Hélène

    2015-11-01

    Two-dimensional spectroscopy offers the possibility to unambiguously distinguish metabolites by spreading out the multiplet structure of J-coupled spin systems into a second dimension. Quantification methods that perform parametric fitting of the 2D MRS signal have recently been proposed for J-resolved PRESS (JPRESS) but not explicitly for Localized Correlation Spectroscopy (LCOSY). Here, through a whole-metabolite quantification approach, correlation spectroscopy quantification performance is studied. The ability to quantify metabolite relaxation time constants is studied for three localized 2D MRS sequences (LCOSY, LCTCOSY, and JPRESS) in vitro on preclinical MR systems. The issues encountered during implementation and the quantification strategies are discussed with the help of the Fisher matrix formalism. The described parameterized models enable computation of the lower bound on error variance - generally known as the Cramér-Rao bounds (CRBs), a standard of precision - on the parameters estimated from these 2D MRS signal fittings. LCOSY has a theoretical net signal loss of two per unit of acquisition time compared to JPRESS. A quick analysis might suggest that the relative CRBs of LCOSY compared to JPRESS (expressed as a percentage of the concentration values) should be doubled, but we show that this is not necessarily true. Finally, the LCOSY quantification procedure has been applied to data acquired in vivo on a mouse brain.
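
    The Cramér-Rao bound computation referred to above starts from the Fisher information matrix F_ij = (1/σ²) Σ_k (∂s(t_k)/∂θ_i)(∂s(t_k)/∂θ_j) for a signal model s with i.i.d. Gaussian noise; parameter variances are bounded below by the diagonal of F⁻¹. The sketch below applies this to a toy monoexponential, not the 2D MRS signal model of the paper:

```python
import math

def crb_mono_exp(A, lam, times, sigma):
    """Cramer-Rao lower bounds for s(t) = A * exp(-lam * t) sampled at
    'times' with iid Gaussian noise of std sigma, via the 2x2 Fisher
    information matrix built from analytic partial derivatives."""
    f11 = f12 = f22 = 0.0
    for t in times:
        e = math.exp(-lam * t)
        dA, dl = e, -A * t * e          # ds/dA and ds/dlam
        f11 += dA * dA
        f12 += dA * dl
        f22 += dl * dl
    inv_s2 = 1.0 / sigma ** 2
    f11, f12, f22 = f11 * inv_s2, f12 * inv_s2, f22 * inv_s2
    det = f11 * f22 - f12 * f12
    return f22 / det, f11 / det        # var(A_hat), var(lam_hat) bounds
```

    As the closed form makes explicit, the bounds scale with σ², so doubling the noise level quadruples the best achievable variance on every parameter.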

  6. Water transport through tall trees: A vertically-explicit, analytical model of xylem hydraulic conductance in stems.

    PubMed

    Couvreur, Valentin; Ledder, Glenn; Manzoni, Stefano; Way, Danielle A; Muller, Erik B; Russo, Sabrina E

    2018-05-08

    Trees grow by vertically extending their stems, so accurate stem hydraulic models are fundamental to understanding the hydraulic challenges faced by tall trees. Using a literature survey, we showed that many tree species exhibit continuous vertical variation in hydraulic traits. To examine the effects of this variation on hydraulic function, we developed a spatially-explicit, analytical water transport model for stems. Our model allows Huber ratio, stem-saturated conductivity, pressure at 50% loss of conductivity, leaf area, and transpiration rate to vary continuously along the hydraulic path. Predictions from our model differ from a matric flux potential model parameterized with uniform traits. Analyses show that cavitation is a whole-stem emergent property resulting from nonlinear pressure-conductivity feedbacks that, with gravity, cause impaired water transport to accumulate along the path. Because of the compounding effects of vertical trait variation on hydraulic function, growing proportionally more sapwood and building tapered xylem with height, as well as reducing xylem vulnerability only at branch tips while maintaining transport capacity at the stem base, can compensate for these effects. We therefore conclude that the adaptive significance of vertical variation in stem hydraulic traits is to allow trees to grow tall and tolerate operating near their hydraulic limits. This article is protected by copyright. All rights reserved.
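
    A vertically explicit stem model of the kind described can be caricatured by integrating the pressure gradient dP/dz = -E/k(P) - ρg up the stem, with a sigmoidal vulnerability curve k(P). All parameter values below are hypothetical placeholders, and the traits are held uniform with height (the paper's point is precisely that they vary):

```python
import math

def stem_pressure_profile(height=30.0, dz=0.5, p_base=-0.5,
                          E=1e-4, k_max=1e-2, p50=-3.0, a=2.0):
    """Euler integration of dP/dz = -E/k(P) - rho*g along a stem, with a
    sigmoidal vulnerability curve k(P) = k_max / (1 + exp(a*(p50 - P)))
    (k = k_max/2 at P = p50, k -> 0 for more negative P).
    Units: MPa and m; all parameter values are hypothetical."""
    rho_g = 0.0098                     # gravitational potential gradient, MPa/m
    p = p_base
    profile = [p]
    for _ in range(int(height / dz)):
        k = k_max / (1.0 + math.exp(a * (p50 - p)))
        p -= dz * (E / k + rho_g)      # frictional loss plus gravity
        profile.append(p)
    return profile
```

    Because lower pressure reduces conductivity, which in turn steepens the pressure drop, the losses compound along the path; this positive feedback is the mechanism behind the whole-stem emergent cavitation behaviour described above.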

  7. Memory Systems Do Not Divide on Consciousness: Reinterpreting Memory in Terms of Activation and Binding

    PubMed Central

    Reder, Lynne M.; Park, Heekyeong; Kieffaber, Paul D.

    2009-01-01

    There is a popular hypothesis that performance on implicit and explicit memory tasks reflects 2 distinct memory systems. Explicit memory is said to store those experiences that can be consciously recollected, and implicit memory is said to store experiences and affect subsequent behavior but to be unavailable to conscious awareness. Although this division based on awareness is a useful taxonomy for memory tasks, the authors review the evidence that the unconscious character of implicit memory does not necessitate that it be treated as a separate system of human memory. They also argue that some implicit and explicit memory tasks share the same memory representations and that the important distinction is whether the task (implicit or explicit) requires the formation of a new association. The authors review and critique dissociations from the behavioral, amnesia, and neuroimaging literatures that have been advanced in support of separate explicit and implicit memory systems by highlighting contradictory evidence and by illustrating how the data can be accounted for using a simple computational memory model that assumes the same memory representation for those disparate tasks. PMID:19210052

  8. Towards a physically-based multi-scale ecohydrological simulator for semi-arid regions

    NASA Astrophysics Data System (ADS)

    Caviedes-Voullième, Daniel; Josefik, Zoltan; Hinz, Christoph

    2017-04-01

    The use of numerical models as tools for describing and understanding complex ecohydrological systems has made it possible to test hypotheses and propose fundamental, process-based explanations of the behaviour of the system as a whole as well as of its internal dynamics. Reaction-diffusion equations have been used to describe and generate organized patterns such as bands, spots, and labyrinths using simple feedback mechanisms and boundary conditions. Alternatively, pattern-matching cellular automaton models have been used to generate vegetation self-organization in arid and semi-arid regions, also using simple descriptions of surface hydrological processes. A key question is: how much physical realism is needed in order to adequately capture the pattern formation processes in semi-arid regions while reliably representing the water balance dynamics at the relevant time scales? In fact, redistribution of water by surface runoff at the hillslope scale occurs at a temporal resolution of minutes, while vegetation development requires much lower temporal resolution and longer time spans. This generates a fundamental spatio-temporal multi-scale problem to be solved, for which high-resolution rainfall and surface topography are required. Accordingly, the objective of this contribution is to provide proof-of-concept that the governing processes can be described numerically at those multiple scales. The requirements for simulating ecohydrological processes and pattern formation with increased physical realism are, amongst others: i. high-resolution rainfall that adequately captures the triggers of growth, as the vegetation dynamics of arid regions respond as pulsed systems; ii. complex, natural topography in order to accurately model drainage patterns, as surface water redistribution is highly sensitive to topographic features; iii. microtopography and hydraulic roughness, as small-scale variations do impact large-scale hillslope behaviour; iv. moisture-dependent infiltration, as the temporal dynamics of infiltration affect water storage under vegetation and in bare soil. Despite the volume of research in this field, fundamental limitations still exist in the models regarding the aforementioned issues. Topography and hydrodynamics have been strongly simplified. Infiltration has been modelled as dependent on depth but independent of soil moisture. Temporal rainfall variability has only been addressed for seasonal rain. Spatial heterogeneity of the topography, as well as of roughness and infiltration properties, has not been fully and explicitly represented. We hypothesize that physical processes must be robustly modelled and the drivers of complexity must be present with as much resolution as possible in order to provide the necessary realism to improve transient simulations, perhaps leading the way to virtual laboratories and, arguably, predictive tools. This work provides a first approach to a model with explicit hydrological processes represented by physically-based hydrodynamic models, coupled with well-accepted vegetation models. The model aims to enable new possibilities relating to spatiotemporal variability, arbitrary topography, and representation of spatial heterogeneity, including sub-daily (in fact, arbitrary) temporal variability of rain as the main forcing of the model, explicit representation of infiltration processes, and various feedback mechanisms between the hydrodynamics and the vegetation. Preliminary testing strongly suggests that the model is viable, has the potential of producing new information on the internal dynamics of the system, and makes it possible to aggregate many of the sources of complexity. Initial benchmarking of the model also reveals strengths to be exploited, thus providing an interesting research outlook, as well as weaknesses to be addressed in the immediate future.

  9. MIST: An Open Source Environmental Modelling Programming Language Incorporating Easy to Use Data Parallelism.

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2014-05-01

    Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or, more generally, whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g. the 3x3 local neighbourhood needed to implement an averaging image filter can be accessed simply from within a loop traversing all image pixels). This facility hides the details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and, separately, to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development and volunteers are sought to create an ANSI C implementation. Parallel processing is currently implemented using OpenMP; however, the parallelization code is fully modularised and could be replaced with implementations using other libraries. A GPU implementation is potentially possible.
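The loop-to-whole-array translation MIST automates can be illustrated in NumPy terms. The sketch below is an assumption about the style of the transformation, not actual MIST code: the same 3x3 averaging filter written once as the nested loops a modeller would write, and once as the whole-array shifted-sum form a MIST-like translator would emit.

```python
import numpy as np

def mean_filter_loops(img):
    """3x3 averaging filter as explicit loops (the style the modeller writes)."""
    n, m = img.shape
    out = img.copy()
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].mean()
    return out

def mean_filter_arrays(img):
    """The same filter as whole-array shifted sums (the vectorized form)."""
    acc = np.zeros_like(img, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += np.roll(np.roll(img, di, 0), dj, 1)
    out = img.astype(float).copy()
    out[1:-1, 1:-1] = acc[1:-1, 1:-1] / 9.0   # interior cells only
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
print(np.allclose(mean_filter_loops(img), mean_filter_arrays(img)))
```

Both forms compute identical results; the whole-array form is the one that parallelizes trivially across nodes.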

  10. An Explicit Algorithm for the Simulation of Fluid Flow through Porous Media

    NASA Astrophysics Data System (ADS)

    Trapeznikova, Marina; Churbanova, Natalia; Lyupa, Anastasiya

    2018-02-01

    The work deals with the development of an original mathematical model of porous medium flow constructed by analogy with the quasigasdynamic system of equations and allowing implementation via explicit numerical methods. The model is generalized to the case of multiphase multicomponent fluid and takes into account possible heat sources. The proposed approach is verified by a number of test predictions.
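As a minimal, hedged illustration of why explicit schemes are attractive here, the sketch below applies an explicit FTCS update to a 1-D diffusion-type pressure equation, a stand-in for the diffusive limit of porous medium flow; the paper's quasigasdynamic system is considerably richer, and all values are illustrative.

```python
import numpy as np

def explicit_diffusion(p0, kappa, dx, dt, steps):
    """Explicit FTCS update for p_t = kappa * p_xx with fixed boundaries.
    Explicit schemes need no linear solve, at the cost of the stability
    restriction kappa*dt/dx^2 <= 1/2."""
    r = kappa * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    p = p0.copy()
    for _ in range(steps):
        p[1:-1] = p[1:-1] + r * (p[2:] - 2 * p[1:-1] + p[:-2])
    return p

x = np.linspace(0.0, 1.0, 101)
p0 = np.where(x < 0.5, 1.0, 0.0)   # initial pressure step
p = explicit_diffusion(p0, kappa=1e-3, dx=x[1] - x[0], dt=0.02, steps=500)
print(float(p[40]), float(p[60]))  # step has diffused across x = 0.5
```

The fully explicit update maps directly onto GPUs and distributed grids, which is a usual motivation for recasting flow models this way.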

  11. BayMeth: improved DNA methylation quantification for affinity capture sequencing data using a flexible Bayesian approach

    PubMed Central

    2014-01-01

    Affinity capture of DNA methylation combined with high-throughput sequencing strikes a good balance between the high cost of whole genome bisulfite sequencing and the low coverage of methylation arrays. We present BayMeth, an empirical Bayes approach that uses a fully methylated control sample to transform observed read counts into regional methylation levels. In our model, inefficient capture can readily be distinguished from low methylation levels. BayMeth improves on existing methods, allows explicit modeling of copy number variation, and offers computationally efficient analytical mean and variance estimators. BayMeth is available in the Repitools Bioconductor package. PMID:24517713
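The idea of using a fully methylated control to separate capture inefficiency from genuinely low methylation can be caricatured as follows. This is a toy pseudo-count estimator of our own construction, not BayMeth's actual empirical Bayes model: the control's read count calibrates per-region capture efficiency, and a Beta-style prior shrinks low-coverage regions.

```python
import numpy as np

def methylation_level(y, y_control, a=1.0, b=1.0):
    """Toy estimate of regional methylation. y are observed read counts,
    y_control the counts from a fully methylated control sample (so a low
    control count flags inefficient capture rather than low methylation).
    (a, b) act as Beta(a, b) pseudo-counts. Illustration only."""
    y = np.asarray(y, dtype=float)
    yc = np.asarray(y_control, dtype=float)
    # posterior-mean-style ratio with pseudo-counts, capped at 1
    return np.minimum((y + a) / (yc + a + b), 1.0)

obs = [40, 5, 0]
ctrl = [50, 50, 2]   # the third region has a low control count: poor capture
print(methylation_level(obs, ctrl))
```

Note how the third region, with almost no control signal, is not confidently called unmethylated, which is the qualitative behaviour the abstract describes.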

  12. In silico ribozyme evolution in a metabolically coupled RNA population.

    PubMed

    Könnyű, Balázs; Szilágyi, András; Czárán, Tamás

    2015-05-27

    The RNA World hypothesis offers a plausible bridge from no-life to life on prebiotic Earth, by assuming that RNA, the only known molecule type capable of playing genetic and catalytic roles at the same time, could have been the first evolvable entity on the evolutionary path to the first living cell. We have developed the Metabolically Coupled Replicator System (MCRS), a spatially explicit simulation modelling approach to prebiotic RNA-World evolution on mineral surfaces, in which we incorporate the most important experimental facts and theoretical considerations to comply with recent knowledge on RNA and prebiotic evolution. In this paper the MCRS model framework has been extended in order to investigate the dynamical and evolutionary consequences of adding an important physico-chemical detail, namely explicit replicator structure - nucleotide sequence and 2D folding calculated from thermodynamical criteria - and their possible mutational changes, to the assumptions of a previously less detailed toy model. For each mutable nucleotide sequence the corresponding 2D folded structure with minimum free energy is calculated, which in turn is used to determine the fitness components (degradation rate, replicability and metabolic enzyme activity) of the replicator. We show that the community of such replicators providing the monomer supply for their own replication by evolving metabolic enzyme activities features an improved propensity for stable coexistence and structural adaptation. These evolutionary advantages are due to the emergent uniformity of metabolic replicator fitnesses imposed on the community by local group selection and attained through replicator trait convergence, i.e., the tendency of replicator lengths, ribozyme activities and population sizes to become similar between the coevolving replicator species that are otherwise both structurally and functionally different. 
In the most general terms it is the surprisingly high extra viability of the metabolic replicator system that the present model adds to the MCRS concept of the origin of life. Surface-bound, metabolically coupled RNA replicators tend to evolve different, enzymatically active sites within thermodynamically stable secondary structures, and the system as a whole evolves towards the robust coexistence of a complete set of such ribozymes driving the metabolism producing monomers for their own replication.

  13. Characterizing Aeroelastic Systems Using Eigenanalysis, Explicitly Retaining The Aerodynamic Degrees of Freedom

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Dowell, Earl H.

    2001-01-01

    Discrete-time aeroelastic models with explicitly retained aerodynamic modes have been generated employing a time-marching vortex lattice aerodynamic model. This paper presents analytical results from eigenanalysis of these models. The potential of these models to calculate the behavior of modes that represent damped system motion (noncritical modes), in addition to the simple harmonic modes, is explored. A typical section with only structural freedom in pitch is examined. The eigenvalues are examined and compared to experimental data. Issues regarding the convergence of the solution with respect to refining the aerodynamic discretization are investigated. Eigenvector behavior is examined; the eigenvector associated with a particular eigenvalue can be viewed as the set of modal participation factors for that mode. For the present formulation of the equations of motion, the vorticity of each aerodynamic element appears explicitly as an element of each eigenvector, in addition to the structural dynamic generalized coordinates. Thus, modal participation of the aerodynamic degrees of freedom can be assessed in addition to participation of the structural degrees of freedom.
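The kind of eigenanalysis described, with aerodynamic states retained explicitly so that they appear in the eigenvectors, can be sketched on a toy discrete-time state matrix. The matrix entries below are invented for illustration and do not come from the paper's vortex lattice model.

```python
import numpy as np

# Toy discrete-time aeroelastic state matrix: two structural states
# (pitch, pitch rate) coupled to three explicitly retained aerodynamic
# "wake vorticity" states. All entries are illustrative.
ns, na = 2, 3
A = np.zeros((ns + na, ns + na))
A[0, 1] = 1.0                                # pitch integrates pitch rate
A[1, :] = [-0.30, 0.90, 0.05, 0.02, 0.01]    # structure forced by wake states
A[2, 0] = 1.0                                # newest wake element shed from pitch
A[3, 2] = 0.8                                # wake convects and decays downstream
A[4, 3] = 0.8

w, V = np.linalg.eig(A)
# In discrete time, damped (noncritical) modes have |eigenvalue| < 1.
n_damped = int(np.sum(np.abs(w) < 1.0))
# Each eigenvector lists the modal participation of every state,
# aerodynamic vorticity states included, not just structural coordinates.
dominant = np.argmax(np.abs(w))
participation = np.abs(V[:, dominant]) / np.abs(V[:, dominant]).max()
print(n_damped, participation)
```

Reading off the aerodynamic entries of `participation` is the discrete analogue of assessing wake participation in a given mode.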

  14. Implicit and explicit ethnocentrism: revisiting the ideologies of prejudice.

    PubMed

    Cunningham, William A; Nezlek, John B; Banaji, Mahzarin R

    2004-10-01

    Two studies investigated relationships among individual differences in implicit and explicit prejudice, right-wing ideology, and rigidity in thinking. The first study examined these relationships focusing on White Americans' prejudice toward Black Americans. The second study provided the first test of implicit ethnocentrism and its relationship to explicit ethnocentrism by studying the relationships between attitudes toward five social groups. Factor analyses found support for both implicit and explicit ethnocentrism. In both studies, mean explicit attitudes toward outgroups were positive, whereas implicit attitudes were negative, suggesting that implicit and explicit prejudices are distinct; however, in both studies, implicit and explicit attitudes were related (r = .37, .47). Latent variable modeling indicates a simple structure within this ethnocentric system, with variables organized in order of specificity. These results lead to the conclusion that (a) implicit ethnocentrism exists and (b) it is related to, yet distinct from, explicit ethnocentrism.

  15. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-01-01

    Studies aimed at increasing the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed, and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems, such as insulated metal structures). The effects of different models of an insulated cylinder on algorithm performance are demonstrated. The stiffness of the problem is highly sensitive to modeling details, and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and of operator splitting techniques for speeding up the solution of the algebraic equations are also described.
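The stiffness argument can be demonstrated on the scalar test equation dT/dt = λT with a strongly negative λ (a fast cooling mode, as in an insulated metal structure). The sketch below is our own illustration, not GEARIB: forward Euler blows up when the step exceeds its stability limit, while backward Euler stays stable at the same step size.

```python
def explicit_euler(lmbda, dt, steps, T0=1.0):
    """Forward Euler on dT/dt = lmbda*T; stable only if |1 + dt*lmbda| <= 1."""
    T = T0
    for _ in range(steps):
        T += dt * lmbda * T
    return T

def implicit_euler(lmbda, dt, steps, T0=1.0):
    """Backward Euler: T_{n+1} = T_n / (1 - dt*lmbda), stable for any dt
    when lmbda < 0."""
    T = T0
    for _ in range(steps):
        T = T / (1.0 - dt * lmbda)
    return T

lmbda = -1000.0   # a stiff mode
dt = 0.01         # ten times the explicit stability limit of 0.002
print(explicit_euler(lmbda, dt, 50))   # grows without bound
print(implicit_euler(lmbda, dt, 50))   # decays toward zero
```

This is the trade-off the abstract reports: explicit steps are cheap but step-size limited by the stiffest mode, whereas implicit steps cost a solve but can take large steps.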

  16. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-08-01

    Studies aimed at increasing the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed, and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems, such as insulated metal structures). The effects of different models of an insulated cylinder on algorithm performance are demonstrated. The stiffness of the problem is highly sensitive to modeling details, and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and of operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  17. JAVA PathFinder

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter

    2005-01-01

    JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a Swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations such as deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.
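The core of explicit-state model checking, exhaustively exploring thread interleavings and flagging states with no enabled successor, can be sketched in a few lines. The toy below checks a classic two-lock deadlock; it only illustrates the idea and is in no way JPF itself.

```python
from collections import deque

# Two threads take locks A and B in opposite order: a classic deadlock.
PROGRAMS = {0: ["acq A", "acq B", "rel B", "rel A"],
            1: ["acq B", "acq A", "rel A", "rel B"]}

def enabled(pcs, locks):
    """Yield moves (tid, lock, op) executable from this state."""
    for tid in (0, 1):
        pc = pcs[tid]
        if pc >= len(PROGRAMS[tid]):
            continue
        op, lock = PROGRAMS[tid][pc].split()
        if op == "rel" or locks.get(lock) is None:
            yield tid, lock, op

def step(pcs, locks, move):
    tid, lock, op = move
    locks = dict(locks)
    locks[lock] = tid if op == "acq" else None
    pcs = tuple(pc + 1 if t == tid else pc for t, pc in enumerate(pcs))
    return pcs, tuple(sorted(locks.items()))

def find_deadlock():
    """BFS over all interleavings; a deadlock is a non-final state with
    no enabled moves. Returns the deadlocked state, or None."""
    init = ((0, 0), (("A", None), ("B", None)))
    seen, frontier = {init}, deque([init])
    while frontier:
        pcs, locks = frontier.popleft()
        moves = list(enabled(pcs, dict(locks)))
        done = all(pcs[t] == len(PROGRAMS[t]) for t in (0, 1))
        if not moves and not done:
            return pcs, locks                  # deadlock found
        for mv in moves:
            nxt = step(pcs, dict(locks), mv)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

print(find_deadlock())
```

A real checker like JPF additionally records the path to the defect, applies partial-order reductions, and checks far richer properties, but the state-space exploration loop has this shape.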

  18. Fast, Statistical Model of Surface Roughness for Ion-Solid Interaction Simulations and Efficient Code Coupling

    NASA Astrophysics Data System (ADS)

    Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian

    2017-10-01

    Surface roughness greatly impacts material erosion, and thus plays an important role in plasma-surface interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo BCA program TRIDYN developed for this purpose; it includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than, and compares favorably to, the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and to experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.

  19. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

    The fictitious time domain (FTD) method, based on the correspondence principle between wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems; the approach can reduce computer time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed into a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible, as a compromise between efficiency (a longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response in the diffusion (frequency) domain from the fictitious time domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to obtain the final full-waveform, full-time response of the system in the time domain.
In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.

  20. Investigating the predictive validity of implicit and explicit measures of motivation in problem-solving behavioural tasks.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2013-09-01

    Research into the effects of individuals' autonomous motivation on behaviour has traditionally adopted explicit measures and self-reported outcome assessment. Recently, there has been increased interest in the effects of implicit motivational processes underlying behaviour from a self-determination theory (SDT) perspective. The aim of the present research was to provide support for the predictive validity of an implicit measure of autonomous motivation for behavioural persistence on two objectively measurable tasks. SDT and a dual-systems model were adopted as frameworks to explain the unique effects offered by explicit and implicit autonomous motivational constructs on behavioural persistence. In both studies, implicit autonomous motivation significantly predicted unique variance in time spent on each task. Several explicit measures of autonomous motivation also significantly predicted persistence. Results provide support for the proposed model and the inclusion of implicit measures in research on motivated behaviour. In addition, implicit measures of autonomous motivation appear to be better suited to explaining variance in behaviours that are more spontaneous or unplanned. Future implications for research examining implicit motivation from dual-systems models and SDT approaches are outlined. © 2012 The British Psychological Society.

  1. The `What is a system' reflection interview as a knowledge integration activity for high school students' understanding of complex systems in human biology

    NASA Astrophysics Data System (ADS)

    Tripto, Jaklin; Ben-Zvi Assaraf, Orit; Snapir, Zohar; Amit, Miriam

    2016-03-01

    This study examined the reflection interview as a tool for assessing and facilitating the use of 'systems language' amongst 11th grade students who had recently completed their first year of high school biology. Eighty-three students composed two concept maps in the 10th grade, one at the beginning of the school year and one at its end. The first part of the interview was dedicated to guiding the students through a comparison of their two concept maps, by means of both explicit and non-explicit teaching. Our study showed that explicit guidance in comparing the two concept maps was more effective than non-explicit guidance, eliciting a variety of different, more specific types of interactions and patterns (e.g. 'hierarchy', 'dynamism', 'homeostasis') in the students' descriptions of the human body system. The reflection interview as a knowledge integration activity was found to be an effective tool for assessing the subjects' conceptual models of 'system complexity', and for identifying those aspects of a system that are most commonly misunderstood.

  2. Spatially explicit decision support for selecting translocation areas for Mojave desert tortoises

    USGS Publications Warehouse

    Heaton, Jill S.; Nussear, Kenneth E.; Esque, Todd C.; Inman, Richard D.; Davenport, Frank; Leuteritz, Thomas E.; Medica, Philip A.; Strout, Nathan W.; Burgess, Paul A.; Benvenuti, Lisa

    2008-01-01

    Spatially explicit decision support systems are assuming an increasing role in natural resource and conservation management. In order for these systems to be successful, however, they must address real-world management problems with input from both the scientific and management communities. The National Training Center at Fort Irwin, California, has expanded its training area, encroaching on U.S. Fish and Wildlife Service critical habitat set aside for the Mojave desert tortoise (Gopherus agassizii), a federally threatened species. Of all the mitigation measures proposed to offset the expansion, the most challenging to implement was the selection of the areas most feasible for tortoise translocation. We developed an objective, open, scientifically defensible, spatially explicit decision support system to evaluate translocation potential within the Western Mojave Recovery Unit for tortoise populations under imminent threat from the military expansion. Using up to a total of 10 biological, anthropogenic, and/or logistical criteria, seven alternative translocation scenarios were developed. The final translocation model was a consensus model across the seven scenarios. Within the final model, six potential translocation areas were identified.

  3. A virtual-system coupled multicanonical molecular dynamics simulation: Principles and applications to free-energy landscape of protein-protein interaction with an all-atom model in explicit solvent

    NASA Astrophysics Data System (ADS)

    Higo, Junichi; Umezawa, Koji; Nakamura, Haruki

    2013-05-01

    We propose a novel generalized ensemble method, virtual-system coupled multicanonical molecular dynamics (V-McMD), to enhance the conformational sampling of biomolecules expressed by an all-atom model in an explicit solvent. In this method, a virtual system, whose physical quantities can be set arbitrarily, is coupled with the biomolecular system that is the target to be studied. The method was applied to a system of an Endothelin-1 derivative, KR-CSH-ET1, known to form an antisymmetric homodimer at room temperature. V-McMD was performed starting from a configuration in which two KR-CSH-ET1 molecules were mutually distant in an explicit solvent. The lowest free-energy state (the most thermally stable state) at room temperature coincides with the experimentally determined native complex structure. This state was separated from other non-native minor clusters by a free-energy barrier, although the barrier disappeared at elevated temperatures. V-McMD produced a canonical ensemble faster than the conventional McMD method.

  4. The interaction of spatial scale and predator-prey functional response

    USGS Publications Warehouse

    Blaine, T.W.; DeAngelis, D.L.

    1997-01-01

    Predator-prey models with a prey-dependent functional response have the property that the prey equilibrium value is determined only by predator characteristics. However, in observed natural systems (for instance, snail-periphyton interactions in streams) the equilibrium periphyton biomass has been shown experimentally to be influenced by both snail numbers and levels of available limiting nutrient in the water. Hypothesizing that the observed patchiness in periphyton in streams may be part of the explanation for the departure of behavior of the equilibrium biomasses from predictions of the prey-dependent response of the snail-periphyton system, we developed and analyzed a spatially-explicit model of periphyton in which snails were modeled as individuals in their movement and feeding, and periphyton was modeled as patches or spatial cells. Three different assumptions on snail movement were used: (1) random movement between spatial cells, (2) tracking by snails of local abundances of periphyton, and (3) delayed departure of snails from cells to reduce costs associated with movement. Of these assumptions, only the third strategy, based on an herbivore strategy of staying in one patch until local periphyton biomass concentration falls below a certain threshold amount, produced results in which both periphyton and snail biomass increased with nutrient input. Thus, if data are averaged spatially over the whole system, we expect that a ratio-dependent functional response may be observed if the herbivore behaves according to the third assumption. Both random movement and delayed cell departure had the result that spatial heterogeneity of periphyton increased with nutrient input.
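Movement assumption (3), staying in a patch until local biomass falls below a threshold, can be sketched as follows. This toy lattice model and every parameter value in it are our own illustration, not the authors' individual-based snail-periphyton model.

```python
import numpy as np

def simulate(steps=500, cells=20, growth=0.1, graze=0.6,
             threshold=2.0, n_snails=5, seed=0):
    """Toy spatially explicit grazing model. Periphyton grows in every
    cell; each snail grazes its current cell and relocates to a random
    cell only when local biomass falls below `threshold` (the
    delayed-departure rule). All values are illustrative."""
    rng = np.random.default_rng(seed)
    periphyton = np.full(cells, 5.0)
    snails = rng.integers(0, cells, size=n_snails)
    for _ in range(steps):
        periphyton += growth                     # uniform resource input
        for k, c in enumerate(snails):
            eaten = min(graze, periphyton[c])    # cannot graze below zero
            periphyton[c] -= eaten
            if periphyton[c] < threshold:        # depart depleted patch
                snails[k] = rng.integers(0, cells)
    return periphyton

p = simulate()
print(float(p.mean()), float(p.min()))
```

Replacing the threshold rule with unconditional random relocation (assumption 1) or biomass-tracking moves (assumption 2) is a one-line change, which is how the three movement strategies would be compared.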

  5. Habitat fragmentation resulting in overgrazing by herbivores.

    PubMed

    Kondoh, Michio

    2003-12-21

    Habitat fragmentation sometimes results in outbreaks of herbivorous insects and causes an enormous loss of primary production. It is hypothesized that the driving force behind such herbivore outbreaks is disruption of natural enemy attack, which releases herbivores from top-down control. To test this hypothesis I studied how trophic community structure changes along a gradient of habitat fragmentation using spatially implicit and explicit models of a tri-trophic (plant, herbivore and natural enemy) food chain. While in the spatially implicit model the number of trophic levels gradually decreases with increasing fragmentation, in the spatially explicit model a relatively low level of habitat fragmentation leads to overgrazing by herbivores, resulting in extinction of the plant population followed by a total system collapse. This provides theoretical support for the hypothesis that habitat fragmentation can lead to overgrazing by herbivores, and suggests a central role of spatial structure in the influence of habitat fragmentation on trophic communities. Further, the spatially explicit model shows (i) that the total system collapse through overgrazing can occur only if the herbivore colonization rate is high; and (ii) that with increasing natural enemy colonization rate, the fragmentation level that leads to the system collapse becomes higher, and the frequency of the collapse is lowered.
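The spatially implicit side of the contrast can be illustrated with a Levins-style patch-occupancy sketch of a tri-trophic chain, in which trophic levels drop out from the top as the intact habitat fraction h shrinks. The equations and rate constants below are illustrative, not the paper's model.

```python
def patch_dynamics(h, steps=20000, dt=0.01,
                   c=(1.0, 2.5, 4.0), e=(0.1, 0.2, 0.3)):
    """Levins-style patch occupancy for plant (0), herbivore (1) and
    natural enemy (2). h is the fraction of habitat left intact, and each
    level can only persist in patches occupied by the level below.
    Colonization rates c and extinction rates e are illustrative."""
    p = [0.5, 0.25, 0.1]
    for _ in range(steps):
        d0 = c[0] * p[0] * (h - p[0]) - e[0] * p[0]
        d1 = c[1] * p[1] * (p[0] - p[1]) - (e[0] + e[1]) * p[1]
        d2 = c[2] * p[2] * (p[1] - p[2]) - (e[0] + e[1] + e[2]) * p[2]
        p = [p[0] + dt * d0, p[1] + dt * d1, p[2] + dt * d2]
    return p

print(patch_dynamics(1.0))    # intact habitat: all three levels persist
print(patch_dynamics(0.15))   # fragmented: herbivore and enemy are lost
```

In this implicit mean-field picture levels are lost gradually from the top; the paper's point is that an explicit spatial model instead shows abrupt plant extinction through localized overgrazing.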

  6. MOAB: a spatially explicit, individual-based expert system for creating animal foraging models

    USGS Publications Warehouse

    Carter, J.; Finn, John T.

    1999-01-01

    We describe the development, structure, and corroboration process of a simulation model of animal behavior (MOAB). MOAB can create spatially explicit, individual-based animal foraging models. Users can create or replicate heterogeneous landscape patterns, and place resources and individual animals of a given species on that landscape to simultaneously simulate the foraging behavior of multiple species. The heuristic rules for animal behavior are maintained in a user-modifiable expert system. MOAB can be used to explore hypotheses concerning the influence of landscape pattern on animal movement and foraging behavior. A red fox (Vulpes vulpes L.) foraging and nest predation model was created to test MOAB's capabilities. Foxes were simulated for 30-day periods using both expert system and random movement rules. Home range size, territory formation, and other behaviors were compared with available simulation studies. A striped skunk (Mephitis mephitis L.) model was also developed. The expert system model proved superior to the stochastic model with respect to territory formation, general movement patterns, and home range size.

  7. Default contagion risks in Russian interbank market

    NASA Astrophysics Data System (ADS)

    Leonidov, A. V.; Rumyantsev, E. L.

    2016-06-01

    Systemic risks of default contagion in the Russian interbank market are investigated. The analysis is based on the bow-tie structure of the weighted oriented graph describing the structure of the interbank loans. A probabilistic model of interbank contagion is developed that explicitly takes into account the empirical bow-tie structure (reflecting the functionality of the nodes: borrowers, lenders, or both simultaneously), the degree distributions, and the disassortativity of the interbank network under consideration, all based on empirical data. The characteristics of contagion-related systemic risk calculated with this model are shown to be in agreement with those of explicit stress tests.
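The basic mechanism underlying such stress tests, a default cascade on a directed exposure network, can be sketched as follows. The banks, exposure amounts, and capital figures are invented, and this toy deliberately omits the bow-tie structure, degree distributions, and probabilistic treatment the paper's model incorporates.

```python
# Toy default cascade on a directed interbank exposure graph.
exposures = {            # lender -> {borrower: amount lent}
    "A": {"B": 40, "C": 10},
    "B": {"C": 30},
    "C": {"D": 20},
    "D": {},
}
capital = {"A": 30, "B": 25, "C": 15, "D": 5}

def cascade(initial_defaults, loss_given_default=1.0):
    """Propagate defaults: a bank fails when write-downs on its loans to
    failed borrowers exhaust its capital buffer."""
    failed = set(initial_defaults)
    changed = True
    while changed:
        changed = False
        for bank, loans in exposures.items():
            if bank in failed:
                continue
            loss = loss_given_default * sum(amt for b, amt in loans.items()
                                            if b in failed)
            if loss >= capital[bank]:
                failed.add(bank)
                changed = True
    return failed

print(cascade({"D"}))   # a single default at the end of the chain
```

Here the failure of the terminal borrower D wipes out every lender upstream, the kind of chain amplification whose probability the paper's model quantifies on the real network.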

  8. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches, buses, fabric, and memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations at all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (from the SP benchmark of the NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). We then perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.

  9. On discrete symmetries for a whole Abelian model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauca, J.; Doria, R.; Aprendanet, Petropolis, 25600

    Considering the whole concept applied to gauge theory, a nonlinear abelian model is derived. A next step is to understand the model's properties; this work is devoted to its discrete symmetries. For this, we work in terms of two fields reference systems. The whole gauge symmetry can be analyzed through different sets, namely the constructor basis {D_μ, X^i_μ} and the physical basis {G_μI}. Taking the diagonalized spin-1 sector as the fields reference system, the P, C, T and PCT symmetries are analyzed. They show that under this systemic model there are conservation laws driven for the parts and for the whole. This develops the meaning of whole-parity, field-parity, and so on. However, it is the whole symmetry that rules. It means that usually forbidden particles, such as pseudovector photons, can be introduced through such a whole abelian system. As a result, one notices that the whole of fields {G_μI} manifests a quanta diversity: it involves particles with different spins, masses and discrete quantum numbers under the same gauge symmetry. It says that, without violating PCT symmetry, different possibilities for discrete symmetries can be accommodated.

  10. Revisiting the Logan plot to account for non-negligible blood volume in brain tissue.

    PubMed

    Schain, Martin; Fazio, Patrik; Mrzljak, Ladislav; Amini, Nahid; Al-Tawil, Nabil; Fitzer-Attas, Cheryl; Bronzova, Juliana; Landwehrmeyer, Bernhard; Sampaio, Christina; Halldin, Christer; Varrone, Andrea

    2017-08-18

    Reference tissue-based quantification of brain PET data does not typically include correction for signal originating from blood vessels, which is known to result in biased outcome measures. The extent of the bias depends on the amount of radioactivity in the blood vessels. In this study, we revisit the well-established Logan plot and derive alternative formulations that provide estimates of distribution volume ratios (DVRs) corrected for the signal originating from the vasculature. New expressions for the Logan plot based on the arterial input function and on reference tissue were derived, which include explicit terms for whole-blood radioactivity. The new methods were evaluated using PET data acquired with [11C]raclopride and [18F]MNI-659. The two-tissue compartment model (2TCM), with which the signal originating from blood can be explicitly modeled, was used as a gold standard. DVR values obtained for [11C]raclopride using either the blood-based or the reference tissue-based Logan plot were systematically underestimated compared to 2TCM; for [18F]MNI-659, a proportionality bias was observed, i.e., the bias varied across regions. The biases disappeared when optimal blood-signal correction was used for the respective tracer, although for [18F]MNI-659 a small but systematic overestimation of DVR was still observed. The new method appears to remove the bias introduced by the absence of correction for blood volume in regular graphical analysis and can be considered in clinical studies. Further studies are, however, required to derive a generic mapping between plasma and whole-blood radioactivity levels.
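For orientation, the classical blood-based Logan plot, the uncorrected baseline the study revisits, can be sketched on synthetic one-tissue-compartment data; the corrected variants derived in the paper add explicit whole-blood terms that this sketch omits, and all kinetic values here are invented.

```python
import numpy as np

def logan_vt(t, ct, cp, t_star=30.0):
    """Classical blood-based Logan plot: the slope of int(ct)/ct versus
    int(cp)/ct over the late, linear portion (t > t_star) estimates the
    distribution volume VT."""
    ict = np.concatenate(([0.0],
        np.cumsum(np.diff(t) * 0.5 * (ct[1:] + ct[:-1]))))
    icp = np.concatenate(([0.0],
        np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
    late = t > t_star
    y = ict[late] / ct[late]
    x = icp[late] / ct[late]
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic one-tissue-compartment tracer with K1 = 0.5, k2 = 0.1,
# so the true VT = K1/k2 = 5 (values are illustrative).
t = np.linspace(0.0, 90.0, 901)
cp = 10.0 * np.exp(-0.15 * t)          # plasma input function
K1, k2, dt = 0.5, 0.1, t[1] - t[0]
ct = np.zeros_like(t)
for i in range(1, len(t)):             # Euler integration of dC/dt = K1*cp - k2*C
    ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
print(logan_vt(t, ct, cp))             # close to the true VT of 5
```

Because no blood-volume term is included, applying this estimator to data containing vascular signal produces exactly the kind of bias the abstract describes; a DVR is then the ratio of target-region to reference-region VT.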

  11. Study of a Steel’s Energy Absorption System for Heavy Quadricycles and Nonlinear Explicit Dynamic Analysis of its Behavior under Impact by FEM

    PubMed Central

    López Campos, José Ángel; Segade Robleda, Abraham; Vilán Vilán, José Antonio; García Nieto, Paulino José; Blanco Cordero, Javier

    2015-01-01

    Current knowledge of the behavior of heavy quadricycles under impact is still very poor. One of the most significant causes is the lack of energy absorption in the vehicle frame or its steel chassis structure. For this reason, special steels (with yield stresses equal to or greater than 350 MPa) are commonly used in the automotive industry due to their great strain-hardening properties along the plastic zone, which allow good energy absorption under impact. This paper presents a proposal for a steel quadricycle energy absorption system which meets the percentages of energy absorption of conventional vehicle systems. This proposal is validated by explicit dynamics simulation, which defines the whole problem mathematically and verifies behavior under impact at speeds of 40 km/h and 56 km/h using the finite element method (FEM). One of the main consequences of this study is that this FEM-based methodology can successfully tackle highly nonlinear problems like this one, avoiding the need to carry out expensive experimental tests, with consequent economic savings. Finally, the conclusions from this innovative research work are given. PMID:28793607
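    Explicit dynamics codes of the kind used here advance the solution with a central-difference scheme whose stable time step is bounded by the highest natural frequency of the mesh. A toy single-degree-of-freedom sketch of that integrator (velocity-Verlet form); the mass, crush stiffness and function name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def explicit_impact(m, k, v0, dt, steps):
    """Central-difference (velocity-Verlet) integration of a 1-DOF impact
    model: mass m hits a linear spring k with initial velocity v0."""
    x, v = 0.0, v0
    a = -k * x / m
    xmax = 0.0
    for _ in range(steps):
        v_half = v + 0.5 * dt * a   # half-step velocity
        x = x + dt * v_half         # position update
        a = -k * x / m              # acceleration at the new position
        v = v_half + 0.5 * dt * a   # complete the velocity update
        xmax = max(xmax, x)
    return xmax

m, k, v0 = 400.0, 2.0e5, 56 / 3.6   # ~quadricycle mass, crush stiffness, 56 km/h
dt = 0.1 * 2.0 / np.sqrt(k / m)     # well below the explicit stability limit 2/omega
xmax = explicit_impact(m, k, v0, dt, steps=2000)
print(round(xmax, 3))               # analytic peak deflection: v0*sqrt(m/k) ~ 0.696 m
```

    In a real FEM model the same update runs per node, and the stable step is set by the smallest element, which is why explicit crash simulations use very small time increments.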

  12. Study of a Steel's Energy Absorption System for Heavy Quadricycles and Nonlinear Explicit Dynamic Analysis of its Behavior under Impact by FEM.

    PubMed

    López Campos, José Ángel; Segade Robleda, Abraham; Vilán Vilán, José Antonio; García Nieto, Paulino José; Blanco Cordero, Javier

    2015-10-10

    Current knowledge of the behavior of heavy quadricycles under impact is still very poor. One of the most significant causes is the lack of energy absorption in the vehicle frame or its steel chassis structure. For this reason, special steels (with yield stresses equal to or greater than 350 MPa) are commonly used in the automotive industry due to their great strain-hardening properties along the plastic zone, which allow good energy absorption under impact. This paper presents a proposal for a steel quadricycle energy absorption system which meets the percentages of energy absorption of conventional vehicle systems. This proposal is validated by explicit dynamics simulation, which defines the whole problem mathematically and verifies behavior under impact at speeds of 40 km/h and 56 km/h using the finite element method (FEM). One of the main consequences of this study is that this FEM-based methodology can successfully tackle highly nonlinear problems like this one, avoiding the need to carry out expensive experimental tests, with consequent economic savings. Finally, the conclusions from this innovative research work are given.

  13. A Coarse-grained Model of Stratum Corneum Lipids: Free Fatty Acids and Ceramide NS

    PubMed Central

    Moore, Timothy C.; Iacovella, Christopher R.; Hartkamp, Remco; Bunge, Annette L.; McCabe, Clare

    2017-01-01

    Ceramide (CER)-based biological membranes are used both experimentally and in simulations as simplified model systems of the skin barrier. Molecular dynamics studies have generally focused on simulating preassembled structures using atomistically detailed models of CERs, which limit the system sizes and timescales that can practically be probed, rendering them ineffective for studying particular phenomena, including self-assembly into bilayer and lamellar superstructures. Here, we report on the development of a coarse-grained (CG) model for CER NS, the most abundant CER in human stratum corneum. Multistate iterative Boltzmann inversion is used to derive the intermolecular pair potentials, resulting in a force field that is applicable over a range of state points and suitable for studying ceramide self-assembly. The chosen CG mapping, which includes explicit interaction sites for hydroxyl groups, captures the directional nature of hydrogen bonding and allows for accurate predictions of several key structural properties of CER NS bilayers. Simulated wetting experiments allow the hydrophobicity of CG beads to be accurately tuned to match atomistic wetting behavior, which affects the whole system since inaccurate hydrophobic character is found to unphysically alter the lipid packing in hydrated lamellar states. We find that CER NS can self-assemble into multilamellar structures, enabling the study of lipid systems more representative of the multilamellar lipid structures present in the skin barrier. The coarse-grained force field derived herein represents an important step in using molecular dynamics to study the human skin barrier, which gives a resolution not available through experiment alone. PMID:27564869

  14. Multi-model predictive control based on LMI: from the adaptation of the state-space model to the analytic description of the control law

    NASA Astrophysics Data System (ADS)

    Falugi, P.; Olaru, S.; Dumur, D.

    2010-08-01

    This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For practical implementation, the construction of suitable (explicit) descriptions of the control law is described through concrete algorithms.

  15. Innovations in individual feature history management - The significance of feature-based temporal model

    USGS Publications Warehouse

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationship along with the ISO's temporal primitives of a feature, in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparison during the query process. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The result of temporal queries on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.
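    The gain from an explicit temporal relationship structure is that topological relations between feature versions are stored once instead of being recomputed at query time. A small illustrative sketch (the `Period` class and relation names are hypothetical, loosely following Allen's interval relations rather than the ISO 19108 schema itself):

```python
from dataclasses import dataclass

@dataclass
class Period:
    begin: int  # e.g. year a parcel version became valid
    end: int    # year it was superseded

def temporal_relation(a: Period, b: Period) -> str:
    """Classify the topological relationship between two valid-time
    periods (a coarse subset of Allen's interval relations)."""
    if a.end < b.begin:
        return "before"
    if a.end == b.begin:
        return "meets"
    if a.begin == b.begin and a.end == b.end:
        return "equals"
    if a.begin >= b.begin and a.end <= b.end:
        return "during"
    if a.begin < b.end and b.begin < a.end:
        return "overlaps"
    return "after" if a.begin > b.end else "other"

# storing this relation explicitly avoids re-deriving it per query
v1 = Period(1990, 1998)   # hypothetical parcel version
v2 = Period(1998, 2005)   # its successor after a subdivision
print(temporal_relation(v1, v2))  # -> meets
```

    A history query then filters on the stored relation (e.g. all versions that "meet" a given one) rather than comparing geometries or timestamps on the fly.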

  16. A concept for holistic whole body MRI data analysis, Imiomics

    PubMed Central

    Malmberg, Filip; Johansson, Lars; Lind, Lars; Sundbom, Magnus; Ahlström, Håkan; Kullberg, Joel

    2017-01-01

    Purpose To present and evaluate a whole-body image analysis concept, Imiomics (imaging–omics) and an image registration method that enables Imiomics analyses by deforming all image data to a common coordinate system, so that the information in each voxel can be compared between persons or within a person over time and integrated with non-imaging data. Methods The presented image registration method utilizes relative elasticity constraints of different tissue obtained from whole-body water-fat MRI. The registration method is evaluated by inverse consistency and Dice coefficients and the Imiomics concept is evaluated by example analyses of importance for metabolic research using non-imaging parameters where we know what to expect. The example analyses include whole body imaging atlas creation, anomaly detection, and cross-sectional and longitudinal analysis. Results The image registration method evaluation on 128 subjects shows low inverse consistency errors and high Dice coefficients. Also, the statistical atlas with fat content intensity values shows low standard deviation values, indicating successful deformations to the common coordinate system. The example analyses show expected associations and correlations which agree with explicit measurements, and thereby illustrate the usefulness of the proposed Imiomics concept. Conclusions The registration method is well-suited for Imiomics analyses, which enable analyses of relationships to non-imaging data, e.g. clinical data, in new types of holistic targeted and untargeted big-data analysis. PMID:28241015
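    Once all volumes are deformed to the common coordinate system, an Imiomics-style analysis reduces to per-voxel statistics against a non-imaging variable. A hedged toy sketch with random stand-in arrays (no registration is performed here; shapes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, shape = 40, (8, 8, 8)                  # toy stand-ins for registered volumes
volumes = rng.normal(size=(n_subj, *shape))    # voxel values in common coordinates
clinical = rng.normal(size=n_subj)             # e.g. a metabolic lab value per subject

# voxel-wise Pearson correlation with the non-imaging variable
v = volumes.reshape(n_subj, -1)
v_c = v - v.mean(axis=0)
c_c = clinical - clinical.mean()
r = (v_c * c_c[:, None]).sum(0) / (
    np.sqrt((v_c ** 2).sum(0)) * np.sqrt((c_c ** 2).sum()))
r_map = r.reshape(shape)                       # one correlation value per voxel
print(r_map.shape, float(abs(r_map).max()) <= 1.0)
```

    The resulting map can be thresholded or tested voxel-wise, which is how the cross-sectional example analyses relate imaging to clinical parameters.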

  17. Multiscale Cloud System Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell W.

    2009-01-01

    The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.

  18. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  19. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
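    The stiffness splitting these papers evaluate can be illustrated with the simplest IMEX member, first-order IMEX Euler, on a Prothero-Robinson test problem: the stiff term (a stand-in for the acoustic modes) is taken implicitly while the non-stiff forcing is taken explicitly. All names and values below are illustrative, not from the papers:

```python
import numpy as np

lam = -1.0e4                     # stiff "acoustic" part, treated implicitly
f_ex = lambda t, y: np.cos(t)    # non-stiff part, treated explicitly
# Prothero-Robinson problem: y' = lam*(y - sin t) + cos t, exact solution y = sin t

def imex_euler(y0, t_end, dt):
    n = round(t_end / dt)
    t, y = 0.0, y0
    for _ in range(n):
        # y_{n+1} = y_n + dt*f_ex(t_n, y_n) + dt*lam*(y_{n+1} - sin(t_{n+1}))
        rhs = y + dt * f_ex(t, y) - dt * lam * np.sin(t + dt)
        y = rhs / (1.0 - dt * lam)   # linear implicit solve for the stiff term
        t += dt
    return y

dt = 1.0e-2   # ~50x larger than the explicit stability limit 2/|lam|
y = imex_euler(0.0, 1.0, dt)
print(abs(y - np.sin(1.0)) < 1e-3)   # -> True
```

    The higher-order ARK schemes compared in the papers (ARS343, ARK324, etc.) generalize this idea to multiple paired explicit/implicit stages, and the "linearly implicit" option corresponds to replacing the nonlinear stage solve with a single linear solve as above.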

  20. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  1. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  2. Structural kinetic modeling of metabolic networks.

    PubMed

    Steuer, Ralf; Gross, Thilo; Selbig, Joachim; Blasius, Bernd

    2006-08-08

    To develop and investigate detailed mathematical models of metabolic processes is one of the primary challenges in systems biology. However, despite considerable advance in the topological analysis of metabolic networks, kinetic modeling is still often severely hampered by inadequate knowledge of the enzyme-kinetic rate laws and their associated parameter values. Here we propose a method that aims to give a quantitative account of the dynamical capabilities of a metabolic system, without requiring any explicit information about the functional form of the rate equations. Our approach is based on constructing a local linear model at each point in parameter space, such that each element of the model is either directly experimentally accessible or amenable to a straightforward biochemical interpretation. This ensemble of local linear models, encompassing all possible explicit kinetic models, then allows for a statistical exploration of the comprehensive parameter space. The method is exemplified on two paradigmatic metabolic systems: the glycolytic pathway of yeast and a realistic-scale representation of the photosynthetic Calvin cycle.
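    The ensemble idea can be sketched on a toy pathway: instead of choosing rate laws, one samples normalized elasticity (saturation) parameters, builds the corresponding Jacobians directly, and classifies them by their eigenvalues. Everything below (the pathway, parameter ranges, names) is an illustrative assumption, not the paper's glycolysis or Calvin-cycle model:

```python
import numpy as np

rng = np.random.default_rng(1)

def jacobian(th11, th12, th22):
    """Normalized Jacobian of a toy 2-metabolite pathway
    (input -> S1 -> S2 -> out, with S2 activating its own producer).
    The th* are elasticity/saturation parameters; no explicit rate
    laws are needed to write the Jacobian down."""
    return np.array([[-th11,        -th12],
                     [ th11, th12 - th22]])

n, stable = 10000, 0
for _ in range(n):
    th11, th22 = rng.uniform(0, 1, 2)   # substrate elasticities in [0, 1]
    th12 = rng.uniform(0, 2)            # activation elasticity (may exceed 1)
    J = jacobian(th11, th12, th22)
    if np.linalg.eigvals(J).real.max() < 0:
        stable += 1

print(f"stable fraction over the sampled ensemble: {stable / n:.2f}")
```

    Statistical exploration then amounts to asking which parameter combinations make instability (here, a positive trace) likely, without ever committing to a specific kinetic model.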

  3. [The contribution of systems theory and "existential integrative psychotherapy" to the relationship of the concepts "endogenous", "exogenous", "psychogenic", and "sociogenic" in psychic disorders].

    PubMed

    Bühler, K E; Wyss, D

    1980-01-01

    Proof is given that the "atomistic" concepts of sickness lead to insolvable contradictions of methodological and logical origin. In this study these contradictions are exemplified and critically analysed, with historical aspects included. We then propose the dimensions "Interior--Exterior" and "Psychogenic--Somatogenic" as a heuristic model, dimensions on which any sickness is to be located according to its basic causes. General System Theory has developed a new formal concept of sickness based on a relatively complete and integral vision of the human being. Nevertheless, the constructs of General System Theory remain incomplete, as they include only objects and their relations, never individual subjects. Wyss, however, has explicitly established an anthropology of the subject, which he connects with his communication-oriented concept of sickness. This concept does not judge sickness to be contradictory to health; rather, sickness and health together form a functional whole on a higher level of abstraction. On this level the organism and its functions, the "interior", represent an inherent component of the "exterior". "Interior" and "exterior"--differentiated into various items--attempt to establish an equilibrium that is always in danger of being disequilibrated.

  4. Spatially explicit and stochastic simulation of forest landscape fire disturbance and succession

    Treesearch

    Hong S. He; David J. Mladenoff

    1999-01-01

    Understanding disturbance and recovery of forest landscapes is a challenge because of complex interactions over a range of temporal and spatial scales. Landscape simulation models offer an approach to studying such systems at broad scales. Fire can be simulated spatially using mechanistic or stochastic approaches. We describe the fire module in a spatially explicit,...

  5. Comparing RIEGL RiCOPTER UAV LiDAR Derived Canopy Height and DBH with Terrestrial LiDAR

    PubMed Central

    Bartholomeus, Harm M.; Kooistra, Lammert

    2017-01-01

    In recent years, LIght Detection And Ranging (LiDAR) and especially Terrestrial Laser Scanning (TLS) systems have shown the potential to revolutionise forest structural characterisation by providing unprecedented 3D data. However, manned Airborne Laser Scanning (ALS) requires costly campaigns and produces relatively low point density, while TLS is labour intense and time demanding. Unmanned Aerial Vehicle (UAV)-borne laser scanning can be the way in between. In this study, we present first results and experiences with the RIEGL RiCOPTER with VUX®-1UAV ALS system and compare it with the well tested RIEGL VZ-400 TLS system. We scanned the same forest plots with both systems over the course of two days. We derived Digital Terrain Models (DTMs), Digital Surface Models (DSMs) and finally Canopy Height Models (CHMs) from the resulting point clouds. ALS CHMs were on average 11.5 cm higher in five plots with different canopy conditions. This showed that TLS could not always detect the top of canopy. Moreover, we extracted trunk segments of 58 trees for ALS and TLS simultaneously, of which 39 could be used to model Diameter at Breast Height (DBH). ALS DBH showed a high agreement with TLS DBH with a correlation coefficient of 0.98 and root mean square error of 4.24 cm. We conclude that RiCOPTER has the potential to perform comparable to TLS for estimating forest canopy height and DBH under the studied forest conditions. Further research should be directed to testing UAV-borne LiDAR for explicit 3D modelling of whole trees to estimate tree volume and subsequently Above-Ground Biomass (AGB). PMID:29039755
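    Of the two products compared here, the CHM step is a per-pixel difference (CHM = DSM - DTM), while DBH from a point cloud is typically obtained by fitting a circle to a horizontal trunk slice at breast height. A hedged sketch of an algebraic (Kasa) circle fit on synthetic points; the noise level and function names are illustrative, not the paper's workflow:

```python
import numpy as np

def fit_dbh(xy):
    """Algebraic (Kasa) least-squares circle fit to a horizontal slice of
    trunk points at breast height; returns the diameter estimate.
    Uses 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return 2 * r

# synthetic trunk slice: radius 0.20 m circle with small scan noise
rng = np.random.default_rng(2)
ang = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([0.2 * np.cos(ang), 0.2 * np.sin(ang)])
pts += rng.normal(scale=0.002, size=pts.shape)
print(round(fit_dbh(pts), 3))   # -> ~0.4 m, i.e. a DBH of about 40 cm
```

    Real trunk slices are only partially visible from any one scan position, which is one reason only 39 of the 58 extracted segments could be modelled in the study.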

  6. Comparing RIEGL RiCOPTER UAV LiDAR Derived Canopy Height and DBH with Terrestrial LiDAR.

    PubMed

    Brede, Benjamin; Lau, Alvaro; Bartholomeus, Harm M; Kooistra, Lammert

    2017-10-17

    In recent years, LIght Detection And Ranging (LiDAR) and especially Terrestrial Laser Scanning (TLS) systems have shown the potential to revolutionise forest structural characterisation by providing unprecedented 3D data. However, manned Airborne Laser Scanning (ALS) requires costly campaigns and produces relatively low point density, while TLS is labour intense and time demanding. Unmanned Aerial Vehicle (UAV)-borne laser scanning can be the way in between. In this study, we present first results and experiences with the RIEGL RiCOPTER with VUX®-1UAV ALS system and compare it with the well tested RIEGL VZ-400 TLS system. We scanned the same forest plots with both systems over the course of two days. We derived Digital Terrain Models (DTMs), Digital Surface Models (DSMs) and finally Canopy Height Models (CHMs) from the resulting point clouds. ALS CHMs were on average 11.5 cm higher in five plots with different canopy conditions. This showed that TLS could not always detect the top of canopy. Moreover, we extracted trunk segments of 58 trees for ALS and TLS simultaneously, of which 39 could be used to model Diameter at Breast Height (DBH). ALS DBH showed a high agreement with TLS DBH with a correlation coefficient of 0.98 and root mean square error of 4.24 cm. We conclude that RiCOPTER has the potential to perform comparable to TLS for estimating forest canopy height and DBH under the studied forest conditions. Further research should be directed to testing UAV-borne LiDAR for explicit 3D modelling of whole trees to estimate tree volume and subsequently Above-Ground Biomass (AGB).

  7. Cross-section fluctuations in chaotic scattering systems.

    PubMed

    Ericson, Torleif E O; Dietz, Barbara; Richter, Achim

    2016-10-01

    Exact analytical expressions for the cross-section correlation functions of chaotic scattering systems have hitherto been derived only under special conditions. The objective of the present article is to provide expressions that are applicable beyond these restrictions. The derivation is based on a statistical model of Breit-Wigner type for chaotic scattering amplitudes which has been shown to describe the exact analytical results for the scattering (S)-matrix correlation functions accurately. Our results are given in the energy and in the time representations and apply in the whole range from isolated to overlapping resonances. The S-matrix contributions to the cross-section correlations are obtained in terms of explicit irreducible and reducible correlation functions. Consequently, the model can be used for a detailed exploration of the key features of the cross-section correlations and the underlying physical mechanisms. In the region of isolated resonances, the cross-section correlations contain a dominant contribution from the self-correlation term. For narrow states the self-correlations originate predominantly from widely spaced states with exceptionally large partial width. In the asymptotic region of well-overlapping resonances, the cross-section autocorrelation functions are given in terms of the S-matrix autocorrelation functions. For inelastic correlations, in particular, the Ericson fluctuations rapidly dominate in that region. Agreement with known analytical and experimental results is excellent.

  8. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization.

    PubMed

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixture plays an important part in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the whole manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task, because there is generally no direct and explicit expression relating the locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed by training on a data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method.
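    The surrogate idea is straightforward: sample locating layouts, run finite element simulations to obtain responses, then fit a radial basis function model mapping layout to deformation. A minimal Gaussian-RBF least-squares sketch with a synthetic stand-in response (all names, the 2-coordinate layout, and the test function are illustrative assumptions):

```python
import numpy as np

def rbf_design(X, centers, gamma):
    """Gaussian RBF features: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(50, 2))        # sampled locating layouts (2 coords)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # stand-in for FE-simulated deformation

centers, gamma = X[:20], 10.0              # centres taken from the samples
Phi = rbf_design(X, centers, gamma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output-layer weights

rmse = np.sqrt(((Phi @ w - y) ** 2).mean())  # training fit quality
y_hat = rbf_design(np.array([[0.5, 0.5]]), centers, gamma) @ w
print(rmse < 0.2, y_hat.shape)             # -> True (1,)
```

    Once trained, the cheap surrogate `y_hat` replaces the expensive FE solve inside an optimization loop over candidate locator positions.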

  9. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization

    PubMed Central

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixture plays an important part in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the whole manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task, because there is generally no direct and explicit expression relating the locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed by training on a data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method. PMID:27127499

  10. Uncertainties in SOA Formation from the Photooxidation of α-pinene

    NASA Astrophysics Data System (ADS)

    McVay, R.; Zhang, X.; Aumont, B.; Valorso, R.; Camredon, M.; La, S.; Seinfeld, J.

    2015-12-01

    Explicit chemical models such as GECKO-A (the Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) enable detailed modeling of gas-phase photooxidation and secondary organic aerosol (SOA) formation. Comparison between these explicit models and chamber experiments can provide insight into processes that are missing or unknown in these models. GECKO-A is used to model seven SOA formation experiments from α-pinene photooxidation conducted at varying seed particle concentrations with varying oxidation rates. We investigate various physical and chemical processes to evaluate the extent of agreement between the experiments and the model predictions. We examine the effect of vapor wall loss on SOA formation and how the importance of this effect changes at different oxidation rates. Proposed gas-phase autoxidation mechanisms are shown to significantly affect SOA predictions. The potential effects of particle-phase dimerization and condensed-phase photolysis are investigated. We demonstrate the extent to which SOA predictions in the α-pinene photooxidation system depend on uncertainties in the chemical mechanism.

  11. Assessing implicit models for nonpolar mean solvation forces: The importance of dispersion and volume terms

    PubMed Central

    Wagoner, Jason A.; Baker, Nathan A.

    2006-01-01

    Continuum solvation models provide appealing alternatives to explicit solvent methods because of their ability to reproduce solvation effects while alleviating the need for expensive sampling. Our previous work has demonstrated that Poisson-Boltzmann methods are capable of faithfully reproducing polar explicit solvent forces for dilute protein systems; however, the popular solvent-accessible surface area model was shown to be incapable of accurately describing nonpolar solvation forces at atomic-length scales. Therefore, alternate continuum methods are needed to reproduce nonpolar interactions at the atomic scale. In the present work, we address this issue by supplementing the solvent-accessible surface area model with additional volume and dispersion integral terms suggested by scaled particle models and Weeks–Chandler–Andersen theory, respectively. This more complete nonpolar implicit solvent model shows very good agreement with explicit solvent results and suggests that, although often overlooked, the inclusion of appropriate dispersion and volume terms are essential for an accurate implicit solvent description of atomic-scale nonpolar forces. PMID:16709675
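    The supplemented nonpolar model described above can be written schematically (generic symbols, not the paper's exact notation) as:

```latex
\Delta G_{\mathrm{np}} \;\approx\;
\underbrace{\gamma\, A}_{\text{SASA term}}
\;+\;
\underbrace{p\, V}_{\text{volume term}}
\;+\;
\underbrace{\bar{\rho}\int_{\Omega} u^{\mathrm{disp}}(\mathbf{r})\,\mathrm{d}\mathbf{r}}_{\text{dispersion integral}}
```

    with a surface-tension coefficient \(\gamma\) on the solvent-accessible area \(A\), a pressure-like coefficient \(p\) on the solute volume \(V\) (as in scaled particle models), and a Weeks-Chandler-Andersen attractive potential \(u^{\mathrm{disp}}\) integrated, weighted by the bulk solvent density \(\bar{\rho}\), over the solvent region \(\Omega\).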

  12. Tracing global supply chains to air pollution hotspots

    NASA Astrophysics Data System (ADS)

    Moran, Daniel; Kanemoto, Keiichiro

    2016-09-01

    While high-income countries have made significant strides since the 1970s in improving air quality, air pollution continues to rise in many developing countries and the world as a whole. A significant share of the pollution burden in developing countries can be attributed to production for export to consumers in high-income nations. However, it remains a challenge to quantify individual actors’ share of responsibility for pollution, and to involve parties other than primary emitters in cleanup efforts. Here we present a new spatially explicit modeling approach to link SO2, NOx, and PM10 severe emissions hotspots to final consumers via global supply chains. These maps show developed countries reducing their emissions domestically but driving new pollution hotspots in developing countries. This is also the first time a spatially explicit footprint inventory has been established. Linking consumers and supply chains to emissions hotspots creates opportunities for other parties to participate alongside primary emitters and local regulators in pollution abatement efforts.
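    Consumption-based (footprint) accounting of this kind rests on the Leontief inverse of an input-output table: total output x = (I - A)^(-1) y, with emissions then attributed to final demand. A toy 3-sector sketch; all coefficients are illustrative, and real multi-regional tables behind such maps have thousands of sector-region pairs:

```python
import numpy as np

# toy 3-sector input-output model: technical coefficients A, final demand y,
# and emissions intensities e (emissions per unit output) -- all illustrative
A = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.1, 0.0, 0.1]])
y = np.array([100.0, 50.0, 30.0])     # final consumption by sector
e = np.array([0.5, 2.0, 1.0])         # e.g. kg SO2 per unit output

L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse
x = L @ y                             # total output needed to meet demand
footprint = e @ x                     # consumption-based emissions total

# attribute emissions source-by-source to each final-consumer sector
per_consumer = (np.diag(e) @ L) @ np.diag(y)
print(np.isclose(per_consumer.sum(), footprint))  # -> True
```

    The spatial step in the paper additionally maps each producing sector's emissions onto gridded hotspots, so the rows of the attribution matrix become maps rather than scalars.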

  13. Scattering matrices of Lamb waves at irregular surface and void defects.

    PubMed

    Feng, Feilong; Shen, Jianzhong; Lin, Shuyu

    2012-08-01

    Time-harmonic solution of Lamb wave scattering in a plane-strain waveguide with irregular thickness is investigated based on stair-step discretization and stepwise mode matching. The transfer relations of the transmission matrices and reflection matrices are derived in both directions of the waveguide. With these, an explicit expression of the scattering matrix is derived. When the scattering region of an inner irregular defect is geometrically divided into several parts composed of sub-waveguides with variable thicknesses and void regions with vertical free edges corresponding to the plate surfaces, the scattering matrix of the whole region can then be derived by modal matching along the artificial boundaries, as explicit functions of all the scattering matrices of the sub-waveguides and reflection matrices of the free edges. The effectiveness of the formulation is examined by numerical examples; the calculated scattering coefficients are in good agreement with those obtained from numerical simulation models. Copyright © 2012 Elsevier B.V. All rights reserved.
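    Combining the scattering matrices of adjacent sections follows the usual cascading (Redheffer star) algebra, in which a loop factor sums the multiple reflections trapped between the two sections. The scalar, single-mode sketch below is only an illustrative stand-in for the paper's matrix formulation; real Lamb-wave coefficients are complex-valued and multimodal.

```python
def cascade(s1, s2):
    """Combine two scattering sections (single-mode, scalar sketch).

    Each section is (r, t, rp, tp): reflection/transmission for
    left-incident waves (r, t) and for right-incident waves (rp, tp).
    The loop factor 1 / (1 - rp1 * r2) sums the infinite series of
    reflections bouncing between the two sections.
    """
    r1, t1, rp1, tp1 = s1
    r2, t2, rp2, tp2 = s2
    loop = 1.0 / (1.0 - rp1 * r2)
    r  = r1 + tp1 * r2 * loop * t1      # overall left-side reflection
    t  = t2 * loop * t1                 # overall left-to-right transmission
    rp = rp2 + t2 * rp1 * loop * tp2    # overall right-side reflection
    tp = tp1 * loop * tp2               # overall right-to-left transmission
    return (r, t, rp, tp)

# two identical weak reflectors, cascaded
step = (0.2, 0.8, 0.2, 0.8)
combined = cascade(step, step)
```

    In the matrix case the scalar division becomes the inverse of (I - R'1 R2), but the structure of the formula is the same.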

  14. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
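    The abstract does not give the ADAPT formula, so the sketch below instead computes the closely related, standard net benefit at a probability threshold (the quantity decision-curve analysis plots against the threshold), purely to illustrate how a threshold-dependent decision-analytic index is evaluated from predicted risks and outcomes.

```python
def net_benefit(risks, outcomes, p_t):
    """Decision-analytic net benefit at probability threshold p_t.

    This is the standard decision-curve quantity, NOT the ADAPT index
    itself (whose formula the abstract does not state). Patients with
    predicted risk >= p_t are treated; each false positive is penalized
    by the threshold odds p_t / (1 - p_t), which encodes the relative
    utility of treating versus not treating.
    """
    n = len(risks)
    tp = sum(1 for r, y in zip(risks, outcomes) if r >= p_t and y == 1)
    fp = sum(1 for r, y in zip(risks, outcomes) if r >= p_t and y == 0)
    return tp / n - (fp / n) * p_t / (1.0 - p_t)

# hypothetical predicted risks and observed outcomes
risks    = [0.9, 0.8, 0.3, 0.2, 0.6, 0.1]
outcomes = [1,   1,   0,   0,   1,   0]
nb = net_benefit(risks, outcomes, 0.5)
```

    Plotting such an index over a range of thresholds, as an ADAPT curve does for ADAPT values, lets models be compared at a single threshold, over a plausible range, or across whole curves.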

  15. Ecosystem effects of environmental flows: Modelling and experimental floods in a dryland river

    USGS Publications Warehouse

    Shafroth, P.B.; Wilcox, A.C.; Lytle, D.A.; Hickey, J.T.; Andersen, D.C.; Beauchamp, Vanessa B.; Hautzinger, A.; McMullen, L.E.; Warner, A.

    2010-01-01

    Successful environmental flow prescriptions require an accurate understanding of the linkages among flow events, geomorphic processes and biotic responses. We describe models and results from experimental flow releases associated with an environmental flow program on the Bill Williams River (BWR), Arizona, in arid to semiarid western U.S.A. Two general approaches for improving knowledge and predictions of ecological responses to environmental flows are: (1) coupling physical system models to ecological responses and (2) clarifying empirical relationships between flow and ecological responses through implementation and monitoring of experimental flow releases. We modelled the BWR physical system using: (1) a reservoir operations model to simulate reservoir releases and reservoir water levels and estimate flow through the river system under a range of scenarios, (2) one- and two-dimensional river hydraulics models to estimate stage-discharge relationships at the whole-river and local scales, respectively, and (3) a groundwater model to estimate surface- and groundwater interactions in a large, alluvial valley on the BWR where surface flow is frequently absent. An example of a coupled, hydrology-ecology model is the Ecosystems Function Model, which we used to link a one-dimensional hydraulic model with riparian tree seedling establishment requirements to produce spatially explicit predictions of seedling recruitment locations in a Geographic Information System. We also quantified the effects of small experimental floods on the differential mortality of native and exotic riparian trees, on beaver dam integrity and distribution, and on the dynamics of differentially flow-adapted benthic macroinvertebrate groups. Results of model applications and experimental flow releases are contributing to adaptive flow management on the BWR and to the development of regional environmental flow standards. 
General themes that emerged from our work include the importance of response thresholds, which are commonly driven by geomorphic thresholds or mediated by geomorphic processes, and the importance of spatial and temporal variation in the effects of flows on ecosystems, which can result from factors such as longitudinal complexity and ecohydrological feedbacks. Published 2009.

  16. Supporting Space Systems Design via Systems Dependency Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Guariniello, Cesare

    The increasing size and complexity of space systems and space missions pose severe challenges to space systems engineers. When complex systems and Systems-of-Systems are involved, the behavior of the whole entity is not only due to that of the individual systems involved but also to the interactions and dependencies between the systems. Dependencies can be varied and complex, and designers usually do not perform analysis of the impact of dependencies at the level of complex systems, or this analysis involves excessive computational cost, or occurs at a later stage of the design process, after designers have already set detailed requirements, following a bottom-up approach. While classical systems engineering attempts to integrate the perspectives involved across the variety of engineering disciplines and the objectives of multiple stakeholders, there is still a need for more effective tools and methods capable of identifying, analyzing, and quantifying properties of the complex system as a whole and of modeling explicitly the effect of some of the features that characterize complex systems. This research describes the development and usage of Systems Operational Dependency Analysis and Systems Developmental Dependency Analysis, two methods based on parametric models of the behavior of complex systems, one in the operational domain and one in the developmental domain. The parameters of the developed models have intuitive meaning, are usable with subjective and quantitative data alike, and give direct insight into the causes of observed, and possibly emergent, behavior. The approach proposed in this dissertation combines models of one-to-one dependencies among systems and between systems and capabilities, to analyze and evaluate the impact of failures or delays on the outcome of the whole complex system. The analysis accounts for cascading effects, partial operational failures, multiple failures or delays, and partial developmental dependencies.
The user of these methods can assess the behavior of each system based on its internal status and on the topology of its dependencies on systems connected to it. Designers and decision makers can therefore quickly analyze and explore the behavior of complex systems and evaluate different architectures under various working conditions. The methods support educated decision making both in the design and in the update process of systems architecture, reducing the need to execute extensive simulations. In particular, in the phase of concept generation and selection, the information given by the methods can be used to identify promising architectures to be further tested and improved, while discarding architectures that do not show the required level of global features. The methods, when used in conjunction with appropriate metrics, also allow for improved reliability and risk analysis, as well as for automatic scheduling and re-scheduling based on the features of the dependencies and on the accepted level of risk. This dissertation illustrates the use of the two methods in sample aerospace applications, both in the operational and in the developmental domain. The applications show how to use the developed methodology to evaluate the impact of failures, assess the criticality of systems, quantify metrics of interest, quantify the impact of delays, support informed decision making when scheduling the development of systems and evaluate the achievement of partial capabilities. A larger, well-framed case study illustrates how the Systems Operational Dependency Analysis method and the Systems Developmental Dependency Analysis method can support analysis and decision making, at the mid and high level, in the design process of architectures for the exploration of Mars. The case study also shows how the methods do not replace the classical systems engineering methodologies, but support and improve them.
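    A minimal sketch of the dependency-propagation idea: each system's effective operability blends its internal status with the operability of the systems it depends on, so partial failures cascade through the topology. The blending rule and all parameters below are illustrative assumptions, not the actual SODA formulation.

```python
def operability(node, graph, self_status, alpha, memo=None):
    """Propagate operability through one-to-one dependencies (toy model).

    graph[node]      : list of nodes this node depends on (acyclic).
    self_status[node]: internal operability of the node, 0-100.
    alpha[node]      : in [0, 1], how strongly the node relies on its
                       dependencies (0 = fully independent).
    The weakest dependency dominates (min), a common conservative choice.
    """
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    deps = graph.get(node, [])
    if not deps:
        memo[node] = self_status[node]
        return memo[node]
    dep_op = min(operability(d, graph, self_status, alpha, memo) for d in deps)
    a = alpha[node]
    memo[node] = (1 - a) * self_status[node] + a * dep_op
    return memo[node]

# hypothetical chain: power -> comms -> science payload; power degrades to 40%
graph  = {"science": ["comms"], "comms": ["power"], "power": []}
status = {"science": 100.0, "comms": 100.0, "power": 40.0}
alpha  = {"science": 0.5, "comms": 0.8, "power": 0.0}
sci = operability("science", graph, status, alpha)
```

    Even this toy version shows the qualitative behavior the dissertation exploits: a designer can read off how much a partial upstream failure degrades a downstream capability without running a full simulation.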

  17. Role of seasonality on predator-prey-subsidy population dynamics.

    PubMed

    Levy, Dorian; Harrington, Heather A; Van Gorder, Robert A

    2016-05-07

    The role of seasonality on predator-prey interactions in the presence of a resource subsidy is examined using a system of non-autonomous ordinary differential equations (ODEs). The problem is motivated by the Arctic, inhabited by the ecological system of arctic foxes (predator), lemmings (prey), and seal carrion (subsidy). We construct two nonlinear, non-autonomous systems of ODEs, named the Primary Model and the n-Patch Model. The Primary Model considers spatial factors implicitly, and the n-Patch Model considers space explicitly as a "Stepping Stone" system. We establish the boundedness of the dynamics, as well as the necessity of sufficiently nutritious food for the survival of the predator. We investigate the importance of including the resource subsidy explicitly in the model, and the importance of accounting for predator mortality during migration. We find a variety of non-equilibrium dynamics for both systems, obtaining both limit cycles and chaotic oscillations. We then discuss the implications for biologically interesting predator-prey systems that include a subsidy under seasonal effects. Notably, we can observe the extinction or persistence of a species when the corresponding autonomous system might predict the opposite. Copyright © 2016 Elsevier Ltd. All rights reserved.
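    To show what a seasonally forced, non-autonomous predator-prey-subsidy system looks like numerically, the sketch below integrates a toy version by forward Euler. The functional forms and parameters are invented for illustration and are not the paper's Primary Model.

```python
import math

def simulate(days=3650, dt=0.01):
    """Forward-Euler integration of a toy seasonal predator-prey-subsidy
    system. n: prey (lemmings), p: predators (foxes), s: carrion subsidy.
    Prey growth and subsidy input are modulated by a yearly cycle, making
    the system non-autonomous (time appears explicitly in the RHS)."""
    n, p, s = 2.0, 1.0, 1.0
    for step in range(int(days / dt)):
        t = step * dt
        season = 1.0 + 0.5 * math.sin(2 * math.pi * t / 365.0)
        dn = 0.1 * season * n * (1 - n / 10.0) - 0.05 * n * p   # logistic prey, predation
        ds = 0.2 * season - 0.1 * s - 0.02 * s * p              # subsidy input, decay, scavenging
        dp = 0.02 * (n + s) * p - 0.08 * p                      # growth on prey + subsidy
        n = max(n + dn * dt, 0.0)
        s = max(s + ds * dt, 0.0)
        p = max(p + dp * dt, 0.0)
    return n, p, s

n, p, s = simulate()
```

    Varying the forcing amplitude or the subsidy input rate in such a sketch is enough to move the system between persistence and extinction regimes, which is the qualitative phenomenon the paper analyzes rigorously.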

  18. Complexity of life via collective mind

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2004-01-01

    The collective mind is introduced as a set of simple intelligent units (say, neurons, or interacting agents), which can communicate by exchange of information without explicit global control. Incomplete information is compensated by a sequence of random guesses symmetrically distributed around expectations with prescribed variances. Both the expectations and variances are the invariants characterizing the whole class of agents. These invariants are stored as parameters of the collective mind, while contributing to the dynamical formalism of the agents' evolution, and in particular, to the reflective chains of their nested abstract images of the selves and non-selves. The proposed model consists of the system of stochastic differential equations in the Langevin form representing the motor dynamics, and the corresponding Fokker-Planck equation representing the mental dynamics (motor dynamics describes the motion in physical space, while mental dynamics simulates the evolution of initial errors in terms of the probability density). The main departure of this model from Newtonian and statistical physics is due to a feedback from the mental to the motor dynamics which makes the Fokker-Planck equation nonlinear. Interpretation of this model from mathematical and physical viewpoints, as well as possible interpretations from biological, psychological, and social viewpoints, are discussed. The model is illustrated by the dynamics of a dialog.
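    A crude numerical sketch of the motor-dynamics side: Euler-Maruyama integration of Langevin equations in which each agent's drift depends on the ensemble expectation, a simple stand-in for the mental-to-motor feedback described above (in the full model the feedback runs through the probability density, not just its mean). All forms and constants are illustrative.

```python
import math, random

def collective_mind_step(xs, dt=0.01, sigma=0.5, k=1.0, rng=random):
    """One Euler-Maruyama step of a toy Langevin ensemble whose drift
    pulls each agent toward the collective expectation (sample mean).
    Because the drift depends on the agents' own distribution, the
    corresponding Fokker-Planck equation would be nonlinear."""
    mean = sum(xs) / len(xs)
    out = []
    for x in xs:
        drift = -k * (x - mean)                        # feedback from the "mental" level
        noise = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        out.append(x + drift * dt + noise)
    return out

rng = random.Random(0)                                 # seeded for reproducibility
xs = [rng.uniform(-1.0, 1.0) for _ in range(200)]
for _ in range(500):
    xs = collective_mind_step(xs, rng=rng)
```

    The ensemble stays clustered around a slowly wandering mean: noise spreads the agents while the self-referential drift pulls them back together.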

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieder, William R.; Allison, Steven D.; Davidson, Eric A.

    Microbes influence soil organic matter (SOM) decomposition and the long-term stabilization of carbon (C) in soils. We contend that by revising the representation of microbial processes and their interactions with the physicochemical soil environment, Earth system models (ESMs) may make more realistic global C cycle projections. Explicit representation of microbial processes presents considerable challenges due to the scale at which these processes occur. Thus, applying microbial theory in ESMs requires a framework to link micro-scale process-level understanding and measurements to macro-scale models used to make decadal- to century-long projections. Here, we review the diversity, advantages, and pitfalls of simulating soil biogeochemical cycles using microbial-explicit modeling approaches. We present a roadmap for how to begin building, applying, and evaluating reliable microbial-explicit model formulations that can be applied in ESMs. Drawing from experience with traditional decomposition models, we suggest: (1) guidelines for common model parameters and output that can facilitate future model intercomparisons; (2) development of benchmarking and model-data integration frameworks that can be used to effectively guide, inform, and evaluate model parameterizations with data from well-curated repositories; and (3) the application of scaling methods to integrate microbial-explicit soil biogeochemistry modules within ESMs. With contributions across scientific disciplines, we feel this roadmap can advance our fundamental understanding of soil biogeochemical dynamics and more realistically project likely soil C response to environmental change at global scales.
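    As a concrete, deliberately generic example of a microbial-explicit formulation, the sketch below couples a soil-carbon pool to a microbial-biomass pool through Michaelis-Menten decomposition kinetics. Parameters and units are arbitrary and not tied to any particular ESM module.

```python
def microbial_decomposition(years=100, dt=0.001):
    """Minimal two-pool microbial-explicit soil carbon model.

    soc: soil organic carbon; mic: microbial biomass (arbitrary units).
    Decomposition is catalyzed by microbial biomass and saturates in the
    substrate (Michaelis-Menten), unlike first-order "traditional"
    decomposition models where turnover is a fixed rate constant.
    """
    soc, mic = 100.0, 2.0
    inputs, vmax, km, cue, death = 5.0, 8.0, 200.0, 0.4, 0.8
    for _ in range(int(years / dt)):
        decomp = vmax * mic * soc / (km + soc)     # MM uptake flux
        dsoc = inputs - decomp + death * mic       # litter in, necromass returns to SOC
        dmic = cue * decomp - death * mic          # growth (carbon-use efficiency) minus turnover
        soc += dsoc * dt
        mic += dmic * dt
    return soc, mic

soc, mic = microbial_decomposition()
```

    The biomass feedback is what gives such models their characteristic (and sometimes problematic) damped oscillations toward equilibrium, one of the behaviors the benchmarking frameworks above are meant to evaluate.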

  20. Analysis of explicit model predictive control for path-following control

    PubMed Central

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080

  1. Analysis of explicit model predictive control for path-following control.

    PubMed

    Lee, Junho; Chang, Hyuk-Jun

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
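    The reason explicit MPC is cheap online is that the mp-QP solved offline yields a piecewise-affine control law over a polyhedral partition of the state space; at runtime the controller only locates the region containing the current state and applies that region's affine feedback. The sketch below shows this evaluation with a made-up one-dimensional partition (a linear band with saturation, the typical shape of constrained LQ solutions).

```python
def explicit_mpc_control(x, regions):
    """Evaluate an explicit MPC law: find the polyhedral region
    {x : H x <= h} containing state x and apply its affine feedback
    u = K x + k. `regions` is a list of (H, h, K, k) tuples produced
    offline by mp-QP; the numbers below are invented for illustration."""
    for H, h, K, k in regions:
        if all(sum(Hi[j] * x[j] for j in range(len(x))) <= hi + 1e-9
               for Hi, hi in zip(H, h)):
            return sum(K[j] * x[j] for j in range(len(x))) + k
    raise ValueError("state outside the explored state space")

# hypothetical 1-D lateral-error state, three regions:
regions = [
    ([[1.0]], [-1.0], [0.0], 0.5),                  # x <= -1 : saturated, u = +0.5
    ([[1.0], [-1.0]], [1.0, 1.0], [-0.5], 0.0),     # -1 <= x <= 1 : linear, u = -0.5 x
    ([[-1.0]], [-1.0], [0.0], -0.5),                # x >= 1 : saturated, u = -0.5
]
u = explicit_mpc_control([0.4], regions)
```

    Region lookup replaces the online QP entirely, which is what makes the approach viable at automotive sampling rates; the trade-off is that the number of regions grows with horizon length and constraint count.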

  2. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    NASA Astrophysics Data System (ADS)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two-dimensional (2D) representations that are sampled from the three-dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections group in retaining knowledge of difficult instances of sectional anatomy after the retention interval.
The benefit of learning from an integrated 2D3D representation suggests that there are some spatial transformations which are better retained if they are learned through an explicit demonstration. Participants also showed evidence of continued learning on the tests of generalization with the help of cues and practice, even without feedback. This finding suggests that the computer-based learning programs used in this study were good tools for instruction of neuroanatomy.

  3. Illustrating the coupled human-environment system for vulnerability analysis: three case studies.

    PubMed

    Turner, B L; Matson, Pamela A; McCarthy, James J; Corell, Robert W; Christensen, Lindsey; Eckley, Noelle; Hovelsrud-Broda, Grete K; Kasperson, Jeanne X; Kasperson, Roger E; Luers, Amy; Martello, Marybeth L; Mathiesen, Svein; Naylor, Rosamond; Polsky, Colin; Pulsipher, Alexander; Schiller, Andrew; Selin, Henrik; Tyler, Nicholas

    2003-07-08

    The vulnerability framework of the Research and Assessment Systems for Sustainability Program explicitly recognizes the coupled human-environment system and accounts for interactions in the coupling affecting the system's responses to hazards and its vulnerability. This paper illustrates the usefulness of the vulnerability framework through three case studies: the tropical southern Yucatán, the arid Yaqui Valley of northwest Mexico, and the pan-Arctic. Together, these examples illustrate the role of external forces in reshaping the systems in question and their vulnerability to environmental hazards, as well as the different capacities of stakeholders, based on their access to social and biophysical capital, to respond to the changes and hazards. The framework proves useful in directing attention to the interacting parts of the coupled system and helps identify gaps in information and understanding relevant to reducing vulnerability in the systems as a whole.

  4. Two disjunct Pleistocene populations and anisotropic postglacial expansion shaped the current genetic structure of the relict plant Amborella trichopoda

    PubMed Central

    Tournebize, Rémi; Manel, Stéphanie; Vigouroux, Yves; Munoz, François; de Kochko, Alexandre

    2017-01-01

    Past climate fluctuations shaped the population dynamics of organisms in space and time, and have impacted their present intra-specific genetic structure. Demo-genetic modelling allows inferring the way past demographic and migration dynamics have determined this structure. Amborella trichopoda is an emblematic relict plant endemic to New Caledonia, widely distributed in the understory of non-ultramafic rainforests. We assessed the influence of the last glacial climates on the demographic history and the paleo-distribution of 12 Amborella populations covering the whole current distribution. We performed coalescent genetic modelling of these dynamics, based on both whole-genome resequencing and microsatellite genotyping data. We found that the two main genetic groups of Amborella were shaped by the divergence of two ancestral populations during the last glacial maximum. From 12,800 years BP, the South ancestral population has expanded 6.3-fold while the size of the North population has remained stable. Recent asymmetric gene flow between the groups further contributed to the phylogeographical pattern. Spatially explicit coalescent modelling allowed us to estimate the location of ancestral populations with good accuracy (< 22 km) and provided indications regarding the mid-elevation pathways that facilitated post-glacial expansion. PMID:28820899

  5. Black-box Brain Experiments, Causal Mathematical Logic, and the Thermodynamics of Intelligence

    NASA Astrophysics Data System (ADS)

    Pissanetzky, Sergio; Lanzalaco, Felix

    2013-12-01

    Awareness of the possible existence of a yet-unknown principle of physics that explains cognition and intelligence does exist in several projects of emulation, simulation, and replication of the human brain currently under way. Brain simulation projects define their success partly in terms of the emergence of non-explicitly programmed biophysical signals such as self-oscillation and spreading cortical waves. We propose that a recently developed theory of physics known as Causal Mathematical Logic (CML), which links intelligence with causality and entropy and explains intelligent behavior from first principles, is the missing link. We further propose the theory as a roadway to understanding more complex biophysical signals, and to explaining the set of intelligence principles. The new theory applies to information considered as an entity by itself. The theory proposes that any device that processes information and exhibits intelligence must satisfy certain theoretical conditions irrespective of the substrate where it is being processed. The substrate can be the human brain, a part of it, a worm's brain, a motor protein that self-locomotes in response to its environment, or a computer. Here, we propose to extend the causal theory to systems in Neuroscience, because of its ability to model complex systems without heuristic approximations, and to predict emerging signals of intelligence directly from the models. The theory predicts the existence of a large number of observables (or "signals"), all of which emerge and can be directly and mathematically calculated from non-explicitly programmed detailed causal models. This approach aims at a universal and predictive language for Neuroscience and AGI based on causality and entropy, detailed enough to describe the finest structures and signals of the brain, yet general enough to accommodate the versatility and wholeness of intelligence.
Experiments focus on a black box: a device, of the kind described above, whose input and output are precisely known but whose internal implementation is not. The same input is separately supplied to a causal virtual machine, and the calculated output is compared with the measured output. The virtual machine, described in a previous paper, is a computer implementation of CML, fixed for all experiments and unrelated to the device in the black box. If the two outputs are equivalent, then the experiment has quantitatively succeeded and conclusions can be drawn regarding details of the internal implementation of the device. Several small black-box experiments were successfully performed and demonstrated the emergence of non-explicitly programmed cognitive function in each case.

  6. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Jonnathan H.

    1995-01-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor- and time-intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  7. Human locognosic acuity on the arm varies with explicit and implicit manipulations of attention: implications for interpreting elevated tactile acuity on an amputation stump.

    PubMed

    O'Boyle, D J; Moore, C E; Poliakoff, E; Butterworth, R; Sutton, A; Cody, F W

    2001-06-01

    In Experiment 1, normal subjects' ability to localize tactile stimuli (locognosia) delivered to the upper arm was significantly higher when they were instructed explicitly to direct their attention selectively to that segment than when they were instructed explicitly to distribute their attention across the whole arm. This elevation of acuity was eliminated when subjects' attentional resources were divided by superimposition of an effortful, secondary task during stimulation. In Experiment 2, in the absence of explicit attentional instruction, subjects' locognosic acuity on one of three arm segments was significantly higher when stimulation of that segment was 2.5 times more probable than stimulation of the other two segments. We surmise that the attentional mechanisms responsible for such modulations of locognosic acuity in normal subjects may contribute to the elevated sensory acuity observed on the stumps of amputees.

  8. Explicit criteria for prioritization of cataract surgery

    PubMed Central

    Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia

    2006-01-01

    Background Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. PMID:16512893

  9. Physiologically based pharmacokinetic modeling of polyethylene glycol-coated polyacrylamide nanoparticles in rats.

    PubMed

    Li, Dingsheng; Johanson, Gunnar; Emond, Claude; Carlander, Ulrika; Philbert, Martin; Jolliet, Olivier

    2014-08-01

    Nanoparticles' health risks depend on their biodistribution in the body. Phagocytosis may greatly affect this distribution but has not yet been explicitly accounted for in whole-body pharmacokinetic models. Here, we present a physiologically based pharmacokinetic model that includes phagocytosis of nanoparticles to explore the biodistribution of intravenously injected polyethylene glycol-coated polyacrylamide nanoparticles in rats. The model explains 97% of the observed variation in nanoparticle amounts across organs. According to the model, phagocytizing cells quickly capture nanoparticles until their saturation and thereby constitute a major reservoir in richly perfused organs (spleen, liver, bone marrow, lungs, heart and kidneys), storing 83% of the nanoparticles found in these organs 120 h after injection. Key determinants of nanoparticle biodistribution are the uptake capacities of phagocytizing cells in organs, the partitioning between tissue and blood, and the permeability between capillary blood and tissues. This framework can be extended to other types of nanoparticles by adapting these determinants.
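    A toy one-organ sketch of the saturable phagocytic capture the model describes: free nanoparticles are captured until the phagocytizing cells approach capacity, while washout to blood continues in parallel. The structure and all numbers below are illustrative, not the paper's fitted whole-body model.

```python
def organ_uptake(hours=120.0, dt=0.01):
    """Toy saturable phagocytosis compartment.

    free    : nanoparticle amount free in organ tissue
    captured: amount sequestered by phagocytizing cells, which fill
              up toward a maximum capacity (the saturation the PBPK
              model attributes to richly perfused organs).
    """
    free, captured = 100.0, 0.0
    k_up, capacity, k_out = 0.5, 80.0, 0.05   # per-hour rates, arbitrary units
    for _ in range(int(hours / dt)):
        room = max(1.0 - captured / capacity, 0.0)   # remaining uptake capacity
        uptake = k_up * free * room
        free += (-uptake - k_out * free) * dt        # capture + washout to blood
        captured += uptake * dt
    return free, captured

free, captured = organ_uptake()
```

    The qualitative behavior matches the abstract's account: capture dominates early, the phagocyte reservoir fills toward (but not past) its capacity, and the residual free fraction slowly washes out.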

  10. On the performance of explicit and implicit algorithms for transient thermal analysis

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.

    1980-09-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped parameter program and a special purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail such as avoiding thin or short high-conducting elements can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
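    The stiffness trade-off the abstract describes can be seen on a single stiff cooling equation: forward (explicit) Euler diverges once the time step exceeds its stability limit of 2/lambda, while backward (implicit) Euler remains stable at the same step. A minimal sketch with illustrative numbers:

```python
def explicit_euler(lam, t_env, T0, dt, steps):
    """Forward Euler for dT/dt = -lam * (T - t_env): conditionally
    stable, diverging when dt > 2 / lam (a stiff thermal mode)."""
    T = T0
    for _ in range(steps):
        T = T + dt * (-lam * (T - t_env))
    return T

def implicit_euler(lam, t_env, T0, dt, steps):
    """Backward Euler for the same equation: the update is solved for
    the new temperature, so it is stable for any dt > 0 (at the cost,
    in general problems, of solving a system each step)."""
    T = T0
    for _ in range(steps):
        T = (T + dt * lam * t_env) / (1.0 + dt * lam)
    return T

# stiff mode (lam = 1000 per unit time), step far above the explicit limit 0.002
bad  = explicit_euler(1000.0, 300.0, 400.0, 0.01, 50)   # oscillates and blows up
good = implicit_euler(1000.0, 300.0, 400.0, 0.01, 50)   # settles near 300
```

    This is the essential case for GEAR-style implicit integrators on stiff structural heat transfer, and conversely why reducing stiffness (e.g., avoiding thin high-conducting elements) can make explicit methods competitive again.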

  11. The Things You Do: Internal Models of Others’ Expected Behaviour Guide Action Observation

    PubMed Central

    Schenke, Kimberley C.; Wyer, Natalie A.; Bach, Patric

    2016-01-01

    Predictions allow humans to manage uncertainties within social interactions. Here, we investigate how explicit and implicit person models–of how different people behave in different situations–shape these predictions. In a novel action identification task, participants judged whether actors interacted with or withdrew from objects. In two experiments, we manipulated, unbeknownst to participants, the two actors' action likelihoods across situations, such that one actor typically interacted with one object and withdrew from the other, while the other actor showed the opposite behaviour. In Experiment 2, participants additionally received explicit information about the two individuals that either matched or mismatched their actual behaviours. The data revealed direct but dissociable effects of both kinds of person information on action identification. Implicit action likelihoods affected response times, speeding up the identification of typical relative to atypical actions, irrespective of the explicit knowledge about the individual's behaviour. Explicit person knowledge, in contrast, affected error rates, causing participants to respond according to expectations instead of observed behaviour, even when they were aware that the explicit information might not be valid. Together, the data show that internal models of others' behaviour are routinely re-activated during action observation. They provide the first evidence of a person-specific social anticipation system, which predicts forthcoming actions from both explicit information and an individual's prior behaviour in a situation. These data link action observation to recent models of predictive coding in the non-social domain, where similar dissociations between implicit effects on stimulus identification and explicit behavioural wagers have been reported. PMID:27434265

  12. Jet Noise Physics and Modeling Using First-principles Simulations

    NASA Technical Reports Server (NTRS)

    Freund, Jonathan B.

    2003-01-01

    An extensive analysis of our jet DNS database has provided for the first time the complex correlations that are the core of many statistical jet noise models, including MGBK. We have also for the first time explicitly computed the noise from different components of a commonly used noise source as proposed in many modeling approaches. Key findings are: (1) While two-point (space and time) velocity statistics are well-fitted by decaying exponentials, even for our low-Reynolds-number jet, spatially integrated fourth-order space/retarded-time correlations, which constitute the noise "source" in MGBK, are instead well-fitted by Gaussians. The width of these Gaussians depends (by a factor of 2) on which components are considered. This is counter to current modeling practice, (2) A standard decomposition of the Lighthill source is shown by direct evaluation to be somewhat artificial since the noise from these nominally separate components is in fact highly correlated. We anticipate that the same will be the case for the Lilley source, and (3) The far-field sound is computed in a way that explicitly includes all quadrupole cancellations, yet evaluating the Lighthill integral for only a small part of the jet yields a far-field noise far louder than that from the whole jet due to missing nonquadrupole cancellations. Details of this study are discussed in a draft of a paper included as appendix A.

  13. Coordination and transport of water and carbohydrates in the coupled soil-root-xylem-phloem leaf system

    NASA Astrophysics Data System (ADS)

    Katul, Gabriel; Huang, Cheng-Wei

    2017-04-01

    In response to varying environmental conditions, stomatal pores act as biological valves that dynamically adjust their size, thereby determining the rate of CO2 assimilation and water loss (i.e., transpiration) to the atmosphere. Although the significance of this biotic control on gas exchange is rarely disputed, representing parsimoniously all the underlying mechanisms responsible for stomatal kinetics remains a subject of some debate. It has been conjectured that stomatal control in seed plants (i.e., angiosperms and gymnosperms) represents a compromise between biochemical demand for CO2 and prevention of excessive water loss. This view has been amended at the whole-plant level, where xylem hydraulics and sucrose transport efficiency in phloem appear to impose additional constraints on gas exchange. If such additional constraints impact stomatal opening and closure, then seed plants may have evolved coordinated photosynthetic-hydraulic-sugar transporting machinery that confers some competitive advantages in fluctuating environmental conditions. Thus, a stomatal optimization model that explicitly considers xylem hydraulics and maximum sucrose transport is developed to explore this coordination in the leaf-xylem-phloem system. The model is then applied to progressive drought conditions. The main findings from the model calculations are that (1) the predicted stomatal conductance from the conventional stomatal optimization theory at the leaf level and from the newly proposed models converge, suggesting a tight coordination in the leaf-xylem-phloem system; (2) stomatal control is mainly limited by the water supply function of the soil-xylem hydraulic system, especially when the water flux through the transpiration stream is significantly larger than water exchange between xylem and phloem; and (3) the xylem limitation imposed on the supply function can thus be used to differentiate species with different water use strategies across the spectrum of isohydric to anisohydric behavior.
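    The conjectured compromise between carbon gain and water loss is commonly formalized as choosing stomatal conductance g to maximize A(g) - lambda*E(g). A minimal numeric sketch, with an invented saturating assimilation curve and invented parameter values (not the paper's coupled leaf-xylem-phloem model):

```python
from scipy.optimize import minimize_scalar

# Hypothetical parameters
ca = 400.0    # ambient CO2 mixing ratio (ppm)
D = 0.015     # leaf-to-air vapour pressure deficit (mol/mol)
lam = 500.0   # marginal water-use cost (water cost per unit carbon)
k = 0.05      # half-saturation constant of the assimilation response

A = lambda g: ca * g / (g + k)   # saturating carbon gain with conductance
E = lambda g: 1.6 * g * D        # transpiration; 1.6 = H2O:CO2 diffusivity ratio

res = minimize_scalar(lambda g: -(A(g) - lam * E(g)), bounds=(1e-6, 2.0), method="bounded")
g_opt = res.x
print(f"optimal conductance g = {g_opt:.3f}")
```

    At the optimum the marginal gain equals the marginal water cost, dA/dg = lambda*dE/dg, which here has the closed form g = sqrt(ca*k / (1.6*lam*D)) - k, so the numeric answer can be checked by hand.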

  14. Modeling the Spatial Dynamics of Regional Land Use: The CLUE-S Model

    NASA Astrophysics Data System (ADS)

    Verburg, Peter H.; Soepboer, Welmoed; Veldkamp, A.; Limpiada, Ramil; Espaldon, Victoria; Mastura, Sharifah S. A.

    2002-09-01

    Land-use change models are important tools for integrated environmental management. Through scenario analysis they can help to identify near-future critical locations in the face of environmental change. A dynamic, spatially explicit, land-use change model is presented for the regional scale: CLUE-S. The model is specifically developed for the analysis of land use in small regions (e.g., a watershed or province) at a fine spatial resolution. The model structure is based on systems theory to allow the integrated analysis of land-use change in relation to socio-economic and biophysical driving factors. The model explicitly addresses the hierarchical organization of land use systems, spatial connectivity between locations and stability. Stability is incorporated by a set of variables that define the relative elasticity of the actual land-use type to conversion. The user can specify these settings based on expert knowledge or survey data. Two applications of the model in the Philippines and Malaysia are used to illustrate the functioning of the model and its validation.
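    The demand-constrained allocation with conversion elasticities can be caricatured in a few lines. The greedy scheme below is a simplified stand-in for CLUE-S's actual iterative allocation procedure, with all numbers invented: each cell scores land-use types by suitability plus a stability bonus for keeping its current use, and scenario demand caps how many cells each type may receive.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_uses = 1000, 3
suit = rng.random((n_cells, n_uses))        # location suitability, e.g. from logit models
current = rng.integers(0, n_uses, n_cells)  # current land use per cell
elas = np.array([0.2, 0.5, 0.8])            # conversion elasticity (resistance to change)
demand = np.array([500, 300, 200])          # scenario-driven areal demand per type

# preference score: suitability plus a stability bonus for the current use
score = suit.copy()
score[np.arange(n_cells), current] += elas[current]

# demand-constrained allocation: the most "decisive" cells choose first,
# each taking its best-scoring type that still has unmet demand
alloc = np.full(n_cells, -1)
remaining = demand.copy()
order = np.argsort(-(score.max(axis=1) - score.min(axis=1)))
for i in order:
    for u in np.argsort(-score[i]):
        if remaining[u] > 0:
            alloc[i] = u
            remaining[u] -= 1
            break

print(np.bincount(alloc, minlength=n_uses))  # matches demand exactly
```

    Raising a type's elasticity makes cells of that type harder to convert, which is how the model's stability settings (expert knowledge or survey data) would enter such a scheme.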

  15. Modeling the spatial dynamics of regional land use: the CLUE-S model.

    PubMed

    Verburg, Peter H; Soepboer, Welmoed; Veldkamp, A; Limpiada, Ramil; Espaldon, Victoria; Mastura, Sharifah S A

    2002-09-01

    Land-use change models are important tools for integrated environmental management. Through scenario analysis they can help to identify near-future critical locations in the face of environmental change. A dynamic, spatially explicit, land-use change model is presented for the regional scale: CLUE-S. The model is specifically developed for the analysis of land use in small regions (e.g., a watershed or province) at a fine spatial resolution. The model structure is based on systems theory to allow the integrated analysis of land-use change in relation to socio-economic and biophysical driving factors. The model explicitly addresses the hierarchical organization of land use systems, spatial connectivity between locations and stability. Stability is incorporated by a set of variables that define the relative elasticity of the actual land-use type to conversion. The user can specify these settings based on expert knowledge or survey data. Two applications of the model in the Philippines and Malaysia are used to illustrate the functioning of the model and its validation.

  16. Biomechanical Model for Computing Deformations for Whole-Body Image Registration: A Meshless Approach

    PubMed Central

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-01-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2-D models and computing single organ deformations. In this study, 3-D comprehensive patient-specific non-linear biomechanical models implemented using Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithms are applied to predict a 3-D deformation field for whole-body image registration. Unlike a conventional approach which requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the Fuzzy C-Means (FCM) algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. PMID:26791945

  17. On whole Abelian model dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauca, J.; Doria, R.; Aprendanet, Petropolis, 25600

    2012-09-24

    Physics challenge is to determine the objects dynamics. However, there are two ways for deciphering the part. The first one is to search for the ultimate constituents; the second one is to understand its behaviour in whole terms. Therefore, the parts can be defined either from elementary constituents or as whole functions. Historically, science has been moving through the first aspect, however, quarks confinement and complexity are interrupting this usual approach. These relevant facts are supporting for a systemic vision be introduced. Our effort here is to study on the whole meaning through gauge theory. Consider a systemic dynamics orientedmore » through the U(1) - systemic gauge parameter which function is to collect a fields set {l_brace}A{sub {mu}I}{r_brace}. Derive the corresponding whole gauge invariant Lagrangian, equations of motion, Bianchi identities, Noether relationships, charges and Ward-Takahashi equations. Whole Lorentz force and BRST symmetry are also studied. These expressions bring new interpretations further than the usual abelian model. They are generating a systemic system governed by 2N+ 10 classical equations plus Ward-Takahashi identities. A whole dynamics based on the notions of directive and circumstance is producing a set determinism where the parts dynamics are inserted in the whole evolution. A dynamics based on state, collective and individual equations with a systemic interdependence.« less

  18. Exploring complex dynamics in multi agent-based intelligent systems: Theoretical and experimental approaches using the Multi Agent-based Behavioral Economic Landscape (MABEL) model

    NASA Astrophysics Data System (ADS)

    Alexandridis, Konstantinos T.

    This dissertation adopts a holistic and detailed approach to modeling spatially explicit agent-based artificial intelligent systems, using the Multi Agent-based Behavioral Economic Landscape (MABEL) model. The research questions it addresses stem from the need to understand and analyze the real-world patterns and dynamics of land use change from a coupled human-environmental systems perspective. It describes the systemic, mathematical, statistical, socio-economic and spatial dynamics of the MABEL modeling framework, and provides a wide array of cross-disciplinary modeling applications within the research, decision-making and policy domains. It establishes the symbolic properties of the MABEL model as a Markov decision process, analyzes the decision-theoretic utility and optimization attributes of agents towards comprising statistically and spatially optimal policies and actions, and explores the probabilistic character of the agents' decision-making and inference mechanisms via the use of Bayesian belief and decision networks. It develops and describes a Monte Carlo methodology for experimental replications of agents' decisions regarding complex spatial parcel acquisition and learning. Recognizing the gap in spatially explicit accuracy assessment techniques for complex spatial models, it proposes an ensemble of statistical tools designed to address this problem, including advanced information assessment techniques such as the Receiver-Operator Characteristic curve, the impurity entropy and Gini functions, and Bayesian classification functions. The theoretical foundation for modular Bayesian inference in spatially explicit multi-agent artificial intelligent systems, and the ensembles of cognitive and scenario assessment modular tools built for the MABEL model, are provided. The dissertation emphasizes modularity and robustness as valuable qualitative modeling attributes, and examines the role of robust intelligent modeling as a tool for improving policy decisions related to land use change. Finally, the major contributions to the science are presented along with directions for future research.

  19. Fast integration-based prediction bands for ordinary differential equation models.

    PubMed

    Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel

    2016-04-15

    To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties in the model's parameters and in turn to uncertainties in predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty over the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org. Supplementary data are available at Bioinformatics online.
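    For contrast with the integration-based bands described above, the classical linearized point-wise prediction band for an ODE model can be sketched by propagating an assumed parameter covariance through finite-difference sensitivities. The model, fitted values and covariance below are all invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy one-compartment decay model y' = -k*y with parameters theta = (k, y0)
def simulate(theta, t_eval):
    k, y0 = theta
    sol = solve_ivp(lambda t, y: -k * y, (0.0, t_eval[-1]), [y0], t_eval=t_eval)
    return sol.y[0]

theta_hat = np.array([0.5, 10.0])           # fitted parameters (assumed)
cov = np.array([[0.01, 0.0], [0.0, 0.25]])  # parameter covariance (assumed)
t = np.linspace(0.0, 8.0, 50)

# finite-difference sensitivities dy/dtheta_j along the trajectory
y_hat = simulate(theta_hat, t)
S = np.empty((len(t), 2))
for j in range(2):
    d = np.zeros(2)
    d[j] = 1e-6 * max(1.0, abs(theta_hat[j]))
    S[:, j] = (simulate(theta_hat + d, t) - y_hat) / d[j]

var_y = np.einsum('ij,jk,ik->i', S, cov, S)  # diagonal of S @ cov @ S.T
band = 1.96 * np.sqrt(var_y)                 # point-wise 95% band half-width
print(y_hat[0], band[0])  # at t=0 the band is driven entirely by var(y0)
```

    Repeating this at every time point is exactly the kind of per-time-point computation whose cost the paper's explicit-integration framework is designed to avoid, and the linearization is only as good as the model's local Gaussian approximation.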

  20. An Integrated Ecological Modeling System for Assessing Impacts of Multiple Stressors on Stream and Riverine Ecosystem Services Within River Basins

    EPA Science Inventory

    We demonstrate a novel, spatially explicit assessment of the current condition of aquatic ecosystem services, with limited sensitivity analysis for the atmospheric contaminant mercury. The Integrated Ecological Modeling System (IEMS) forecasts water quality and quantity, habitat ...

  1. On the application of multilevel modeling in environmental and ecological studies

    USGS Publications Warehouse

    Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.

    2010-01-01

    This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
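    The partial pooling at the heart of the multilevel approach can be illustrated in the known-variance case: each group mean is shrunk toward the grand mean, more strongly for small groups. This is a textbook sketch with simulated data, not the paper's environmental models.

```python
import numpy as np

rng = np.random.default_rng(1)
# unbalanced groups (e.g. sites within a region) around a common mean
true_mu, sigma_between, sigma_within = 5.0, 1.0, 2.0
sizes = [3, 5, 40, 8]
group_means = true_mu + rng.normal(0.0, sigma_between, len(sizes))
data = [rng.normal(m, sigma_within, n) for m, n in zip(group_means, sizes)]

ybar = np.array([d.mean() for d in data])   # no-pooling estimates
n = np.array(sizes)
grand = np.concatenate(data).mean()         # complete-pooling estimate

# partial pooling: precision-weighted compromise between the two extremes
# (within- and between-group variances treated as known, for simplicity)
w = (n / sigma_within**2) / (n / sigma_within**2 + 1.0 / sigma_between**2)
pooled = w * ybar + (1.0 - w) * grand
for nn, raw, pp in zip(n, ybar, pooled):
    print(f"n={nn:2d}  raw={raw:5.2f}  partially pooled={pp:5.2f}")
```

    In a full multilevel fit the variance components are estimated from the data rather than assumed, but the qualitative behavior is the same: small groups borrow strength from the whole hierarchy.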

  2. Integrating ecosystem sampling, gradient modeling, remote sensing, and ecosystem simulation to create spatially explicit landscape inventories

    Treesearch

    Robert E. Keane; Matthew G. Rollins; Cecilia H. McNicoll; Russell A. Parsons

    2002-01-01

    Presented is a prototype of the Landscape Ecosystem Inventory System (LEIS), a system for creating maps of important landscape characteristics for natural resource planning. This system uses gradient-based field inventories coupled with gradient modeling, remote sensing, ecosystem simulation, and statistical analyses to derive spatial data layers required for ecosystem...

  3. Genomic Prediction Accounting for Residual Heteroskedasticity

    PubMed Central

    Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.

    2015-01-01

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation, was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models, although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
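    The efficiency cost of assuming homogeneous residual variance can be seen even in a plain linear model: weighted least squares with the correct per-observation variances is generally more efficient than ordinary least squares. A toy sketch with invented SNP-like covariates and two "environments" of unequal noise (not the paper's hierarchical Bayesian WGP models):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 5
X = rng.integers(0, 3, (n, p)).astype(float)  # marker genotype codes 0/1/2 (toy)
beta = rng.normal(0.0, 1.0, p)                # true marker effects

# residual variance differs by environment: second half has 9x the variance
env = np.repeat([0, 1], n // 2)
sd = np.where(env == 0, 1.0, 3.0)
y = X @ beta + rng.normal(0.0, sd)

# homoskedastic OLS vs heteroskedastic (weighted) least squares
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
W = 1.0 / sd**2
Xw = X * W[:, None]
b_wls = np.linalg.solve(Xw.T @ X, Xw.T @ y)   # solves X' W X b = X' W y
print("OLS error :", np.linalg.norm(b_ols - beta))
print("WLS error :", np.linalg.norm(b_wls - beta))
```

    In the genomic setting the residual variances are unknown and must themselves be modeled hierarchically, which is the harder problem the paper tackles; this sketch only shows why it is worth doing.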

  4. Explicit simulation of a midlatitude Mesoscale Convective System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, G.D.; Cotton, W.R.

    1996-04-01

    We have explicitly simulated the mesoscale convective system (MCS) observed on 23-24 June 1985 during PRE-STORM, the Preliminary Regional Experiment for the Stormscale Operational and Research Meteorology Program. Stensrud and Maddox (1988), Johnson and Bartels (1992), and Bernstein and Johnson (1994) are among the researchers who have investigated various aspects of this MCS event. We have performed this MCS simulation (and a similar one of a tropical MCS; Alexander and Cotton 1994) in the spirit of the Global Energy and Water Cycle Experiment Cloud Systems Study (GCSS), in which cloud-resolving models are used to assist in the formulation and testing of cloud parameterization schemes for larger-scale models. In this paper, we describe (1) the nature of our 23-24 June MCS simulation and (2) our efforts to date in using our explicit MCS simulations to assist in the development of a GCM parameterization for mesoscale flow branches. The paper is organized as follows. First, we discuss the synoptic situation surrounding the 23-24 June PRE-STORM MCS, followed by a discussion of the model setup and results of our simulation. We then discuss the use of our MCS simulations in developing a GCM parameterization for mesoscale flow branches and summarize our results.

  5. Mathematical model of alternative mechanism of telomere length maintenance

    NASA Astrophysics Data System (ADS)

    Kollár, Richard; Bod'ová, Katarína; Nosek, Jozef; Tomáška, L'ubomír

    2014-03-01

    Biopolymer length regulation is a complex process that involves a large number of biological, chemical, and physical subprocesses acting simultaneously across multiple spatial and temporal scales. An illustrative example important for genomic stability is the length regulation of telomeres—nucleoprotein structures at the ends of linear chromosomes consisting of tandemly repeated DNA sequences and a specialized set of proteins. Maintenance of telomeres is often facilitated by the enzyme telomerase but, particularly in telomerase-free systems, the maintenance of chromosomal termini depends on alternative lengthening of telomeres (ALT) mechanisms mediated by recombination. Various linear and circular DNA structures were identified to participate in ALT, however, dynamics of the whole process is still poorly understood. We propose a chemical kinetics model of ALT with kinetic rates systematically derived from the biophysics of DNA diffusion and looping. The reaction system is reduced to a coagulation-fragmentation system by quasi-steady-state approximation. The detailed treatment of kinetic rates yields explicit formulas for expected size distributions of telomeres that demonstrate the key role played by the J factor, a quantitative measure of bending of polymers. The results are in agreement with experimental data and point out interesting phenomena: an appearance of very long telomeric circles if the total telomere density exceeds a critical value (excess mass) and a nonlinear response of the telomere size distributions to the amount of telomeric DNA in the system. The results can be of general importance for understanding dynamics of telomeres in telomerase-independent systems as this mode of telomere maintenance is similar to the situation in tumor cells lacking telomerase activity. Furthermore, due to its universality, the model may also serve as a prototype of an interaction between linear and circular DNA structures in various settings.

  6. Neutron Capture Energies for Flux Normalization and Approximate Model for Gamma-Smeared Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Liu, Yuxuan

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) Virtual Environment for Reactor Applications (VERA) neutronics simulator MPACT has used a single recoverable fission energy for each fissionable nuclide, assuming that all recoverable energies come only from the fission reaction, for which capture energy is merged with fission energy. This approach includes approximations and requires improvement by separating capture energy from the merged effective recoverable energy. This report documents the procedure to generate recoverable neutron capture energies and the development of a program called CapKappa to generate capture energies. Recoverable neutron capture energies have been generated by using CapKappa with the evaluated nuclear data file (ENDF)/B-7.0 and 7.1 cross-section and decay libraries. The new capture kappas were compared to the current SCALE-6.2 and CASMO-5 capture kappas. These new capture kappas have been incorporated into the Simplified AMPX 51- and 252-group libraries, and they can be used for the AMPX multigroup (MG) libraries and the SCALE code package. The CASL VERA neutronics simulator MPACT does not include a gamma transport capability, which prevents it from explicitly estimating local energy deposition from fission, neutron and gamma slowing down, and capture. Since the mean free path of gamma rays is typically much longer than that of the neutron, and the total gamma energy is about 10% of the total energy, the gamma-smeared power distribution is different from the fission power distribution. Explicit local energy deposition through neutron and gamma transport calculation is significantly important in multi-physics whole-core simulation with thermal-hydraulic feedback. Therefore, the gamma transport capability should be incorporated into the CASL neutronics simulator MPACT. However, this task will be time-consuming, requiring development of the neutron-induced gamma production and gamma cross-section libraries. This study investigates an approximate model to estimate the gamma-smeared power distribution without performing any gamma transport calculation. A simple approximate gamma-smearing model has been investigated based on the facts that pinwise gamma energy depositions are almost flat over a fuel assembly, and assembly-wise gamma energy deposition is proportional to kappa-fission energy deposition. The approximate gamma-smearing model works well for single-assembly cases and can partly improve the gamma-smeared power distribution for the whole-core model. Although the power distributions can be improved by the approximate gamma-smearing model, there is still an issue with explicitly obtaining local energy deposition. A new simple approach or a gamma transport/diffusion capability may need to be incorporated into MPACT to estimate local energy deposition for more robust multi-physics simulation.

  7. EXPECT: Explicit Representations for Flexible Acquisition

    NASA Technical Reports Server (NTRS)

    Swartout, BIll; Gil, Yolanda

    1995-01-01

    To create more powerful knowledge acquisition systems, we not only need better acquisition tools, but we need to change the architecture of the knowledge-based systems we create so that their structure will provide better support for acquisition. Current acquisition tools permit users to modify factual knowledge, but they provide limited support for modifying problem-solving knowledge. In this paper, the authors argue that this limitation (and others) stems from the use of incomplete models of problem-solving knowledge and inflexible specification of the interdependencies between problem-solving and factual knowledge. We describe the EXPECT architecture, which addresses these problems by providing an explicit representation for problem-solving knowledge and intent. Using this more explicit representation, EXPECT can automatically derive the interdependencies between problem-solving and factual knowledge. By deriving these interdependencies from the structure of the knowledge-based system itself, EXPECT supports more flexible and powerful knowledge acquisition.

  8. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the temperature (replica) number while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluation of the structural and thermodynamic properties of the conformational transition, which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase the computational efficiency and thus expand the application of REMD simulation to larger protein systems.
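    The exchange step shared by standard and hybrid REMD is a Metropolis swap between neighbouring temperatures. A minimal sketch with an arbitrary temperature ladder and invented instantaneous energies (reduced units, kB = 1):

```python
import numpy as np

rng = np.random.default_rng(3)
kB = 1.0
temps = np.array([300.0, 320.0, 342.0, 366.0])   # hypothetical replica ladder
energies = rng.normal(-100.0, 5.0, len(temps))   # instantaneous potential energies

def swap_prob(E_i, E_j, T_i, T_j):
    """Metropolis acceptance for exchanging configurations between two replicas."""
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_j - E_i)
    return min(1.0, np.exp(-delta))

# attempt swaps between neighbouring temperatures
for k in range(len(temps) - 1):
    p = swap_prob(energies[k], energies[k + 1], temps[k], temps[k + 1])
    if rng.random() < p:
        energies[k], energies[k + 1] = energies[k + 1], energies[k]
    print(f"{temps[k]:.0f}K <-> {temps[k+1]:.0f}K : p = {p:.3f}")
```

    The acceptance rate falls as the energy distributions of neighbouring replicas stop overlapping, which is why explicit-solvent REMD needs so many replicas and why reducing that number (as the hybrid method aims to) matters.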

  9. Proceedings of the Combined Effects of Multiple Stressors on Operational Performance Held in San Diego, California on 4-5 April 1989

    DTIC Science & Technology

    1989-04-05


  10. Surface plasmons for doped graphene

    NASA Astrophysics Data System (ADS)

    Bordag, M.; Pirozhenko, I. G.

    2015-04-01

    Within the Dirac model for the electronic excitations of graphene, we calculate the full polarization tensor with finite mass and chemical potential. It has, besides the (00)-component, a second form factor, which must be accounted for. We obtain explicit formulas for both form factors and for the reflection coefficients. Using these, we discuss the regions in the momentum-frequency plane where plasmons may exist and give numeric solutions for the plasmon dispersion relations. It turns out that plasmons exist for both transverse electric and transverse magnetic polarizations over the whole range of the ratio of mass to chemical potential, except for zero chemical potential, where only a TE plasmon exists.

  11. Predictive Validity of Explicit and Implicit Threat Overestimation in Contamination Fear

    PubMed Central

    Green, Jennifer S.; Teachman, Bethany A.

    2012-01-01

    We examined the predictive validity of explicit and implicit measures of threat overestimation in relation to contamination-fear outcomes using structural equation modeling. Undergraduate students high in contamination fear (N = 56) completed explicit measures of contamination threat likelihood and severity, as well as looming vulnerability cognitions, in addition to an implicit measure of danger associations with potential contaminants. Participants also completed measures of contamination-fear symptoms, as well as subjective distress and avoidance during a behavioral avoidance task, and state looming vulnerability cognitions during an exposure task. The latent explicit (but not implicit) threat overestimation variable was a significant and unique predictor of contamination-fear symptoms and self-reported affective and cognitive facets of contamination fear. In contrast, the implicit (but not explicit) latent measure predicted behavioral avoidance (at the level of a trend). Results are discussed in terms of the differential predictive validity of implicit versus explicit markers of threat processing and multiple fear response systems. PMID:24073390

  12. Toward a Generative Model of the Teaching-Learning Process.

    ERIC Educational Resources Information Center

    McMullen, David W.

    Until the rise of cognitive psychology, models of the teaching-learning process (TLP) stressed external rather than internal variables. Models remained general descriptions until control theory introduced explicit system analyses. Cybernetic models emphasize feedback and adaptivity but give little attention to creativity. Research on artificial…

  13. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.

  14. Quantum mechanical force field for hydrogen fluoride with explicit electronic polarization.

    PubMed

    Mazack, Michael J M; Gao, Jiali

    2014-05-28

    The explicit polarization (X-Pol) theory is a fragment-based quantum chemical method that explicitly models the internal electronic polarization and intermolecular interactions of a chemical system. X-Pol theory provides a framework to construct a quantum mechanical force field, which we have extended to liquid hydrogen fluoride (HF) in this work. The parameterization, called XPHF, is built upon the same formalism introduced for the XP3P model of liquid water, which is based on the polarized molecular orbital (PMO) semiempirical quantum chemistry method and the dipole-preserving polarization consistent point charge model. We introduce a fluorine parameter set for PMO, and find good agreement for various gas-phase results of small HF clusters compared to experiments and ab initio calculations at the M06-2X/MG3S level of theory. In addition, the XPHF model shows reasonable agreement with experiments for a variety of structural and thermodynamic properties in the liquid state, including radial distribution functions, interaction energies, diffusion coefficients, and densities at various state points.

  15. Spatial effects in meta-foodwebs.

    PubMed

    Barter, Edmund; Gross, Thilo

    2017-08-30

    In ecology it is widely recognised that many landscapes comprise a network of discrete patches of habitat. The species that inhabit the patches interact with each other through a foodweb, the network of feeding interactions. The meta-foodweb model proposed by Pillai et al. combines the feeding relationships at each patch with the dispersal of species between patches, such that the whole system is represented by a network of networks. Previous work on meta-foodwebs has focussed on landscape networks that do not have an explicit spatial embedding, but in real landscapes the patches are usually distributed in space. Here we compare the dispersal of a meta-foodweb on Erdős-Rényi networks, which do not have a spatial embedding, and random geometric networks, which do. We found that local structure and large network distances in spatially embedded networks lead to meso-scale patterns of patch occupation by both specialist and omnivorous species. In particular, we found that spatial separations make the coexistence of competing species more likely. Our results highlight the effects of spatial embeddings for meta-foodweb models, and the need for new analytical approaches to them.
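
    The two landscape-network models contrasted above are easy to sketch. This is a minimal pure-Python construction (illustrative, not the authors' code): an Erdős-Rényi graph links node pairs independently, while a random geometric graph links nodes by spatial proximity in the unit square, which is what produces the local structure and long network distances discussed in the abstract.

```python
import random

def erdos_renyi(n, p, rng):
    """Erdős-Rényi graph: each node pair is linked independently with prob p."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v); adj[v].add(u)
    return adj

def random_geometric(n, radius, rng):
    """Random geometric graph: points in the unit square, linked when
    closer than `radius` -- a simple explicit spatial embedding."""
    pos = {v: (rng.random(), rng.random()) for v in range(n)}
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            if dx * dx + dy * dy < radius * radius:
                adj[u].add(v); adj[v].add(u)
    return adj

def avg_clustering(adj):
    """Mean local clustering coefficient; spatial graphs typically score
    much higher than ER graphs of comparable density."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)
```

    Comparing `avg_clustering` on the two graph types at matched mean degree makes the "local structure" difference quantitative.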

  16. Biomass Scenario Model: BETO Analysis Platform Peer Review; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, B.

    2015-03-23

    The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art fourth-generation model of the domestic bioenergy supply chain which explicitly focuses on policy issues and their potential side effects. It integrates resource availability, behavior, policy, and physical, technological, and economic constraints. The BSM uses system-dynamics simulation to model dynamic interactions across the supply chain; it tracks the deployment of biofuels given technological development and the reaction of the investment community to those technologies in the context of land availability, the competing oil market, consumer demand for biofuels, and government policies over time. It places a strong emphasis on the behavior and decision-making of various economic agents. The model treats the major infrastructure-compatible fuels. Scenario analysis based on the BSM shows that the biofuels industry tends not to thrive rapidly without significant external actions in the early years of its evolution. An initial focus on jumpstarting the industry typically has the strongest results in the BSM in areas where the effects of intervention have been identified to be multiplicative. In general, we find that policies which are coordinated across the whole supply chain have significant impact in fostering the growth of the biofuels industry, and that the production of tens of billions of gallons of biofuels may occur under sufficiently favorable conditions.

  17. A new solution method for wheel/rail rolling contact.

    PubMed

    Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei

    2016-01-01

    To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed in the explicit software ANSYS/LS-DYNA. To improve solution speed and efficiency, an explicit-explicit sequential solution method is put forward based on an analysis of the features of implicit and explicit algorithms. The method first computes the pre-loading of the wheel/rail rolling contact with the explicit algorithm; the results then serve as the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. For comparison, the common implicit-explicit sequential solution method is used to solve the same FE model. Results show that the explicit-explicit method is faster and more efficient than the implicit-explicit method while achieving almost the same solution accuracy. Hence, the explicit-explicit sequential solution method is more suitable for wheel/rail rolling contact models with large scale and high nonlinearity.
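
    The explicit time integration that such solvers are built on can be illustrated on a single-degree-of-freedom oscillator. This is a textbook central-difference sketch, not the ANSYS/LS-DYNA implementation; it shows the characteristic explicit update and its conditional-stability step-size limit.

```python
import math

def central_difference(m, k, x0, v0, dt, steps):
    """Explicit central-difference integration of m*x'' + k*x = 0.
    Conditionally stable: requires dt < 2/omega, the usual explicit
    step-size restriction in explicit FE solvers."""
    a0 = -k * x0 / m
    x_prev = x0 - v0 * dt + 0.5 * a0 * dt ** 2   # start-up (fictitious) step
    x = x0
    out = [x0]
    for _ in range(steps):
        a = -k * x / m                           # restoring acceleration
        x_next = 2.0 * x - x_prev + a * dt ** 2  # explicit update: no solve
        x_prev, x = x, x_next
        out.append(x)
    return out

# One full period of an undamped oscillator with omega = 2*pi (period = 1);
# the exact solution is x(t) = cos(2*pi*t), so x should return to ~1.
m, k = 1.0, (2.0 * math.pi) ** 2
dt, steps = 1e-3, 1000
xs = central_difference(m, k, 1.0, 0.0, dt, steps)
```

    Each step needs only the current state, with no system solve, which is why explicit methods pay per step in stability limits but win on per-step cost for large contact models.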

  18. Soft Systems Methodology and Problem Framing: Development of an Environmental Problem Solving Model Respecting a New Emergent Reflexive Paradigm.

    ERIC Educational Resources Information Center

    Gauthier, Benoit; And Others

    1997-01-01

    Identifies the more representative problem-solving models in environmental education. Suggests the addition of a strategy for defining a problem situation using Soft Systems Methodology to environmental education activities explicitly designed for the development of critical thinking. Contains 45 references. (JRH)

  19. Vibration modelling and verifications for whole aero-engine

    NASA Astrophysics Data System (ADS)

    Chen, G.

    2015-08-01

    In this study, a new rotor-ball-bearing-casing coupling dynamic model for a practical aero-engine is established. In the coupling system, the rotor and casing systems are modelled using the finite element method, support systems are modelled as lumped parameter models, nonlinear factors of ball bearings and faults are included, and four types of supports and connection models are defined to model the complex rotor-support-casing coupling system of the aero-engine. A new numerical integral method that combines the Newmark-β method and the improved Newmark-β method (Zhai method) is used to obtain the system responses. Finally, the new model is verified in three ways: (1) modal experiment based on rotor-ball bearing rig, (2) modal experiment based on rotor-ball-bearing-casing rig, and (3) fault simulations for a certain type of missile turbofan aero-engine vibration. The results show that the proposed model can not only simulate the natural vibration characteristics of the whole aero-engine but also effectively perform nonlinear dynamic simulations of a whole aero-engine with faults.
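
    The Newmark-β component of the combined integrator above can be sketched on a single-degree-of-freedom oscillator. This is the textbook average-acceleration variant (β = 1/4, γ = 1/2), not the paper's combined Newmark/Zhai scheme for the full rotor-bearing-casing system.

```python
import math

def newmark_beta(m, k, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Implicit Newmark-beta integration of m*x'' + k*x = 0. With
    beta=1/4, gamma=1/2 (average acceleration) the scheme is
    unconditionally stable, unlike explicit methods."""
    x, v = x0, v0
    a = -k * x / m
    xs = [x]
    for _ in range(steps):
        # implicit step: solve for x_new with a_new = -k*x_new/m substituted
        rhs = x + dt * v + dt * dt * (0.5 - beta) * a
        x_new = rhs / (1.0 + beta * dt * dt * k / m)
        a_new = -k * x_new / m
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
        xs.append(x)
    return xs

# One full period of an undamped oscillator with omega = 2*pi (period = 1);
# the exact solution is x(t) = cos(2*pi*t).
xs = newmark_beta(1.0, (2.0 * math.pi) ** 2, 1.0, 0.0, 1e-3, 1000)
```

    For the scalar case the implicit solve reduces to a division; for a full FE model it becomes a linear (or, with bearing nonlinearities, iterative) system solve per step.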

  20. Resource reallocation does not influence estimates of pollen limitation or reproductive assurance in Clarkia xantiana subsp. parviflora (Onagraceae).

    PubMed

    Runquist, Ryan D Briscoe; Moeller, David A

    2013-09-01

    Studies of pollen limitation and the reproductive assurance value of selfing are important for examining the process of floral and mating system evolution in flowering plants. Recent meta-analyses have shown that common methods for measuring pollen limitation may often lead to biased estimates. Specifically, experiments involving single- or few-flower manipulations per plant tend to overestimate pollen limitation compared to those involving manipulations on most or all flowers per plant. Little previous work has explicitly tested for reallocation within individual systems using alternative methods and response variables. • We performed single-flower and whole-plant pollen supplementation and emasculation of flowers of Clarkia xantiana subsp. parviflora to estimate pollen limitation (PL) and reproductive assurance (RA). We compared levels of PL and RA using the following response variables: fruit set, seeds/flower, and seeds/plant. We also assessed the germination and viability of seeds to evaluate potential variation in pollen quality among treatments. • Autonomous selfing in Clarkia xantiana subsp. parviflora eliminates pollen limitation and provides reproductive assurance. Estimates from single-flower manipulations were not biased, closely resembling those from whole-plant manipulations. All three response variables followed the same pattern, but treatments were only significantly different for seeds/flower. Pollen quality, as indicated by seed viability, did not differ among treatments. • Partial plant manipulations provided reliable estimates of pollen limitation and reproductive assurance. These estimates were also unaffected by accounting for pollen quality. Although whole plant manipulations are desirable, this experiment demonstrates that in some systems partial plant manipulations can be used in studies where whole-plant manipulations are not feasible.

  1. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale observations and experimental manipulations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C-derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations.
Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.

  2. Chained Aggregation and Control System Design:; A Geometric Approach.

    DTIC Science & Technology

    1982-10-01

    Furthermore, it explicitly identifies a reduced order model used to meet the design goals. This results in an interactive design procedure which allows...same framework. This leads directly to dynamic compensator design. The results are applied to decentralized control problems, noninteractive ...goals. Furthermore, it explicitly identifies a reduced order model used to meet the design goals. This results in an interactive design procedure which

  3. Representing functions/procedures and processes/structures for analysis of effects of failures on functions and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Leifker, Daniel B.

    1991-01-01

    Current qualitative device and process models represent only the structure and behavior of physical systems. However, systems in the real world include goal-oriented activities that generally cannot be easily represented using current modeling techniques. An extension of a qualitative modeling system, known as functional modeling, which captures goal-oriented activities explicitly, is proposed, and it is shown how such models may be used to support intelligent automation and fault management.

  4. Urban watershed modeling in Seattle, Washington using VELMA – a spatially explicit ecohydrological watershed model

    EPA Science Inventory

    Urban watersheds are notoriously difficult to model due to their complex, small-scale combinations of landscape and land use characteristics including impervious surfaces that ultimately affect the hydrologic system. We utilized EPA’s Visualizing Ecosystem Land Management A...

  5. The Whole-Hand Point: The Structure and Function of Pointing From a Comparative Perspective

    PubMed Central

    Leavens, David A.; Hopkins, William D.

    2007-01-01

    Pointing by monkeys, apes, and human infants is reviewed and compared. Pointing with the index finger is a species-typical human gesture, although human infants exhibit more whole-hand pointing than is commonly appreciated. Captive monkeys and feral apes have been reported to only rarely “spontaneously” point, although apes in captivity frequently acquire pointing, both with the index finger and with the whole hand, without explicit training. Captive apes exhibit relatively more gaze alternation while pointing than do human infants about 1 year old. Human infants are relatively more vocal while pointing than are captive apes, consistent with paralinguistic use of pointing. PMID:10608565

  6. Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.

    2014-12-01

    Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and to examine how these chemical processes shape the composition and properties of the gaseous and condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, an explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can, however, be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol, assigning reaction pathways and kinetic data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched, and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.

  7. Managing Analysis Models in the Design Process

    NASA Technical Reports Server (NTRS)

    Briggs, Clark

    2006-01-01

    Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.

  8. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

    NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in sample physical properties. One common approach is to include the physical variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two types of coffee (civet and non-civet) and two particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was applied, and the influence of particle size on the performance of PLS-DA was investigated. In the explicit method, we add the particle size directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that with the explicit method the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R2) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good, predicting the type of coffee at both particle sizes with relatively high R2 pred values. The prediction also resulted in low bias and RMSEP values.
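
    The "explicit" construction above — a multi-column Y block carrying both the class variable and the particle size — is exactly what PLS2 handles. A minimal NIPALS PLS2 sketch in NumPy (illustrative, not the authors' software; real NIR work would add preprocessing and cross-validation):

```python
import numpy as np

def pls2_fit(X, Y, n_components, tol=1e-10, max_iter=500):
    """Minimal NIPALS PLS2: regress a multi-column Y block (e.g. coffee
    type AND particle size, the 'explicit' inclusion described above) on
    spectra-like X. Returns the regression matrix B for centered blocks."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    W, P, Q = [], [], []
    Xk, Yk = X.copy(), Y.copy()
    for _ in range(n_components):
        u = Yk[:, [0]]                        # start from first Y column
        for _ in range(max_iter):
            w = Xk.T @ u
            w /= np.linalg.norm(w)            # X weights
            t = Xk @ w                        # X scores
            q = Yk.T @ t / (t.T @ t)          # Y loadings
            u_new = Yk @ q / (q.T @ q)        # Y scores
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        p = Xk.T @ t / (t.T @ t)              # X loadings
        Xk = Xk - t @ p.T                     # deflate both blocks
        Yk = Yk - t @ q.T
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.hstack(W), np.hstack(P), np.hstack(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q.T   # B: centered X -> centered Y
```

    With `Y = np.column_stack([coffee_type, particle_size])` the model learns latent variables that jointly explain the class and the physical effect, which is the intended compensation mechanism.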

  9. Rare Λb→Λ l+l- and Λb→Λ γ decays in the relativistic quark model

    NASA Astrophysics Data System (ADS)

    Faustov, R. N.; Galkin, V. O.

    2017-09-01

    Rare Λb→Λ l+l- and Λb→Λ γ decays are investigated in the relativistic quark model based on the quark-diquark picture of baryons. The decay form factors are calculated accounting for all relativistic effects, including relativistic transformations of baryon wave functions from rest to a moving reference frame and the contribution of the intermediate negative-energy states. The momentum-transfer-squared dependence of the form factors is explicitly determined in the whole accessible kinematical range. The calculated decay branching fractions, various forward-backward asymmetries for the rare decay Λb→Λ μ+μ-, are found to be consistent with recent detailed measurements by the LHCb Collaboration. Predictions for the Λb→Λ τ+τ- decay observables are given.

  10. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

    Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
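
    The core maximum-caliber move — exponentially tilting the prior transition matrix by a per-transition observable and renormalizing via the dominant eigenpair (a Doob transform) — is compact enough to sketch. This is a generic illustration of that update, not the C2M2 code; the Lagrange multiplier `lam` would in practice be tuned to match the measured rate.

```python
import numpy as np

def caliber_tilt(P, s, lam):
    """Maximum-caliber update of a row-stochastic Markov matrix P given a
    per-transition observable s and Lagrange multiplier lam. The tilted
    matrix is renormalized with its dominant (Perron) eigenpair, which is
    the max-entropy posterior for a trajectory-average constraint."""
    W = P * np.exp(lam * s)                    # tilted, non-stochastic matrix
    vals, vecs = np.linalg.eig(W)
    k = np.argmax(vals.real)                   # Perron eigenvalue index
    rho = vals[k].real
    r = np.abs(vecs[:, k].real)                # Perron vector (positive)
    # Doob transform: rows of Q sum to 1 exactly by construction
    return W * r[None, :] / (rho * r[:, None])
```

    At `lam = 0` the prior is recovered; increasing `lam` boosts transitions with large `s` (e.g. inter-state jumps) while staying as close as possible, in the caliber sense, to the prior MSM.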

  11. Integrating Biodiversity into Biosphere-Atmosphere Interactions Using Individual-Based Models (IBM)

    NASA Astrophysics Data System (ADS)

    Wang, B.; Shugart, H. H., Jr.; Lerdau, M.

    2017-12-01

    A key component regulating complex, nonlinear, and dynamic biosphere-atmosphere interactions is the inherent diversity of biological systems. The model frameworks currently in wide use (i.e., Plant Functional Type (PFT) models) do not even begin to capture the metabolic and taxonomic diversity found in many terrestrial systems. We propose that a transition from PFT-based to individual-based modeling approaches (hereafter referred to as IBM) is essential for integrating biodiversity into research on biosphere-atmosphere interactions. The proposal emerges from our study of the interactions of forests with atmospheric processes in the context of climate change using an individual-based forest volatile organic compounds model, UVAFME-VOC. This individual-based model simulates VOC emissions on top of an explicit model of forest dynamics, computing the growth, death, and regeneration of each individual tree of different species and their competition for light, moisture, and nutrients; system-level VOC emissions are then obtained by explicitly computing and summing each individual's emissions. We found that elevated O3 significantly altered forest dynamics by favoring species that are O3-resistant but that are, at the same time, isoprene producers. Such compositional changes, on the one hand, resulted in unsuppressed forest productivity and carbon stock because of compensation by O3-resistant species. On the other hand, with more isoprene produced by the larger number of producers, a possible positive feedback loop between tropospheric O3 and the forest emerged. We also found that climate warming will not always stimulate isoprene emissions, because warming simultaneously reduces isoprene emissions by causing a decline in the abundance of isoprene-emitting species. These results suggest that species diversity is of great significance and that individual-based modelling strategies should be applied in studying biosphere-atmosphere interactions.

  12. Modeling heterogeneous processor scheduling for real time systems

    NASA Technical Reports Server (NTRS)

    Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.

    1994-01-01

    A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.

  13. Knowledge representation to support reasoning based on multiple models

    NASA Technical Reports Server (NTRS)

    Gillam, April; Seidel, Jorge P.; Parker, Alice C.

    1990-01-01

    Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/database growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. Second, information is explicit. This includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. The third principle we have followed is the separation of the data and knowledge from the inferencing and equation solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.

  14. 3D Geological Mapping - uncovering the subsurface to increase environmental understanding

    NASA Astrophysics Data System (ADS)

    Kessler, H.; Mathers, S.; Peach, D.

    2012-12-01

    Geological understanding is required for many disciplines studying natural processes from hydrology to landscape evolution. The subsurface structure of rocks and soils and their properties occupies three-dimensional (3D) space, and geological processes operate in time. Traditionally geologists have captured their spatial and temporal knowledge in two-dimensional (2D) maps and cross-sections and through narrative, because paper maps and later 2D geographical information systems (GIS) were the only tools available to them. Another major constraint on using more explicit and numerical systems to express geological knowledge is the fact that a geologist only ever observes and measures a fraction of the system they study. Only on rare occasions does the geologist have access to enough real data to generate meaningful predictions of the subsurface without the input of conceptual understanding developed from knowledge of the geological processes responsible for the deposition, emplacement and diagenesis of the rocks. This in turn has led to geology becoming an increasingly marginalised science as other disciplines have embraced the digital world and have increasingly turned to implicit numerical modelling to understand environmental processes and interactions. Recent developments in geoscience methodology and technology have gone some way to overcoming these barriers, and geologists across the world are beginning to routinely capture their knowledge and combine it with all available subsurface data (of often highly varying spatial distribution and quality) to create regional and national 3D geological maps. This is re-defining the way geologists interact with other science disciplines, as their concepts and knowledge are now expressed in an explicit form that can be used downstream to design the structure of process models.
For example, groundwater modellers can refine their understanding of groundwater flow in three dimensions or even directly parameterize their numerical models using outputs from 3D mapping. In some cases model code is being re-designed in order to deal with the increasing geological complexity expressed by geologists. These 3D maps have inherent uncertainty, just as their predecessors, 2D geological maps, had, and there remains a significant body of work to quantify and effectively communicate this uncertainty. Here we present examples of regional and national 3D maps from Geological Survey Organisations worldwide and show how these are being used to better solve real-life environmental problems. The future challenge for geologists is to make these 3D maps easily available in an accessible and interoperable form so that the environmental science community can truly integrate the hidden subsurface into a common understanding of the whole geosphere.

  15. Batch-mode Reinforcement Learning for improved hydro-environmental systems management

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.

    2010-12-01

    Despite the great progress made in recent decades, the optimal management of hydro-environmental systems remains a very active and challenging research area. The combination of multiple, often conflicting interests, strong non-linearities in the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto-)optimal management policies while preserving the original problem complexity. However, it suffers from a dual curse which, de facto, prevents its practical application to even reasonably complex water systems. (i) The computational requirement grows exponentially with the state and control dimension (Bellman's curse of dimensionality), so that SDP cannot be used with water systems whose state vector includes more than a few (2-3) units. (ii) An explicit model of each of the system's components is required (curse of modelling) to anticipate the effects of the system transitions, i.e. any information included in the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with an associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations.
The curse of dimensionality is averted by using a functional approximation of the SDP value function based on suitable non-linear regressors. DMR reduces the complexity, and the associated computational requirements, of non-linear distributed process-based models, making them suitable for inclusion in optimization schemes. Results from real-world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
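
As a concrete illustration of the batch-mode idea, the toy sketch below runs fitted Q-iteration on a hypothetical one-reservoir problem: a fixed batch of transitions is generated by a simulator, and a simple linear-in-features regressor stands in for the non-linear regressors mentioned in the abstract. All dynamics, rewards, and features here are invented for illustration; the paper's water systems and regressors are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
actions = np.array([0.0, 0.25, 0.5])            # hypothetical release fractions
gamma = 0.95

def step(s, a):
    # toy simulator: storage s in [0, 1], random inflow, reward for staying near 0.5
    inflow = rng.uniform(0.0, 0.3)
    s_next = np.clip(s - a * s + inflow, 0.0, 1.0)
    return s_next, -(s_next - 0.5) ** 2

# Collect a batch of (s, a, r, s') transitions -- the only input bRL needs
batch = []
s = 0.5
for _ in range(2000):
    a = rng.choice(actions)
    s_next, r = step(s, a)
    batch.append((s, a, r, s_next))
    s = s_next

S = np.array([b[0] for b in batch]); A = np.array([b[1] for b in batch])
R = np.array([b[2] for b in batch]); S1 = np.array([b[3] for b in batch])

def features(s, a):
    # polynomial features in (s, a): a crude stand-in for a flexible regressor
    return np.stack([np.ones_like(s), s, s**2, a, a * s, a**2], axis=-1)

w = np.zeros(6)
for _ in range(50):                             # fitted Q-iteration sweeps
    q_next = np.max(np.stack(
        [features(S1, np.full_like(S1, a)) @ w for a in actions]), axis=0)
    w, *_ = np.linalg.lstsq(features(S, A), R + gamma * q_next, rcond=None)

def policy(s):
    # greedy policy from the fitted Q-function
    return actions[np.argmax([features(np.array([s]), np.array([a])) @ w
                              for a in actions])]

print(policy(0.9), policy(0.1))
```

The same loop works unchanged when the batch comes from historical observations instead of a simulator, which is the point of the batch-mode formulation.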

  16. L1 Adaptive Control Augmentation System with Application to the X-29 Lateral/Directional Dynamics: A Multi-Input Multi-Output Approach

    NASA Technical Reports Server (NTRS)

    Griffin, Brian Joseph; Burken, John J.; Xargay, Enric

    2010-01-01

    This paper presents an L(sub 1) adaptive control augmentation system design for multi-input multi-output nonlinear systems in the presence of unmatched uncertainties which may exhibit significant cross-coupling effects. A piecewise continuous adaptive law is adopted and extended for applicability to multi-input multi-output systems, and explicitly compensates for dynamic cross-coupling. In addition, high-fidelity actuator models are explicitly incorporated into the L(sub 1) architecture to reduce uncertainties in the system. The L(sub 1) multi-input multi-output adaptive control architecture is applied to the X-29 lateral/directional dynamics and the results are evaluated against a similar single-input single-output design approach.

  17. Spatio-temporal modelling of climate-sensitive disease risk: Towards an early warning system for dengue in Brazil

    NASA Astrophysics Data System (ADS)

    Lowe, Rachel; Bailey, Trevor C.; Stephenson, David B.; Graham, Richard J.; Coelho, Caio A. S.; Sá Carvalho, Marilia; Barcellos, Christovam

    2011-03-01

    This paper considers the potential for using seasonal climate forecasts in developing an early warning system for dengue fever epidemics in Brazil. In the first instance, a generalised linear model (GLM) is used to select climate and other covariates which are both readily available and prove significant in prediction of confirmed monthly dengue cases based on data collected across the whole of Brazil for the period January 2001 to December 2008 at the microregion level (typically consisting of one large city and several smaller municipalities). The covariates explored include temperature and precipitation data on a 2.5°×2.5° longitude-latitude grid with time lags relevant to dengue transmission, an El Niño Southern Oscillation index and other relevant socio-economic and environmental variables. A negative binomial model formulation is adopted in this model selection to allow for extra-Poisson variation (overdispersion) in the observed dengue counts caused by unknown/unobserved confounding factors and possible correlations in these effects in both time and space. Subsequently, the selected global model is refined in the context of the South East region of Brazil, where dengue predominates, by reverting to a Poisson framework and explicitly modelling the overdispersion through a combination of unstructured and spatio-temporal structured random effects. The resulting spatio-temporal hierarchical model (or GLMM—generalised linear mixed model) is implemented via a Bayesian framework using Markov Chain Monte Carlo (MCMC). Dengue predictions are found to be enhanced both spatially and temporally when using the GLMM and the Bayesian framework allows posterior predictive distributions for dengue cases to be derived, which can be useful for developing a dengue alert system. Using this model, we conclude that seasonal climate forecasts could have potential value in helping to predict dengue incidence months in advance of an epidemic in South East Brazil.
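
The negative binomial choice described above can be illustrated numerically. The sketch below (simulated counts, not the paper's dengue data) builds negative binomial counts as a Poisson whose rate is itself gamma-distributed, the standard mixture construction, and shows that their variance exceeds their mean, which is exactly the extra-Poisson variation (overdispersion) the model formulation is meant to absorb.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, k = 10.0, 2.0          # mean and negative binomial dispersion parameter

# Pure Poisson counts: variance equals the mean
poisson = rng.poisson(mu, size=100_000)

# Negative binomial via a gamma-distributed rate (E[rate] = mu):
# unobserved confounders make the rate itself random, inflating the variance
rates = rng.gamma(shape=k, scale=mu / k, size=100_000)
negbin = rng.poisson(rates)

print(poisson.mean(), poisson.var())   # variance close to the mean
print(negbin.mean(), negbin.var())     # variance close to mu + mu**2 / k
```

A Poisson fit to such overdispersed counts would understate the uncertainty of the predictions, which motivates both the negative binomial selection stage and the later explicit random-effects modelling of the overdispersion.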

  18. The choices, choosing model of quality of life: description and rationale.

    PubMed

    Gurland, Barry J; Gurland, Roni V

    2009-01-01

    This introductory paper offers a critical review of current models and measures of quality of life, and describes a choices and choosing (c-c) process as a new model of quality of life. Criteria are proposed for judging the relative merits of models of quality of life, with preference given to explicit mechanisms, linkages to a science base, a means of identifying deficits amenable to rational restorative interventions, and embedded values of the whole person. A conjectured model, based on the processes of gaining access to choices and choosing among them, matches the proposed criteria. The c-c process is an evolved adaptive mechanism dedicated to the pursuit of quality of life, driven by specific biological and psychological systems, and influenced by social and environmental forces. This model strengthens the science base for the field of quality of life, unifies approaches to concept and measurement, and guides the evaluation of impairments of quality of life. Corresponding interventions can be aimed at relieving restrictions or distortions of the c-c process, thus helping people to preserve and improve their quality of life. RELATED WORK: Companion papers detail relevant aspects of the science base, present methods of identifying deficits and distortions of the c-c model so as to open opportunities for rational restorative interventions, and explore empirical analyses of the relationship between health-imposed restrictions of c-c and conventional indicators of diminished quality of life. (c) 2008 John Wiley & Sons, Ltd.

  19. The choices, choosing model of quality of life: linkages to a science base.

    PubMed

    Gurland, Barry J; Gurland, Roni V

    2009-01-01

    A previous paper began with a critical review of current models and measures of quality of life and then proposed criteria for judging the relative merits of alternative models: preference was given to finding a model with explicit mechanisms, linkages to a science base, a means of identifying deficits amenable to rational restorative interventions, and with embedded values of the whole person. A conjectured model, based on the processes of accessing choices and choosing among them, matched the proposed criteria. The choices and choosing (c-c) process is an evolved adaptive mechanism dedicated to the pursuit of quality of life, driven by specific biological and psychological systems, and influenced also by social and environmental forces. In this paper the c-c model is examined for its potential to strengthen the science base for the field of quality of life and thus to unify many approaches to concept and measurement. A third paper in this set will lay out a guide to applying the c-c model in evaluating impairments of quality of life and will tie this evaluation to corresponding interventions aimed at relieving restrictions or distortions of the c-c process; thus helping people to preserve and improve their quality of life. The fourth paper will demonstrate empirical analyses of the relationship between health imposed restrictions of options for living and conventional indicators of diminished quality of life. (c) 2008 John Wiley & Sons, Ltd.

  20. Short-Range Prediction of Monsoon Precipitation by NCMRWF Regional Unified Model with Explicit Convection

    NASA Astrophysics Data System (ADS)

    Mamgain, Ashu; Rajagopal, E. N.; Mitra, A. K.; Webster, S.

    2018-03-01

    There are increasing efforts towards the prediction of high-impact weather systems and understanding of the related dynamical and physical processes. High-resolution numerical model simulations can be used directly to model the impact in fine-scale detail, and improvements in forecast accuracy can help in disaster management planning and execution. The National Centre for Medium Range Weather Forecasting (NCMRWF) has implemented a high-resolution regional unified modeling system with explicit convection, embedded within a coarser-resolution global model with parameterized convection. The model configurations are based on the UK Met Office unified seamless modeling system. Recent land use/land cover data (2012-2013) obtained from the Indian Space Research Organisation (ISRO) are also used in the model simulations. Results based on a month of short-range forecasts from both the global and regional models over India indicate that convection-permitting simulations by the high-resolution regional model are able to reduce the dry bias over the southern parts of the West Coast and the monsoon trough zone, with more intense rainfall mainly towards the northern parts of the monsoon trough zone. The regional model with explicit convection has significantly improved the phase of the diurnal cycle of rainfall compared to the global model. Results from two monsoon depression cases during the study period show substantial improvement in the details of the rainfall pattern. Many rainfall categories defined for operational forecast purposes by Indian forecasters are also well represented by the convection-permitting high-resolution simulations. For the statistics of the number of days within each rain category between `No-Rain' and `Heavy Rain', the regional model outperforms the global model in all ranges. In the very heavy and extremely heavy categories, the regional simulations overestimate the number of rainfall days.
The global model with parameterized convection has a tendency to overestimate light-rainfall days and underestimate heavy-rain days compared to the observation data.

  1. Socio-hydrologic Modeling to Understand and Mediate the Competition for Water between Humans and Ecosystems: Murrumbidgee River Basin, Australia (Invited)

    NASA Astrophysics Data System (ADS)

    Sivapalan, M.

    2013-12-01

    Competition for water between humans and ecosystems is set to become a flash point in the coming decades in all parts of the world. An entirely new and comprehensive quantitative framework is needed to establish a holistic understanding of that competition, thereby enabling the development of effective mediation strategies. This paper presents a case study centered on the Murrumbidgee river basin in eastern Australia that illustrates the dynamics of the balance between water extraction and use for food production and efforts to mitigate and reverse the consequent degradation of the riparian environment. Interactions between patterns of water management and climate-driven hydrological variability within the prevailing socio-economic environment have contributed to the emergence of new whole-system dynamics over the last 100 years. In particular, data analysis reveals a pendulum swing: an exclusive focus on agricultural development and food production, with its attendant socio-economic benefits, in the initial stages of water resource development; the gradual realization of the adverse environmental impacts and efforts to mitigate them with remedial measures; and, ultimately, concerted efforts and externally imposed solutions to restore environmental health and ecosystem services. A quasi-distributed coupled socio-hydrologic system model that explicitly includes the two-way coupling between human and hydrological systems, including the evolution of human values/norms relating to water and the environment, is able to mimic broad features of this pendulum swing. The model consists of coupled nonlinear differential equations with four state variables describing the co-evolution of storage capacity, irrigated area, human population, and ecosystem health.
The model is used to generate insights into the dominant controls of the trajectory of co-evolution of the coupled human-water system, to serve as the theoretical framework for more detailed analysis of the system, and to generate organizing principles that may be transferable to other systems in different climatic and socio-economic settings.
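
The structure of such a coupled model can be sketched schematically. The equations, parameter values, and feedback forms below are hypothetical stand-ins (they are not the paper's equations): four state variables are advanced with forward Euler, and ecosystem degradation feeds back on further water development, the mechanism behind the "pendulum swing".

```python
import numpy as np

def simulate(years=100, dt=0.1):
    # K: storage capacity, A: irrigated area, P: population, E: ecosystem health
    K, A, P, E = 0.1, 0.1, 0.5, 1.0
    out = []
    for t in np.arange(0, years, dt):
        demand = P * A
        dK = 0.05 * demand * E - 0.01 * K        # expansion slows as E degrades
        dA = 0.04 * K * E - 0.02 * A * (1 - E)   # area retired once damage is felt
        dP = 0.02 * P * (A - 0.5 * P)            # population follows the farm economy
        dE = 0.03 * (1 - E) - 0.05 * demand * E  # recovery vs. extraction pressure
        K += dt * dK; A += dt * dA; P += dt * dP; E += dt * dE
        E = min(max(E, 0.0), 1.0)                # health bounded in [0, 1]
        out.append((t, K, A, P, E))
    return np.array(out)

traj = simulate()
print(traj[-1])   # final (t, K, A, P, E)
```

The point of the sketch is the coupling pattern, each state's rate depends on the others, not the particular trajectory; calibrating such equations against a century of basin data is the substantive work the abstract describes.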

  2. TERSSE: Definition of the Total Earth Resources System for the Shuttle Era. Volume 7: User Models: A System Assessment

    NASA Technical Reports Server (NTRS)

    1974-01-01

    User models, defined as any explicit process or procedure used to transform information extracted from remotely sensed data into a form useful as a resource-management information input, are discussed. The role of user models as information, technological, and operational interfaces between the TERSSE and the resource managers is emphasized. It is recommended that guidelines and management strategies be developed for a systems approach to user model development.

  3. Accident/Mishap Investigation System

    NASA Technical Reports Server (NTRS)

    Keller, Richard; Wolfe, Shawn; Gawdiak, Yuri; Carvalho, Robert; Panontin, Tina; Williams, James; Sturken, Ian

    2007-01-01

    InvestigationOrganizer (IO) is a Web-based collaborative information system that integrates the generic functionality of a database, a document repository, a semantic hypermedia browser, and a rule-based inference system with specialized modeling and visualization functionality to support accident/mishap investigation teams. This accessible, online structure is designed to support investigators by allowing them to make explicit, shared, and meaningful links among evidence, causal models, findings, and recommendations.

  4. Selection of fire spread model for Russian fire behavior prediction system

    Treesearch

    Alexandra V. Volokitina; Kevin C. Ryan; Tatiana M. Sofronova; Mark A. Sofronov

    2010-01-01

    Mathematical modeling of fire behavior prediction is only possible if the models are supplied with an information database that provides spatially explicit input parameters for modeled area. Mathematical models can be of three kinds: 1) physical; 2) empirical; and 3) quasi-empirical (Sullivan, 2009). Physical models (Grishin, 1992) are of academic interest only because...

  5. Biomechanical model for computing deformations for whole-body image registration: A meshless approach.

    PubMed

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-12-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time-consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2D models and computing single organ deformations. In this study, 3D comprehensive patient-specific nonlinear biomechanical models implemented using meshless Total Lagrangian explicit dynamics algorithms are applied to predict a 3D deformation field for whole-body image registration. Unlike a conventional approach that requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the fuzzy c-means algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. Copyright © 2016 John Wiley & Sons, Ltd.
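
The fuzzy c-means step mentioned above, assigning properties through soft cluster memberships rather than a hard segmentation, can be illustrated on a 1-D toy problem. This is a purely illustrative sketch of the standard fuzzy c-means iteration, not the paper's implementation: two synthetic intensity populations stand in for two tissue types, and each sample receives a membership in every cluster.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50):
    # spread the initial centers across the data range
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # (n, c) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                  # inverse-distance weights
        u = u / u.sum(axis=1, keepdims=True)                # memberships sum to 1
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u, centers

# two intensity populations standing in for two tissue types
x = np.concatenate([np.random.default_rng(1).normal(0.2, 0.02, 200),
                    np.random.default_rng(2).normal(0.8, 0.02, 200)])
u, centers = fuzzy_c_means(x)
print(np.sort(centers))   # close to the two population means, 0.2 and 0.8
```

In the registration context, material parameters can then be blended per point according to the membership vector `u`, which is how a hard segmentation step is avoided.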

  6. New jobs old roles - working for prevention in a whole-system model of health and social care for older people.

    PubMed

    Smith, Naomi; Barnes, Marian

    2013-01-01

    The Partnerships for Older People Projects programme provided government funding for local and health authorities to pilot prevention and intervention services in partnership with the voluntary sector and older people between 2006 and 2009. This local evaluation of a pilot in southern England undertaken between 2007 and 2009 used a Theory of Change approach to gathering and reflecting on data with different groups involved in the delivery of this whole-system based model of prevention. The model was delivered in the same way in seven social services locality areas within a large county authority. The method of data gathering enabled structured reflection on the implementation, development and projected outcomes of the model and a consideration of the key learning of working in a whole-system way with partners and stakeholders. The whole-system model, although complex and challenging to implement, was considered overall to have been a success and provided significant learning for partners and stakeholders on the challenges and benefits of working across professional and sectoral boundaries. New posts were created as part of the model. Two of these, recruited to and managed by voluntary sector partners, were identified as 'new jobs', but echoed 'old roles' within community and voluntary sector based health and social care. The authors reflect on the parallels of these roles with previously existing roles and ways of working and reflect on how the whole-system approach of this particular pilot enabled these new jobs to develop in particularly appropriate and successful ways. © 2012 Blackwell Publishing Ltd.

  7. A COUPLED LAND-SURFACE AND DRY DEPOSITION MODEL AND COMPARISON TO FIELD MEASUREMENTS OF SURFACE HEAT, MOISTURE, AND OZONE FLUXES

    EPA Science Inventory

    We have developed a coupled land-surface and dry deposition model for realistic treatment of surface fluxes of heat, moisture, and chemical dry deposition within a comprehensive air quality modeling system. A new land-surface model (LSM) with explicit treatment of soil moisture...

  8. Hydroclimatology of Dual Peak Cholera Incidence in Bengal Region: Inferences from a Spatial Explicit Model

    NASA Astrophysics Data System (ADS)

    Bertuzzo, E.; Mari, L.; Righetto, L.; Casagrandi, R.; Gatto, M.; Rodriguez-Iturbe, I.; Rinaldo, A.

    2010-12-01

    The seasonality of cholera and its relation to environmental drivers are receiving increasing interest and research effort, yet they remain unsatisfactorily understood. A striking example is the observed annual cycle of cholera incidence in the Bengal region, which exhibits two peaks even though the main environmental drivers that have been linked to the disease (air and sea surface temperature, zooplankton density, river discharge) follow a synchronous single-peak annual pattern. A first outbreak, mainly affecting the coastal regions, occurs in spring and is followed, after a period of low incidence during the summer, by a second, usually larger, peak in autumn that also involves regions situated farther inland. A hydroclimatological explanation for this unique seasonal cycle has recently been proposed: the low spring river flows favor the intrusion of brackish water (the natural environment of the causative agent of the disease), which, in turn, triggers the first outbreak. The rising summer river discharges have a temporary dilution effect and prompt the repulsion of contaminated water, which lowers the disease incidence. However, the monsoon flooding, together with the induced crowding of the population and the failure of sanitation systems, can facilitate the spatial transmission of the disease and promote the autumn outbreak. We test this hypothesis using a mechanistic, spatially explicit model of cholera epidemics. The framework directly accounts for the role of the river network in transporting and redistributing cholera bacteria among human communities, as well as for the annual fluctuation of the river flow. The model is forced with the actual environmental drivers of the region, namely river flow and temperature. Our results show that these two drivers, both having a single peak in the summer, can generate a double-peak cholera incidence pattern.
Besides temporal patterns, the model is also able to qualitatively reproduce spatial patterns characterized by a spring peak confined to the coastal area and an autumn peak involving the whole region. The modeling exercise allows us to identify the relevant processes and to understand how they combine to generate this peculiar pattern. Finally, the range of epidemiological and hydrological conditions under which dual or single peaks are expected is quantified.
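
The non-spatial building block of such mechanistic cholera models can be sketched in a few lines. The sketch below is a minimal Codeço-style susceptible-infected-bacteria (S-I-B) loop with invented parameter values and a single-peaked seasonal forcing; the paper's dual peak arises from coupling many such units along the river network, which is not reproduced here.

```python
import numpy as np

def run(days=365, dt=0.1):
    # S: susceptibles, I: infected, B: environmental vibrio concentration
    N = 10_000.0
    S, I, B = N, 1.0, 0.0
    out = []
    for t in np.arange(0, days, dt):
        season = 0.5 + 0.5 * np.sin(2 * np.pi * t / 365)  # single-peak driver
        force = season * B / (B + 1.0)                    # saturating dose-response
        dS = -0.001 * force * S
        dI = 0.001 * force * S - 0.1 * I                  # recovery rate 0.1 / day
        dB = 0.5 * I - 0.3 * B                            # shedding vs. bacterial decay
        S += dt * dS; I += dt * dI; B += dt * dB
        out.append((t, S, I, B))
    return np.array(out)

traj = run()
print(traj[:, 2].max())   # peak infected count driven by the environmental reservoir
```

The human-to-environment-to-human loop (`I` feeds `B`, `B` feeds the force of infection) is the feature that lets hydrology, which transports `B` between communities, shape the epidemic timing.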

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolić, Hrvoje, E-mail: hnikolic@irb.hr

    An argument by Banks, Susskind and Peskin (BSP), according to which a violation of unitarity would violate either locality or energy-momentum conservation, is widely believed to be a strong argument against non-unitarity of Hawking radiation. We find that the whole BSP argument rests on the crucial assumption that the Hamiltonian is not highly degenerate, and point out that this assumption is not satisfied for systems with many degrees of freedom. Using the Lindblad equation, we show that high degeneracy of the Hamiltonian allows local non-unitary evolution without violating energy-momentum conservation. Moreover, since energy-momentum is the source of gravity, we argue that energy-momentum is necessarily conserved for a large class of non-unitary systems with gravity. Finally, we explicitly calculate the Lindblad operators for non-unitary Hawking radiation and show that they conserve energy-momentum.
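
For reference, the Lindblad master equation invoked above has the standard textbook form, with ρ the density matrix, H the Hamiltonian, and L_k the Lindblad operators (the paper's specific operators for Hawking radiation are not reproduced here):

```latex
\frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho]
\;+\; \sum_k \Big( L_k\,\rho\,L_k^{\dagger} \;-\; \tfrac{1}{2}\,\{ L_k^{\dagger} L_k,\; \rho \} \Big).
```

The first term is the usual unitary evolution; the sum encodes the non-unitary part whose compatibility with energy-momentum conservation is the subject of the abstract.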

  10. Spatially explicit West Nile virus risk modeling in Santa Clara County, California

    USDA-ARS?s Scientific Manuscript database

    A previously created Geographic Information Systems model designed to identify regions of West Nile virus (WNV) transmission risk is tested and calibrated in Santa Clara County, California. American Crows that died from WNV infection in 2005 provide the spatial and temporal ground truth. Model param...

  11. Efficient Translation of LTL Formulae into Büchi Automata

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Lerda, Flavio

    2001-01-01

    Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL) and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit-state model checker under development at the NASA Ames Research Center.

  12. Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence

    NASA Astrophysics Data System (ADS)

    Laurie, J.; Bouchet, F.; Zaboronski, O.

    2012-12-01

    We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to obtain large deviation results in dynamical systems forced by random noise. In the simplest cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is, however, usually extremely limited by the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance, as the dynamics undergo qualitative macroscopic changes during such transitions.
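
For orientation, in the simplest Freidlin-Wentzell setting mentioned above, a diffusion with weak noise and identity noise covariance, the action (large deviation rate) functional takes the standard quadratic form below; this is a textbook expression, not the paper's specific beta-plane action:

```latex
dx = b(x)\,dt + \sqrt{\varepsilon}\,dW, \qquad
I_T[x] = \frac{1}{2}\int_0^T \bigl| \dot{x}(t) - b\bigl(x(t)\bigr) \bigr|^2\,dt, \qquad
\mathbb{P}[x] \asymp e^{-I_T[x]/\varepsilon}.
```

The instanton is the path minimizing $I_T$ subject to the rare-event constraint; the work described in the abstract finds such minimizers explicitly for the barotropic beta-plane model.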

  13. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE PAGES

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-22

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.
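
The size-intensivity requirement the abstract tests can be stated compactly (notation ours): for $n$ identical, non-interacting molecules, the ionization potential computed from total energies of the cationic and neutral systems should not depend on $n$,

```latex
\mathrm{IP}_n \;=\; E_n^{+} - E_n^{0} \;\overset{!}{=}\; \mathrm{IP}_1 \qquad \text{for all } n,
```

whereas the delocalization error of local and semi-local functionals spreads the hole over all $n$ copies, so the computed $\mathrm{IP}_n$ decreases as $n$ grows.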

  14. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sosa Vazquez, Xochitl A.; Isborn, Christine M., E-mail: cisborn@ucmerced.edu

    2015-12-28

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. In vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  15. Modeling fuels and fire effects in 3D: Model description and applications

    Treesearch

    Francois Pimont; Russell Parsons; Eric Rigolot; Francois de Coligny; Jean-Luc Dupuy; Philippe Dreyfus; Rodman R. Linn

    2016-01-01

    Scientists and managers critically need ways to assess how fuel treatments alter fire behavior, yet few tools currently exist for this purpose. We present a spatially explicit fuel-modeling system, FuelManager, which models fuels, vegetation growth, fire behavior (using a physics-based model, FIRETEC), and fire effects. FuelManager's flexible approach facilitates...

  16. Interrelations between different canonical descriptions of dissipative systems

    NASA Astrophysics Data System (ADS)

    Schuch, D.; Guerrero, J.; López-Ruiz, F. F.; Aldaya, V.

    2015-04-01

    There are many approaches for the description of dissipative systems coupled to some kind of environment. This environment can be described in different ways; only effective models are being considered here. In the Bateman model, the environment is represented by one additional degree of freedom and the corresponding momentum. In two other canonical approaches, no environmental degree of freedom appears explicitly, but the canonical variables are connected with the physical ones via non-canonical transformations. The link between the Bateman approach and those without additional variables is achieved via comparison with a canonical approach using expanding coordinates, as, in this case, both Hamiltonians are constants of motion. This leads to constraints that allow for the elimination of the additional degree of freedom in the Bateman approach. These constraints are not unique. Several choices are studied explicitly, and the consequences for the physical interpretation of the additional variable in the Bateman model are discussed.

  17. An Integrated Ecological Modeling System for Assessing ...

    EPA Pesticide Factsheets

    We demonstrate a novel, spatially explicit assessment of the current condition of aquatic ecosystem services, with limited sensitivity analysis for the atmospheric contaminant mercury. The Integrated Ecological Modeling System (IEMS) forecasts water quality and quantity, habitat suitability for aquatic biota, fish biomasses, population densities, productivities, and contamination by methylmercury across headwater watersheds. We applied this IEMS to the Coal River Basin (CRB), West Virginia (USA), an 8-digit hydrologic unit watershed, by simulating a network of 97 stream segments using the SWAT watershed model, a watershed mercury loading model, the WASP water quality model, the PiSCES fish community estimation model, a fish habitat suitability model, the BASS fish community and bioaccumulation model, and an ecoservices post-processor. Model application was facilitated by automated data retrieval and model setup and updated model wrappers and interfaces for data transfers between these models from a prior study. This companion study evaluates baseline predictions of ecoservices provided for 1990-2010 for the population of streams in the CRB and serves as a foundation for future model development. Published in the journal, Ecological Modeling. Highlights: • Demonstrate a spatially-explicit IEMS for multiple scales. • Design a flexible IEMS for

  18. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability but compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model and the capabilities of the toolkit, and discusses its evolution.
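The get/compute/put pattern described in this abstract can be mocked in a few lines. This is an illustrative Python stand-in, not the real toolkit, whose API is C/Fortran (one-sided operations such as NGA_Get and NGA_Put); the point is that the programmer explicitly copies a patch of the "global" array into local storage, computes locally, and writes back:

```python
import numpy as np

class GlobalArray:
    """Stand-in for a distributed global array with explicit transfers
    (illustrative mock of the Global Arrays programming model)."""
    def __init__(self, shape):
        self._data = np.zeros(shape)          # pretend this is distributed

    def get(self, lo, hi):
        """Copy the global patch [lo, hi) into a local buffer (remote -> local)."""
        return self._data[lo[0]:hi[0], lo[1]:hi[1]].copy()

    def put(self, lo, hi, buf):
        """Write a local buffer back to the global patch (local -> remote)."""
        self._data[lo[0]:hi[0], lo[1]:hi[1]] = buf

ga = GlobalArray((8, 8))
patch = ga.get((0, 0), (4, 4))   # explicit transfer to fast local memory
patch += 1.0                     # compute on local data (locality of reference)
ga.put((0, 0), (4, 4), patch)    # explicit write-back
print(ga.get((0, 0), (8, 8)).sum())   # -> 16.0
```

Making the transfers explicit, rather than hiding them behind ordinary loads and stores, is what lets the programmer reason about the remote-access cost the abstract emphasizes.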

  19. Are stormwater pollution impacts significant in life cycle assessment? A new methodology for quantifying embedded urban stormwater impacts.

    PubMed

    Phillips, Robert; Jeswani, Harish Kumar; Azapagic, Adisa; Apul, Defne

    2018-09-15

    Current life cycle assessment (LCA) models do not explicitly incorporate the impacts of urban stormwater pollution. To address this issue, a framework to estimate the impacts of urban stormwater pollution over the lifetime of a system has been developed, laying the groundwork for subsequent improvements in life cycle databases and LCA modelling. The proposed framework incorporates urban stormwater event mean concentration (EMC) data into existing LCA impact categories to account for the environmental impacts associated with urban land occupation across the whole life cycle of a system. It consists of five steps: (1) compilation of an inventory of urban stormwater pollutants; (2) collection of precipitation data; (3) classification and characterisation within existing midpoint impact categories; (4) collation of inventory data for impermeable urban land occupation; and (5) impact assessment. The framework is generic and can be applied to any system using any LCA impact method. Its application is demonstrated by two illustrative case studies: electricity generation and production of construction materials. The results show that pollutants in urban stormwater influence human toxicity, freshwater and marine ecotoxicity, marine eutrophication, freshwater eutrophication and terrestrial ecotoxicity. Among these, urban stormwater pollution has the highest relative contribution to the eutrophication potentials. The results also suggest that stormwater pollution from urban areas can have a substantial effect on the life cycle impacts of some systems (e.g. construction materials), while for others the effect is small (e.g. electricity generation). However, it is not possible to determine a priori which systems are affected; the impacts of stormwater pollution should therefore be considered routinely in future LCA studies. The paper also proposes ways to incorporate stormwater pollution burdens into life cycle databases.
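The five-step framework summarized above can be sketched as a back-of-the-envelope calculation. All numbers below are hypothetical placeholders, not values from the paper; the structure is load = EMC x runoff volume, followed by midpoint characterisation:

```python
# Hedged sketch of the five-step EMC framework with hypothetical numbers.
emc = {"total_N": 2.0e-3, "total_P": 0.3e-3}    # kg/m^3, step 1: pollutant EMCs
annual_precip = 1.0                              # m/yr, step 2: precipitation
cf_marine_eutroph = {"total_N": 0.3,             # kg N-eq/kg, step 3: midpoint
                     "total_P": 0.0}             # characterisation factors
impermeable_area = 5000.0                        # m^2, step 4: land occupation
runoff_coeff = 0.9                               # impermeable surfaces shed most rain

runoff_volume = annual_precip * impermeable_area * runoff_coeff   # m^3/yr

# Step 5: annual marine eutrophication impact embedded in the land occupation.
impact = sum(emc[p] * runoff_volume * cf_marine_eutroph[p] for p in emc)
print(round(impact, 3))   # kg N-eq per year -> 2.7
```

In a full LCA the annual figure would be multiplied by the occupation time of the system and added to the existing midpoint category totals.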

  20. A review on symmetries for certain Aedes aegypti models

    NASA Astrophysics Data System (ADS)

    Freire, Igor Leite; Torrisi, Mariano

    2015-04-01

    We summarize our results related with mathematical modeling of Aedes aegypti and its Lie symmetries. Moreover, some explicit, group-invariant solutions are also shown. Weak equivalence transformations of more general reaction diffusion systems are also considered. New classes of solutions are obtained.

  1. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu

    2014-01-21

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
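The stochastic dynamics of a single collective solvent coordinate can be illustrated with a minimal Euler-Maruyama integration. This is a simplification of what the abstract describes (a single overdamped relaxation time, no inertial term, no coupling to electronic surface hopping), and all parameter values are hypothetical:

```python
import numpy as np

# Overdamped Langevin dynamics of a collective solvent coordinate z:
#   dz = -(z / tau) dt + sqrt(2 * sigma2 / tau) dW,
# which relaxes on time scale tau toward a stationary variance sigma2.
rng = np.random.default_rng(0)
tau = 1.0         # hypothetical solvent relaxation time
sigma2 = 0.5      # hypothetical stationary variance set by temperature
dt = 0.01
n_steps = 200_000

z = 0.0
samples = np.empty(n_steps)
for i in range(n_steps):
    noise = np.sqrt(2.0 * sigma2 * dt / tau) * rng.standard_normal()
    z += -z / tau * dt + noise      # drift toward equilibrium + thermal kick
    samples[i] = z

# After many relaxation times, the sampled variance approaches sigma2.
print(round(samples[n_steps // 2:].var(), 2))
```

In the paper's setting, the drift would come from the diabatic free-energy surface of the currently occupied electronic state, with hops between states handled by the surface hopping algorithm.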

  2. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

    The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge-elements to eliminate vector parasites and the divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem of the regularized misfit function. To avoid the huge memory requirement and very long run time of computing the Jacobian sensitivity matrix for the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, each of which can be transformed into a pseudo-forward modeling problem. This avoids explicit calculation and storage of the full Jacobian matrix, leading to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with horizontal and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.
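The matrix-free idea in this abstract, running CG with only Jacobian-vector products instead of an assembled Jacobian, can be sketched generically. The random matrix J below is a stand-in for the sensitivity operator (in the MT inversion each product J@v or J.T@w would cost one pseudo-forward model run); the regularised normal equations and the CG loop are textbook:

```python
import numpy as np

# Solve the Gauss-Newton normal equations (J^T J + lam*I) x = J^T d using
# only the callable products J@v and J.T@w, never forming J^T J.
rng = np.random.default_rng(1)
J = rng.standard_normal((50, 20))        # stand-in for the sensitivity matrix
d = rng.standard_normal(50)              # stand-in data vector
lam = 0.1                                # regularisation weight

Jv = lambda v: J @ v                     # "pseudo-forward modeling" product 1
JTw = lambda w: J.T @ w                  # "pseudo-forward modeling" product 2
A = lambda v: JTw(Jv(v)) + lam * v       # implicit normal-equations operator
b = JTw(d)

# Textbook conjugate gradient on the implicit SPD operator A.
x = np.zeros(20)
r = b - A(x)
p = r.copy()
rs = r @ r
for _ in range(200):
    Ap = A(p)
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-10:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

print(np.allclose(J.T @ (J @ x) + lam * x, b))   # -> True
```

Only two matvec routines and a handful of vectors are stored, which is exactly the memory saving the abstract claims for avoiding the full Jacobian.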

  3. Spelling Mastery. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2014

    2014-01-01

    "Spelling Mastery" is designed to explicitly teach spelling skills to students in grades 1 through 6. One of several Direct Instruction curricula from McGraw-Hill that precisely specify how to teach incremental content, "Spelling Mastery" includes phonemic, morphemic, and whole-word strategies. The What Works Clearinghouse…

  4. Cerebral cartography and connectomics

    PubMed Central

    Sporns, Olaf

    2015-01-01

    Cerebral cartography and connectomics pursue similar goals in attempting to create maps that can inform our understanding of the structural and functional organization of the cortex. Connectome maps explicitly aim at representing the brain as a complex network, a collection of nodes and their interconnecting edges. This article reflects on some of the challenges that currently arise in the intersection of cerebral cartography and connectomics. Principal challenges concern the temporal dynamics of functional brain connectivity, the definition of areal parcellations and their hierarchical organization into large-scale networks, the extension of whole-brain connectivity to cellular-scale networks, and the mapping of structure/function relations in empirical recordings and computational models. Successfully addressing these challenges will require extensions of methods and tools from network science to the mapping and analysis of human brain connectivity data. The emerging view that the brain is more than a collection of areas, but is fundamentally operating as a complex networked system, will continue to drive the creation of ever more detailed and multi-modal network maps as tools for on-going exploration and discovery in human connectomics. PMID:25823870

  5. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking

    PubMed Central

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.

  6. Genomic Prediction Accounting for Residual Heteroskedasticity.

    PubMed

    Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M

    2015-11-12

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation, was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models, although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially for individuals of extreme genetic merit.
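The core modeling choice here, replacing a homogeneous residual variance with observation-specific variances, can be sketched with a much simpler estimator than the paper's hierarchical Bayesian models: a weighted ridge (SNP-BLUP-style) regression where the weights are inverse residual variances. Everything below is a simulation with hypothetical parameters:

```python
import numpy as np

# Weighted ridge estimate of marker effects, solving
#   (Z' R^{-1} Z + lam*I) g = Z' R^{-1} y,   R = diag(sigma_i^2).
rng = np.random.default_rng(2)
n, m = 200, 50
Z = rng.choice([0.0, 1.0, 2.0], size=(n, m))      # SNP genotype codes
g_true = rng.normal(0.0, 0.2, m)                  # simulated marker effects
sigma = np.where(rng.random(n) < 0.5, 0.5, 2.0)   # heteroskedastic residual SD
y = Z @ g_true + rng.normal(0.0, sigma)

def snp_blup(Z, y, weights, lam=10.0):
    """Ridge solution with per-observation residual weights (R^{-1})."""
    W = np.diag(weights)
    A = Z.T @ W @ Z + lam * np.eye(Z.shape[1])
    return np.linalg.solve(A, Z.T @ W @ y)

g_hom = snp_blup(Z, y, np.ones(n))            # assumes homogeneous variance
g_het = snp_blup(Z, y, 1.0 / sigma**2)        # down-weights noisy records

# Accuracy as correlation between estimated and simulated effects.
acc = lambda g: np.corrcoef(g, g_true)[0, 1]
print(round(acc(g_hom), 3), round(acc(g_het), 3))
```

Down-weighting high-variance records is the frequentist analogue of the residual model the paper fits; the paper's gains were modest overall but mattered most for animals of extreme genetic merit.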

  7. An image-based skeletal dosimetry model for the ICRP reference adult male—internal electron sources

    NASA Astrophysics Data System (ADS)

    Hough, Matthew; Johnson, Perry; Rajon, Didier; Jokisch, Derek; Lee, Choonsik; Bolch, Wesley

    2011-04-01

    In this study, a comprehensive electron dosimetry model of the adult male skeletal tissues is presented. The model is constructed using the University of Florida adult male hybrid phantom of Lee et al (2010 Phys. Med. Biol. 55 339-63) and the EGSnrc-based Paired Image Radiation Transport code of Shah et al (2005 J. Nucl. Med. 46 344-53). Target tissues include the active bone marrow, associated with radiogenic leukemia, and total shallow marrow, associated with radiogenic bone cancer. Monoenergetic electron emissions are considered over the energy range 1 keV to 10 MeV for the following sources: bone marrow (active and inactive), trabecular bone (surfaces and volumes), and cortical bone (surfaces and volumes). Specific absorbed fractions are computed according to the MIRD schema, and are given as skeletal-averaged values in the paper with site-specific values reported in both tabular and graphical format in an electronic annex available from http://stacks.iop.org/0031-9155/56/2309/mmedia. The distribution of cortical bone and spongiosa at the macroscopic dimensions of the phantom, as well as the distribution of trabecular bone and marrow tissues at the microscopic dimensions of the phantom, is imposed through detailed analyses of whole-body ex vivo CT images (1 mm resolution) and spongiosa-specific ex vivo microCT images (30 µm resolution), respectively, taken from a 40-year-old male cadaver. The method utilized in this work includes: (1) explicit accounting for changes in marrow self-dose with variations in marrow cellularity, (2) explicit accounting for electron escape from spongiosa, (3) explicit consideration of spongiosa cross-fire from cortical bone, and (4) explicit consideration of the ICRP's change in the surrogate tissue region defining the location of the osteoprogenitor cells (from a 10 µm endosteal layer covering the trabecular and cortical surfaces to a 50 µm shallow marrow layer covering trabecular and medullary cavity surfaces).
Skeletal-averaged values of absorbed fraction in the present model are noted to be very compatible with those weighted by the skeletal tissue distributions found in the ICRP Publication 110 adult male and female voxel phantoms, but are in many cases incompatible with values used in current and widely implemented internal dosimetry software.

  8. Design and numerical evaluation of full-authority flight control systems for conventional and thruster-augmented helicopters employed in NOE operations

    NASA Technical Reports Server (NTRS)

    Perri, Todd A.; Mckillip, R. M., Jr.; Curtiss, H. C., Jr.

    1987-01-01

    The development methodology is presented for full-authority implicit model-following and explicit model-following optimal controllers for use on helicopters operating in the Nap-of-the-Earth (NOE) environment. Pole placement, input-output frequency response, and step input response were used to evaluate handling qualities performance. The pilot was equipped with velocity-command inputs. A mathematical/computational trajectory optimization method was employed to evaluate the ability of each controller to fly NOE maneuvers. The method determines the optimal swashplate and thruster input histories from the helicopter's dynamics and the prescribed geometry and desired flying qualities of the maneuver. Three maneuvers were investigated for both the implicit and explicit controllers, with and without auxiliary propulsion installed: pop-up/dash/descent, bob-up at 40 knots, and glideslope. The explicit controller proved to be superior to the implicit controller in performance and ease of design.

  9. Download Trim.Fate

    EPA Pesticide Factsheets

    TRIM.FaTE is a spatially explicit, compartmental mass balance model that describes the movement and transformation of pollutants over time, through a user-defined, bounded system that includes both biotic and abiotic compartments.
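The compartmental mass-balance idea behind a model like this can be illustrated in a few lines. This is a generic sketch, not TRIM.FaTE's actual equations, compartments, or API; hypothetical first-order transfer coefficients move pollutant mass between compartments, and total mass in the closed system is conserved:

```python
# Generic compartmental mass balance with first-order transfers
# (illustrative only; rate constants and compartments are hypothetical).
k = {("air", "soil"): 0.10, ("soil", "water"): 0.05,
     ("water", "soil"): 0.01}                       # 1/day rate constants
mass = {"air": 100.0, "soil": 0.0, "water": 0.0}    # initial pollutant mass (kg)

dt = 0.1                                  # days
for _ in range(1000):                     # simulate 100 days, explicit Euler
    flux = {c: 0.0 for c in mass}
    for (src, dst), rate in k.items():
        f = rate * mass[src] * dt
        flux[src] -= f                    # mass leaving the source compartment
        flux[dst] += f                    # mass arriving at the destination
    for c in mass:
        mass[c] += flux[c]

print(round(sum(mass.values()), 6))       # -> 100.0 (mass balance preserved)
```

A spatially explicit model applies the same bookkeeping over a user-defined mesh of biotic and abiotic compartments, with transfer coefficients derived from environmental process descriptions.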

  10. Towards refactoring the Molecular Function Ontology with a UML profile for function modeling.

    PubMed

    Burek, Patryk; Loebe, Frank; Herre, Heinrich

    2017-10-04

    Gene Ontology (GO) is the largest resource for cataloging gene products. This resource grows steadily and, naturally, this growth raises issues regarding the structure of the ontology. Moreover, modeling and refactoring large ontologies such as GO is generally far from being simple, as a whole as well as when focusing on certain aspects or fragments. It seems that human-friendly graphical modeling languages such as the Unified Modeling Language (UML) could be helpful in connection with these tasks. We investigate the use of UML for making the structural organization of the Molecular Function Ontology (MFO), a sub-ontology of GO, more explicit. More precisely, we present a UML dialect, called the Function Modeling Language (FueL), which is suited for capturing functions in an ontologically founded way. FueL is equipped, among other features, with language elements that arise from studying patterns of subsumption between functions. We show how to use this UML dialect for capturing the structure of molecular functions. Furthermore, we propose and discuss some refactoring options concerning fragments of MFO. FueL enables the systematic, graphical representation of functions and their interrelations, including making information explicit that is currently either implicit in MFO or is mainly captured in textual descriptions. Moreover, the considered subsumption patterns lend themselves to the methodical analysis of refactoring options with respect to MFO. On this basis we argue that the approach can increase the comprehensibility of the structure of MFO for humans and can support communication, for example, during revision and further development.

  11. How psychological science informs the teaching of reading.

    PubMed

    Rayner, K; Foorman, B R; Perfetti, C A; Pesetsky, D; Seidenberg, M S

    2001-11-01

    This monograph discusses research, theory, and practice relevant to how children learn to read English. After an initial overview of writing systems, the discussion summarizes research from developmental psychology on children's language competency when they enter school and on the nature of early reading development. Subsequent sections review theories of learning to read, the characteristics of children who do not learn to read (i.e., who have developmental dyslexia), research from cognitive psychology and cognitive neuroscience on skilled reading, and connectionist models of learning to read. The implications of the research findings for learning to read and teaching reading are discussed. Next, the primary methods used to teach reading (phonics and whole language) are summarized. The final section reviews laboratory and classroom studies on teaching reading. From these different sources of evidence, two inescapable conclusions emerge: (a) Mastering the alphabetic principle (that written symbols are associated with phonemes) is essential to becoming proficient in the skill of reading, and (b) methods that teach this principle directly are more effective than those that do not (especially for children who are at risk in some way for having difficulty learning to read). Using whole-language activities to supplement phonics instruction does help make reading fun and meaningful for children, but ultimately, phonics instruction is critically important because it helps beginning readers understand the alphabetic principle and learn new words. Thus, elementary-school teachers who make the alphabetic principle explicit are most effective in helping their students become skilled, independent readers.

  12. A Comparison of the neural correlates that underlie rule-based and information-integration category learning.

    PubMed

    Carpenter, Kathryn L; Wills, Andy J; Benattayallah, Abdelmalek; Milton, Fraser

    2016-10-01

    The influential competition between verbal and implicit systems (COVIS) model proposes that category learning is driven by two competing neural systems: an explicit, verbal system and a procedural-based, implicit system. In the current fMRI study, participants learned either a conjunctive, rule-based (RB) category structure that is believed to engage the explicit system, or an information-integration category structure that is thought to preferentially recruit the implicit system. The RB and information-integration category structures were matched for participant error rate, the number of relevant stimulus dimensions, and category separation. Under these conditions, considerable overlap in brain activation, including the prefrontal cortex, basal ganglia, and the hippocampus, was found between the RB and information-integration category structures. Contrary to the predictions of COVIS, the medial temporal lobes and in particular the hippocampus, key regions for explicit memory, were found to be more active in the information-integration condition than in the RB condition. No regions were more activated in RB than information-integration category learning. The implications of these results for theories of category learning are discussed. Hum Brain Mapp 37:3557-3574, 2016. © 2016 Wiley Periodicals, Inc.

  13. A SPATIALLY EXPLICIT HIERARCHICAL APPROACH TO MODELING COMPLEX ECOLOGICAL SYSTEMS: THEORY AND APPLICATIONS. (R827676)

    EPA Science Inventory

    Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that comp...

  14. Modular and Spatially Explicit: A Novel Approach to System Dynamics

    EPA Science Inventory

    The Open Modeling Environment (OME) is an open-source System Dynamics (SD) simulation engine which has been created as a joint project between Oregon State University and the US Environmental Protection Agency. It is designed around a modular implementation, and provides a standa...

  15. Issues of Spatial and Temporal Scale in Modeling the Effects of Field Operatiions on Soil Properties

    USDA-ARS?s Scientific Manuscript database

    Tillage is an important procedure for modifying the soil environment in order to enhance crop growth and conserve soil and water resources. Process-based models of crop production are widely used in decision support, but few explicitly simulate tillage. The Cropping Systems Model (CSM) was modified ...

  16. Integrated modeling of long-term vegetation and hydrologic dynamics in Rocky Mountain watersheds

    Treesearch

    Robert Steven Ahl

    2007-01-01

    Changes in forest structure resulting from natural disturbances, or managed treatments, can have negative and long lasting impacts on water resources. To facilitate integrated management of forest and water resources, a System for Long-Term Integrated Management Modeling (SLIMM) was developed. By combining two spatially explicit, continuous time models, vegetation...

  17. Convergence to equilibrium of renormalised solutions to nonlinear chemical reaction–diffusion systems

    NASA Astrophysics Data System (ADS)

    Fellner, Klemens; Tang, Bao Quoc

    2018-06-01

    The convergence to equilibrium for renormalised solutions to nonlinear reaction-diffusion systems is studied. The considered reaction-diffusion systems arise from chemical reaction networks with mass action kinetics and satisfy the complex balanced condition. By applying the so-called entropy method, we show that if the system does not have boundary equilibria, i.e. equilibrium states lying on the boundary of R_+^N, then any renormalised solution converges exponentially to the complex balanced equilibrium with a rate which can be computed explicitly up to a finite-dimensional inequality. This inequality is proven via a contradiction argument, and thus not constructively. An explicit method of proof, however, is provided for a specific application modelling a reversible enzyme reaction by exploiting the specific structure of the conservation laws. Our approach is also useful to study the trend to equilibrium for systems possessing boundary equilibria. More precisely, to show the convergence to equilibrium for systems with boundary equilibria, we establish a sufficient condition in terms of a modified finite-dimensional inequality along trajectories of the system. By assuming this condition, which roughly means that the system produces too much entropy to stay close to a boundary equilibrium for infinite time, the entropy method shows exponential convergence to equilibrium for renormalised solutions to complex balanced systems with boundary equilibria.
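The skeleton of the entropy method used here can be sketched in standard notation (a schematic outline, not the paper's precise statements): one bounds the entropy production from below by the relative entropy itself, and Gronwall's lemma converts that functional inequality into exponential decay.

```latex
% Relative entropy of a state u with respect to the complex balanced
% equilibrium u_\infty:
\mathcal{E}(u \,|\, u_\infty)
  = \sum_{i=1}^{N} \int_{\Omega}
      \Big( u_i \ln \frac{u_i}{u_{i,\infty}} - u_i + u_{i,\infty} \Big) \, dx
% Entropy production D and the entropy-entropy production inequality:
-\frac{d}{dt}\, \mathcal{E}(u(t) \,|\, u_\infty)
  = \mathcal{D}(u(t)) \ \ge\ \lambda \, \mathcal{E}(u(t) \,|\, u_\infty)
% Gronwall's lemma then yields exponential convergence:
\mathcal{E}(u(t) \,|\, u_\infty)
  \ \le\ e^{-\lambda t}\, \mathcal{E}(u_0 \,|\, u_\infty)
```

The "finite-dimensional inequality" mentioned in the abstract is what controls the constant lambda; proving it by contradiction gives existence of lambda without an explicit value.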

  18. Two-dimensional habitat modeling in the Yellowstone/Upper Missouri River system

    USGS Publications Warehouse

    Waddle, T. J.; Bovee, K.D.; Bowen, Z.H.

    1997-01-01

    This study is being conducted to provide the aquatic biology component of a decision support system being developed by the U.S. Bureau of Reclamation. In an attempt to capture the habitat needs of Great Plains fish communities we are looking beyond previous habitat modeling methods. Traditional habitat modeling approaches have relied on one-dimensional hydraulic models and lumped compositional habitat metrics to describe aquatic habitat. A broader range of habitat descriptors is available when both composition and configuration of habitats is considered. Habitat metrics that consider both composition and configuration can be adapted from terrestrial biology. These metrics are most conveniently accessed with spatially explicit descriptors of the physical variables driving habitat composition. Two-dimensional hydrodynamic models have advanced to the point that they may provide the spatially explicit description of physical parameters needed to address this problem. This paper reports progress to date on applying two-dimensional hydraulic and habitat models on the Yellowstone and Missouri Rivers and uses examples from the Yellowstone River to illustrate the configurational metrics as a new tool for assessing riverine habitats.
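The composition-versus-configuration distinction drawn above can be made concrete with two classic landscape metrics on a spatially explicit habitat grid. This is a toy illustration with a made-up grid, not the study's metrics or data:

```python
import numpy as np

# Toy habitat grid: True marks a cell of suitable habitat.
habitat = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 1]], dtype=bool)

# Composition: how much suitable habitat exists, regardless of layout.
proportion = habitat.mean()

# Configuration: how that habitat is arranged; here, edge density, i.e.
# the number of suitable/unsuitable adjacencies per grid cell.
horiz = np.sum(habitat[:, :-1] != habitat[:, 1:])
vert = np.sum(habitat[:-1, :] != habitat[1:, :])
edge_density = (horiz + vert) / habitat.size

print(proportion, edge_density)   # -> 0.4375 0.5
```

Two rivers with identical habitat proportions can differ sharply in edge density or patch structure, which is exactly the information a lumped compositional metric discards and a two-dimensional hydraulic model makes recoverable.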

  19. Explicitly represented polygon wall boundary model for the explicit MPS method

    NASA Astrophysics Data System (ADS)

    Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori

    2015-05-01

    This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. These are derived so that, for viscous fluids and at lower computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the ERP model within the E-MPS method.

  20. The Importance of Explicitly Representing Soil Carbon with Depth over the Permafrost Region in Earth System Models: Implications for Atmospheric Carbon Dynamics at Multiple Temporal Scales between 1960 and 2300.

    NASA Astrophysics Data System (ADS)

    McGuire, A. D.

    2014-12-01

    We conducted an assessment of changes in permafrost area and carbon storage simulated by process-based models between 1960 and 2300. The models participating in this comparison were those that had joined the model integration team of the Vulnerability of Permafrost Carbon Research Coordination Network (see http://www.biology.ufl.edu/permafrostcarbon/). Each of the models in this comparison conducted simulations over the permafrost land region in the Northern Hemisphere driven by CCSM4-simulated climate for RCP 4.5 and 8.5 scenarios. Among the models, the area of permafrost (defined as the area for which active layer thickness was less than 3 m) ranged between 13.2 and 20.0 million km2. Between 1960 and 2300, models indicated losses of permafrost area of between 5.1 and 6.0 million km2 for RCP 4.5 and between 7.1 and 15.2 million km2 for RCP 8.5. Among the models, the density of soil carbon storage in 1960 ranged between 13 and 42 thousand g C m-2; models that explicitly represented carbon with depth had estimates greater than 27 thousand g C m-2. For the RCP 4.5 scenario, changes in soil carbon between 1960 and 2300 ranged from losses of 32 Pg C to gains of 58 Pg C, in which models that explicitly represent soil carbon with depth simulated losses or lower gains of soil carbon in comparison with those that did not. For the RCP 8.5 scenario, changes in soil carbon between 1960 and 2300 ranged from losses of 642 Pg C to gains of 66 Pg C, in which those models that represent soil carbon explicitly with depth all simulated losses, while those that do not all simulated gains. These results indicate that there are substantial differences in responses of carbon dynamics between models that do and do not explicitly represent soil carbon with depth in the permafrost region. We present analyses of the implications of these differences for atmospheric carbon dynamics at multiple temporal scales between 1960 and 2300.

  1. Self-sustained peristaltic waves: Explicit asymptotic solutions

    NASA Astrophysics Data System (ADS)

    Dudchenko, O. A.; Guria, G. Th.

    2012-02-01

    A simple nonlinear model for the coupled problem of fluid flow and contractile wall deformation is proposed to describe peristalsis. In the context of the model the ability of a transporting system to perform autonomous peristaltic pumping is interpreted as the ability to propagate sustained waves of wall deformation. Piecewise-linear approximations of nonlinear functions are used to analytically demonstrate the existence of traveling-wave solutions. Explicit formulas are derived which relate the speed of self-sustained peristaltic waves to the rheological properties of the transporting vessel and the transported fluid. The results may contribute to the development of diagnostic and therapeutic procedures for cases of peristaltic motility disorders.

  2. The development of a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Kay, F. J.

    1973-01-01

The whole-body algorithm is envisioned as a mathematical model that utilizes human physiology to simulate the behavior of vital body systems. The objective of this model is to determine the response of selected body parameters within these systems to various input perturbations, or stresses. Perturbations of interest are exercise, chemical imbalances, gravitational changes and other abnormal environmental conditions. This model provides for a study of man's physiological response in various space applications, underwater applications, normal and abnormal workloads and environments, and the functioning of the system with physical impairments or decay of functioning components. Many methods or approaches to the development of a whole-body algorithm are considered. Of foremost concern are the determination of the subsystems to be included, the level of detail of each subsystem, and the interactions between the subsystems.

  3. Control Law Design in a Computational Aeroelasticity Environment

    NASA Technical Reports Server (NTRS)

    Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.

    2003-01-01

A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a system identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern system identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
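The identification pipeline described above (Markov parameters, Hankel matrices, SVD, realized state-space model) can be illustrated in a few lines. The `era` function below is a minimal textbook sketch of the Eigensystem Realization Algorithm applied to a known scalar toy system; it is our illustration, not code from the aeroelasticity environment.

```python
import numpy as np

# Minimal sketch of the Eigensystem Realization Algorithm (ERA): build
# Hankel matrices from Markov parameters Y_k = C A**(k-1) B, take an
# SVD, and realize (A, B, C) of the chosen order. The Markov parameters
# here come from a known scalar toy system, not the aeroelastic code.

def era(markov, order, p=5, q=5):
    H0 = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
    H1 = np.array([[markov[i + j + 1] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]
    S_half = np.diag(np.sqrt(s))
    S_inv = np.linalg.inv(S_half)
    A = S_inv @ U.T @ H1 @ Vt.T @ S_inv
    B = S_half @ Vt[:, :1]
    C = U[:1, :] @ S_half
    return A, B, C

# Toy SISO system with A = 0.9, B = C = 1, so Y_k = 0.9**(k-1)
markov = [0.9 ** k for k in range(12)]
A, B, C = era(markov, order=1)
print(float(A[0, 0]))   # recovers approximately 0.9
```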

  4. Harnessing Big Data to Represent 30-meter Spatial Heterogeneity in Earth System Models

    NASA Astrophysics Data System (ADS)

    Chaney, N.; Shevliakova, E.; Malyshev, S.; Van Huijgevoort, M.; Milly, C.; Sulman, B. N.

    2016-12-01

Terrestrial land surface processes play a critical role in the Earth system; they have a profound impact on the global climate, food and energy production, freshwater resources, and biodiversity. One of the most fascinating yet challenging aspects of characterizing terrestrial ecosystems is their field-scale (~30 m) spatial heterogeneity. It has been observed repeatedly that the water, energy, and biogeochemical cycles at multiple temporal and spatial scales have deep ties to an ecosystem's spatial structure. Current Earth system models largely disregard this important relationship, leading to an inadequate representation of ecosystem dynamics. In this presentation, we will show how existing global environmental datasets can be harnessed to explicitly represent field-scale spatial heterogeneity in Earth system models. For each macroscale grid cell, these environmental data are clustered according to their field-scale soil and topographic attributes to define unique sub-grid tiles. The state-of-the-art Geophysical Fluid Dynamics Laboratory (GFDL) land model is then used to simulate these tiles and their spatial interactions via the exchange of water, energy, and nutrients along explicit topographic gradients. Using historical simulations over the contiguous United States, we will show how a robust representation of field-scale spatial heterogeneity impacts modeled ecosystem dynamics including the water, energy, and biogeochemical cycles as well as vegetation composition and distribution.
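The tiling step described above (clustering each macroscale grid cell's fine-scale pixels by soil and topographic attributes to form sub-grid tiles) can be sketched with a toy k-means; the attribute values, cluster count, and hand-rolled clustering routine below are invented for illustration.

```python
import numpy as np

# Toy illustration of the sub-grid tiling idea: cluster 30 m pixels of
# one macroscale grid cell by (invented) soil/topographic attributes
# with a tiny hand-rolled k-means; each cluster would become a tile.

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Rows: (elevation anomaly in m, sand fraction) per pixel
pixels = np.array([[120.0, 0.20], [115.0, 0.25], [5.0, 0.70], [8.0, 0.65]])
labels, centers = kmeans(pixels, k=2)
print(labels)   # ridge pixels and valley pixels land in different tiles
```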

  5. SCOSII OL: A dedicated language for mission operations

    NASA Technical Reports Server (NTRS)

    Baldi, Andrea; Elgaard, Dennis; Lynenskjold, Steen; Pecchioli, Mauro

    1994-01-01

The Spacecraft Control and Operations System 2 (SCOSII) is the new generation of Mission Control Systems (MCS) to be used at ESOC. The system is generic because it offers a collection of standard functions configured through a database upon which a dedicated MCS is established for a given mission. An integral component of SCOSII is the support of a dedicated Operations Language (OL). The spacecraft operation engineers edit, test, validate, and install OL scripts as part of the configuration of the system with, e.g., expressions for computing derived parameters and procedures for performing flight operations, all without involvement of software support engineers. A layered approach has been adopted for the implementation, centered on the explicit representation of a data model. The data model is object-oriented, defining the structure of the objects in terms of attributes (data) and services (functions) which can be accessed by the OL. SCOSII supports the creation of a mission model. System elements, e.g., a gyro, are explicit, as are the attributes which describe them and the services they provide. The data-model-driven approach makes it possible to take immediate advantage of this higher level of abstraction, without requiring expansion of the language. This article describes the background and context leading to the OL, concepts, language facilities, implementation, status, and conclusions drawn so far.
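A toy rendering of the data-model idea above (invented for this digest, not SCOSII code): a system element is an object exposing attributes (data) and services (functions) that operations-language scripts could act on.

```python
# Toy rendering (invented, not SCOSII code) of the data-model idea: a
# system element is an object exposing attributes (data) and services
# (functions) that OL scripts could act on.

class SystemElement:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = dict(attributes)   # data visible to scripts
        self.services = {}                   # callable services

    def provide(self, service_name, func):
        self.services[service_name] = func

    def call(self, service_name, *args):
        return self.services[service_name](*args)

gyro = SystemElement("GYRO-1", spin_rate_dps=0.02, powered=True)
gyro.provide("drift_ok",
             lambda limit: gyro.attributes["spin_rate_dps"] < limit)
print(gyro.call("drift_ok", 0.05))   # True
```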

  6. Cscibox: A Software System for Age-Model Construction and Evaluation

    NASA Astrophysics Data System (ADS)

    Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.

    2014-12-01

CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords. This project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to do analysis on the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug-ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, or clusters, or even the cloud.
The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more-thorough exploration of plausible age models and cross-dating scenarios.
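The final interpolation step named in the record above, a piecewise-linear age-depth model through dated horizons, can be sketched as follows. The depths and calibrated ages are invented; real input would be calibrated 14C dates.

```python
import numpy as np

# Piecewise-linear age-depth model, the last step named above: ages at
# dated horizons are interpolated linearly to any depth. Depths and
# calibrated ages are invented for illustration.

tie_depth_cm = np.array([0.0, 50.0, 120.0])    # dated horizons
tie_age_yr = np.array([0.0, 2000.0, 6200.0])   # calibrated ages (yr BP)

def age_at(depth_cm):
    """Linear interpolation of age between dated horizons."""
    return np.interp(depth_cm, tie_depth_cm, tie_age_yr)

print(age_at(25.0))   # 1000.0
print(age_at(85.0))   # 4100.0
```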

  7. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
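The two-stage "wind-off" fit described above can be sketched with a simplified, invented measurement model: the weight W is first estimated by least squares from force readings at several pitch angles, then that W feeds the least squares fit of the center-of-gravity coordinates from moment readings. All variable names and numbers below are ours, not the paper's.

```python
import numpy as np

# Two-stage "wind-off" least squares fit, sketched with an invented
# measurement model: normal force FN = W cos(theta) and pitching
# moment M = W (x cos(theta) + z sin(theta)) at pitch angle theta.
# Stage 1 fits the weight W; stage 2 uses that W to fit the centre
# of gravity coordinates (x, z).

theta = np.radians([-10.0, -5.0, 0.0, 5.0, 10.0])   # wind-off attitudes
W_true, x_true, z_true = 50.0, 0.30, 0.05           # lbf, ft, ft

FN = W_true * np.cos(theta)                          # simulated readings
M = W_true * (x_true * np.cos(theta) + z_true * np.sin(theta))

# Stage 1: W from FN = W cos(theta)
g = np.cos(theta)
W_hat = float(g @ FN / (g @ g))

# Stage 2: (x, z) from M = W_hat (x cos(theta) + z sin(theta))
A = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
(x_hat, z_hat), *_ = np.linalg.lstsq(A, M, rcond=None)
print(round(W_hat, 3), round(float(x_hat), 3), round(float(z_hat), 3))
```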

  8. Seeing the Wood for the Trees: Applying the dual-memory system model to investigate expert teachers' observational skills in natural ecological learning environments

    NASA Astrophysics Data System (ADS)

    Stolpe, Karin; Björklund, Lars

    2012-01-01

This study aims to investigate two expert ecology teachers' ability to attend to essential details in a complex environment during a field excursion, as well as how they teach this ability to their students. In applying a cognitive dual-memory system model for learning, we also suggest a rationale for their behaviour. The model implies two separate memory systems: the implicit, non-conscious, non-declarative system and the explicit, conscious, declarative system. This model provided the starting point for the research design. However, it was revised from the empirical findings supported by new theoretical insights. The teachers were video and audio recorded during their excursion and interviewed in a stimulated recall setting afterwards. The data were qualitatively analysed using the dual-memory system model. The results show that the teachers used holistic pattern recognition in their own identification of natural objects. However, the teachers' main strategy for teaching this ability is to give the students explicit rules or specific characteristics. According to the dual-memory system model, holistic pattern recognition is processed in the implicit memory system as a non-conscious match with earlier experienced situations. We suggest that this implicit pattern matching serves as an explanation for teachers' ecological and teaching observational skills. Another function of the implicit memory system is its ability to control automatic behaviour and non-conscious decision-making. The teachers offer the students firsthand sensory experiences, which are a prerequisite for the formation of the implicit memories that provide a foundation for expertise.

  9. Simulation of a severe convective storm using a numerical model with explicitly incorporated aerosols

    NASA Astrophysics Data System (ADS)

    Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje

    2017-09-01

Despite the important role aerosols play in all stages of the cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects that the explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme of the Weather Research and Forecasting (WRF) Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, i.e. one with aerosols explicitly (WRF-AE) included and another with aerosols implicitly (WRF-AI) assumed, are compared against precipitation measurements from a surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transportation of dust from North Africa over the Mediterranean to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards the southeast and under-predicted the maximum values of composite radar reflectivity. Similar to satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in the zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables the modelling of the time evolution and influx of aerosols into the cloud, which could be of practical importance in weather forecasting and weather modification. Several likely causes for discrepancies between models and observations are discussed and prospects for further research in this field are outlined.

  10. TRIM.FaTE Public Reference Library Documentation

    EPA Pesticide Factsheets

    TRIM.FaTE is a spatially explicit, compartmental mass balance model that describes the movement and transformation of pollutants over time, through a user-defined, bounded system that includes both biotic and abiotic compartments.
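A compartmental mass balance of the kind TRIM.FaTE implements can be sketched as a linear ODE system dm/dt = K m, where off-diagonal entries of K are first-order transfer rates between compartments. The compartments and rate constants below are invented, and the explicit Euler integrator is for illustration only.

```python
import numpy as np

# Illustrative sketch (not TRIM.FaTE itself) of a compartmental mass
# balance: pollutant mass moves between compartments at first-order
# rates, dm/dt = K @ m. Off-diagonal K[i, j] is the rate from
# compartment j to i; each diagonal entry balances its column, so a
# closed system conserves total mass. All rates are invented.

# Compartments: 0 = air, 1 = soil, 2 = biota (rates in 1/day)
K = np.array([
    [-0.20,  0.01,  0.00],
    [ 0.20, -0.03,  0.02],
    [ 0.00,  0.02, -0.02],
])

m = np.array([100.0, 0.0, 0.0])   # all mass starts in air (kg)
dt = 0.01                          # days
for _ in range(10000):             # integrate 100 days, explicit Euler
    m = m + dt * (K @ m)

print(m.round(2), "total:", round(float(m.sum()), 6))
```

Because each column of K sums to zero, the Euler update preserves the total mass at every step, a useful sanity check on any mass-balance implementation.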

  11. A spatially explicit whole-system model of the lignocellulosic bioethanol supply chain: an assessment of decentralised processing potential

    PubMed Central

    Dunnett, Alex J; Adjiman, Claire S; Shah, Nilay

    2008-01-01

Background Lignocellulosic bioethanol technologies exhibit significant capacity for performance improvement across the supply chain through the development of high-yielding energy crops, integrated pretreatment, hydrolysis and fermentation technologies and the application of dedicated ethanol pipelines. The impact of such developments on cost-optimal plant location, scale and process composition within multiple plant infrastructures is poorly understood. A combined production and logistics model has been developed to investigate cost-optimal system configurations for a range of technological, system scale, biomass supply and ethanol demand distribution scenarios specific to European agricultural land and population densities. Results Ethanol production costs for current technologies decrease significantly from $0.71 to $0.58 per litre with increasing economies of scale, up to a maximum single-plant capacity of 550 × 10^6 l year^-1. The development of high-yielding energy crops and consolidated bio-processing realises significant cost reductions, with production costs ranging from $0.33 to $0.36 per litre. Increased feedstock yields result in systems of eight fully integrated plants operating within a 500 km × 500 km region, each producing between 1.24 and 2.38 × 10^9 l year^-1 of pure ethanol. A limited potential for distributed processing and centralised purification systems is identified, requiring developments in modular, ambient pretreatment and fermentation technologies and the pipeline transport of pure ethanol. Conclusion The conceptual and mathematical modelling framework developed provides a valuable tool for the assessment and optimisation of the lignocellulosic bioethanol supply chain. In particular, it can provide insight into the optimal configuration of multiple plant systems. This information is invaluable in ensuring (near-)cost-optimal strategic development within the sector at the regional and national scale.
The framework is flexible and can thus accommodate a range of processing tasks, logistical modes, by-product markets and impacting policy constraints. Significant scope for application to real-world case studies through dynamic extensions of the formulation has been identified. PMID:18662392
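The economy-of-scale behaviour reported above (per-litre cost falling with plant capacity while feedstock haulage grows with the supply radius) can be caricatured with a toy cost function. Every coefficient below is invented; the paper's optimisation model is far richer.

```python
# Toy caricature (all coefficients invented) of the scale trade-off:
# capital cost per litre falls with capacity**0.6 economies of scale,
# while biomass haulage cost per litre grows with the supply radius
# needed to feed a larger plant.

def cost_per_litre(capacity_Ml):
    """Production cost in $/l for a plant of given capacity (10^6 l/yr)."""
    capital = 2.0 * capacity_Ml ** 0.6 / capacity_Ml   # sub-linear capex
    transport = 0.0004 * capacity_Ml ** 0.5            # growing haul radius
    feedstock = 0.25                                   # flat feedstock cost
    return capital + transport + feedstock

for cap in (50, 150, 450):
    print(cap, round(cost_per_litre(cap), 3))
```

Over this capacity range the toy cost falls monotonically, mirroring the abstract's economies of scale; at sufficiently large capacity the haulage term would eventually dominate and cap the optimal plant size.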

  12. Using a spatially explicit analysis model to evaluate spatial variation of corn yield

    USDA-ARS?s Scientific Manuscript database

    Spatial irrigation of agricultural crops using site-specific variable-rate irrigation (VRI) systems is beginning to have wide-spread acceptance. However, optimizing the management of these VRI systems to conserve natural resources and increase profitability requires an understanding of the spatial ...

  13. Constant pH Molecular Dynamics of Proteins in Explicit Solvent with Proton Tautomerism

    PubMed Central

    Goh, Garrett B.; Hulbert, Benjamin S.; Zhou, Huiqing; Brooks, Charles L.

    2015-01-01

pH is a ubiquitous regulator of biological activity, including protein-folding, protein-protein interactions and enzymatic activity. Existing constant pH molecular dynamics (CPHMD) models that were developed to address questions related to the pH-dependent properties of proteins are largely based on implicit solvent models. However, implicit solvent models are known to underestimate the desolvation energy of buried charged residues, increasing the error associated with predictions that involve internal ionizable residues that are important in processes like hydrogen transport and electron transfer. Furthermore, discrete water and ions cannot be modeled in implicit solvent, although they are important in systems like membrane proteins and ion channels. We report on an explicit solvent constant pH molecular dynamics framework based on multi-site λ-dynamics (CPHMDMSλD). In the CPHMDMSλD framework, we performed seamless alchemical transitions between protonation and tautomeric states using multi-site λ-dynamics, and designed novel biasing potentials to ensure that the physical end-states are predominantly sampled. We show that explicit solvent CPHMDMSλD simulations model realistic pH-dependent properties of proteins such as Hen Egg-White Lysozyme (HEWL), the binding domain of 2-oxoglutarate dehydrogenase (BBL) and the N-terminal domain of ribosomal protein L9 (NTL9), and the pKa predictions are in excellent agreement with experimental values, with an RMSE ranging from 0.72 to 0.84 pKa units. With the recent development of the explicit solvent CPHMDMSλD framework for nucleic acids, accurate modeling of the pH-dependent properties of both major classes of biomolecules, proteins and nucleic acids, is now possible. PMID:24375620
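A pKa in frameworks like the one above is typically extracted by fitting the sampled unprotonated fraction S(pH) to the Henderson-Hasselbalch curve. The sketch below generates synthetic fractions from a known pKa and recovers it with a brute-force one-dimensional fit; this is our illustration, not the paper's procedure.

```python
import numpy as np

# Sketch (ours, not the paper's code) of extracting a pKa from
# constant-pH sampling: fit the unprotonated fraction S(pH) to the
# Henderson-Hasselbalch curve S = 1 / (1 + 10**(pKa - pH)).

def hh(pH, pKa):
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

pH_vals = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
S_obs = hh(pH_vals, 3.5)        # stand-in for fractions sampled in MD

# Brute-force one-dimensional least squares over candidate pKa values
grid = np.arange(1.0, 7.0, 0.001)
sse = [float(((hh(pH_vals, p) - S_obs) ** 2).sum()) for p in grid]
pKa_fit = float(grid[int(np.argmin(sse))])
print(round(pKa_fit, 2))   # 3.5
```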

  14. The Volcanism Ontology (VO): a model of the volcanic system

    NASA Astrophysics Data System (ADS)

    Myer, J.; Babaie, H. A.

    2017-12-01

We have modeled a part of the complex material and process entities and properties of the volcanic system in the Volcanism Ontology (VO), applying several top-level ontologies such as Basic Formal Ontology (BFO), SWEET, and the Ontology of Physics for Biology (OPB) within a single framework. The continuant concepts in BFO describe features with instances that persist as wholes through time and have qualities (attributes) that may change (e.g., state, composition, and location). In VO, the continuants include lava, volcanic rock, and volcano. The occurrent concepts in BFO include processes, their temporal boundaries, and the spatio-temporal regions within which they occur. In VO, these include eruption (process), the onset of pyroclastic flow (temporal boundary), and the space and time span of the crystallization of lava in a lava tube (spatio-temporal region). These processes can be of physical (e.g., debris flow, crystallization, injection), atmospheric (e.g., vapor emission, ash particles blocking solar radiation), hydrological (e.g., diffusion of water vapor, hot spring), thermal (e.g., cooling of lava) and other types. The properties (predicates) relate continuants to other continuants, occurrents to continuants, and occurrents to occurrents. The ontology also models other concepts such as laboratory and field procedures used by volcanologists, sampling by sensors, and the types of instruments applied in monitoring volcanic activity. When deployed on the web, VO will be used to explicitly and formally annotate data and information collected by volcanologists based on domain knowledge. This will enable the integration of global volcanic data and improve the interoperability of software that deals with such data.

  15. A solution to the surface intersection problem. [Boolean functions in geometric modeling

    NASA Technical Reports Server (NTRS)

    Timer, H. G.

    1977-01-01

    An application-independent geometric model within a data base framework should support the use of Boolean operators which allow the user to construct a complex model by appropriately combining a series of simple models. The use of these operators leads to the concept of implicitly and explicitly defined surfaces. With an explicitly defined model, the surface area may be computed by simply summing the surface areas of the bounding surfaces. For an implicitly defined model, the surface area computation must deal with active and inactive regions. Because the surface intersection problem involves four unknowns and its solution is a space curve, the parametric coordinates of each surface must be determined as a function of the arc length. Various subproblems involved in the general intersection problem are discussed, and the mathematical basis for their solution is presented along with a program written in FORTRAN IV for implementation on the IBM 370 TSO system.

  16. Confinement-Dependent Friction in Peptide Bundles

    PubMed Central

    Erbaş, Aykut; Netz, Roland R.

    2013-01-01

Friction within globular proteins or between adhering macromolecules crucially determines the kinetics of protein folding and the formation and relaxation of self-assembled molecular systems. One fundamental question is how these friction effects depend on the local environment and in particular on the presence of water. In this model study, we use fully atomistic MD simulations with explicit water to obtain friction forces as a single polyglycine peptide chain is pulled out of a bundle of k adhering parallel polyglycine peptide chains. The whole system is periodically replicated along the peptide axes, so a stationary state at prescribed mean sliding velocity V is achieved. The aggregation number is varied between k = 2 (two peptide chains adhering to each other with plenty of water present at the adhesion sites) and k = 7 (one peptide chain pulled out from a close-packed cylindrical array of six neighboring peptide chains with no water inside the bundle). The friction coefficient per hydrogen bond, extrapolated to the viscous limit of vanishing pulling velocity V → 0, increases by five orders of magnitude when going from k = 2 to k = 7. We argue that this dramatic confinement-induced friction enhancement is due to a combination of water depletion and increased hydrogen-bond cooperativity. PMID:23528088

  17. Computational neuroanatomy: ontology-based representation of neural components and connectivity.

    PubMed

    Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron

    2009-02-05

    A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.

  18. Local models of astrophysical discs

    NASA Astrophysics Data System (ADS)

    Latter, Henrik N.; Papaloizou, John

    2017-12-01

Local models of gaseous accretion discs have been successfully employed for decades to describe an assortment of small-scale phenomena, from instabilities and turbulence, to dust dynamics and planet formation. For the most part, they have been derived in a physically motivated but essentially ad hoc fashion, with some of the mathematical assumptions never made explicit nor checked for consistency. This approach is susceptible to error, and it is easy to derive local models that support spurious instabilities or fail to conserve key quantities. In this paper we present rigorous derivations, based on an asymptotic ordering, and formulate a hierarchy of local models (incompressible, Boussinesq and compressible), making clear which is best suited for a particular flow or phenomenon, while spelling out explicitly the assumptions and approximations of each. We also discuss the merits of the anelastic approximation, emphasizing that anelastic systems struggle to conserve energy unless strong restrictions are imposed on the flow. The problems encountered by the anelastic approximation are exacerbated by the disc's differential rotation, but also affect non-rotating systems such as stellar interiors. We conclude with a defence of local models and their continued utility in astrophysical research.

  19. Transport and coordination in the coupled soil-root-xylem-phloem leaf system

    NASA Astrophysics Data System (ADS)

    Huang, C. W.; Katul, G. G.; Pockman, W.; Litvak, M. E.; Domec, J. C.; Palmroth, S.

    2016-12-01

In response to varying environmental conditions, stomatal pores act as biological valves that dynamically adjust their size, thereby determining the rate of CO2 assimilation and water loss (i.e., transpiration) to the dry atmosphere. Although the significance of this biotic control on gas exchange is rarely disputed, representing parsimoniously all the underlying mechanisms responsible for stomatal kinetics remains a subject of some debate. It has been conjectured that stomatal control in seed plants (i.e., angiosperms and gymnosperms) represents a compromise between the biochemical demand for CO2 and the prevention of excessive water loss. This view has been amended at the whole-plant level, where xylem hydraulics and sucrose transport efficiency in phloem appear to impose additional constraints on gas exchange. If such additional constraints impact stomatal opening and closure, then seed plants may have evolved coordinated photosynthetic-hydraulic-sugar transporting machinery that confers some competitive advantages in fluctuating environmental conditions. Thus, a stomatal optimization model that explicitly considers xylem hydraulics and maximum sucrose transport is developed to explore this coordination in the leaf-xylem-phloem system. The model is then applied to progressive drought conditions. The main findings from the model calculations are that (1) the predicted stomatal conductance from the conventional stomatal optimization theory at the leaf and from the newly proposed model converge, suggesting a tight coordination in the leaf-xylem-phloem system; (2) stomatal control is mainly limited by the water supply function of the soil-xylem hydraulic system, especially when the water flux through the transpiration stream is significantly larger than the water exchange between xylem and phloem; and (3) the xylem limitation imposed on the supply function can thus be used to differentiate species with different water use strategies across the spectrum of isohydric to anisohydric behavior.
Keywords: leaf-level gas exchange, stomatal control, sucrose transport in phloem, xylem hydraulics

  20. Improving the spatial representation of soil properties and hydrology using topographically derived initialization processes in the SWAT model

    USDA-ARS?s Scientific Manuscript database

    Topography exerts critical controls on many hydrologic, geomorphologic, and environmental biophysical processes. Unfortunately many watershed modeling systems use topography only to define basin boundaries and stream channels and do not explicitly account for the topographic controls on processes su...

  1. Spatially Explicit West Nile Virus Risk Modeling in Santa Clara County, CA

    USDA-ARS?s Scientific Manuscript database

    A geographic information systems model designed to identify regions of West Nile virus (WNV) transmission risk was tested and calibrated with data collected in Santa Clara County, California. American Crows that died from WNV infection in 2005, provided spatial and temporal ground truth. When the mo...

  2. On interfacial properties of tetrahydrofuran: Atomistic and coarse-grained models from molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrido, J. M.; Algaba, J.; Blas, F. J., E-mail: felipe@uhu.es

    2016-04-14

We have determined the interfacial properties of tetrahydrofuran (THF) from direct simulation of the vapor-liquid interface. The molecules are modeled using six different molecular models, three of them based on the united-atom approach and the other three based on a coarse-grained (CG) approach. In the first case, THF is modeled using the transferable parameters potential functions approach proposed by Chandrasekhar and Jorgensen [J. Chem. Phys. 77, 5073 (1982)] and a new parametrization of the TraPPE force fields for cyclic alkanes and ethers [S. J. Keasler et al., J. Phys. Chem. B 115, 11234 (2012)]. In both cases, dispersive and coulombic intermolecular interactions are explicitly taken into account. In the second case, THF is modeled as a single sphere, a diatomic molecule, and a ring formed from three Mie monomers according to the SAFT-γ Mie top-down approach [V. Papaioannou et al., J. Chem. Phys. 140, 054107 (2014)]. Simulations were performed in the molecular dynamics canonical ensemble and the vapor-liquid surface tension is evaluated from the normal and tangential components of the pressure tensor along the simulation box. In addition to the surface tension, we have also obtained density profiles, coexistence densities, critical temperature, density, and pressure, and interfacial thickness as functions of temperature, paying special attention to the comparison between the estimations obtained from different models and literature experimental data. The simulation results obtained from the three CG models as described by the SAFT-γ Mie approach are able to predict accurately the vapor-liquid phase envelope of THF, in excellent agreement with estimations obtained from the TraPPE model and experimental data in the whole range of coexistence. However, the Chandrasekhar and Jorgensen model presents significant deviations from experimental results.
We also compare the predictions for surface tension as obtained from simulation results for all the models with experimental data. The three CG models predict reasonably well (but only qualitatively) the surface tension of THF, as a function of temperature, from the triple point to the critical temperature. On the other hand, only the TraPPE united-atoms models are able to predict accurately the experimental surface tension of the system in the whole temperature range.« less
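The mechanical (pressure-tensor) route to the surface tension described in this abstract can be sketched in a few lines. Assuming a slab geometry with two vapor-liquid interfaces along z, the surface tension is γ = ½∫(P_N − P_T)dz; the pressure profiles below are synthetic stand-ins, not THF simulation data:

```python
import numpy as np

# Sketch of the mechanical (pressure-tensor) route to the surface tension:
# gamma = (1/2) * integral over z of (P_N - P_T), the factor 1/2 accounting
# for the two interfaces of a liquid slab. Profiles here are synthetic.

def surface_tension(z, p_normal, p_tangential, n_interfaces=2):
    dz = z[1] - z[0]  # uniform grid assumed
    return np.sum(p_normal - p_tangential) * dz / n_interfaces

z = np.linspace(0.0, 10.0, 1001)
p_normal = np.ones_like(z)                    # P_N is constant at equilibrium
p_tangential = (1.0                           # P_T dips at each interface
                - np.exp(-(z - 2.5) ** 2 / 0.1)
                - np.exp(-(z - 7.5) ** 2 / 0.1))
gamma = surface_tension(z, p_normal, p_tangential)
```

Each Gaussian dip integrates to √(0.1π) ≈ 0.56, so this toy profile gives γ ≈ 0.56 in reduced units; with real simulation output the same one-liner applies to the binned pressure components.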

  3. Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2015-01-01

    The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are made further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale or atmosphere-ocean coupled global climate models. The former offer full physics and high spatial resolution, but they are computationally expensive and typically lack an interactive ocean, whereas the latter offer computational efficiency and ocean-atmosphere coupling, but they lack adequate spatial and temporal resolution to resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr-1), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully-coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb) as validated against both satellite and model analysis data when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).

  4. Explicit pre-training instruction does not improve implicit perceptual-motor sequence learning

    PubMed Central

    Sanchez, Daniel J.; Reber, Paul J.

    2012-01-01

    Memory systems theory argues for separate neural systems supporting implicit and explicit memory in the human brain. Neuropsychological studies support this dissociation, but empirical studies of cognitively healthy participants generally observe that both kinds of memory are acquired to at least some extent, even in implicit learning tasks. A key question is whether this observation reflects parallel intact memory systems or an integrated representation of memory in healthy participants. Learning of complex tasks in which both explicit instruction and practice is used depends on both kinds of memory, and how these systems interact will be an important component of the learning process. Theories that posit an integrated, or single, memory system for both types of memory predict that explicit instruction should contribute directly to strengthening task knowledge. In contrast, if the two types of memory are independent and acquired in parallel, explicit knowledge should have no direct impact and may serve in a “scaffolding” role in complex learning. Using an implicit perceptual-motor sequence learning task, the effect of explicit pre-training instruction on skill learning and performance was assessed. Explicit pre-training instruction led to robust explicit knowledge, but sequence learning did not benefit from the contribution of pre-training sequence memorization. The lack of an instruction benefit suggests that during skill learning, implicit and explicit memory operate independently. While healthy participants will generally accrue parallel implicit and explicit knowledge in complex tasks, these types of information appear to be separately represented in the human brain consistent with multiple memory systems theory. PMID:23280147

  5. Total Risk Integrated Methodology (TRIM) - TRIM.FaTE

    EPA Pesticide Factsheets

    TRIM.FaTE is a spatially explicit, compartmental mass balance model that describes the movement and transformation of pollutants over time, through a user-defined, bounded system that includes both biotic and abiotic compartments.
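The compartmental mass-balance idea can be illustrated with a toy linear transfer model (a generic sketch, not TRIM.FaTE code; the three compartments and rate constants below are hypothetical):

```python
import numpy as np

# Minimal compartmental mass-balance sketch: pollutant mass moves between
# compartments at first-order rates, dm/dt = K @ m. Off-diagonal K[i, j] is
# the transfer rate from compartment j to compartment i; each diagonal entry
# makes its column sum to zero, so total mass is conserved.

def step(m, K, dt):
    # explicit Euler update of the compartment masses
    return m + dt * (K @ m)

# hypothetical compartments: air, soil, biota
K = np.array([[-0.30,  0.05,  0.00],
              [ 0.30, -0.15,  0.02],
              [ 0.00,  0.10, -0.02]])
m = np.array([100.0, 0.0, 0.0])   # all mass starts in air
for _ in range(1000):             # integrate to t = 10 with dt = 0.01
    m = step(m, K, 0.01)
```

Because every column of K sums to zero, the Euler update conserves the total mass exactly at each step, which is the defining check for a mass-balance model.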

  6. Using Q-Chem on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    An ab initio quantum chemistry package with special strengths in excited-state methods, non-adiabatic coupling, solvation models, explicitly correlated wavefunction methods, and cutting-edge DFT. Guidance on running Q-Chem on the Peregrine system is provided.

  7. GeneImp: Fast Imputation to Large Reference Panels Using Genotype Likelihoods from Ultralow Coverage Sequencing

    PubMed Central

    Spiliopoulou, Athina; Colombo, Marco; Orchard, Peter; Agakov, Felix; McKeigue, Paul

    2017-01-01

    We address the task of genotype imputation to a dense reference panel given genotype likelihoods computed from ultralow coverage sequencing as inputs. In this setting, the data have a high level of missingness or uncertainty, and are thus more amenable to a probabilistic representation. Most existing imputation algorithms are not well suited for this situation, as they rely on prephasing for computational efficiency, and, without definite genotype calls, the prephasing task becomes computationally expensive. We describe GeneImp, a program for genotype imputation that does not require prephasing and is computationally tractable for whole-genome imputation. GeneImp does not explicitly model recombination; instead it capitalizes on the existence of large reference panels—comprising thousands of reference haplotypes—and assumes that the reference haplotypes can adequately represent the target haplotypes unaltered over short regions. We validate GeneImp based on data from ultralow coverage sequencing (0.5×), and compare its performance to the most recent version of BEAGLE that can perform this task. We show that GeneImp achieves imputation quality very close to that of BEAGLE, using one to two orders of magnitude less time, without an increase in memory complexity. Therefore, GeneImp is the first practical choice for whole-genome imputation to a dense reference panel when prephasing cannot be applied, for instance, in datasets produced via ultralow coverage sequencing. A related future application for GeneImp is whole-genome imputation based on the off-target reads from deep whole-exome sequencing. PMID:28348060
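The core idea, scoring pairs of reference haplotypes against the sample's genotype likelihoods over a short window and imputing a missing site as a posterior-weighted dosage, can be sketched in miniature (a toy illustration with made-up data, not GeneImp's actual algorithm, which operates at panel scale):

```python
import itertools
import numpy as np

# Toy likelihood-based imputation over a 4-site window with 3 reference
# haplotypes (rows). Site 3 is unobserved in the sample (uniform genotype
# likelihoods); sites 0-2 carry information from low-coverage reads.
ref = np.array([[0, 1, 0, 1],
                [1, 1, 0, 0],
                [0, 0, 1, 1]])
# gl[s, g] = P(reads at site s | genotype g), for g = 0, 1, 2 alternate alleles
gl = np.array([[0.90, 0.09, 0.01],
               [0.05, 0.90, 0.05],
               [0.80, 0.15, 0.05],
               [1/3,  1/3,  1/3]])

# score every ordered pair of reference haplotypes by the product of
# genotype likelihoods implied by the diploid genotype they form
weights = {}
for i, j in itertools.product(range(len(ref)), repeat=2):
    g = ref[i] + ref[j]
    weights[(i, j)] = float(np.prod(gl[np.arange(4), g]))

total = sum(weights.values())
# imputed allele dosage at the unobserved site = posterior-weighted genotype
dosage = sum(w * (ref[i, 3] + ref[j, 3])
             for (i, j), w in weights.items()) / total
```

In this example the likelihoods strongly favor the haplotype pair (h0, h2), both of which carry the alternate allele at site 3, so the imputed dosage comes out close to 2.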

  8. A Multi-scale Computational Platform to Mechanistically Assess the Effect of Genetic Variation on Drug Responses in Human Erythrocyte Metabolism

    PubMed Central

    Bordbar, Aarash; Palsson, Bernhard O.

    2016-01-01

    Progress in systems medicine brings promise to addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamics simulations enables the sampling of long timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein’s structure due to a mutation influence protein binding affinity to metabolites and/or drug molecules, and inflict large-scale changes in metabolism. PMID:27467583

  9. A Multi-scale Computational Platform to Mechanistically Assess the Effect of Genetic Variation on Drug Responses in Human Erythrocyte Metabolism.

    PubMed

    Mih, Nathan; Brunk, Elizabeth; Bordbar, Aarash; Palsson, Bernhard O

    2016-07-01

    Progress in systems medicine brings promise to addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamics simulations enables the sampling of long timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein's structure due to a mutation influence protein binding affinity to metabolites and/or drug molecules, and inflict large-scale changes in metabolism.

  10. The Minnesota Case Study Collection: New Historical Inquiry Case Studies for Nature of Science Education

    ERIC Educational Resources Information Center

    Allchin, Douglas

    2012-01-01

    The new Minnesota Case Study Collection is profiled, along with other examples. They complement the work of the HIPST Project in illustrating the aims of: (1) historically informed inquiry learning that fosters explicit NOS reflection, and (2) engagement with faithfully rendered samples of Whole Science.

  11. Supporting Pre-Service Teachers' Technology-Enabled Learning Design Thinking through Whole of Programme Transformation

    ERIC Educational Resources Information Center

    Bower, Matt; Highfield, Kate; Furney, Pam; Mowbray, Lee

    2013-01-01

    This paper explains a development and evaluation project aimed at transforming two pre-service teacher education programmes at Macquarie University to more effectively cultivate students' technology-enabled learning design thinking. The process of transformation was based upon an explicit and sustained focus on developing university academics'…

  12. Teaching Mathematical Induction: An Alternative Approach.

    ERIC Educational Resources Information Center

    Allen, Lucas G.

    2001-01-01

    Describes experience using a new approach to teaching induction that was developed by the Mathematical Methods in High School Project. The basic idea behind the new approach is to use induction to prove that two formulas, one in recursive form and the other in a closed or explicit form, will always agree for whole numbers. (KHR)

  13. Teaching as Interaction: Challenges in Transitioning Teachers' Instruction to Small Groups

    ERIC Educational Resources Information Center

    Wyatt, Tasha; Chapman-DeSousa, Brook

    2017-01-01

    Although small group instruction is often endorsed in teaching young children, teachers are rarely given explicit instruction on how to move instruction into small groups where effective adult-child interactions can take place. This study examines how 14 early childhood educators transitioned their instruction from whole to small group teaching…

  14. Stratificational Grammar.

    ERIC Educational Resources Information Center

    Algeo, John

    1968-01-01

    According to the author, most grammarians have been writing stratificational grammars without knowing it because they have dealt with units that are related to one another, but not simply as a whole to its parts, or as a class to its members. The question, then, is not whether a grammar is stratified but whether it is explicitly stratified. This…

  15. CDPOP: A spatially explicit cost distance population genetics program

    Treesearch

    Erin L. Landguth; S. A. Cushman

    2010-01-01

    Spatially explicit simulation of gene flow in complex landscapes is essential to explain observed population responses and provide a foundation for landscape genetics. To address this need, we wrote a spatially explicit, individual-based population genetics model (CDPOP). The model implements individual-based population modelling with Mendelian inheritance and k-allele...
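The individual-based Mendelian inheritance with a k-allele mutation scheme that the abstract mentions can be sketched in a few lines (an illustrative toy, not CDPOP code; population size, k, and the mutation rate are arbitrary choices):

```python
import random

# Toy individual-based population genetics step: each individual carries two
# alleles at one locus; an offspring draws one allele from each parent
# (Mendelian inheritance), and each inherited allele mutates to a uniformly
# random one of k alleles with a small probability (k-allele mutation model).

K_ALLELES = 4
MUTATION_RATE = 0.01

def offspring_genotype(parent_a, parent_b, rng):
    genotype = [rng.choice(parent_a), rng.choice(parent_b)]
    for i in range(2):
        if rng.random() < MUTATION_RATE:
            genotype[i] = rng.randrange(K_ALLELES)
    return genotype

rng = random.Random(42)
pop = [[rng.randrange(K_ALLELES), rng.randrange(K_ALLELES)]
       for _ in range(200)]
for _ in range(10):  # ten non-overlapping generations of random mating
    pop = [offspring_genotype(rng.choice(pop), rng.choice(pop), rng)
           for _ in range(200)]
```

A spatially explicit model like the one described would replace the uniform `rng.choice(pop)` mating step with mate selection weighted by cost distance between individual locations; the inheritance step itself is unchanged.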

  16. Projecting changes in the distribution and productivity of living marine resources: A critical review of the suite of modelling approaches used in the large European project VECTORS

    NASA Astrophysics Data System (ADS)

    Peck, Myron A.; Arvanitidis, Christos; Butenschön, Momme; Canu, Donata Melaku; Chatzinikolaou, Eva; Cucco, Andrea; Domenici, Paolo; Fernandes, Jose A.; Gasche, Loic; Huebert, Klaus B.; Hufnagl, Marc; Jones, Miranda C.; Kempf, Alexander; Keyl, Friedemann; Maar, Marie; Mahévas, Stéphanie; Marchal, Paul; Nicolas, Delphine; Pinnegar, John K.; Rivot, Etienne; Rochette, Sébastien; Sell, Anne F.; Sinerchia, Matteo; Solidoro, Cosimo; Somerfield, Paul J.; Teal, Lorna R.; Travers-Trolet, Morgan; van de Wolfshaar, Karen E.

    2018-02-01

    We review and compare four broad categories of spatially-explicit modelling approaches currently used to understand and project changes in the distribution and productivity of living marine resources including: 1) statistical species distribution models, 2) physiology-based, biophysical models of single life stages or the whole life cycle of species, 3) food web models, and 4) end-to-end models. Single pressures are rare and, in the future, models must be able to examine multiple factors affecting living marine resources such as interactions between: i) climate-driven changes in temperature regimes and acidification, ii) reductions in water quality due to eutrophication, iii) the introduction of alien invasive species, and/or iv) (over-)exploitation by fisheries. Statistical (correlative) approaches can be used to detect historical patterns which may not be relevant in the future. Advancing predictive capacity of changes in distribution and productivity of living marine resources requires explicit modelling of biological and physical mechanisms. New formulations are needed which (depending on the question) will need to strive for more realism in ecophysiology and behaviour of individuals, life history strategies of species, as well as trophodynamic interactions occurring at different spatial scales. Coupling existing models (e.g. physical, biological, economic) is one avenue that has proven successful. However, fundamental advancements are needed to address key issues such as the adaptive capacity of species/groups and ecosystems. The continued development of end-to-end models (e.g., physics to fish to human sectors) will be critical if we hope to assess how multiple pressures may interact to cause changes in living marine resources including the ecological and economic costs and trade-offs of different spatial management strategies. 
Given the strengths and weaknesses of the various types of models reviewed here, confidence in projections of changes in the distribution and productivity of living marine resources will be increased by assessing model structural uncertainty through biological ensemble modelling.

  17. Outside-In Systems Pharmacology Combines Innovative Computational Methods With High-Throughput Whole Vertebrate Studies.

    PubMed

    Schulthess, Pascal; van Wijk, Rob C; Krekels, Elke H J; Yates, James W T; Spaink, Herman P; van der Graaf, Piet H

    2018-04-25

    To advance the systems approach in pharmacology, experimental models and computational methods need to be integrated from early drug discovery onward. Here, we propose outside-in model development, a model identification technique to understand and predict the dynamics of a system without requiring prior biological and/or pharmacological knowledge. The advanced data required could be obtained by whole vertebrate, high-throughput, low-resource dose-exposure-effect experimentation with the zebrafish larva. Combinations of these innovative techniques could improve early drug discovery. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  18. PyGirl: Generating Whole-System VMs from High-Level Prototypes Using PyPy

    NASA Astrophysics Data System (ADS)

    Bruni, Camillo; Verwaest, Toon

    Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.

  19. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, having attained a mature stage in their development and being well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.
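The kind of implicit numerical mixing the abstract quantifies can be demonstrated in one dimension: first-order upwind advection behaves like advection plus an effective diffusivity κ ≈ u·Δx·(1 − c)/2, where c is the Courant number. The caricature below (not NEMO's advection scheme) recovers that value from the spreading rate of a tracer pulse:

```python
import numpy as np

# 1-D first-order upwind advection of a Gaussian tracer on a periodic grid.
# The modified-equation analysis predicts an effective numerical diffusivity
# kappa = 0.5 * u * dx * (1 - c), which we recover from the variance growth
# of the pulse, var(t) = var(0) + 2*kappa*t.

nx, u, dx, dt, nsteps = 400, 1.0, 1.0, 0.25, 400
c = u * dt / dx                                  # Courant number
kappa_theory = 0.5 * u * dx * (1.0 - c)          # 0.375 for these parameters

x = np.arange(nx) * dx
q = np.exp(-((x - 100.0) ** 2) / (2.0 * 5.0 ** 2))

def variance(q):
    mean = (x * q).sum() / q.sum()
    return ((x - mean) ** 2 * q).sum() / q.sum()

var0 = variance(q)
for _ in range(nsteps):
    q = q - c * (q - np.roll(q, 1))              # upwind update, u > 0

kappa_est = (variance(q) - var0) / (2.0 * nsteps * dt)
```

No explicit diffusion term appears anywhere in the loop, yet the pulse spreads exactly as if κ = 0.375 were present; the watermass analysis cited in the abstract is the three-dimensional, isopycnal analogue of this variance budget.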

  20. Analytical modeling of soliton interactions in a nonlocal nonlinear medium analogous to gravitational force

    NASA Astrophysics Data System (ADS)

    Zeng, Shihao; Chen, Manna; Zhang, Ting; Hu, Wei; Guo, Qi; Lu, Daquan

    2018-01-01

    We illuminate an analytical model of soliton interactions in lead glass by analogizing to a gravitational force system. The orbits of spiraling solitons under a long-range interaction are given explicitly and demonstrated to follow Newton's second law of motion and the Binet equation by numerical simulations. The condition for circular orbits is obtained and the oscillating orbits are proved not to be closed. We prove the analogy between the nonlocal nonlinear optical system and gravitational system and specify the quantitative relation of the quantity between the two models.
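For reference, the Binet equation invoked in the abstract can be written in its standard gravitational-analogy form, with u = 1/r the inverse separation of the soliton pair, L the conserved angular momentum, and F the equivalent central force (generic symbols, not necessarily the paper's notation):

```latex
\frac{\mathrm{d}^{2} u}{\mathrm{d}\theta^{2}} + u
  = -\,\frac{F(1/u)}{L^{2}u^{2}},
\qquad u \equiv \frac{1}{r}.
```

For an inverse-square attraction F = −k/r² the right-hand side reduces to the constant k/L² and the orbits are closed conics; for a general nonlocal-medium force law the right-hand side varies with u, which is consistent with the abstract's finding that the oscillating orbits are not closed.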

  1. Neuman systems model-based research: an integrative review project.

    PubMed

    Fawcett, J; Giangrande, S K

    2001-07-01

    The project integrated Neuman systems model-based research literature. Two hundred published studies were located. This article is limited to the 59 full journal articles and 3 book chapters identified. A total of 37% focused on prevention interventions; 21% on perception of stressors; and 10% on stressor reactions. Only 50% of the reports explicitly linked the model with the study variables, and 61% did not include conclusions regarding model utility or credibility. No programs of research were identified. Academic courses and continuing education workshops are needed to help researchers design programs of Neuman systems model-based research and better explicate linkages between the model and the research.

  2. Towards a minimal stochastic model for a large class of diffusion-reactions on biological membranes.

    PubMed

    Chevalier, Michael W; El-Samad, Hana

    2012-08-28

    Diffusion of biological molecules on 2D biological membranes can play an important role in the behavior of stochastic biochemical reaction systems. Yet, we still lack a fundamental understanding of circumstances where explicit accounting of the diffusion and spatial coordinates of molecules is necessary. In this work, we illustrate how time-dependent, non-exponential reaction probabilities naturally arise when explicitly accounting for the diffusion of molecules. We use the analytical expression of these probabilities to derive a novel algorithm which, while ignoring the exact position of the molecules, can still accurately capture diffusion effects. We investigate the regions of validity of the algorithm and show that for most parameter regimes, it constitutes an accurate framework for studying these systems. We also document scenarios where large spatial fluctuation effects mandate explicit consideration of all the molecules and their positions. Taken together, our results derive a fundamental understanding of the role of diffusion and spatial fluctuations in these systems. Simultaneously, they provide a general computational methodology for analyzing a broad class of biological networks whose behavior is influenced by diffusion on membranes.
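The time-dependent, non-exponential reaction probabilities the abstract describes can be sampled without tracking positions by thinning (Lewis–Shedler rejection). The sketch below uses a hypothetical saturating hazard a(t) = a_inf·(1 − e^(−t/τ)) as a stand-in for a diffusion-modulated propensity; it is an illustration of the sampling idea, not the paper's algorithm:

```python
import math
import random

# Thinning: propose candidate event times from a homogeneous Poisson process
# with the bounding rate a_inf, then accept a candidate at time t with
# probability a(t)/a_inf. The first accepted candidate is an exact sample of
# the first reaction time for the time-dependent hazard a(t).

def sample_reaction_time(a_inf, tau, rng):
    t = 0.0
    while True:
        t += rng.expovariate(a_inf)                 # candidate from bounding rate
        if rng.random() < 1.0 - math.exp(-t / tau): # a(t)/a_inf for this hazard
            return t

rng = random.Random(0)
times = [sample_reaction_time(a_inf=2.0, tau=1.0, rng=rng)
         for _ in range(20000)]
mean_time = sum(times) / len(times)
```

Because the hazard starts at zero and rises toward a_inf, the waiting times are non-exponential: the mean here (≈1.1) exceeds the 1/a_inf = 0.5 an exponential clock with rate a_inf would give.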

  3. On spectroscopy for a whole Abelian model

    NASA Astrophysics Data System (ADS)

    Chauca, J.; Doria, R.

    2012-10-01

    A "whole" abelian gauge symmetry, postulated on the meaning of wholeness, is introduced. Several areas of physics, including complexity science, statistical mechanics, and quantum mechanics, partially support this approach, in which the whole is taken as the origin. The reductionist impasse posed by quark confinement sustains this insight most directly: it says that fundamental parts cannot be observed in isolation, so there is an experimental situation in which the parts must be supplemented by something more. This motivates writing a wholeness principle within gauge theory. To do so, the gauge parameter is reinterpreted: instead of merely compensating fields, it organizes a systemic gauge symmetry for a set of fields {AμI} rotating under a common gauge transformation. Taking such a fields collection {AμI} as the origin, the effort in this work is to investigate its spectroscopy: the quanta involved in the abelian case are analyzed; it is argued that in a whole model diversity replaces elementarity; the associated quantum numbers (spin, mass, charge, discrete symmetries) are derived in terms of the systemic symmetry; and the manifestation of particle diversity in terms of wholeness is examined.

  4. Entropic multi-relaxation free-energy lattice Boltzmann model for two-phase flows

    NASA Astrophysics Data System (ADS)

    Bösch, F.; Dorschner, B.; Karlin, I.

    2018-04-01

    The entropic multi-relaxation lattice Boltzmann method is extended to two-phase systems following the free-energy approach. Gain in stability is achieved by incorporating the force term due to Korteweg's stress into the redefined entropic stabilizer, which allows simulation of higher Weber and Reynolds numbers with an efficient and explicit algorithm. Results for head-on droplet collisions and droplet impact on super-hydrophobic substrates match experimental data accurately. Furthermore, it is demonstrated that the entropic stabilization leads to smaller spurious currents without affecting the interface thickness. The present findings demonstrate the universality of the simple and explicit entropic lattice Boltzmann models and provide a viable and robust alternative to existing methods.

  5. Clusters in nonsmooth oscillator networks

    NASA Astrophysics Data System (ADS)

    Nicks, Rachel; Chambon, Lucie; Coombes, Stephen

    2018-03-01

    For coupled oscillator networks with Laplacian coupling, the master stability function (MSF) has proven a particularly powerful tool for assessing the stability of the synchronous state. Using tools from group theory, this approach has recently been extended to treat more general cluster states. However, the MSF and its generalizations require the determination of a set of Floquet multipliers from variational equations obtained by linearization around a periodic orbit. Since closed form solutions for periodic orbits are invariably hard to come by, the framework is often explored using numerical techniques. Here, we show that further insight into network dynamics can be obtained by focusing on piecewise linear (PWL) oscillator models. Not only do these allow for the explicit construction of periodic orbits, their variational analysis can also be explicitly performed. The price for adopting such nonsmooth systems is that many of the notions from smooth dynamical systems, and in particular linear stability, need to be modified to take into account possible jumps in the components of Jacobians. This is naturally accommodated with the use of saltation matrices. By augmenting the variational approach for studying smooth dynamical systems with such matrices we show that, for a wide variety of networks that have been used as models of biological systems, cluster states can be explicitly investigated. By way of illustration, we analyze an integrate-and-fire network model with event-driven synaptic coupling as well as a diffusively coupled network built from planar PWL nodes, including a reduction of the popular Morris-Lecar neuron model. We use these examples to emphasize that the stability of network cluster states can depend as much on the choice of single node dynamics as it does on the form of network structural connectivity. 
Importantly, the procedure that we present here, for understanding cluster synchronization in networks, is valid for a wide variety of systems in biology, physics, and engineering that can be described by PWL oscillators.
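For reference, the saltation matrix that corrects the propagation of variational (Jacobian) information across a switching event takes the following standard form for a transversal crossing, where F⁻ and F⁺ are the vector fields just before and after the event, h(x) = 0 defines the switching manifold, and all quantities are evaluated at the crossing point x* (generic notation, not necessarily the paper's):

```latex
S = I + \frac{\bigl(F^{+}(x^{*}) - F^{-}(x^{*})\bigr)\,\nabla h(x^{*})^{\mathsf{T}}}
             {\nabla h(x^{*})^{\mathsf{T}}\,F^{-}(x^{*})}.
```

Multiplying the fundamental solution of the smooth flow segments by the saltation matrix at each event yields the monodromy matrix whose Floquet multipliers the MSF machinery then analyzes, exactly as for smooth systems.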

  6. System maintenance manual for master modeling of aerodynamic surfaces by three-dimensional explicit representation

    NASA Technical Reports Server (NTRS)

    Gibson, A. F.

    1983-01-01

    A system of computer programs has been developed to model general three-dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface intersection curves. Internal details of the implementation of this system are explained, and maintenance procedures are specified.
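A parametric bicubic patch of the kind the manual describes can be evaluated as a tensor product of cubic basis polynomials. The sketch below uses the Bézier–Bernstein form with a hypothetical flat control net (the system itself may use a different cubic basis, such as Hermite or Coons):

```python
import numpy as np

# Evaluate a bicubic Bezier patch: p(u, v) = sum_ij B_i(u) * P[i, j] * B_j(v),
# where B_i are the four cubic Bernstein polynomials and P is a 4x4 net of
# 3-D control points.

def bernstein3(t):
    return np.array([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3])

def patch_point(P, u, v):
    """P: 4x4x3 control net; returns the surface point at parameters (u, v)."""
    return np.einsum('i,ijk,j->k', bernstein3(u), P, bernstein3(v))

# toy net: x and y vary linearly, z = 0, so the patch is the unit square plane
P = np.zeros((4, 4, 3))
P[..., 0] = np.linspace(0, 1, 4)[:, None]
P[..., 1] = np.linspace(0, 1, 4)[None, :]
pt = patch_point(P, 0.5, 0.5)   # -> [0.5, 0.5, 0.0] by linear precision
```

Because equally spaced Bézier coefficients reproduce linear functions exactly, the flat net gives p(u, v) = (u, v, 0), which is a convenient correctness check before loading a real control net.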

  7. Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals

    PubMed Central

    Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel

    2016-01-01

    Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications. PMID:27929077

  8. Deciphering the Origin of Dogs: From Fossils to Genomes.

    PubMed

    Freedman, Adam H; Wayne, Robert K

    2017-02-08

    Understanding the timing and geographic context of dog origins is a crucial component for understanding human history, as well as the evolutionary context in which the morphological and behavioral divergence of dogs from wolves occurred. A substantial challenge to understanding domestication is that dogs have experienced a complicated demographic history. An initial severe bottleneck was associated with domestication followed by postdivergence gene flow between dogs and wolves, as well as population expansions, contractions, and replacements. In addition, because the domestication of dogs occurred in the relatively recent past, much of the observed polymorphism may be shared between dogs and wolves, limiting the power to distinguish between alternative models of dog history. Greater insight into the domestication process will require explicit tests of alternative models of domestication through the joint analysis of whole genomes from modern lineages and ancient wolves and dogs from across Eurasia.

  9. Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals

    NASA Astrophysics Data System (ADS)

    Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel

    2016-12-01

    Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications.
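
    The retrieval step can be sketched as relevance-weighted ranking. Below is a minimal sketch, with hypothetical word-relevance scores standing in for the EEG-derived predictions (the paper's actual intent-modeling pipeline is far more elaborate):

```python
import math

# Hypothetical word-relevance scores standing in for EEG-derived predictions.
word_relevance = {"brain": 0.9, "eeg": 0.8, "signal": 0.7}

docs = {
    "d1": "brain signal analysis with eeg electrodes",
    "d2": "cooking recipes and kitchen tips",
    "d3": "eeg measures electrical signal activity of the brain",
}

def score(text):
    """Cosine-style similarity between term frequencies and the relevance query."""
    words = text.split()
    tf = {w: words.count(w) / len(words) for w in set(words)}
    dot = sum(word_relevance.get(w, 0.0) * f for w, f in tf.items())
    norm = math.sqrt(sum(f * f for f in tf.values())) or 1.0
    return dot / norm

ranking = sorted(docs, key=lambda d: score(docs[d]), reverse=True)
print(ranking)   # the off-topic document ranks last
```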

  10. Theory of resonant x-ray emission spectra in compounds with localized f electrons

    NASA Astrophysics Data System (ADS)

    Kolorenč, Jindřich

    2018-05-01

    I discuss a theoretical description of the resonant x-ray emission spectroscopy (RXES) that is based on the Anderson impurity model. The parameters entering the model are determined from material-specific LDA+DMFT calculations. The theory is applicable across the whole f series, not only in the limits of nearly empty (La, Ce) or nearly full (Yb) valence f shell. Its performance is illustrated on the pressure-enhanced intermediate valency of elemental praseodymium. The obtained results are compared to the usual interpretation of RXES, which assumes that the spectrum is a superposition of several signals, each corresponding to one configuration of the 4f shell. The present theory simplifies to such superposition only if nearly all effects of hybridization of the 4f shell with the surrounding states are neglected. Although the assumption of negligible hybridization sounds reasonable for lanthanides, the explicit calculations show that it substantially distorts the analysis of the RXES data.

  11. Asynchronous discrete control of continuous processes

    NASA Astrophysics Data System (ADS)

    Kaliski, M. E.; Johnson, T. L.

    1984-07-01

    The research during this second contract year continued to deal with the development of sound theoretical models for asynchronous systems. Two criteria served to shape the research pursued: the first, that the developed models extend and generalize previously developed research for synchronous discrete control; the second, that the models explicitly address the question of how to incorporate system transition times. The following sections of this report concisely delineate this year's work. Our original proposal for this research identified four general tasks of investigation: (1.1) Analysis of Qualitative Properties of Asynchronous Hybrid Systems; (1.2) Acceptance and Control for Asynchronous Hybrid Systems.

  12. Model-based frequency response characterization of a digital-image analysis system for epifluorescence microscopy

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Viles, Charles L.; Park, Stephen K.; Reichenbach, Stephen E.; Sieracki, Michael E.

    1992-01-01

    Consideration is given to a model-based method for estimating the spatial frequency response of a digital-imaging system (e.g., a CCD camera) that is modeled as a linear, shift-invariant image acquisition subsystem that is cascaded with a linear, shift-variant sampling subsystem. The method characterizes the 2D frequency response of the image acquisition subsystem to beyond the Nyquist frequency by accounting explicitly for insufficient sampling and the sample-scene phase. Results for simulated systems and a real CCD-based epifluorescence microscopy system are presented to demonstrate the accuracy of the method.
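
    The sample-scene phase dependence the method accounts for shows up even in a tiny numeric sketch (frequencies below are illustrative, not from the paper): a sinusoid above the Nyquist frequency, sampled at the same rate but with two different scene phases, yields markedly different sampled records.

```python
import math

fs = 10.0                  # samples per unit length
f = 7.0                    # above the Nyquist frequency fs/2 = 5
n = range(20)

def sample(phase):
    # sample the continuous sinusoid at grid positions k/fs with a scene phase
    return [math.sin(2 * math.pi * f * k / fs + phase) for k in n]

a = sample(0.0)
b = sample(math.pi / 3)
# the signal aliases to |fs - f| = 3 cycles/unit, and the sampled record
# depends strongly on the sample-scene phase
print(max(abs(u - v) for u, v in zip(a, b)))
```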

  13. Improving benchmarking by using an explicit framework for the development of composite indicators: an example using pediatric quality of care

    PubMed Central

    2010-01-01

    Background The measurement of healthcare provider performance is becoming more widespread. Physicians have been guarded about performance measurement, in part because the methodology for comparative measurement of care quality is underdeveloped. Comprehensive quality improvement will require comprehensive measurement, implying the aggregation of multiple quality metrics into composite indicators. Objective To present a conceptual framework to develop comprehensive, robust, and transparent composite indicators of pediatric care quality, and to highlight aspects specific to quality measurement in children. Methods We reviewed the scientific literature on composite indicator development, health systems, and quality measurement in the pediatric healthcare setting. Frameworks were selected for explicitness and applicability to a hospital-based measurement system. Results We synthesized various frameworks into a comprehensive model for the development of composite indicators of quality of care. Among its key premises, the model proposes identifying structural, process, and outcome metrics for each of the Institute of Medicine's six domains of quality (safety, effectiveness, efficiency, patient-centeredness, timeliness, and equity) and presents a step-by-step framework for embedding the quality of care measurement model into composite indicator development. Conclusions The framework presented offers researchers an explicit path to composite indicator development. Without a scientifically robust and comprehensive approach to measurement of the quality of healthcare, performance measurement will ultimately fail to achieve its quality improvement goals. PMID:20181129
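
    The aggregation step the framework formalizes reduces, in its simplest form, to a weighted combination of normalized metrics, one per quality domain. A toy sketch (metric values and weights below are invented for illustration):

```python
# Invented example values: each quality metric is already normalized to [0, 1].
metrics = {"safety": 0.92, "effectiveness": 0.80, "timeliness": 0.60}
weights = {"safety": 0.5, "effectiveness": 0.3, "timeliness": 0.2}  # sum to 1

# Weighted linear aggregation into a single composite indicator.
composite = sum(weights[m] * metrics[m] for m in metrics)
print(composite)
```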

  14. A new approach for developing adjoint models

    NASA Astrophysics Data System (ADS)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and supplies callbacks to compute the action of these operators. The library, called libadjoint, is then capable of symbolically manipulating the forward annotation to automatically assemble the adjoint equations. Libadjoint is open source, and is explicitly designed to be bolted on to an existing discrete model. It can be applied to any discretisation, steady or time-dependent problems, and both linear and nonlinear systems. Using libadjoint has several advantages. It requires the application of an AD tool only to small pieces of code, making the use of AD far more tractable. As libadjoint derives the adjoint equations, the expertise required to develop an adjoint model is greatly diminished. One major advantage of this approach is that the model developer is freed from implementing complex checkpointing strategies for the adjoint model: libadjoint has sufficient information about the forward model to replay the entire forward solve when necessary, and thus the checkpointing algorithm can be implemented entirely within the library itself. Examples are shown using the Fluidity/ICOM framework, a complex ocean model under development at Imperial College London.
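
    The "model as a sequence of linear solves" abstraction can be sketched in a few lines of numpy: run the forward solves, then replay them backwards with transposed operators to obtain the gradient. This is a minimal illustration of the idea, not libadjoint's API; all matrices are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A1 = rng.normal(size=(n, n)) + 4 * np.eye(n)   # operator of solve 1
A2 = rng.normal(size=(n, n)) + 4 * np.eye(n)   # operator of solve 2
B = rng.normal(size=(n, n))                    # coupling between the solves
b = rng.normal(size=n)                         # control variable
c = rng.normal(size=n)                         # functional J(x2) = c . x2

def forward(b_):
    # the "tape": solve 1 feeds solve 2 through the operator B
    x1 = np.linalg.solve(A1, b_)
    x2 = np.linalg.solve(A2, B @ x1)
    return c @ x2

# adjoint: the tape replayed backwards with transposed operators
lam = np.linalg.solve(A2.T, c)            # adjoint of the last solve
grad = np.linalg.solve(A1.T, B.T @ lam)   # adjoint of the first solve; dJ/db

# check against central finite differences of the forward model
eps = 1e-6
fd = np.array([(forward(b + eps * e) - forward(b - eps * e)) / (2 * eps)
               for e in np.eye(n)])
print(np.max(np.abs(grad - fd)))
```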

  15. Some aspects of algorithm performance and modeling in transient analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1981-01-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit algorithms with variable time steps, known as the GEAR package, are described. Four test problems, used for evaluating and comparing various algorithms, were selected and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures).
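
    The reported preference for implicit algorithms on stiff problems is easy to reproduce on a one-equation analogue of insulated-structure cooling (parameters invented for illustration): with a step size above the explicit stability limit, forward Euler diverges while backward Euler decays smoothly.

```python
k, T_env, T0 = 50.0, 300.0, 400.0   # stiff cooling law dT/dt = -k (T - T_env)
dt, steps = 0.05, 100               # dt > 2/k = 0.04: explicit Euler is unstable

T_exp = T_imp = T0
for _ in range(steps):
    T_exp = T_exp + dt * (-k) * (T_exp - T_env)      # explicit (forward) Euler
    T_imp = (T_imp + dt * k * T_env) / (1 + dt * k)  # implicit (backward) Euler

print(abs(T_exp - T_env))   # explicit error grows without bound
print(abs(T_imp - T_env))   # implicit error decays toward zero
```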

  16. Exact models for isotropic matter

    NASA Astrophysics Data System (ADS)

    Thirukkanesh, S.; Maharaj, S. D.

    2006-04-01

    We study the Einstein-Maxwell system of equations in spherically symmetric gravitational fields for static interior spacetimes. The condition for pressure isotropy is reduced to a recurrence equation with variable, rational coefficients. We demonstrate that this difference equation can be solved in general using mathematical induction. Consequently, we can find an explicit exact solution to the Einstein-Maxwell field equations. The metric functions, energy density, pressure and the electric field intensity can be found explicitly. Our result contains models found previously, including the neutron star model of Durgapal and Bannerji. By placing restrictions on parameters arising in the general series, we show that the series terminate and there exist two linearly independent solutions. Consequently, it is possible to find exact solutions in terms of elementary functions, namely polynomials and algebraic functions.

  17. A data management system for engineering and scientific computing

    NASA Technical Reports Server (NTRS)

    Elliot, L.; Kunii, H. S.; Browne, J. C.

    1978-01-01

    Data elements and relationship definition capabilities for this data management system are explicitly tailored to the needs of engineering and scientific computing. System design was based upon studies of data management problems currently being handled through explicit programming. The system-defined data element types include real scalar numbers, vectors, arrays and special classes of arrays such as sparse arrays and triangular arrays. The data model is hierarchical (tree structured). Multiple views of data are provided at two levels. Subschemas provide multiple structural views of the total data base and multiple mappings for individual record types are supported through the use of a REDEFINES capability. The data definition language and the data manipulation language are designed as extensions to FORTRAN. Examples of the coding of real problems taken from existing practice in the data definition language and the data manipulation language are given.

  18. Three Dimensional Explicit Model for Cometary Tail Ions Interactions with Solar Wind

    NASA Astrophysics Data System (ADS)

    Al Bermani, M. J. F.; Alhamed, S. A.; Khalaf, S. Z.; Ali, H. Sh.; Selman, A. A.

    2009-06-01

    The different interactions between cometary tail and solar wind ions are studied in the present paper based on the three-dimensional explicit Lax method. The model used in this research is based on the continuity equations describing the cometary tail-solar wind interactions, solved for a fully three-dimensional system. Simulation of the physical system was achieved using a computer code written in Matlab 7.0. The parameters studied here assumed a Halley-type comet and include the particle density rho, the particle velocity v, the magnetic field strength B, the dynamic pressure p, and the internal energy E. The results of the present research showed that the interaction near the cometary nucleus is mainly affected by the new ions added to the plasma of the solar wind, which increase the average molecular weight and result in many unique characteristics of the cometary tail. These characteristics were explained in the presence of the IMF.
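
    A one-dimensional analogue of the explicit Lax (Lax-Friedrichs) step for the continuity equation conveys the scheme's structure; the grid, advection speed, and density pulse below are illustrative choices, not Halley parameters.

```python
import numpy as np

# 1-D continuity equation d(rho)/dt + d(rho*v)/dx = 0 with constant v,
# advanced with the explicit Lax scheme on a periodic grid.
nx, v = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / abs(v)                               # CFL number 0.5
rho = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.1) ** 2)    # initial density pulse
mass0 = rho.sum() * dx                               # conserved total mass

for _ in range(100):
    F = rho * v                                      # flux rho*v
    rho = 0.5 * (np.roll(rho, -1) + np.roll(rho, 1)) \
          - dt / (2.0 * dx) * (np.roll(F, -1) - np.roll(F, 1))

print(abs(rho.sum() * dx - mass0))   # mass drift (zero up to round-off)
```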

  19. The use of tacit knowledge in occupational safety and health management systems.

    PubMed

    Podgórski, Daniel

    2010-01-01

    A systematic approach to occupational safety and health (OSH) management and concepts of knowledge management (KM) have developed independently since the 1990s. Most KM models assume a division of knowledge into explicit and tacit. The role of tacit knowledge is stressed as necessary for higher performance in an enterprise. This article reviews literature on KM applications in OSH. Next, 10 sections of an OSH management system (OSH MS) are identified, in which creating and transferring tacit knowledge contributes significantly to prevention of occupational injuries and diseases. The roles of tacit knowledge in OSH MS are contrasted with those of explicit knowledge, but a lack of a model that would describe this process holistically is pointed out. Finally, examples of methods and tools supporting the use of KM in OSH MS are presented and topics of future research aimed at enhancing KM applications in OSH MS are proposed.

  20. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  1. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  2. Academic Work from a Comparative Perspective: A Survey of Faculty Working Time across 13 Countries

    ERIC Educational Resources Information Center

    Bentley, Peter James; Kyvik, Svein

    2012-01-01

    Sociological institutional theory views universities as model-driven organizations. The world's stratification system promotes conformity, imitation and isomorphism towards the "best" university models. Accordingly, academic roles may be locally shaped in minor ways, but are defined and measured explicitly in global terms. We test this proposition…

  3. Boundedness and global stability of the two-predator and one-prey models with nonlinear prey-taxis

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Wang, Mingxin

    2018-06-01

    This paper concerns the reaction-diffusion systems modeling the population dynamics of two predators and one prey with nonlinear prey-taxis. We first investigate the global existence and boundedness of the unique classical solution for the general model. Then, we study the global stabilities of nonnegative spatially homogeneous equilibria for an explicit system with type I functional responses and density-dependent death rates for the predators and logistic growth for the prey. Moreover, the convergence rates are also established.
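
    The spatially homogeneous equilibrium whose global stability is studied can be illustrated on the space-free ODE analogue with type I functional responses, density-dependent predator death rates, and logistic prey growth; all parameter values below are hypothetical.

```python
# Space-homogeneous (ODE) analogue: logistic prey u, two predators v and w
# with type I functional responses and density-dependent death rates.
# All parameter values are hypothetical illustrations.
d1 = d2 = 0.2       # baseline predator death rates
e1 = e2 = 0.2       # density-dependent death coefficients

u, v, w = 0.5, 0.1, 0.1
dt = 0.01
for _ in range(100_000):                     # forward Euler to t = 1000
    du = u * (1.0 - u) - u * v - u * w       # logistic growth minus predation
    dv = u * v - v * (d1 + e1 * v)
    dw = u * w - w * (d2 + e2 * w)
    u, v, w = u + dt * du, v + dt * dv, w + dt * dw

# coexistence equilibrium for these parameters: u* = 3/11, v* = w* = 4/11
print(round(u, 4), round(v, 4), round(w, 4))
```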

  4. The Solution Construction of Heterotic Super-Liouville Model

    NASA Astrophysics Data System (ADS)

    Yang, Zhan-Ying; Zhen, Yi

    2001-12-01

    We investigate the heterotic super-Liouville model on the basis of the basic Lie super-algebra Osp(1|2). Using the super extension of the Leznov-Saveliev analysis and the Drinfeld-Sokolov linear system, we construct the explicit solution of the heterotic super-Liouville system in component form. We also show that the solutions are local and periodic by calculating the exchange relation of the solution. Finally, starting from the action of the heterotic super-Liouville model, we obtain the conserved current and conserved charge, which possess BRST properties.

  5. Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices

    NASA Astrophysics Data System (ADS)

    Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando

    2017-10-01

    We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse-approximate inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix (``mass'' matrix) using successive sparsity pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a vector-wave (curl-curl) equation of second order but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and axisymmetric vacuum electronic devices.

  6. When everything is not everywhere but species evolve: an alternative method to model adaptive properties of marine ecosystems

    PubMed Central

    Sauterey, Boris; Ward, Ben A.; Follows, Michael J.; Bowler, Chris; Claessen, David

    2015-01-01

    The functional and taxonomic biogeography of marine microbial systems reflects the current state of an evolving system. Current models of marine microbial systems and biogeochemical cycles do not reflect this fundamental organizing principle. Here, we investigate the evolutionary adaptive potential of marine microbial systems under environmental change and introduce explicit Darwinian adaptation into an ocean modelling framework, simulating evolving phytoplankton communities in space and time. To this end, we adopt tools from adaptive dynamics theory, evaluating the fitness of invading mutants over annual timescales, replacing the resident if a fitter mutant arises. Using the evolutionary framework, we examine how community assembly, specifically the emergence of phytoplankton cell size diversity, reflects the combined effects of bottom-up and top-down controls. When compared with a species-selection approach, based on the paradigm that “Everything is everywhere, but the environment selects”, we show that (i) the selected optimal trait values are similar; (ii) the patterns emerging from the adaptive model are more robust, but (iii) the two methods lead to different predictions in terms of emergent diversity. We demonstrate that explicitly evolutionary approaches to modelling marine microbial populations and functionality are feasible and practical in time-varying, space-resolving settings and provide a new tool for exploring evolutionary interactions on a range of timescales in the ocean. PMID:25852217

  7. When everything is not everywhere but species evolve: an alternative method to model adaptive properties of marine ecosystems.

    PubMed

    Sauterey, Boris; Ward, Ben A; Follows, Michael J; Bowler, Chris; Claessen, David

    2015-01-01

    The functional and taxonomic biogeography of marine microbial systems reflects the current state of an evolving system. Current models of marine microbial systems and biogeochemical cycles do not reflect this fundamental organizing principle. Here, we investigate the evolutionary adaptive potential of marine microbial systems under environmental change and introduce explicit Darwinian adaptation into an ocean modelling framework, simulating evolving phytoplankton communities in space and time. To this end, we adopt tools from adaptive dynamics theory, evaluating the fitness of invading mutants over annual timescales, replacing the resident if a fitter mutant arises. Using the evolutionary framework, we examine how community assembly, specifically the emergence of phytoplankton cell size diversity, reflects the combined effects of bottom-up and top-down controls. When compared with a species-selection approach, based on the paradigm that "Everything is everywhere, but the environment selects", we show that (i) the selected optimal trait values are similar; (ii) the patterns emerging from the adaptive model are more robust, but (iii) the two methods lead to different predictions in terms of emergent diversity. We demonstrate that explicitly evolutionary approaches to modelling marine microbial populations and functionality are feasible and practical in time-varying, space-resolving settings and provide a new tool for exploring evolutionary interactions on a range of timescales in the ocean.

  8. Symmetric linear systems - An application of algebraic systems theory

    NASA Technical Reports Server (NTRS)

    Hazewinkel, M.; Martin, C.

    1983-01-01

    Dynamical systems which contain several identical subsystems occur in a variety of applications ranging from command and control systems and discretization of partial differential equations, to the stability augmentation of pairs of helicopters lifting a large mass. Linear models for such systems display certain obvious symmetries. In this paper, we discuss how these symmetries can be incorporated into a mathematical model that utilizes the modern theory of algebraic systems. Such systems are inherently related to the representation theory of algebras over fields. We will show that any control scheme which respects the dynamical structure either implicitly or explicitly uses the underlying algebra.
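
    One concrete instance of exploiting such symmetry: N identical subsystems coupled in a ring give a circulant system matrix, which the discrete Fourier transform diagonalizes, decoupling the whole system into N scalar problems. A small hypothetical sketch:

```python
import numpy as np

N = 8
c = np.zeros(N)
c[0], c[1], c[-1] = 2.0, -0.5, -0.5        # symmetric nearest-neighbour coupling
C = np.array([np.roll(c, i) for i in range(N)])   # circulant system matrix
b = np.arange(1.0, N + 1)                  # right-hand side

x_direct = np.linalg.solve(C, b)           # coupled solve, O(N^3)
# The DFT diagonalizes any circulant: its eigenvalues are the FFT of the
# stencil, so the ring of identical subsystems decouples into N scalar divisions.
x_fft = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

print(np.allclose(x_direct, x_fft))
```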

  9. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
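
    The flavor of model-based mixed-integer optimization can be conveyed by a toy estimation-of-distribution algorithm, a deliberate simplification (GAMBIT's actual models additionally capture variable dependences): fit a simple probabilistic model to the selected individuals, then sample the next population from it. The objective below is invented for illustration.

```python
import random
random.seed(0)

def f(k, x):                       # hypothetical MI objective, optimum at (3, 0.5)
    return (k - 3) ** 2 + (x - 0.5) ** 2

# initial population: integer gene k in 0..7, continuous gene x in [-2, 2]
pop = [(random.randrange(8), random.uniform(-2.0, 2.0)) for _ in range(60)]
for _ in range(40):
    pop.sort(key=lambda s: f(*s))
    elite = pop[:30]                               # truncation selection
    # model: empirical distribution over k, Gaussian over x
    ks = [k for k, _ in elite]
    xs = [x for _, x in elite]
    mu = sum(xs) / len(xs)
    sd = max((sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5, 1e-3)
    pop = [(random.choice(ks), random.gauss(mu, sd)) for _ in range(60)]

best = min(pop, key=lambda s: f(*s))
print(best)
```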

  10. Constant pH molecular dynamics of proteins in explicit solvent with proton tautomerism.

    PubMed

    Goh, Garrett B; Hulbert, Benjamin S; Zhou, Huiqing; Brooks, Charles L

    2014-07-01

    pH is a ubiquitous regulator of biological activity, including protein-folding, protein-protein interactions, and enzymatic activity. Existing constant pH molecular dynamics (CPHMD) models that were developed to address questions related to the pH-dependent properties of proteins are largely based on implicit solvent models. However, implicit solvent models are known to underestimate the desolvation energy of buried charged residues, increasing the error associated with predictions that involve internal ionizable residues that are important in processes like hydrogen transport and electron transfer. Furthermore, discrete water molecules and ions, which are important in systems like membrane proteins and ion channels, cannot be modeled in implicit solvent. We report on an explicit solvent constant pH molecular dynamics framework based on multi-site λ-dynamics (CPHMD(MSλD)). In the CPHMD(MSλD) framework, we performed seamless alchemical transitions between protonation and tautomeric states using multi-site λ-dynamics, and designed novel biasing potentials to ensure that the physical end-states are predominantly sampled. We show that explicit solvent CPHMD(MSλD) simulations model realistic pH-dependent properties of proteins such as the Hen-Egg White Lysozyme (HEWL), binding domain of 2-oxoglutarate dehydrogenase (BBL) and N-terminal domain of ribosomal protein L9 (NTL9), and the pKa predictions are in excellent agreement with experimental values, with an RMSE ranging from 0.72 to 0.84 pKa units. With the recent development of the explicit solvent CPHMD(MSλD) framework for nucleic acids, accurate modeling of pH-dependent properties of both major classes of biomolecules, proteins and nucleic acids, is now possible. © 2013 Wiley Periodicals, Inc.
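
    The pKa values being predicted connect sampled protonation fractions to pH through the Henderson-Hasselbalch relation; a quick reference implementation (the residue pKa below is an illustrative, histidine-like value):

```python
# Henderson-Hasselbalch: fraction of a titratable site that is protonated at a
# given pH, for a site with the given pKa.
def protonated_fraction(pH, pKa):
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

pKa = 6.5                       # illustrative, histidine-like residue
print(protonated_fraction(pKa, pKa))   # exactly half-protonated at pH = pKa
print(protonated_fraction(4.0, pKa))   # essentially fully protonated at low pH
```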

  11. Modeling Quantum Dynamics in Multidimensional Systems

    NASA Astrophysics Data System (ADS)

    Liss, Kyle; Weinacht, Thomas; Pearson, Brett

    2017-04-01

    Coupling between different degrees-of-freedom is an inherent aspect of dynamics in multidimensional quantum systems. As experiments and theory begin to tackle larger molecular structures and environments, models that account for vibrational and/or electronic couplings are essential for interpretation. Relevant processes include intramolecular vibrational relaxation, conical intersections, and system-bath coupling. We describe a set of simulations designed to model coupling processes in multidimensional molecular systems, focusing on models that provide insight and allow visualization of the dynamics. Undergraduates carried out much of the work as part of a senior research project. In addition to the pedagogical value, the simulations allow for comparison between both explicit and implicit treatments of a system's many degrees-of-freedom.

  12. An integrated conceptual framework for long-term social-ecological research

    Treesearch

    S.L. Collins; S.R. Carpenter; S.M. Swinton; D.E. Orenstein; D.L. Childers; T.L. Gragson; N.B. Grimm; J.M. Grove; S.L. Harlan; J.P. Kaye; A.K. Knapp; G.P. Kofinas; J.J. Magnuson; W.H. McDowell; J.M. Melack; L.A. Ogden; G.P. Robertson; M.D. Smith; A.C. Whitmer

    2010-01-01

    The global reach of human activities affects all natural ecosystems, so that the environment is best viewed as a social-ecological system. Consequently, a more integrative approach to environmental science, one that bridges the biophysical and social domains, is sorely needed. Although models and frameworks for social-ecological systems exist, few are explicitly...

  13. Theory of the evolutionary minority game

    NASA Astrophysics Data System (ADS)

    Lo, T. S.; Hui, P. M.; Johnson, N. F.

    2000-09-01

    We present a theory describing a recently introduced model of an evolving, adaptive system in which agents compete to be in the minority. The agents themselves are able to evolve their strategies over time in an attempt to improve their performance. The theory explicitly demonstrates the self-interaction, or market impact, that agents in such systems experience.
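
    The model class analysed can be sketched in a few lines (a simplified toy with invented thresholds): each agent follows the previous winning choice with its own probability p, and agents whose cumulative score drops too low replace their p, so the strategy distribution evolves.

```python
import random
random.seed(1)

N, rounds = 101, 2000          # odd N, so a strict minority always exists
p = [random.random() for _ in range(N)]   # each agent's "gene"
score = [0.0] * N
last_winner = 0                # previous minority choice (0 or 1)

for _ in range(rounds):
    # each agent copies the previous winning choice with probability p[i]
    choices = [last_winner if random.random() < p[i] else 1 - last_winner
               for i in range(N)]
    ones = sum(choices)
    winner = 1 if ones < N - ones else 0   # the minority side wins
    for i in range(N):
        score[i] += 1 if choices[i] == winner else -1
        if score[i] < -10:                 # evolve: replace a poor strategy
            p[i] = random.random()
            score[i] = 0
    last_winner = winner

print(min(ones, N - ones))   # size of the final minority
```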

  14. Multiple Replica Repulsion Technique for Efficient Conformational Sampling of Biological Systems

    PubMed Central

    Malevanets, Anatoly; Wodak, Shoshana J.

    2011-01-01

    Here, we propose a technique for sampling complex molecular systems with many degrees of freedom. The technique, termed “multiple replica repulsion” (MRR), does not suffer from poor scaling with the number of degrees of freedom associated with common replica exchange procedures and does not require sampling at high temperatures. The algorithm involves creation of multiple copies (replicas) of the system, which interact with one another through a repulsive potential that can be applied to the system as a whole or to portions of it. The proposed scheme prevents oversampling of the most populated states and provides accurate descriptions of conformational perturbations typically associated with sampling ground-state energy wells. The performance of MRR is illustrated for three systems of increasing complexity. A two-dimensional toy potential surface is used to probe the sampling efficiency as a function of key parameters of the procedure. MRR simulations of the Met-enkephalin pentapeptide, and the 76-residue protein ubiquitin, performed in presence of explicit water molecules and totaling 32 ns each, investigate the ability of MRR to characterize the conformational landscape of the peptide, and the protein native basin, respectively. Results obtained for the enkephalin peptide reflect more closely the extensive conformational flexibility of this peptide than previously reported simulations. Those obtained for ubiquitin show that conformational ensembles sampled by MRR largely encompass structural fluctuations relevant to biological recognition, which occur on the microsecond timescale, or are observed in crystal structures of ubiquitin complexes with other proteins. MRR thus emerges as a very promising simple and versatile technique for modeling the structural plasticity of complex biological systems. PMID:21843487
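
    The core idea, replicas penalized for crowding the same region, can be shown in one dimension with gradient descent on a double-well potential plus a repulsive inter-replica term (all values illustrative; the paper samples with molecular dynamics, not descent):

```python
# Two replicas on the double well V(x) = (x^2 - 1)^2, coupled by the
# repulsion U = eps / ((x1 - x2)^2 + delta). Illustrative parameters.
eps, delta, lr = 0.5, 0.1, 0.01

def dV(x):                      # gradient of the double-well potential
    return 4.0 * x * (x * x - 1.0)

x1, x2 = 0.6, 0.5               # both replicas start in the right-hand basin
for _ in range(2000):
    s = x1 - x2
    g = 2.0 * eps * s / (s * s + delta) ** 2   # repulsive force magnitude
    x1 -= lr * (dV(x1) - g)     # pushed up and out of the shared basin
    x2 -= lr * (dV(x2) + g)     # pushed down, over the barrier

# the repulsion drives the replicas into different wells
print(round(x1, 2), round(x2, 2))
```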

  15. Enterprise Architecture Tradespace Analysis

    DTIC Science & Technology

    2014-02-21

    EXECUTIVE SUMMARY The Department of Defense (DoD)’s Science & Technology (S&T) priority for Engineered Resilient Systems (ERS) calls for adaptable designs with diverse systems models that can easily be...Department of Defense [Holland, 2012]. Some explicit goals are: • Establish baseline resiliency of current capabilities • More complete and robust

  16. Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation

    NASA Technical Reports Server (NTRS)

    Layton, Charles; Smith, Philip J.; McCoy, C. Elaine

    1994-01-01

    Both optimization techniques and expert systems technologies are popular approaches for developing tools to assist in complex problem-solving tasks. Because of the underlying complexity of many such tasks, however, the models of the world implicitly or explicitly embedded in such tools are often incomplete and the problem-solving methods fallible. The result can be 'brittleness' in situations that were not anticipated by the system designers. To deal with this weakness, it has been suggested that 'cooperative' rather than 'automated' problem-solving systems be designed. Such cooperative systems are proposed to explicitly enhance the collaboration of the person (or a group of people) and the computer system. This study evaluates the impact of alternative design concepts on the performance of 30 airline pilots interacting with such a cooperative system designed to support en-route flight planning. The results clearly demonstrate that different system design concepts can strongly influence the cognitive processes and resultant performances of users. Based on think-aloud protocols, cognitive models are proposed to account for how features of the computer system interacted with specific types of scenarios to influence exploration and decision making by the pilots. The results are then used to develop recommendations for guiding the design of cooperative systems.

  17. Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation

    NASA Technical Reports Server (NTRS)

    Layton, Charles; Smith, Philip J.; McCoy, C. Elaine

    1994-01-01

    Both optimization techniques and expert systems technologies are popular approaches for developing tools to assist in complex problem-solving tasks. Because of the underlying complexity of many such tasks, however, the models of the world implicitly or explicitly embedded in such tools are often incomplete and the problem-solving methods fallible. The result can be 'brittleness' in situations that were not anticipated by the system designers. To deal with this weakness, it has been suggested that 'cooperative' rather than 'automated' problem-solving systems be designed. Such cooperative systems are proposed to explicitly enhance the collaboration of the person (or a group of people) and the computer system. This study evaluates the impact of alternative design concepts on the performance of 30 airline pilots interacting with such a cooperative system designed to support en-route flight planning. The results clearly demonstrate that different system design concepts can strongly influence the cognitive processes and resultant performances of users. Based on think-aloud protocols, cognitive models are proposed to account for how features of the computer system interacted with specific types of scenarios to influence exploration and decision making by the pilots. The results are then used to develop recommendations for guiding the design of cooperative systems.

  18. Harvesting cost model for small trees in natural stands in the interior northwest.

    Treesearch

    Bruce R. Hartsough; Xiaoshan Zhang; Roger D. Fight

    2001-01-01

    Realistic logging cost models are needed for long-term forest management planning. Data from numerous published studies were combined to estimate the costs of harvesting small trees in natural stands in the Interior Northwest of North America. Six harvesting systems were modeled. Four address gentle terrain: manual log-length, manual whole-tree, mechanized whole-tree,...

  19. Effects of Explicit Instructions, Metacognition, and Motivation on Creative Performance

    ERIC Educational Resources Information Center

    Hong, Eunsook; O'Neil, Harold F.; Peng, Yun

    2016-01-01

    Effects of explicit instructions, metacognition, and intrinsic motivation on creative homework performance were examined in 303 Chinese 10th-grade students. Models that represent hypothesized relations among these constructs and trait covariates were tested using structural equation modelling. Explicit instructions geared to originality were…

  20. Impact of Teachers' Planned Questions on Opportunities for Students to Reason Mathematically in Whole-Class Discussions around Mathematical Problem-Solving Tasks

    ERIC Educational Resources Information Center

    Enoch, Sarah Elizabeth

    2013-01-01

    While professional developers have been encouraging teachers to plan for discourse around problem solving tasks as a way to orchestrate mathematically productive discourse (Stein, Engle, Smith, & Hughes, 2008; Stein, Smith, Henningsen, & Silver, 2009) no research has been conducted explicitly examining the relationship between the plans…

  1. Dialing in to a Circle of Trust: A "Medium" Tech Experiment and Poetic Evaluation

    ERIC Educational Resources Information Center

    Love, Christine T.

    2012-01-01

    In his 2004 book "A Hidden Wholeness," Parker Palmer makes explicit the unique qualities of the transformational "circle of trust." He describes a group of people embracing the paradox of "being alone together," where the only goal of the group is to invite the emergence of the soul of each individual, through…

  2. Handling Errors as They Arise in Whole-Class Interactions

    ERIC Educational Resources Information Center

    Ingram, Jenni; Pitt, Andrea; Baldry, Fay

    2015-01-01

    There has been a long history of research into errors and their role in the teaching and learning of mathematics. This research has led to a change to pedagogical recommendations from avoiding errors to explicitly using them in lessons. In this study, 22 mathematics lessons were video-recorded and transcribed. A conversation analytic (CA) approach…

  3. R. S. Peters and J. H. Newman on the Aims of Education

    ERIC Educational Resources Information Center

    Ozolins, Janis T.

    2013-01-01

    R. S. Peters never explicitly talks about wisdom as being an aim of education. He does, however, in numerous places, emphasize that education is of the whole person and that, whatever else it might be about, it involves the development of knowledge and understanding. Being educated, he claims, is incompatible with being narrowly specialized.…

  4. Structure and dynamics of human vimentin intermediate filament dimer and tetramer in explicit and implicit solvent models.

    PubMed

    Qin, Zhao; Buehler, Markus J

    2011-01-01

    Intermediate filaments, in addition to microtubules and microfilaments, are one of the three major components of the cytoskeleton in eukaryotic cells, and play an important role in mechanotransduction as well as in providing mechanical stability to cells at large stretch. The molecular structures, mechanical and dynamical properties of the intermediate filament basic building blocks, the dimer and the tetramer, however, have remained elusive due to persistent experimental challenges owing to the large size and fibrillar geometry of this protein. We have recently reported an atomistic-level model of the human vimentin dimer and tetramer, obtained through a bottom-up approach based on structural optimization via molecular simulation based on an implicit solvent model (Qin et al., PLoS ONE 2009, 4(10):e7294). Here we present extensive simulations and structural analyses of the model based on ultra large-scale atomistic-level simulations in an explicit solvent model, with system sizes exceeding 500,000 atoms and simulations carried out at 20 ns time-scales. We report a detailed comparison of the structural and dynamical behavior of this large biomolecular model with implicit and explicit solvent models. Our simulations confirm the stability of the molecular model and provide insight into the dynamical properties of the dimer and tetramer. Specifically, our simulations reveal a heterogeneous distribution of the bending stiffness along the molecular axis with the formation of rather soft and highly flexible hinge-like regions defined by non-alpha-helical linker domains. We report a comparison of Ramachandran maps and the solvent accessible surface area between implicit and explicit solvent models, and compute the persistence length of the dimer and tetramer structure of vimentin intermediate filaments for various subdomains of the protein.
Our simulations provide detailed insight into the dynamical properties of the vimentin dimer and tetramer intermediate filament building blocks, which may guide the development of novel coarse-grained models of intermediate filaments, and could also help in understanding assembly mechanisms.
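
    Persistence-length estimates of the kind reported above are commonly obtained from the decay of tangent-tangent correlations along the backbone, ⟨t(s)·t(s+Δs)⟩ ≈ exp(−Δs/Lp). Below is a minimal sketch on synthetic coordinates; the function and the test geometry are illustrative, not the authors' analysis code.

```python
import math

def persistence_length(points, ds):
    # estimate Lp from nearest-neighbor tangent correlations <t_i . t_{i+1}> = exp(-ds/Lp)
    tangents = []
    for a, b in zip(points, points[1:]):
        v = [bj - aj for aj, bj in zip(a, b)]
        n = math.sqrt(sum(c * c for c in v))
        tangents.append([c / n for c in v])
    dots = [sum(u * v for u, v in zip(t1, t2)) for t1, t2 in zip(tangents, tangents[1:])]
    mean = sum(dots) / len(dots)
    return -ds / math.log(mean)

# gently curved arc on the unit circle: successive tangents turn by 0.01 rad
pts = [(math.cos(i * 0.01), math.sin(i * 0.01), 0.0) for i in range(50)]
print(round(persistence_length(pts, ds=0.01), 1))   # → 200.0
```

    A stiffer (straighter) chain gives a smaller turning angle per step and hence a larger Lp; soft hinge regions like the linker domains show up as locally short persistence lengths.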

  5. Advanced Suspension and Control Algorithm for U.S. Army Ground Vehicles

    DTIC Science & Technology

    2013-04-01

    Army Materiel Systems Analysis Activity (AMSAA), for his assistance and guidance in building a multibody vehicle dynamics model of a typical light...Mobility Multipurpose Wheeled Vehicle [HMMWV] model) that was developed in collaboration with the U.S. Army Materiel Systems Analysis Activity (5) is...control weight for GPC With Explicit Disturbance was R = 1.0e-7 over the entire speed range. To simplify analysis , the control weights for the other two

  6. Socio-hydrologic Modeling to Understand and Mediate the Competition for Water between Humans and Ecosystems: Murrumbidgee River Basin, Australia

    NASA Astrophysics Data System (ADS)

    van Emmerik, Tim; Sivapalan, Murugesu; Li, Zheng; Pande, Saket; Savenije, Hubert

    2014-05-01

    Around the world the demand for water resources is growing in order to satisfy rapidly increasing human populations, leading to competition for water between humans and ecosystems. An entirely new and comprehensive quantitative framework is needed to establish a holistic understanding of that competition, thereby enabling development and evaluation of effective mediation strategies. We present a case study centered on the Murrumbidgee river basin in eastern Australia that illustrates the dynamics of the balance between water extraction and use for food production and efforts to mitigate and reverse consequent degradation of the riparian environment. Interactions between patterns of water resources management and climate driven hydrological variability within the prevailing socio-economic environment have contributed to the emergence of new whole system dynamics over the last 100 years. In particular, data analysis reveals a pendulum swing between an exclusive focus on agricultural development and food production in the initial stages of water resources development and its attendant socio-economic benefits, followed by the gradual realization of the adverse environmental impacts, efforts to mitigate these with the use of remedial measures, and ultimately concerted efforts and externally imposed solutions to restore environmental health and ecosystem services. A quasi-distributed coupled socio-hydrologic system model that explicitly includes the two-way coupling between human and hydrological systems, including evolution of human values/norms relating to water and the environment, is able to mimic broad features of this pendulum swing. The model consists of coupled nonlinear differential equations that include four state variables describing the co-evolution of storage capacity, irrigated area, human population, and ecosystem health, which are all connected by feedback mechanisms. 
The model is used to generate insights into the dominant controls of the trajectory of co-evolution of the coupled human-water system, to serve as the theoretical framework for more detailed analysis of the system, and to generate organizing principles that may be transferable to other systems in different climatic and socio-economic settings.
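
    The structure of such a coupled model (four state variables linked by feedbacks and integrated forward in time) can be sketched in a few lines. The rate laws and coefficients below are invented for illustration only; they are not the paper's calibrated equations.

```python
# toy coupled human-water system (hypothetical rate laws, not the paper's model)
def step(state, dt=0.1):
    S, A, P, E = state                   # storage, irrigated area, population, ecosystem health
    dS = 0.05 * P - 0.02 * S             # reservoirs built as population grows
    dA = 0.04 * S - 0.03 * (1 - E) * A   # expansion damped as the ecosystem degrades
    dP = 0.03 * A * P * (1 - P / 10.0)   # logistic growth supported by agriculture
    dE = 0.02 * (1 - E) - 0.04 * A * E   # recovery versus extraction pressure
    return (S + dt * dS, A + dt * dA, P + dt * dP, E + dt * dE)

state = (1.0, 0.5, 1.0, 1.0)
for _ in range(500):                     # forward-Euler integration to t = 50
    state = step(state)
print([round(v, 2) for v in state])
```

    Even this toy version reproduces the qualitative "pendulum" shape: agricultural expansion degrades ecosystem health, and the degraded ecosystem in turn brakes further expansion.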

  7. The multilayer temporal network of public transport in Great Britain

    NASA Astrophysics Data System (ADS)

    Gallotti, Riccardo; Barthelemy, Marc

    2015-01-01

    Despite the widespread availability of information concerning public transport coming from different sources, it is extremely hard to have a complete picture, in particular at a national scale. Here, we integrate timetable data obtained from the United Kingdom open-data program together with timetables of domestic flights, and obtain a comprehensive snapshot of the temporal characteristics of the whole UK public transport system for a week in October 2010. In order to focus on multi-modal aspects of the system, we use a coarse graining procedure and define explicitly the coupling between different transport modes such as connections at airports, ferry docks, rail, metro, coach and bus stations. The resulting weighted, directed, temporal and multilayer network is provided in simple, commonly used formats, ensuring easy access and the possibility of a straightforward use of old or specifically developed methods on this new and extensive dataset.
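
    The multilayer construction described above, intra-layer service edges plus explicit inter-layer coupling edges, can be sketched with a plain edge dictionary keyed by (layer, node, layer, node). The tiny timetable and the 15-minute transfer time are invented placeholders, not entries from the UK dataset.

```python
from collections import defaultdict

# toy timetable rows: (layer, origin, destination, depart_min, arrive_min)
timetable = [
    ("rail",  "Euston",   "Crewe",     600, 690),
    ("rail",  "Crewe",    "Liverpool", 700, 730),
    ("coach", "Victoria", "Luton",     610, 675),
    ("air",   "Luton",    "Edinburgh", 720, 790),
]

# intra-layer edges: directed, weighted by service count and fastest duration
edges = defaultdict(lambda: {"count": 0, "min_duration": float("inf")})
for layer, o, d, dep, arr in timetable:
    e = edges[(layer, o, layer, d)]
    e["count"] += 1
    e["min_duration"] = min(e["min_duration"], arr - dep)

# inter-layer coupling: link co-located stops across modes (assumed walk time)
couplings = [("coach", "Luton", "air", "Luton", 15)]
for la, sa, lb, sb, walk in couplings:
    edges[(la, sa, lb, sb)]["count"] = 1
    edges[(la, sa, lb, sb)]["min_duration"] = walk

print(len(edges), edges[("rail", "Euston", "rail", "Crewe")]["min_duration"])  # → 5 90
```

    Keeping the layer in the node key is what makes the network multilayer: "Luton" as a coach stop and "Luton" as an airport are distinct nodes joined only by the explicit coupling edge.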

  8. Conjunctive Coding of Complex Object Features

    PubMed Central

    Erez, Jonathan; Cusack, Rhodri; Kendall, William; Barense, Morgan D.

    2016-01-01

    Critical to perceiving an object is the ability to bind its constituent features into a cohesive representation, yet the manner by which the visual system integrates object features to yield a unified percept remains unknown. Here, we present a novel application of multivoxel pattern analysis of neuroimaging data that allows a direct investigation of whether neural representations integrate object features into a whole that is different from the sum of its parts. We found that patterns of activity throughout the ventral visual stream (VVS), extending anteriorly into the perirhinal cortex (PRC), discriminated between the same features combined into different objects. Despite this sensitivity to the unique conjunctions of features comprising objects, activity in regions of the VVS, again extending into the PRC, was invariant to the viewpoints from which the conjunctions were presented. These results suggest that the manner in which our visual system processes complex objects depends on the explicit coding of the conjunctions of features comprising them. PMID:25921583

  9. Violation of unitarity by Hawking radiation does not violate energy-momentum conservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolić, Hrvoje

    2015-04-02

    An argument by Banks, Susskind and Peskin (BSP), according to which violation of unitarity would violate either locality or energy-momentum conservation, is widely believed to be a strong argument against non-unitarity of Hawking radiation. We find that the whole BSP argument rests on the crucial assumption that the Hamiltonian is not highly degenerate, and point out that this assumption is not satisfied for systems with many degrees of freedom. Using the Lindblad equation, we show that high degeneracy of the Hamiltonian allows local non-unitary evolution without violating energy-momentum conservation. Moreover, since energy-momentum is the source of gravity, we argue that energy-momentum is necessarily conserved for a large class of non-unitary systems with gravity. Finally, we explicitly calculate the Lindblad operators for non-unitary Hawking radiation and show that they conserve energy-momentum.
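
    The mechanism can be made concrete with a two-level toy system: a Lindblad dissipator built from an operator that commutes with the Hamiltonian destroys unitarity (coherences decay) while preserving both the trace and the mean energy. The Hamiltonian, dephasing operator, and rate below are illustrative stand-ins, not the paper's Hawking-radiation operators.

```python
def matmul(a, b):  # 2x2 complex matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(a):     # conjugate transpose
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def lin(a, b, s):  # elementwise a + s*b
    return [[a[i][j] + s * b[i][j] for j in range(2)] for i in range(2)]

H = [[0.0, 0.0], [0.0, 1.0]]                 # Hamiltonian in its energy eigenbasis
g = 0.5
L = [[g ** 0.5, 0.0], [0.0, -(g ** 0.5)]]    # dephasing (sigma_z-like) Lindblad operator; [L, H] = 0

def lindblad_step(rho, dt=0.01):
    # d(rho)/dt = -i[H, rho] + L rho L† - (1/2){L†L, rho}, integrated by forward Euler
    comm = lin(matmul(H, rho), matmul(rho, H), -1)
    LrL = matmul(matmul(L, rho), dagger(L))
    LdL = matmul(dagger(L), L)
    anti = lin(matmul(LdL, rho), matmul(rho, LdL), 1)
    drho = lin(lin(LrL, anti, -0.5), comm, -1j)
    return lin(rho, drho, dt)

rho = [[0.5, 0.5], [0.5, 0.5]]               # pure superposition state
for _ in range(400):
    rho = lindblad_step(rho)
trace = rho[0][0] + rho[1][1]
energy = rho[1][1]                           # Tr(H rho) for this diagonal H
print(round(trace.real, 6), round(energy.real, 6), abs(rho[0][1]))
```

    The off-diagonal element decays toward zero (non-unitary loss of coherence) while the trace stays 1 and the energy stays 0.5, which is the "non-unitary yet energy-conserving" behavior the abstract argues for.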

  10. Potential profile near singularity point in kinetic Tonks-Langmuir discharges as a function of the ion sources temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kos, L.; Tskhakaya, D. D.; Jelic, N.

    2011-05-15

    A plasma-sheath transition analysis requires a reliable mathematical expression for the plasma potential profile Φ(x) near the sheath edge x_s in the limit ε ≡ λ_D/l → 0 (where λ_D is the Debye length and l is a proper characteristic length of the discharge). Such expressions have been explicitly calculated for the fluid model and the singular (cold ion source) kinetic model, where exact analytic solutions of the plasma equation (ε = 0) are known, but not for the regular (warm ion source) kinetic model, for which no analytic solution of the plasma equation has ever been obtained. For the latter case, Riemann [J. Phys. D: Appl. Phys. 24, 493 (1991)] only predicted a general formula assuming relatively high ion-source temperatures, i.e., much higher than the plasma-sheath potential drop. Riemann's formula, however, according to him, was never confirmed in explicit solutions of particular models (e.g., that of Bissell and Johnson [Phys. Fluids 30, 779 (1987)] and Scheuer and Emmert [Phys. Fluids 31, 3645 (1988)]), since "the accuracy of the classical solutions is not sufficient to analyze the sheath vicinity" [Riemann, in Proceedings of the 62nd Annual Gaseous Electronics Conference, APS Meeting Abstracts, Vol. 54 (APS, 2009)]. Therefore, for many years there has been a need for an explicit calculation that might confirm Riemann's general formula for the potential profile at the sheath edge in the case of regular, very warm ion sources. Fortunately, we are now able to achieve very high accuracy of results [see, e.g., Kos et al., Phys. Plasmas 16, 093503 (2009)]. We perform this task by using both analytic and numerical methods with explicit Maxwellian and "water-bag" ion source velocity distributions.
We find the potential profile near the plasma-sheath edge over the whole range of ion source temperatures of general interest to plasma physics, from zero to "practical infinity." In the limits of very low and relatively high ion source temperatures, the potential is proportional to the space coordinate raised to the rational powers α = 1/2 and α = 2/3, respectively. For intermediate ion source temperatures we find α between these values to be a non-rational number that depends strongly on the ion source temperature. The range of the non-rational power law turns out to be very narrow, at the expense of the extension of the α = 2/3 region towards unexpectedly low ion source temperatures.
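
    An exponent of the form Φ(x) ∝ x^α is the kind of relationship one extracts from a computed profile with a log-log least-squares fit. A minimal sketch on a synthetic profile follows; the profile itself is fabricated, only the fitting idea is shown.

```python
import math

def fit_power_law(xs, ys):
    # least-squares slope of log(y) versus log(x) gives the exponent alpha
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

xs = [0.01 * i for i in range(1, 51)]
ys = [2.0 * x ** (2.0 / 3.0) for x in xs]     # synthetic profile with alpha = 2/3
print(round(fit_power_law(xs, ys), 3))        # → 0.667
```

    On real profiles the fit window matters: the non-rational α regime described above would appear as a slope between 1/2 and 2/3 that shifts with the ion source temperature.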

  11. Tuning critical failure with viscoelasticity: How aftershocks inhibit criticality in an analytical mean field model of fracture.

    NASA Astrophysics Data System (ADS)

    Baro Urbea, J.; Davidsen, J.

    2017-12-01

    The hypothesis of critical failure relates the presence of an ultimate stability point in the structural constitutive equation of materials to a divergence of characteristic scales in the microscopic dynamics responsible for deformation. Avalanche models involving critical failure have determined universality classes in different systems: from slip events in crystalline and amorphous materials to the jamming of granular media or the fracture of brittle materials. However, not all empirical failure processes exhibit the trademarks of critical failure. As an example, the statistical properties of ultrasonic acoustic events recorded during the failure of porous brittle materials are stationary, except for variations in the activity rate that can be interpreted in terms of aftershock and foreshock activity (J. Baró et al., PRL 2013). The rheological properties of materials introduce dissipation, usually reproduced in atomistic models as a hardening of the coarse-grained elements of the system. If the hardening is associated with a relaxation process, the same mechanism is able to generate temporal correlations. We report the analytic solution of a mean field fracture model exemplifying how criticality and temporal correlations are tuned by transient hardening. We provide a physical meaning to the conceptual model by deriving the constitutive equation from the explicit representation of the transient hardening in terms of a generalized viscoelasticity model. The rate of 'aftershocks' is controlled by the temporal evolution of the viscoelastic creep. At the quasistatic limit, the moment release is invariant to rheology. Therefore, the lack of criticality is explained by the increase of the activity rate close to failure, i.e., 'foreshocks'. Finally, the avalanche propagation can be reinterpreted as a pure mathematical problem in terms of a stochastic counting process.
The statistical properties depend only on the distance to a critical point, which is universal for any parametrization of the transient hardening and a whole category of fracture models.

  12. A position-dependent mass model for the Thomas–Fermi potential: Exact solvability and relation to δ-doped semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: xbataxel@gmail.com; García-Ravelo, Jesús; Pacheco-García, Christian

    We consider the Schrödinger equation in the Thomas–Fermi field, a model that has been used for describing electron systems in δ-doped semiconductors. It is shown that the problem becomes exactly-solvable if a particular effective (position-dependent) mass distribution is incorporated. Orthogonal sets of normalizable bound state solutions are constructed in explicit form, and the associated energies are determined. We compare our results with the corresponding findings on the constant-mass problem discussed by Ioriatti (1990) [13]. -- Highlights: ► We introduce an exactly solvable, position-dependent mass model for the Thomas–Fermi potential. ► Orthogonal sets of solutions to our model are constructed in closed form. ► Relation to delta-doped semiconductors is discussed. ► Explicit subband bottom energies are calculated and compared to results obtained in a previous study.

  13. A new stomatal paradigm for earth system models? (Invited)

    NASA Astrophysics Data System (ADS)

    Bonan, G. B.; Williams, M. D.; Fisher, R. A.; Oleson, K. W.; Lombardozzi, D.

    2013-12-01

    The land component of climate, and now earth system, models has simulated stomatal conductance since the introduction in the mid-1980s of the so-called second generation models that explicitly represented plant canopies. These second generation models used the Jarvis-style stomatal conductance model, which empirically relates stomatal conductance to photosynthetically active radiation, temperature, vapor pressure deficit, CO2 concentration, and other factors. Subsequent models of stomatal conductance were developed from a more mechanistic understanding of stomatal physiology, particularly that stomata are regulated so as to maximize net CO2 assimilation (An) and minimize water loss during transpiration (E). This concept is embodied in the Ball-Berry stomatal conductance model, which relates stomatal conductance (gs) to net assimilation (An), scaled by the ratio of leaf surface relative humidity to leaf surface CO2 concentration, or the Leuning variant which replaces relative humidity with a vapor pressure deficit term. This coupled gs-An model has been widely used in climate and earth system models since the mid-1990s. An alternative approach models stomatal conductance by directly optimizing water use efficiency, defined as the ratio An/gs or An/E. Conceptual developments over the past several years have shown that the Ball-Berry style model can be derived from optimization theory. However, an explicit optimization model has not been tested in an earth system model. We compare the Ball-Berry model with an explicit optimization model, both implemented in a new plant canopy parameterization developed for the Community Land Model, the land component of the Community Earth System Model. The optimization model is from the Soil-Plant-Atmosphere (SPA) model, which integrates plant and soil hydraulics, carbon assimilation, and gas diffusion. 
The canopy parameterization is multi-layer and resolves profiles of radiation, temperature, vapor pressure, leaf water stress, stomatal conductance, and photosynthetic capacity within the canopy. Stomatal conductance for each layer is calculated so as to maximize carbon gain, within the limitations of plant water storage and soil-to-canopy water transport. An iterative procedure determines for every model timestep the maximum stomatal conductance for a canopy layer and the associated assimilation rate. We compare the Ball-Berry stomatal model and the SPA stomatal model within the multi-layer canopy parameterization. We use eddy covariance flux tower data for six sites (three deciduous broadleaf forest and three evergreen needleleaf forest) spanning a total of 51 site-years. The multi-layer canopy has improved simulation of gross primary production (GPP), evapotranspiration, and sensible heat flux compared with the most recent version of the Community Land Model (CLM4.5). The Ball-Berry and SPA stomatal models show prominent differences both in their simulated fluxes and in their agreement with observations. This is most evident during drought.
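
    For reference, the Ball-Berry relation discussed above is a one-line formula: stomatal conductance increases with net assimilation and leaf-surface relative humidity and decreases with leaf-surface CO2 concentration. The intercept and slope values below are illustrative, not CLM's calibrated parameters.

```python
def ball_berry(an, hs, cs, g0=0.01, g1=9.0):
    """Ball-Berry stomatal conductance gs = g0 + g1 * An * hs / cs.

    an: net assimilation (umol m-2 s-1), hs: leaf-surface relative humidity (0-1),
    cs: leaf-surface CO2 (umol mol-1); g0, g1 are illustrative fitted constants.
    """
    return g0 + g1 * an * hs / cs

# conductance falls as humidity drops, one way drought enters the model
humid = ball_berry(an=12.0, hs=0.8, cs=380.0)
dry = ball_berry(an=12.0, hs=0.4, cs=380.0)
print(round(humid, 4), round(dry, 4))   # → 0.2374 0.1237
```

    The humidity factor hs is exactly the term that the Leuning variant replaces with a vapor pressure deficit term, and that an explicit water-use-efficiency optimization avoids altogether.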

  14. Improvement, Verification, and Refinement of Spatially-Explicit Exposure Models in Risk Assessment - FishRand Spatially-Explicit Bioaccumulation Model Demonstration

    DTIC Science & Technology

    2015-08-01

    Figure 4. Data-based proportion of DDD, DDE and DDT in total DDx in fish and sediment by... DDD dichlorodiphenyldichloroethane; DDE dichlorodiphenyldichloroethylene; DDT dichlorodiphenyltrichloroethane; DoD Department of Defense; ERM... DDD) at the other site. The spatially-explicit model consistently predicts tissue concentrations that closely match both the average and the

  15. Competing Orders and Anomalies

    PubMed Central

    Moon, Eun-Gook

    2016-01-01

    A conservation law is one of the most fundamental properties in nature, but a certain class of conservation “laws” could be spoiled by intrinsic quantum mechanical effects, so-called quantum anomalies. Profound properties of the anomalies have deepened our understanding of quantum many-body systems. Here, we investigate quantum anomaly effects in quantum phase transitions between competing orders and the striking consequences of their presence. We explicitly calculate the topological nature of anomalies of non-linear sigma models (NLSMs) with the Wess-Zumino-Witten (WZW) terms. The non-perturbative nature is directly related to the ’t Hooft anomaly matching condition: anomalies are conserved in renormalization group flow. By applying the matching condition, we show that massless excitations are enforced by the anomalies in the whole phase diagram, in sharp contrast to the case of the Landau-Ginzburg-Wilson theory, which only has massive excitations in symmetric phases. Furthermore, we find non-perturbative criteria to characterize quantum phase transitions between competing orders. For example, in 4D, we show that the two competing order parameter theories, CP(1) and the NLSM with WZW, describe different universality classes. Physical realizations and experimental implications of the anomalies are also discussed. PMID:27499184

  16. Knowledge-based automated technique for measuring total lung volume from CT

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; McNitt-Gray, Michael F.; Mankovich, Nicholas J.; Goldin, Jonathan G.; Aberle, Denise R.

    1996-04-01

    A robust, automated technique has been developed for estimating total lung volumes from chest computed tomography (CT) images. The technique includes a method for segmenting major chest anatomy. A knowledge-based approach automates the calculation of separate volumes of the whole thorax, lungs, and central tracheo-bronchial tree from volumetric CT data sets. A simple, explicit 3D model describes properties such as shape, topology and X-ray attenuation, of the relevant anatomy, which constrain the segmentation of these anatomic structures. Total lung volume is estimated as the sum of the right and left lungs and excludes the central airways. The method requires no operator intervention. In preliminary testing, the system was applied to image data from two healthy subjects and four patients with emphysema who underwent both helical CT and pulmonary function tests. To obtain single breath-hold scans, the healthy subjects were scanned with a collimation of 5 mm and a pitch of 1.5, while the emphysema patients were scanned with collimation of 10 mm at a pitch of 2.0. CT data were reconstructed as contiguous image sets. Automatically calculated volumes were consistent with body plethysmography results (< 10% difference).
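
    The attenuation-window voxel counting at the core of such volume estimates can be sketched directly; the knowledge-based anatomical model that segments structures and excludes the central airways is the hard part and is not reproduced here. The toy array, HU window, and voxel size below are illustrative.

```python
# toy 3-D "CT" in Hounsfield units: air ~ -1000, aerated lung ~ -800, soft tissue ~ 40
hu = [
    [[-1000, -1000, -1000], [-1000, -800, -780], [-1000, -820, 40]],
    [[-1000, -1000, -1000], [-1000, -790, -810], [-1000, 35, 42]],
]

def lung_volume_ml(volume, voxel_ml, lo=-950, hi=-500):
    # count voxels inside a lung-attenuation window and convert to millilitres
    n = sum(1 for sl in volume for row in sl for v in row if lo <= v <= hi)
    return n * voxel_ml

print(lung_volume_ml(hu, voxel_ml=0.5))   # → 2.5
```

    In the actual system the window is applied only within lung regions identified by the 3D anatomical model, which is what keeps the trachea and bronchi out of the total.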

  17. Cerebral cartography and connectomics.

    PubMed

    Sporns, Olaf

    2015-05-19

    Cerebral cartography and connectomics pursue similar goals in attempting to create maps that can inform our understanding of the structural and functional organization of the cortex. Connectome maps explicitly aim at representing the brain as a complex network, a collection of nodes and their interconnecting edges. This article reflects on some of the challenges that currently arise in the intersection of cerebral cartography and connectomics. Principal challenges concern the temporal dynamics of functional brain connectivity, the definition of areal parcellations and their hierarchical organization into large-scale networks, the extension of whole-brain connectivity to cellular-scale networks, and the mapping of structure/function relations in empirical recordings and computational models. Successfully addressing these challenges will require extensions of methods and tools from network science to the mapping and analysis of human brain connectivity data. The emerging view that the brain is more than a collection of areas, but is fundamentally operating as a complex networked system, will continue to drive the creation of ever more detailed and multi-modal network maps as tools for on-going exploration and discovery in human connectomics. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Computational neuroanatomy: ontology-based representation of neural components and connectivity

    PubMed Central

    Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron

    2009-01-01

    Background A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Conclusion Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future. PMID:19208191

  19. A statistical metadata model for clinical trials' data management.

    PubMed

    Vardaki, Maria; Papageorgiou, Haralambos; Pentaris, Fragkiskos

    2009-08-01

    We introduce a statistical, process-oriented metadata model to describe the process of medical research data collection, management, results analysis and dissemination. Our approach explicitly provides a structure for the pieces of information used in Clinical Study Data Management Systems, enabling a more active role for any associated metadata. Using the object-oriented paradigm, we describe the classes of our model that participate in the design of a clinical trial and the subsequent collection and management of the relevant data. The advantage of our approach is that we focus on presenting the structural inter-relation of these classes during dataset manipulation, by proposing certain transformations that model the simultaneous processing of both data and metadata. Our solution reduces the possibility of human errors and allows for the tracking of all changes made during the dataset lifecycle. The explicit modeling of processing steps improves data quality and assists in the problem of handling data collected in different clinical trials. The case study illustrates the applicability of the proposed framework, demonstrating conceptually the simultaneous handling of datasets collected during two randomized clinical studies. Finally, we provide the main considerations for implementing the proposed framework into a modern Metadata-enabled Information System.

  20. Evaluation of protein-protein docking model structures using all-atom molecular dynamics simulations combined with the solution theory in the energy representation

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio

    2012-12-01

    We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures, including models both most similar to and very different from the crystal structure. In both systems, the sum of the conformational and solvation free energies tended to be lower for the structures similar to the crystal. We concluded that our energy calculation method is useful for selecting low-energy complex models similar to the crystal structure from among a set of generated models.

  1. Evaluation of protein-protein docking model structures using all-atom molecular dynamics simulations combined with the solution theory in the energy representation.

    PubMed

    Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio

    2012-12-07

    We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures, including models both most similar to and very different from the crystal structure. In both systems, the sum of the conformational and solvation free energies tended to be lower for the structures similar to the crystal. We concluded that our energy calculation method is useful for selecting low-energy complex models similar to the crystal structure from among a set of generated models.

  2. Time-dependent density functional theory (TD-DFT) coupled with reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules

    2016-09-07

    Theoretical approaches to designing bright bio-imaging molecules are progressing rapidly. However, because of system size and limits on computational accuracy, the number of theoretical studies is, to our knowledge, still small. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD) and time-dependent density functional theory (TD-DFT). We applied it to the calculation of indole and 5-cyanoindole in their ground and excited states in gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.

  3. The effectiveness of the motion picture association of America's rating system in screening explicit violence and sex in top-ranked movies from 1950 to 2006.

    PubMed

    Nalkur, Priya G; Jamieson, Patrick E; Romer, Daniel

    2010-11-01

    Youth exposure to explicit film violence and sex is linked to adverse health outcomes and is a serious public health concern. The Motion Picture Association of America's (MPAA's) rating system's effectiveness in reducing youth exposure to harmful content has been questioned. This study assessed the MPAA rating system's effectiveness in screening explicit violence and sex since the system's initiation (1968) and the introduction of the PG-13 category (1984), and examined evidence of less restrictive ratings over time ("ratings creep"). Top-grossing movies from 1950 to 2006 (N = 855) were coded for explicitness of violent and sexual content. Trends in rating assignments and in the content of different rating categories since 1968 were assessed. The explicitness of violent and sexual content significantly increased following the rating system's initiation. The system did not differentiate violent content as well as sexual content, and ratings creep was only evident for violent films. Explicit violence in R-rated films increased, while films that would previously have been rated R were increasingly assigned to PG-13. This pattern was not evident for sex; only R-rated films exhibited higher levels of explicit sex compared to the pre-ratings period. While relatively effective for screening explicit sex, the rating system has allowed increasingly violent content into PG-13 films, thereby increasing youth access to more harmful content. Assignment of films in the current rating system should be more sensitive to the link between violent media exposure and youth violence. Copyright © 2010 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  4. Performability modeling with continuous accomplishment sets

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1979-01-01

    A general modeling framework that permits the definition, formulation, and evaluation of performability is described. It is shown that performability relates directly to system effectiveness, and is a proper generalization of both performance and reliability. A hierarchical modeling scheme is used to formulate the capability function used to evaluate performability. The case in which performance variables take values in a continuous accomplishment set is treated explicitly.

  5. Climate Change Impacts on Freshwater Recreational Fishing in the United States

    EPA Science Inventory

    Using a geographic information system, a spatially explicit modeling framework was developed, consisting of grid cells organized into 2,099 eight-digit hydrologic unit code (HUC-8) polygons for the coterminous United States. Projected temperature and precipitation changes associated...

  6. Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models

    NASA Technical Reports Server (NTRS)

    Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.

    1996-01-01

    An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.

  7. Program SPACECAP: software for estimating animal density using spatially explicit capture-recapture models

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas

    2012-01-01

    1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
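    SPACECAP's hierarchical Bayesian models are beyond a short sketch, but the half-normal encounter model at the core of most spatially explicit capture-recapture analyses is easy to illustrate. The parameter values and trap layout below are invented for illustration, not drawn from the package.

```python
import math
import random

def detection_prob(d, lam0=1.0, sigma=0.5):
    """Half-normal encounter model common in spatially explicit
    capture-recapture: the expected encounter rate decays with the distance d
    between a trap and an animal's activity centre; a Bernoulli detection
    probability follows via the complementary log-log link."""
    lam = lam0 * math.exp(-d * d / (2.0 * sigma * sigma))
    return 1.0 - math.exp(-lam)

def simulate_history(centre, traps, occasions=5, seed=42, **kwargs):
    """Simulate one animal's capture history (occasions x traps)."""
    rng = random.Random(seed)
    history = []
    for _ in range(occasions):
        row = []
        for tx, ty in traps:
            d = math.hypot(centre[0] - tx, centre[1] - ty)
            row.append(1 if rng.random() < detection_prob(d, **kwargs) else 0)
        history.append(row)
    return history
```

    Fitting then amounts to inferring each animal's latent activity centre together with lam0 and sigma from such histories.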

  8. e-Dairy: a dynamic and stochastic whole-farm model that predicts biophysical and economic performance of grazing dairy systems.

    PubMed

    Baudracco, J; Lopez-Villalobos, N; Holmes, C W; Comeron, E A; Macdonald, K A; Barry, T N

    2013-05-01

    A whole-farm, stochastic and dynamic simulation model was developed to predict biophysical and economic performance of grazing dairy systems. Several whole-farm models simulate grazing dairy systems, but most of them work at a herd level. This model, named e-Dairy, differs from the few models that work at an animal level because it allows stochastic behaviour of the genetic merit of individual cows for several traits, namely yields of milk, fat and protein, live weight (LW) and body condition score (BCS), within a whole-farm model. This model accounts for genetic differences between cows, is sensitive to genotype × environment interactions at an animal level and allows pasture growth and milk and supplement prices to behave stochastically. The model includes an energy-based animal module that predicts intake at grazing, mammary gland functioning and body lipid change. This whole-farm model simulates a 365-day period for individual cows within a herd, with cow parameters randomly generated from the mean parameter values defined as inputs and the variances and co-variances from experimental data sets. The main inputs of e-Dairy are farm area, use of land, type of pasture, type of crops, monthly pasture growth rate, supplements offered, nutritional quality of feeds, herd description (including herd size, age structure, calving pattern, BCS and LW at calving), probabilities of pregnancy, average genetic merit and economic values for items of income and costs. The model allows management policies to be set for drying-off cows (ending lactation), target pre- and post-grazing herbage mass, and feed supplementation. The main outputs are herbage dry matter intake, annual pasture utilisation, milk yield, changes in BCS and LW, economic farm profit and return on assets. The model showed satisfactory accuracy of prediction when validated against two data sets from farmlet system experiments. Relative prediction errors were <10% for all variables; concordance correlation coefficients were over 0.80 for annual pasture utilisation and yields of milk and milk solids (MS; fat plus protein), and were 0.69 and 0.48 for LW and BCS, respectively. A simulation of two contrasting dairy systems is presented to show the practical use of the model. The model can be used to explore the effects of feeding level and genetic merit and their interactions for grazing dairy systems, evaluating the trade-offs between profit and the associated risk.
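    The validation above reports concordance correlation coefficients. Lin's CCC, a standard agreement measure between predictions and observations, can be computed as follows (a generic sketch using population 1/n moments, not code from the e-Dairy model):

```python
def lin_ccc(x, y):
    """Lin's concordance correlation coefficient for paired observed and
    predicted values: 1 = perfect agreement, -1 = perfect reversed agreement.
    Penalizes both poor correlation and location/scale shifts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

    Unlike Pearson's r, the CCC drops below 1 when predictions are biased even if they are perfectly correlated with the observations.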

  9. Improving carbon monitoring and reporting in forests using spatially-explicit information.

    PubMed

    Boisvenue, Céline; Smiley, Byron P; White, Joanne C; Kurz, Werner A; Wulder, Michael A

    2016-12-01

    Understanding and quantifying carbon (C) exchanges between the biosphere and the atmosphere-specifically the process of C removal from the atmosphere, and how this process is changing-is the basis for developing appropriate adaptation and mitigation strategies for climate change. Monitoring forest systems and reporting on greenhouse gas (GHG) emissions and removals are now required components of international efforts aimed at mitigating rising atmospheric GHG. Spatially-explicit information about forests can improve the estimates of GHG emissions and removals. However, at present, remotely-sensed information on forest change is not commonly integrated into GHG reporting systems. New, detailed (30-m spatial resolution) forest change products derived from satellite time series informing on location, magnitude, and type of change, at an annual time step, have recently become available. Here we estimate the forest GHG balance using these new Landsat-based change data, a spatial forest inventory, and develop yield curves as inputs to the Carbon Budget Model of the Canadian Forest Sector (CBM-CFS3) to estimate GHG emissions and removals at a 30 m resolution for a 13 Mha pilot area in Saskatchewan, Canada. Our results depict the forests as a cumulative C sink (17.98 Tg C, or 0.64 Tg C year⁻¹) between 1984 and 2012, with an average C density of 206.5 (±0.6) Mg C ha⁻¹. Comparisons between our estimates and estimates from Canada's National Forest Carbon Monitoring, Accounting and Reporting System (NFCMARS) were possible only on a subset of our study area. In our simulations the area was a C sink, whereas in the official reporting simulations it was a C source. Forest area and overall C stock estimates also differ between the two simulated estimates. Both estimates have similar uncertainties, but the spatially-explicit results we present here better quantify the potential improvement brought by spatially-explicit modelling. We discuss the sources of the differences between these estimates. This study represents an important first step towards the integration of spatially-explicit information into Canada's NFCMARS.

  10. High-Speed and Scalable Whole-Brain Imaging in Rodents and Primates.

    PubMed

    Seiriki, Kaoru; Kasai, Atsushi; Hashimoto, Takeshi; Schulze, Wiebke; Niu, Misaki; Yamaguchi, Shun; Nakazawa, Takanobu; Inoue, Ken-Ichi; Uezono, Shiori; Takada, Masahiko; Naka, Yuichiro; Igarashi, Hisato; Tanuma, Masato; Waschek, James A; Ago, Yukio; Tanaka, Kenji F; Hayata-Takano, Atsuko; Nagayasu, Kazuki; Shintani, Norihito; Hashimoto, Ryota; Kunii, Yasuto; Hino, Mizuki; Matsumoto, Junya; Yabe, Hirooki; Nagai, Takeharu; Fujita, Katsumasa; Matsuda, Toshio; Takuma, Kazuhiro; Baba, Akemichi; Hashimoto, Hitoshi

    2017-06-21

    Subcellular resolution imaging of the whole brain and subsequent image analysis are prerequisites for understanding anatomical and functional brain networks. Here, we have developed a very high-speed serial-sectioning imaging system named FAST (block-face serial microscopy tomography), which acquires high-resolution images of a whole mouse brain in a speed range comparable to that of light-sheet fluorescence microscopy. FAST enables complete visualization of the brain at a resolution sufficient to resolve all cells and their subcellular structures. FAST renders unbiased quantitative group comparisons of normal and disease model brain cells for the whole brain at a high spatial resolution. Furthermore, FAST is highly scalable to non-human primate brains and human postmortem brain tissues, and can visualize neuronal projections in a whole adult marmoset brain. Thus, FAST provides new opportunities for global approaches that will allow for a better understanding of brain systems in multiple animal models and in human diseases. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Energy efficient model based algorithm for control of building HVAC systems.

    PubMed

    Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N

    2015-11-01

    Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework for representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on its state-space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is provided from the real time data during closed loop control. The control strategies are ported on a PIC32mx series microcontroller platform. The building model is implemented in MATLAB and hardware in loop (HIL) testing of the strategies is executed over a USB port. Results indicate that, compared to traditional proportional integral (PI) controllers, the explicit MPC controllers improve both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
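    The paper's explicit MPC law is computed offline from the identified RC model and is not reproduced here. As a hedged illustration of the underlying plant model, the sketch below simulates a first-order resistor-capacitor (1R1C) room under a simple PI heating loop; all parameter values are invented.

```python
def simulate_room(T0=15.0, T_set=21.0, T_amb=5.0, R=2.0, C=10.0,
                  Kp=20.0, Ki=5.0, dt=0.1, steps=2000, u_max=20.0):
    """Simulate a 1R1C room model, C*dT/dt = (T_amb - T)/R + u, under a PI
    heating loop with actuator saturation and naive anti-windup. Units are
    nominal (R in K/kW, C in kWh/K, dt in hours); values are illustrative.
    Returns the final room temperature."""
    T, integ = T0, 0.0
    for _ in range(steps):
        e = T_set - T                      # tracking error
        u = Kp * e + Ki * integ            # PI control law
        u_sat = min(max(u, 0.0), u_max)    # heater power limits
        if u == u_sat:                     # integrate only when unsaturated
            integ += e * dt
        T += dt * ((T_amb - T) / (R * C) + u_sat / C)   # explicit Euler step
    return T
```

    At steady state the integrator holds u = (T_set - T_amb)/R; an explicit MPC would instead minimize a weighted comfort-plus-energy cost over a horizon offline and store the resulting piecewise-affine law for cheap evaluation on the microcontroller.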

  12. Explicit Context Matching in Content-Based Publish/Subscribe Systems

    PubMed Central

    Vavassori, Sergio; Soriano, Javier; Lizcano, David; Jiménez, Miguel

    2013-01-01

    Although context could be exploited to improve performance, elasticity and adaptation in most distributed systems that adopt the publish/subscribe (P/S) communication model, only a few researchers have focused on the area of context-aware matching in P/S systems and have explored its implications in domains with highly dynamic context like wireless sensor networks (WSNs) and IoT-enabled applications. Most adopted P/S models are context agnostic or do not differentiate context from the other application data. In this article, we present a novel context-aware P/S model. SilboPS manages context explicitly, focusing on the minimization of network overhead in domains with recurrent context changes related, for example, to mobile ad hoc networks (MANETs). Our approach represents a solution that helps to efficiently share and use sensor data coming from ubiquitous WSNs across a plethora of applications intent on using these data to build context awareness. Specifically, we empirically demonstrate that decoupling a subscription from the changing context in which it is produced and leveraging contextual scoping in the filtering process notably reduces (un)subscription cost per node, while improving the global performance/throughput of the network of brokers without altering the cost of SIENA-like topology changes. PMID:23529118

  13. Concurrent processing simulation of the space station

    NASA Technical Reports Server (NTRS)

    Gluck, R.; Hale, A. L.; Sunkel, John W.

    1989-01-01

    The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each of which required significant advancement of the state of the art: (1) the development of an explicit mathematical model, via symbol manipulation, of a flexible multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.

  14. Ewald method for polytropic potentials in arbitrary dimensionality

    NASA Astrophysics Data System (ADS)

    Osychenko, O. N.; Astrakharchik, G. E.; Boronat, J.

    2012-02-01

    The Ewald summation technique is generalized to power-law 1/|r|^k potentials in three-, two- and one-dimensional geometries, with explicit formulae for all the components of the sums. The cases of short-range, long-range and 'marginal' interactions are treated separately. The jellium model, as a particular case of a charge-neutral system, is discussed and the explicit forms of the Ewald sums for such a system are presented. A generalized form of the Ewald sums for a non-cubic (non-square) simulation cell in three- (two-) dimensional geometry is obtained and its possible field of application is discussed. A procedure for the optimization of the involved parameters in actual simulations is developed and an example of its application is presented.
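    For the classic Coulomb case (k = 1) in three dimensions, the structure of the Ewald sums, real-space, reciprocal-space and self terms, can be sketched as follows. The splitting parameter alpha and the cutoffs are illustrative choices, not the optimized values the paper derives.

```python
import math

def ewald_energy(positions, charges, L, alpha=2.0, n_real=2, n_recip=6):
    """Ewald energy (Gaussian units) of a neutral point-charge system in a
    cubic box of side L: screened real-space sum over periodic images,
    reciprocal-space sum over k-vectors, and the self-interaction correction."""
    N = len(charges)
    # Real-space sum: pairs (and own images), screened by erfc(alpha * r)
    e_real = 0.0
    rng = range(-n_real, n_real + 1)
    for nx in rng:
        for ny in rng:
            for nz in rng:
                for i in range(N):
                    for j in range(N):
                        if nx == ny == nz == 0 and i == j:
                            continue
                        dx = positions[i][0] - positions[j][0] + nx * L
                        dy = positions[i][1] - positions[j][1] + ny * L
                        dz = positions[i][2] - positions[j][2] + nz * L
                        r = math.sqrt(dx * dx + dy * dy + dz * dz)
                        e_real += 0.5 * charges[i] * charges[j] \
                                  * math.erfc(alpha * r) / r
    # Reciprocal-space sum over k = (2 pi / L) * m, m != 0
    e_recip = 0.0
    V = L ** 3
    mr = range(-n_recip, n_recip + 1)
    for mx in mr:
        for my in mr:
            for mz in mr:
                if mx == my == mz == 0:
                    continue
                kx, ky, kz = (2 * math.pi / L * m for m in (mx, my, mz))
                k2 = kx * kx + ky * ky + kz * kz
                s_re = sum(q * math.cos(kx * x + ky * y + kz * z)
                           for q, (x, y, z) in zip(charges, positions))
                s_im = sum(q * math.sin(kx * x + ky * y + kz * z)
                           for q, (x, y, z) in zip(charges, positions))
                e_recip += (2 * math.pi / V) * math.exp(-k2 / (4 * alpha ** 2)) \
                           / k2 * (s_re * s_re + s_im * s_im)
    # Self-interaction correction
    e_self = -alpha / math.sqrt(math.pi) * sum(q * q for q in charges)
    return e_real + e_recip + e_self
```

    A standard sanity check is the rock-salt Madelung constant: a cubic cell of side 2 with unit nearest-neighbour distance should give an energy of about -1.7476 per ion pair.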

  15. Another convex combination of product states for the separable Werner state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azuma, Hiroo; Ban, Masashi; CREST, Japan Science and Technology Agency, 1-1-9 Yaesu, Chuo-ku, Tokyo 103-0028

    2006-03-15

    In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from inseparable to separable according to the value of its parameter. We derive a hidden variable model that is induced by our decomposed form for the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: the critical point of the parameter for separability of the Werner state comes from positivity of local density operators of the qubits.
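    The separability threshold referred to above can be checked numerically with the Peres-Horodecki (PPT) criterion, which is exact for two qubits. This is a generic check, not the paper's explicit product-state decomposition.

```python
import numpy as np

def werner(p):
    """Two-qubit Werner state: p * |psi-><psi-| + (1 - p) * I/4."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
    return p * np.outer(psi_minus, psi_minus) + (1.0 - p) * np.eye(4) / 4.0

def partial_transpose_B(rho):
    """Partial transpose over the second qubit of a 4x4 density matrix."""
    r = rho.reshape(2, 2, 2, 2)        # axes (i, j, i', j'); row index = 2i + j
    return r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap j <-> j'
```

    The smallest eigenvalue of the partial transpose works out to (1 - 3p)/4, nonnegative exactly for p <= 1/3, the separability threshold the explicit decomposition must reproduce.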

  16. Electronic structure and microscopic model of V(2)GeO(4)F(2)-a quantum spin system with S = 1.

    PubMed

    Rahaman, Badiur; Saha-Dasgupta, T

    2007-07-25

    We present first-principles density functional calculations and downfolding studies of the electronic and magnetic properties of the oxide-fluoride quantum spin system V(2)GeO(4)F(2). We discuss explicitly the nature of the exchange paths and provide quantitative estimates of the magnetic exchange couplings. A microscopic modelling based on analysis of the electronic structure of this system puts it in the interesting class of weakly coupled alternating-chain S = 1 systems. Based on the microscopic model, we make inferences about its spin excitation spectra, which need to be tested by rigorous experimental study.

  17. Skin-electrode circuit model for use in optimizing energy transfer in volume conduction systems.

    PubMed

    Hackworth, Steven A; Sun, Mingui; Sclabassi, Robert J

    2009-01-01

    The X-Delta model for through-skin volume conduction systems is introduced and analyzed. This new model has advantages over our previous X model in that it explicitly represents current pathways in the skin. A vector network analyzer is used to take measurements on pig skin to obtain data for use in finding the model's impedance parameters. An optimization method for obtaining this more complex model's parameters is described. Results show the model to accurately represent the impedance behavior of the skin system, with errors generally below one percent. Uses for the model include optimizing energy transfer across the skin in a volume conduction system with appropriate current exposure constraints, and exploring non-linear behavior of the electrode-skin system at moderate voltages (below ten volts) and frequencies (kilohertz to megahertz).
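    The paper's X-Delta parameter values come from fitting measured data, but the circuit algebra behind a delta of skin impedances can be sketched generically. The branch values below are invented, and the delta-wye identity serves as a consistency check of the kind useful when fitting network-analyzer data.

```python
import math

def parallel(z1, z2):
    """Impedance of two elements in parallel."""
    return z1 * z2 / (z1 + z2)

def rc_branch(R, C, f):
    """Impedance of a resistor in parallel with a capacitor at frequency f,
    a common first approximation for a skin current pathway."""
    return parallel(R, 1.0 / (2j * math.pi * f * C))

def delta_to_wye(z12, z23, z31):
    """Convert a delta of branch impedances to the equivalent wye network."""
    s = z12 + z23 + z31
    return z12 * z31 / s, z12 * z23 / s, z23 * z31 / s
```

    With terminal 3 floating, the impedance seen across terminals 1-2 is z12 in parallel with (z23 + z31); the wye equivalent z1 + z2 must agree exactly.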

  18. Thinking Like a Whole Building: Whole Foods Market New Construction Summary, U.S. Department of Energy's Commercial Building Partnerships (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2011-04-01

    Whole Foods Market participates in the U.S. Department of Energy's Commercial Building Partnerships (CBP) to identify and develop cost-effective, readily deployed, replicable energy efficiency measures (EEMs) for commercial buildings. Whole Foods Market is working with the National Renewable Energy Laboratory (NREL) on a retrofit and a new construction CBP project. Whole Foods Market's CBP new construction project is a standalone store in Raleigh, North Carolina. Whole Foods Market examined the energy systems and the interactions between those systems in the design for the new Raleigh store. Based on this collaboration and preliminary energy modeling, Whole Foods Market and NREL identified a number of cost-effective EEMs that can be readily deployed in other Whole Foods Market stores and in other U.S. supermarkets. If the actual savings in the Raleigh store - which NREL will monitor and verify - match the modeling results, each year this store will save nearly $100,000 in operating costs (Raleigh's rates are about $0.06/kWh for electricity and $0.83/therm for natural gas). The store will also use 41% less energy than a Standard 90.1-compliant store and avoid about 3.7 million pounds of carbon dioxide emissions.

  19. Thinking Like a Whole Building: A Whole Foods Market New Construction Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deru, M.; Bonnema, E.; Doebber, I.

    2011-04-01

    Whole Foods Market participates in the U.S. Department of Energy's Commercial Building Partnerships (CBP) to identify and develop cost-effective, readily deployed, replicable energy efficiency measures (EEMs) for commercial buildings. Whole Foods Market is working with the National Renewable Energy Laboratory (NREL) on a retrofit and a new construction CBP project. Whole Foods Market's CBP new construction project is a standalone store in Raleigh, North Carolina. Whole Foods Market examined the energy systems and the interactions between those systems in the design for the new Raleigh store. Based on this collaboration and preliminary energy modeling, Whole Foods Market and NREL identified a number of cost-effective EEMs that can be readily deployed in other Whole Foods Market stores and in other U.S. supermarkets. If the actual savings in the Raleigh store - which NREL will monitor and verify - match the modeling results, each year this store will save nearly $100,000 in operating costs (Raleigh's rates are about $0.06/kWh for electricity and $0.83/therm for natural gas). The store will also use 41% less energy than a Standard 90.1-compliant store and avoid about 3.7 million pounds of carbon dioxide emissions.

  20. Short-Time Glassy Dynamics in Viscous Protein Solutions with Competing Interactions

    DOE PAGES

    Godfrin, P. Douglas; Hudson, Steven; Hong, Kunlun; ...

    2015-11-24

    Although there have been numerous investigations of the glass transition for colloidal dispersions with only a short-ranged attraction, less is understood for systems interacting with a long-ranged repulsion in addition to this attraction, which is ubiquitous in aqueous protein solutions at low ionic strength. Highly purified, concentrated lysozyme solutions are used as a model system and investigated over a large range of protein concentrations at very low ionic strength. Newtonian liquid behavior is observed at all concentrations, even up to 480 mg/mL, where the zero-shear viscosity increases by more than three orders of magnitude with increasing concentration. Remarkably, despite this macroscopic liquid-like behavior, measurements of the dynamics in the short-time limit show features typical of glassy colloidal systems. Investigation of the inter-protein structure indicates that the reduced short-time mobility of the protein is caused by localized regions of high density within a heterogeneous density distribution. This structural heterogeneity occurs on an intermediate-range length scale, driven by the competing potential features, and is distinct from commonly studied colloidal gel systems, in which a heterogeneous density distribution tends to extend to the whole system. The presence of long-ranged repulsion also allows for more mobility over large length and long time scales, resulting in the macroscopic relaxation of the structure. The experimental results provide evidence for the need to explicitly include intermediate range order in theories for the macroscopic properties of protein solutions interacting via competing potential features.

  1. A Case Study in an Integrated Development and Problem Solving Environment

    ERIC Educational Resources Information Center

    Deek, Fadi P.; McHugh, James A.

    2003-01-01

    This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…

  2. Mass balance modelling of contaminants in river basins: a flexible matrix approach.

    PubMed

    Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay

    2005-12-01

    A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
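    The QMX-R equations are not given in full in the abstract. Assuming first-order overall loss over each reach's residence time and purely advective transfer between reaches, a minimal sketch of the steady-state flux calculation is:

```python
import math

def steady_state_flux(order, upstream, emissions, k, tau):
    """Steady-state chemical flux (e.g. kg/day) leaving each river reach:
    local emission plus inflow from upstream reaches, attenuated by a
    first-order overall loss exp(-k * tau) over the reach residence time tau."""
    out = {}
    for reach in order:                      # upstream-to-downstream order
        inflow = sum(out[u] for u in upstream.get(reach, ()))
        out[reach] = (emissions.get(reach, 0.0) + inflow) * math.exp(-k * tau[reach])
    return out
```

    The upstream mapping plays the role of the n x n connectivity matrix: resegmenting the basin only changes the mapping entries, not the code, which mirrors the flexibility the matrix formulation is designed to provide.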

  3. Work distributions for random sudden quantum quenches

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter

    2017-05-01

The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite-dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
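The two-point measurement statistics behind such a work pdf are easy to sample numerically. The following sketch is an illustration only, under assumed conventions (2x2 matrices, a particular GUE normalization, inverse temperature beta = 1); it is not the paper's calculation:

```python
import numpy as np

# Illustrative sketch: sample final Hamiltonians from a 2x2 Gaussian unitary
# ensemble (GUE) and histogram the work w = E'_m - E_n for a sudden quench
# from a fixed initial Hamiltonian, with the initial state thermal at inverse
# temperature beta (two-point measurement scheme).

rng = np.random.default_rng(0)

def gue(n):
    """Draw one n x n GUE matrix: Hermitian with Gaussian entries."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def sample_work(h0, beta, n_samples=20000):
    e0, v0 = np.linalg.eigh(h0)
    p0 = np.exp(-beta * e0)
    p0 /= p0.sum()                               # initial thermal occupation
    works = []
    for _ in range(n_samples):
        e1, v1 = np.linalg.eigh(gue(2))
        t = np.abs(v1.conj().T @ v0) ** 2        # transition probs |<m'|n>|^2
        n_idx = rng.choice(2, p=p0)              # measured initial level
        m_idx = rng.choice(2, p=t[:, n_idx])     # measured final level
        works.append(e1[m_idx] - e0[n_idx])
    return np.array(works)

w = sample_work(np.diag([-0.5, 0.5]), beta=1.0)
```

For the fixed initial Hamiltonian diag(-0.5, 0.5) at beta = 1, the mean work converges to about +0.23: the thermal initial state has negative mean energy while the GUE final spectrum is symmetric about zero.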

  4. Thermal transistor behavior of a harmonic chain

    NASA Astrophysics Data System (ADS)

    Kim, Sangrak

    2017-09-01

Thermal transistor behavior of a harmonic chain with three heat reservoirs is explicitly analyzed. The temperature profile and heat currents of the rather general system are formulated, and the heat currents for the simplest system are then calculated exactly. The matrix connecting the three reservoir temperatures to the temperatures of the particles is a stochastic matrix. The ratios R1 and R2 between heat currents, which characterize the thermal signals, can be expressed in terms of two external variables and two material parameters. It is shown that the ratios R1 and R2 can take a wide range of real values. The system shows thermal transistor behavior, such as the amplification of heat current, when the two variables and two parameters are appropriately controlled. We explicitly demonstrate the characteristics and mechanisms of the thermal transistor with the simplest model.
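The stochastic-matrix structure mentioned above can be illustrated with a toy calculation. Everything in this sketch (the matrix entries, the couplings, and the linear coupling form) is an assumption invented for illustration, not the paper's exact solution:

```python
import numpy as np

# Toy sketch: particle temperatures are a stochastic mix of the three
# reservoir temperatures, and each heat current is taken proportional to
# the reservoir-particle temperature difference. Parameter values are
# illustrative assumptions only.

S = np.array([[0.6, 0.3, 0.1],      # rows sum to 1: stochastic matrix
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
g = np.array([1.0, 1.0, 1.0])       # reservoir-particle couplings

def heat_currents(T_res):
    T_part = S @ T_res              # particle temperatures
    return g * (T_res - T_part)     # current into each particle

J = heat_currents(np.array([300.0, 310.0, 290.0]))
R1, R2 = J[0] / J[2], J[1] / J[2]   # signal ratios, as in the abstract
# Note: the three currents sum to zero only for special (e.g. doubly
# stochastic) S; this toy version does not enforce steady-state balance.
```

With these illustrative numbers, R1 = 0.4 and R2 = -1.6; in the paper the analogous ratios are tuned through two external variables and two material parameters.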

  5. A hybrid approach to modeling and control of vehicle height for electronically controlled air suspension

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Cai, Yingfeng; Wang, Shaohua; Liu, Yanling; Chen, Long

    2016-01-01

The control problems associated with vehicle height adjustment of electronically controlled air suspension (ECAS) still pose theoretical challenges for researchers, as the publications on this subject in recent years show. This paper deals with modeling and control of a vehicle height adjustment system for ECAS, which is an example of a hybrid dynamical system due to the coexistence and coupling of continuous variables and discrete events. A mixed logical dynamical (MLD) modeling approach is chosen to capture enough details of the vehicle height adjustment process. The hybrid dynamic model is constructed on the basis of some assumptions and a piecewise linear approximation of component nonlinearities. The on-off statuses of the solenoid valves and the piecewise approximation process are then described by propositional logic, and the hybrid system is transformed automatically by HYSDEL into a set of linear mixed-integer equalities and inequalities, denoted the MLD model. Using this model, a hybrid model predictive controller (HMPC) is tuned based on online mixed-integer quadratic optimization (MIQP). Two different scenarios are considered in the simulation, whose results verify the height adjustment effectiveness of the proposed approach. To control the vehicle height adjustment system in real time, explicit solutions of the controller are computed with an offline multi-parametric programming technique (MPT), converting the controller into an equivalent explicit piecewise affine form. Finally, bench experiments for vehicle height lifting, holding and lowering procedures are conducted, which demonstrate that the HMPC can adjust the vehicle height by directly controlling the on-off statuses of the solenoid valves. This research proposes a new modeling and control method for vehicle height adjustment of ECAS, which leads to a closed-loop system with favorable dynamical properties.
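The explicit piecewise affine form mentioned above replaces online MIQP with a lookup over polyhedral regions. The sketch below is a hypothetical two-region illustration of that structure (the gains and regions are invented; the real ECAS controller also carries discrete valve decisions):

```python
import numpy as np

# Minimal illustration of an explicit MPC solution stored as a piecewise
# affine law. Offline, multi-parametric programming partitions the state
# space into polyhedral regions {x : A x <= b}; online, control is a region
# lookup plus one affine evaluation u = F x + g. All numbers are invented.

regions = [
    # (A, b, F, g) for the half-space x0 <= 0
    (np.array([[1.0, 0.0]]), np.array([0.0]),
     np.array([[-0.5, -0.1]]), np.array([0.2])),
    # (A, b, F, g) for the half-space x0 >= 0
    (np.array([[-1.0, 0.0]]), np.array([0.0]),
     np.array([[-0.8, -0.2]]), np.array([0.0])),
]

def explicit_mpc(x):
    for A, b, F, g in regions:
        if np.all(A @ x <= b + 1e-9):    # is x inside this region?
            return F @ x + g
    raise ValueError("state outside stored partition")

u = explicit_mpc(np.array([-0.4, 0.1]))  # falls in the first region
```

Online, the controller reduces to finding the region containing the current state and evaluating one affine law, which is what makes real-time execution feasible.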

  6. Whole Atmosphere Modeling and Data Analysis: Success Stories, Challenges and Perspectives

    NASA Astrophysics Data System (ADS)

    Yudin, V. A.; Akmaev, R. A.; Goncharenko, L. P.; Fuller-Rowell, T. J.; Matsuo, T.; Ortland, D. A.; Maute, A. I.; Solomon, S. C.; Smith, A. K.; Liu, H.; Wu, Q.

    2015-12-01

At the end of the 20th century, Raymond Roble suggested an ambitious target: developing an atmospheric general circulation model (GCM) that spans from the surface to the thermosphere for modeling the coupled atmosphere-ionosphere with drivers from terrestrial meteorology and solar-geomagnetic inputs. He pointed out several areas of research and applications that would benefit greatly from the development and improvement of whole atmosphere modeling. At present, several research groups using middle and whole atmosphere models have attempted to perform coupled ionosphere-thermosphere predictions to interpret the "unexpected" anomalies in the electron content, ions and plasma drifts observed during recent stratospheric warming events. Recent whole atmosphere inter-comparison case studies also displayed striking differences in simulations of prevailing flows, planetary waves and dominant tidal modes, even when the lower atmosphere domains of those models were constrained by similar meteorological analyses. We will present the possible reasons for such differences between data-constrained whole atmosphere simulations when analyses with 6-hour time resolution are used, and discuss the potential model-data and model-model differences above the stratopause. The possible shortcomings of whole atmosphere simulations associated with model physics, dynamical cores and resolutions will be discussed. With increased confidence in space-borne temperature, wind and ozone observations and extensive collections of ground-based upper atmosphere observational facilities, whole atmosphere modelers will be able to quantify the annual and year-to-year variability of the zonal mean flows, planetary waves and tides. We will demonstrate the value of tidal and planetary wave variability deduced from space-borne data and ground-based systems for the evaluation and tuning of whole atmosphere simulations, including corrections of systematic model errors.
Several success stories on middle and whole atmosphere simulations coupled with ionosphere models will be highlighted, and future perspectives for linking space and terrestrial weather predictions constrained by current and scheduled ionosphere-thermosphere-mesosphere satellite missions will be presented.

  7. Simulating Surface Oil Transport During the Deepwater Horizon Oil Spill: Experiments with the BioCast System

    DTIC Science & Technology

    2014-01-25

Virtual Special Issue: Gulf of Mexico Modelling – Lessons from the spill. ...ocean surface materials. The Deepwater Horizon oil spill in the Gulf of Mexico provided a test case for the Bio-Optical Forecasting (BioCast) system...addition of explicit sources and sinks of surface oil concentrations provides a framework for increasingly complex oil spill modeling efforts that extend...

  8. Template Directed Replication Supports the Maintenance of the Metabolically Coupled Replicator System

    NASA Astrophysics Data System (ADS)

    Könnyű, Balázs; Czárán, Tamás

    2015-06-01

The RNA World scenario of prebiotic chemical evolution is among the most plausible conceptual frameworks available today for modelling the origin of life. RNA offers genetic and catalytic (metabolic) functionality embodied in a single chemical entity, and a metabolically cooperating community of RNA molecules would constitute a viable infrabiological subsystem with the potential to evolve into proto-cellular life. Our Metabolically Coupled Replicator System (MCRS) model is a spatially explicit computer simulation implementation of the RNA World scenario, in which replicable ribozymes cooperate in supplying each other with monomers for their own replication. MCRS has been repeatedly demonstrated to be viable and evolvable, with different versions of the model improved in depth (chemical detail of metabolism) or in extension (additional functions of RNA molecules). One of the dynamically relevant extensions of the MCRS approach to prebiotic RNA evolution is the explicit inclusion of template replication among its assumptions, which we have studied in the present version. We found that this modification does not change the qualitative behaviour of the system; only the range of the parameter space optimal for the coexistence of metabolically cooperating replicators has shifted, in terms of metabolite mobility. The system also remains resistant and tolerant to parasitic replicators.

  9. Are the low-lying isovector 1+ states scissors vibrations?

    NASA Astrophysics Data System (ADS)

    Faessler, A.

At the Technische Hochschule in Darmstadt, the group of Richter and coworkers found low-lying isovector 1+ states in deformed rare earth nuclei in 1983/84. Such states had been predicted in the generalized Bohr-Mottelson model and in the interacting boson model no. 2 (IBA2). In the generalized Bohr-Mottelson model one allows for proton and neutron quadrupole deformations separately. If one includes only static proton and neutron deformations, the generalized Bohr-Mottelson model reduces to the two-rotor model. It describes the excitation energy of these states in good agreement with the data but overestimates the magnetic dipole transition probabilities by a factor of 5. In the interacting boson model (IBA2), where only the outermost nucleons participate in the excitation, the magnetic dipole transition probability is overestimated by only a factor of 2. The excessive collectivity in both models results from the fact that they concentrate the whole strength of the scissors vibrations into one state. A microscopic description is needed to describe the spreading of the scissors strength over several states. For a microscopic determination of these scissors states one uses the Quasi-particle Random Phase Approximation (QRPA). But this approach has a serious difficulty. Since the nucleus is rotated into the intrinsic system for the calculation, the state corresponding to the rotation of the whole nucleus is a spurious state. The usual procedure to remove this spuriosity is to use the Thouless theorem, which states that an operator commuting with the total Hamiltonian (here the total angular momentum, corresponding to a rotation of the whole system) produces the spurious state when applied to the ground state. The theorem further states that this spurious state lies at zero excitation energy (it is degenerate with the ground state) and is orthogonal to all physical states.
Thus the usual approach is to vary the quadrupole-quadrupole force strength so that a state lies at zero excitation energy and to identify that state with the spurious state. This procedure assumes that the total angular momentum commutes with the total Hamiltonian. But this is not the case, since the total Hamiltonian contains a deformed Saxon-Woods potential. Thus one has to take care explicitly that the spurious state is removed. We do this in our approach by introducing a Lagrange multiplier for each excited state and requiring that these states be orthogonal to the spurious state, which is explicitly constructed by applying the total angular momentum operator to the ground state. To reduce the number of free parameters in the Hamiltonian, we take the Saxon-Woods potential for the deformed nuclei from the literature (with minor adjustments) and determine the proton-proton, neutron-neutron and proton-neutron quadrupole force constants by requiring that the Hamiltonian commute with the total angular momentum in the (QRPA) ground state. This yields equations fixing all three coupling constants of the quadrupole-quadrupole force, even allowing for isospin symmetry violation. The spin-spin force is taken from the Reid soft-core potential. A possible spin-quadrupole force has been taken from the work of Soloviev, but it turns out not to be important. The calculation shows that the strength of the scissors vibrations is spread over many states. The main 1+ state at around 3 MeV has an overlap of the order of 14% with the scissors state. Fifty percent of the scissors strength is spread over the physical states up to an excitation energy of 6 MeV; the rest is distributed over higher-lying states. The expectation value of the many-body Hamiltonian in the scissors vibrational state indicates an excitation energy of roughly 7 MeV above the ground state. The results also support the experimental finding that these states are mainly orbital excitations and not very collective.
Normally only one proton and one neutron particle-hole pair participate with large amplitude in forming these states, but those protons and neutrons which are excited do perform scissors-type vibrations.

  10. A gauged finite-element potential formulation for accurate inductive and galvanic modelling of 3-D electromagnetic problems

    NASA Astrophysics Data System (ADS)

    Ansari, S. M.; Farquharson, C. G.; MacLachlan, S. P.

    2017-07-01

In this paper, a new finite-element solution to the potential formulation of the geophysical electromagnetic (EM) problem, one that explicitly implements the Coulomb gauge and accurately computes the potentials and hence the inductive and galvanic components, is proposed. The modelling scheme is based on unstructured tetrahedral meshes for domain subdivision, which enables both realistic Earth models of complex geometries to be considered and efficient, spatially variable refinement of the mesh. For the finite-element discretization, edge and nodal elements are used to approximate the vector and scalar potentials, respectively. The issue of non-unique, incorrect potentials from the numerical solution of the usual incompletely gauged potential system is demonstrated for a benchmark model from the literature that uses an electric-type EM source, by investigating the interface continuity conditions for both the normal and tangential components of the potential vectors and by showing the inconsistent results obtained from iterative and direct linear equation solvers. By explicitly introducing the Coulomb gauge condition as an extra equation, and by augmenting the Helmholtz equation with the gradient of a Lagrange multiplier, an explicitly gauged system for the potential formulation is formed. The solution to the discretized form of this system is validated for the above-mentioned example and for another classic example that uses a magnetic EM source. In order to stabilize the iterative solution of the gauged system, a block-diagonal preconditioning scheme based upon the Schur complement of the potential system is used. For all examples, both the iterative and direct solvers produce the same responses for the potentials, demonstrating the uniqueness of the numerical solution for the potentials and fixing the problems with the interface conditions between cells observed for the incompletely gauged system.
These solutions of the gauged system also produce the physically anticipated behaviours for the inductive and galvanic components of the electric field. For a realistic geophysical scenario, the gauged scheme is also used to synthesize the magnetic field response of a model of the Ovoid ore deposit at Voisey's Bay, Labrador, Canada. The results are in good agreement with the helicopter-borne EM data from the real survey, and the inductive and galvanic parts of the current density show expected behaviours.

  11. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially one of the main sources of error for core analyses of the Swiss operating LWRs, all of which belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  12. Medical diagnosis imaging systems: image and signal processing applications aided by fuzzy logic

    NASA Astrophysics Data System (ADS)

    Hata, Yutaka

    2010-04-01

First, we describe an automated procedure for segmenting an MR image of a human brain based on fuzzy logic for diagnosing Alzheimer's disease. The intensity thresholds for segmenting the whole brain of a subject are automatically determined by finding the peaks of the intensity histogram. After these thresholds are evaluated in region growing, the whole brain can be identified. Next, we describe a procedure for decomposing the obtained whole brain into the left and right cerebral hemispheres, the cerebellum and the brain stem. Our method thus identifies the whole brain, the left cerebral hemisphere, the right cerebral hemisphere, the cerebellum and the brain stem. Second, we describe a transskull sonography system that can visualize the shape of the skull and brain surface from any point, to examine skull fractures and some brain diseases. We employ fuzzy signal processing to determine the skull and brain surface. A phantom model, an animal model with soft tissue, an animal model with brain tissue, and human subjects' foreheads are applied in our system. The shapes of the skin surface, skull surface, skull bottom, and brain tissue surface are all successfully determined.
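The histogram-peak thresholding step can be sketched as follows. This is an illustration only, with invented parameters and a synthetic bimodal intensity distribution; the paper's procedure additionally uses fuzzy membership functions and region growing:

```python
import numpy as np

# Sketch of histogram-peak thresholding: smooth the intensity histogram,
# locate the two dominant, well-separated peaks, and place the threshold at
# the valley between them. Window sizes and separations are illustrative.

def valley_threshold(values, bins=256, min_sep=30):
    hist, edges = np.histogram(values, bins=bins)
    h = np.convolve(hist, np.ones(21) / 21, mode="same")    # smooth histogram
    peaks = [i for i in range(1, bins - 1) if h[i - 1] < h[i] >= h[i + 1]]
    peaks.sort(key=lambda i: h[i], reverse=True)
    p_a = peaks[0]                                          # tallest peak
    p_b = next(i for i in peaks if abs(i - p_a) > min_sep)  # tallest distant peak
    p1, p2 = sorted((p_a, p_b))
    valley = p1 + int(np.argmin(h[p1:p2 + 1]))              # lowest bin between
    return edges[valley]

# Synthetic bimodal "image": dark background plus brighter tissue
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(150, 10, 5000)])
t = valley_threshold(img)   # lands between the two modes, near 100
```

The resulting threshold separates the two intensity populations; in the paper, such thresholds seed the region growing that extracts the whole brain.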

  13. Symbolic Analysis of Concurrent Programs with Polymorphism

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam

    2010-01-01

    The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.

  14. Computational study of the free energy landscape of the miniprotein CLN025 in explicit and implicit solvent.

    PubMed

    Rodriguez, Alex; Mokoema, Pol; Corcho, Francesc; Bisetty, Khrisna; Perez, Juan J

    2011-02-17

The prediction capabilities of atomistic simulations of peptides are hampered by several difficulties, including the reliability of force fields, the treatment of the solvent, and the adequate sampling of the conformational space. In this work, we have studied the conformational profile of the 10-residue miniprotein CLN025, known to exhibit a β-hairpin in its native state, in order to understand the limitations of implicit methods to describe solvent effects and how these may be compensated by using different force fields. For this purpose, we carried out a thorough sampling of the conformational space of CLN025 in explicit solvent using the replica exchange molecular dynamics method as a sampling technique and compared the results with simulations of the system modeled using the analytical linearized Poisson-Boltzmann (ALPB) method with three different AMBER force fields: parm94, parm96, and parm99SB. The results show the peptide to exhibit a funnel-like free energy landscape with two minima in explicit solvent. In contrast, the higher minimum nearly disappears from the energy surface when the system is studied with an implicit representation of the solvent. Moreover, the different force fields used in combination with the ALPB method do not describe the system in the same manner. The results of this work suggest that the balance between intra- and intermolecular interactions is the cause of the differences between implicit and explicit solvent simulations in this system, stressing the role of the environment in properly defining the conformational profile of a peptide in solution.
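The replica exchange method used here for sampling relies on a Metropolis swap criterion between neighboring temperature replicas. Below is a minimal sketch of that acceptance rule (the numbers are illustrative; the study ran full REMD of CLN025 in explicit water):

```python
import math

# Two replicas at inverse temperatures beta_i and beta_j with instantaneous
# potential energies E_i and E_j exchange configurations with the Metropolis
# probability below, which leaves the canonical distribution at every
# temperature invariant.

def swap_probability(beta_i, beta_j, E_i, E_j):
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

# The hot replica (smaller beta) holds the lower-energy configuration here,
# so the swap is accepted with certainty:
p_accept = swap_probability(beta_i=1.0, beta_j=0.5, E_i=-10.0, E_j=-12.0)
```

Swaps like this let low-temperature replicas inherit configurations that escaped local minima at high temperature, which is what makes the β-hairpin landscape tractable to sample.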

  15. Whole systems shared governance: a model for the integrated health system.

    PubMed

    Evan, K; Aubry, K; Hawkins, M; Curley, T A; Porter-O'Grady, T

    1995-05-01

    The healthcare system is under renovation and renewal. In the process, roles and structures are shifting to support a subscriber-based continuum of care. Alliances and partnerships are emerging as the models of integration for the future. But how do we structure to support these emerging integrated partnerships? As the nurse executive expands the role and assumes increasing responsibility for creating new frameworks for care, a structure that sustains the point-of-care innovations and interdisciplinary relationships must be built. Whole systems models of organization, such as shared governance, are expanding as demand grows for a sustainable structure for horizontal and partnered systems of healthcare delivery. The executive will have to apply these newer frameworks to the delivery of care to provide adequate support for the clinically integrated environment.

  16. Microphysics in Multi-scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  17. Facilitating preemptive hardware system design using partial reconfiguration techniques.

    PubMed

    Dondo Gazzano, Julio; Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

In FPGA-based control system design, partial reconfiguration is especially well suited to implementing preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Moreover, an asynchronous event can demand immediate attention and then force the launch of a reconfiguration process for high-priority task implementation. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed. If the event cannot be programmed in advance, as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the tasks necessary to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration.

  18. Facilitating Preemptive Hardware System Design Using Partial Reconfiguration Techniques

    PubMed Central

    Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

In FPGA-based control system design, partial reconfiguration is especially well suited to implementing preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Moreover, an asynchronous event can demand immediate attention and then force the launch of a reconfiguration process for high-priority task implementation. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed. If the event cannot be programmed in advance, as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the tasks necessary to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration. PMID:24672292

  19. Achieving enlightenment: what do we know about the implicit learning system and its interaction with explicit knowledge?

    PubMed

    Vidoni, Eric D; Boyd, Lara A

    2007-09-01

Two major memory and learning systems operate in the brain: one for facts and ideas (i.e., the declarative or explicit system) and one for habits and behaviors (i.e., the procedural or implicit system). Broadly speaking, these two memory systems can operate either in concert or entirely independently of one another during the performance and learning of skilled motor behaviors. This Special Issue article has two parts. In the first, we present a review of implicit motor skill learning that is largely centered on the interactions between declarative and procedural learning and memory. Because distinct neuroanatomical substrates support unique aspects of learning and memory, and thus focal injury can cause impairments that depend on lesion location, we also broadly consider which brain regions mediate implicit and explicit learning and memory. In the second part of this article, the interactive nature of these two memory systems is illustrated by the presentation of new data revealing that both learning implicitly and acquiring explicit knowledge through physical practice lead to motor sequence learning. In our new data, we discovered that for healthy individuals the use of the implicit versus explicit memory system differently affected the variability of performance during acquisition practice; variability was higher early in practice for the implicit group and later in practice for the acquired-explicit group. Despite the difference in performance variability, by retention both groups demonstrated comparable change in tracking accuracy and thus motor sequence learning. Clinicians should be aware of the potential effects of implicit and explicit interactions when designing rehabilitation interventions, particularly when delivering explicit instructions before task practice, working with individuals with focal brain damage, and/or adjusting therapeutic parameters based on acquisition performance variability.

  20. One-dimensional numerical modeling of Blue Jet and its impact on stratospheric chemistry

    NASA Astrophysics Data System (ADS)

    Duruisseau, F.; Thiéblemont, R.; Huret, N.

    2011-12-01

In the stratosphere, the ozone layer is very sensitive to the NOx abundance. The ionisation of N2 and O2 molecules by TLEs (Transient Luminous Events) is a source of NOx which is currently not well quantified and could act as a loss of ozone. In this study, a one-dimensional explicit parameterization of Blue Jet propagation based on that proposed by Raizer et al. (2006, 2007) has been developed. This parameterization considers the Blue Jet as a streamer initiated by a bidirectional leader discharge, emerging from the anvil and sustained by moderate cloud charge. The streamer growth varies with the electrical field induced by the initial cloud charge and with the initial altitude. This electrical parameterization and the chemical mechanisms associated with the discharge have been implemented in a detailed chemical model of stratospheric ozone including the evolution of nitrogen, chlorine and bromine species. We will present several tests performed to validate the electrical code and to evaluate the propagation velocity and the maximum altitude attained by the Blue Jet as a function of electrical parameters. The results, giving the spatiotemporal evolution of the electron density, are then used to initiate the specific chemistry associated with the Blue Jet. Preliminary results on the impact of such a discharge on the ozone content and the whole stratospheric system will be presented.

  1. Spatial Relation Predicates in Topographic Feature Semantics

    USGS Publications Warehouse

    Varanka, Dalia E.; Caro, Holly K.

    2013-01-01

    Topographic data are designed and widely used for base maps of diverse applications, yet the power of these information sources largely relies on the interpretive skills of map readers and relational database expert users once the data are in map or geographic information system (GIS) form. Advances in geospatial semantic technology offer data model alternatives for explicating concepts and articulating complex data queries and statements. To understand and enrich the vocabulary of topographic feature properties for semantic technology, English language spatial relation predicates were analyzed in three standard topographic feature glossaries. The analytical approach drew from disciplinary concepts in geography, linguistics, and information science. Five major classes of spatial relation predicates were identified from the analysis; representations for most of these are not widely available. The classes are: part-whole (which are commonly modeled throughout semantic and linked-data networks), geometric, processes, human intention, and spatial prepositions. These are commonly found in the ‘real world’ and support the environmental science basis for digital topographical mapping. The spatial relation concepts are based on sets of relation terms presented in this chapter, though these lists are not prescriptive or exhaustive. The results of this study make explicit the concepts forming a broad set of spatial relation expressions, which in turn form the basis for expanding the range of possible queries for topographical data analysis and mapping.

  2. Maize Cropping Systems Mapping Using RapidEye Observations in Agro-Ecological Landscapes in Kenya.

    PubMed

    Richard, Kyalo; Abdel-Rahman, Elfatih M; Subramanian, Sevgan; Nyasani, Johnson O; Thiel, Michael; Jozani, Hosein; Borgemeister, Christian; Landmann, Tobias

    2017-11-03

    Cropping systems information on explicit scales is an important but rarely available variable in many crop modeling routines and is of utmost importance for understanding pest and disease propagation mechanisms in agro-ecological landscapes. In this study, high spatial and temporal resolution RapidEye bi-temporal data were utilized within a novel 2-step hierarchical random forest (RF) classification approach to map areas of mono- and mixed maize cropping systems. A small-scale maize farming site in Machakos County, Kenya was used as a study site. Within the study site, field data were collected during the satellite acquisition period on general land use/land cover (LULC) and the two cropping systems. In the first classification step, non-cropland areas were masked out using the LULC mapping result. Subsequently, an optimized RF model was applied to the cropland layer to map the two cropping systems (second classification step). An overall accuracy of 93% was attained for the LULC classification, while the class accuracies (PA: producer's accuracy and UA: user's accuracy) for the two cropping systems were consistently above 85%. We concluded that explicit mapping of different cropping systems is feasible in complex and highly fragmented agro-ecological landscapes if high resolution and multi-temporal satellite data such as 5 m RapidEye data are employed. Further research is needed on the feasibility of using freely available 10-20 m Sentinel-2 data for wide-area assessment of cropping systems as an important variable in numerous crop productivity models.
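    The 2-step hierarchical idea described above can be sketched as follows. This is an illustrative toy with synthetic data (the study used multi-temporal RapidEye spectral bands); the labels, feature rules, and parameters here are all hypothetical stand-ins.

```python
# Sketch of a 2-step hierarchical random-forest classification:
# step 1 maps LULC and masks out non-cropland; step 2 classifies the
# cropping system on the remaining cropland pixels only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "pixels": 5 spectral features per pixel; LULC truth
# (0 = non-cropland, 1 = cropland) and cropping-system truth
# (0 = mono-cropped, 1 = mixed maize) are arbitrary stand-in rules.
X = rng.normal(size=(600, 5))
lulc = (X[:, 0] > 0).astype(int)
system = (X[:, 1] > 0).astype(int)

# Step 1: LULC classification, used to derive the cropland mask.
rf_lulc = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, lulc)
crop_mask = rf_lulc.predict(X) == 1

# Step 2: a second RF applied only to the cropland layer.
rf_sys = RandomForestClassifier(n_estimators=100, random_state=0)
rf_sys.fit(X[crop_mask], system[crop_mask])
pred_system = rf_sys.predict(X[crop_mask])
```

    The hierarchical split keeps the second model from wasting capacity on spectral variation among non-cropland classes, which is one plausible reason a 2-step scheme outperforms a single flat classifier in fragmented landscapes.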

  3. The Quantum Arnold Transformation for the damped harmonic oscillator: from the Caldirola-Kanai model toward the Bateman model

    NASA Astrophysics Data System (ADS)

    López-Ruiz, F. F.; Guerrero, J.; Aldaya, V.; Cossío, F.

    2012-08-01

    Using a quantum version of the Arnold transformation of classical mechanics, all quantum dynamical systems whose classical equations of motion are non-homogeneous linear second-order ordinary differential equations (LSODE), including systems with friction linear in velocity such as the damped harmonic oscillator, can be related to the quantum free-particle dynamical system. This implies that symmetries and simple computations in the free particle can be exported to the LSODE-system. The quantum Arnold transformation is given explicitly for the damped harmonic oscillator, and an algebraic connection between the Caldirola-Kanai model for the damped harmonic oscillator and the Bateman system will be sketched out.
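    For reference, the Caldirola-Kanai model mentioned above is conventionally written with an explicitly time-dependent Hamiltonian (standard textbook form, not taken from this abstract), whose classical limit reproduces the damped-oscillator LSODE:

```latex
% Caldirola-Kanai Hamiltonian for the damped harmonic oscillator
% (\gamma is the damping constant, \omega the oscillator frequency):
\hat{H}_{\mathrm{CK}}(t) \;=\; e^{-\gamma t}\,\frac{\hat{p}^{2}}{2m}
  \;+\; e^{\gamma t}\,\frac{1}{2}\,m\omega^{2}\hat{x}^{2},
% whose classical equation of motion is
\ddot{x} + \gamma\dot{x} + \omega^{2}x = 0 .
```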

  4. A new approach for the validation of skeletal muscle modelling using MRI data

    NASA Astrophysics Data System (ADS)

    Böl, Markus; Sturmat, Maike; Weichert, Christine; Kober, Cornelia

    2011-05-01

    Active and passive experiments on skeletal muscles are generally carried out either on isolated muscles or on whole muscle packages, such as the arm or the leg. Both methods have advantages and disadvantages. Experiments on isolated muscles provide no information about the surrounding tissues, which leads to an incomplete specification of the muscle: in particular, the shape and fibre directions of an embedded muscle are completely different from those of the same muscle in isolation. A clear advantage, in contrast, is the possibility to study the mechanical characteristics in a unique, isolated way. For experiments on muscle packages the aforementioned pros and cons reverse: the whole surrounding tissue contributes to the mechanical characteristics of the muscle, which are then much more difficult to identify, but an embedded muscle reflects a much more realistic situation than an isolated one. In the proposed work we therefore suggest, to our knowledge for the first time, a technique that allows the characteristics of a single skeletal muscle inside a muscle package to be studied without any computation of the tissue around the muscle of interest. In doing so, we use magnetic resonance imaging data of an upper arm during contraction. By applying a three-dimensional continuum constitutive muscle model we are able to study the biceps brachii inside the upper arm and validate the modelling approach by optical experiments.

  5. Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems

    PubMed Central

    Chen, Sanfeng; Li, Shuai; Liu, Bo; Lou, Yuesheng; Liang, Yongsheng

    2012-01-01

    Variable structure strategy is widely used for the control of sensor-actuator systems modeled by Euler-Lagrange equations. However, accurate knowledge of the model structure and model parameters is often required for the control design. In this paper, we consider model-free variable structure control of a class of sensor-actuator systems, where only the online input and output of the system are available while the mathematical model of the system is unknown. The problem is formulated from an optimal control perspective and the implicit form of the control law is analytically obtained by using the principle of optimality. The control law and the optimal cost function are explicitly solved iteratively. Simulations demonstrate the effectiveness and the efficiency of the proposed method. PMID:22778633

  6. An explicit plate kinematic model for the orogeny in the southern Uralides

    NASA Astrophysics Data System (ADS)

    Görz, Ines; Hielscher, Peggy

    2010-10-01

    The Palaeozoic Uralides formed in a three-plate constellation between Europe, Siberia and Kazakhstan-Tarim. Ever since the first plate tectonic concepts, it has been debated whether the Uralide orogeny was the result of relative plate motion between Europe and Siberia or between Europe and Kazakhstan. In this study, we use a new approach to address this problem. We perform a structural analysis on the sphere, reconstruct the positions of the Euler poles of the relative plate rotations Siberia-Europe and Tarim-Europe and describe Uralide structures by their relation to small circles about the two Euler poles. Using this method, changes in the strike of tectonic elements that are caused by the spherical geometry of the Earth's surface are eliminated and structures that are compatible with one of the relative plate motions can be identified. We show that only two Euler poles controlled the Palaeozoic tectonic evolution in the whole West Siberian region, but that they acted diachronously in different regions. We provide an explicit model describing the tectonism in West Siberia by an Euler pole, a sense of rotation and an approximate rotation angle. In the southern Uralides, Devonian structures resulted from a plate rotation of Siberia with respect to Europe, while the Permian structures were caused by a relative plate motion of Kazakhstan-Tarim with respect to Europe. The tectonic pause in the Carboniferous period correlates with a reorganization of the plate kinematics.

  7. Nonlinear Fano interferences in open quantum systems: An exactly solvable model

    NASA Astrophysics Data System (ADS)

    Finkelstein-Shapiro, Daniel; Calatayud, Monica; Atabek, Osman; Mujica, Vladimiro; Keller, Arne

    2016-06-01

    We obtain an explicit solution for the stationary-state populations of a dissipative Fano model, where a discrete excited state is coupled to a continuum set of states; both excited sets of states are reachable by photoexcitation from the ground state. The dissipative dynamics are described by a Liouville equation in Lindblad form and the field intensity can take arbitrary values within the model. We show that the population of the continuum states as a function of laser frequency can always be expressed as a Fano profile plus a Lorentzian function with effective parameters whose explicit expressions are given in the case of a closed system coupled to a bath as well as for the original Fano scattering framework. Although the solution is intricate, it can be elegantly expressed as a linear transformation of the kernel of a 4×4 matrix which has the meaning of an effective Liouvillian. We unveil key processes related to the optical nonlinearity that had not been reported to date: electromagnetic-induced transparency, population inversions, power narrowing and broadening, as well as an effective reduction of the Fano asymmetry parameter.
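    The "Fano profile plus Lorentzian" decomposition mentioned above can be illustrated with the standard textbook line shapes (generic forms, not the paper's effective-parameter expressions); q is the Fano asymmetry parameter and eps the reduced detuning, and the weights a and b are hypothetical placeholders for the intensity-dependent effective parameters:

```python
import numpy as np

def fano(eps, q):
    """Standard Fano profile (q + eps)^2 / (1 + eps^2); zero at eps = -q."""
    return (q + eps) ** 2 / (1.0 + eps ** 2)

def lorentzian(eps):
    """Unit-height Lorentzian 1 / (1 + eps^2)."""
    return 1.0 / (1.0 + eps ** 2)

def lineshape(eps, q, a, b):
    """Illustrative combination: a Fano term plus a Lorentzian term, with
    hypothetical weights a and b standing in for the effective parameters."""
    return a * fano(eps, q) + b * lorentzian(eps)

eps = np.linspace(-10.0, 10.0, 2001)
y = lineshape(eps, q=2.0, a=1.0, b=0.5)
```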

  8. Multiscale modeling of a rectifying bipolar nanopore: explicit-water versus implicit-water simulations.

    PubMed

    Ható, Zoltán; Valiskó, Mónika; Kristóf, Tamás; Gillespie, Dirk; Boda, Dezsö

    2017-07-21

    In a multiscale modeling approach, we present computer simulation results for a rectifying bipolar nanopore at two modeling levels. In an all-atom model, we use explicit water to simulate ion transport directly with the molecular dynamics technique. In a reduced model, we use implicit water and apply the Local Equilibrium Monte Carlo method together with the Nernst-Planck transport equation. This hybrid method makes the fast calculation of ion transport possible at the price of lost details. We show that the implicit-water model is an appropriate representation of the explicit-water model when we look at the system at the device (i.e., input vs. output) level. The two models produce qualitatively similar behavior of the electrical current for different voltages and model parameters. Looking at the details of concentration and potential profiles, we find profound differences between the two models. These differences, however, do not influence the basic behavior of the model as a device because they do not influence the z-dependence of the concentration profiles which are the main determinants of current. These results then address an old paradox: how do reduced models, whose assumptions should break down in a nanoscale device, predict experimental data? Our simulations show that reduced models can still capture the overall device physics correctly, even though they get some important aspects of the molecular-scale physics quite wrong; reduced models work because they include the physics that is necessary from the point of view of device function. Therefore, reduced models can suffice for general device understanding and device design, but more detailed models might be needed for molecular level understanding.

  9. Predicting Fish Growth Potential and Identifying Water Quality Constraints: A Spatially-Explicit Bioenergetics Approach

    NASA Astrophysics Data System (ADS)

    Budy, Phaedra; Baker, Matthew; Dahle, Samuel K.

    2011-10-01

    Anthropogenic impairment of water bodies represents a global environmental concern, yet few attempts have successfully linked fish performance to thermal habitat suitability and fewer have distinguished co-varying water quality constraints. We interfaced fish bioenergetics, field measurements, and Thermal Remote Imaging to generate a spatially-explicit, high-resolution surface of fish growth potential, and next employed a structured hypothesis to detect relationships among measures of fish performance and co-varying water quality constraints. Our thermal surface of fish performance captured the amount and spatial-temporal arrangement of thermally-suitable habitat for three focal species in an extremely heterogeneous reservoir, but interpretation of this pattern was initially confounded by seasonal covariation of water residence time and water quality. Subsequent path analysis revealed that in terms of seasonal patterns in growth potential, catfish and walleye responded to temperature, positively and negatively, respectively; crappie and walleye responded to eutrophy (negatively). At the high eutrophy levels observed in this system, some desired fishes appear to suffer from excessive cultural eutrophication within the context of elevated temperatures whereas others appear to be largely unaffected or even enhanced. Our overall findings do not lead to the conclusion that this system is degraded by pollution; however, they do highlight the need to use a sensitive focal species in the process of determining allowable nutrient loading and as integrators of habitat suitability across multiple spatial and temporal scales. We provide an integrated approach useful for quantifying fish growth potential and identifying water quality constraints on fish performance at spatial scales appropriate for whole-system management.
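    The mapping from a thermal surface to a growth-potential surface can be caricatured as below. The study used full bioenergetics models; this sketch substitutes a hypothetical Gaussian thermal response (the optimum, spread, and maximum are invented values) purely to show how a temperature raster becomes a spatially-explicit growth raster.

```python
import numpy as np

def thermal_growth_potential(temp_c, t_opt, t_sd, g_max):
    """Hypothetical Gaussian stand-in for a bioenergetics growth response:
    growth peaks at the species' thermal optimum t_opt and falls off with
    distance from it, scaled to a maximum g_max."""
    return g_max * np.exp(-0.5 * ((temp_c - t_opt) / t_sd) ** 2)

# A toy 4x4 "thermal image" of water surface temperatures (deg C),
# standing in for the high-resolution remote-sensing surface:
temps = np.array([[18, 20, 22, 24],
                  [19, 21, 23, 25],
                  [20, 22, 24, 26],
                  [21, 23, 25, 27]], dtype=float)

# Per-pixel growth potential for a hypothetical cool-water species:
growth = thermal_growth_potential(temps, t_opt=22.0, t_sd=3.0, g_max=1.0)
```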

  10. On testing two major cumulus parameterization schemes using the CSU Regional Atmospheric Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.

    1993-10-01

    One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection over an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations with a model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).

  11. Empirical evaluation of spatial and non-spatial European-scale multimedia fate models: results and implications for chemical risk assessment.

    PubMed

    Armitage, James M; Cousins, Ian T; Hauck, Mara; Harbers, Jasper V; Huijbregts, Mark A J

    2007-06-01

    Multimedia environmental fate models are commonly-applied tools for assessing the fate and distribution of contaminants in the environment. Owing to the large number of chemicals in use and the paucity of monitoring data, such models are often adopted as part of decision-support systems for chemical risk assessment. The purpose of this study was to evaluate the performance of three multimedia environmental fate models (spatially- and non-spatially-explicit) at a European scale. The assessment was conducted for four polycyclic aromatic hydrocarbons (PAHs) and hexachlorobenzene (HCB) and compared predicted and median observed concentrations using monitoring data collected for air, water, sediments and soils. Model performance in the air compartment was reasonable for all models included in the evaluation exercise as predicted concentrations were typically within a factor of 3 of the median observed concentrations. Furthermore, there was good correspondence between predictions and observations in regions that had elevated median observed concentrations for both spatially-explicit models. On the other hand, all three models consistently underestimated median observed concentrations in sediment and soil by 1-3 orders of magnitude. Although regions with elevated median observed concentrations in these environmental media were broadly identified by the spatially-explicit models, the magnitude of the discrepancy between predicted and median observed concentrations is of concern in the context of chemical risk assessment. These results were discussed in terms of factors influencing model performance such as the steady-state assumption, inaccuracies in emission estimates and the representativeness of monitoring data.

  12. A probabilistic approach to identify putative drug targets in biochemical networks.

    PubMed

    Murabito, Ettore; Smallbone, Kieran; Swinton, Jonathan; Westerhoff, Hans V; Steuer, Ralf

    2011-06-06

    Network-based drug design holds great promise in clinical research as a way to overcome the limitations of traditional approaches in the development of drugs with high efficacy and low toxicity. This novel strategy aims to study how a biochemical network as a whole, rather than its individual components, responds to specific perturbations in different physiological conditions. Proteins exerting little control over normal cells and larger control over altered cells may be considered as good candidates for drug targets. The application of network-based drug design would greatly benefit from using an explicit computational model describing the dynamics of the system under investigation. However, creating a fully characterized kinetic model is not an easy task, even for relatively small networks, as it is still significantly hampered by the lack of data about kinetic mechanisms and parameter values. Here, we propose a Monte Carlo approach to identify the differences between flux control profiles of a metabolic network in different physiological states, when information about the kinetics of the system is partially or totally missing. Based on experimentally accessible information on metabolic phenotypes, we develop a novel method to determine probabilistic differences in the flux control coefficients between the two observable phenotypes. Knowledge of how differences in flux control are distributed among the different enzymatic steps is exploited to identify points of fragility in one of the phenotypes. Using a prototypical cancerous phenotype as an example, we demonstrate how our approach can assist researchers in developing compounds with high efficacy and low toxicity. © 2010 The Royal Society
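    A toy version of the Monte Carlo idea (not the authors' implementation) is to sample flux control coefficients that respect the summation theorem (they sum to one across the pathway) for each phenotype and then estimate the probability that a given enzymatic step gains control in the altered state. The Dirichlet priors below are hypothetical stand-ins for the constraints that partially known kinetics would impose:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 10_000

# Hypothetical priors over 5 enzymatic steps for the two phenotypes;
# in the altered phenotype, step 0 is assumed to gain flux control.
alpha_normal = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
alpha_altered = np.array([4.0, 1.0, 1.0, 1.0, 1.0])

# Each Dirichlet draw is a control-coefficient profile summing to 1.
C_normal = rng.dirichlet(alpha_normal, size=n_samples)
C_altered = rng.dirichlet(alpha_altered, size=n_samples)

# Probability that each step exerts more flux control in the altered
# phenotype -- steps with high probability are candidate fragile points.
p_greater = (C_altered > C_normal).mean(axis=0)
```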

  13. Neutron coincidence measurements when nuclear parameters vary during the multiplication process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.

    1995-07-01

    In a recent paper, a physical/mathematical model was developed for neutron coincidence counting, taking explicit account of neutron absorption and leakage, and using dual probability generating functions to derive explicit formulae for the single and multiple count-rates in terms of the physical parameters of the system. The results of this modeling proved very successful in a number of cases in which the system parameters (neutron reaction cross-sections, detection probabilities, etc.) remained the same at the various stages of the process (i.e. from collision to collision). However, there are practical circumstances in which such system parameters change from collision to collision, and it is necessary to accommodate these, too, in a general theory applicable to such situations. For instance, in the case of the neutron coincidence collar (NCC), the parameters for the initial, spontaneous fission neutrons are not the same as those for the succeeding induced fission neutrons, and similar situations can be envisaged for certain other experimental configurations. The present document shows how the previous considerations can be elaborated to embrace these more general requirements.

  14. Deconstructing the core dynamics from a complex time-lagged regulatory biological circuit.

    PubMed

    Eriksson, O; Brinne, B; Zhou, Y; Björkegren, J; Tegnér, J

    2009-03-01

    Complex regulatory dynamics is ubiquitous in molecular networks composed of genes and proteins. Recent progress in computational biology and its application to molecular data generate a growing number of complex networks. Yet, it has been difficult to understand the governing principles of these networks beyond graphical analysis or extensive numerical simulations. Here the authors exploit several simplifying biological circumstances which enable them to directly detect the underlying dynamical regularities driving periodic oscillations in a dynamical nonlinear computational model of a protein-protein network. System analysis is performed using the cell cycle, a mathematically well-described complex regulatory circuit driven by external signals. By introducing an explicit time delay and using a 'tearing-and-zooming' approach the authors reduce the system to a piecewise linear system with two variables that capture the dynamics of this complex network. A key step in the analysis is the identification of functional subsystems by identifying the relations between state variables within the model. These functional subsystems are referred to as dynamical modules operating as sensitive switches in the original complex model. By using reduced mathematical representations of the subsystems the authors derive explicit conditions on how the cell cycle dynamics depends on system parameters, and can, for the first time, analyse and prove global conditions for system stability. The approach, which includes utilising simplifying biological conditions, identification of dynamical modules and mathematical reduction of the model complexity, may be applicable to other well-characterised biological regulatory circuits. [Includes supplementary material].

  15. State analysis requirements database for engineering complex embedded systems

    NASA Technical Reports Server (NTRS)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which captures system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  16. The scaling of geographic ranges: implications for species distribution models

    USGS Publications Warehouse

    Yackulic, Charles B.; Ginsberg, Joshua R.

    2016-01-01

    There is a need for timely science to inform policy and management decisions; however, we must also strive to provide predictions that best reflect our understanding of ecological systems. Species distributions evolve through time and reflect responses to environmental conditions that are mediated through individual and population processes. Species distribution models that reflect this understanding, and explicitly model dynamics, are likely to give more accurate predictions.

  17. Georeferenced model simulations efficiently support targeted monitoring

    NASA Astrophysics Data System (ADS)

    Berlekamp, Jürgen; Klasmeier, Jörg

    2010-05-01

    The European Water Framework Directive (WFD) demands the good ecological and chemical status of surface waters. To meet the definition of good chemical status of the WFD, surface water concentrations of priority pollutants must not exceed established environmental quality standards (EQS). Surveillance of the concentrations of numerous chemical pollutants in whole river basins by monitoring is laborious and time-consuming. Moreover, measured data often do not allow for immediate source apportionment, which is a prerequisite for defining promising reduction strategies to be implemented within the programme of measures. In this context, spatially explicit model approaches are highly advantageous because they provide a direct link between local point emissions (e.g. treated wastewater) or diffuse non-point emissions (e.g. agricultural runoff) and the resulting surface water concentrations. Scenario analyses with such models allow for a priori investigation of potential positive effects of reduction measures such as optimization of wastewater treatment. The geo-referenced model GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers) has been designed to calculate spatially resolved averaged concentrations for different flow conditions (e.g. mean or low flow) based on emission estimates for local point sources such as treated effluents from wastewater treatment plants. The methodology was applied to selected pharmaceuticals (diclofenac, sotalol, metoprolol, carbamazepine) in the Main river basin in Germany (approx. 27,290 km²). Average concentrations of the compounds were calculated for each river reach in the whole catchment. Simulation results were evaluated by comparison with available data from orienting monitoring and used to develop an optimal monitoring strategy for the assessment of water quality regarding micropollutants at the catchment scale.
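    The kind of geo-referenced exposure calculation described above can be caricatured by a single-reach mass balance: a point-source load is diluted by river flow and decays first-order along the reach. The numbers and the first-order loss rate below are purely illustrative (the real model routes loads through the whole river network):

```python
import math

def reach_concentration(load_g_per_day, flow_m3_per_s, k_per_h, travel_h):
    """End-of-reach concentration (ug/L) for a single point source:
    dilution of the daily load by the river flow, followed by
    first-order in-stream loss over the travel time."""
    # Initial mixed concentration: g/day divided by m^3/day, converted to ug/L.
    c0_ug_per_l = load_g_per_day * 1e6 / (flow_m3_per_s * 86400 * 1000)
    return c0_ug_per_l * math.exp(-k_per_h * travel_h)

# Hypothetical WWTP effluent load in a mid-sized river reach:
c = reach_concentration(load_g_per_day=50.0, flow_m3_per_s=10.0,
                        k_per_h=0.05, travel_h=6.0)
```

    Chaining this calculation over every reach of a digital river network, and summing contributions at confluences, is essentially what makes the model output directly comparable to monitoring data reach by reach.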

  18. Various Numerical Applications on Tropical Convective Systems Using a Cloud Resolving Model

    NASA Technical Reports Server (NTRS)

    Shie, C.-L.; Tao, W.-K.; Simpson, J.

    2003-01-01

    In recent years, increasing attention has been given to cloud resolving models (CRMs, or cloud ensemble models, CEMs) for their ability to simulate the radiative-convective system, which plays a significant role in determining the regional heat and moisture budgets in the Tropics. The growing popularity of CRM usage can be credited to its inclusion of crucial and physically relatively realistic features such as explicit cloud-scale dynamics, sophisticated microphysical processes, and explicit cloud-radiation interaction. On the other hand, impacts of the environmental conditions (for example, the large-scale wind fields, heat and moisture advections, as well as sea surface temperature) on the convective system can also be plausibly investigated using CRMs with imposed explicit forcing. In this paper, using the Goddard Cumulus Ensemble (GCE) model, three different studies on tropical convective systems are briefly presented. Each of these studies serves a different goal and uses a different approach. In the first study, which takes a more idealized approach, the respective impacts of the large-scale horizontal wind shear and surface fluxes on the modeled tropical quasi-equilibrium states of temperature and water vapor are examined. In this 2-D study, the imposed large-scale horizontal wind shear is either nudged (wind shear maintained strong) or mixed (wind shear weakened), while the minimum surface wind speed used for computing surface fluxes varies among the numerical experiments. For the second study, a handful of real tropical episodes (TRMM Kwajalein Experiment - KWAJEX, 1999; TRMM South China Sea Monsoon Experiment - SCSMEX, 1998) have been simulated such that several major atmospheric characteristics, such as the rainfall amount and its associated stratiform contribution and the Q1/heat and Q2/moisture budgets, are investigated.
In this study, the observed large-scale heat and moisture advections are continuously applied to the 2-D model. The modeled cloud generated from such an approach is termed continuously forced convection or continuous large-scale forced convection. A third study, which focuses on the respective impacts of atmospheric components on upper-ocean heat and salt budgets, will be presented at the end. Unlike the two previous 2-D studies, this study employs the 3-D GCE-simulated diabatic source terms (using TOGA COARE observations) - radiation (longwave and shortwave), surface fluxes (sensible and latent heat, and wind stress), and precipitation - as input for the ocean mixed-layer (OML) model.

  19. A Behavioral Model of Landscape Change in the Amazon Basin: The Colonist Case

    NASA Technical Reports Server (NTRS)

    Walker, R. A.; Drzyzga, S. A.; Li, Y. L.; Wi, J. G.; Caldas, M.; Arima, E.; Vergara, D.

    2004-01-01

    This paper presents the prototype of a predictive model capable of describing both magnitudes of deforestation and its spatial articulation into patterns of forest fragmentation. In a departure from other landscape models, it establishes an explicit behavioral foundation for algorithm development, predicated on notions of the peasant economy and on household production theory. It takes a 'bottom-up' approach, generating the process of land-cover change occurring at lot level together with the geography of a transportation system to describe regional landscape change. In other words, it translates the decentralized decisions of individual households into a collective, spatial impact. In so doing, the model unites the richness of survey research on farm households with the analytical rigor of spatial analysis enabled by geographic information systems (GIS). The paper describes earlier efforts at spatial modeling, provides a critique of the so-called spatially explicit model, and elaborates a behavioral foundation by considering farm practices of colonists in the Amazon basin. It then uses insight from the behavioral statement to motivate a GIS-based model architecture. The model is implemented for a long-standing colonization frontier in the eastern sector of the basin, along the Trans-Amazon Highway in the State of Para, Brazil. Results are subjected to both sensitivity analysis and error assessment, and suggestions are made about how the model could be improved.

  20. A system of recurrent neural networks for modularising, parameterising and dynamic analysis of cell signalling networks.

    PubMed

    Samarasinghe, S; Ling, H

    In this paper, we show how to extend our previously proposed novel continuous time Recurrent Neural Networks (RNN) approach, which retains the advantage of continuous dynamics offered by Ordinary Differential Equations (ODE) while enabling parameter estimation through adaptation, to larger signalling networks using a modular approach. Specifically, the signalling network is decomposed into several sub-models based on important temporal events in the network. Each sub-model is represented by the proposed RNN and trained using data generated from the corresponding ODE model. Trained sub-models are assembled into a whole-system RNN which is then subjected to systems dynamics and sensitivity analyses. The concept is illustrated by application to the G1/S transition in the cell cycle using the Iwamoto et al. (2008) ODE model. We decomposed the G1/S network into 3 sub-models: (i) E2F transcription factor release; (ii) E2F and CycE positive feedback loop for elevating cyclin levels; and (iii) E2F and CycA negative feedback to degrade E2F. The trained sub-models accurately represented system dynamics and parameters were in good agreement with the ODE model. The whole-system RNN, however, revealed a couple of parameters contributing to compounding errors due to feedback and required refinement to sub-model 2. These related to the reversible reaction between CycE/CDK2 and its inhibitor p27. The revised whole-system RNN model very accurately matched the dynamics of the ODE system. Local sensitivity analysis of the whole-system model further revealed the dominant influence of the above two parameters in perturbing the G1/S transition, giving support to a recent hypothesis that the release of the inhibitor p27 from the Cyc/CDK complex triggers cell cycle stage transition. 
To make the model useful in a practical setting, we modified each RNN sub-model with a time-relay switch to accommodate input data at larger intervals (≈20 min; the original model used data at intervals of 30 s or less) and retrained them, which produced parameters and protein concentrations similar to those of the original RNN system. Results thus demonstrated the reliability of the proposed RNN method for modelling relatively large networks by modularisation in practical settings. Advantages of the method are its ability to represent accurate continuous system dynamics and the ease of: parameter estimation through training with data from a practical setting; model analysis (40% faster than ODE); fine-tuning of parameters when more data are available; sub-model extension when new elements and/or interactions come to light; and model expansion through the addition of sub-models. Copyright © 2017 Elsevier B.V. All rights reserved.
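The core idea of the paper's pipeline — generate trajectory data from a reference model, then estimate a sub-model's parameters so it reproduces those dynamics — can be sketched with a single continuous-time RNN unit. Everything below (the one-unit network, the Euler integrator, and the coarse parameter scan standing in for the gradient-based adaptation used in the paper) is illustrative, not the authors' implementation:

```python
import numpy as np

def ctrnn_trajectory(w, tau=1.0, x0=0.5, dt=0.01, steps=500):
    """Euler-integrate a one-unit continuous-time RNN: dx/dt = (-x + tanh(w*x)) / tau."""
    xs = np.empty(steps)
    x = x0
    for i in range(steps):
        x += dt * (-x + np.tanh(w * x)) / tau
        xs[i] = x
    return xs

# "Training data": a trajectory generated with a known weight, standing in for
# data generated by the corresponding ODE sub-model.
target = ctrnn_trajectory(w=1.8)

def loss(w):
    return float(np.mean((ctrnn_trajectory(w) - target) ** 2))

# Parameter estimation by a coarse scan (a crude stand-in for training by adaptation).
candidates = np.linspace(0.0, 3.0, 301)
w_best = min(candidates, key=loss)
```

In the modular scheme, each sub-model would be fit this way against its own segment of ODE-generated data before assembly into the whole-system network.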

  1. Implicit and explicit self-esteem and their reciprocal relationship with symptoms of depression and social anxiety: a longitudinal study in adolescents.

    PubMed

    van Tuijl, Lonneke A; de Jong, Peter J; Sportel, B Esther; de Hullu, Eva; Nauta, Maaike H

    2014-03-01

    A negative self-view is a prominent factor in most cognitive vulnerability models of depression and anxiety. Recently, there has been increased attention to differentiating between the implicit (automatic) and the explicit (reflective) processing of self-related evaluations. This longitudinal study aimed to test the association between implicit and explicit self-esteem and symptoms of adolescent depression and social anxiety disorder. Two complementary models were tested: the vulnerability model and the scarring effect model. Participants were 1641 first- and second-year pupils of secondary schools in the Netherlands. The Rosenberg Self-Esteem Scale, self-esteem Implicit Association Test and Revised Child Anxiety and Depression Scale were completed to measure explicit self-esteem, implicit self-esteem and symptoms of social anxiety disorder (SAD) and major depressive disorder (MDD), respectively, at baseline and two-year follow-up. Explicit self-esteem at baseline was associated with symptoms of MDD and SAD at follow-up. Symptomatology at baseline was not associated with explicit self-esteem at follow-up. Implicit self-esteem was not associated with symptoms of MDD or SAD in either direction. We relied on self-report measures of MDD and SAD symptomatology. Also, findings are based on a non-clinical sample. Our findings support the vulnerability model, and not the scarring effect model. These findings support targeting explicit self-esteem in interventions to prevent increases in MDD and SAD symptomatology in non-clinical adolescents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Behavioral response to contamination risk information in a spatially explicit groundwater environment: Experimental evidence

    NASA Astrophysics Data System (ADS)

    Li, Jingyuan; Michael, Holly A.; Duke, Joshua M.; Messer, Kent D.; Suter, Jordan F.

    2014-08-01

    This paper assesses the effectiveness of aquifer monitoring information in achieving more sustainable use of a groundwater resource in the absence of management policy. Groundwater user behavior in the face of an irreversible contamination threat is studied by applying methods of experimental economics to scenarios that combine a physics-based, spatially explicit, numerical groundwater model with different representations of information about an aquifer and its risk of contamination. The results suggest that the threat of catastrophic contamination affects pumping decisions: pumping is significantly reduced in experiments where contamination is possible compared to those where pumping cost is the only factor discouraging groundwater use. The level of information about the state of the aquifer also affects extraction behavior. Pumping rates differ when information that synthesizes data on aquifer conditions (a "risk gauge") is provided, despite invariant underlying economic incentives, and this result does not depend on whether the risk information is location-specific or from a whole aquifer perspective. Interestingly, users increase pumping when the risk gauge signals good aquifer status compared to a no-gauge treatment. When the gauge suggests impending contamination, however, pumping declines significantly, resulting in a lower probability of contamination. The study suggests that providing relatively simple aquifer condition guidance derived from monitoring data can lead to more sustainable use of groundwater resources.

  3. Simple models for studying complex spatiotemporal patterns of animal behavior

    NASA Astrophysics Data System (ADS)

    Tyutyunov, Yuri V.; Titova, Lyudmila I.

    2017-06-01

    Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species, such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial differential equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with explicit description of decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges due to self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites or mechanical waves in the medium, e.g., sound). These cues play the role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and the properties of the movement stimulus we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a somewhat more detailed 'hybrid' approach that combines simulation of the individual movements with the continuous model describing diffusion and decay of the stimuli in an explicit way. These can be used to improve movement models for many species, including large marine predators.
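The taxis-diffusion-reaction idea — a stimulus field that diffuses and decays, with movement velocity proportional to its gradient — can be sketched in one dimension. The 1-D periodic habitat, the fixed prey distribution, and all coefficients below are illustrative assumptions, not the authors' system:

```python
import numpy as np

# 1-D periodic habitat: prey at x = 7 emit a stimulus that diffuses and decays;
# predator density is advected up the stimulus gradient (taxis).
n, L, dt = 100, 10.0, 0.001
dx = L / n
x = np.arange(n) * dx
D, decay, chi = 0.5, 1.0, 1.0           # stimulus diffusion, decay, taxis coefficient

prey = np.exp(-(x - 7.0) ** 2)          # fixed prey distribution (stimulus source)
pred = np.exp(-(x - 3.0) ** 2)          # initial predator density
stim = np.zeros(n)
mass0 = pred.sum() * dx                 # initial predator mass

for _ in range(2000):
    lap = (np.roll(stim, -1) - 2 * stim + np.roll(stim, 1)) / dx ** 2
    stim += dt * (D * lap - decay * stim + prey)
    vel = chi * (np.roll(stim, -1) - np.roll(stim, 1)) / (2 * dx)  # taxis velocity
    flux = pred * vel
    # conservative (flux-form) update: total predator mass is preserved
    pred -= dt * (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
```

The flux-form update keeps the total predator population constant while the density drifts toward the prey-generated stimulus peak, which is the minimal pursuit ingredient shared by all three modelling approaches.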

  4. Spatially explicit assessment of estuarine fish after Deepwater Horizon oil spill: trade-off in complexity and parsimony

    EPA Science Inventory

    Evaluating long- term contaminant effects on wildlife populations depends on spatial information about habitat quality, heterogeneity in contaminant exposure, and sensitivities and distributions of species integrated into a systems modeling approach. Rarely is this information re...

  5. Timescales and the management of ecological systems.

    PubMed

    Hastings, Alan

    2016-12-20

    Human management of ecological systems, including fisheries, invasive species, and restoration, must often be undertaken with limited information. This means that developing general principles and heuristic approaches is important. Here, I focus on one aspect, the importance of an explicit consideration of time, which arises because of the inherent limitations in the response of ecological systems. I focus mainly on simple systems and models, beginning with systems without density dependence, which are therefore linear. Even for these systems, it is important to recognize the necessary delays in the response of the ecological system to management. Here, I also provide details for optimization that show how general results emerge and emphasize how delays due to demography and life histories can change the optimal management approach. A brief discussion of systems with density dependence and tipping points shows that the same themes emerge, namely, that when considering restoration or management to change the state of an ecological system, timescales need explicit consideration and may change the optimal approach in important ways.

  6. Automating the Transformational Development of Software. Volume 1.

    DTIC Science & Technology

    1983-03-01

    DRACO system [Neighbors 80] uses meta-rules to derive information about which new transformations will be applicable after a particular transformation has...transformation over another. The new model, as incorporated in a system called Glitter, explicitly represents transformation goals, methods, and selection...done anew for each new problem (compare this with Neighbors' Draco system [Neighbors 80] which attempts to reuse domain analysis). o Is the user

  7. Diagram-based Analysis of Causal Systems (DACS): elucidating inter-relationships between determinants of acute lower respiratory infections among children in sub-Saharan Africa.

    PubMed

    Rehfuess, Eva A; Best, Nicky; Briggs, David J; Joffe, Mike

    2013-12-06

    Effective interventions require evidence on how individual causal pathways jointly determine disease. Based on the concept of systems epidemiology, this paper develops Diagram-based Analysis of Causal Systems (DACS) as an approach to analyze complex systems, and applies it by examining the contributions of proximal and distal determinants of childhood acute lower respiratory infections (ALRI) in sub-Saharan Africa. Diagram-based Analysis of Causal Systems combines the use of causal diagrams with multiple routinely available data sources, using a variety of statistical techniques. In a step-by-step process, the causal diagram evolves from conceptual based on a priori knowledge and assumptions, through operational informed by data availability which then undergoes empirical testing, to integrated which synthesizes information from multiple datasets. In our application, we apply different regression techniques to Demographic and Health Survey (DHS) datasets for Benin, Ethiopia, Kenya and Namibia and a pooled World Health Survey (WHS) dataset for sixteen African countries. Explicit strategies are employed to make transparent the decisions about the inclusion/omission of arrows, the sign and strength of the relationships and homogeneity/heterogeneity across settings. Findings about the current state of evidence on the complex web of socio-economic, environmental, behavioral and healthcare factors influencing childhood ALRI, based on DHS and WHS data, are summarized in an integrated causal diagram. Notably, solid fuel use is structured by socio-economic factors and increases the risk of childhood ALRI mortality. Diagram-based Analysis of Causal Systems is a means of organizing the current state of knowledge about a specific area of research, and a framework for integrating statistical analyses across a whole system.
This partly a priori approach is explicit about causal assumptions guiding the analysis and about researcher judgment, and wrong assumptions can be reversed following empirical testing. This approach is well-suited to dealing with complex systems, in particular where data are scarce.
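A causal diagram of the kind DACS builds can be represented as a mapping from each node to its direct causes and checked for acyclicity before any regression is attached to its arrows. The node names and arrows below are an illustrative reading of the determinants discussed in the abstract, not the paper's published diagram:

```python
from graphlib import TopologicalSorter

# Hypothetical fragment of a causal diagram for childhood ALRI:
# node -> set of its direct causes (the tails of arrows pointing into it).
diagram = {
    "ALRI_mortality": {"solid_fuel_use", "malnutrition", "healthcare_access"},
    "solid_fuel_use": {"household_wealth", "education"},
    "malnutrition": {"household_wealth"},
    "healthcare_access": {"household_wealth"},
    "household_wealth": set(),
    "education": set(),
}

# A causal diagram must be a DAG; TopologicalSorter raises CycleError otherwise.
# static_order() returns every node after all of its causes, which is also the
# order in which arrow-specific regressions could be estimated.
order = list(TopologicalSorter(diagram).static_order())
```

Each arrow in such a structure would then be quantified with its own regression on the DHS/WHS data, with the topological order guaranteeing that distal determinants are handled before the outcomes they feed into.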

  8. Diagram-based Analysis of Causal Systems (DACS): elucidating inter-relationships between determinants of acute lower respiratory infections among children in sub-Saharan Africa

    PubMed Central

    2013-01-01

    Background Effective interventions require evidence on how individual causal pathways jointly determine disease. Based on the concept of systems epidemiology, this paper develops Diagram-based Analysis of Causal Systems (DACS) as an approach to analyze complex systems, and applies it by examining the contributions of proximal and distal determinants of childhood acute lower respiratory infections (ALRI) in sub-Saharan Africa. Results Diagram-based Analysis of Causal Systems combines the use of causal diagrams with multiple routinely available data sources, using a variety of statistical techniques. In a step-by-step process, the causal diagram evolves from conceptual based on a priori knowledge and assumptions, through operational informed by data availability which then undergoes empirical testing, to integrated which synthesizes information from multiple datasets. In our application, we apply different regression techniques to Demographic and Health Survey (DHS) datasets for Benin, Ethiopia, Kenya and Namibia and a pooled World Health Survey (WHS) dataset for sixteen African countries. Explicit strategies are employed to make transparent the decisions about the inclusion/omission of arrows, the sign and strength of the relationships and homogeneity/heterogeneity across settings. Findings about the current state of evidence on the complex web of socio-economic, environmental, behavioral and healthcare factors influencing childhood ALRI, based on DHS and WHS data, are summarized in an integrated causal diagram. Notably, solid fuel use is structured by socio-economic factors and increases the risk of childhood ALRI mortality. Conclusions Diagram-based Analysis of Causal Systems is a means of organizing the current state of knowledge about a specific area of research, and a framework for integrating statistical analyses across a whole system.
This partly a priori approach is explicit about causal assumptions guiding the analysis and about researcher judgment, and wrong assumptions can be reversed following empirical testing. This approach is well-suited to dealing with complex systems, in particular where data are scarce. PMID:24314302

  9. Discovering latent commercial networks from online financial news articles

    NASA Astrophysics Data System (ADS)

    Xia, Yunqing; Su, Weifeng; Lau, Raymond Y. K.; Liu, Yi

    2013-08-01

    Unlike most online social networks where explicit links among individual users are defined, the relations among commercial entities (e.g. firms) may not be explicitly declared in commercial Web sites. One main contribution of this article is the development of a novel computational model for the discovery of the latent relations among commercial entities from online financial news. More specifically, a conditional random field (CRF) model, which can exploit both structural and contextual features, is applied to commercial entity recognition. In addition, a point-wise mutual information (PMI)-based unsupervised learning method is developed for commercial relation identification. To evaluate the effectiveness of the proposed computational methods, a prototype system called CoNet has been developed. Based on the financial news articles crawled from Google Finance, the CoNet system achieves average F-scores of 0.681 and 0.754 in commercial entity recognition and commercial relation identification, respectively. Our experimental results confirm that the proposed shallow natural language processing methods are effective for the discovery of latent commercial networks from online financial news.
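The PMI-based relation identification step can be sketched from co-occurrence counts: entities that appear together in news items more often than chance would predict get a positive PMI score. The toy "sentences" and firm names below are invented for illustration; this is not the CoNet implementation:

```python
import math
from collections import Counter
from itertools import combinations

# Toy "news sentences": each is the set of (hypothetical) firms mentioned together.
sentences = [
    {"AcmeBank", "BoltMotors"},
    {"AcmeBank", "BoltMotors"},
    {"AcmeBank", "CloudSoft"},
    {"BoltMotors"},
    {"CloudSoft", "DataCorp"},
]

n = len(sentences)
single = Counter(e for s in sentences for e in s)
pair = Counter(frozenset(p) for s in sentences for p in combinations(sorted(s), 2))

def pmi(a, b):
    """Point-wise mutual information: log2( P(a,b) / (P(a) * P(b)) )."""
    p_ab = pair[frozenset((a, b))] / n
    return math.log2(p_ab / ((single[a] / n) * (single[b] / n))) if p_ab else float("-inf")
```

A positive score (here, AcmeBank with BoltMotors) suggests a latent commercial relation, while a negative or undefined score suggests incidental or absent co-occurrence.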

  10. Aerosol effects on cloud water amounts were successfully simulated by a global cloud-system resolving model.

    PubMed

    Sato, Yousuke; Goto, Daisuke; Michibata, Takuro; Suzuki, Kentaroh; Takemura, Toshihiko; Tomita, Hirofumi; Nakajima, Teruyuki

    2018-03-07

    Aerosols affect climate by modifying cloud properties through their role as cloud condensation nuclei or ice nuclei, called aerosol-cloud interactions. In most global climate models (GCMs), the aerosol-cloud interactions are represented by empirical parameterisations, in which the mass of cloud liquid water (LWP) is assumed to increase monotonically with increasing aerosol loading. Recent satellite observations, however, have yielded contradictory results: LWP can decrease with increasing aerosol loading. This difference implies that GCMs overestimate the aerosol effect, but the reasons for the difference are not obvious. Here, we reproduce satellite-observed LWP responses using a global simulation with explicit representations of cloud microphysics, instead of the parameterisations. Our analyses reveal that the decrease in LWP originates from the response of evaporation and condensation processes to aerosol perturbations, which are not represented in GCMs. The explicit representation of cloud microphysics in global scale modelling reduces the uncertainty of climate prediction.

  11. Impact of mesophyll diffusion on estimated global land CO2 fertilization

    DOE PAGES

    Sun, Ying; Gu, Lianhong; Dickinson, Robert E.; ...

    2014-10-13

    In C3 plants, CO2 concentrations drop considerably along mesophyll diffusion pathways from substomatal cavities to chloroplasts where CO2 assimilation occurs. Global carbon cycle models have not explicitly represented this internal drawdown and so overestimate CO2 available for carboxylation and underestimate photosynthetic responsiveness to atmospheric CO2. An explicit consideration of mesophyll diffusion increases the modeled cumulative CO2 fertilization effect (CFE) for global gross primary production (GPP) from 915 PgC to 1057 PgC for the period 1901 to 2010. This increase represents a 16% correction, large enough to explain the persistent overestimation of growth rates of historical atmospheric CO2 by Earth System Models. Without this correction, the CFE for global GPP is underestimated by 0.05 PgC yr-1 ppm-1. This finding implies that the contemporary terrestrial biosphere is more CO2-limited than previously thought.

  12. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
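The decision logic of a sequential probability ratio test can be sketched with Wald's thresholds, which are set directly from the chosen false alarm and missed detection risks. The scalar Gaussian observation model and all numbers below are illustrative; the paper's filter bank of norm-inequality-constrained epoch-state filters is not reproduced here:

```python
import math

# Wald thresholds from chosen false-alarm (alpha) and missed-detection (beta) risks.
alpha, beta = 0.01, 0.01
upper = math.log((1 - beta) / alpha)   # accept H1 (close approach) above this
lower = math.log(beta / (1 - alpha))   # accept H0 (safe miss) below this

mu0, mu1, sigma = 10.0, 2.0, 2.0       # illustrative miss-distance means and noise (km)

def sprt(observations):
    """Accumulate the log-likelihood ratio until a threshold is crossed."""
    llr = 0.0
    for y in observations:
        # Gaussian log-likelihood ratio increment: log N(y; mu1) - log N(y; mu0)
        llr += ((y - mu0) ** 2 - (y - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "maneuver"
        if llr <= lower:
            return "no maneuver"
    return "continue tracking"

decision_risky = sprt([2.5, 3.0, 1.8, 2.2])    # data consistent with a close approach
decision_safe = sprt([9.5, 10.4, 11.0, 9.8])   # data consistent with a safe miss
```

Because alpha and beta enter the thresholds directly, the operator's tolerances for false alarms and missed detections shape the decision rule itself rather than being applied after the fact.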

  13. Noise focusing in neuronal tissues: Symmetry breaking and localization in excitable networks with quenched disorder

    NASA Astrophysics Data System (ADS)

    Orlandi, Javier G.; Casademunt, Jaume

    2017-05-01

    We introduce a coarse-grained stochastic model for the spontaneous activity of neuronal cultures to explain the phenomenon of noise focusing, which entails localization of the noise activity in excitable networks with metric correlations. The system is modeled as a continuum excitable medium with a state-dependent spatial coupling that accounts for the dynamics of synaptic connections. The most salient feature is the emergence at the mesoscale of a vector field V(r), which acts as an advective carrier of the noise. This entails an explicit symmetry breaking of isotropy and homogeneity that stems from the amplification of the quenched fluctuations of the network by the activity avalanches, concomitant with the excitable dynamics. We discuss the microscopic interpretation of V(r) and propose an explicit construction of it. The coarse-grained model shows excellent agreement with simulations at the network level. The generic nature of the observed phenomena is discussed.

  14. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.

  15. Quantum morphogenesis: A variation on Thom's catastrophe theory

    NASA Astrophysics Data System (ADS)

    Aerts, Dirk; Czachor, Marek; Gabora, Liane; Kuna, Maciej; Posiewnik, Andrzej; Pykacz, Jarosław; Syty, Monika

    2003-05-01

    Noncommutative propositions are characteristic of both quantum and nonquantum (sociological, biological, and psychological) situations. In a Hilbert space model, states, understood as correlations between all the possible propositions, are represented by density matrices. If systems in question interact via feedback with environment, their dynamics is nonlinear. Nonlinear evolutions of density matrices lead to the phenomenon of morphogenesis that may occur in noncommutative systems. Several explicit exactly solvable models are presented, including “birth and death of an organism” and “development of complementary properties.”

  16. Explicit filtering in large eddy simulation using a discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Brazell, Matthew J.

    The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides the ability for a high order of accuracy in complex geometries, and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of DG's framework, where the solution is approximated using a polynomial basis in which the higher modes of the solution correspond to a higher-order polynomial basis. By removing high-order modes, the filtered solution contains low-order frequency content, much like the output of an explicit low-pass filter. The explicit filter implementation is tested on a simple 1-D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution as well as remove energy accumulated in the higher modes by aliasing. However, the explicit filter is unable to remove numerical errors causing numerical dissipation. A second test case solves the 3-D Navier-Stokes equations for the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant-coefficient Smagorinsky model, the dynamic Smagorinsky model, and the dynamic Heinz model. The constant-coefficient Smagorinsky model is overdissipative; this is generally not desirable, although it does add stability. The dynamic Smagorinsky model generally performs better, especially during the laminar-turbulent transition region, as expected.
The dynamic Heinz model, which is based on an improved formulation, handles the laminar-turbulent transition region well while also showing additional robustness.
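The modal cutoff at the heart of the explicit filter can be sketched outside DG: expand sampled data in a Legendre basis, zero the coefficients above a chosen degree, and evaluate the truncated series. This one-element, least-squares version is an illustration of the idea, not the thesis's DG implementation:

```python
import numpy as np
from numpy.polynomial import legendre

def modal_filter(samples, x, p_full=8, p_keep=4):
    """Fit a degree-p_full Legendre expansion on [-1, 1], then zero all modes
    above degree p_keep (a sharp modal cutoff, i.e. an explicit low-pass filter)."""
    coeffs = legendre.legfit(x, samples, p_full)
    coeffs[p_keep + 1:] = 0.0
    return legendre.legval(x, coeffs)

x = np.linspace(-1, 1, 64)
smooth = 1.0 + 0.5 * x - 0.25 * x**2        # low-order content: passes through unchanged
noisy = smooth + 0.05 * np.sin(40 * x)      # adds high-mode content to be removed
filtered = modal_filter(noisy, x)
```

Content representable below the cutoff degree is untouched, while energy in the discarded modes is removed, which mirrors how truncating the DG polynomial basis filters each element's solution.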

  17. Computer-Aided Group Problem Solving for Unified Life Cycle Engineering (ULCE)

    DTIC Science & Technology

    1989-02-01

    defining the problem, generating alternative solutions, evaluating alternatives, selecting alternatives, and implementing the solution. Systems...specialist in group dynamics, assists the group in formulating the problem and selecting a model framework. The analyst provides the group with computer...allocating resources, evaluating and selecting options, making judgments explicit, and analyzing dynamic systems. c. University of Rhode Island Drs. Geoffery

  18. Finding Your Literature Match - A Physics Literature Recommender System

    NASA Astrophysics Data System (ADS)

    Henneken, Edwin; Kurtz, Michael

    2010-03-01

    A recommender system is a filtering algorithm that helps you find the right match by offering suggestions based on your choices and information you have provided. A latent factor model is a successful approach. Here an item is characterized by a vector describing to what extent it is described by each of N categories, and a person is characterized by an ``interest'' vector, based on explicit or implicit feedback by this user. The recommender system assigns ratings to new items and suggests items this user might be interested in. Here we present results of a recommender system designed to find recent literature of interest to people working in the field of solid state physics. Since we do not have explicit feedback, our user vector consists of (implicit) ``usage.'' Using a system of N keywords we construct normalized keyword vectors for articles based on the keywords of that article and its bibliography. The normalized ``interest'' vector is created by calculating the normalized frequency of keyword occurrence in the papers cited by the papers read.
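The matching step described here — normalized keyword vectors for articles, an interest vector for the user, and a similarity score between them — can be sketched directly. The keyword list, counts, and paper names below are invented for illustration:

```python
import numpy as np

# Hypothetical fixed keyword system (the paper's N keywords).
keywords = ["superconductivity", "graphene", "spintronics", "photonics"]

def keyword_vector(counts):
    """Normalized keyword-frequency vector over the fixed keyword list."""
    v = np.array(counts, dtype=float)
    return v / np.linalg.norm(v)

# Interest vector built from (implicit) usage: keyword frequencies in papers the user read.
user_interest = keyword_vector([8, 5, 1, 0])

# Candidate articles: keyword counts from each article and its bibliography.
candidates = {
    "paper_A": keyword_vector([6, 4, 0, 1]),
    "paper_B": keyword_vector([0, 1, 2, 9]),
}

# Rank new items by cosine similarity (dot product of unit vectors) with the interest vector.
scores = {name: float(user_interest @ vec) for name, vec in candidates.items()}
best = max(scores, key=scores.get)
```

With unit-normalized vectors the dot product is exactly the cosine similarity, so the highest-scoring candidate is the article whose keyword profile points most nearly in the same direction as the user's interests.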

  19. Embedded-explicit emergent literacy intervention I: Background and description of approach.

    PubMed

    Justice, Laura M; Kaderavek, Joan N

    2004-07-01

    This article, the first of a two-part series, provides background information and a general description of an emergent literacy intervention model for at-risk preschoolers and kindergartners. The embedded-explicit intervention model emphasizes the dual importance of providing young children with socially embedded opportunities for meaningful, naturalistic literacy experiences throughout the day, in addition to regular structured therapeutic interactions that explicitly target critical emergent literacy goals. The role of the speech-language pathologist (SLP) in the embedded-explicit model encompasses both indirect and direct service delivery: The SLP consults and collaborates with teachers and parents to ensure the highest quality and quantity of socially embedded literacy-focused experiences and serves as a direct provider of explicit interventions using structured curricula and/or lesson plans. The goal of this integrated model is to provide comprehensive emergent literacy interventions across a spectrum of early literacy skills to ensure the successful transition of at-risk children from prereaders to readers.

  20. Developing Spatially Explicit Habitat Models for Grassland Bird Conservation Planning in the Prairie Pothole Region of North Dakota

    Treesearch

    Neal D. Niemuth; Michael E. Estey; Charles R. Loesch

    2005-01-01

    Conservation planning for birds is increasingly focused on landscapes. However, little spatially explicit information is available to guide landscape-level conservation planning for many species of birds. We used georeferenced 1995 Breeding Bird Survey (BBS) data in conjunction with land-cover information to develop a spatially explicit habitat model predicting the...

  1. Explicit robust schemes for implementation of general principal value-based constitutive models

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.
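The underlying relation — principal stresses obtained by differentiating a strain-energy potential with respect to the principal stretches — can be sketched numerically. The toy nonseparable polynomial W below and the use of central differences (rather than the paper's symbolic derivation) are illustrative assumptions; the stress formula used is the standard isotropic-hyperelastic sigma_i = (lambda_i / J) * dW/dlambda_i with J = lambda_1 * lambda_2 * lambda_3:

```python
def W(l1, l2, l3):
    """Toy nonseparable polynomial strain-energy potential in the principal stretches
    (illustrative only; note it is not stress-free at the identity)."""
    return (l1 * l2 + l2 * l3 + l3 * l1 - 3.0) + 0.5 * (l1 * l2 * l3 - 1.0) ** 2

def principal_stresses(l, h=1e-6):
    """Principal Cauchy stresses sigma_i = (lambda_i / J) * dW/dlambda_i,
    with dW/dlambda_i approximated by central differences."""
    J = l[0] * l[1] * l[2]
    sig = []
    for i in range(3):
        lp, lm = list(l), list(l)
        lp[i] += h
        lm[i] -= h
        dW = (W(*lp) - W(*lm)) / (2 * h)
        sig.append(l[i] / J * dW)
    return sig

sig = principal_stresses([1.0, 1.0, 1.0])
```

Symbolic differentiation, as in the paper, replaces the finite-difference step with exact expressions that remain valid over the entire deformation range and can be emitted as FORTRAN code.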

  2. Atom-bond electronegativity equalization method fused into molecular mechanics. I. A seven-site fluctuating charge and flexible body water potential function for water clusters.

    PubMed

    Yang, Zhong-Zhi; Wu, Yang; Zhao, Dong-Xia

    2004-02-08

    Experimental and theoretical studies of water systems are currently very active. A transferable intermolecular potential with seven sites, fluctuating charges, and a flexible body (ABEEM-7P), based on a combination of the atom-bond electronegativity equalization method and molecular mechanics (ABEEM/MM), is explored and applied to small water clusters in this paper. The consistent combination of ABEEM and molecular mechanics (MM) takes the ABEEM charges of atoms, bonds, and lone-pair electrons into the intermolecular electrostatic interaction term in molecular mechanics. To examine charge transfer we have used two models arising from the types of charge constraint: one imposes charge neutrality on the whole water system and the other on each water molecule. Compared with previous water force fields, the ABEEM-7P model has two characteristics: (1) it not only represents the electrostatic interaction of atoms, bonds, and lone-pair electrons and their variation in response to different ambient environments, but also introduces a "hydrogen bond interaction region" in which a new parameter k(lp,H)(R(lp,H)) describes the electrostatic interaction between a lone pair of electrons and a hydrogen atom that can form a hydrogen bond; (2) a flexible rather than rigid water body, permitting vibration of bond lengths and angles, is allowed owing to the combination of ABEEM and molecular mechanics, and for the van der Waals interaction the ABEEM-7P model takes all atom-atom interactions, i.e., oxygen-oxygen, hydrogen-hydrogen, and oxygen-hydrogen, into account. The ABEEM-7P model based on ABEEM/MM gives quite accurate predictions for gas-phase properties of the small water clusters (H2O)n (n=2-6), such as optimized geometries, monomer dipole moments, vibrational frequencies, and cluster interaction energies.
Due to its explicit description of charges and the hydrogen bond, the ABEEM-7P model will be applied to discuss properties of liquid water, ice, aqueous solutions, and biological systems.

  3. Warning systems in risk management.

    PubMed

    Paté-Cornell, M E

    1986-06-01

    A method is presented here that allows probabilistic evaluation and optimization of warning systems, and comparison of their performance and cost-effectiveness with those of other means of risk management. The model includes an assessment of the signals, and of human response, given the memory that people have kept of the quality of previous alerts. The trade-off between the rate of false alerts and the length of the lead time is studied to account for the long-term effects of "crying wolf" and the effectiveness of emergency actions. An explicit formulation of the system's benefits, including inputs from a signal model, a response model, and a consequence model, is given to allow optimization of the warning threshold and of the system's sensitivity.
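The optimization described here — choosing a warning threshold by trading detections against false alerts whose accumulation erodes response ("crying wolf") — can be sketched with a scalar Gaussian signal model. The prior, signal shift, costs, and compliance function below are all illustrative assumptions, not the paper's formulation:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_event, mu_signal = 0.05, 2.0     # illustrative event prior and signal shift
gain_true, cost_false = 100.0, 1.0 # benefit of a heeded true alert, cost of a false one

def expected_net_benefit(t):
    p_detect = 1.0 - Phi(t - mu_signal)   # P(alert | event)
    p_false = 1.0 - Phi(t)                # P(alert | no event)
    # "crying wolf": response effectiveness erodes as the false-alert rate grows
    compliance = 1.0 / (1.0 + 10.0 * p_false)
    return (p_event * p_detect * gain_true * compliance
            - (1 - p_event) * p_false * cost_false)

# Optimize the warning threshold by a simple scan over candidate values.
ts = [i / 100.0 for i in range(-200, 501)]
t_best = max(ts, key=expected_net_benefit)
```

Lowering the threshold raises detections but also false alerts, which both cost resources and, through the compliance term, degrade the value of every future true alert; the optimum balances the two, as in the signal-response-consequence structure the abstract describes.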

  4. Adaptive and Rational Anticipations in Risk Management Systems and Economy

    NASA Astrophysics Data System (ADS)

    Dubois, Daniel M.; Holmberg, Stig C.

    2010-11-01

    The global financial crisis of 2009 is explained as a result of uncoordinated risk management decisions in business firms and economic organisations. The underlying reason can be found in the current financial system: as the financial market has lost much of its direct coupling to the concrete economy, it provides misleading information to economic decision makers at all levels. Hence, the financial system has moved from a state of moderate, slow cyclical fluctuations into a state of fast, chaotic ones. These misleading decisions can be described, but not explained, with the help of adaptive and rational expectations from macroeconomic theory. In this context, Adaptive Expectations (AE) are related to weak, passive exo-anticipation, while Rational Expectations (RE) can be related to strong, active, design-oriented anticipation. The shortcomings of conventional cures, which build on a reactive paradigm, have already been demonstrated in the economic literature and are further underlined here with the help of Ashby's "Law of Requisite Variety", Weaver's distinction between systems of "Disorganized Complexity" and those of "Organized Complexity", and Klir's "Reconstructability Analysis". Anticipatory decision-making is therefore proposed as a replacement for current expectation-based, passive risk management. An anticipatory model of the business cycle, an extension of the Kaldor-Kalecki model that includes both retardation and anticipation, is presented to support this proposition. While cybernetics, with the feedback process in control systems, deals with an explicit goal or purpose given to a system, the anticipatory system discussed here deals with behaviour for which the future state of the system is built by the system itself, without an explicit goal. A system with weak anticipation is based on a predictive model of the system, while a system with strong anticipation builds its own future by itself.
    Numerical simulations on computer confirm the feasibility of this approach. Hence, functional differential equations with both retardation and anticipation are found to be useful tools for modelling financial systems.
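    The retardation half of such a functional differential equation can be illustrated with a delayed Kaldor-Kalecki-type sketch, in which today's capital accumulation depends on an investment decision taken a lag tau earlier. The functional form and all coefficients below are illustrative, not the authors' calibration.

    ```python
    import math

    # Delayed business-cycle sketch (illustrative, not the paper's model):
    #   dY/dt = alpha * (I(Y(t), K(t)) - s*Y(t))          income adjustment
    #   dK/dt = I(Y(t - tau), K(t)) - delta*K(t)          lagged investment
    def simulate(alpha=3.0, s=0.2, delta=0.05, tau=1.0, dt=0.01, T=60.0):
        n_delay = int(tau / dt)
        # S-shaped (Kaldor-type) investment in income, decreasing in capital
        invest = lambda Y, K: 0.05 + 0.4 / (1 + math.exp(-(Y - 1.0))) - 0.1 * K
        Y, K = 1.1, 1.0
        hist = [Y] * (n_delay + 1)     # constant history on [-tau, 0]
        ys = []
        for _ in range(int(T / dt)):
            Y_lag = hist[0]            # income tau time units ago
            dY = alpha * (invest(Y, K) - s * Y)
            dK = invest(Y_lag, K) - delta * K
            Y += dt * dY
            K += dt * dK
            hist.pop(0)
            hist.append(Y)
            ys.append(Y)
        return ys

    ys = simulate()
    print(min(ys), max(ys))            # range of simulated income
    ```

    Adding the anticipation term would make the right-hand side depend on a future state Y(t + tau) as well, which is what distinguishes these functional differential equations from ordinary delay equations.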

  5. Multiscale optical imaging of rare-earth-doped nanocomposites in a small animal model

    NASA Astrophysics Data System (ADS)

    Higgins, Laura M.; Ganapathy, Vidya; Kantamneni, Harini; Zhao, Xinyu; Sheng, Yang; Tan, Mei-Chee; Roth, Charles M.; Riman, Richard E.; Moghe, Prabhas V.; Pierce, Mark C.

    2018-03-01

    Rare-earth-doped nanocomposites have appealing optical properties for use as biomedical contrast agents, but few systems exist for imaging these materials. We describe the design and characterization of (i) a preclinical system for whole animal in vivo imaging and (ii) an integrated optical coherence tomography/confocal microscopy system for high-resolution imaging of ex vivo tissues. We demonstrate these systems by administering erbium-doped nanocomposites to a murine model of metastatic breast cancer. Short-wave infrared emissions were detected in vivo and in whole organ imaging ex vivo. Visible upconversion emissions and tissue autofluorescence were imaged in biopsy specimens, alongside optical coherence tomography imaging of tissue microstructure. We anticipate that this work will provide guidance for researchers seeking to image these nanomaterials across a wide range of biological models.

  6. geophylobuilder 1.0: an arcgis extension for creating 'geophylogenies'.

    PubMed

    Kidd, David M; Liu, Xianhua

    2008-01-01

    Evolution is inherently a spatiotemporal process; despite this, however, phylogenetic and geographical data and models remain largely isolated from one another. Geographical information systems provide a ready-made spatial modelling, analysis and dissemination environment within which phylogenetic models can be explicitly linked with their associated spatial data and subsequently integrated with other georeferenced data sets describing the biotic and abiotic environment. geophylobuilder 1.0 is an extension for the arcgis geographical information system that builds a 'geophylogenetic' data model from a phylogenetic tree and associated geographical data. Geophylogenetic database objects can subsequently be queried, spatially analysed and visualized in both 2D and 3D within a geographical information system. © 2007 The Authors.

  7. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters, such as radius of curvature, are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived.

  8. Uncertainty in spatially explicit animal dispersal models

    USGS Publications Warehouse

    Mooij, Wolf M.; DeAngelis, Donald L.

    2003-01-01

    Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models at three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
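    The simplest of the three levels, the event-based binomial model, makes the maximum-likelihood machinery easy to illustrate: with k arrivals out of n dispersers, the MLE of the arrival probability is closed-form, and approximate confidence limits fall out of the likelihood profile. The data values below are hypothetical, not the paper's field data.

    ```python
    import math

    # Sketch of a binomial dispersal-survival fit (level 1 in the paper):
    # estimate arrival probability p by maximum likelihood and take the
    # approximate 95% interval from the profile log-likelihood.
    def log_lik(p, k, n):
        return k * math.log(p) + (n - k) * math.log(1 - p)

    def mle_with_ci(k, n, grid=10000):
        p_hat = k / n                              # closed-form binomial MLE
        # drop of chi-square(1)/2 = 1.92 from the maximum gives ~95% limits
        cutoff = log_lik(p_hat, k, n) - 1.92
        inside = [i / grid for i in range(1, grid)
                  if log_lik(i / grid, k, n) >= cutoff]
        return p_hat, min(inside), max(inside)

    # hypothetical data: 18 of 40 dispersers arrived
    p_hat, lo, hi = mle_with_ci(k=18, n=40)
    print(p_hat, lo, hi)
    ```

    The temporally and spatially explicit models replace this one-parameter likelihood with likelihoods over rates or simulated paths, but the profile-based confidence limits are computed the same way.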

  9. Quantum thermodynamics of the resonant-level model with driven system-bath coupling

    NASA Astrophysics Data System (ADS)

    Haughian, Patrick; Esposito, Massimiliano; Schmidt, Thomas L.

    2018-02-01

    We study nonequilibrium thermodynamics in a fermionic resonant-level model with arbitrary coupling strength to a fermionic bath, taking the wide-band limit. In contrast to previous theories, we consider a system where both the level energy and the coupling strength depend explicitly on time. We find that, even in this generalized model, consistent thermodynamic laws can be obtained, up to the second order in the drive speed, by splitting the coupling energy symmetrically between system and bath. We define observables for the system energy, work, heat, and entropy, and calculate them using nonequilibrium Green's functions. We find that the observables fulfill the laws of thermodynamics, and connect smoothly to the known equilibrium results.

  10. REVIEW OF THE GOVERNING EQUATIONS, COMPUTATIONAL ALGORITHMS, AND OTHER COMPONENTS OF THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODELING SYSTEM

    EPA Science Inventory

    This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabiliti...

  11. A first-order analysis of the potential role of CO2 fertilization to affect the global carbon budget: A comparison of four terrestrial biosphere models

    USGS Publications Warehouse

    Kicklighter, D.W.; Bruno, M.; Donges, S.; Esser, G.; Heimann, Martin; Helfrich, J.; Ift, F.; Joos, F.; Kaduk, J.; Kohlmaier, G.H.; McGuire, A.D.; Melillo, J.M.; Meyer, R.; Moore, B.; Nadler, A.; Prentice, I.C.; Sauf, W.; Schloss, A.L.; Sitch, S.; Wittenberg, U.; Wurth, G.

    1999-01-01

    We compared the simulated responses of net primary production, heterotrophic respiration, net ecosystem production and carbon storage in natural terrestrial ecosystems to historical (1765 to 1990) and projected (1990 to 2300) changes of atmospheric CO2 concentration of four terrestrial biosphere models: the Bern model, the Frankfurt Biosphere Model (FBM), the High-Resolution Biosphere Model (HRBM) and the Terrestrial Ecosystem Model (TEM). The results of the model intercomparison suggest that CO2 fertilization of natural terrestrial vegetation has the potential to account for a large fraction of the so-called 'missing carbon sink' of 2.0 Pg C in 1990. Estimates of this potential are reduced when the models incorporate the concept that CO2 fertilization can be limited by nutrient availability. Although the model estimates differ on the potential size (126 to 461 Pg C) of the future terrestrial sink caused by CO2 fertilization, the results of the four models suggest that natural terrestrial ecosystems will have a limited capacity to act as a sink of atmospheric CO2 in the future as a result of physiological constraints and nutrient constraints on NPP. All the spatially explicit models estimate a carbon sink in both tropical and northern temperate regions, but the strength of these sinks varies over time. Differences in the simulated response of terrestrial ecosystems to CO2 fertilization among the models in this intercomparison study reflect the fact that the models have highlighted different aspects of the effect of CO2 fertilization on carbon dynamics of natural terrestrial ecosystems including feedback mechanisms. As interactions with nitrogen fertilization, climate change and forest regrowth may play an important role in simulating the response of terrestrial ecosystems to CO2 fertilization, these factors should be included in future analyses. 
Improvements in spatially explicit data sets, whole-ecosystems experiments and the availability of net carbon exchange measurements across the globe will also help to improve future evaluations of the role of CO2 fertilization on terrestrial carbon storage.

  12. A Verification System for Distributed Objects with Asynchronous Method Calls

    NASA Astrophysics Data System (ADS)

    Ahrendt, Wolfgang; Dylla, Maximilian

    We present a verification system for Creol, an object-oriented modeling language for concurrent distributed applications. The system is an instance of KeY, a framework for object-oriented software verification, which has so far been applied foremost to sequential Java. Building on KeY's characteristic concepts, such as dynamic logic, the sequent calculus, explicit substitutions, and the taclet rule language, the system presented in this paper addresses functional correctness of Creol models featuring local cooperative thread parallelism and global communication via asynchronous method calls. The calculus operates heavily on communication histories, which describe the interfaces of Creol units. Two example scenarios demonstrate the usage of the system.

  13. Thermodynamic evaluation of transonic compressor rotors using the finite volume approach

    NASA Technical Reports Server (NTRS)

    Moore, John; Nicholson, Stephen; Moore, Joan G.

    1986-01-01

    The development of a computational capability to handle viscous flow with an explicit time-marching method based on the finite volume approach is summarized. Emphasis is placed on the extensions to the computational procedure which allow the handling of shock induced separation and large regions of strong backflow. Appendices contain abstracts of papers and whole reports generated during the contract period.

  14. The Winds of Katrina Still Call Our Names: How Do Teachers and Schools Confront Social Justice Issues?

    ERIC Educational Resources Information Center

    Wynne, Joan T.

    2007-01-01

    Certainly, individuals in many colleges and public schools address the impact of race, class, and power on schools, yet the institutions as a whole continue, even a year after Katrina, to ignore the imperative to explicitly and consistently deal with these issues. Human justice must become an institutional mantra, not just the conversation of a…

  15. Clients or Consumers, Commonplace or Pioneers? Navigating the Contemporary Class Politics of Family, Parenting Skills and Education

    ERIC Educational Resources Information Center

    Edwards, Rosalind; Gillies, Val

    2011-01-01

    An explicit linking of the minutiae of everyday parenting practices and the good of society as a whole has been a feature of government policy. The state has taken responsibility for instilling the right parenting skills to deal with what is said to be the societal fall-out of contemporary and family change. "Knowledge" about parenting…

  16. Effective Reading and Writing Instruction: A Focus on Modeling

    ERIC Educational Resources Information Center

    Regan, Kelley; Berkeley, Sheri

    2012-01-01

    When providing effective reading and writing instruction, teachers need to provide explicit modeling. Modeling is particularly important when teaching students to use cognitive learning strategies. Examples of how teachers can provide specific, explicit, and flexible instructional modeling are presented in the context of two evidence-based…

  17. The Neural Basis of Event Simulation: An fMRI Study

    PubMed Central

    Yomogida, Yukihito; Sugiura, Motoaki; Akimoto, Yoritaka; Miyauchi, Carlos Makoto; Kawashima, Ryuta

    2014-01-01

    Event simulation (ES) is the situational inference process in which perceived event features such as objects, agents, and actions are associated in the brain to represent the whole situation. ES provides a common basis for various cognitive processes, such as perceptual prediction, situational understanding/prediction, and social cognition (such as mentalizing/trait inference). Here, functional magnetic resonance imaging was used to elucidate the neural substrates underlying important subdivisions within ES. First, the study investigated whether ES depends on different neural substrates when it is conducted explicitly and implicitly. Second, the existence of neural substrates specific to the future-prediction component of ES was assessed. Subjects were shown contextually related object pictures implying a situation and performed several picture–word-matching tasks. By varying task goals, subjects were made to infer the implied situation implicitly/explicitly or predict the future consequence of that situation. The results indicate that, whereas implicit ES activated the lateral prefrontal cortex and medial/lateral parietal cortex, explicit ES activated the medial prefrontal cortex, posterior cingulate cortex, and medial/lateral temporal cortex. Additionally, the left temporoparietal junction plays an important role in the future-prediction component of ES. These findings enrich our understanding of the neural substrates of the implicit/explicit/predictive aspects of ES-related cognitive processes. PMID:24789353

  18. Geometry and Function Definition for Discrete Analysis and Its Relationship to the Design Data Base.

    DTIC Science & Technology

    1977-08-01

    clarify its dependence on the design process as a whole. The model generation capabilities of a state-of-the-art structural analysis system (GIFTS 4), heavily oriented towards pre- and post-… independently at a later stage. … hierarchy of definition in GIFTS 4 … A three-dimensional object, to be designed or analyzed …

  19. Exploring a United States Maize Cellulose Biofuel Scenario Using an Integrated Energy and Agricultural Markets Solution Approach

    EPA Science Inventory

    Biofuel feedstock production in the United States (US) is an emergent environmental nutrient management issue, whose exploration can benefit from a multi-scale and multimedia systems modeling approach that explicitly addresses diverging stakeholder interests. In the present anal...

  20. Indexing Theory and Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Robertson, Stephen E.

    1978-01-01

    Describes recent attempts to make explicit connections between the indexing process and the use of the index or information retrieval system, particularly the utility-theoretic and automatic indexing models of William Cooper and Stephen Harter. Theory and performance, information storage and retrieval, search stage feedback, and indexing are also…

  1. Are baboons learning "orthographic" representations? Probably not

    PubMed Central

    Bröker, Franziska; Ramscar, Michael; Baayen, Harald

    2017-01-01

    The ability of baboons (Papio papio) to distinguish between English words and nonwords has been modeled using a deep learning convolutional network model that simulates a ventral pathway in which lexical representations of different granularity develop. However, given that pigeons (Columba livia), whose brain morphology is drastically different, can also be trained to distinguish between English words and nonwords, it appears that a less species-specific learning algorithm may be required to explain this behavior. Accordingly, we examined whether the learning model of Rescorla and Wagner, which has proved amazingly fruitful in understanding animal and human learning, could account for these data. We show that a discrimination learning network using gradient orientation features as input units and word and nonword units as outputs succeeds in predicting baboon lexical decision behavior, including key lexical similarity effects and the ups and downs in accuracy as learning unfolds, with surprising precision. The model's performance, in which words are not explicitly represented, is remarkable because it is usually assumed that lexicality decisions, including the decisions made by baboons and pigeons, are mediated by explicit lexical representations. By contrast, our results suggest that in learning to perform lexical decision tasks, baboons and pigeons do not construct a hierarchy of lexical units. Rather, they make optimal use of low-level information obtained through the massively parallel processing of gradient orientation features. Accordingly, we suggest that reading in humans involves first learning a high-level system building on letter representations acquired from explicit instruction in literacy, which is then integrated into a conventionalized oral communication system, and that, like the latter, fluent reading involves the massively parallel processing of the low-level features encoding semantic contrasts. PMID:28859134

  2. Are baboons learning "orthographic" representations? Probably not.

    PubMed

    Linke, Maja; Bröker, Franziska; Ramscar, Michael; Baayen, Harald

    2017-01-01

    The ability of baboons (Papio papio) to distinguish between English words and nonwords has been modeled using a deep learning convolutional network model that simulates a ventral pathway in which lexical representations of different granularity develop. However, given that pigeons (Columba livia), whose brain morphology is drastically different, can also be trained to distinguish between English words and nonwords, it appears that a less species-specific learning algorithm may be required to explain this behavior. Accordingly, we examined whether the learning model of Rescorla and Wagner, which has proved amazingly fruitful in understanding animal and human learning, could account for these data. We show that a discrimination learning network using gradient orientation features as input units and word and nonword units as outputs succeeds in predicting baboon lexical decision behavior, including key lexical similarity effects and the ups and downs in accuracy as learning unfolds, with surprising precision. The model's performance, in which words are not explicitly represented, is remarkable because it is usually assumed that lexicality decisions, including the decisions made by baboons and pigeons, are mediated by explicit lexical representations. By contrast, our results suggest that in learning to perform lexical decision tasks, baboons and pigeons do not construct a hierarchy of lexical units. Rather, they make optimal use of low-level information obtained through the massively parallel processing of gradient orientation features. Accordingly, we suggest that reading in humans involves first learning a high-level system building on letter representations acquired from explicit instruction in literacy, which is then integrated into a conventionalized oral communication system, and that, like the latter, fluent reading involves the massively parallel processing of the low-level features encoding semantic contrasts.
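    The Rescorla-Wagner rule at the heart of both records above is simple enough to state in a few lines: each cue's association with each outcome moves toward the observed outcome in proportion to a prediction error shared across the cues present on a trial. The toy cues and outcomes below are illustrative; the papers' actual inputs are gradient orientation features, not letters.

    ```python
    # Rescorla-Wagner discrimination learning sketch (toy cues, not the
    # papers' gradient-orientation features): delta-rule updates on a
    # shared prediction error per (cue set, outcome) trial.
    def rescorla_wagner(trials, alpha=0.1):
        V = {}                                 # (cue, outcome) -> association strength
        outcomes = {o for _, o in trials}
        for cues, outcome in trials:
            for o in outcomes:
                lam = 1.0 if o == outcome else 0.0   # observed outcome value
                total = sum(V.get((c, o), 0.0) for c in cues)
                delta = alpha * (lam - total)        # shared prediction error
                for c in cues:
                    V[(c, o)] = V.get((c, o), 0.0) + delta
        return V

    # letters as stand-in cues, lexicality as the outcome
    trials = [({"t", "h", "e"}, "word"), ({"x", "q", "z"}, "nonword")] * 50
    V = rescorla_wagner(trials)
    print(V[("t", "word")] > V[("x", "word")])  # → True
    ```

    Because the error is shared, the three cues of each trial end up splitting the asymptotic association strength of 1.0 between them, which is what produces the cue-competition effects the model is known for.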

  3. Integrable aspects and rogue wave solution of Sasa-Satsuma equation with variable coefficients in the inhomogeneous fiber

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Ping; Yu, Lan; Wei, Guang-Mei

    2018-02-01

    Under investigation in this paper, with symbolic computation, is a variable-coefficient Sasa-Satsuma equation (SSE), which can describe ultrashort pulses in optical fiber communications and the propagation of deep ocean waves. By virtue of the extended Ablowitz-Kaup-Newell-Segur system, a Lax pair for the model is constructed directly. Based on the obtained Lax pair, an auto-Bäcklund transformation is provided, and the explicit one-soliton solution is obtained. Meanwhile, an infinite number of conservation laws in explicit recursive form are derived, indicating integrability in the Liouville sense. Furthermore, an exact explicit rogue wave (RW) solution is presented by use of a Darboux transformation. In addition to the double-peak structure and an analog of the Peregrine soliton, the RW can exhibit an intriguing twisted rogue-wave (TRW) pair that involves four well-defined zero-amplitude points.

  4. A Petri-net coordination model for an intelligent mobile robot

    NASA Technical Reports Server (NTRS)

    Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.

    1990-01-01

    The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate the task processing and to evaluate the efficiency of operations and the responsibility of decisions in the coordination level of the IMRS. Some simulation results on the task processing and learning are presented.
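    The core Petri-net mechanics the model relies on, places holding tokens and transitions that fire only when all input places are marked, can be sketched compactly. The place and transition names below are illustrative stand-ins for the planner, controller, and vision units, not the actual IMRS net.

    ```python
    # Minimal Petri-net sketch (illustrative names, not the IMRS model):
    # a transition is enabled only when every input place holds a token,
    # which is how task precedence among units can be encoded.
    class PetriNet:
        def __init__(self, marking):
            self.marking = dict(marking)       # place -> token count
            self.transitions = {}              # name -> (input places, output places)

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)

        def enabled(self, name):
            inputs, _ = self.transitions[name]
            return all(self.marking.get(p, 0) > 0 for p in inputs)

        def fire(self, name):
            if not self.enabled(name):
                raise RuntimeError(f"{name} is not enabled")
            inputs, outputs = self.transitions[name]
            for p in inputs:
                self.marking[p] -= 1           # consume input tokens
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1

    net = PetriNet({"path_request": 1, "vision_ready": 1})
    net.add_transition("plan_path", ["path_request", "vision_ready"], ["path_ready"])
    net.add_transition("execute", ["path_ready"], ["done"])
    net.fire("plan_path")    # requires both the request and vision data
    net.fire("execute")
    print(net.marking["done"])  # → 1
    ```

    Petri net transducers, as used in the paper, additionally attach input/output symbol mappings to transitions; the token-firing semantics above is the part they share with plain nets.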

  5. Climatological temperature sensitivity of soil carbon turnover: Observations, simple scaling models, and ESMs

    NASA Astrophysics Data System (ADS)

    Koven, C. D.; Hugelius, G.; Lawrence, D. M.; Wieder, W. R.

    2016-12-01

    The projected loss of soil carbon to the atmosphere resulting from climate change is a potentially large but highly uncertain feedback to warming. The magnitude of this feedback is poorly constrained by observations and theory, and is represented disparately across Earth system models. To assess the likely long-term response of soils to climate change, spatial gradients in soil carbon turnover times can identify broad-scale, long-term controls on the rate of carbon cycling as a function of climate and other factors. Here we show that the climatological temperature control on carbon turnover in the top meter of global soils is more sensitive in cold climates than in warm ones. We present a simplified model that explains the high cold-climate sensitivity using only the physical scaling of soil freeze-thaw state across climate gradients. Critically, current Earth system models (ESMs) fail to capture this pattern; however, it emerges from an ESM that explicitly resolves vertical gradients in soil climate and turnover. The weak tropical temperature sensitivity emerges from a different model that explicitly resolves mineralogical control on decomposition. These results support projections of strong future carbon-climate feedbacks from northern soils and demonstrate a method for ESMs to capture this emergent behavior.
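    The freeze-thaw scaling argument can be illustrated with a toy turnover model: a conventional Q10 temperature response multiplied by the unfrozen fraction of the year, which collapses toward zero in cold climates and so steepens the apparent temperature sensitivity there. All parameter values below are illustrative, not the paper's fit.

    ```python
    import math

    # Toy apparent-turnover model (illustrative parameters, not the paper's):
    # tau(T) = tau_ref / (Q10 response * unfrozen fraction of the year).
    def turnover_years(T_mean, tau_ref=20.0, q10=1.5, T_ref=15.0, freeze_width=8.0):
        q10_factor = q10 ** ((T_mean - T_ref) / 10.0)
        # smoothed fraction of the year the soil is unfrozen
        unfrozen = 1.0 / (1.0 + math.exp(-T_mean / freeze_width * 4.0))
        return tau_ref / (q10_factor * unfrozen)

    for T in (-10, 0, 10, 20):
        print(T, round(turnover_years(T), 1))
    ```

    With these numbers the turnover ratio per 10 °C is far larger between -10 and 0 °C than between 10 and 20 °C, reproducing the qualitative cold-climate steepening even though the underlying Q10 is constant.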

  6. Explicit continuous charge-based compact model for long channel heavily doped surrounding-gate MOSFETs incorporating interface traps and quantum effects

    NASA Astrophysics Data System (ADS)

    Hamzah, Afiq; Hamid, Fatimah A.; Ismail, Razali

    2016-12-01

    An explicit solution for long-channel surrounding-gate (SRG) MOSFETs is presented, from intrinsic to heavily doped body, including the effects of interface traps and fixed oxide charges. The solution is based on the core SRG MOSFET model of the Unified Charge Control Model (UCCM) for heavily doped conditions. The UCCM model of highly doped SRG MOSFETs is derived to obtain the exact equivalent expression as in the undoped case. Taking advantage of the undoped explicit charge-based expression, the asymptotic limits below threshold and above threshold have been redefined to include the effect of trap states for heavily doped cases. After solving the asymptotic limits, an explicit mobile charge expression is obtained which includes the trap-state effects. The explicit mobile charge model shows very good agreement with numerical simulation over practical terminal voltages, doping concentrations, geometry effects, and trap-state effects due to fixed oxide charges and interface traps. The drain current is then obtained using the Pao-Sah dual integral, expressed as a function of the inversion charge densities at the source/drain ends. The drain current agrees well with the implicit solution and with numerical simulation in all regions of operation without employing any empirical parameters. A comparison with previous explicit models has been conducted to verify the competency of the proposed model at a doping concentration of 1 × 10^19 cm^-3, where the proposed model has advantages in simplicity and accuracy.
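    The kind of implicit charge relation that explicit models replace can be illustrated in normalized form. The sketch below inverts the textbook UCCM-style relation v = ln(q) + q with Newton's method starting from the subthreshold and strong-inversion asymptotes; it is a generic illustration, not the paper's SRG expressions, which add doping and trap-state terms.

    ```python
    import math

    # Generic UCCM-style inversion sketch (normalized, illustrative): solve
    # the implicit relation v = ln(q) + q for the mobile charge q, seeding
    # Newton's method with the asymptotic limits the abstract refers to.
    def uccm_charge(v, tol=1e-12):
        # asymptotes: q ~ exp(v) below threshold, q ~ v above threshold
        q = math.exp(v) if v < 1 else v
        for _ in range(50):
            f = math.log(q) + q - v
            q_new = q - f / (1.0 / q + 1.0)   # Newton step, f'(q) = 1/q + 1
            if q_new <= 0:
                q_new = q / 2                  # keep the iterate positive
            if abs(q_new - q) < tol:
                return q_new
            q = q_new
        return q

    for v in (-5.0, 0.0, 5.0):
        q = uccm_charge(v)
        print(v, q, math.log(q) + q)          # last column reproduces v
    ```

    An "explicit" model in the paper's sense replaces this iteration with a closed-form approximation that smoothly bridges the two asymptotes.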

  7. Molecular- and Domain-level Microstructure-dependent Material Model for Nano-segregated Polyurea

    DTIC Science & Technology

    2013-04-15

    material subroutine VUMAT of ABAQUS/Explicit (Dassault Systems, 2010), a commercial finite element code. This subroutine is called by the ABAQUS solver... rate of change of the local internal thermal energy is equal to the corresponding rate of dissipative work. Critical assessment of this model identified... The model also takes into account the plastic expansion or contraction of voids, and the stresses are therefore appropriately modified to account for

  8. From Cycle Rooted Spanning Forests to the Critical Ising Model: an Explicit Construction

    NASA Astrophysics Data System (ADS)

    de Tilière, Béatrice

    2013-04-01

    Fisher established an explicit correspondence between the 2-dimensional Ising model defined on a graph G and the dimer model defined on a decorated version 𝒢 of this graph (Fisher in J Math Phys 7:1776-1781, 1966). In this paper we explicitly relate the dimer model associated to the critical Ising model and critical cycle rooted spanning forests (CRSFs). This relation is established through characteristic polynomials, whose definition depends only on the respective fundamental domains and which encode the combinatorics of the model. We first show a matrix-tree type theorem establishing that the dimer characteristic polynomial counts CRSFs of the decorated fundamental domain 𝒢_1. Our main result consists in explicitly constructing CRSFs of 𝒢_1 counted by the dimer characteristic polynomial, from CRSFs of G_1 where edges are assigned Kenyon's critical weight function (Kenyon in Invent Math 150(2):409-439, 2002), thus proving a relation at the level of configurations between two well-known 2-dimensional critical models.

  9. Global phenomena from local rules: Peer-to-peer networks and crystal steps

    NASA Astrophysics Data System (ADS)

    Finkbiner, Amy

    Even simple, deterministic rules can generate interesting behavior in dynamical systems. This dissertation examines some real world systems for which fairly simple, locally defined rules yield useful or interesting properties in the system as a whole. In particular, we study routing in peer-to-peer networks and the motion of crystal steps. Peers can vary by three orders of magnitude in their capacities to process network traffic. This heterogeneity inspires our use of "proportionate load balancing," where each peer provides resources in proportion to its individual capacity. We provide an implementation that employs small, local adjustments to bring the entire network into a global balance. Analytically and through simulations, we demonstrate the effectiveness of proportionate load balancing on two routing methods for de Bruijn graphs, introducing a new "reversed" routing method which performs better than standard forward routing in some cases. The prevalence of peer-to-peer applications prompts companies to locate the hosts participating in these networks. We explore the use of supervised machine learning to identify peer-to-peer hosts, without using application-specific information. We introduce a model for "triples," which exploits information about nearly contemporaneous flows to give a statistical picture of a host's activities. We find that triples, together with measurements of inbound vs. outbound traffic, can capture most of the behavior of peer-to-peer hosts. An understanding of crystal surface evolution is important for the development of modern nanoscale electronic devices. The most commonly studied surface features are steps, which form at low temperatures when the crystal is cut close to a plane of symmetry. Step bunching, when steps arrange into widely separated clusters of tightly packed steps, is one important step phenomenon. 
    We analyze a discrete model for crystal steps, in which the motion of each step depends on the two steps on either side of it. We find a time-dependence term in the motion that does not appear in continuum models, and we determine an explicit dependence on step number.
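    The forward routing on de Bruijn graphs mentioned above can be sketched directly: nodes are length-k strings over an alphabet, and each hop shifts one symbol of the destination into the current label, so any route takes at most k hops (the dissertation's "reversed" routing shifts from the other end). This is the standard textbook scheme, not the dissertation's load-balanced variant.

    ```python
    # Standard forward routing on a de Bruijn graph (illustrative sketch):
    # each hop drops the oldest symbol and appends the next symbol of the
    # destination label; stop early if the destination appears en route.
    def forward_route(src, dst):
        path = [src]
        node = src
        for symbol in dst:
            node = node[1:] + symbol       # shift in one destination symbol
            path.append(node)
        # trim at the first time the walk reaches dst
        return path[:path.index(dst) + 1]

    print(forward_route("000", "110"))     # → ['000', '001', '011', '110']
    ```

    Proportionate load balancing, as described above, would then adjust which node labels each physical peer is responsible for, in proportion to its capacity, without changing this routing rule.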

  10. An explicit microphysics thunderstorm model.

    Treesearch

    R. Solomon; C.M. Medaglia; C. Adamo; S. Dietrick; A. Mugnai; U. Biader Ceipidor

    2005-01-01

    The authors present a brief description of a 1.5-dimensional thunderstorm model with a lightning parameterization that utilizes an explicit microphysical scheme to model lightning-producing clouds. The main intent of this work is to describe the basic microphysical and electrical properties of the model, with a small illustrative section to show how the model may be...

  11. Modelling Root Systems Using Oriented Density Distributions

    NASA Astrophysics Data System (ADS)

    Dupuy, Lionel X.

    2011-09-01

    Root architectural models are essential tools to understand how plants access and utilize soil resources during their development. However, root architectural models use complex geometrical descriptions of the root system, and this limits their ability to model interactions with the soil. This paper presents the development of continuous models based on the concept of the oriented density distribution function. The growth of the root system is described by a hierarchical system of partial differential equations (PDEs) that incorporates single-root growth parameters such as elongation rate, gravitropism and branching rate, which appear explicitly as coefficients of the PDEs. Acquisition and transport of nutrients are then modelled by extending Darcy's law to oriented density distribution functions. This framework was applied to build a model of the growth and water uptake of a barley root system. This study shows that simplified and computationally efficient continuous models of root system development can be constructed. Such models will allow application of root growth models at the field scale.
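For illustration, the advective part of such a density PDE (tip density transported at the elongation rate) can be stepped with a first-order upwind scheme. This is a generic numerical sketch under an assumed constant elongation rate, not the paper's model:

```python
def upwind_step(rho, v, dx, dt):
    """One explicit upwind step of  d(rho)/dt + v * d(rho)/dx = 0,
    advecting a root-tip density profile at elongation rate v > 0."""
    c = v * dt / dx                 # Courant number
    assert 0.0 < c <= 1.0, "explicit upwind step is unstable for c > 1"
    new = [0.0] * len(rho)
    new[0] = (1.0 - c) * rho[0]     # zero-inflow boundary at the top
    for i in range(1, len(rho)):
        new[i] = rho[i] - c * (rho[i] - rho[i - 1])
    return new
```

A pulse of tip density drifts down the profile while total mass is conserved until it reaches the lower boundary.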

  12. Scalable algorithms for 3D extended MHD.

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2007-11-01

    In the modeling of plasmas with extended MHD (XMHD), the challenge is to resolve long time scales while rendering the whole simulation manageable. In XMHD, this is particularly difficult because fast (dispersive) waves are supported, resulting in a very stiff set of PDEs. In explicit schemes, such stiffness results in stringent numerical stability time-step constraints, rendering them inefficient and algorithmically unscalable. In implicit schemes, it yields very ill-conditioned algebraic systems, which are difficult to invert. In this talk, we present recent theoretical and computational progress that demonstrates a scalable 3D XMHD solver (i.e., CPU ∝ N, with N the number of degrees of freedom). The approach is based on Newton-Krylov methods, which are preconditioned for efficiency. The preconditioning stage admits suitable approximations without compromising the quality of the overall solution. In this work, we employ optimal (CPU ∝ N) multilevel methods on a parabolized XMHD formulation, which renders the whole algorithm scalable. The (crucial) parabolization step is required to render XMHD multilevel-friendly. Algebraically, the parabolization step can be interpreted as a Schur factorization of the Jacobian matrix, thereby providing a solid foundation for the current (and future extensions of the) approach. We will build towards 3D extended MHD [L. Chacón, Comput. Phys. Comm. 163 (3), 143-171 (2004); L. Chacón et al., 33rd EPS Conf. Plasma Physics, Rome, Italy, 2006] by discussing earlier algorithmic breakthroughs in 2D reduced MHD [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] and 2D Hall MHD [L. Chacón et al., J. Comput. Phys. 188 (2), 573-592 (2003)].
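The key building block of such Newton-Krylov solvers is that Krylov iterations need only Jacobian-vector products, which can be approximated matrix-free by a finite difference of the residual. A minimal generic sketch, not the talk's solver; the toy residual and test point are invented:

```python
def jacobian_vector(F, u, v, eps=1e-7):
    """Matrix-free approximation of J(u)·v, where J is the Jacobian of
    the residual F:  J(u)·v ≈ (F(u + eps*v) - F(u)) / eps.
    Krylov methods need only this product, never J itself."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]

# Toy residual F(u) = (u0^2 - u1, u0 + u1^2); at u = (1, 2) its Jacobian
# is [[2, -1], [1, 4]], so J·(1, 1) = (1, 5).
def F(u):
    return [u[0] ** 2 - u[1], u[0] + u[1] ** 2]
```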

  13. Systematic Applications of Metabolomics in Metabolic Engineering

    PubMed Central

    Dromms, Robert A.; Styczynski, Mark P.

    2012-01-01

    The goals of metabolic engineering are well-served by the biological information provided by metabolomics: information on how the cell is currently using its biochemical resources is perhaps one of the best ways to inform strategies to engineer a cell to produce a target compound. Using the analysis of extracellular or intracellular levels of the target compound (or a few closely related molecules) to drive metabolic engineering is quite common. However, there is surprisingly little systematic use of metabolomics datasets, which simultaneously measure hundreds of metabolites rather than just a few, for that same purpose. Here, we review the most common systematic approaches to integrating metabolite data with metabolic engineering, with emphasis on existing efforts to use whole-metabolome datasets. We then review some of the most common approaches for computational modeling of cell-wide metabolism, including constraint-based models, and discuss current computational approaches that explicitly use metabolomics data. We conclude with discussion of the broader potential of computational approaches that systematically use metabolomics data to drive metabolic engineering. PMID:24957776

  14. Occupant Responses in a Full-Scale Crash Test of the Sikorsky ACAP Helicopter

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Boitnott, Richard L.; McEntire, Joseph; Lewis, Alan

    2002-01-01

    A full-scale crash test of the Sikorsky Advanced Composite Airframe Program (ACAP) helicopter was performed in 1999 to generate experimental data for correlation with a crash simulation developed using an explicit nonlinear, transient dynamic finite element code. The airframe was the residual flight test hardware from the ACAP program. For the test, the aircraft was outfitted with two crew and two troop seats, and four anthropomorphic test dummies. While the results of the impact test and crash simulation have been documented fairly extensively in the literature, the focus of this paper is to present the detailed occupant response data obtained from the crash test and to correlate the results with injury prediction models. These injury models include the Dynamic Response Index (DRI), the Head Injury Criteria (HIC), the spinal load requirement defined in FAR Part 27.562(c), and a comparison of the duration and magnitude of the occupant vertical acceleration responses with the Eiband whole-body acceleration tolerance curve.

  15. Modelling zwitterions in solution: 3-fluoro-γ-aminobutyric acid (3F-GABA).

    PubMed

    Cao, Jie; Bjornsson, Ragnar; Bühl, Michael; Thiel, Walter; van Mourik, Tanja

    2012-01-02

    The conformations and relative stabilities of folded and extended 3-fluoro-γ-aminobutyric acid (3F-GABA) conformers were studied using explicit solvation models. Geometry optimisations in the gas phase with one or two explicit water molecules favour folded and neutral structures containing intramolecular NH···O-C hydrogen bonds. With three or five explicit water molecules zwitterionic minima are obtained, with folded structures being preferred over extended conformers. The stability of folded versus extended zwitterionic conformers increases on going from a PCM continuum solvation model to the microsolvated complexes, though extended structures become less disfavoured with the inclusion of more water molecules. Full explicit solvation was studied with a hybrid quantum-mechanical/molecular-mechanical (QM/MM) scheme and molecular dynamics simulations, including more than 6000 TIP3P water molecules. According to free energies obtained from thermodynamic integration at the PM3/MM level and corrected for B3LYP/MM total energies, the fully extended conformer is more stable than the folded ones by about 4.5 kJ mol⁻¹. B3LYP-computed ³J(F,H) NMR spin-spin coupling constants, averaged over PM3/MM-MD trajectories, agree best with experiment for this fully extended form, in accordance with the original NMR analysis. The seeming discrepancy between static PCM calculations and experiment noted previously is now resolved. That the inexpensive semiempirical PM3 method performs so well for this archetypical zwitterion is encouraging for further QM/MM studies of biomolecular systems. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Graph-based analysis of connectivity in spatially-explicit population models: HexSim and the Connectivity Analysis Toolkit

    EPA Science Inventory

    Background / Question / Methods Planning for the recovery of threatened species is increasingly informed by spatially-explicit population models. However, using simulation model results to guide land management decisions can be difficult due to the volume and complexity of model...

  17. On ultrasound-induced microbubble oscillation in a capillary blood vessel and its implications for the blood-brain barrier

    NASA Astrophysics Data System (ADS)

    Wiedemair, W.; Tuković, Ž.; Jasak, H.; Poulikakos, D.; Kurtcuoglu, V.

    2012-02-01

    The complex interaction between an ultrasound-driven microbubble and an enclosing capillary microvessel is investigated by means of a coupled, multi-domain numerical model using the finite volume formulation. This system is of interest in the study of transient blood-brain barrier disruption (BBBD) for drug delivery applications. The compliant vessel structure is incorporated explicitly as a distinct domain described by a dedicated physical model. Red blood cells (RBCs) are taken into account as elastic solids in the blood plasma. We report the temporal and spatial development of transmural pressure (Ptm) and wall shear stress (WSS) at the luminal endothelial interface, both of which are candidates for the yet unknown mediator of BBBD. The explicit introduction of RBCs shapes the Ptm and WSS distributions and their derivatives markedly. While the peak values of these mechanical wall parameters are not affected considerably by the presence of RBCs, a pronounced increase in their spatial gradients is observed compared to a configuration with blood plasma alone. The novelty of our work lies in the explicit treatment of the vessel wall, and in the modelling of blood as a composite fluid, which we show to be relevant for the mechanical processes at the endothelium.

  18. A hydroeconomic modeling framework for optimal integrated management of forest and water

    NASA Astrophysics Data System (ADS)

    Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel

    2016-10-01

    Forests play a determinant role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. In order to illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.
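The trade-off the optimization resolves, more recharge versus higher silvicultural cost, can be caricatured in a few lines. All coefficients below (water value, recharge response, cost) are invented for illustration and are not the paper's calibrated values:

```python
def net_benefit(thin_fraction, water_price=0.2, recharge_max=1000.0,
                cost_rate=150.0):
    """Toy net benefit of thinning a stand: extra groundwater recharge
    grows with the canopy fraction removed (with diminishing returns),
    while operation cost grows linearly. Purely illustrative numbers."""
    recharge_gain = recharge_max * (1.0 - (1.0 - thin_fraction) ** 2)
    return water_price * recharge_gain - cost_rate * thin_fraction

# Enumerate candidate thinning intensities and keep the most profitable:
best = max((i / 20.0 for i in range(21)), key=net_benefit)
```

With these numbers the optimum is an interior thinning intensity (near 0.6): thinning nothing forgoes the water value, while clearing everything pays the full cost for little extra recharge.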

  19. A Hierarchical Modeling Study of the Interactions Among Turbulence, Cloud Microphysics, and Radiative Transfer in the Evolution of Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Curry, Judith; Khvorostyanov, V. I.

    2005-01-01

    This project used a hierarchy of cloud resolving models to address the following science issues of relevance to CRYSTAL-FACE: What ice crystal nucleation mechanisms are active in the different types of cirrus clouds in the Florida area and how do these different nucleation processes influence the evolution of the cloud system and the upper tropospheric humidity? How does the feedback between supersaturation and nucleation impact the evolution of the cloud? What is the relative importance of the large-scale vertical motion and the turbulent motions in the evolution of the crystal size spectra? How do the size spectra impact the life-cycle of the cloud, stratospheric dehydration, and cloud radiative forcing? What is the nature of the turbulence and waves in the upper troposphere generated by precipitating deep convective cloud systems? How do cirrus microphysical and optical properties vary with the small-scale dynamics? How do turbulence and waves in the upper troposphere influence the cross-tropopause mixing and stratospheric and upper tropospheric humidity? The models used in this study were: (1) a 2-D hydrostatic model with explicit microphysics that can account for 30 size bins for both the droplet and crystal size spectra, into which, notably, a new ice crystal nucleation scheme has been incorporated; (2) a parcel model with explicit microphysics, for developing and evaluating microphysical parameterizations; and (3) a single-column model for testing bulk microphysics parameterizations.

  20. A new method for calculating time-dependent atomic level populations

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.

    1981-01-01

    A method is described for reducing the number of levels to be dealt with in calculating time-dependent populations of atoms or ions in plasmas. The procedure effectively extends the collisional-radiative model to consecutive stages of ionization, treating ground and metastable levels explicitly and excited levels implicitly. Direct comparisons of full and simulated systems are carried out for five-level models.
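The explicit/implicit split can be illustrated on a minimal three-level system: assuming the excited level is in quasi-steady state eliminates it algebraically, leaving an effective rate matrix for the explicitly treated ground and metastable levels. A generic sketch with invented rate coefficients, not the paper's procedure in detail:

```python
def reduce_three_level(R):
    """Collapse a 3-level linear rate system dn/dt = R @ n (levels:
    0 = ground, 1 = metastable, 2 = excited) to an effective 2-level
    system by assuming quasi-steady state for the excited level, i.e.
    n2 = -(R[2][0]*n0 + R[2][1]*n1) / R[2][2]."""
    return [[R[g][h] - R[g][2] * R[2][h] / R[2][2] for h in range(2)]
            for g in range(2)]

# Invented rates: excitation 10 (ground -> excited), radiative decay
# 100 (excited -> ground) and 300 (excited -> metastable), slow
# metastable decay 1 (metastable -> ground); columns sum to zero.
R = [[-10.0,  1.0,  100.0],
     [  0.0, -1.0,  300.0],
     [ 10.0,  0.0, -400.0]]
```

The effective matrix comes out as [[-7.5, 1.0], [7.5, -1.0]]: of the 10 units leaving the ground level, 2.5 return immediately via radiative decay and 7.5 effectively feed the metastable level. Its columns still sum to zero, so population is conserved.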

  1. CONSTRUCTING, PERTURBATION ANALYSIS AND TESTING OF A MULTI-HABITAT PERIODIC MATRIX POPULATION MODEL

    EPA Science Inventory

    We present a matrix model that explicitly incorporates spatial habitat structure and seasonality and discuss preliminary results from a landscape level experimental test. Ecological risk to populations is often modeled without explicit treatment of spatially or temporally distri...

  2. Quantization of a U(1) gauged chiral boson in the Batalin-Fradkin-Vilkovisky scheme

    NASA Astrophysics Data System (ADS)

    Ghosh, Subir

    1994-03-01

    The scheme developed by Batalin, Fradkin, and Vilkovisky (BFV) to convert a second-class constrained system to a first-class one (having gauge invariance) is used in the Floreanini-Jackiw formulation of the chiral boson interacting with a U(1) gauge field. Explicit expressions of the BRST charge, the unitarizing Hamiltonian, and the BRST invariant effective action are provided and the full quantization is carried through. The spectra in both cases have been analyzed to show the presence of the proper chiral components explicitly. In the gauged model, Wess-Zumino terms in terms of the Batalin-Fradkin fields are identified.

  3. Quantization of a U(1) gauged chiral boson in the Batalin-Fradkin-Vilkovisky scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, S.

    1994-03-15

    The scheme developed by Batalin, Fradkin, and Vilkovisky (BFV) to convert a second-class constrained system to a first-class one (having gauge invariance) is used in the Floreanini-Jackiw formulation of the chiral boson interacting with a U(1) gauge field. Explicit expressions of the BRST charge, the unitarizing Hamiltonian, and the BRST invariant effective action are provided and the full quantization is carried through. The spectra in both cases have been analyzed to show the presence of the proper chiral components explicitly. In the gauged model, Wess-Zumino terms in terms of the Batalin-Fradkin fields are identified.

  4. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
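In dimensionless form the implicit GA model reads tau = F - ln(1 + F). A sketch of the implicit reference solution (Newton iteration) next to one of the explicit approximations compared above, that of Li et al. (1976), which follows from the Padé approximation ln(1 + F) ≈ 2F/(2 + F):

```python
import math

def ga_implicit(tau, tol=1e-12):
    """Dimensionless implicit Green-Ampt: solve tau = F - ln(1 + F)
    for cumulative infiltration F by Newton iteration."""
    F = max(tau, 1e-6)                     # crude initial guess
    for _ in range(100):
        # residual g = F - ln(1+F) - tau, derivative g' = F / (1+F)
        step = (F - math.log(1.0 + F) - tau) / (F / (1.0 + F))
        F -= step
        if abs(step) < tol:
            return F
    return F

def ga_li(tau):
    """Li et al. (1976) explicit approximation:
    F = (tau + sqrt(tau**2 + 8*tau)) / 2."""
    return 0.5 * (tau + math.sqrt(tau * tau + 8.0 * tau))
```

At tau = 1 the explicit formula gives F = 2 against an implicit value near 2.15, an error of roughly 7 percent, which is consistent with LI ranking below the more accurate BA, PA, and VA approximations above.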

  5. Modelling explicit tides in the Indonesian seas: An important process for surface sea water properties.

    PubMed

    Nugroho, Dwiyoga; Koch-Larrouy, Ariane; Gaspar, Philippe; Lyard, Florent; Reffray, Guillaume; Tranchant, Benoit

    2018-06-01

    Very intense internal tides occur in the Indonesian seas. They dissipate and affect the vertical distribution of temperature and currents, which in turn influence the survival rates and transport of most planktonic organisms at the base of the whole marine ecosystem. This study uses the INDESO physical model to characterize the spatio-temporal patterns of internal tides in the Indonesian Seas. The model reproduced internal tide dissipation in agreement with previous fine-structure and microstructure in-situ observations at the sites of generation. The model also produced water mass transformation similar to the previous parameterization of Koch-Larrouy et al. (2007) and showed good agreement with observations. The resulting surface cooling is 0.3°C, with maxima of 0.8°C at the locations of internal tide energy and stronger cooling in austral winter. The spring-neap tidal cycle modulates this impact by 0.1°C to 0.3°C. These results suggest that mixing due to internal tides might also upwell nutrients to the surface at frequencies similar to the tidal frequencies. The implications for biogeochemical modelling are important. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Attribute And-Or Grammar for Joint Parsing of Human Pose, Parts and Attributes.

    PubMed

    Park, Seyoung; Nie, Xiaohan; Zhu, Song-Chun

    2017-07-25

    This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph, with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts and accounts for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) a phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) a dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) an attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, which is intuitive and interpretable. We conduct experiments on two tasks on two datasets, and the experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance than existing methods on both pose estimation and attribute prediction tasks.

  7. A New Canopy Integration Factor

    NASA Astrophysics Data System (ADS)

    Badgley, G.; Anderegg, L. D. L.; Baker, I. T.; Berry, J. A.

    2017-12-01

    Ecosystem modelers have long debated how best to represent within-canopy heterogeneity. Can one big leaf represent the full range of canopy physiological responses? Or do you need two leaves, sun and shade, to get things right? Is it sufficient to treat the canopy as a diffuse medium? Or would it be better to explicitly represent separate canopy layers? These are open questions that have been the subject of an enormous amount of research and scrutiny. Yet regardless of how the canopy is represented, each model must grapple with correctly parameterizing its canopy in a way that properly translates leaf-level processes to the canopy and ecosystem scale. We present a new approach for integrating whole-canopy biochemistry by combining remote sensing with ecological theory. Using the Simple Biosphere model (SiB), we redefined how SiB scales photosynthetic processes from leaf to canopy as a function of satellite-derived measurements of solar-induced chlorophyll fluorescence (SIF). Across multiple long-term study sites, our approach improves the accuracy of daily modeled photosynthesis by as much as 25 percent. We share additional insights on how SIF might be more directly integrated into photosynthesis models and present ideas for harnessing SIF to more accurately parameterize canopy biochemical variables.
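For contrast with the SIF-based scaling proposed here, the classical big-leaf canopy integration factor used in this model family assumes photosynthetic capacity decays exponentially with cumulative leaf area; canopy photosynthesis is then the top-leaf rate times the factor below (k is an assumed extinction coefficient, not a value from this work):

```python
import math

def canopy_integration_factor(lai, k=0.5):
    """Big-leaf canopy integration: if leaf photosynthetic capacity
    decays as exp(-k * L) with cumulative leaf area index L, integrating
    from 0 to LAI gives the factor (1 - exp(-k * LAI)) / k."""
    return (1.0 - math.exp(-k * lai)) / k

# At low LAI the factor is ~LAI (every leaf contributes fully);
# for deep canopies it saturates at 1/k.
```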

  8. Direct versus Indirect Explicit Methods of Enhancing EFL Students' English Grammatical Competence: A Concept Checking-Based Consciousness-Raising Tasks Model

    ERIC Educational Resources Information Center

    Dang, Trang Thi Doan; Nguyen, Huong Thu

    2013-01-01

    Two approaches to grammar instruction are often discussed in the ESL literature: direct explicit grammar instruction (DEGI) (deduction) and indirect explicit grammar instruction (IEGI) (induction). This study aims to explore the effects of indirect explicit grammar instruction on EFL learners' mastery of English tenses. Ninety-four…

  9. Models for the Effects of G-seat Cuing on Roll-axis Tracking Performance

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Mcmillan, G. R.; Martin, E. A.

    1984-01-01

    Including whole-body motion in a flight simulator improves performance for a variety of tasks requiring a pilot to compensate for the effects of unexpected disturbances. A possible mechanism for this improvement is that whole-body motion provides high-derivative vehicle state information which allows the pilot to generate more lead in responding to the external disturbances. During development of motion-simulating algorithms for an advanced g-cuing system, it was discovered that an algorithm based on aircraft roll acceleration produced little or no performance improvement. On the other hand, algorithms based on roll position or roll velocity produced performance equivalent to whole-body motion. The analysis and modeling conducted at both the sensory-system and manual-control-performance levels to explain the above results are described.

  10. Watershed System Model: The Essentials to Model Complex Human-Nature System at the River Basin Scale

    NASA Astrophysics Data System (ADS)

    Li, Xin; Cheng, Guodong; Lin, Hui; Cai, Ximing; Fang, Miao; Ge, Yingchun; Hu, Xiaoli; Chen, Min; Li, Weiyue

    2018-03-01

    Watershed system models are urgently needed to understand complex watershed systems and to support integrated river basin management. Early watershed modeling efforts focused on the representation of hydrologic processes, while the next-generation watershed models should represent the coevolution of the water-land-air-plant-human nexus in a watershed and provide capability of decision-making support. We propose a new modeling framework and discuss the know-how approach to incorporate emerging knowledge into integrated models through data exchange interfaces. We argue that the modeling environment is a useful tool to enable effective model integration, as well as create domain-specific models of river basin systems. The grand challenges in developing next-generation watershed system models include but are not limited to providing an overarching framework for linking natural and social sciences, building a scientifically based decision support system, quantifying and controlling uncertainties, and taking advantage of new technologies and new findings in the various disciplines of watershed science. The eventual goal is to build transdisciplinary, scientifically sound, and scale-explicit watershed system models that are to be codesigned by multidisciplinary communities.

  11. Design of a multi-agent hydroeconomic model to simulate a complex human-water system: Early insights from the Jordan Water Project

    NASA Astrophysics Data System (ADS)

    Yoon, J.; Klassert, C. J. A.; Lachaut, T.; Selby, P. D.; Knox, S.; Gorelick, S.; Rajsekhar, D.; Tilmant, A.; Avisse, N.; Harou, J. J.; Gawel, E.; Klauer, B.; Mustafa, D.; Talozi, S.; Sigel, K.

    2015-12-01

    Our work focuses on development of a multi-agent, hydroeconomic model for purposes of water policy evaluation in Jordan. The model adopts a modular approach, integrating biophysical modules that simulate natural and engineered phenomena with human modules that represent behavior at multiple levels of decision making. The hydrologic modules are developed using spatially-distributed groundwater and surface water models, which are translated into compact simulators for efficient integration into the multi-agent model. For the groundwater model, we adopt a response matrix method approach in which a 3-dimensional MODFLOW model of a complex regional groundwater system is converted into a linear simulator of groundwater response by pre-processing drawdown results from several hundred numerical simulation runs. Surface water models for each major surface water basin in the country are developed in SWAT and similarly translated into simple rainfall-runoff functions for integration with the multi-agent model. The approach balances physically-based, spatially-explicit representation of hydrologic systems with the efficiency required for integration into a complex multi-agent model that is computationally amenable to robust scenario analysis. For the multi-agent model, we explicitly represent human agency at multiple levels of decision making, with agents representing riparian, management, supplier, and water user groups. The agents' decision making models incorporate both rule-based heuristics as well as economic optimization. The model is programmed in Python using Pynsim, a generalizable, open-source object-oriented code framework for modeling network-based water resource systems. The Jordan model is one of the first applications of Pynsim to a real-world water management case study. 
Preliminary results from a tanker market scenario run through year 2050 are presented, in which several salient features of the water system are investigated: competition between urban and private farmer agents, the emergence of a private tanker market, disparities in economic well-being among different user groups caused by unique supply conditions, and the response of the complex system to various policy interventions.
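The response matrix step described above reduces groundwater prediction to linear superposition: drawdown at each observation point is a weighted sum of the pumping stresses, with weights precomputed from the MODFLOW runs. A minimal sketch with an invented 2-well, 3-stress matrix, not the Jordan model's actual coefficients:

```python
def drawdown(R, q):
    """Drawdown at observation wells from pumping rates q via a
    precomputed response matrix R (rows: wells, columns: stresses):
    s = R @ q."""
    return [sum(r * qj for r, qj in zip(row, q)) for row in R]

# Hypothetical responses (drawdown per unit pumping rate):
R = [[0.4, 0.1, 0.0],
     [0.1, 0.3, 0.2]]
q = [10.0, 0.0, 5.0]   # pumping rates at the three stress locations
# drawdown(R, q) -> [4.0, 2.0]
```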

  12. Keeping speed and distance for aligned motion

    NASA Astrophysics Data System (ADS)

    Farkas, Illés J.; Kun, Jeromos; Jin, Yi; He, Gaoqi; Xu, Mingliang

    2015-01-01

    The cohesive collective motion (flocking, swarming) of autonomous agents is ubiquitously observed and exploited in both natural and man-made settings, thus, minimal models for its description are essential. In a model with continuous space and time we find that if two particles arrive symmetrically in a plane at a large angle, then (i) radial repulsion and (ii) linear self-propelling toward a fixed preferred speed are sufficient for them to depart at a smaller angle. For this local gain of momentum explicit velocity alignment is not necessary, nor are adhesion or attraction, inelasticity or anisotropy of the particles, or nonlinear drag. With many particles obeying these microscopic rules of motion we find that their spatial confinement to a square with periodic boundaries (which is an indirect form of attraction) leads to stable macroscopic ordering. As a function of the strength of added noise we see—at finite system sizes—a critical slowing down close to the order-disorder boundary and a discontinuous transition. After varying the density of particles at constant system size and varying the size of the system with constant particle density we predict that in the infinite system size (or density) limit the hysteresis loop disappears and the transition becomes continuous. We note that animals, humans, drones, etc., tend to move asynchronously and are often more responsive to motion than positions. Thus, for them velocity-based continuous models can provide higher precision than coordinate-based models. An additional characteristic and realistic feature of the model is that convergence to the ordered state is fastest at a finite density, which is in contrast to models applying (discontinuous) explicit velocity alignments and discretized time. To summarize, we find that the investigated model can provide a minimal description of flocking.
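Rules (i) and (ii) of the abstract are simple enough to state in code. This is a minimal Euler-integration sketch with invented parameter values, not the authors' simulation:

```python
import math

def step(pos, vel, dt=0.01, v0=1.0, gamma=1.0, k_rep=1.0, r_rep=1.0):
    """One Euler step of the two microscopic rules: (i) radial repulsion
    inside radius r_rep and (ii) linear relaxation of each particle's
    speed toward the preferred speed v0."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        fx = fy = 0.0
        for j in range(n):
            if i == j:
                continue
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            d = math.hypot(dx, dy)
            if 0 < d < r_rep:                 # (i) radial repulsion
                fx += k_rep * (r_rep - d) * dx / d
                fy += k_rep * (r_rep - d) * dy / d
        vx, vy = vel[i]
        speed = math.hypot(vx, vy)
        if speed > 0:                         # (ii) self-propulsion
            fx += gamma * (v0 - speed) * vx / speed
            fy += gamma * (v0 - speed) * vy / speed
        new_vel.append((vx + dt * fx, vy + dt * fy))
    new_pos = [(x + dt * vx, y + dt * vy)
               for (x, y), (vx, vy) in zip(pos, new_vel)]
    return new_pos, new_vel
```

Note that no velocity-alignment term appears anywhere: ordering in the full model emerges from repulsion, self-propulsion, and confinement alone.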

  13. Keeping speed and distance for aligned motion.

    PubMed

    Farkas, Illés J; Kun, Jeromos; Jin, Yi; He, Gaoqi; Xu, Mingliang

    2015-01-01

    The cohesive collective motion (flocking, swarming) of autonomous agents is ubiquitously observed and exploited in both natural and man-made settings, thus, minimal models for its description are essential. In a model with continuous space and time we find that if two particles arrive symmetrically in a plane at a large angle, then (i) radial repulsion and (ii) linear self-propelling toward a fixed preferred speed are sufficient for them to depart at a smaller angle. For this local gain of momentum explicit velocity alignment is not necessary, nor are adhesion or attraction, inelasticity or anisotropy of the particles, or nonlinear drag. With many particles obeying these microscopic rules of motion we find that their spatial confinement to a square with periodic boundaries (which is an indirect form of attraction) leads to stable macroscopic ordering. As a function of the strength of added noise we see--at finite system sizes--a critical slowing down close to the order-disorder boundary and a discontinuous transition. After varying the density of particles at constant system size and varying the size of the system with constant particle density we predict that in the infinite system size (or density) limit the hysteresis loop disappears and the transition becomes continuous. We note that animals, humans, drones, etc., tend to move asynchronously and are often more responsive to motion than positions. Thus, for them velocity-based continuous models can provide higher precision than coordinate-based models. An additional characteristic and realistic feature of the model is that convergence to the ordered state is fastest at a finite density, which is in contrast to models applying (discontinuous) explicit velocity alignments and discretized time. To summarize, we find that the investigated model can provide a minimal description of flocking.

  14. Incorporation of expert variability into breast cancer treatment recommendation in designing clinical protocol guided fuzzy rule system models.

    PubMed

    Garibaldi, Jonathan M; Zhou, Shang-Ming; Wang, Xiao-Ying; John, Robert I; Ellis, Ian O

    2012-06-01

    It has often been demonstrated that clinicians exhibit both inter-expert and intra-expert variability when making difficult decisions. In contrast, the vast majority of computerized models that aim to provide automated support for such decisions do not explicitly recognize or replicate this variability. Furthermore, the perfect consistency of computerized models is often presented as a de facto benefit. In this paper, we describe a novel approach to incorporating variability within a fuzzy inference system using non-stationary fuzzy sets in order to replicate human variability. We apply our approach to a decision problem concerning the recommendation of post-operative breast cancer treatment; specifically, whether or not to administer chemotherapy based on assessment of five clinical variables: NPI (the Nottingham Prognostic Index), estrogen receptor status, vascular invasion, age and lymph node status. In doing so, we explore whether such explicit modeling of variability provides any performance advantage over a more conventional fuzzy approach, when tested on a set of 1310 unselected cases collected over a fourteen-year period at the Nottingham University Hospitals NHS Trust, UK. The experimental results show that the standard fuzzy inference system (which does not model variability) achieves overall agreement with clinical practice of around 84.6% (95% CI: 84.1-84.9%), while the non-stationary fuzzy model significantly increases performance to around 88.1% (95% CI: 88.0-88.2%), p<0.001. We conclude that non-stationary fuzzy models provide a valuable new approach that may be applied to clinical decision support systems in any application domain. Copyright © 2012 Elsevier Inc. All rights reserved.
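
The core idea of a non-stationary fuzzy set, a membership function whose parameters are perturbed on each evaluation so that repeated inferences vary slightly, can be sketched as follows. The Gaussian shape, the parameter names, and the perturbation magnitude are illustrative assumptions, not the cited clinical model.

```python
import random, math

def nonstationary_membership(x, center=5.0, width=1.0, sigma=0.1, rng=random):
    """Toy non-stationary fuzzy set: a Gaussian membership function whose
    center is perturbed on every evaluation, mimicking intra-expert
    variability.  All parameters are illustrative assumptions."""
    c = center + rng.gauss(0.0, sigma)   # the variability is injected here
    return math.exp(-((x - c) ** 2) / (2 * width ** 2))
```

Evaluating the same input repeatedly yields slightly different membership degrees, which is exactly the behavior a stationary fuzzy set cannot produce.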

  15. GIAO-DFT calculation of 15N NMR chemical shifts of Schiff bases: Accuracy factors and protonation effects.

    PubMed

    Semenov, Valentin A; Samultsev, Dmitry O; Krivdin, Leonid B

    2018-02-09

    15N NMR chemical shifts in a representative series of Schiff bases, together with their protonated forms, have been calculated at the density functional theory level and compared with available experiment. A number of functionals and basis sets have been tested in terms of agreement with experiment. Complementary to the gas-phase results, two solvation models were examined: a classical Tomasi polarizable continuum model (PCM), and the same model combined with the explicit inclusion of one solvent molecule in the calculation space to form a 1:1 supermolecule (SM + PCM). The best results are achieved with the PCM and SM + PCM models, which yield mean absolute errors of the calculated 15N NMR chemical shifts over the whole series of neutral and protonated Schiff bases of 5.2 and 5.8 ppm, respectively, compared with 15.2 ppm in the gas phase, for a range of about 200 ppm. Noticeable protonation effects (exceeding 100 ppm) in protonated Schiff bases are rationalized in terms of a general natural bond orbital approach. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Advancing reservoir operation description in physically based hydrological models

    NASA Astrophysics Data System (ADS)

    Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo

    2016-04-01

    Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision making processes in response, e.g., to fluctuations of energy prices and demands, the temporal unavailability of power plants, or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternative modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production. Results show the extent to which the hydrological regime in the catchment is affected by the different behavioural models and reservoir operating strategies.
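
The target-level rule curve described above, a release decision driven by the deviation of the current reservoir level from a seasonal target, can be sketched as a one-liner. The proportional gain `k` and the release bounds are illustrative assumptions, not values from the cited study.

```python
def rule_curve_release(level, target_level, k=0.5,
                       min_release=0.0, max_release=100.0):
    """Target-level rule curve: release proportional to the deviation of
    the current reservoir level from the target, clipped to physical
    bounds.  Gain and bounds are illustrative assumptions."""
    release = k * (level - target_level)
    return max(min_release, min(max_release, release))
```

This is precisely the kind of simplistic rule the abstract contrasts with the behavioural models it proposes: it tracks the seasonal target but cannot react to prices, outages, or snowpack.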

  17. The Systems Biology Markup Language (SBML) Level 3 Package: Qualitative Models, Version 1, Release 1.

    PubMed

    Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš

    2015-09-04

    Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as, for example, gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activity with entity pools, together with the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued), and some classes of Petri net models can also be encoded with this approach.

  18. Complementary and alternative medicine whole systems research: beyond identification of inadequacies of the RCT.

    PubMed

    Verhoef, Marja J; Lewith, George; Ritenbaugh, Cheryl; Boon, Heather; Fleishman, Susan; Leis, Anne

    2005-09-01

    Complementary and alternative medicine (CAM) often consists of whole systems of care (such as naturopathic medicine or traditional Chinese medicine (TCM)) that combine a wide range of modalities to provide individualised treatment. The complexity of these interventions and their potential synergistic effect requires innovative evaluative approaches. Model validity, which encompasses the need for research to adequately address the unique healing theory and therapeutic context of the intervention, is central to whole systems research (WSR). Classical randomised controlled trials (RCTs) are limited in their ability to address this need. Therefore, we propose a mixed methods approach that includes a range of relevant and holistic outcome measures. As the individual components of most whole systems are inseparable, complementary and synergistic, WSR must not focus only on the "active" ingredients of a system. An emerging WSR framework must be non-hierarchical, cyclical, flexible and adaptive, as knowledge creation is continuous, evolutionary and necessitates a continuous interplay between research methods and "phases" of knowledge. Finally, WSR must hold qualitative and quantitative research methods in equal esteem to realize their unique research contribution. Whole systems are complex and therefore no one method can adequately capture the meaning, process and outcomes of these interventions.

  19. Engineering Complex Embedded Systems with State Analysis and the Mission Data System

    NASA Technical Reports Server (NTRS)

    Ingham, Michel D.; Rasmussen, Robert D.; Bennett, Matthew B.; Moncada, Alex C.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering methodology called State Analysis, which provides a process for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using State Analysis and how these requirements inform the design of the system software, using representative spacecraft examples.

  20. Towards collaborative filtering recommender systems for tailored health communications.

    PubMed

    Marlin, Benjamin M; Adams, Roy J; Sadasivam, Rajani; Houston, Thomas K

    2013-01-01

    The goal of computer tailored health communications (CTHC) is to promote healthy behaviors by sending messages tailored to individual patients. Current CTHC systems collect baseline patient "profiles" and then use expert-written, rule-based systems to target messages to subsets of patients. Our main interest in this work is the study of collaborative filtering-based CTHC systems that can learn to tailor future message selections to individual patients based on explicit feedback about past message selections. This paper reports the results of a study designed to collect explicit feedback (ratings) regarding four aspects of messages from 100 subjects in the smoking cessation support domain. Our results show that most users have positive opinions of most messages and that the ratings for all four aspects of the messages are highly correlated with each other. Finally, we conduct a range of rating prediction experiments comparing several different model variations. Our results show that predicting future ratings based on each user's past ratings contributes the most to predictive accuracy.
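
The finding that each user's past ratings are the strongest predictor of their future ratings corresponds to the classic per-user mean baseline in collaborative filtering. A minimal sketch (data layout and fallback value are assumptions, not details from the study):

```python
def predict_user_mean(ratings, user, global_mean=3.0):
    """Predict a user's future rating as the mean of their past ratings,
    the per-user baseline; falls back to an assumed global mean for
    users with no rating history."""
    past = ratings.get(user, [])
    return sum(past) / len(past) if past else global_mean
```

Full collaborative filtering models refine this baseline with cross-user information, but the abstract's result is that this per-user component carries most of the predictive accuracy.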

  1. Towards Collaborative Filtering Recommender Systems for Tailored Health Communications

    PubMed Central

    Marlin, Benjamin M.; Adams, Roy J.; Sadasivam, Rajani; Houston, Thomas K.

    2013-01-01

    The goal of computer tailored health communications (CTHC) is to promote healthy behaviors by sending messages tailored to individual patients. Current CTHC systems collect baseline patient “profiles” and then use expert-written, rule-based systems to target messages to subsets of patients. Our main interest in this work is the study of collaborative filtering-based CTHC systems that can learn to tailor future message selections to individual patients based on explicit feedback about past message selections. This paper reports the results of a study designed to collect explicit feedback (ratings) regarding four aspects of messages from 100 subjects in the smoking cessation support domain. Our results show that most users have positive opinions of most messages and that the ratings for all four aspects of the messages are highly correlated with each other. Finally, we conduct a range of rating prediction experiments comparing several different model variations. Our results show that predicting future ratings based on each user’s past ratings contributes the most to predictive accuracy. PMID:24551430

  2. Error Generation in CATS-Based Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd

    2003-01-01

    This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.

  3. A multiscale, model-based analysis of the multi-tissue interplay underlying blood glucose regulation in type I diabetes.

    PubMed

    Wadehn, Federico; Schaller, Stephan; Eissing, Thomas; Krauss, Markus; Kupfer, Lars

    2016-08-01

    A multiscale model for blood glucose regulation in type I diabetes patients is constructed by integrating detailed metabolic network models for fat, liver and muscle cells into a whole-body physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) model. The blood glucose regulation PBPK/PD model simulates the distribution and metabolization of glucose, insulin and glucagon at the organ and whole-body level. The genome-scale metabolic networks, in contrast, describe intracellular reactions. The developed multiscale model is fitted to insulin, glucagon and glucose measurements from a 48 h clinical trial featuring 6 subjects and is subsequently used to simulate (in silico) the influence of gene knockouts and drug-induced enzyme inhibitions on whole-body blood glucose levels. Simulations of diabetes-associated gene knockouts and impaired cellular glucose metabolism resulted in elevated whole-body blood glucose levels, but also in a metabolic shift within the cell's reaction network. Such multiscale models have the potential to be employed in the exploration of novel drug targets or to be integrated into control algorithms for artificial pancreas systems.

  4. Using travel times to simulate multi-dimensional bioreactive transport in time-periodic flows.

    PubMed

    Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A

    2016-04-01

    In travel-time models, the spatially explicit description of reactive transport is replaced by associating reactive-species concentrations with the travel time or groundwater age at all locations. These models have been shown adequate for reactive transport in river-bank filtration under steady-state flow conditions. Dynamic hydrological conditions, however, can lead to fluctuations of infiltration velocities, putting the validity of travel-time models into question. In transient flow, the local travel-time distributions change with time. We show that a modified version of travel-time based reactive transport models is valid if only the magnitude of the velocity fluctuates, whereas its spatial orientation remains constant. We simulate nonlinear, one-dimensional, bioreactive transport involving oxygen, nitrate, dissolved organic carbon, aerobic and denitrifying bacteria, considering periodic fluctuations of velocity. These fluctuations make the bioreactive system pulsate: The aerobic zone decreases at times of low velocity and increases at those of high velocity. For the case of diurnal fluctuations, the biomass concentrations cannot follow the hydrological fluctuations and a transition zone containing both aerobic and obligatory denitrifying bacteria is established, whereas a clear separation of the two types of bacteria prevails in the case of seasonal velocity fluctuations. We map the 1-D results to a heterogeneous, two-dimensional domain by means of the mean groundwater age for steady-state flow in both domains. The mapped results are compared to simulation results of spatially explicit, two-dimensional, advective-dispersive-bioreactive transport subject to the same relative fluctuations of velocity as in the one-dimensional model. The agreement between the mapped 1-D and the explicit 2-D results is excellent. We conclude that travel-time models of nonlinear bioreactive transport are adequate in systems of time-periodic flow if the flow direction does not change. Copyright © 2016 Elsevier B.V. All rights reserved.
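
The mapping step described above, transferring a 1-D travel-time concentration profile onto a 2-D domain via each cell's mean groundwater age, amounts to a lookup by age. A minimal sketch (array shapes and linear interpolation are assumptions):

```python
import numpy as np

def map_by_age(age_2d, age_1d, conc_1d):
    """Map a 1-D travel-time (age) concentration profile onto a 2-D domain
    by interpolating the 1-D profile at each cell's mean groundwater age.
    Linear interpolation and array layout are illustrative assumptions."""
    return np.interp(age_2d, age_1d, conc_1d)
```

Each 2-D cell thus inherits the concentration that the 1-D model predicts for water of the same age, which is the essence of the travel-time formulation.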

  5. Three-dimensional implicit lambda methods

    NASA Technical Reports Server (NTRS)

    Napolitano, M.; Dadone, A.

    1983-01-01

    This paper derives the three-dimensional lambda-formulation equations for a general orthogonal curvilinear coordinate system and provides various block-explicit and block-implicit methods for solving them numerically. Three model problems, characterized by subsonic, supersonic and transonic flow conditions, are used to assess the reliability and compare the efficiency of the proposed methods.

  6. Investigating Integer Restrictions in Linear Programming

    ERIC Educational Resources Information Center

    Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.

    2015-01-01

    Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…

  7. Spatially-explicit and spectral soil carbon modeling in Florida

    USDA-ARS?s Scientific Manuscript database

    Profound shifts have occurred over the last three centuries in which human actions have become the main driver to global environmental change. In this new epoch, the Anthropocene, human-driven changes such as population growth, climate and land use change, are pushing the Earth system well outside i...

  8. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.

  9. Evaluation of average daily gain predictions by the integrated farm system model for forage-finished beef steers

    USDA-ARS?s Scientific Manuscript database

    Representing the performance of cattle finished on an all forage diet in process-based whole farm system models has presented a challenge. To address this challenge, a study was done to evaluate average daily gain (ADG) predictions of the Integrated Farm System Model (IFSM) for steers consuming all-...

  10. Landscape ethnoecological knowledge base and management of ecosystem services in a Székely-Hungarian pre-capitalistic village system (Transylvania, Romania).

    PubMed

    Molnár, Zsolt; Gellény, Krisztina; Margóczi, Katalin; Biró, Marianna

    2015-01-07

    Previous studies have shown an in-depth ecological understanding by traditional people in managing natural resources. We studied the landscape ethnoecological knowledge (LEEK) of the Székelys on the basis of 16th-19th century village laws. We analyzed the habitat types, ecosystem services and sustainable management types on which village laws had focused. The Székelys had self-governed communities formed mostly of "noble peasants". Land-use was dominated by commons and regulated by village laws framed by the whole community. Seventy-two archival laws from 52 villages, comprising 898 regulations, were analyzed using the DPSIR framework. Explicit and implicit information about the contemporary ecological knowledge of the Székelys was extracted. We distinguished between responses that limited use and supported regeneration and those that protected produced/available ecosystem services and ensured their fair distribution. Most regulations referred to forests (674), arable lands (562), meadows (448) and pastures (134). The Székelys consciously regulated the proportions of arable land, pasture and forest in order to maximize long-term exploitation of ecosystem services. The inner territory was protected against overuse by relocating certain uses to the outer territory. Competition for ecosystem services was demonstrated by conflicts between pressure-related (mostly personal) and response-related (mostly communal) driving forces. Felling of trees (oaks), grazing of forests, meadows and fallows, masting, use of wild apple/pear trees and fishing were strictly regulated. Cutting of leaf-fodder, grazing of green crops, burning of forest litter and the polluting of streams were prohibited. Marketing by villagers and inviting outsiders to use the ecosystem services were strictly regulated, and mostly prohibited. The Székelys recognized at least 71 folk habitat types and understood ecological regeneration and degradation processes, the history of their landscape and the management possibilities of ecosystem services. Some aspects of LEEK were so well known within Székely communities that they were not made explicit in village laws; others remained implicit because they were not related to regulations. Based on explicit and implicit information, we argue that the Székelys possessed detailed knowledge of the local ecological system. Moreover, the world's first known explicit mention of ecosystem services ("Benefits that are provided by Nature for free") originated from this region in 1786.

  11. DEFINING RECOVERY GOALS AND STRATEGIES FOR ENDANGERED SPECIES USING SPATIALLY-EXPLICIT POPULATION MODELS

    EPA Science Inventory

    We used a spatially explicit population model of wolves (Canis lupus) to propose a framework for defining rangewide recovery priorities and finer-scale strategies for regional reintroductions. The model predicts that Yellowstone and central Idaho, where wolves have recently been ...

  12. Integrable models of quantum optics

    NASA Astrophysics Data System (ADS)

    Yudson, Vladimir; Makarov, Aleksander

    2017-10-01

    We give an overview of exactly solvable many-body models of quantum optics. Among them is a system of two-level atoms which interact with photons propagating in a one-dimensional (1D) chiral waveguide; exact eigenstates of this system can be explicitly constructed. This approach is used also for a system of closely located atoms in the usual (non-chiral) waveguide or in 3D space. Moreover, it is shown that for an arbitrary atomic system with a cascade spontaneous radiative decay, the fluorescence spectrum can be described by an exact analytic expression which accounts for interference of emitted photons. Open questions related with broken integrability are discussed.

  13. Speech transformation system (spectrum and/or excitation) without pitch extraction

    NASA Astrophysics Data System (ADS)

    Seneff, S.

    1980-07-01

    A speech analysis-synthesis system was developed which is capable of independent manipulation of the fundamental frequency and spectral envelope of a speech waveform. The system deconvolved the original speech with the spectral envelope estimate to obtain a model for the excitation; explicit pitch extraction was not required, and as a consequence the transformed speech was more natural sounding than would be the case if the excitation were modeled as a sequence of pulses. It is shown that the system has applications in the areas of voice modification, baseband-excited vocoders, time-scale modification, and frequency compression as an aid to the partially deaf.

  14. Analytical steady-state solutions for water-limited cropping systems using saline irrigation water

    NASA Astrophysics Data System (ADS)

    Skaggs, T. H.; Anderson, R. G.; Corwin, D. L.; Suarez, D. L.

    2014-12-01

    Due to the diminishing availability of good quality water for irrigation, it is increasingly important that irrigation and salinity management tools be able to target submaximal crop yields and support the use of marginal quality waters. In this work, we present a steady-state irrigated systems modeling framework that accounts for reduced plant water uptake due to root zone salinity. Two explicit, closed-form analytical solutions for the root zone solute concentration profile are obtained, corresponding to two alternative functional forms of the uptake reduction function. The solutions express a general relationship between irrigation water salinity, irrigation rate, crop salt tolerance, crop transpiration, and (using standard approximations) crop yield. Example applications are illustrated, including the calculation of irrigation requirements for obtaining targeted submaximal yields, and the generation of crop-water production functions for varying irrigation waters, irrigation rates, and crops. Model predictions are shown to be mostly consistent with existing models and available experimental data. Yet the new solutions possess advantages over available alternatives, including: (i) the solutions were derived from a complete physical-mathematical description of the system, rather than based on an ad hoc formulation; (ii) the analytical solutions are explicit and can be evaluated without iterative techniques; (iii) the solutions permit consideration of two common functional forms of salinity induced reductions in crop water uptake, rather than being tied to one particular representation; and (iv) the utilized modeling framework is compatible with leading transient-state numerical models.

  15. Explicit symplectic algorithms based on generating functions for charged particle dynamics.

    PubMed

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for dynamics of charged particle. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonian with form of H(x,p)=p_{i}f(x) or H(x,p)=x_{i}g(p). Applied to the simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiorities in conservation and efficiency.
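
For context on the sum-separable restriction mentioned above, here is the classic explicit symplectic integrator (Störmer-Verlet leapfrog) for a sum-separable Hamiltonian H = p²/2 + V(x). This is the baseline that the paper's generating-function construction extends to product-separable Hamiltonians; it is not the paper's algorithm itself.

```python
def leapfrog(x, p, grad_V, dt, steps):
    """Stormer-Verlet leapfrog: explicit symplectic integration of the
    sum-separable Hamiltonian H = p^2/2 + V(x).  Shown only as the
    standard baseline; the cited work constructs explicit symplectic
    schemes for product-separable Hamiltonians instead."""
    for _ in range(steps):
        p = p - 0.5 * dt * grad_V(x)   # half kick
        x = x + dt * p                 # drift
        p = p - 0.5 * dt * grad_V(x)   # half kick
    return x, p
```

The hallmark of symplectic schemes, and the reason they are preferred for long-term simulations, is the bounded energy error over very long runs, in contrast to the secular drift of non-symplectic integrators.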

  16. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for dynamics of charged particle. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonian with form of H(x,p) = p_i f(x) or H(x,p) = x_i g(p). Applied to the simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiorities in conservation and efficiency.

  17. Solvable Hydrodynamics of Quantum Integrable Systems

    NASA Astrophysics Data System (ADS)

    Bulchandani, Vir B.; Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.

    2017-12-01

    The conventional theory of hydrodynamics describes the evolution in time of chaotic many-particle systems from local to global equilibrium. In a quantum integrable system, local equilibrium is characterized by a local generalized Gibbs ensemble or equivalently a local distribution of pseudomomenta. We study time evolution from local equilibria in such models by solving a certain kinetic equation, the "Bethe-Boltzmann" equation satisfied by the local pseudomomentum density. Explicit comparison with density matrix renormalization group time evolution of a thermal expansion in the XXZ model shows that hydrodynamical predictions from smooth initial conditions can be remarkably accurate, even for small system sizes. Solutions are also obtained in the Lieb-Liniger model for free expansion into vacuum and collisions between clouds of particles, which model experiments on ultracold one-dimensional Bose gases.

  18. Integrated Farm System Model Version 4.3 and Dairy Gas Emissions Model Version 3.3 Software development and distribution

    USDA-ARS?s Scientific Manuscript database

    Modeling routines of the Integrated Farm System Model (IFSM version 4.2) and Dairy Gas Emission Model (DairyGEM version 3.2), two whole-farm simulation models developed and maintained by USDA-ARS, were revised with new components for: (1) simulation of ammonia (NH3) and greenhouse gas emissions gene...

  19. The most precise computations using Euler's method in standard floating-point arithmetic applied to modelling of biological systems.

    PubMed

    Kalinina, Elizabeth A

    2013-08-01

    The explicit Euler method is known to be easy to implement and effective for many applications. This article extends results previously obtained for systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. The optimal step size (providing minimum total error) is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
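
For reference, the explicit (forward) Euler method with a fixed step size is a few lines of code; the article's per-step optimal step-size selection is not reproduced here.

```python
def euler(f, y0, t0, t1, h):
    """Explicit (forward) Euler with fixed step size h: a minimal sketch.
    (The cited article instead chooses an optimal step size at each step;
    that refinement is omitted here.)"""
    steps = round((t1 - t0) / h)
    y, t = y0, t0
    for _ in range(steps):
        y = y + h * f(t, y)
        t += h
    return y
```

For the test problem y' = -y, y(0) = 1, the method reproduces e^{-t} with an O(h) global error, which is why step-size selection matters so much for stiff systems.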

  20. Hysteretic behavior using the explicit material point method

    NASA Astrophysics Data System (ADS)

    Sofianos, Christos D.; Koumousis, Vlasis K.

    2018-05-01

    The material point method (MPM) is an extension of the particle-in-cell method in which Lagrangian bodies are discretized by a number of material points that carry all the properties and the state of the material. All internal variables (stress, strain, velocity, etc.) that specify the current state and are required to advance the solution are stored at the material points. A background grid is employed to solve the governing equations by interpolating the material point data to the grid. The resulting momentum conservation equations are solved at the grid nodes, the information is transferred back to the material points, and the background grid is reset, ready for the next iteration. In this work, the standard explicit MPM is extended to account for smooth elastoplastic material behavior with mixed isotropic and kinematic hardening as well as stiffness and strength degradation. The strains are decomposed into an elastic and an inelastic part according to the strain decomposition rule. To account for the different phases during elastic loading or unloading, and to smooth the transition from the elastic to the inelastic regime, two Heaviside-type functions are introduced. These act as switches and incorporate the yield function and the hardening laws to control the whole cyclic behavior. A single expression is thus established for the plastic multiplier over the whole range of stresses, which obviates the need for a piecewise approach and a demanding bookkeeping mechanism, especially for multilinear models that account for stiffness and strength degradation. The final form of the constitutive stress rate-strain rate relation incorporates the tangent modulus of elasticity, which now includes the Heaviside functions and gathers all the governing behavior, considerably facilitating the simulation of nonlinear response in the MPM framework. Numerical results are presented that validate the proposed formulation within the MPM in comparison with finite element and experimental results.
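The particle-to-grid / grid-solve / grid-to-particle cycle described above can be sketched for the simplest possible case. This is a highly simplified 1-D linear-elastic illustration of one explicit MPM step, not the paper's hysteretic formulation: the paper replaces the constitutive update with Heaviside-switched elastoplastic laws, but the grid-transfer structure is the same. All variable names and parameter values are assumptions for illustration.

```python
import numpy as np

# Simplified 1-D illustration of one explicit MPM time step for a
# linear-elastic bar (no plasticity or hysteresis). Particles carry mass,
# velocity, volume and stress; the background grid is rebuilt each step.

def linear_shape(xp, nodes, dx):
    """Hat-function weights of particle position xp w.r.t. all grid nodes."""
    w = 1.0 - np.abs(nodes - xp) / dx
    return np.where(w > 0.0, w, 0.0)

def shape_grad(xp, nodes, dx):
    """Gradient of the hat functions at xp (zero outside the support)."""
    return np.where(np.abs(nodes - xp) < dx, np.sign(nodes - xp) / dx, 0.0)

def mpm_step(xp, vp, stress, mp, Vp, E, nodes, dx, dt):
    # --- particle to grid: project mass, momentum and internal force
    m_g = np.zeros_like(nodes)
    mv_g = np.zeros_like(nodes)
    f_g = np.zeros_like(nodes)
    for p in range(len(xp)):
        w = linear_shape(xp[p], nodes, dx)
        dw = shape_grad(xp[p], nodes, dx)
        m_g += w * mp[p]
        mv_g += w * mp[p] * vp[p]
        f_g -= dw * Vp[p] * stress[p]       # internal force contribution
    # --- solve momentum on the grid (explicit update, empty nodes masked)
    v_g = np.divide(mv_g + dt * f_g, m_g,
                    out=np.zeros_like(m_g), where=m_g > 1e-12)
    # --- grid to particle: update velocity, strain rate, stress, position
    for p in range(len(xp)):
        w = linear_shape(xp[p], nodes, dx)
        dw = shape_grad(xp[p], nodes, dx)
        vp[p] = np.sum(w * v_g)
        grad_v = np.sum(dw * v_g)           # velocity gradient at particle
        stress[p] += dt * E * grad_v        # elastic rate-form update
        xp[p] += dt * vp[p]
    # the background grid is reset implicitly: it is recomputed next step
    return xp, vp, stress

# Usage: rigid translation of a two-particle bar. A stress-free body
# moving at uniform velocity should keep v = 1 and zero stress.
nodes = np.linspace(0.0, 1.0, 11)
dx = nodes[1] - nodes[0]
xp = np.array([0.33, 0.47])
vp = np.ones(2)
stress = np.zeros(2)
xp, vp, stress = mpm_step(xp, vp, stress, np.full(2, 0.1),
                          np.full(2, 0.05), 1.0, nodes, dx, 1e-3)
```

The hysteretic model of the paper would replace the single line `stress[p] += dt * E * grad_v` with the Heaviside-switched tangent modulus described in the abstract, leaving the transfer and grid-solve stages untouched.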
