RuleMonkey: software for stochastic simulation of rule-based models
2010-01-01
Background The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection-free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application at http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321
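To make the rejection-free idea concrete, the selection step can be pictured as Gillespie's direct method operating on per-rule cumulative rates instead of per-reaction propensities. The Python sketch below is illustrative only - the names are hypothetical, not RuleMonkey's API - and it assumes the simulator keeps each rule's cumulative rate exact after every firing:

```python
import math
import random

def rejection_free_step(rule_rates, time):
    """One rejection-free step: rule_rates maps each rule to the current
    cumulative rate of all reactions that rule implies (kept exact by the
    simulator's bookkeeping, so no null events are needed)."""
    total = sum(rule_rates.values())
    if total == 0.0:
        return None, time  # no rule can fire; the simulation halts
    # Advance time by an exponentially distributed waiting time.
    time += -math.log(random.random()) / total
    # Pick a rule with probability proportional to its rate.
    r = random.random() * total
    acc = 0.0
    for rule, rate in rule_rates.items():
        acc += rate
        if r <= acc:
            return rule, time
    return rule, time  # guard against floating-point round-off
```

After the chosen rule fires on a randomly selected set of matching reactants, only the rates of the affected rules need updating, which is the bookkeeping that makes null events unnecessary.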
Compartmental and Spatial Rule-Based Modeling with Virtual Cell.
Blinov, Michael L; Schaff, James C; Vasilescu, Dan; Moraru, Ion I; Bloom, Judy E; Loew, Leslie M
2017-10-03
In rule-based modeling, molecular interactions are systematically specified in the form of reaction rules that serve as generators of reactions. This provides a way to account for all the potential molecular complexes and interactions among multivalent or multistate molecules. Recently, we introduced rule-based modeling into the Virtual Cell (VCell) modeling framework, permitting graphical specification of rules and merger of networks generated automatically (using the BioNetGen modeling engine) with hand-specified reaction networks. VCell provides a number of ordinary differential equation and stochastic numerical solvers for single-compartment simulations of the kinetic systems derived from these networks, and agent-based network-free simulation of the rules. In this work, compartmental and spatial modeling of rule-based models has been implemented within VCell. To enable rule-based deterministic and stochastic spatial simulations and network-free agent-based compartmental simulations, the BioNetGen and NFSim engines were each modified to support compartments. In the new rule-based formalism, every reactant and product pattern and every reaction rule are assigned locations. We also introduce the rule-based concept of molecular anchors. This assures that any species that has a molecule anchored to a predefined compartment will remain in this compartment. Importantly, in addition to formulation of compartmental models, this now permits VCell users to seamlessly connect reaction networks derived from rules to explicit geometries to automatically generate a system of reaction-diffusion equations. These may then be simulated using either the VCell partial differential equations deterministic solvers or the Smoldyn stochastic simulator.
Exact hybrid particle/population simulation of rule-based models of biochemical systems.
Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R
2014-04-01
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
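The memory saving comes from representing high-copy, structureless species as integer counters while keeping low-copy structured complexes as individual particles. Below is a minimal sketch of that split, under the assumption that population species can be named up front; the class and names are hypothetical, not BioNetGen/NFsim code:

```python
class HybridState:
    """Toy illustration of the hybrid split: user-designated high-copy
    species live as integer counters (population variables), while
    structured complexes remain individual particles."""

    def __init__(self, populations, particles):
        self.populations = dict(populations)   # e.g. {"ATP": 1_000_000}
        self.particles = list(particles)       # BNGL-like strings here

    def consume(self, species):
        """Remove one instance: O(1) for populations vs. a scan for particles."""
        if species in self.populations:
            self.populations[species] -= 1
        else:
            self.particles.remove(species)     # assumes an instance exists

state = HybridState({"ATP": 1_000_000}, ["R(l,Y~U)", "R(l,Y~P)"])
state.consume("ATP")          # counter decrement: no per-molecule storage
state.consume("R(l,Y~U)")     # particle removal: structure kept for the rest
```

The "partial network expansion" described above is what guarantees that replacing particles with counters in this way leaves the stochastic dynamics unchanged.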
Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
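The core idea - folding geometry into rule rates - can be illustrated by a binding rate that decays once two binding regions sit farther apart than the antibody complex can span. This is a hypothetical functional form with made-up parameters, not the paper's fitted model:

```python
import math

def geometric_rate(k0, distance, reach=5.0):
    """Hypothetical distance-dependent binding rate: the base rate k0 is
    attenuated exponentially once two binding regions are farther apart
    than the complex can span (reach). Units and values are illustrative."""
    return k0 * math.exp(-max(0.0, distance - reach))

print(geometric_rate(1.0, 3.0))   # within reach: full rate 1.0
print(geometric_rate(1.0, 8.0))   # 3 units beyond reach: ~0.05
```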
Simulation of large-scale rule-based models
Colvin, Joshua; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.; Von Hoff, Daniel D.; Posner, Richard G.
2009-01-01
Motivation: Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. Results: DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein–protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of StochSim. DYNSTOC differs from StochSim by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. Availability: DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at http://public.tgen.org/dynstoc/. Contact: dynstoc@tgen.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19213740
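For contrast with rejection-free schemes, a null-event step in the StochSim/DYNSTOC style can be sketched as follows; `reacts` stands in for the rule-matching test and is an assumed interface, not DYNSTOC's actual code:

```python
import random

def null_event_step(molecules, reacts, dt):
    """One StochSim-style null-event step: sample a random pair of
    molecules and ask the rule set whether they react within dt.
    `reacts(a, b, dt)` returns an applied-rule callback or None;
    None is a null event that leaves the state unchanged."""
    a, b = random.sample(molecules, 2)
    rule = reacts(a, b, dt)
    if rule is not None:
        rule(a, b)      # fire: update the two molecules in place
        return True     # productive event
    return False        # null event: time still advances by dt
```

Because the time step dt is fixed, unproductive draws still advance the clock, so efficiency depends on how often the sampled components actually react.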
Bittig, Arne T; Uhrmacher, Adelinde M
2017-01-01
Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations via the spatial Stochastic Simulation Algorithm to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically, hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and allows the spatial resolution of models to be adapted easily.
Yang, Jin; Hlavacek, William S.
2011-01-01
Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie’s method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e., long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e., time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient. PMID:21832806
A Data Stream Model For Runoff Simulation In A Changing Environment
NASA Astrophysics Data System (ADS)
Yang, Q.; Shao, J.; Zhang, H.; Wang, G.
2017-12-01
Runoff simulation is of great significance for water engineering design, water disaster control, and water resources planning and management in a catchment or region. A large number of methods, including concept-based process-driven models and statistic-based data-driven models, have been proposed and widely used worldwide during past decades. Most existing models assume that the relationship among runoff and its impacting factors is stationary. However, in a changing environment (e.g., climate change, human disturbance), their relationship usually evolves over time. In this study, we propose a data stream model for runoff simulation in a changing environment. Specifically, the proposed model works in three steps: learning a rule set, expansion of a rule, and simulation. The first step is to initialize a rule set. When a new observation arrives, the model checks which rule covers it and then uses that rule for simulation. Meanwhile, the Page-Hinckley (PH) change detection test is used to monitor the online simulation error of each rule. If a change is detected, the corresponding rule is removed from the rule set. In the second step, each rule that covers more than a given number of instances is expanded. In the third step, a simulation model for each leaf node is learnt with a perceptron without an activation function, and is updated as each newly incoming observation is added. Taking the Fuxi River catchment as a case study, we applied the model to simulate the monthly runoff in the catchment. Results show that an abrupt change is detected in the year 1997 by the Page-Hinckley change detection test, which is consistent with the historic record of flooding. In addition, the model achieves good simulation results with an RMSE of 13.326, and outperforms many established methods. The findings demonstrate that the proposed data stream model provides a promising way to simulate runoff in a changing environment.
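The Page-Hinckley test used here maintains a cumulative, mean-adjusted deviation and signals a change when that sum rises a threshold above its running minimum. A small self-contained sketch follows; delta and lam are illustrative tuning values, not those used in the study:

```python
class PageHinckley:
    """Page-Hinckley test for detecting an upward drift in an online
    error signal (delta = tolerated magnitude, lam = alarm threshold)."""

    def __init__(self, delta=0.005, lam=50.0):
        self.delta, self.lam = delta, lam
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n      # running mean
        self.cum += x - self.mean - self.delta     # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lam  # True => change detected

ph = PageHinckley()
for err in [0.1] * 100 + [2.0] * 50:   # simulated jump in simulation error
    if ph.update(err):
        print("change detected; retire this rule")
        break
```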
Rule-based modeling with Virtual Cell
Schaff, James C.; Vasilescu, Dan; Moraru, Ion I.; Loew, Leslie M.; Blinov, Michael L.
2016-01-01
Summary: Rule-based modeling is invaluable when the number of possible species and reactions in a model becomes too large to allow convenient manual specification. The popular rule-based software tools BioNetGen and NFSim provide powerful modeling and simulation capabilities at the cost of learning a complex scripting language, which is used to specify these models. Here, we introduce a modeling tool that combines new graphical rule-based model specification with existing simulation engines in a seamless way within the familiar Virtual Cell (VCell) modeling environment. A mathematical model can be built integrating explicit reaction networks with reaction rules. In addition to offering a large choice of ODE and stochastic solvers, a model can be simulated using a network-free approach through the NFSim simulation engine. Availability and implementation: Available as VCell (versions 6.0 and later) at the Virtual Cell web site (http://vcell.org/). The application installs and runs on all major platforms and does not require registration for use on the user’s computer. Tutorials are available at the Virtual Cell website and Help is provided within the software. Source code is available at Sourceforge. Contact: vcell_support@uchc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27497444
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.
State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among rules representing those constraints. To overcome these shortcomings, there is a recent trend in enabling the control strategies with inference-based rule checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains, and the ability to deduce new information based on the models through use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.
Movement rules for individual-based models of stream fish
Steven F. Railsback; Roland H. Lamberson; Bret C. Harvey; Walter E. Duffy
1999-01-01
Spatially explicit individual-based models (IBMs) use movement rules to determine when an animal departs its current location and to determine its movement destination; these rules are therefore critical to accurate simulations. Movement rules typically define some measure of how an individual's expected fitness varies among locations, under the...
Systematic reconstruction of TRANSPATH data into Cell System Markup Language
Nagasaki, Masao; Saito, Ayumu; Li, Chen; Jeong, Euna; Miyano, Satoru
2008-01-01
Background Many biological repositories store information based on experimental study of the biological processes within a cell, such as protein-protein interactions, metabolic pathways, signal transduction pathways, or regulation of transcription factors and miRNA. Unfortunately, it is difficult to directly use such information when generating simulation-based models. Thus, modeling rules for encoding biological knowledge into system-dynamics-oriented standardized formats would be very useful for fully understanding cellular dynamics at the system level. Results We selected the TRANSPATH database, a manually curated high-quality pathway database, which provides a plentiful source of cellular events in humans, mice, and rats, collected from over 31,500 publications. In this work, we have developed 16 modeling rules based on hybrid functional Petri net with extension (HFPNe), which is suitable for graphically representing and simulating biological processes. In the modeling rules, each Petri net element is incorporated with the Cell System Ontology (CSO) to enable semantic interoperability of models. As a formal ontology for biological pathway modeling with dynamics, CSO also defines biological terminology and corresponding icons. By combining HFPNe with the CSO features, it is possible to turn TRANSPATH data into simulation-based and semantically valid models. The results are encoded into a biological pathway format, the Cell System Markup Language (CSML), which eases the exchange and integration of biological data and models. Conclusion By using the 16 modeling rules, 97% of the reactions in TRANSPATH are converted into simulation-based models represented in CSML. This reconstruction demonstrates that it is possible to use our rules to generate quantitative models from static pathway descriptions. PMID:18570683
Generalizing Gillespie’s Direct Method to Enable Network-Free Simulations
Suderman, Ryan T.; Mitra, Eshan David; Lin, Yen Ting; ...
2018-03-28
Gillespie’s direct method for stochastic simulation of chemical kinetics is a staple of computational systems biology research. However, the algorithm requires explicit enumeration of all reactions and all chemical species that may arise in the system. In many cases, this is not feasible due to the combinatorial explosion of reactions and species in biological networks. Rule-based modeling frameworks provide a way to exactly represent networks containing such combinatorial complexity, and generalizations of Gillespie’s direct method have been developed as simulation engines for rule-based modeling languages. Here, we provide both a high-level description of the algorithms underlying the simulation engines, termed network-free simulation algorithms, and how they have been applied in systems biology research. We also define a generic rule-based modeling framework and describe a number of technical details required for adapting Gillespie’s direct method for network-free simulation. Lastly, we briefly discuss potential avenues for advancing network-free simulation and the role they continue to play in modeling dynamical systems in biology.
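For reference, the explicit-network baseline that the network-free algorithms generalize is compact enough to state directly. A textbook sketch of Gillespie's direct method over an enumerated reaction list (illustrative, not the authors' implementation):

```python
import math
import random

def gillespie_direct(x, reactions, t_end):
    """Textbook direct method over an explicitly enumerated network.
    `reactions` is a list of (propensity_fn, state_change) pairs; this is
    the enumeration step that network-free generalizations avoid."""
    t = 0.0
    while t < t_end:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:
            break                                  # nothing can fire
        t += -math.log(random.random()) / a0       # exponential waiting time
        r, acc = random.random() * a0, 0.0
        for p, (_, change) in zip(props, reactions):
            acc += p
            if r <= acc:
                for species, delta in change.items():
                    x[species] += delta            # apply the reaction
                break
    return x

# A + B -> C with rate constant k (illustrative toy network):
k = 1e-3
rxns = [(lambda s: k * s["A"] * s["B"], {"A": -1, "B": -1, "C": +1})]
print(gillespie_direct({"A": 1000, "B": 1000, "C": 0}, rxns, 10.0))
```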
Hybrid modeling of nitrate fate in large catchments using fuzzy-rules
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Haberlandt, Uwe
2010-05-01
Especially for nutrient balance simulations, physically based ecohydrological modeling needs an abundance of measured data and model parameters, which for large catchments all too often are not available in sufficient spatial or temporal resolution or are simply unknown. For efficient large-scale studies it is thus beneficial to have methods at one's disposal that are parsimonious concerning the number of model parameters and the necessary input data. One such method is fuzzy-rule based modeling, which compared to other machine-learning techniques has the advantages of producing models (the fuzzy rules) that are physically interpretable to a certain extent, and of allowing the explicit introduction of expert knowledge through pre-defined rules. The study focuses on the application of fuzzy-rule based modeling for nitrate simulation in large catchments, in particular concerning decision support. Fuzzy-rule based modeling enables the generation of simple, efficient, easily understandable models with nevertheless satisfactory accuracy for problems of decision support. The chosen approach encompasses hybrid metamodeling, which includes the generation of fuzzy rules from data originating from physically based models as well as a coupling with a physically based water balance model. The ecohydrological model SWAT is employed both to generate the needed training data and as the coupled water balance model. The conceptual model divides the nitrate pathway into three parts. The first fuzzy module calculates nitrate leaching with the percolating water from the soil surface to the groundwater, the second module simulates groundwater passage, and the final module replaces the in-stream processes. The aim of this modularization is to create flexibility for using each of the modules on its own, and for changing or completely replacing it. For fuzzy-rule based modeling this explicitly means that re-training one of the modules with newly available data will be possible without difficulty, while the module assembly does not have to be modified. Apart from the concept of hybrid metamodeling, first results are presented for the fuzzy module for nitrate passage through the unsaturated zone.
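A fuzzy-rule module of this kind reduces, at its simplest, to membership functions plus weighted-average defuzzification. The sketch below shows two made-up rules for nitrate leaching; all breakpoints and outputs are hypothetical placeholders, not values from the study:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def nitrate_leaching(percolation):
    """Two illustrative rules (hypothetical breakpoints in mm, outputs in kg/ha):
    IF percolation is LOW  THEN leaching is SMALL (5)
    IF percolation is HIGH THEN leaching is LARGE (40)
    Defuzzified as the activation-weighted average of the rule outputs."""
    w_low = tri(percolation, 0, 50, 150)
    w_high = tri(percolation, 50, 200, 400)
    if w_low + w_high == 0.0:
        return 0.0
    return (w_low * 5 + w_high * 40) / (w_low + w_high)

print(nitrate_leaching(120.0))   # partial activation of both rules: ~26 kg/ha
```

The appeal for decision support is visible even in this toy: each rule reads as a plain-language statement, yet the module still produces a continuous output.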
Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru
2009-04-27
Model checking approaches were applied to biological pathway validations around 2003. Recently, Fisher et al. have proved the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete, state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be an indispensable part of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. Our key motivation in this paper is to establish a quantitative methodology to model and analyze in silico models incorporating the use of the model checking approach. A novel method of modeling and simulating biological systems with model checking is proposed based on hybrid functional Petri net with extension (HFPNe) as the framework dealing with both discrete and continuous events. First, we construct a quantitative VPC fate model with 1761 components by using HFPNe. Second, we apply two major biological fate determination rules - Rule I and Rule II - to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully done by using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. However, hybrid lineages are hard to understand on a discrete model, because a hybrid lineage occurs when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. Our simulation results suggest that Rule I, which cannot be applied with qualitative model checking, is more reasonable than Rule II owing to the high coverage of predicted fate patterns (except for the genotype of lin-15ko; lin-12ko double mutants). More insights are also suggested. The quantitative simulation-based model checking approach is a useful means of providing valuable biological insights and a better understanding of biological systems and observation data that may be hard to capture with the qualitative approach.
Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks
Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.
2015-01-01
Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes, a computational approach that incorporates both influences may afford additional insight into the underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406
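The distinction the study draws - a stochastic choice gated by biologically motivated rules versus an ungated one - can be sketched in a few lines. Everything here (threshold, probabilities, one-dimensional vessel) is a hypothetical toy, not the validated ABM:

```python
import random

def select_tip_cells(vegf, threshold=0.5, p_base=0.4):
    """Toy rule-informed sprout initiation along a 1-D vessel: a cell may
    become a tip cell only if local VEGF exceeds a threshold AND its
    already-decided left neighbor has not sprouted (lateral inhibition);
    the rules gate an otherwise stochastic choice. Parameters are made up."""
    tips = [False] * len(vegf)
    for i, v in enumerate(vegf):
        left_quiet = not tips[i - 1] if i > 0 else True
        if v > threshold and left_quiet and random.random() < p_base * v:
            tips[i] = True
    return tips

random.seed(1)
print(select_tip_cells([random.random() for _ in range(20)]))
```

Dropping the two rule conditions reduces this to the purely stochastic Monte Carlo baseline that the paper found less accurate.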
Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles.
Pasquier, M; Quek, C; Toh, M
2001-10-01
This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool for developing rule-based control systems when an exact working model is not available, as is the case in any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets, and to determine a suitable rule base. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rule base from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D-simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record, from a human pilot, driving data consisting of steering and speed control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rule base modelling the human driving behaviour. Finally, the effectiveness of the generated rule base has been validated using the simulator in autopilot mode.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximate, but accurate, method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.
2013-01-01
Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
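The key point - one rate law inherited by every reaction a rule implies - can be made concrete with a minimal data structure. The BNGL-flavored strings and the substring match below are a deliberate oversimplification of real pattern matching:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """Minimal stand-in for a reaction rule: a reactant pattern, a
    transformation, and one rate constant inherited by every reaction
    the rule implies (the coarse graining described above)."""
    pattern: str          # e.g. "R(Y~U)": receptor with unphosphorylated Y
    action: str           # e.g. "Y~U -> Y~P"
    rate_constant: float  # shared by all matching reactions

    def matches(self, molecule: str) -> bool:
        # Toy matching: real engines match site graphs, not substrings.
        return self.pattern in molecule

phos = Rule(pattern="R(Y~U)", action="Y~U -> Y~P", rate_constant=0.1)
print(phos.matches("L.R(Y~U)"))  # True: the rule applies regardless of binding context
```

Because the rule matches a local pattern, the network of individual reactions never has to be written down; it is implied by all molecules the pattern can match.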
NASA Astrophysics Data System (ADS)
Jun, Jinhyuck; Park, Minwoo; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Do, Munhoe; Lee, Dongchan; Kim, Taehoon; Choi, Junghoe; Luk-Pat, Gerard; Miloslavsky, Alex
2015-03-01
As the industry pushes to ever more complex illumination schemes to increase resolution for next-generation memory and logic circuits, sub-resolution assist feature (SRAF) placement requirements become increasingly severe. Device manufacturers are therefore evaluating improvements in SRAF placement algorithms that do not sacrifice main feature (MF) patterning capability. There are several well-known methods to generate SRAFs, such as Rule-Based Assist Features (RBAF), Model-Based Assist Features (MBAF), and hybrid assist features that combine both RBAF and MBAF. RBAF continues to be deployed, even with the availability of MBAF and Inverse Lithography Technology (ILT): certainly for the 3x nm node, and even at the 2x nm nodes and lower, RBAF is used because it demands less run time and provides better consistency. Since RBAF is needed now and in the future, a faster method to create the AF rule tables is also needed. The current method typically involves making masks and printing wafers that contain several experiments, varying the main-feature configurations, AF configurations, dose conditions, and defocus conditions - a time-consuming and expensive process. In addition, as the technology node shrinks, wafer process changes and source-shape redesigns occur more frequently, escalating the cost of rule-table creation. Furthermore, as the demand on process margin escalates, there is a greater need for multiple rule tables, each tailored to a specific set of main-feature configurations. Model Assisted Rule Tables (MART) creates a set of test patterns and evaluates the simulated CD at nominal, defocused, and off-dose conditions. It also uses lithographic simulation to evaluate the likelihood of AF printing, and then analyzes the simulation data to automatically create AF rule tables; the analysis results display the cost of different AF configurations as the space grows between a pair of main features. In summary, the model-assisted rule-table method makes rule tables much easier to create, leading to faster rule-table creation and a lower barrier to the creation of more rule tables.
Research on complex 3D tree modeling based on L-system
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
The L-system, as a fractal iterative system, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using self-developed L-system modeling software, the rule set was parsed to generate complex 3D tree models. The results showed that the geometrical modeling method based on L-systems can be used to describe the morphological structure of complex trees and to generate 3D tree models.
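The string-rewriting core of an L-system is only a few lines; here is a sketch with a classic bracketed branching production (a standard textbook rule, not the rule set extracted in this paper):

```python
def l_system(axiom, rules, iterations):
    """Iteratively rewrite every symbol by its production rule (identity
    if none) -- the string-rewriting core of L-system tree generation."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic branching rule; '[' and ']' push/pop turtle state when drawn:
rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}
print(l_system("F", rules, 2))
```

A turtle-graphics interpreter then maps the final string to 3D segments, with constraint rules of the kind described above controlling branch angles and lengths.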
An Intelligent Decision Support System for Workforce Forecast
2011-01-01
Fragmentary record; recoverable content: an auto-regressive, integrated, moving-average (ARIMA) model to forecast the demand for construction skills in Hong Kong; a survey of forecasting techniques including decision trees, ARIMA, rule-based forecasting, segmentation forecasting, regression analysis, simulation modeling, input-output models, LP and NLP, and Markovian models; and a selection criterion for cases when results are needed as a set of easily interpretable rules.
How should we build a generic open-source water management simulator?
NASA Astrophysics Data System (ADS)
Khadem, M.; Meier, P.; Rheinheimer, D. E.; Padula, S.; Matrosov, E.; Selby, P. D.; Knox, S.; Harou, J. J.
2014-12-01
Increasing water needs for agriculture, industry and cities mean effective and flexible water resource system management tools will remain in high demand. Currently many regions or countries use simulators that have been adapted over time to their unique system properties and water management rules and realities. Most regions operate with a preferred short-list of water management and planning decision support systems. Is there scope for a simulator, shared within the water management community, that could be adapted to different contexts, integrate community contributions, and connect to generic data and model management software? What role could open-source play in such a project? How could a generic user-interface and data/model management software sustainably be attached to this model or suite of models? Finally, how could such a system effectively leverage existing model formulations, modeling technologies and software? These questions are addressed by the initial work presented here. We introduce a generic water resource simulation formulation that enables and integrates both rule-based and optimization-driven technologies. We suggest how it could be linked to other sub-models allowing for detailed agent-based simulation of water management behaviours. An early formulation is applied as an example to the Thames water resource system in the UK. The model uses centralised optimisation to calculate allocations but allows for rule-based operations as well in an effort to represent observed behaviours and rules with fidelity. The model is linked through import/export commands to a generic network model platform named Hydra. Benefits and limitations of the approach are discussed and planned work and potential use cases are outlined.
Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems
NASA Technical Reports Server (NTRS)
Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris
2010-01-01
Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.
Expert systems and simulation models; Proceedings of the Seminar, Tucson, AZ, November 18, 19, 1985
NASA Technical Reports Server (NTRS)
1986-01-01
The seminar presents papers on modeling and simulation methodology, artificial intelligence and expert systems, environments for simulation/expert system development, and methodology for simulation/expert system development. Particular attention is given to simulation modeling concepts and their representation, modular hierarchical model specification, knowledge representation, and rule-based diagnostic expert system development. Other topics include the combination of symbolic and discrete event simulation, real time inferencing, and the management of large knowledge-based simulation projects.
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Hammer, J. M.; Mitchell, C. M.; Morris, N. M.; Lewis, C. M.; Yoon, W. C.
1985-01-01
Progress was made in the three following areas. In the rule-based modeling area, two papers related to identification and significance testing of rule-based models were presented. In the area of operator aiding, research focused on aiding operators in novel failure situations; a discrete control modeling approach to aiding PLANT operators was developed; and a set of guidelines was developed for implementing automation. In the area of flight simulator hardware and software, the hardware will be completed within two months and the initial simulation software will then be integrated and tested.
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: 1) students create models and use simulations; and 2) researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring; data are crucial to creating an effective model. Pittsburgh Science of Learning Center: resources for modeling, authoring, and experimentation; a repository of data and theory. Examples of advanced modeling efforts: SimStudent learns a rule-based model; a help-seeking model tutors metacognition; Scooter uses machine-learning detectors of student engagement.
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2013-04-01
Water resources systems are operated, mostly, using a set of pre-defined rules not regarding, usually, an optimal allocation in terms of water use or economic benefits, but historical and institutional reasons. These operating policies are commonly reproduced as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypotheses. Despite the high degree of acceptance and testing that these models have achieved, the actual operation of water resources systems hardly follows the pre-defined rules all the time, with consequent uncertainty about system performance. Real-world reservoir operation is very complex, affected by input uncertainty (imprecision in forecast inflow, seepage and evaporation losses, etc.), filtered by the reservoir operator's experience and natural risk-aversion, while considering the different physical and legal/institutional constraints in order to meet the different demands and system requirements. The aim of this work is to expose a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as closely as possible the actual decisions made by managers. Once built, the fuzzy rule-based system can be integrated in a water resources management model, making it possible to assess the system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated in a conjunctive way) with the purpose of guaranteeing a high reliability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction). A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total water stored in the reservoirs) and the month of the year as inputs, and the demand deliveries as outputs. The developed simulation management model integrates the fuzzy-ruled system of the operation of the two main reservoirs of the basin with the corresponding mass balance equations, the physical or boundary conditions and the water allocation rules among the competing demands. Historical information on inflow time series is used as input to the model simulation, which is trained and validated using historical information on reservoir storage level and flow in several streams of the Mijares river. This methodology provides a more flexible approach that stays close to real operating policies. The model is easy to develop and to understand due to its rule-based structure, which mimics the human way of thinking. This can improve cooperation and negotiation between managers, decision-makers and stakeholders. The approach can also be applied to analyze the historical operation of the reservoir (what we have called a reservoir operation "audit").
Spatial Rule-Based Modeling: A Method and Its Application to the Human Mitotic Kinetochore
Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter
2013-01-01
A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796
NASA Astrophysics Data System (ADS)
Lee, K. J.; Choi, Y.; Choi, H. J.; Lee, J. Y.; Lee, M. G.
2018-03-01
Finite element simulations and experiments for the split-ring test were conducted to investigate the effect of anisotropic constitutive models on the predictive capability of sheet springback. As an alternative to the commonly employed associated flow rule, a non-associated flow rule for Hill1948 yield function was implemented in the simulations. Moreover, the evolution of anisotropy with plastic deformation was efficiently modeled by identifying equivalent plastic strain-dependent anisotropic coefficients. Comparative study with different yield surfaces and elasticity models showed that the split-ring springback could be best predicted when the anisotropy in both the R value and yield stress, their evolution and variable apparent elastic modulus were taken into account in the simulations. Detailed analyses based on deformation paths superimposed on the anisotropic yield functions predicted by different constitutive models were provided to understand the complex springback response in the split-ring test.
NASA Astrophysics Data System (ADS)
Shelef, Eitan; Hilley, George E.
2013-12-01
Flow routing across real or modeled topography determines the modeled discharge and wetness index and thus plays a central role in predicting surface lowering rate, runoff generation, likelihood of slope failure, and the transition from hillslope to channel-forming processes. In this contribution, we compare commonly used flow-routing rules, as well as a new routing rule, against commonly used benchmarks. We also compare results for different routing rules using Airborne Laser Swath Mapping (ALSM) topography to explore the impact of different flow-routing schemes on inferring the generation and location of saturation overland flow and the transition from hillslope to channel-forming processes. Finally, we examined the impact of flow-routing and slope-calculation rules on modeled topography produced by Geomorphic Transport Law (GTL)-based simulations. We found that different rules produce substantive differences in the structure of the modeled topography and in flow patterns over the ALSM data. Our results highlight the impact of flow-routing and slope-calculation rules on modeled topography, as well as on geomorphic metrics calculated across real landscapes. As such, studies that use a variety of routing rules to analyze and simulate topography are necessary to determine those aspects that most strongly depend on the chosen routing rule.
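For concreteness, a minimal sketch of one commonly used routing rule, single-direction D8 flow accumulation, where each cell passes its accumulated area to its steepest-descent neighbour; real schemes additionally handle flats, pits and multiple-flow partitioning, and this is not the paper's new rule:

```python
import numpy as np

def d8_flow_accumulation(dem):
    """D8 flow accumulation: each cell sends all of its accumulated area
    to the steepest-descent neighbour (slope = drop / distance)."""
    nr, nc = dem.shape
    order = np.argsort(dem, axis=None)[::-1]      # visit cells high to low
    acc = np.ones_like(dem, dtype=float)          # each cell contributes itself
    nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    for idx in order:
        r, c = divmod(idx, nc)
        best, target = 0.0, None
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < nr and 0 <= cc < nc:
                drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if drop > best:
                    best, target = drop, (rr, cc)
        if target is not None:                    # pit cells keep their area
            acc[target] += acc[r, c]
    return acc

dem = np.array([[5., 4., 3.],
                [4., 3., 2.],
                [3., 2., 1.]])
print(d8_flow_accumulation(dem))                  # area accumulates toward the low corner
```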
Realization of planning design of mechanical manufacturing system by Petri net simulation model
NASA Astrophysics Data System (ADS)
Wu, Yanfang; Wan, Xin; Shi, Weixiang
1991-09-01
Planning design works out a comprehensive long-term plan. In order to guarantee that a mechanical manufacturing system (MMS) obtains maximum economic benefit, it is necessary to carry out a reasonable planning design for the system. First, some principles of planning design for MMS are introduced, and problems of production scheduling and their decision rules for computer simulation are presented. A method for realizing each production-scheduling decision rule in the Petri net model is discussed. Second, the solution of conflict rules for conflict problems that arise while running the Petri net is given. Third, based on the Petri net model of the MMS, which includes part flow and tool flow, and following the principle of minimum event time advance, a computer dynamic simulation of the Petri net model, that is, a computer dynamic simulation of the MMS, is realized. Finally, the simulation program is applied to an example, so that the scheme of a planning design for an MMS can be evaluated effectively.
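A minimal sketch of the simulation principle named above, minimum event time advance, on a toy timed Petri net; the place and transition names, token counts and delay are illustrative assumptions, not the paper's MMS model:

```python
import heapq

# Toy timed Petri net: a transition fires when its input places hold enough
# tokens; completion events are processed in order of minimum event time.
places = {"raw": 3, "machine_idle": 1, "done": 0}
transitions = {
    "machine": {"inputs": {"raw": 1, "machine_idle": 1},
                "outputs": {"done": 1, "machine_idle": 1},
                "delay": 2.0},
}
clock, events = 0.0, []          # events: (finish_time, transition_name)

def enabled(t):
    return all(places[p] >= n for p, n in transitions[t]["inputs"].items())

def start_enabled():
    for name in transitions:
        while enabled(name):     # consume input tokens, schedule completion
            for p, n in transitions[name]["inputs"].items():
                places[p] -= n
            heapq.heappush(events, (clock + transitions[name]["delay"], name))

start_enabled()
while events:
    clock, name = heapq.heappop(events)          # minimum event time advance
    for p, n in transitions[name]["outputs"].items():
        places[p] += n
    start_enabled()

print(clock, places)   # 6.0 {'raw': 0, 'machine_idle': 1, 'done': 3}
```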
Rasmussen's model of human behavior in laparoscopy training.
Wentink, M; Stassen, L P S; Alwayn, I; Hosman, R J A W; Stassen, H G
2003-08-01
Compared to aviation, where virtual reality (VR) training has been standardized and simulators have proven their benefits, the objectives, needs, and means of VR training in minimally invasive surgery (MIS) still have to be established. The aim of the study presented is to introduce Rasmussen's model of human behavior as a practical framework for the definition of the training objectives, needs, and means in MIS. Rasmussen distinguishes three levels of human behavior: skill-, rule-, and knowledge-based behavior. The training needs of a laparoscopic novice can be determined by identifying the specific skill-, rule-, and knowledge-based behavior that is required for performing safe laparoscopy. Future objectives of VR laparoscopy trainers should address all three levels of behavior. Although most commercially available simulators for laparoscopy aim at training skill-based behavior, it is especially the training of knowledge-based behavior during complications in surgery that will improve safety levels. However, the cost and complexity of a training means increase when the training objectives proceed from the training of skill-based behavior to the training of complex knowledge-based behavior. In aviation, human behavior models have been used successfully to integrate the training of skill-, rule-, and knowledge-based behavior in a full flight simulator. Understanding surgeon behavior is one of the first steps towards a future full-scale laparoscopy simulator.
Simulation Of Combat With An Expert System
NASA Technical Reports Server (NTRS)
Provenzano, J. P.
1989-01-01
Proposed expert system predicts outcomes of combat situations. Called "COBRA" (Combat Outcome Based on Rules for Attrition), system selects rules for mathematical modeling of losses and discrete events in combat according to previous experiences. Used with another software module known as the "Game". Game/COBRA software system, consisting of Game and COBRA modules, provides for both quantitative and qualitative aspects in simulations of battles. Although COBRA intended for simulation of large-scale military exercises, concepts embodied in it have much broader applicability. In industrial research, knowledge-based system enables qualitative as well as quantitative simulations.
Leveraging Modeling Approaches: Reaction Networks and Rules
Blinov, Michael L.; Moraru, Ion I.
2012-01-01
We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high resolution and/or high throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatio-temporal distribution of molecules and complexes, their interactions kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks – the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks. PMID:22161349
NASA Astrophysics Data System (ADS)
He, Yingqing; Ai, Bin; Yao, Yao; Zhong, Fajun
2015-06-01
Cellular automata (CA) have proven to be very effective for simulating and predicting the spatio-temporal evolution of complex geographical phenomena. Traditional methods generally pose problems in determining the structure and parameters of CA for a large, complex region or a long-term simulation. This study presents a self-adaptive CA model integrated with an artificial immune system (AIS) to discover dynamic transition rules automatically. The model's parameters are allowed to be self-modified with the application of multi-temporal remote sensing images; that is, the CA can adapt itself to the changed and complex environment. Therefore, urban dynamic evolution rules over time can be efficiently retrieved using this integrated model. The proposed AIS-based CA model was then used to simulate the rural-urban land conversion of Guangzhou city, located in the core of China's Pearl River Delta. The initial urban land was classified directly from a TM satellite image of 1990. Urban land in the years 1995, 2000, 2005, 2009 and 2012 was correspondingly used as the observed data to calibrate the model's parameters. Using the quantitative figure-of-merit (FoM) index and pattern similarity, a comparison was further performed between the AIS-based model and a logistic CA model. The results indicate that the AIS-based CA model performs better and with higher precision in simulating urban evolution, and the simulated spatial pattern is closer to the actual development situation.
One Giant Leap for Categorizers: One Small Step for Categorization Theory
Smith, J. David; Ell, Shawn W.
2015-01-01
We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA, 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable, molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
The numerical modelling and process simulation for the fault diagnosis of rotary kiln incinerator.
Roh, S D; Kim, S W; Cho, W S
2001-10-01
Numerical modelling and process simulation for the fault diagnosis of a rotary kiln incinerator were accomplished. In the numerical modelling, two models are applied within the kiln: a combustion chamber model, including the mass and energy balance equations for two combustion chambers, and a 3D thermal model. The combustion chamber model predicts the temperature within the kiln, the flue gas composition, flux and heat of combustion. Using the combustion chamber model and the 3D thermal model, production rules for the process simulation can be obtained through interrelation analysis between control and operation variables. The process simulation of the kiln is driven by these production rules for automatic operation. The process simulation aims to provide fundamental solutions to problems in the incineration process by introducing an online expert control system that provides integrity in process control and management. Knowledge-based expert control systems use symbolic logic and heuristic rules to find solutions for various types of problems. The system was implemented as a hybrid intelligent expert control system, mutually connected with the process control systems, with the capability of process diagnosis, analysis and control.
Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil
2016-03-15
Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof-of-concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given using specific implementations, the proposed techniques can be applied to rule-based models in general. The annotation ontology for rule-based models can be found at http://purl.org/rbm/rbmo. The krdf tool and associated executable examples are available at http://purl.org/rbm/rbmo/krdf. Contact: anil.wipat@newcastle.ac.uk or vdanos@inf.ed.ac.uk. © The Author 2015. Published by Oxford University Press.
Constructing Agent Model for Virtual Training Systems
NASA Astrophysics Data System (ADS)
Murakami, Yohei; Sugimoto, Yuki; Ishida, Toru
Constructing highly realistic agents is essential if agents are to be employed in virtual training systems. In training for collaboration based on face-to-face interaction, the generation of emotional expressions is one key. In training for guidance based on one-to-many interaction, such as direction giving for evacuations, emotional expressions must be supplemented by diverse agent behaviors to make the training realistic. To reproduce diverse behavior, we characterize agents using various combinations of operation rules instantiated by the user operating the agent. To accomplish this goal, we introduce a user modeling method based on participatory simulations. These simulations enable us to acquire information observed by each user in the simulation and the operating history. Using these data and the domain knowledge including known operation rules, we can generate an explanation for each behavior. Moreover, the application of hypothetical reasoning, which offers consistent selection of hypotheses, to the generation of explanations allows us to use otherwise incompatible operation rules as domain knowledge. In order to validate the proposed modeling method, we apply it to the acquisition of an evacuee's model in a fire-drill experiment. We successfully acquire a subject's model corresponding to the results of an interview with the subject.
Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava
2012-03-01
Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Mcmanus, Shawn; Mcdaniel, Michael
1989-01-01
Planning for processing payloads has always been difficult and time-consuming. With the advent of Space Station Freedom and its capability to support a myriad of complex payloads, the planning to support this ground processing maze involves thousands of man-hours of often tedious data manipulation. To provide the capability to analyze various processing schedules, an object-oriented knowledge-based simulation environment called the Advanced Generic Accommodations Planning Environment (AGAPE) is being developed. Having nearly completed the baseline system, the emphasis in this paper is directed toward rule definition and its relation to model development and simulation. The focus is specifically on the methodologies implemented during knowledge acquisition, analysis, and representation within the AGAPE rule structure. A model is provided to illustrate the concepts presented. The approach demonstrates a framework for AGAPE rule development to assist expert system development.
MOAB: a spatially explicit, individual-based expert system for creating animal foraging models
Carter, J.; Finn, John T.
1999-01-01
We describe the development, structure, and corroboration process of a simulation model of animal behavior (MOAB). MOAB can create spatially explicit, individual-based animal foraging models. Users can create or replicate heterogeneous landscape patterns, and place resources and individual animals of a given species on that landscape to simultaneously simulate the foraging behavior of multiple species. The heuristic rules for animal behavior are maintained in a user-modifiable expert system. MOAB can be used to explore hypotheses concerning the influence of landscape pattern on animal movement and foraging behavior. A red fox (Vulpes vulpes L.) foraging and nest predation model was created to test MOAB's capabilities. Foxes were simulated for 30-day periods using both expert system and random movement rules. Home range size, territory formation and other measures were compared with available simulation studies. A striped skunk (Mephitis mephitis L.) model also was developed. The expert system model proved superior to the stochastic model with respect to territory formation, general movement patterns and home range size.
Opinion evolution based on cellular automata rules in small world networks
NASA Astrophysics Data System (ADS)
Shi, Xiao-Ming; Shi, Lun; Zhang, Jie-Fang
2010-03-01
In this paper, we apply cellular automata rules, which can be given by a truth table, to human memory. We design each memory as a tracking survey mode that keeps the most recent three opinions. Each cellular automata rule, as a personal mechanism, gives the final ruling in one time period based on the data stored in one's memory. The key focus of the paper is to research the evolution of people's attitudes to the same question. Based on a great deal of empirical observations from computer simulations, all the rules can be classified into 20 groups. We highlight the fact that the phenomenon shown by some rules belonging to the same group will be altered within several steps by other rules in different groups. It is truly amazing that, when compared with the record of presidential voting in America over the last few hundred years, the eras of important events in America's history coincide with the simulation results obtained by our model.
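A minimal sketch of the memory mechanism described above: each agent keeps its three most recent binary opinions, and an elementary CA rule (a truth table on three bits) produces the next opinion. The rule number is arbitrary, and the small-world network coupling used in the paper is omitted:

```python
import random

RULE = 110  # any of the 256 elementary CA rules, chosen arbitrarily

def next_opinion(mem):
    """Look up the 3-bit memory in the rule's truth table."""
    idx = mem[0] * 4 + mem[1] * 2 + mem[2]
    return (RULE >> idx) & 1

# Each agent's memory holds its three most recent opinions.
agents = [[random.randint(0, 1) for _ in range(3)] for _ in range(20)]
for step in range(10):
    for mem in agents:
        new = next_opinion(mem)
        mem.pop(0)            # forget the oldest opinion
        mem.append(new)       # record the new ruling
    print("".join(str(m[-1]) for m in agents))
```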
Andrews, Steven S
2017-03-01
Smoldyn is a spatial and stochastic biochemical simulator. It treats each molecule of interest as an individual particle in continuous space, simulating molecular diffusion, molecule-membrane interactions and chemical reactions, all with good accuracy. This article presents several new features. Smoldyn now supports two types of rule-based modeling. These are a wildcard method, which is very convenient, and the BioNetGen package with extensions for spatial simulation, which is better for complicated models. Smoldyn also includes new algorithms for simulating the diffusion of surface-bound molecules and molecules with excluded volume. Both are exact in the limit of short time steps and reasonably good with longer steps. In addition, Smoldyn supports single-molecule tracking simulations. Finally, the Smoldyn source code can be accessed through a C/C++ language library interface. Smoldyn software, documentation, code, and examples are at http://www.smoldyn.org. Contact: steven.s.andrews@gmail.com. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Naghibi, Fereydoun; Delavar, Mahmoud Reza; Pijanowski, Bryan
2016-12-14
Cellular Automata (CA) is one of the most common techniques used to simulate the urbanization process. CA-based urban models use transition rules to deliver spatial patterns of urban growth and urban dynamics over time. Determining the optimum transition rules of the CA is a critical step because of the heterogeneity and nonlinearities existing among urban growth driving forces. Recently, new CA models integrated with optimization methods based on swarm intelligence algorithms were proposed to overcome this drawback. The Artificial Bee Colony (ABC) algorithm is an advanced meta-heuristic swarm intelligence-based algorithm. Here, we propose a novel CA-based urban change model that uses the ABC algorithm to extract optimum transition rules. We applied the proposed ABC-CA model to simulate future urban growth in Urmia (Iran) with multi-temporal Landsat images from 1997, 2006 and 2015. Validation of the simulation results was made through statistical methods such as overall accuracy, the figure of merit and total operating characteristics (TOC). Additionally, we calibrated the CA model by ant colony optimization (ACO) to assess the performance of our proposed model versus similar swarm intelligence algorithm methods. We showed that the overall accuracy and the figure of merit of the ABC-CA model are 90.1% and 51.7%, which are 2.9% and 8.8% higher than those of the ACO-CA model, respectively. Moreover, the allocation disagreement of the simulation results for the ABC-CA model is 9.9%, which is 2.9% less than that of the ACO-CA model. Finally, the ABC-CA model also outperforms the ACO-CA model with fewer quantity and allocation errors and slightly more hits.
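The figure of merit (FoM) used for validation above is a standard change-detection ratio, hits / (hits + misses + false alarms). A minimal sketch with hypothetical 0/1 urban maps:

```python
import numpy as np

def figure_of_merit(initial, observed, simulated):
    """FoM over cells, where 'change' means conversion from
    non-urban (0) to urban (1) relative to the initial map."""
    obs_change = (observed == 1) & (initial == 0)
    sim_change = (simulated == 1) & (initial == 0)
    hits = np.sum(obs_change & sim_change)
    misses = np.sum(obs_change & ~sim_change)
    false_alarms = np.sum(~obs_change & sim_change)
    return hits / float(hits + misses + false_alarms)

initial   = np.array([[0, 0], [0, 1]])
observed  = np.array([[1, 0], [0, 1]])
simulated = np.array([[1, 1], [0, 1]])
print(figure_of_merit(initial, observed, simulated))  # 1 hit, 1 false alarm -> 0.5
```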
Simulation of root forms using cellular automata model
NASA Astrophysics Data System (ADS)
Winarno, Nanang; Prima, Eka Cahya; Afifah, Ratih Mega Ayu
2016-02-01
This research aims to produce a simulation program for root forms using a cellular automata model. Stephen Wolfram, in his book "A New Kind of Science", discusses formation rules based on statistical analysis. In accordance with Stephen Wolfram's investigation, the research develops a basic computer program using the Delphi 7 programming language. To the best of our knowledge, there is no previous research developing a simulation describing root forms using the cellular automata model compared to the natural root form with the presence of stones as a disturbance. The results show that (1) the simulation used four rules, comparing results of the program with natural photographs, and each rule produced different root forms; (2) stone disturbances prevent root growth, and the multiplication of root forms was successfully modeled. To this end, the research added stones occupying 120 cells, placed randomly in the soil. As in nature, stones cannot be penetrated by plant roots. The results suggest that it is feasible to further develop the program to simulate root forms with 50 variations.
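A minimal sketch of a root-growth cellular automaton with impenetrable stones, in the spirit of the model described above; the grid size, growth probability and stone count are arbitrary assumptions rather than the paper's four rules:

```python
import random

ROWS, COLS, P_GROW = 20, 21, 0.6
EMPTY, ROOT, STONE = 0, 1, 2

grid = [[EMPTY] * COLS for _ in range(ROWS)]
grid[0][COLS // 2] = ROOT                    # seed root at the surface
for _ in range(30):                          # scatter impenetrable stones
    grid[random.randrange(1, ROWS)][random.randrange(COLS)] = STONE

for _ in range(ROWS):                        # growth sweeps
    for r in range(ROWS - 2, -1, -1):        # bottom-up so each root grows once per sweep
        for c in range(COLS):
            if grid[r][c] == ROOT:
                dc = random.choice([-1, 0, 1])            # down or diagonally down
                rr, cc = r + 1, min(max(c + dc, 0), COLS - 1)
                if grid[rr][cc] == EMPTY and random.random() < P_GROW:
                    grid[rr][cc] = ROOT      # stones (and existing roots) block growth

print("\n".join("".join(".R#"[v] for v in row) for row in grid))
```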
An improved cellular automaton method to model multispecies biofilms.
Tang, Youneng; Valocchi, Albert J
2013-10-01
Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilm introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentration and distribution; this caused results based on the cellular automaton methods to deviate from experimental results and those from the more computationally intensive continuous method. To overcome the problems, we propose new biomass-spreading rules in this work: Excess biomass spreads by pushing a line of grid cells that are on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two examples. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules. Copyright © 2013 Elsevier Ltd. All rights reserved.
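A one-dimensional, single-species caricature of the new spreading rule sketched above: excess biomass at a source cell is pushed cell by cell along the path toward the nearest non-full cell instead of being mixed in place. The carrying capacity and biomass values are arbitrary:

```python
# Cells along the shortest path from the over-full source to the nearest
# destination with spare capacity; excess cascades one cell at a time.
CAP = 10.0
cells = [12.0, 10.0, 10.0, 4.0, 0.0]   # cell 0 holds excess biomass

excess = cells[0] - CAP
cells[0] = CAP
i = 0
while excess > 0 and i + 1 < len(cells):
    i += 1
    cells[i] += excess                  # push the overflow into the next cell
    excess = max(cells[i] - CAP, 0.0)   # that cell's own overflow moves on
    cells[i] = min(cells[i], CAP)
print(cells)  # [10.0, 10.0, 10.0, 6.0, 0.0]
```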
Rule-Based Simulation of Multi-Cellular Biological Systems—A Review of Modeling Techniques
Hwang, Minki; Garbey, Marc; Berceli, Scott A.; Tran-Son-Tay, Roger
2011-01-01
Emergent behaviors of multi-cellular biological systems (MCBS) result from the behavior of each individual cell and its interactions with other cells and with the environment. Modeling MCBS requires incorporating these complex interactions among the individual cells and the environment. Modeling approaches for MCBS can be grouped into two categories: continuum models and cell-based models. Continuum models usually take the form of partial differential equations, and the model equations provide insight into the relationship among the components in the system. Cell-based models simulate each individual cell behavior and the interactions among them, enabling the observation of the emergent system behavior. This review focuses on the cell-based models of MCBS, and especially, the technical aspect of the rule-based simulation method for MCBS is reviewed. How to implement the cell behaviors and the interactions with other cells and with the environment into the computational domain is discussed. The cell behaviors reviewed in this paper are division, migration, apoptosis/necrosis, and differentiation. The environmental factors such as extracellular matrix, chemicals, microvasculature, and forces are also discussed. Application examples of these cell behaviors and interactions are presented. PMID:21369345
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin
2013-04-01
Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows users to store, display and export all the information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such functionality is not typically included in other water DSS. Based on the resulting water resources allocation, the model calculates operating costs and the water scarcity costs caused by supply deficits, based on economic demand functions for each demand node. The optimization model allocates the available resource over time based on economic criteria (net benefits from demand curves and cost functions), minimizing the total water scarcity and operating cost of water use. This approach provides solutions that optimize economic efficiency (as total net benefit) in water resources management over the optimization period. Both models must be used together in water resource planning and management. The optimization model provides an initial insight into economically efficient solutions, from which different operating rules can be further developed and tested using the simulation model. The hydro-economic simulation model allows assessing the economic impacts of alternative policies or operating criteria, avoiding the perfect-foresight issues associated with the optimization. The tools have been applied to the Jucar river basin (Spain) in order to assess the economic results corresponding to the current modus operandi of the system and compare them with the solution from the optimization that maximizes economic efficiency. Acknowledgments: The study has been partially supported by the European Community 7th Framework Project (GENESIS project, n. 226536) and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
Research on the Dynamic Hysteresis Loop Model of the Residence Times Difference (RTD)-Fluxgate
Wang, Yanzhang; Wu, Shujun; Zhou, Zhijian; Cheng, Defu; Pang, Na; Wan, Yunxia
2013-01-01
While working, the RTD-fluxgate core is repeatedly saturated by the excitation field. When the fluxgate is simulated, an accurate characteristic model of the core is needed to provide precise simulation results. Because the shape of the ideal hysteresis loop model is fixed, it cannot accurately reflect the actual dynamic changing rules of the hysteresis loop. In order to improve the fluxgate simulation accuracy, a dynamic hysteresis loop model containing parameters with actual physical meanings is proposed, based on the changing rule of the permeability parameter when the fluxgate is working. Compared with the ideal hysteresis loop model, this model considers the dynamic features of the hysteresis loop, which makes the simulation results closer to the actual output. In addition, the hysteresis loops of other magnetic materials can be described with the model; an amorphous magnetic material is used as the example in this manuscript. The model has been validated by comparing the output responses of experiments with the fitted results from the model. PMID:24002230
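A minimal sketch of a branch-dependent (dynamic) hysteresis loop: the flux density follows a saturating tanh curve shifted by the coercive field according to the sweep direction. Bs, Hc and H0 are illustrative values, not fitted parameters of the amorphous-core model in the paper:

```python
import numpy as np

Bs, Hc, H0 = 1.0, 0.2, 0.3   # saturation flux, coercive field, shape scale (assumed)

def B(H, dH):
    """Flux density on the ascending (dH > 0) or descending (dH < 0) branch."""
    return Bs * np.tanh((H - np.sign(dH) * Hc) / H0)

H_up = np.linspace(-2.0, 2.0, 200)      # field swept up...
H_dn = H_up[::-1]                       # ...then back down
loop = np.concatenate([B(H_up, +1), B(H_dn, -1)])
print(loop.min(), loop.max())           # loop stays within +/- Bs
```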
NASA Astrophysics Data System (ADS)
Perry, Dan; Nakamoto, Mark; Verghese, Nishath; Hurat, Philippe; Rouse, Rich
2007-03-01
Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area and performance without the high cost of applying foundries' recommended design rules. This set of DFM/recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts: 1. Achieve a more aggressive layout through limited usage of litho-related recommended design rules. A 10% to 15% area reduction is achieved by using more aggressive design rules. DFM/recommended design rules are used only if there is no impact on cell size. 2. Identify and fix hotspots using a model-based layout printability checker. Model-based litho and etch simulation are done at the cell level to identify hotspots. Violations of recommended rules may cause additional hotspots, which are then fixed. The resulting design is ready for step 3. 3. Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect. Contours of diffusion, poly and metal layers are used for parametric analysis. In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology. Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we also explain how contours of diffusion, poly and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell-, block- and chip-level.
A cellular automaton model for ship traffic flow in waterways
NASA Astrophysics Data System (ADS)
Qi, Le; Zheng, Zhongyi; Gang, Longhui
2017-04-01
With the development of marine traffic, waterways have become congested and more complicated traffic phenomena are observed in ship traffic flow. It is important and necessary to build a ship traffic flow model based on cellular automata (CAs) to study these phenomena and improve marine transportation efficiency and safety. Spatial discretization rules for waterways and update rules for ship movement are two important issues that are very different from vehicle traffic. To address these issues, a CA model for ship traffic flow, called a spatial-logical mapping (SLM) model, is presented. In this model, the spatial discretization rules are improved by adding a mapping rule, and the dynamic ship domain model is incorporated into the update rules to describe ships' interactions more exactly. Taking the ship traffic flow in the Singapore Strait as an example, simulations were carried out and compared. The simulations show that the SLM model can efficiently avoid ship pseudo lane-change, which is caused by traditional spatial discretization rules. The ship velocity change in the SLM model is consistent with the measured data. Finally, from the fundamental diagram, the relationship between traffic capacity and the lengths of ships is explored. The number of ships in the waterway declines when the proportion of large ships increases.
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, the functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting the form and/or model involved in reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rules, provides the samples from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
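A minimal sketch of combining member-model predictions by Bayesian model averaging. Here the weights come from exponentiated negative BIC values under a Gaussian error assumption rather than the EM/MCMC estimation used in such studies, and all numbers are hypothetical:

```python
import numpy as np

obs = np.array([5.0, 6.2, 4.8, 7.1])                  # observed releases (hypothetical)
preds = {                                             # member-model predictions (hypothetical)
    "piecewise": np.array([5.1, 6.0, 5.0, 7.3]),
    "surface":   np.array([4.7, 6.5, 4.6, 7.0]),
    "svm":       np.array([5.0, 6.1, 4.9, 7.2]),
}

def bic(p):
    """Gaussian-error BIC up to constants shared by all members."""
    n = len(obs)
    rss = np.sum((obs - p) ** 2)
    return n * np.log(rss / n)

raw = np.array([np.exp(-0.5 * bic(p)) for p in preds.values()])
w = raw / raw.sum()                                   # posterior-style BMA weights
bma_mean = sum(wi * p for wi, p in zip(w, preds.values()))
print(dict(zip(preds, np.round(w, 3))))
print("BMA prediction:", np.round(bma_mean, 2))
```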
Rule-based spatial modeling with diffusing, geometrically constrained molecules.
Gruenert, Gerd; Ibrahim, Bashar; Lenser, Thorsten; Lohel, Maiko; Hinze, Thomas; Dittrich, Peter
2010-06-07
We suggest a new type of modeling approach for the coarse grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Our simulations show that the obtained dynamics can be fundamentally different from those simulations that use classical reaction-diffusion approaches like Partial Differential Equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a Differential Equation formalism, is potentially unstable when geometry is considered, and vice versa. We conclude that our approach provides a new general framework filling a gap in between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial simulation systems like those for DNA or virus capsid self-assembly.
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm and the printed patterns of those features on masks written by VSB eBeam writers start to show a large deviation from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification, which became a necessity for OPC a decade ago, we see the same trend in MDP today. A simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field-tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask checks, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose-margin-related hotspots can also be detected by setting a correct detection threshold. In this paper, we will demonstrate GPU acceleration for geometry processing and give examples of mask check results and performance data. GPU acceleration is necessary to make simulation-based mask MDP verification acceptable.
Relaxing the rule of ten events per variable in logistic and Cox regression.
Vittinghoff, Eric; McCulloch, Charles E
2007-03-15
The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.
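A minimal sketch of the kind of simulation behind such events-per-variable (EPV) studies: Monte Carlo replicates of a logistic model fit by Newton-Raphson, reporting the relative bias of one log-odds ratio. The true coefficient, event rate and sample size are arbitrary choices, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    Xd = np.column_stack([np.ones(len(y)), X])        # add intercept column
    b = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xd @ b, -30, 30)))
        W = p * (1.0 - p)
        H = Xd.T @ (Xd * W[:, None]) + 1e-8 * np.eye(Xd.shape[1])  # ridge guard
        b += np.linalg.solve(H, Xd.T @ (y - p))
    return b

beta_true, n_vars, n_obs, est = 0.7, 5, 120, []       # ~6 EPV at a 25% event rate
for _ in range(200):
    X = rng.normal(size=(n_obs, n_vars))
    p = 1.0 / (1.0 + np.exp(-(-1.1 + beta_true * X[:, 0])))
    y = (rng.random(n_obs) < p).astype(float)
    est.append(fit_logistic(X, y)[1])                 # coefficient of predictor 1
print("relative bias:", (np.mean(est) - beta_true) / beta_true)
```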
A.I.-based real-time support for high performance aircraft operations
NASA Technical Reports Server (NTRS)
Vidal, J. J.
1985-01-01
Artificial intelligence (AI) based software and hardware concepts are applied to the handling of system malfunctions during flight tests. A representation of malfunction procedure logic using Boolean normal forms is presented. The representation facilitates the automation of malfunction procedures and provides easy testing for the embedded rules. It also forms a potential basis for a parallel implementation in logic hardware. The extraction of logic control rules from dynamic simulation, and their adaptive revision after partial failure, are examined. A simplified 2-dimensional aircraft model is used, with a controller that adaptively extracts control rules for directional thrust to satisfy a navigational goal without exceeding pre-established position and velocity limits. Failure recovery (rule adjusting) is examined after partial actuator failure. While this experiment was performed with primitive aircraft and mission models, it illustrates an important paradigm and provides complexity extrapolations for the proposed extraction of expertise from simulation, as discussed. The use of relaxation and inexact reasoning in expert systems was also investigated.
Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach.
Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Dushoff, Jonathan; Liu, James H
2018-03-29
Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or on stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, often lead to unrealistic models. Among other limitations, model approaches based on differential equations entangle kinetics and causality, fail when complexity increases, separate knowledge from models, and assume that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS: a rule-based framework to produce multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales, ranging from gene regulation up to game theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli, highlighting the continuous model-improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model of the prisoner's dilemma, a common approach from the social sciences describing complex interactions involving trust within human populations. As a whole, these models demonstrate the capabilities of PISKaS, providing fertile scenarios in which to explore the dynamics of complex systems. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
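A minimal sketch of the direct-method SSA (Gillespie algorithm) that underlies such stochastic frameworks, for a toy reversible reaction; species counts and rate constants are arbitrary, and this is not PISKaS's rule-based machinery:

```python
import math
import random

# Toy system: A + B -> C (rate k1), C -> A + B (rate k2).
x = {"A": 100, "B": 80, "C": 0}
k1, k2 = 0.005, 0.1
t, t_end = 0.0, 50.0

while t < t_end:
    a1 = k1 * x["A"] * x["B"]          # propensity of A + B -> C
    a2 = k2 * x["C"]                   # propensity of C -> A + B
    a0 = a1 + a2
    if a0 == 0.0:
        break
    t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
    if random.random() * a0 < a1:                # choose the reaction channel
        x["A"] -= 1; x["B"] -= 1; x["C"] += 1
    else:
        x["A"] += 1; x["B"] += 1; x["C"] -= 1

print(t, x)
```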
Stability of INFIT and OUTFIT Compared to Simulated Estimates in Applied Setting.
Hodge, Kari J; Morgan, Grant B
Residual-based fit statistics are commonly used as an indication of the extent to which item response data fit the Rasch model. Fit statistic estimates are influenced by sample size, and rule-of-thumb critical values may result in incorrect conclusions about the extent to which the model fits the data. Estimates obtained in this analysis were compared to 250 simulated data sets to examine the stability of the estimates. All INFIT estimates were within the rule-of-thumb range of 0.7 to 1.3. However, only 82% of the INFIT estimates fell within the 2.5th and 97.5th percentiles of the simulated items' INFIT distributions using this 95% confidence-like interval, an 18 percentage point difference in items that were classified as acceptable. Forty-eight percent of OUTFIT estimates fell within the 0.7 to 1.3 rule-of-thumb range, whereas 34% of OUTFIT estimates fell within the 2.5th and 97.5th percentiles of the simulated items' OUTFIT distributions, a 14 percentage point difference in items that were classified as acceptable. When using the rule-of-thumb ranges for fit estimates, the magnitude of misfit was smaller than with the 95% confidence interval of the simulated distribution. The findings indicate that the use of confidence intervals as critical values for fit statistics leads to different model-data fit conclusions than traditional rule-of-thumb critical values.
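A short sketch of the comparison the study performs, with purely illustrative numbers: an item's observed INFIT is judged both against the 0.7-1.3 rule of thumb and against the 2.5th/97.5th percentiles of its simulated INFIT distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated INFIT values for one item across 250 simulated
# data sets, plus an observed estimate; numbers are illustrative only.
simulated_infit = rng.normal(loc=1.0, scale=0.08, size=250)
observed_infit = 1.21

lo, hi = np.percentile(simulated_infit, [2.5, 97.5])  # 95% interval
fits_rule_of_thumb = 0.7 <= observed_infit <= 1.3
fits_simulated_ci = lo <= observed_infit <= hi

print(f"rule of thumb:  {fits_rule_of_thumb}")
print(f"simulated CI [{lo:.2f}, {hi:.2f}]: {fits_simulated_ci}")
```

With these numbers the item passes the rule of thumb but falls outside the simulated interval, exactly the kind of classification disagreement the abstract quantifies.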
Rands, Sean A.
2011-01-01
Functional explanations of behaviour often propose optimal strategies for organisms to follow. These ‘best’ strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or ‘rules-of-thumb’ that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose – particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour. PMID:21765938
Prediction of Land use changes using CA in GIS Environment
NASA Astrophysics Data System (ADS)
Kiavarz Moghaddam, H.; Samadzadegan, F.
2009-04-01
Urban growth is a typical self-organized system that results from the interaction between three defined systems: the developed urban system, the natural non-urban system and the planned urban system. Urban growth simulation for an artificial city is carried out first. It evaluates a number of urban sprawl parameters, including the size and shape of the neighborhood, besides testing different types of constraints on urban growth simulation. The results indicate that a circular-type neighborhood shows smoother but faster urban growth compared to the nine-cell Moore neighborhood. Cellular Automata proves to be very efficient in simulating urban growth over time. The strength of this technology comes from the ability of the urban modeler to implement the growth simulation model, evaluate the results and present the output simulation results in a visually interpretable environment. The artificial city simulation model provides an excellent environment to test a number of simulation parameters, such as the neighborhood's influence on growth results and the role of constraints in driving urban growth. Also, CA rule definition is a critical stage in simulating the urban growth pattern in a manner close to reality. CA urban growth simulation and prediction for Tehran over the last four decades succeeds in simulating the tested growth years at a high accuracy level. Some real data layers were used in the CA simulation training phase (such as 1995) while others were used for testing the prediction results (such as 2002). Tuning the CA growth rules is important, comparing the simulated images with the real data to obtain feedback. An important note is that CA rules also need to be modified over time to adapt to the urban growth pattern. The evaluation method used on a region basis has the advantage of covering the spatial distribution component of the urban growth process. The next step includes running the developed CA simulation over classified raster data for three years in a developed ArcGIS extension. A set of crisp rules is defined and calibrated based on the real urban growth pattern. Uncertainty analysis is performed to evaluate the accuracy of the simulated results as compared to the historical real data. The evaluation shows promising results, represented by the high average accuracies achieved. The average accuracy for the predicted growth images of 1964 and 2002 is over 80%. Modifying CA growth rules over time to match growth pattern changes is important to obtain an accurate simulation. This modification is based on the urban growth relationship for Tehran over time, as seen in the historical raster data. The feedback obtained from comparing the simulated and real data is crucial in identifying the optimal set of CA rules for reliable simulation and calibrating growth steps.
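A minimal sketch of the kind of constrained CA transition rule discussed above, using the nine-cell Moore neighborhood; the threshold and transition probability are illustrative assumptions, not the paper's calibrated rules.

```python
import numpy as np

rng = np.random.default_rng(1)
URBAN, VACANT = 1, 0
grid = (rng.random((100, 100)) < 0.02).astype(int)  # sparse seed city

def moore_neighbours(g, y, x):
    """Developed cells in the nine-cell Moore neighbourhood (minus centre)."""
    ys, xs = slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2)
    return g[ys, xs].sum() - g[y, x]

def step(g, threshold=3, p=0.5):
    # Hypothetical transition rule: a vacant cell develops with probability
    # p when enough of its neighbours are already urban.
    new = g.copy()
    for y in range(g.shape[0]):
        for x in range(g.shape[1]):
            if g[y, x] == VACANT and moore_neighbours(g, y, x) >= threshold:
                if rng.random() < p:
                    new[y, x] = URBAN
    return new

for _ in range(10):
    grid = step(grid)
```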
Simulation of land use change in the three gorges reservoir area based on CART-CA
NASA Astrophysics Data System (ADS)
Yuan, Min
2018-05-01
This study proposes a new method to simulate spatiotemporally complex multiple land uses by using a classification and regression tree (CART)-based CA model. In this model, we use the classification and regression tree algorithm to calculate the land-class conversion probability, and combine a neighborhood factor and a random factor to extract the cellular transformation rules. The overall Kappa coefficient is 0.8014 and the overall accuracy is 0.8821 in the land dynamics simulation of the three gorges reservoir area from 2000 to 2010, and the simulation results are satisfactory.
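A sketch of how a CART-derived suitability can be combined with neighborhood and stochastic factors into a conversion score, assuming scikit-learn is available; the synthetic drivers and the exact combination formula are illustrative, not the study's calibrated model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Synthetic stand-ins for driver layers (e.g. slope, distance to road)
# and observed 2000-2010 conversion; illustrative only.
X = rng.random((1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 1000) > 0.9).astype(int)
cart = DecisionTreeClassifier(max_depth=5).fit(X, y)

def conversion_score(drivers, neighbour_share, alpha=1.0):
    """CART suitability x neighbourhood factor x stochastic factor;
    cells with the highest scores are converted each iteration."""
    p_cart = cart.predict_proba(drivers.reshape(1, -1))[0, 1]
    random_factor = 1 + (-np.log(rng.random())) ** alpha  # common CA choice
    return p_cart * neighbour_share * random_factor

print(conversion_score(np.array([0.8, 0.6]), neighbour_share=5 / 8))
```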
Integrated modelling of stormwater treatment systems uptake.
Castonguay, A C; Iftekhar, M S; Urich, C; Bach, P M; Deletic, A
2018-05-24
Nature-based solutions provide a variety of benefits in growing cities, ranging from stormwater treatment to amenity provision such as aesthetics. However, the decision-making process involved in the installation of such green infrastructure is not straightforward, as much uncertainty around the location, size, costs and benefits impedes systematic decision-making. We developed a model to simulate the decision rules used by local municipalities to install nature-based stormwater treatment systems, namely constructed wetlands, ponds/basins and raingardens. The model was used to test twenty-four scenarios of policy-making, combining four asset selection, two location selection and three budget constraint decision rules. Based on the case study of a local municipality in Metropolitan Melbourne, Australia, the modelled uptake of stormwater treatment systems was compared with attributes of real-world systems over the simulation period. Results show that the actual budgeted funding is not a reliable predictor of systems' uptake and that policy-makers are more likely to plan expenditures based on installation costs. The model was able to replicate the cumulative treatment capacity and the location of systems. As such, it offers a novel approach to investigate the impact of using different decision rules to provide environmental services considering biophysical and economic factors. Copyright © 2018 Elsevier Ltd. All rights reserved.
Agent-based modeling of the interaction between CD8+ T cells and Beta cells in type 1 diabetes.
Ozturk, Mustafa Cagdas; Xu, Qian; Cinar, Ali
2018-01-01
We propose an agent-based model for the simulation of the autoimmune response in T1D. The model incorporates cell behaviors governed by rules derived from the current literature and is implemented on a high-performance computing system, which enables the simulation of a significant portion of the islets in the mouse pancreas. Simulation results indicate that the model is able to capture the trends that emerge during the progression of the autoimmunity. The multi-scale nature of the model enables the definition of rules or equations that govern cellular or sub-cellular level phenomena and the observation of the outcomes at the tissue scale. It is expected that such a model would facilitate in vivo clinical studies through rapid testing of hypotheses and planning of future experiments by providing insight into disease progression at different scales, some of which may not be obtained easily in clinical studies. Furthermore, the modular structure of the model simplifies tasks such as the addition of new cell types and the definition or modification of different behaviors of the environment and the cells.
Rule based design of conceptual models for formative evaluation
NASA Technical Reports Server (NTRS)
Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen
1994-01-01
A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.
A hybrid agent-based approach for modeling microbiological systems.
Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing
2008-11-21
Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on Multi-Agent approach often use directly translated, and quantitatively less precise if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
Kinetic Monte Carlo Method for Rule-based Modeling of Biochemical Networks
Yang, Jin; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.
2009-01-01
We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena. PMID:18851068
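The rule-selection step can be sketched in a few lines. In this hypothetical fragment, each rule carries a rate constant and a current match count (which real implementations obtain by pattern matching); a rule is drawn with probability proportional to its propensity, and time advances by an exponential waiting time, as in Gillespie's method.

```python
import math, random

# Minimal rejection-free, Gillespie-style selection over reaction rules.
# Rule names, rates and match counts are illustrative placeholders.
rules = {"bind": {"rate": 1.0, "matches": 50},
         "unbind": {"rate": 0.1, "matches": 20}}

def kmc_step(rules, t):
    propensities = {r: v["rate"] * v["matches"] for r, v in rules.items()}
    a_total = sum(propensities.values())
    if a_total == 0:
        return None, t
    t += -math.log(random.random()) / a_total   # exponential waiting time
    pick, target = None, random.random() * a_total
    for r, a in propensities.items():           # roulette-wheel selection
        target -= a
        if target <= 0:
            pick = r
            break
    return pick, t

rule, t = kmc_step(rules, 0.0)
print(rule, t)
```

Because the simulator only tracks rules and their match counts, the cost per step stays independent of how many distinct reactions the rules imply.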
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medvedev, Nikita; Li, Zheng; Tkachenko, Victor
2017-01-31
In this study, a theoretical investigation of electron-phonon (electron-ion) coupling rates in semiconductors driven out of equilibrium is performed. The transient change of optical coefficients reflects the band-gap shrinkage in covalently bonded materials, and thus the heating of the atomic lattice. Utilizing this dependence, we test various models of electron-ion coupling. The simulation technique is based on tight-binding molecular dynamics. Our simulations with the dedicated hybrid approach (XTANT) indicate that the widely used Fermi's golden rule can break down in describing material excitation on femtosecond time scales. In contrast, the dynamical coupling proposed in this work yields reasonably good agreement of simulation results with available experimental data.
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
NASA Astrophysics Data System (ADS)
Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe
2017-04-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed-interval (1-32 days) or a rule-based (decision-tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimate increased with the fixed-interval length. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed-interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed-interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha^-1 yr^-1 in estimating cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of estimates of cumulative N2O fluxes obtained with the discrete chamber-based method.
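A sketch of the rule-based idea under stated assumptions: fit a regression tree to simulated fluxes, invert it by reading off each day's terminal node, and sample a few representative days per node. The predictors, tree size, and two-samples-per-leaf choice are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

# Illustrative stand-ins: 365 days of predictor values (e.g. soil moisture,
# temperature) and simulated daily N2O fluxes from an agroecosystem model.
X = rng.random((365, 2))
flux = X[:, 0] ** 2 + 0.3 * X[:, 1] + rng.normal(0, 0.05, 365)

tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, flux)
leaves = tree.apply(X)  # terminal node assigned to each day

# Rule-based sampling: draw representative days from every terminal node.
sample_days = []
for leaf in np.unique(leaves):
    members = np.flatnonzero(leaves == leaf)
    k = min(2, members.size)
    sample_days.extend(rng.choice(members, size=k, replace=False))
print(sorted(sample_days))
```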
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, H.; Eki, Y.; Kaji, A.
1993-12-01
An expert system which can support operators of fossil power plants in creating the optimum startup schedule and executing it accurately is described. The optimum turbine speed-up and load-up pattern is obtained through an iterative procedure based on fuzzy reasoning, using quantitative calculations from plant dynamics models and qualitative knowledge in the form of schedule optimization rules with fuzziness. The rules represent relationships between stress margins and modification rates of the schedule parameters. Simulation analysis proves that the system provides quick and accurate plant startups.
IFC BIM-Based Methodology for Semi-Automated Building Energy Performance Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazjanac, Vladimir
2008-07-01
Building energy performance (BEP) simulation is still rarely used in building design, commissioning and operations. The process is too costly and too labor intensive, and it takes too long to deliver results. Its quantitative results are not reproducible due to arbitrary decisions and assumptions made in simulation model definition, and can be trusted only under special circumstances. A methodology to semi-automate BEP simulation preparation and execution makes this process much more effective. It incorporates principles of information science and aims to eliminate inappropriate human intervention that results in subjective and arbitrary decisions. This is achieved by automating every part of the BEP modeling and simulation process that can be automated, by relying on data from original sources, and by making any necessary data transformation rule-based and automated. This paper describes the new methodology and its relationship to IFC-based BIM and software interoperability. It identifies five steps that are critical to its implementation, and shows what part of the methodology can be applied today. The paper concludes with a discussion of application to simulation with EnergyPlus, and describes data transformation rules embedded in the new Geometry Simplification Tool (GST).
Risk Reduction and Resource Pooling on a Cooperation Task
ERIC Educational Resources Information Center
Pietras, Cynthia J.; Cherek, Don R.; Lane, Scott D.; Tcheremissine, Oleg
2006-01-01
Two experiments investigated choice in adult humans on a simulated cooperation task to evaluate a risk-reduction account of sharing based on the energy-budget rule. The energy-budget rule is an optimal foraging model that predicts risk-averse choices when net energy gains exceed energy requirements (positive energy budget) and risk-prone choices…
NASA Technical Reports Server (NTRS)
Brown, Robert B.
1994-01-01
A software pilot model for Space Shuttle proximity operations is developed, utilizing fuzzy logic. The model is designed to emulate a human pilot during the terminal phase of a Space Shuttle approach to the Space Station. The model uses the same sensory information available to a human pilot and is based upon existing piloting rules and techniques determined from analysis of human pilot performance. Such a model is needed to generate numerous rendezvous simulations to various Space Station assembly stages for analysis of current NASA procedures and plume impingement loads on the Space Station. The advantages of a fuzzy logic pilot model are demonstrated by comparing its performance with NASA's man-in-the-loop simulations and with a similar model based upon traditional Boolean logic. The fuzzy model is shown to respond well from a number of initial conditions, with results typical of an average human. In addition, the ability to model different individual piloting techniques and new piloting rules is demonstrated.
A neurocomputational account of cognitive deficits in Parkinson’s disease
Hélie, Sébastien; Paul, Erick J.; Ashby, F. Gregory
2014-01-01
Parkinson’s disease (PD) is caused by the accelerated death of dopamine (DA) producing neurons. Numerous studies documenting cognitive deficits of PD patients have revealed impairments in a variety of tasks related to memory, learning, visuospatial skills, and attention. While there have been several studies documenting cognitive deficits of PD patients, very few computational models have been proposed. In this article, we use the COVIS model of category learning to simulate DA depletion and show that the model suffers from cognitive symptoms similar to those of human participants affected by PD. Specifically, DA depletion in COVIS produced deficits in rule-based categorization, non-linear information-integration categorization, probabilistic classification, rule maintenance, and rule switching. These were observed by simulating results from younger controls, older controls, PD patients, and severe PD patients in five well-known tasks. Differential performance among the different age groups and clinical populations was modeled simply by changing the amount of DA available in the model. This suggests that COVIS may not only be an adequate model of the simulated tasks and phenomena but also more generally of the role of DA in these tasks and phenomena. PMID:22683450
NASA Astrophysics Data System (ADS)
Colasante, Annarita
2017-02-01
This paper presents an investigation of cooperation in a Public Good Game using an Agent Based Model calibrated on experimental data. Starting from the experiment proposed in Colasante and Russo (2016), we analyze the dynamics of cooperation in a Public Good Game where agents receive a heterogeneous income and choose both the level of contribution and the distribution rule. The starting point is the calibration and output validation of the model using the experimental results. Having tested the goodness of fit of the Agent Based Model, we run some policy experiments in order to verify how each distribution rule, i.e., equidistribution, proportional to contribution and progressive, affects the level of contribution in the simulated model. We find that the share of cooperators decreases over time if we exogenously set the equidistribution rule. On the contrary, the share of cooperators converges to 100% if we impose the progressive rule. Finally, the most interesting result refers to the effect of the progressive rule: we observe that, in the case of high inequality, this rule is not able to reduce the heterogeneity of income.
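The three distribution rules can be made concrete with a toy sketch; the public-good multiplier, the endowments, and the inverse-income form of the "progressive" rule are assumptions for illustration, not the paper's calibrated specification.

```python
import numpy as np

contributions = np.array([0.0, 5.0, 15.0])   # hypothetical contributions
incomes = np.array([10.0, 20.0, 40.0])       # heterogeneous endowments
pot = 1.6 * contributions.sum()              # public good with multiplier

def share(rule):
    if rule == "equidistribution":
        return np.full(3, pot / 3)
    if rule == "proportional":               # proportional to contribution
        return pot * contributions / contributions.sum()
    if rule == "progressive":                # larger share to poorer agents
        w = 1 / incomes
        return pot * w / w.sum()

for rule in ("equidistribution", "proportional", "progressive"):
    print(rule, share(rule).round(2))
```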
NASA Astrophysics Data System (ADS)
Enayatifar, Rasul; Sadaei, Hossein Javedani; Abdullah, Abdul Hanan; Lee, Malrey; Isnin, Ismail Fauzi
2015-08-01
Many studies have been conducted on improving the security of digital images in order to protect such data while they are sent over the internet. This work proposes a new approach based on a hybrid model of the Tinkerbell chaotic map, deoxyribonucleic acid (DNA) and cellular automata (CA). DNA rules, a DNA-sequence XOR operator and CA rules are used simultaneously to encrypt the plain-image pixels. To determine the rule number in the DNA sequence and in the CA, a 2-dimensional Tinkerbell chaotic map is employed. Experimental results and computer simulations both confirm that the proposed scheme not only demonstrates outstanding encryption, but also resists various typical attacks.
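A sketch of the chaotic rule-number generator described above. The Tinkerbell map is x' = x² − y² + ax + by, y' = 2xy + cx + dy; the parameter values shown are commonly used ones, and the mapping from iterates to the eight DNA rule numbers is an illustrative choice, not the paper's exact scheme.

```python
def dna_rule_stream(x, y, n, a=0.9, b=-0.6013, c=2.0, d=0.50):
    """Iterate the Tinkerbell map and derive a DNA rule number (1..8)
    from each iterate by digit extraction."""
    rules = []
    for _ in range(n):
        x, y = x * x - y * y + a * x + b * y, 2 * x * y + c * x + d * y
        rules.append(int(abs(x) * 1e6) % 8 + 1)
    return rules

# The initial conditions act as part of the secret key.
print(dna_rule_stream(-0.72, -0.64, 10))
```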
NASA Astrophysics Data System (ADS)
Wei, Ding; Cong-cong, Yu; Chen-hui, Wu; Zheng-yi, Shu
2018-03-01
To analyse the strain localization behavior of geomaterials, forward Euler schemes and the tangent modulus matrix are formulated based on the transversely isotropic yield criterion with the non-coaxial flow rule developed by Lade, and the program code is implemented in the user subroutine (UMAT) of ABAQUS. The influence of the material principal direction on strain localization and on the bearing capacity of the structure is investigated and analyzed. Numerical results show the validity and performance of the proposed model in simulating the strain localization behavior of geostructures.
Making sense of information in noisy networks: human communication, gossip, and distortion.
Laidre, Mark E; Lamb, Alex; Shultz, Susanne; Olsen, Megan
2013-01-21
Information from others can be unreliable. Humans nevertheless act on such information, including gossip, to make various social calculations, thus raising the question of whether individuals can sort through social information to identify what is, in fact, true. Inspired by empirical literature on people's decision-making when considering gossip, we built an agent-based simulation model to examine how well simple decision rules could make sense of information as it propagated through a network. Our simulations revealed that a minimalistic decision-rule 'Bit-wise mode' - which compared information from multiple sources and then sought a consensus majority for each component bit within the message - was consistently the most successful at converging upon the truth. This decision rule attained high relative fitness even in maximally noisy networks, composed entirely of nodes that distorted the message. The rule was also superior to other decision rules regardless of its frequency in the population. Simulations carried out with variable agent memory constraints, different numbers of observers who initiated information propagation, and a variety of network types suggested that the single most important factor in making sense of information was the number of independent sources that agents could consult. Broadly, our model suggests that despite the distortion information is subject to in the real world, it is nevertheless possible to make sense of it based on simple Darwinian computations that integrate multiple sources. Copyright © 2012 Elsevier Ltd. All rights reserved.
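The winning decision rule is simple enough to state in code. A minimal sketch: each message is a tuple of bits, and the consensus takes the majority value at every bit position across all consulted sources (ties here fall to 0, an arbitrary choice).

```python
# 'Bit-wise mode' sketch: for each bit position, take the majority value
# across all consulted sources; messages are equal-length bit tuples.
def bitwise_mode(messages):
    n = len(messages)
    return tuple(int(sum(bits) * 2 > n) for bits in zip(*messages))

# Three noisy reports of a 6-bit "truth" 101101; at every position at least
# two of the three sources agree, so the consensus recovers the original.
reports = [(1, 0, 1, 1, 0, 1),
           (1, 1, 1, 0, 0, 1),
           (0, 0, 1, 1, 0, 1)]
print(bitwise_mode(reports))  # -> (1, 0, 1, 1, 0, 1)
```

The example makes the model's key finding tangible: per-bit majority over independent sources can recover the truth even when every individual message is distorted somewhere.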
Simulator for concurrent processing data flow architectures
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.
1992-01-01
A software simulator capable of simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented is capable of determining the performance of a system without having to build a hardware prototype. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.
Knowledge-Based Motion Control of AN Intelligent Mobile Autonomous System
NASA Astrophysics Data System (ADS)
Isik, Can
An Intelligent Mobile Autonomous System (IMAS), which is equipped with vision and low level sensors to cope with unknown obstacles, is modeled as a hierarchy of path planning and motion control. This dissertation concentrates on the lower level of this hierarchy (Pilot) with a knowledge-based controller. The basis of a theory of knowledge-based controllers is established, using the example of the Pilot level motion control of IMAS. In this context, the knowledge-based controller with a linguistic world concept is shown to be adequate for the minimum time control of an autonomous mobile robot motion. The Pilot level motion control of IMAS is approached in the framework of production systems. The three major components of the knowledge-based control that are included here are the hierarchies of the database, the rule base and the rule evaluator. The database, which is the representation of the state of the world, is organized as a semantic network, using a concept of minimal admissible vocabulary. The hierarchy of rule base is derived from the analytical formulation of minimum-time control of IMAS motion. The procedure introduced for rule derivation, which is called analytical model verbalization, utilizes the concept of causalities to describe the system behavior. A realistic analytical system model is developed and the minimum-time motion control in an obstacle strewn environment is decomposed to a hierarchy of motion planning and control. The conditions for the validity of the hierarchical problem decomposition are established, and the consistency of operation is maintained by detecting the long term conflicting decisions of the levels of the hierarchy. The imprecision in the world description is modeled using the theory of fuzzy sets. The method developed for the choice of the rule that prescribes the minimum-time motion control among the redundant set of applicable rules is explained and the usage of fuzzy set operators is justified. Also included in the dissertation are the description of the computer simulation of Pilot within the hierarchy of IMAS control and the simulated experiments that demonstrate the theoretical work.
Water-Balance Model to Simulate Historical Lake Levels for Lake Merced, California
NASA Astrophysics Data System (ADS)
Maley, M. P.; Onsoy, S.; Debroux, J.; Eagon, B.
2009-12-01
Lake Merced is a freshwater lake located in southwestern San Francisco, California. In the late 1980s and early 1990s, an extended, severe drought impacted the area that resulted in significant declines in Lake Merced lake levels that raised concerns about the long-term health of the lake. In response to these concerns, the Lake Merced Water Level Restoration Project was developed to evaluate an engineered solution to increase and maintain Lake Merced lake levels. The Lake Merced Lake-Level Model was developed to support the conceptual engineering design to restore lake levels. It is a spreadsheet-based water-balance model that performs monthly water-balance calculations based on the hydrological conceptual model. The model independently calculates each water-balance component based on available climate and hydrological data. The model objective was to develop a practical, rule-based approach for the water balance and to calibrate the model results to measured lake levels. The advantage of a rule-based approach is that once the rules are defined, they enhance the ability to then adapt the model for use in future-case simulations. The model was calibrated to historical lake levels over a 70-year period from 1939 to 2009. Calibrating the model over this long historical range tested the model over a variety of hydrological conditions including wet, normal and dry precipitation years, flood events, and periods of high and low lake levels. The historical lake level range was over 16 feet. The model calibration of historical to simulated lake levels had a residual mean of 0.02 feet and an absolute residual mean of 0.42 feet. More importantly, the model demonstrated the ability to simulate both long-term and short-term trends with a strong correlation of the magnitude for both annual and seasonal fluctuations in lake levels. The calibration results demonstrate an improved conceptual understanding of the key hydrological factors that control lake levels, reduce uncertainty in the hydrological conceptual model, and increase confidence in the model’s ability to forecast future lake conditions. The Lake Merced Lake-Level Model will help decision-makers with a straightforward, practical analysis of the major contributions to lake-level declines that can be used to support engineering, environmental and other decisions.
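A minimal sketch of the monthly water-balance bookkeeping, with all components expressed in feet of lake-level equivalent; the component names and numbers are illustrative, not the model's calibrated rules.

```python
# Minimal monthly water balance in the spirit of the abstract.
def simulate_lake_level(level0, months):
    """months: list of dicts with precipitation, evaporation, inflow and
    outflow, each already converted to feet of lake-level equivalent."""
    levels, level = [], level0
    for m in months:
        level += m["precip"] + m["inflow"] - m["evap"] - m["outflow"]
        levels.append(level)
    return levels

record = [{"precip": 0.4, "evap": 0.3, "inflow": 0.2, "outflow": 0.1},
          {"precip": 0.1, "evap": 0.5, "inflow": 0.1, "outflow": 0.1}]
print(simulate_lake_level(10.0, record))  # approximately [10.2, 9.8]
```

In the actual model each term would itself be computed by a calibrated rule from climate and hydrological inputs; the balance step above is only the accounting core.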
Expert systems for automated maintenance of a Mars oxygen production system
NASA Astrophysics Data System (ADS)
Huang, Jen-Kuang; Ho, Ming-Tsang; Ash, Robert L.
1992-08-01
Application of expert system concepts to a breadboard Mars oxygen processor unit have been studied and tested. The research was directed toward developing the methodology required to enable autonomous operation and control of these simple chemical processors at Mars. Failure detection and isolation was the key area of concern, and schemes using forward chaining, backward chaining, knowledge-based expert systems, and rule-based expert systems were examined. Tests and simulations were conducted that investigated self-health checkout, emergency shutdown, and fault detection, in addition to normal control activities. A dynamic system model was developed using the Bond-Graph technique. The dynamic model agreed well with tests involving sudden reductions in throughput. However, nonlinear effects were observed during tests that incorporated step function increases in flow variables. Computer simulations and experiments have demonstrated the feasibility of expert systems utilizing rule-based diagnosis and decision-making algorithms.
Reducing the computational footprint for real-time BCPNN learning
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
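The event-driven speedup rests on the closed-form solution of the low-pass filter: between events a trace simply decays exponentially, so it only needs updating when a spike arrives. A minimal sketch for a single trace follows; the time constant and increment are illustrative values, not the BCPNN parameters.

```python
import math

# Event-driven update of one low-pass-filtered spike trace: instead of
# stepping with a fixed-size Euler integrator, decay analytically from the
# last event to the current spike time, then apply the spike increment.
TAU = 0.02  # trace time constant in seconds (illustrative)

def on_spike(z, t_last, t_spike, increment=1.0):
    z = z * math.exp(-(t_spike - t_last) / TAU)  # closed-form decay
    return z + increment, t_spike

z, t_last = 0.0, 0.0
for t_spike in (0.010, 0.015, 0.100):
    z, t_last = on_spike(z, t_last, t_spike)
print(z)
```

The paper's further optimizations (look-up tables for the exponential, fixed-point state variables) refine exactly this update.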
A framework for the use of agent based modeling to simulate ...
Simulation of human behavior in exposure modeling is a complex task. Traditionally, inter-individual variation in human activity has been modeled by drawing from a pool of single day time-activity diaries such as the US EPA Consolidated Human Activity Database (CHAD). Here, an agent-based model (ABM) is used to simulate population distributions of longitudinal patterns of four macro activities (sleeping, eating, working, and commuting) in populations of adults over a period of one year. In this ABM, an individual is modeled as an agent whose movement through time and space is determined by a set of decision rules. The rules are based on the agent having time-varying "needs" that are satisfied by performing actions. Needs are modeled as increasing over time, and taking an action reduces the need. Need-satisfying actions include sleeping (meeting the need for rest), eating (meeting the need for food), and commuting/working (meeting the need for income). Every time an action is completed, the model determines the next action the agent will take based on the magnitude of each of the agent's needs at that point in time. Different activities advertise their ability to satisfy various needs of the agent (such as food to eat or sleeping in a bed or on a couch). The model then chooses the activity that satisfies the greatest of the agent's needs. When multiple actions could address a need, the model will choose the most effective of the actions (bed over the couch).
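A toy sketch of the decision rule described above, with hypothetical needs, activities, and satisfaction values: needs accumulate over time, and at each decision point the agent picks the activity that best serves its currently greatest need.

```python
# Needs-driven action choice: needs grow over time, each activity
# advertises how much of each need it satisfies, and the agent picks the
# most effective activity for its currently greatest need.
needs = {"rest": 0.2, "food": 0.7, "income": 0.5}
activities = {"sleep": {"rest": 0.8},
              "eat": {"food": 0.9},
              "work": {"income": 0.6}}

def choose_activity(needs, activities):
    urgent = max(needs, key=needs.get)              # greatest need
    # among activities addressing it, take the most effective one
    candidates = {a: s[urgent] for a, s in activities.items() if urgent in s}
    return max(candidates, key=candidates.get)

print(choose_activity(needs, activities))  # -> 'eat'
```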
Rule-based modeling and simulations of the inner kinetochore structure.
Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar
2013-09-01
Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variation of protein complexes. Available classical modeling approaches are often insufficient for the detailed analysis of very large and complex networks. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimation of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome occurs efficiently only at the higher protein concentrations realized during S-phase, but possibly not in G1. Above a certain nucleosome distance, the protein bridge barely formed, pointing towards the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.
Models and Methods for Adaptive Management of Individual and Team-Based Training Using a Simulator
NASA Astrophysics Data System (ADS)
Lisitsyna, L. S.; Smetyuh, N. P.; Golikov, S. P.
2017-05-01
Analysis of research on adaptive individual and team-based training shows that, both in Russia and abroad, individual and team-based training and retraining of AASTM operators usually includes: production training; training of general computer and office equipment skills; simulator training, including virtual simulators which use computers to simulate real-world manufacturing situations; and, as a rule, evaluation of AASTM operators' knowledge determined by the completeness and adequacy of their actions under the simulated conditions. Such an approach to training and retraining of AASTM operators provides only technical training of operators and testing of their knowledge based on assessing their actions in a simulated environment.
Intelligent fault management for the Space Station active thermal control system
NASA Technical Reports Server (NTRS)
Hill, Tim; Faltisco, Robert M.
1992-01-01
The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) are compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project, which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.
Extending radiative transfer models by use of Bayes rule. [in atmospheric science
NASA Technical Reports Server (NTRS)
Whitney, C.
1977-01-01
This paper presents a procedure that extends some existing radiative transfer modeling techniques to problems in atmospheric science where curvature and layering of the medium and dynamic range and angular resolution of the signal are important. Example problems include twilight and limb scan simulations. Techniques that are extended include successive orders of scattering, matrix operator, doubling, Gauss-Seidel iteration, discrete ordinates and spherical harmonics. The procedure for extending them is based on Bayes' rule from probability theory.
The composite load spectra project
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H.; Kurth, R. E.
1990-01-01
Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra expert system implemented to date is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.
Deduction of reservoir operating rules for application in global hydrological models
NASA Astrophysics Data System (ADS)
Coerver, Hubertus M.; Rutten, Martine M.; van de Giesen, Nick C.
2018-01-01
A big challenge in constructing global hydrological models is the inclusion of anthropogenic impacts on the water cycle, such as those caused by dams. Dam operators make decisions based on experience and often uncertain information. In this study, information generally available to dam operators, such as inflow into the reservoir and storage levels, was used to derive fuzzy rules describing the way a reservoir is operated. Using an artificial neural network capable of mimicking fuzzy logic, called ANFIS (adaptive-network-based fuzzy inference system), fuzzy rules linking inflow and storage with reservoir release were determined for 11 reservoirs in central Asia, the US and Vietnam. By varying the input variables of the neural network, different configurations of fuzzy rules were created and tested. It was found that the release from relatively large reservoirs was significantly dependent on information concerning recent storage levels, while release from smaller reservoirs was more dependent on reservoir inflows. Subsequently, the derived rules were used to simulate reservoir release with an average Nash-Sutcliffe coefficient of 0.81.
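A two-rule, Sugeno-style sketch of the kind of fuzzy operating rule the study derives; the membership shapes and consequents are assumptions (the actual rules come from ANFIS training). Release tracks storage when the reservoir is full and tracks inflow when it is low, echoing the reported size dependence.

```python
# Two-rule Sugeno-style fuzzy operating rule (illustrative only).
def mu_low(x):   # triangular membership on a 0..1 normalised variable
    return max(0.0, 1.0 - x)

def mu_high(x):
    return max(0.0, x)

def release(inflow, storage):
    """Normalised inflow/storage in [0, 1] -> normalised release."""
    # Rule 1: IF storage is high THEN release tracks storage.
    # Rule 2: IF storage is low THEN release roughly what flows in.
    w1, w2 = mu_high(storage), mu_low(storage)
    r1, r2 = 0.8 * storage, inflow
    return (w1 * r1 + w2 * r2) / (w1 + w2)

print(release(inflow=0.3, storage=0.9))
```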
NASA Astrophysics Data System (ADS)
Fyta, Maria; Netz, Roland R.
2012-03-01
Using molecular dynamics (MD) simulations in conjunction with the SPC/E water model, we optimize ionic force-field parameters for seven different halide and alkali ions, considering a total of eight ion-pairs. Our strategy is based on simultaneously optimizing single-ion and ion-pair properties, i.e., we first fix ion-water parameters based on single-ion solvation free energies, and in a second step determine the cation-anion interaction parameters (traditionally given by mixing or combination rules) based on the Kirkwood-Buff theory without modification of the ion-water interaction parameters. In doing so, we have introduced scaling factors for the cation-anion Lennard-Jones (LJ) interaction that quantify deviations from the standard mixing rules. For the rather size-symmetric salt solutions involving bromide and chloride ions, the standard mixing rules work fine. On the other hand, for the iodide and fluoride solutions, corresponding to the largest and smallest anion considered in this work, a rescaling of the mixing rules was necessary. For iodide, the experimental activities suggest more tightly bound ion pairing than given by the standard mixing rules, which is achieved in simulations by reducing the scaling factor of the cation-anion LJ energy. For fluoride, the situation is different and the simulations show too large attraction between fluoride and cations when compared with experimental data. For NaF, the situation can be rectified by increasing the cation-anion LJ energy. For KF, it proves necessary to increase the effective cation-anion Lennard-Jones diameter. The optimization strategy outlined in this work can be easily adapted to different kinds of ions.
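The standard Lorentz-Berthelot mixing rules, with the scaling factors the abstract introduces made explicit; parameter values in the example are illustrative, not the optimized force-field values.

```python
import math

# Lorentz-Berthelot mixing with adjustable scaling factors: sigma and
# epsilon of the cation-anion pair are the standard combination-rule
# values times the factors (1.0 = standard rule).
def mixed_lj(sigma_i, eps_i, sigma_j, eps_j, scale_sigma=1.0, scale_eps=1.0):
    sigma_ij = scale_sigma * 0.5 * (sigma_i + sigma_j)   # arithmetic mean
    eps_ij = scale_eps * math.sqrt(eps_i * eps_j)        # geometric mean
    return sigma_ij, eps_ij

# Illustrative parameters (nm, kJ/mol); scale_eps < 1 weakens the
# cation-anion attraction, the direction needed for the fluoride case.
print(mixed_lj(0.23, 0.45, 0.51, 0.18, scale_eps=0.85))
```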
2015-01-01
still necessary. One such model that could bridge this gap is discrete dislocation dynamics (DDD) simulations, in which both the time- and length-scale ... limitations from atomic simulations are greatly reduced. Over the past two decades, two-dimensional (2D) and three-dimensional (3D) DDD methods have ... dislocation ensembles according to physics-based rules [27-34]. The physics that can be incorporated in DDD simulations can range ...
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by error in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, we propose quantifying the uncertainty of VCs with a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e., DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis-related theories of geographic information science.
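For reference, a sketch of the trapezoidal double rule (TDR) applied to a regular grid of heights: the composite trapezoidal weights (1, 2, ..., 2, 1) are applied along both axes. Simpson's double rule would replace these with the 1, 4, 2, ..., 4, 1 pattern.

```python
import numpy as np

# Composite trapezoidal double rule for the volume under a regular grid
# DEM: weights 1-2-...-2-1 applied along both axes.
def volume_tdr(z, dx, dy):
    wx = np.ones(z.shape[1]); wx[1:-1] = 2.0
    wy = np.ones(z.shape[0]); wy[1:-1] = 2.0
    return (dx * dy / 4.0) * (wy @ z @ wx)

# Sanity check on a flat surface: a 4x4 grid of height 1 spans a 3x3 area.
z = np.ones((4, 4))
print(volume_tdr(z, dx=1.0, dy=1.0))  # 9.0 = area 3x3 * height 1
```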
Numerical Simulation of the Detonation of Condensed Explosives
NASA Astrophysics Data System (ADS)
Wang, Cheng; Ye, Ting; Ning, Jianguo
The detonation process of a condensed explosive was simulated using a finite difference method. Euler equations were applied to describe the detonation flow field, an ignition-and-growth model for the chemical reaction, and the Jones-Wilkins-Lee (JWL) equation of state for the state of the explosive and detonation products. Based on the simple mixture rule that assumes the reacting explosive to be a mixture of reactant and product components, 1D and 2D codes were developed to simulate the detonation process of the high explosive PBX9404. The numerical results are in good agreement with the experimental results, which demonstrates that the finite difference method, mixture rule and chemical reaction model proposed in this paper are adequate and feasible.
Cost efficiency of the non-associative flow rule simulation of an industrial component
NASA Astrophysics Data System (ADS)
Galdos, Lander; de Argandoña, Eneko Saenz; Mendiguren, Joseba
2017-10-01
In the last decade, the metal forming industry has become more and more competitive. In this context, FEM modeling has become a primary source of information for component and process design. Numerous researchers have focused on improving the accuracy of the material models implemented in FEM in order to improve the efficiency of the simulations. Aimed at increasing the efficiency of anisotropic behavior modelling, in recent years the use of non-associative flow rule (NAFR) models has been presented as an alternative to the classic associative flow rule (AFR) models. In this work, the cost efficiency of the flow rule model used has been numerically analyzed by simulating an industrial drawing operation with two models of the same degree of flexibility: one AFR model and one NAFR model. From the present study, it is concluded that the flow rule has a negligible influence on the final drawing prediction; this is mainly driven by the model parameter identification procedure. Even though the NAFR formulation is more complex than the AFR one, the present study shows that the total simulation time using explicit FE solvers is reduced without loss of accuracy. Furthermore, NAFR formulations have an advantage over AFR formulations in parameter identification because the formulation decouples the yield stress and the Lankford coefficients.
Empirical Analysis and Refinement of Expert System Knowledge Bases
1988-08-31
refinement. Both a simulated case generation program and a random rule basher were developed to enhance rule refinement experimentation. Substantial ... the second fiscal year 88 objective was fully met. [Diagram: Rule Refinement System - Simulated Rule Basher, Case Generator, Stored Cases, Expert System Knowledge Base] ... generated until the rule is satisfied. Cases may be randomly generated for a given rule or hypothesis. Rule Basher: Given that one has a correct
Zhang, Hang; Xu, Qingyan; Liu, Baicheng
2014-01-01
The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as the mushy zone width and the temperature difference at the cast-mold interface, were chosen as the input variables. The input variables were processed by a multivariable fuzzy rule base to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rules were built based on the structural features of the casting (such as the cross-sectional area), the delay time of the temperature response to changes in v, and the professional experience of operators. The fuzzy control model coupled with the CA-FD method could then be used to optimize v in real time during the manufacturing process. The optimized process proved more flexible and adaptive, yielding a steady and stray-grain-free DS process. PMID:28788535
Montovan, Kathryn J; Karst, Nathaniel; Jones, Laura E; Seeley, Thomas D
2013-11-07
In the beeswax combs of honey bees, the cells of brood, pollen, and honey have a consistent spatial pattern that is sustained throughout the life of a colony. This spatial pattern is believed to emerge from simple behavioral rules that specify how the queen moves, where foragers deposit honey/pollen, and how honey/pollen is consumed from cells. Prior work has shown that a set of such rules can explain the formation of the allocation pattern starting from an empty comb. We show that these rules cannot maintain the pattern once the brood start to vacate their cells, and we propose new, biologically realistic rules that better sustain the observed allocation pattern. We analyze the three resulting models by performing hundreds of simulation runs over many gestational periods and a wide range of parameter values. We develop new metrics for pattern assessment and employ them in analyzing pattern retention over each simulation run. Applied to our simulation results, these metrics show that altering an accepted model for honey/pollen consumption based on local information can stabilize the cell allocation pattern over time. We also show that adding global information, by biasing the queen's movements towards the center of the comb, expands the parameter regime over which pattern retention occurs. © 2013 Published by Elsevier Ltd. All rights reserved.
Analyzing Strategic Business Rules through Simulation Modeling
NASA Astrophysics Data System (ADS)
Orta, Elena; Ruiz, Mercedes; Toro, Miguel
Service Oriented Architecture (SOA) holds promise for business agility, since it allows business processes to change to meet new customer demands or market needs without causing a cascade of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed at helping find a good configuration for strategic business objectives and IT parameters. The paper includes a case study in which a simulation model is built to support business decision-making in a context where finding a good configuration of business parameters and performance is too complex to analyze by trial and error.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
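A heavily simplified sketch of the kind of per-synapse state the abstract refers to: BCPNN maintains low-pass-filtered presynaptic, postsynaptic, and coincidence probability traces and computes the weight as a log-odds ratio. The full rule uses a cascade of Z/E/P traces; the single-stage filter, time constant, and eps floor here are illustrative assumptions:

```python
import math

def decay(trace, spike, tau, dt=1.0):
    # First-order low-pass filter driven by a binary spike indicator.
    return trace + dt * ((1.0 if spike else 0.0) - trace) / tau

def bcpnn_step(pi, pj, pij, pre, post, tau_p=1000.0, eps=1e-3):
    # Update probability traces, then form the Bayesian weight
    # w = log(p_ij / (p_i * p_j)), floored by eps to avoid log(0).
    pi, pj = decay(pi, pre, tau_p), decay(pj, post, tau_p)
    pij = decay(pij, pre and post, tau_p)
    w = math.log((pij + eps * eps) / ((pi + eps) * (pj + eps)))
    return pi, pj, pij, w

pi = pj = pij = 0.01
for t in range(1000):
    pi, pj, pij, w = bcpnn_step(pi, pj, pij, pre=(t % 10 == 0), post=(t % 10 == 0))
print(w)  # correlated firing drives the weight positive
```

Updating such traces for every synapse at every time-step is exactly the cost the paper's event-based analytical solution avoids.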
A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.
2018-02-01
Oil pipeline networks are among the most important energy transportation facilities, but accidents in them may result in serious disasters. Analysis models for such accidents have been established mainly with three methods: event trees, accident simulation, and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis, but not all of the important influencing factors have been considered, and no deployment rule for the factors has been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including key environmental conditions and emergency response, are considered in this model. Moreover, the paper introduces a deployment rule for these factors. The model can be used for probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
Derivation of optimal joint operating rules for multi-purpose multi-reservoir water-supply system
NASA Astrophysics Data System (ADS)
Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wang, Chao; Lei, Xiao-hui; Xiong, Yi-song; Zhang, Wei
2017-08-01
The derivation of a joint operating policy is a challenging task for a multi-purpose multi-reservoir system. This study proposed an aggregation-decomposition model to guide the joint operation of a multi-purpose multi-reservoir system, including: (1) an aggregated model based on an improved hedging rule to ensure the long-term water-supply operating benefit; (2) a decomposed model to allocate the limited release among individual reservoirs so as to maximize the total profit of the current period; and (3) a double-layer simulation-based optimization model to obtain the optimal time-varying hedging rules using the non-dominated sorting genetic algorithm II, with the objectives of minimizing the maximum water deficit and maximizing water-supply reliability. The water-supply system of the Li River in Guangxi Province, China, was selected for the case study. The results show that the operating policy proposed in this study outperforms conventional operating rules and the aggregated standard operating policy for both water supply and hydropower generation, owing to the hedging mechanism and the effective coordination among multiple objectives.
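To make the hedging idea concrete, here is a minimal sketch of a one-point hedging policy of the general family the paper optimizes; the trigger threshold and hedging fraction are illustrative assumptions, not the paper's calibrated rules:

```python
def hedged_release(available, demand, trigger=1.2, hedge=0.7):
    """Release under a simple one-point hedging policy.

    available -- storage plus forecast inflow for the period
    demand    -- target delivery
    trigger   -- hedging starts when available < trigger * demand
    hedge     -- fraction of demand delivered while hedging
    """
    if available >= trigger * demand:
        return demand                      # standard operating policy region
    return min(available, hedge * demand)  # ration now to reduce later deficits

print(hedged_release(available=100.0, demand=90.0))  # hedging active -> 63.0
```

A time-varying rule, as in the paper, would let trigger and hedge change across periods; the NSGA-II layer then searches those parameters against the two deficit/reliability objectives.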
Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes
NASA Astrophysics Data System (ADS)
Sheer, D. P.
2008-12-01
For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision-making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crops and for M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, its constraints and objectives, in any time step is conditional: it changes based on the values of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short-term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short-term objective function is a surrogate for achieving long-term multi-objective results. The long-term performance of any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints, and objectives used to determine the formulation of the short-term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is underway on a wrapper that will employ a genetic algorithm to improve the form of the rule (conditions, constraints, and short-term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long-term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions while accounting for uncertainty, at least implicitly. The author asserts that real-world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real-world examples.
An agent-based model for queue formation of powered two-wheelers in heterogeneous traffic
NASA Astrophysics Data System (ADS)
Lee, Tzu-Chang; Wong, K. I.
2016-11-01
This paper presents an agent-based model (ABM) for simulating the queue formation of powered two-wheelers (PTWs) in heterogeneous traffic at a signalized intersection. The main novelty is that the proposed interaction rule describing the position choice behavior of PTWs when queuing in heterogeneous traffic can capture the stochastic nature of the decision making process. The interaction rule is formulated as a multinomial logit model, which is calibrated by using a microscopic traffic trajectory dataset obtained from video footage. The ABM is validated against the survey data for the vehicular trajectory patterns, queuing patterns, queue lengths, and discharge rates. The results demonstrate that the proposed model is capable of replicating the observed queue formation process for heterogeneous traffic.
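A minimal sketch of a multinomial logit position-choice rule of the type calibrated in the paper; the utility features (lateral gap, distance to the stop line) and coefficients are illustrative assumptions, not the calibrated model:

```python
import math

def mnl_probabilities(utilities):
    # Softmax over the systematic utilities of the candidate positions.
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# Example: utility = b1*lateral_gap + b2*distance_to_stopline per candidate.
beta = (0.8, -0.5)                                # assumed coefficients
candidates = [(1.2, 3.0), (0.6, 1.0), (2.0, 4.5)]  # (gap m, distance m)
u = [beta[0] * g + beta[1] * d for g, d in candidates]
print(mnl_probabilities(u))
```

Sampling a position from these probabilities at each decision point is what gives the ABM the stochastic queuing behavior described above.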
Analysis of habitat-selection rules using an individual-based model
Steven F. Railsback; Bret C. Harvey
2002-01-01
Despite their promise for simulating natural complexity, individual-based models (IBMs) are rarely used for ecological research or resource management. Few IBMs have been shown to reproduce realistic patterns of behavior by individual organisms. To test our IBM of stream salmonids and draw conclusions about foraging theory, we analyzed the IBM's ability to ...
Modelling irradiation-induced softening in BCC iron by crystal plasticity approach
NASA Astrophysics Data System (ADS)
Xiao, Xiazi; Terentyev, Dmitry; Yu, Long; Song, Dingkun; Bakaev, A.; Duan, Huiling
2015-11-01
A crystal plasticity model (CPM) for BCC iron that accounts for radiation-induced strain softening is proposed. The CPM is based on the plastically-driven and thermally-activated removal of dislocation loops. Atomistic simulations are applied to parameterize dislocation-defect interactions. By combining experimental microstructures, defect-hardening/absorption rules from atomistic simulations, and a CPM fitted to the properties of non-irradiated iron, the model achieves good agreement with experimental data on radiation-induced strain softening and the flow stress increase under neutron irradiation.
Reinforcement Learning in a Nonstationary Environment: The El Farol Problem
NASA Technical Reports Server (NTRS)
Bell, Ann Maria
1999-01-01
This paper examines the performance of simple learning rules in a complex adaptive system based on a coordination problem modeled on the El Farol problem. The key features of the El Farol problem are that it typically involves a medium number of agents and that agents' pay-off functions have a discontinuous response to increased congestion. First we consider a single adaptive agent facing a stationary environment. We demonstrate that the simple learning rules proposed by Roth and Erev can be extremely sensitive to small changes in the initial conditions and that events early in a simulation can affect the performance of the rule over a relatively long time horizon. In contrast, a reinforcement learning rule based on standard practice in the computer science literature converges rapidly and robustly. The situation is reversed when multiple adaptive agents interact: the RE algorithms often converge rapidly to a stable average aggregate attendance despite the slow and erratic behavior of individual learners, while the CS-based learners frequently over-attend in the early and intermediate terms. The symmetric mixed-strategy equilibrium is unstable: all three learning rules ultimately tend towards pure strategies or stabilize in the medium term at non-equilibrium probabilities of attendance. The brittleness of the algorithms in different contexts emphasizes the importance of thorough and thoughtful examination of simulation-based results.
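A minimal sketch of the basic Roth-Erev reinforcement rule discussed above: propensities accumulate received pay-offs and choices are made in proportion to them. The stylized pay-offs, and the omission of the full rule's forgetting and experimentation parameters, are simplifying assumptions:

```python
import random

def choose(propensities):
    # Sample an action with probability proportional to its propensity.
    total = sum(propensities)
    r, acc = random.uniform(0.0, total), 0.0
    for k, q in enumerate(propensities):
        acc += q
        if r <= acc:
            return k
    return len(propensities) - 1

def roth_erev_update(propensities, action, payoff):
    # Basic RE rule: the chosen action's propensity grows by the pay-off.
    propensities[action] += payoff
    return propensities

q = [1.0, 1.0]                          # e.g. attend bar / stay home
for t in range(100):
    a = choose(q)
    payoff = 1.0 if a == 1 else 0.5     # stylized El Farol pay-off, assumed
    roth_erev_update(q, a, payoff)
print(q)
```

The sensitivity to initial conditions the paper reports comes directly from this structure: early lucky pay-offs inflate a propensity and then keep attracting probability mass.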
Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko
2014-01-01
In recent years, sensors have become popular, and the Home Energy Management System (HEMS) plays an important role in saving energy without decreasing QoL (Quality of Life). Many rule-based HEMSs have been proposed, and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern-matching algorithm for IF-THEN rules. We have proposed a rule-based HEMS using the Rete algorithm. In the proposed system, rules for managing energy are processed by networked smart taps, and the loads for processing rules and collecting data are distributed among the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules with the Rete algorithm. In this paper, we evaluate the proposed system by simulation. In the simulation environment, each rule is processed by the smart tap that relates to the action part of that rule. In addition, we implemented the proposed system as a HEMS using smart taps.
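For contrast with Rete, here is a minimal naive forward-chaining matcher for IF-THEN rules; a Rete implementation would instead cache partial matches across cycles so that only changed facts are re-evaluated. The facts and the single rule are illustrative:

```python
facts = {"room": "living", "occupied": False, "power_w": 120}

rules = [
    # (name, condition over facts, action on facts)
    ("standby_cut",
     lambda f: not f["occupied"] and f["power_w"] > 50,
     lambda f: f.update(power_w=0)),
]

def forward_chain(facts, rules):
    # Re-evaluate every rule after each firing until a fixed point;
    # this repeated full scan is exactly the cost Rete's network avoids.
    fired = True
    while fired:
        fired = False
        for name, cond, act in rules:
            if cond(facts):
                act(facts)
                fired = True
    return facts

print(forward_chain(facts, rules))  # standby load switched off
```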
NASA Astrophysics Data System (ADS)
Chang, Ya-Ting; Chang, Li-Chiu; Chang, Fi-John
2005-04-01
To bridge the gap between academic research and actual operation, we propose an intelligent control system for reservoir operation. The methodology includes two major processes: knowledge acquisition and implementation, and the inference system. In this study, a genetic algorithm (GA) is used to extract knowledge from the historical inflow data with a design objective function, and a fuzzy rule base (FRB) is used to extract knowledge from the operating rule curves. The adaptive network-based fuzzy inference system (ANFIS) is then used to implement the knowledge, to create the fuzzy inference system, and to estimate the optimal reservoir operation. To investigate its applicability and practicability, the Shihmen reservoir, Taiwan, is used as a case study. For the purpose of comparison, a simulation of the currently used M-5 operating rule curve is also performed. The results demonstrate that (1) the GA is an efficient way to search for optimal input-output patterns, (2) the FRB can extract the knowledge from the operating rule curves, and (3) the ANFIS models built on different types of knowledge produce much better performance than the traditional M-5 curves in real-time reservoir operation. Moreover, we show that the model can be made more intelligent for reservoir operation if more information (or knowledge) is involved.
DAMS: A Model to Assess Domino Effects by Using Agent-Based Modeling and Simulation.
Zhang, Laobing; Landucci, Gabriele; Reniers, Genserik; Khakzad, Nima; Zhou, Jianfeng
2017-12-19
Historical data analysis shows that escalation accidents, so-called domino effects, have an important role in disastrous accidents in the chemical and process industries. In this study, an agent-based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at probabilistic network levels, the agent-based modeling technique explains the domino effects from a bottom-up perspective. In this approach, the installations involved in a domino effect are modeled as agents whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher-level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large-scale complicated cases. © 2017 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Yu, Nanpeng
As U.S. regional electricity markets continue to refine their market structures, designs and rules of operation in various ways, two critical issues are emerging. First, although much experience has been gained and costly and valuable lessons have been learned, there is still a lack of a systematic platform for evaluation of the impact of a new market design from both engineering and economic points of view. Second, the transition from a monopoly paradigm characterized by a guaranteed rate of return to a competitive market created various unfamiliar financial risks for various market participants, especially for the Investor Owned Utilities (IOUs) and Independent Power Producers (IPPs). This dissertation uses agent-based simulation methods to tackle the market rules evaluation and financial risk management problems. The California energy crisis in 2000-01 showed what could happen to an electricity market if it did not go through a comprehensive and rigorous testing before its implementation. Due to the complexity of the market structure, strategic interaction between the participants, and the underlying physics, it is difficult to fully evaluate the implications of potential changes to market rules. This dissertation presents a flexible and integrative method to assess market designs through agent-based simulations. Realistic simulation scenarios on a 225-bus system are constructed for evaluation of the proposed PJM-like market power mitigation rules of the California electricity market. Simulation results show that in the absence of market power mitigation, generation company (GenCo) agents facilitated by Q-learning are able to exploit the market flaws and make significantly higher profits relative to the competitive benchmark. The incorporation of PJM-like local market power mitigation rules is shown to be effective in suppressing the exercise of market power. The importance of financial risk management is exemplified by the recent financial crisis. In this dissertation, basic financial risk management concepts relevant for wholesale electric power markets are carefully explained and illustrated. In addition, the financial risk management problem in wholesale electric power markets is generalized as a four-stage process. Within the proposed financial risk management framework, the critical problem of financial bilateral contract negotiation is addressed. This dissertation analyzes a financial bilateral contract negotiation process between a generating company and a load-serving entity in a wholesale electric power market with congestion managed by locational marginal pricing. Nash bargaining theory is used to model a Pareto-efficient settlement point. The model predicts negotiation results under varied conditions and identifies circumstances in which the two parties might fail to reach an agreement. Both analysis and agent-based simulation are used to gain insight regarding how relative risk aversion and biased price estimates influence negotiated outcomes. These results should provide useful guidance to market participants in their bilateral contract negotiation processes.
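A minimal sketch of the tabular Q-learning update such GenCo agents typically use: discretized market states, bid mark-up actions, and the standard one-step update. The state/action discretization, learning rate, discount factor, and profit number are illustrative assumptions:

```python
import random

ACTIONS = [1.0, 1.1, 1.25, 1.5]          # bid mark-ups over marginal cost
Q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}   # 3 demand states, assumed

def q_update(s, a, reward, s_next, alpha=0.1, gamma=0.95):
    # Standard one-step Q-learning backup.
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

def epsilon_greedy(s, eps=0.1):
    # Explore occasionally; otherwise exploit the current value estimates.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

s = 0
a = epsilon_greedy(s)
q_update(s, a, reward=120.0, s_next=1)   # profit from the cleared market, assumed
```

In the mitigation experiments described above, the interesting question is which mark-ups such agents converge to with and without the PJM-like rules in place.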
Generic framework for mining cellular automata models on protein-folding simulations.
Diaz, N; Tischer, I
2016-05-13
Cellular automata model identification is an important way of building simplified simulation models. In this study, we describe a generic architectural framework to ease the development of new metaheuristic-based algorithms for cellular automata model identification in protein-folding trajectories. Our framework was developed with a methodology based on design patterns that improves the development experience for new algorithms. The usefulness of the proposed framework is demonstrated by the implementation of four algorithms, able to obtain extremely precise cellular automata models of the protein-folding process with a protein contact map representation. Dynamic rules obtained by the proposed approach are discussed, and future uses for the new tool are outlined.
Criterion learning in rule-based categorization: Simulation of neural mechanism and new data
Helie, Sebastien; Ell, Shawn W.; Filoteo, J. Vincent; Maddox, W. Todd
2015-01-01
In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define ‘long’ and ‘short’). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL’s implications for future research on rule learning. PMID:25682349
Agent-Based Modeling of Cancer Stem Cell Driven Solid Tumor Growth.
Poleszczuk, Jan; Macklin, Paul; Enderling, Heiko
2016-01-01
Computational modeling of tumor growth has become an invaluable tool to simulate complex cell-cell interactions and emerging population-level dynamics. Agent-based models are commonly used to describe the behavior and interaction of individual cells in different environments. Behavioral rules can be informed and calibrated by in vitro assays, and emerging population-level dynamics may be validated with both in vitro and in vivo experiments. Here, we describe the design and implementation of a lattice-based agent-based model of cancer stem cell driven tumor growth.
Lehrer, Roni; Schumacher, Gijs
2018-01-01
The policy positions parties choose are central to both attracting voters and forming coalition governments. How then should parties choose positions to best represent voters? Laver and Sergenti show that in an agent-based model with boundedly rational actors a decision rule (Aggregator) that takes the mean policy position of its supporters is the best rule to achieve high congruence between voter preferences and party positions. But this result only pertains to representation by the legislature, not representation by the government. To evaluate this we add a coalition formation procedure with boundedly rational parties to the Laver and Sergenti model of party competition. We also add two new decision rules that are sensitive to government formation outcomes rather than voter positions. We develop two simulations: a single-rule one in which parties with the same rule compete and an evolutionary simulation in which parties with different rules compete. In these simulations we analyze party behavior under a large number of different parameters that describe real-world variance in political parties' motives and party system characteristics. Our most important conclusion is that Aggregators also produce the best match between government policy and voter preferences. Moreover, even though citizens often frown upon politicians' interest in the prestige and rents that come with winning political office (office pay-offs), we find that citizens actually receive better representation by the government if politicians are motivated by these office pay-offs in contrast to politicians with ideological motivations (policy pay-offs). Finally, we show that while more parties are linked to better political representation, how parties choose policy positions affects political representation as well. Overall, we conclude that to understand variation in the quality of political representation scholars should look beyond electoral systems and take into account variation in party behavior as well.
Automated Classification of Phonological Errors in Aphasic Language
Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.
1984-01-01
Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represents a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification; it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.
Morineau, Thierry; Meineri, Sebastien; Chapelain, Pascal
2017-03-01
Several methods and theoretical frameworks have been proposed for efficient debriefing after clinical simulation sessions. In these studies, however, the cognitive processes underlying the debriefing stage are not directly addressed. Cognitive control constitutes a conceptual link between behavior and reflection on behavior that makes it possible to apprehend debriefing cognitively. Our goal was to analyze cognitive control from verbal reports using the Skill-Rule-Knowledge model. This model considers different cognitive control levels, from skill-based to rule-based and knowledge-based control. An experiment was conducted with teams of nursing students who were confronted with emergency scenarios during high-fidelity simulation sessions. Participants were asked to describe their actions either in the course of the simulation scenarios or during the debriefing stage. 52 nursing students working in 26 pairs participated in this study. Participants were divided into two groups: an "in situ" group, in which they had to describe their actions at different moments of a deteriorating-patient scenario, and a "debriefing" group, in which, at the same moments, they had to describe their actions displayed on a video recording. In addition to the cognitive analysis, the teams' clinical performance was measured. The cognitive control level in the debriefing group was generally higher than in the in situ group. Good team performance was associated with a high level of cognitive control after a patient's significant state deterioration. These findings are in conformity with the Skill-Rule-Knowledge model. The debriefing stage allows a deeper reflection on action compared with the in situ condition. If an abnormal event such as an adverse event occurs, participants' mental processes tend to migrate towards knowledge-based control. This migration particularly concerns students with the best clinical performance. Thus, this cognitive framework can help to strengthen the analysis of verbal reports. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhu, Wei; Timmermans, Harry
2011-06-01
Models of geographical choice behavior have been predominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments, in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive, and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of the heuristic models are slightly better than those of the multinomial logit models.
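A minimal sketch of the three classical heuristics named above (before the paper's threshold-heterogeneity extension); the attribute names and thresholds are illustrative assumptions:

```python
def conjunctive(option, thresholds):
    # Accept only if every attribute clears its threshold.
    return all(option[k] >= t for k, t in thresholds.items())

def disjunctive(option, thresholds):
    # Accept if any attribute clears its threshold.
    return any(option[k] >= t for k, t in thresholds.items())

def lexicographic(options, priority):
    # Compare attributes in priority order; keep the best options on the
    # first attribute that discriminates between them.
    best = options
    for attr in priority:
        top = max(o[attr] for o in best)
        best = [o for o in best if o[attr] == top]
        if len(best) == 1:
            break
    return best[0]

shops = [{"variety": 3, "closeness": 5}, {"variety": 4, "closeness": 2}]
print(conjunctive(shops[0], {"variety": 2, "closeness": 3}))  # True
print(lexicographic(shops, ["variety", "closeness"]))          # second shop
```

Threshold heterogeneity, in the paper's sense, amounts to letting the threshold values vary across decision makers rather than fixing one set for the whole population.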
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed
2018-05-01
Efficacious operation of dam and reservoir systems can guarantee not only protection against natural hazards but also operating rules that meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources is unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been significantly utilized for attaining robust models that can handle different stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of developing AI for reservoir inflow forecasting and for the prediction of evaporation from a reservoir, the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of AI simulation methods integrated with optimization methods is reported. Future research on the potential of utilizing new innovative methods based on AI techniques for reservoir simulation and optimization models is also discussed. Finally, a new mathematical procedure is recommended to accomplish a realistic evaluation of whole-optimization-model performance (reliability, resilience, and vulnerability indices).
A Cellular Automata-based Model for Simulating Restitution Property in a Single Heart Cell.
Sabzpoushan, Seyed Hojjat; Pourhasanzade, Fateme
2011-01-01
Ventricular fibrillation is the cause of most sudden cardiac deaths. Restitution is one of the specific properties of the ventricular cell. Recent findings have clearly demonstrated the correlation between the slope of the restitution curve and ventricular fibrillation; modeling cellular restitution therefore gains high importance. A cellular automaton is a powerful tool for simulating complex phenomena in a simple language: a lattice of cells where the behavior of each cell is determined by the behavior of its neighboring cells as well as the automaton rule. In this paper, a simple model is presented for simulating the restitution property of a single cardiac cell using cellular automata. First, two state variables, action potential and recovery, are introduced into the automaton model. Second, the automaton rule is determined, and the recovery variable is defined in such a way that restitution develops. To evaluate the proposed model, the restitution curve generated in our study is compared with restitution curves from the experimental findings of valid sources. Our findings indicate that the presented model is not only capable of simulating restitution in a cardiac cell, but also possesses the capability of regulating the restitution curve.
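A minimal single-cell sketch of the restitution idea: the action potential duration (APD) assigned at each paced beat depends on the preceding diastolic interval (DI). The saturating APD = f(DI) form, its constants, and the pacing protocol are illustrative assumptions, not the paper's automaton rule:

```python
import math

def apd_from_di(di, apd_max=30, tau=15.0):
    # Saturating restitution curve APD = f(DI); form and constants assumed.
    return apd_max * (1.0 - math.exp(-di / tau))

def paced_cell(stim_period, steps=300):
    state, timer, di, apds = "rest", 0, stim_period, []
    for t in range(steps):
        if state == "rest" and t % stim_period == 0:
            timer = max(1, round(apd_from_di(di)))  # APD set by preceding DI
            apds.append(timer)
            state = "excited"
        elif state == "excited":
            timer -= 1
            if timer == 0:
                state, di = "rest", 0
        if state == "rest":
            di += 1                                  # diastolic interval grows
    return apds

print(paced_cell(40), paced_cell(25))  # faster pacing -> shorter APDs
```

Plotting the recorded APDs against the preceding DIs recovers the restitution curve, which is how such a model would be compared against experimental curves.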
Bankhead, Armand; Magnuson, Nancy S; Heckendorn, Robert B
2007-06-07
A computer simulation is used to model ductal carcinoma in situ, a form of non-invasive breast cancer. The simulation uses known histological morphology, cell types, and stochastic cell proliferation to evolve tumorous growth within a duct. The ductal simulation is based on a hybrid cellular automaton design using genetic rules to determine each cell's behavior. The genetic rules are a mutable abstraction that demonstrate genetic heterogeneity in a population. Our goal was to examine the role (if any) that recently discovered mammary stem cell hierarchies play in genetic heterogeneity, DCIS initiation and aggressiveness. Results show that simpler progenitor hierarchies result in greater genetic heterogeneity and evolve DCIS significantly faster. However, the more complex progenitor hierarchy structure was able to sustain the rapid reproduction of a cancer cell population for longer periods of time.
NASA Astrophysics Data System (ADS)
Huang, Yin; Chen, Jianhua; Xiong, Shaojun
2009-07-01
Mobile learning (M-learning) gives many learners the advantages of both traditional learning and e-learning. Web-based mobile-learning systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem and causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a web-based mobile-learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing algorithm), which combines the advantages of the genetic algorithm and the simulated annealing algorithm to mine association rules. The paper first takes advantage of a parallel genetic algorithm and a simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments show that the proposed method is superior to the Apriori algorithm in this mobile-learning system.
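A minimal sketch of the simulated-annealing acceptance step on which such a hybrid GA/SA rule miner relies; the scalar rule score (e.g. support times confidence), the proposal move, and the cooling schedule are illustrative assumptions:

```python
import math
import random

def accept(score_old, score_new, temperature):
    # Always accept improvements; accept worse candidate rules with
    # Boltzmann probability exp(-delta / T), which shrinks as T cools.
    if score_new >= score_old:
        return True
    return random.random() < math.exp((score_new - score_old) / temperature)

# Toy search over a scalar rule score standing in for support * confidence.
current, T = 0.40, 1.0
for step in range(200):
    candidate = max(0.0, min(1.0, current + random.gauss(0.0, 0.05)))
    if accept(current, candidate, T):
        current = candidate
    T *= 0.98                     # geometric cooling schedule, assumed
print(current)
```

In the hybrid scheme, the GA supplies the candidate rules (via crossover and mutation) and this acceptance test keeps the population from collapsing into a local optimum too early.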
Automatic Detection of Electric Power Troubles (ADEPT)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint; Brady, Mike; Ford, Donnie
1988-01-01
ADEPT is an expert system that integrates knowledge from three different suppliers to offer an advanced fault-detection system, and is designed for two modes of operation: real-time fault isolation and simulated modeling. Real-time fault isolation of components is accomplished on a power system breadboard through the Fault Isolation Expert System (FIES II) interface with a rule system developed in-house. Faults are quickly detected and displayed, and the rules and chain of reasoning optionally provided on a laser printer. This system consists of a simulated Space Station power module using direct-current power supplies for solar arrays on three power busses. For tests of the system's ability to locate faults inserted via switches, loads are configured by an INTEL microcomputer and the Symbolics artificial intelligence development system. As these loads are resistive in nature, Ohm's Law is used as the basis for rules by which faults are located. The three-bus system can correct faults automatically where there is a surplus of power available on any of the three busses. Techniques developed and used can be applied readily to other control systems requiring rapid intelligent decisions. Simulated modeling, used for theoretical studies, is implemented using a modified version of Kennedy Space Center's KATE (Knowledge-Based Automatic Test Equipment), FIES II windowing, and an ADEPT knowledge base. A load scheduler and a fault recovery system are currently under development to support both modes of operation.
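A minimal sketch of the Ohm's-law-based rule idea the abstract describes: compare the measured bus current with the value expected from the configured resistive loads and flag discrepancies. The breadboard topology, load values, and tolerance are illustrative assumptions:

```python
def expected_current(bus_voltage, load_resistances):
    # Parallel resistive loads: I = V * sum(1/R_k).
    return bus_voltage * sum(1.0 / r for r in load_resistances)

def fault_detected(measured_i, bus_voltage, loads, tol=0.05):
    # Flag a fault when the measured current deviates from the Ohm's-law
    # prediction by more than the tolerance band.
    expected = expected_current(bus_voltage, loads)
    return abs(measured_i - expected) > tol * expected

print(fault_detected(measured_i=2.0, bus_voltage=120.0,
                     loads=[100.0, 60.0]))  # expected 3.2 A -> fault flagged
```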
Data driven model generation based on computational intelligence
NASA Astrophysics Data System (ADS)
Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus
2010-05-01
The simulation of discharges at a local gauge or the modeling of large-scale river catchments is central to estimation and decision tasks in hydrological research and in practical applications like flood prediction or water resource management. However, modeling such processes with analytical or conceptual approaches is made difficult by both the complexity of process relations and the heterogeneity of processes. It has been shown many times that unknown or assumed process relations can in principle be described by computational methods, and that system models can be derived automatically from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods, including fuzzy logic and artificial neural networks (ANN), in a comprehensive and automated manner. Methods: We consider a closed concept for data-driven development of hydrological models based on measured (experimental) data. The concept is centered on a fuzzy system using rules of Takagi-Sugeno-Kang type, which formulate the input-output relation in a generic structure like R_i: IF q(t) = low AND ... THEN q(t+Δt) = a_i0 + a_i1 q(t) + a_i2 p(t−Δt_i1) + a_i3 p(t+Δt_i2) + .... The rule's premise part (IF) describes process states involving available process information, e.g., the actual outlet q(t) is low, where low is one of several fuzzy sets defined over the variable q(t). The rule's conclusion (THEN) estimates the expected outlet q(t+Δt) by a linear function over selected system variables, e.g., the actual outlet q(t) and previous and/or forecasted precipitation p(t−Δt_ik). In the case of river catchment modeling we use head gauges and tributary and upriver gauges in the conclusion part as well. In addition, we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {R_i | i = 1,...,N}, the space of process states can be covered as concisely as necessary. Model adaptation is achieved by finding an optimal set A = (a_ij) of conclusion parameters with respect to a defined rating function and experimental data. To find A, we use, for example, a linear equation solver and an RMSE rating function. In practical process models, the number of fuzzy sets and the corresponding number of rules is fairly low. Nevertheless, creating the optimal model requires some experience. Therefore, we improved this development step with methods for the automatic generation of fuzzy sets, rules, and conclusions. Basically, the model's achievement depends to a great extent on the selection of the conclusion variables. The aim is that the variables having the most influence on the system reaction are considered and superfluous ones are neglected. First, we use Kohonen maps, a specialized ANN, to identify relevant input variables from the large set of available system variables. A greedy algorithm selects a comprehensive set of dominant and uncorrelated variables. Next, the premise variables are analyzed with clustering methods (e.g., fuzzy C-means), and fuzzy sets are then derived from cluster centers and outlines. The rule base is automatically constructed by permutation of the fuzzy sets of the premise variables. Finally, the conclusion parameters are calculated, the total coverage of the input space is iteratively tested with experimental data, rarely firing rules are combined, and coarse coverage of sensitive process states results in refined fuzzy sets and rules. Results: The described methods were implemented and integrated in a development system for process models.
A series of models has already been built, e.g., for rainfall-runoff modeling or for flood prediction (up to 72 hours) in river catchments. The models required significantly less development effort and showed better simulation results compared to conventional models. The models can be used operationally, and a simulation takes only a few minutes on a standard PC, e.g., for a gauge forecast (up to 72 hours) for the whole Mosel (Germany) river catchment.
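A minimal sketch of evaluating a tiny Takagi-Sugeno-Kang rule base of the quoted form, with triangular memberships over q(t) weighting linear conclusions; the membership break points and conclusion coefficients are illustrative assumptions:

```python
def tri(x, a, b, c):
    # Triangular membership with peak at b and support [a, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def tsk_forecast(q_t, p_t):
    rules = [
        # (membership of q(t), conclusion a0 + a1*q(t) + a2*p(t))
        (tri(q_t, -1.0, 0.0, 5.0), lambda q, p: 0.1 + 0.9 * q + 0.3 * p),   # low
        (tri(q_t, 0.0, 5.0, 10.0), lambda q, p: 0.5 + 0.8 * q + 0.5 * p),   # med
        (tri(q_t, 5.0, 10.0, 20.0), lambda q, p: 1.0 + 0.7 * q + 0.8 * p),  # high
    ]
    # Weighted average of rule conclusions by firing strength.
    num = sum(w * f(q_t, p_t) for w, f in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(tsk_forecast(q_t=3.0, p_t=2.0))
```

Because each conclusion is linear in its coefficients, fitting A = (a_ij) against observed discharges reduces to a weighted least-squares problem, which is why a linear equation solver suffices for the adaptation step described above.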
Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.
Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon
2017-01-01
In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
Evaluation of Decision Rules in a Tiered Assessment of Inhalation Exposure to Nanomaterials.
Brouwer, Derk; Boessen, Ruud; van Duuren-Stuurman, Birgit; Bard, Delphine; Moehlmann, Carsten; Bekker, Cindy; Fransman, Wouter; Klein Entink, Rinke
2016-10-01
Tiered or stepwise approaches to assess occupational exposure to nano-objects and their agglomerates and aggregates have been proposed, which require decision rules (DRs) to move to a next tier or terminate the assessment. In a desk study, the performance of a number of DRs based on the evaluation of results from direct-reading instruments was investigated by both statistical simulations and the application of the DRs to real workplace data sets. A statistical model that accounts for autocorrelation patterns in time-series, i.e., autoregressive integrated moving average (ARIMA), was used as the 'gold' standard. The simulations showed that none of the proposed DRs covered the entire range of simulated scenarios with respect to the ARIMA model parameters; however, a combined DR showed slightly better agreement. Application of the DRs to real workplace datasets (n = 117) revealed sensitivity up to 0.72, whereas the lowest observed specificity was 0.95. The selection of the most appropriate DR depends very much on the consequences of the decision, i.e., ruling scenarios in or out for further evaluation. Since a basic assessment may also comprise other types of measurements and information, an evaluation logic was proposed which embeds the DRs and furthermore supports decision making in view of a tiered-approach exposure assessment. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Simulating future water temperatures in the North Santiam River, Oregon
NASA Astrophysics Data System (ADS)
Buccola, Norman L.; Risley, John C.; Rounds, Stewart A.
2016-04-01
A previously calibrated two-dimensional hydrodynamic and water-quality model (CE-QUAL-W2) of Detroit Lake in western Oregon was used in conjunction with inflows derived from Precipitation-Runoff Modeling System (PRMS) hydrologic models to examine in-lake and downstream water temperature effects under future climate conditions. Current and hypothetical operations and structures at Detroit Dam were imposed on boundary conditions derived from downscaled General Circulation Models in base (1990-1999) and future (2059-2068) periods. Compared with the base period, future air temperatures were about 2 °C warmer year-round. Higher air temperature and lower precipitation under the future period resulted in a 23% reduction in mean annual PRMS-simulated discharge and a 1 °C increase in mean annual estimated stream temperatures flowing into the lake compared to the base period. Simulations incorporating current operational rules and minimum release rates at Detroit Dam to support downstream habitat, irrigation, and water supply during key times of year resulted in lower future lake levels. That scenario results in a lake level that is above the dam's spillway crest only about half as many days in the future compared to historical frequencies. Managing temperature downstream of Detroit Dam depends on the ability to blend warmer water from the lake's surface with cooler water from deep in the lake, and the spillway is an important release point near the lake's surface. Annual average in-lake and release temperatures from Detroit Lake warmed 1.1 °C and 1.5 °C from base to future periods under present-day dam operational rules and fill schedules. Simulated dam operations such as beginning refill of the lake 30 days earlier or reducing minimum release rates (to keep more water in the lake to retain the use of the spillway) mitigated future warming to 0.4 and 0.9 °C below existing operational scenarios during the critical autumn spawning period for endangered salmonids. A hypothetical floating surface withdrawal at Detroit Dam improved temperature control in summer and autumn (0.6 °C warmer in summer, 0.6 °C cooler in autumn compared to existing structures) without altering release rates or lake level management rules.
Fine, Jason M.; Kuniansky, Eve L.
2014-01-01
Onslow County, North Carolina, is located within the designated Central Coastal Plain Capacity Use Area (CCPCUA). The CCPCUA was designated by law as a result of groundwater level declines of as much as 200 feet during the past four decades within aquifers in rocks of Cretaceous age in the central Coastal Plain of North Carolina and a depletion of water in storage from increased groundwater withdrawals in the area. The declines and depletion of water in storage within the Cretaceous aquifers increase the potential for saltwater migration—both lateral encroachment and upward leakage of brackish water. Within the CCPCUA, a reduction in groundwater withdrawals over a period of 16 years from 2003 to 2018 is mandated. Under the CCPCUA rules, withdrawals in excess of 100,000 gallons per day from any of the Cretaceous aquifer well systems are subject to water-use reductions of as much as 75 percent. To assess the effects of the CCPCUA rules and to assist with groundwater-management decisions, a numerical model was developed to simulate the groundwater flow and chloride concentrations in the surficial Castle Hayne, Beaufort, Peedee, and Black Creek aquifers in the Onslow County area. The model was used to (1) simulate groundwater flow from 1900 to 2010; (2) assess chloride movement throughout the aquifer system; and (3) create hypothetical scenarios of future groundwater development. After calibration of a groundwater flow model and conversion to a variable-density model, five scenarios were created to simulate future groundwater conditions in the Onslow County area: (1) full implementation of the CCPCUA rules with three phases of withdrawal reductions simulated through 2028; (2) implementation of only phase 1 withdrawal reductions of the CCPCUA rules and simulated through 2028; (3) implementation of only phases 1 and 2 withdrawal reductions of the CCPCUA rules and simulated through 2028; (4) full implementation of the CCPCUA rules with the addition of withdrawals from the Castle Hayne aquifer in Onslow County at the fully permitted amount in the final stress period and simulated through 2028; and (5) full implementation of the CCPCUA rules as in scenario 1 except simulated through 2100. Results from the scenarios give an indication of the water-level recovery in the Black Creek aquifer throughout each phase of the CCPCUA rules in Onslow County. Furthermore, as development of the Castle Hayne aquifers was increased in the scenarios, cones of depression were created around pumping centers. Additionally, the scenarios indicated little to no change in chloride concentrations for the time periods simulated.
Integrated layout based Monte-Carlo simulation for design arc optimization
NASA Astrophysics Data System (ADS)
Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James
2016-03-01
Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules can hinder, rather than help, dense device scaling; SRAM designs, for example, always need extensive ground rule waivers. Furthermore, dense design often involves a "design arc", a collection of design rules whose sum equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology to the SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div, Semiconductor Research and Development Center, Hopewell Junction, NY 12533
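As a sketch of the LBMCS idea (the paper's actual checks and distributions are not given here), one can sample process assumptions as random perturbations of a layout dimension and count how often a ground rule is violated:

```python
import random

def mc_fail_rate(nominal_space, overlay_sigma, cd_sigma, min_space, n=100_000):
    """Estimate the wafer-fail probability of one design-arc segment by
    sampling process assumptions (overlay and CD variation) and checking
    the resulting space against a minimum-space ground rule."""
    fails = 0
    for _ in range(n):
        space = (nominal_space
                 - abs(random.gauss(0.0, overlay_sigma))  # misalignment eats space
                 - random.gauss(0.0, cd_sigma))           # CD variation, either sign
        if space < min_space:
            fails += 1
    return fails / n

# Hypothetical numbers in nm, for illustration only.
print(mc_fail_rate(nominal_space=40.0, overlay_sigma=3.0, cd_sigma=2.0, min_space=30.0))
```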
A framework for plasticity implementation on the SpiNNaker neural architecture.
Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B
2014-01-01
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. Simulating large portions of neural tissue has created an increasingly strong need for large-scale simulations of plastic neural networks on special-purpose hardware platforms, because synaptic transmissions and updates are badly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard spike-timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
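For reference, the pair-based STDP rule named above has a standard exponential form; a minimal sketch with typical (assumed) amplitudes and time constants:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike pair separated by
    dt = t_post - t_pre (ms): potentiation when the presynaptic spike
    leads, depression when it lags. Constants are typical values."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp_dw(dt), 5))
```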
Caccavale, Justin; Fiumara, David; Stapf, Michael; Sweitzer, Liedeke; Anderson, Hannah J; Gorky, Jonathan; Dhurjati, Prasad; Galileo, Deni S
2017-12-11
Glioblastoma multiforme (GBM) is a devastating brain cancer for which there is no known cure. Its malignancy is due to rapid cell division along with high motility and invasiveness of cells into the brain tissue. Simple 2-dimensional laboratory assays (e.g., a scratch assay) are commonly used to measure the effects of various experimental perturbations, such as treatment with chemical inhibitors. Several mathematical models have been developed to aid the understanding of the motile behavior and proliferation of GBM cells. However, many are mathematically complicated, look at multiple interdependent phenomena, and/or use modeling software not freely available to the research community. These attributes make the adoption of models and simulations of even simple 2-dimensional cell behavior an uncommon practice by cancer cell biologists. Herein, we developed an accurate, yet simple, rule-based modeling framework to describe the in vitro behavior of GBM cells that are stimulated by the L1CAM protein using freely available NetLogo software. In our model, L1CAM is released by cells and acts through two cell surface receptors and a point of signaling convergence to increase cell motility and proliferation. A simple graphical interface is provided so that changes can be made easily to several parameters controlling cell behavior, and the behavior of the cells is viewed both pictorially and with dedicated graphs. We fully describe the hierarchical rule-based modeling framework, show simulation results under several settings, describe the accuracy compared to experimental data, and discuss the potential usefulness for predicting future experimental outcomes and for use as a teaching tool for cell biology students. It is concluded that this simple modeling framework and its simulations accurately reflect much of the GBM cell motility behavior observed experimentally in vitro in the laboratory. Our framework can be modified easily to suit the needs of investigators interested in other similar intrinsic or extrinsic stimuli that influence cancer or other cell behavior. This modeling framework of a commonly used experimental motility assay (scratch assay) should be useful to both researchers of cell motility and students in a cell biology teaching laboratory.
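The study itself uses NetLogo; as an illustration only, the same kind of rule set (random-walk motility and division, both scaled by an L1CAM stimulation factor) can be sketched in a few lines of Python with made-up parameter values:

```python
import math, random

def step(cells, l1cam=1.0, base_speed=1.0, base_div_prob=0.01):
    """One tick of a toy scratch-assay model: each cell takes a random
    step and may divide; both rates scale with an L1CAM factor."""
    born = []
    for c in cells:
        ang = random.uniform(0.0, 2.0 * math.pi)
        c[0] += base_speed * l1cam * math.cos(ang)
        c[1] += base_speed * l1cam * math.sin(ang)
        if random.random() < base_div_prob * l1cam:
            born.append([c[0], c[1]])  # daughter placed at the mother's site
    cells.extend(born)

cells = [[x, 0.0] for x in range(20)]  # cells lining one edge of a "scratch"
for _ in range(100):
    step(cells, l1cam=1.5)
print(len(cells))
```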
NASA Technical Reports Server (NTRS)
Nieten, Joseph; Burke, Roger
1993-01-01
Consideration is given to the System Diagnostic Builder (SDB), an automated knowledge acquisition tool using state-of-the-art AI technologies. The SDB employs an inductive machine learning technique to generate rules from data sets that are classified by a subject matter expert. Thus, data are captured from the subject system, classified, and used to drive the rule generation process. These rule bases are used to represent the observable behavior of the subject system, and to represent knowledge about this system. The knowledge bases captured from the Shuttle Mission Simulator can be used as black box simulations by the Intelligent Computer Aided Training devices. The SDB can also be used to construct knowledge bases for the process control industry, such as chemical production or oil and gas production.
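As a sketch of the inductive step (the SDB's actual learning algorithm is not specified here), a OneR-style learner generates one rule per attribute value and keeps the attribute whose rules make the fewest errors on the expert-classified data:

```python
from collections import Counter, defaultdict

def one_r(rows, labels):
    """OneR-style rule induction over discrete attributes: for each
    attribute, map each value to its majority class; keep the attribute
    whose rule set misclassifies the fewest training rows."""
    best = None
    for attr in rows[0]:
        votes = defaultdict(Counter)
        for row, lab in zip(rows, labels):
            votes[row[attr]][lab] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in votes.items()}
        errors = sum(rule[row[attr]] != lab for row, lab in zip(rows, labels))
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best

# Hypothetical expert-classified telemetry rows, for illustration only.
rows = [{"pressure": "high", "temp": "hot"}, {"pressure": "high", "temp": "cold"},
        {"pressure": "low", "temp": "hot"}, {"pressure": "low", "temp": "cold"}]
labels = ["fault", "fault", "nominal", "nominal"]
print(one_r(rows, labels))  # -> ('pressure', {'high': 'fault', 'low': 'nominal'}, 0)
```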
MacDonald, Chad; Moussavi, Zahra; Sarkodie-Gyan, Thompson
2007-01-01
This paper presents the development and simulation of a fuzzy logic based learning mechanism to emulate human motor learning. In particular, fuzzy inference was used to develop an internal model of a novel dynamic environment experienced during planar reaching movements with the upper limb. A dynamic model of the human arm was developed and a fuzzy if-then rule base was created to relate trajectory movement and velocity errors to internal model update parameters. An experimental simulation was performed to compare the fuzzy system's performance with that of human subjects. It was found that the dynamic model behaved as expected, and the fuzzy learning mechanism created an internal model that was capable of opposing the environmental force field to regain a trajectory closely resembling the desired ideal.
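A minimal sketch of the kind of fuzzy if-then evaluation described above, with hypothetical triangular memberships mapping trajectory error to an internal-model update gain (the paper's actual rule base is not reproduced here):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def update_gain(error):
    """Zero-order Sugeno evaluation of three hypothetical rules mapping
    trajectory error (cm) to an internal-model update gain."""
    rules = [
        (tri(error, -1.0, 0.0, 1.0), 0.0),   # error small  -> no update
        (tri(error,  0.0, 2.0, 4.0), 0.5),   # error medium -> moderate update
        (tri(error,  2.0, 4.0, 6.0), 1.0),   # error large  -> strong update
    ]
    num = sum(w * g for w, g in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(update_gain(1.5))  # -> 0.5
```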
NASA Astrophysics Data System (ADS)
Takács, Ondřej; Kostolányová, Kateřina
2016-06-01
This paper describes the Virtual Teacher, which uses a set of rules to automatically adapt the way of teaching. Each rule consists of two parts: conditions on various student properties or the learning situation, and conclusions that specify different adaptation parameters. The rules can be used for general adaptation of each subject, or they can be specific to one subject. The rule-based system of the Virtual Teacher is intended for use in pedagogical experiments in adaptive e-learning and is therefore designed for users without an education in computer science. The Virtual Teacher was used in the dissertation theses of two students, who conducted two pedagogical experiments. This paper also describes the phase of simulating and modeling the theoretically prepared adaptive process in a modeling tool that has all the required parameters and was created especially for this purpose. The experiments are being conducted on groups of virtual students using virtual study materials.
Molecular Dynamics Evaluation of Dielectric-Constant Mixing Rules for H2O-CO2 at Geologic Conditions
Mountain, Raymond D.; Harvey, Allan H.
2015-01-01
Modeling of mineral reaction equilibria and aqueous-phase speciation of C-O-H fluids requires the dielectric constant of the fluid mixture, which is not known from experiment and is typically estimated by some rule for mixing pure-component values. In order to evaluate different proposed mixing rules, we use molecular dynamics simulation to calculate the dielectric constant of a model H2O–CO2 mixture at temperatures of 700 K and 1000 K at pressures up to 3 GPa. We find that theoretically based mixing rules that depend on combining the molar polarizations of the pure fluids systematically overestimate the dielectric constant of the mixture, as would be expected for mixtures of nonpolar and strongly polar components. The commonly used semiempirical mixing rule due to Looyenga works well for this system at the lower pressures studied, but somewhat underestimates the dielectric constant at higher pressures and densities, especially at the water-rich end of the composition range. PMID:26664009
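For reference, the Looyenga rule discussed above has a simple closed form, eps_mix^(1/3) = sum_i phi_i * eps_i^(1/3) over volume fractions phi_i; a minimal sketch with illustrative values:

```python
def looyenga(eps, phi):
    """Looyenga mixing rule: eps_mix^(1/3) = sum_i phi_i * eps_i^(1/3),
    with phi_i the volume fractions (summing to 1)."""
    assert abs(sum(phi) - 1.0) < 1e-9
    return sum(p * e ** (1.0 / 3.0) for e, p in zip(eps, phi)) ** 3

# Illustrative values only: a polar component (eps ~ 20, e.g. hot dense water)
# mixed with a nonpolar one (eps ~ 1.5) at equal volume fractions.
print(looyenga([20.0, 1.5], [0.5, 0.5]))
```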
Multi-agent simulation of the von Thunen model formation mechanism
NASA Astrophysics Data System (ADS)
Tao, Haiyan; Li, Xia; Chen, Xiaoxiang; Deng, Chengbin
2008-10-01
This research attempts to explain the internal driving forces behind the formation of circular structures in urban geography by simulating the interaction between individual behavior and the market. Under the premises of a single city center, constant economies of scale, perfect competition, and enterprise migration theory, an R-D algorithm, in which agents search for the best behavior rules at given locations, is introduced using an agent-based modeling technique. The experiment is conducted as a simulation on the Swarm platform, and its results reproduce the formation process of the Von Thünen circular structure. By introducing heterogeneous factors such as traffic roads, the research verifies several land-use models and discusses the self-adjusting function of the price mechanism.
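The ring structure the agents converge on is classically grounded in the von Thünen bid-rent function, R = Y(p - c) - YFd; a minimal sketch with hypothetical crop parameters showing where one land use outbids another:

```python
def bid_rent(d, yield_, price, cost, freight):
    """Classic von Thunen land rent per unit area at distance d from the
    market: R = Y*(p - c) - Y*F*d."""
    return yield_ * (price - cost) - yield_ * freight * d

# Two hypothetical land uses: market gardening (high rent, costly to ship)
# vs grain (lower rent, cheap to ship). The crossover distance marks the
# boundary between concentric rings around the single city center.
for d in range(0, 60, 10):
    garden = bid_rent(d, yield_=100, price=3.0, cost=1.0, freight=0.05)
    grain = bid_rent(d, yield_=40, price=3.0, cost=1.0, freight=0.01)
    print(d, "garden" if garden > grain else "grain")
```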
Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity
NASA Astrophysics Data System (ADS)
Chen, Hsieh; Panagiotopoulos, Athanassios Z.
2018-01-01
We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
Combination Rules for Morse-Based van der Waals Force Fields.
Yang, Li; Sun, Lei; Deng, Wei-Qiao
2018-02-15
In traditional force fields (FFs), van der Waals interactions have usually been described by Lennard-Jones potentials, and conventional combination rules for the parameters of van der Waals (VDW) cross-term interactions were developed for Lennard-Jones-based FFs. Here, we report that the Morse potential is a better function for describing VDW interactions calculated by highly precise quantum mechanics methods. A new set of combination rules was developed for Morse-based FFs, in which VDW interactions are described by Morse potentials. The new set of combination rules has been verified by comparing the second virial coefficients of 11 noble gas mixtures. For all of the mixed binaries considered in this work, the combination rules work very well and are superior to the three other existing sets of combination rules reported in the literature. We further used the Morse-based FF with the new combination rules to simulate the adsorption isotherms of CH4 at 298 K in four covalent-organic frameworks (COFs). The overall agreement is excellent, which supports further application of this new set of combination rules in more realistic simulation systems.
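For reference, the Morse potential has the form V(r) = De[(1 - e^(-a(r - re)))^2 - 1]. The sketch below pairs it with a generic geometric/arithmetic-mean cross-term for illustration; the paper's actual combination rules are not reproduced here:

```python
import math

def morse(r, d_e, a, r_e):
    """Morse potential with well depth d_e, width parameter a, and
    equilibrium separation r_e; V(r_e) = -d_e and V -> 0 as r -> inf."""
    return d_e * ((1.0 - math.exp(-a * (r - r_e))) ** 2 - 1.0)

def cross_params(p1, p2):
    """A hypothetical cross-term: well depths combined geometrically,
    widths and sizes arithmetically. This shows the common pattern only;
    it is not the rule set derived in the paper."""
    d1, a1, r1 = p1
    d2, a2, r2 = p2
    return (math.sqrt(d1 * d2), 0.5 * (a1 + a2), 0.5 * (r1 + r2))

print(morse(4.0, *cross_params((1.0, 1.7, 3.8), (1.4, 1.6, 4.1))))
```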
Cyclic softening based on dislocation annihilation at sub-cell boundary for SA333 Grade-6 C-Mn steel
NASA Astrophysics Data System (ADS)
Bhattacharjee, S.; Dhar, S.; Acharyya, S. K.; Gupta, S. K.
2018-01-01
In this work, the response of SA333 Grade-6 C-Mn steel subjected to uniaxial and in-phase biaxial tension-torsion cyclic loading is investigated experimentally, and an attempt is made to model the material behaviour. The experimentally observed cyclic softening is modelled based on dislocation annihilation at low-angle grain boundaries, while the Ohno-Wang kinematic hardening rule is used to simulate the stress-strain hysteresis loops. The relevant material parameters are extracted from the appropriate experimental results and metallurgical investigations. The material model is implemented as a user material subroutine in the ABAQUS FE platform to simulate pre-saturation low cycle fatigue loops with cyclic softening and other cyclic plastic behaviour under prescribed loading. The simulated stress-strain hysteresis loops and the evolution of peak stress with cycles were compared with the experimental results, and the good agreement between experimental and simulated results validates the material model.
NASA Astrophysics Data System (ADS)
Hadi, M. Z.; Djatna, T.; Sugiarto
2018-04-01
This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining the appropriate storage location and space for each beverage product dynamically so that the performance of the system can be improved. The study constructs a graph model to represent drive-in rack storage positions and then combines association rule mining, class-based storage policies, and an arrangement rule algorithm to determine an appropriate storage location and arrangement of products according to dynamic customer orders. The performance of the proposed model is measured in terms of rule adjacency accuracy, travel distance (for the picking process), and the probability that a product expires, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented in a computer simulation and its performance is compared with that of other storage assignment methods. The results indicate that the proposed model outperforms the other storage assignment methods.
Aggregate age-at-marriage patterns from individual mate-search heuristics.
Todd, Peter M; Billari, Francesco C; Simão, Jorge
2005-08-01
The distribution of age at first marriage shows well-known strong regularities across many countries and recent historical periods. We accounted for these patterns by developing agent-based models that simulate the aggregate behavior of individuals who are searching for marriage partners. Past models assumed fully rational agents with complete knowledge of the marriage market; our simulated agents used psychologically plausible simple heuristic mate search rules that adjust aspiration levels on the basis of a sequence of encounters with potential partners. Substantial individual variation must be included in the models to account for the demographically observed age-at-marriage patterns.
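A minimal sketch of an aspiration-adjusting search heuristic of the kind described, with made-up parameters (learning-phase length, aspiration decay rate); it is a toy illustration, not the authors' model:

```python
import random

def marry_age(n_dates=100, learn=20, rate=0.05):
    """Toy satisficing heuristic: during an adolescent learning phase the
    agent only raises its aspiration to the best quality seen; afterwards
    it 'marries' the first candidate exceeding the aspiration, which
    decays slowly after each rejection."""
    aspiration = 0.0
    for t in range(n_dates):
        q = random.random()               # quality of this encounter
        if t < learn:
            aspiration = max(aspiration, q)
        elif q >= aspiration:
            return t
        else:
            aspiration *= (1.0 - rate)    # unmet aspirations decay
    return n_dates

ages = sorted(marry_age() for _ in range(10_000))
print(ages[len(ages) // 2])  # median 'marriage age' in encounters
```

Aggregating many such agents yields a skewed age-at-marriage distribution of the general shape the paper discusses.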
NASA Technical Reports Server (NTRS)
Lafuse, Sharon A.
1991-01-01
The paper describes the Shuttle Leak Management Expert System (SLMES), a preprototype expert system developed to enable the ECLSS subsystem manager to analyze subsystem anomalies and to formulate flight procedures based on flight data. The SLMES combines rule-based expert system technology with traditional FORTRAN-based software into an integrated system. SLMES analyzes the data using rules, and, when it detects a problem that requires simulation, it sets up the input for the FORTRAN-based simulation program ARPCS2AT2, which predicts the cabin total pressure and composition as a function of time. The program simulates the pressure control system, the crew oxygen masks, the airlock repress/depress valves, and the leakage. When the simulation is complete, other SLMES rules are triggered to compare the simulation results against flight data and to suggest methods for correcting the problem. Results are then presented in the form of graphs and tables.
De Paris, Renata; Frantz, Fábio A.; Norberto de Souza, Osmar; Ruiz, Duncan D. A.
2013-01-01
Molecular docking simulations of fully flexible protein receptor (FFR) models are coming of age. In our studies, an FFR model is represented by a series of different conformations derived from a molecular dynamics simulation trajectory of the receptor. For each conformation in the FFR model, a docking simulation is executed and analyzed. An important challenge is to perform virtual screening of millions of ligands using an FFR model in a sequential mode, since this can become computationally very demanding. In this paper, we propose a cloud-based web environment, called web Flexible Receptor Docking Workflow (wFReDoW), which reduces the CPU time in the molecular docking simulations of FFR models to small molecules. It is based on the new workflow data pattern called self-adaptive multiple instances (P-SaMI) and on a middleware built on Amazon EC2 instances. P-SaMI reduces the number of molecular docking simulations while the middleware speeds up the docking experiments using a High Performance Computing (HPC) environment on the cloud. The experimental results show a reduction in the total elapsed time of docking experiments and the quality of the new reduced receptor models produced by discarding the nonpromising conformations from an FFR model ruled by the P-SaMI data pattern. PMID:23691504
A Simulation Study of Methods for Selecting Subgroup-Specific Doses in Phase I Trials
Morita, Satoshi; Thall, Peter F.; Takeda, Kentaro
2016-01-01
Patient heterogeneity may complicate dose-finding in phase I clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method (O'Quigley et al., 1990) based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to three alternative approaches, based on non-hierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application, and provide computer programs for trial simulation and conduct. PMID:28111916
2014-01-01
Background mRNA translation involves simultaneous movement of multiple ribosomes on the mRNA and is also subject to regulatory mechanisms at different stages. Translation can be described by various codon-based models, including ODE, TASEP, and Petri net models. Although such models have been extensively used, the overlap and differences between these models, and the implications of each model's assumptions, have not been systematically elucidated. The selection of the most appropriate modelling framework, and the most appropriate way to develop coarse-grained/fine-grained models in different contexts, is not clear. Results We systematically analyze and compare how different modelling methodologies can be used to describe translation. We define various statistically equivalent codon-based simulation algorithms and analyze the importance of the update rule in determining the steady state, an aspect often neglected. Then a novel probabilistic Boolean network (PBN) model is proposed for modelling translation, which enjoys an exact numerical solution. This solution matches those of numerical simulation from other methods and acts as a complementary tool to analytical approximations and simulations. The advantages and limitations of various codon-based models are compared, and illustrated by examples with real biological complexities such as slow codons, premature termination and feedback regulation. Our studies reveal that while different models give broadly similar trends in many cases, important differences also arise and can be clearly seen in the dependence of the translation rate on different parameters. Furthermore, the update rule affects the steady state solution. Conclusions The codon-based models are based on different levels of abstraction. Our analysis suggests that a multiple model approach to understanding translation allows one to ascertain which aspects of the conclusions are robust with respect to the choice of modelling methodology, and when (and why) important differences may arise. This approach also allows for an optimal use of analysis tools, which is especially important when additional complexities or regulatory mechanisms are included. This approach can provide a robust platform for dissecting translation, and results in an improved predictive framework for applications in systems and synthetic biology. PMID:24576337
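The dependence on the update rule can be illustrated with a minimal TASEP sketch of ribosome flow; the initiation/termination rates and lattice size below are arbitrary:

```python
import random

def tasep_step(lattice, alpha=0.3, beta=0.3, update="random_sequential"):
    """One sweep of a TASEP model of translation: particles (ribosomes)
    hop right onto empty codons; alpha/beta are initiation/termination
    probabilities. Changing the update rule ('random_sequential' vs an
    ordered left-to-right sweep) changes the steady state, as discussed."""
    n = len(lattice)
    order = (random.sample(range(-1, n), n + 1)
             if update == "random_sequential" else range(-1, n))
    for i in order:
        if i == -1:                      # initiation at the 5' end
            if lattice[0] == 0 and random.random() < alpha:
                lattice[0] = 1
        elif i == n - 1:                 # termination at the 3' end
            if lattice[i] == 1 and random.random() < beta:
                lattice[i] = 0
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            lattice[i], lattice[i + 1] = 0, 1

lattice = [0] * 50
for _ in range(5000):
    tasep_step(lattice)
print(sum(lattice) / len(lattice))  # mean ribosome density
```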
Miwa, Yoshimasa; Li, Chen; Ge, Qi-Wei; Matsuno, Hiroshi; Miyano, Satoru
2010-01-01
Parameter determination is important in modeling and simulating biological pathways, including signaling pathways. Parameters are determined according to biological facts obtained from biological experiments and scientific publications. However, such reliable data describing detailed reactions are not reported in most cases. This prompted us to develop a general methodology for determining the parameters of a model when no information on the underlying biological facts is available. In this study, we use the Petri net approach for modeling signaling pathways, and propose a method to determine the firing delay times of transitions for Petri net models of signaling pathways by introducing stochastic decision rules. Petri net technology provides a powerful approach to modeling and simulating various concurrent systems, and has recently been widely accepted as a description method for biological pathways. Our method enables determination of the range of firing delay times that realizes smooth token flows in the Petri net model of a signaling pathway. The applicability of this method has been confirmed by the results of an application to the interleukin-1 induced signaling pathway.
Lin, Chin-Teng; Wu, Rui-Cheng; Chang, Jyh-Yeong; Liang, Sheng-Fu
2004-02-01
In this paper, a new technique for Chinese text-to-speech (TTS) systems is proposed. Our major effort focuses on prosodic information generation. New methodologies are developed for constructing fuzzy rules in a prosodic model that simulates human pronunciation rules. The proposed Recurrent Fuzzy Neural Network (RFNN) is a multilayer recurrent neural network (RNN) which integrates a Self-cOnstructing Neural Fuzzy Inference Network (SONFIN) into a recurrent connectionist structure. The RFNN can be functionally divided into two parts. The first part adopts the SONFIN as a prosodic model to explore the relationship between high-level linguistic features and prosodic information based on fuzzy inference rules. Compared to conventional neural networks, the SONFIN can always construct itself with an economical network size and high learning speed. The second part employs a five-layer network to generate all prosodic parameters by directly using the prosodic fuzzy rules inferred from the first part as well as other important features of syllables. A TTS system combined with the proposed method can reproduce not only sandhi rules but also the other prosodic phenomena found in traditional TTS systems. Moreover, the proposed scheme can even discover new rules about prosodic phrase structure. The performance of the proposed RFNN-based prosodic model is verified by embedding it into a Chinese TTS system with a Chinese monosyllable database based on the time-domain pitch synchronous overlap add (TD-PSOLA) method. Our experimental results show that the proposed RFNN can generate proper prosodic parameters including pitch means, pitch shapes, maximum energy levels, syllable durations, and pause durations. Some synthetic sounds are available online for demonstration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of the coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by multi-cores on a GPU that can implement massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against a benchmark solution of the discrete-sectional method. The simulation results show that the comprehensive approach can attain very favorable improvement in cost without sacrificing computational accuracy.
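The acceptance-rejection step built on a majorant kernel can be sketched as follows; the toy additive kernel and particle volumes are illustrative, not the paper's weighted scheme:

```python
import random

def select_pair(particles, kernel, k_max):
    """Acceptance-rejection step of a Monte Carlo coagulation simulation:
    propose a random particle pair, accept with probability
    kernel(v_i, v_j) / k_max, where k_max majorizes the true kernel."""
    while True:
        i, j = random.sample(range(len(particles)), 2)
        if random.random() < kernel(particles[i], particles[j]) / k_max:
            return i, j

# Toy additive kernel K(u, v) = u + v on volumes drawn in (0, 1).
particles = [random.random() for _ in range(1000)]
k_max = 2.0  # majorant of u + v for u, v < 1
i, j = select_pair(particles, lambda u, v: u + v, k_max)
particles[i] += particles[j]   # coagulate the accepted pair
particles.pop(j)
```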
When push comes to shove: Exclusion processes with nonlocal consequences
NASA Astrophysics Data System (ADS)
Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.
2015-11-01
Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.
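A minimal sketch contrasting the two update rules on a 1D lattice; the shoving branch performs the nonlocal move described above, and all parameters are arbitrary:

```python
import random

def step(occ, shoving=False):
    """One attempted move of a 1D lattice walk. Simple exclusion aborts a
    move into an occupied site; the 'shoving' variant instead pushes the
    whole contiguous queue of agents one site along (a nonlocal move)."""
    n = len(occ)
    sites = [k for k in range(n) if occ[k]]
    if not sites:
        return
    i = random.choice(sites)
    d = random.choice((-1, 1))
    j = i + d
    if not (0 <= j < n):
        return
    if not occ[j]:
        occ[i], occ[j] = 0, 1
    elif shoving:
        k = j
        while 0 <= k < n and occ[k]:
            k += d                      # find the far end of the queue
        if 0 <= k < n:                  # room beyond the queue: shove it
            occ[k] = 1
            occ[i] = 0

occ = [1] * 20 + [0] * 80               # agents packed at the left edge
for _ in range(10_000):
    step(occ, shoving=True)
print(sum(occ[:50]), sum(occ[50:]))
```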
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
Simulator for heterogeneous dataflow architectures
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
1993-01-01
A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri Net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable steady-state time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
Otaki, Joji M
2012-03-13
To explain eyespot colour-pattern determination in butterfly wings, the induction model has been discussed based on colour-pattern analyses of various butterfly eyespots. However, a detailed structural analysis of eyespots that can serve as a foundation for future studies is still lacking. In this study, fundamental structural rules related to butterfly eyespots are proposed, and the induction model is elaborated in terms of the possible dynamics of morphogenic signals involved in the development of eyespots and parafocal elements (PFEs) based on colour-pattern analysis of the nymphalid butterfly Junonia almana. In a well-developed eyespot, the inner black core ring is much wider than the outer black ring; this is termed the inside-wide rule. It appears that signals are wider near the focus of the eyespot and become narrower as they expand. Although fundamental signal dynamics are likely to be based on a reaction-diffusion mechanism, they were described well mathematically as a type of simple uniformly decelerated motion in which signals associated with the outer and inner black rings of eyespots and PFEs are released at different time points, durations, intervals, and initial velocities into a two-dimensional field of fundamentally uniform or graded resistance; this produces eyespots and PFEs that are diverse in size and structure. The inside-wide rule, eyespot distortion, structural differences between small and large eyespots, and structural changes in eyespots and PFEs in response to physiological treatments were explained well using mathematical simulations. Natural colour patterns and previous experimental findings that are not easily explained by the conventional gradient model were also explained reasonably well by the formal mathematical simulations performed in this study. In a mode free from speculative molecular interactions, the present study clarifies fundamental structural rules related to butterfly eyespots, delineates a theoretical basis for the induction model, and proposes a mathematically simple mode of long-range signalling that may reflect developmental mechanisms associated with butterfly eyespots.
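The "uniformly decelerated motion" of a signal front has a simple closed form, r(t) = v0*t - a*t^2/2 until the front stalls at t = v0/a; a minimal sketch with made-up release parameters for two rings:

```python
def front_position(t, t_release, v0, a):
    """Position of a signal front released at t_release with initial
    velocity v0 and constant deceleration a; the front stops once its
    velocity reaches zero (a duration of v0/a after release)."""
    tau = min(max(t - t_release, 0.0), v0 / a)
    return v0 * tau - 0.5 * a * tau ** 2

# Two rings released at different times with different initial speeds
# (hypothetical values): the later, slower signal stays inside the first,
# qualitatively like the outer and inner black rings of an eyespot.
for t in (0, 2, 4, 8, 16):
    print(t, front_position(t, 0.0, 4.0, 0.5), front_position(t, 2.0, 2.0, 0.5))
```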
Loading Deformation Characteristic Simulation Study of Engineering Vehicle Refurbished Tire
NASA Astrophysics Data System (ADS)
Qiang, Wang; Xiaojie, Qi; Zhao, Yang; Yunlong, Wang; Guotian, Wang; Degang, Lv
2018-05-01
The paper constructs geometric, mechanical, contact, and finite element analysis models of a refurbished engineering-vehicle tire and carries out a simulation study of its load-deformation behavior by comparison with a new tire of the same type, obtaining the load-deformation response of the refurbished tire under static ground-contact conditions. The analysis shows that the radial and lateral deformations of the refurbished tire follow trends close to those of the new tire, with deformation values slightly smaller than those of the new tire. At a given inflation pressure, the radial deformation of the refurbished tire increases linearly with load, while the lateral deformation varies linearly with load at low inflation pressure and nonlinearly at very high inflation pressure.
Biomimicry of quorum sensing using bacterial lifecycle model.
Niu, Ben; Wang, Hong; Duan, Qiqi; Li, Li
2013-01-01
Recent microbiologic studies have shown that quorum sensing mechanisms, which serve as one of the fundamental requirements for bacterial survival, exist widely in bacterial intra- and inter-species cell-cell communication. Many simulation models, inspired by the social behavior of natural organisms, have been presented to provide new approaches for solving realistic optimization problems. Most of these simulation models follow population-based modelling approaches, where all the individuals are updated according to the same rules, making it difficult to maintain the diversity of the population. In this paper, we present a computational model termed LCM-QS, which simulates the bacterial quorum-sensing (QS) mechanism using an individual-based modelling approach under the framework of the Agent-Environment-Rule (AER) scheme, i.e. the bacterial lifecycle model (LCM). The LCM-QS model can be divided into three main sub-models: a chemotaxis with QS sub-model, a reproduction and elimination sub-model, and a migration sub-model. The proposed model is used not only to imitate the bacterial evolution process at the single-cell level, but also to study bacterial macroscopic behaviour. Comparative experiments under four different scenarios have been conducted in an artificial 3-D environment with nutrient and noxious-substance distributions, and bacterial chemotactic processes with and without quorum sensing are compared in detail. By using quorum sensing mechanisms, artificial bacteria working together can find the nutrient concentration (or global optimum) quickly in the artificial environment. Biomimicry of quorum sensing mechanisms using the lifecycle model endows the artificial bacteria with communication abilities, which are essential for obtaining more valuable information to guide their search cooperatively towards the preferred nutrient concentrations. It can also provide inspiration for designing new swarm intelligence optimization algorithms, which can be used for solving real-world problems.
Simulation of urban land surface temperature based on sub-pixel land cover in a coastal city
NASA Astrophysics Data System (ADS)
Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang
2014-11-01
Sub-pixel urban land cover has been shown to correlate strongly with land surface temperature (LST), yet these relationships have seldom been used to simulate LST. In this study we provide a new approach to urban LST simulation based on sub-pixel land cover modeling. Landsat TM/ETM+ images of Xiamen city, China, from January 2002 and January 2007 were used to derive land cover, and the transformation rule was then extracted using logistic regression. After normalization, the transformation probability was taken as the percentage of that land cover within the pixel. Cellular automata were then used to obtain simulated sub-pixel land cover for 2007 and 2017. In parallel, the correlations between retrieved LST and the 2002 sub-pixel land cover obtained by spectral mixture analysis were examined and a regression model was built. The regression model was applied to the simulated 2007 land cover to model the LST of 2007. Finally, the LST of 2017 was simulated for urban planning and management. The results show that our method is useful for LST simulation. Although the simulation accuracy is not yet fully satisfactory, the approach provides an important idea and a good start in the modeling of urban LST.
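A minimal sketch of the logistic-rule-plus-cellular-automata step, with hypothetical coefficients and distance covariates (the study's fitted regression is not reproduced here):

```python
import math, random

def p_urban(dist_road, dist_center, b0=-1.0, b1=-0.8, b2=-0.5):
    """Hypothetical logistic transformation rule: probability that a pixel
    converts to urban cover, driven by distance covariates."""
    z = b0 + b1 * dist_road + b2 * dist_center
    return 1.0 / (1.0 + math.exp(-z))

def ca_step(grid, dist_road, dist_center):
    """One cellular-automata pass: non-urban pixels convert with the
    logistic probability, boosted by the number of urban neighbours."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j]:
                continue
            nb = sum(grid[x][y] for x in (i - 1, i, i + 1)
                     for y in (j - 1, j, j + 1) if 0 <= x < n and 0 <= y < n)
            p = p_urban(dist_road[i][j], dist_center[i][j]) * (1 + 0.5 * nb)
            if random.random() < min(p, 1.0):
                new[i][j] = 1
    return new

n = 20
grid = [[1 if i == n // 2 and j == n // 2 else 0 for j in range(n)] for i in range(n)]
droad = [[abs(i - n // 2) / n for j in range(n)] for i in range(n)]
dcent = [[(abs(i - n // 2) + abs(j - n // 2)) / n for j in range(n)] for i in range(n)]
for _ in range(5):
    grid = ca_step(grid, droad, dcent)
print(sum(map(sum, grid)), "urban pixels after 5 steps")
```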
A new method for qualitative simulation of water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Camara, A. S.; Pinheiro, M.; Antunes, M. P.; Seixas, M. J.
1987-11-01
A new dynamic modeling methodology, SLIN (Simulação Linguistica), allowing for the analysis of systems defined by linguistic variables, is presented. SLIN applies a set of logical rules avoiding fuzzy theoretic concepts. To make the transition from qualitative to quantitative modes, logical rules are used as well. Extensions of the methodology to simulation-optimization applications and multiexpert system modeling are also discussed.
Tree Branching: Leonardo da Vinci's Rule versus Biomechanical Models
Minamino, Ryoko; Tateno, Masaki
2014-01-01
This study examined Leonardo da Vinci's rule (i.e., the sum of the cross-sectional area of all tree branches above a branching point at any height is equal to the cross-sectional area of the trunk or the branch immediately below the branching point) using simulations based on two biomechanical models: the uniform stress and elastic similarity models. Model calculations of the daughter/mother ratio (i.e., the ratio of the total cross-sectional area of the daughter branches to the cross-sectional area of the mother branch at the branching point) showed that both biomechanical models agreed with da Vinci's rule when the branching angles of daughter branches and the weights of lateral daughter branches were small; however, the models deviated from da Vinci's rule as the weights and/or the branching angles of lateral daughter branches increased. The calculated values of the two models were largely similar but differed in some ways. Field measurements of Fagus crenata and Abies homolepis also fit this trend, wherein models deviated from da Vinci's rule with increasing relative weights of lateral daughter branches. However, this deviation was small for a branching pattern in nature, where empirical measurements were taken under realistic measurement conditions; thus, da Vinci's rule did not critically contradict the biomechanical models in the case of real branching patterns, though the model calculations described the contradiction between da Vinci's rule and the biomechanical models. The field data for Fagus crenata fit the uniform stress model best, indicating that stress uniformity is the key constraint of branch morphology in Fagus crenata rather than elastic similarity or da Vinci's rule. On the other hand, mechanical constraints are not necessarily significant in the morphology of Abies homolepis branches, depending on the number of daughter branches. Rather, these branches were often in agreement with da Vinci's rule. PMID:24714065
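Checking da Vinci's rule at a branching point is a one-line area comparison, since cross-sectional areas scale as diameter squared; a minimal sketch with illustrative diameters:

```python
def davinci_ratio(d_mother, d_daughters):
    """Daughter/mother ratio of cross-sectional areas; da Vinci's rule
    predicts a value of 1."""
    return sum(d ** 2 for d in d_daughters) / d_mother ** 2

# Illustrative diameters (cm): a trunk splitting into two branches.
print(davinci_ratio(10.0, [8.0, 6.0]))  # (64 + 36) / 100 = 1.0, rule holds
```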
NASA Astrophysics Data System (ADS)
Maslova, I.; Ticlavilca, A. M.; McKee, M.
2012-12-01
There has been an increased interest in wavelet-based streamflow forecasting models in recent years. Often overlooked in this approach are the circularity assumptions of the wavelet transform. We propose a novel technique for minimizing the effect of the wavelet decomposition boundary condition to produce long-term, up to 12 months ahead, forecasts of streamflow. A simulation study is performed to evaluate the effects of different wavelet boundary rules using synthetic and real streamflow data. A hybrid wavelet-multivariate relevance vector machine model is developed for forecasting the streamflow in real time for the Yellowstone River, Uinta Basin, Utah, USA. The inputs of the model use only the past monthly streamflow records, decomposed into components formulated in terms of wavelet multiresolution analysis. It is shown that the model accuracy can be increased by using the wavelet boundary rule introduced in this study. This long-term streamflow modeling and forecasting methodology would enable better decision-making and management of water-availability risk.
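The boundary-rule sensitivity is easy to reproduce with the PyWavelets library (not necessarily the authors' toolchain); the series below is synthetic:

```python
# Requires PyWavelets (pip install PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(0)
flow = np.cumsum(rng.normal(size=240)) + 50.0   # synthetic monthly series

# Decompose the same series under different boundary (signal-extension)
# rules and compare the coefficients nearest the two ends, where the
# circularity assumption bites.
for mode in ("symmetric", "periodization", "zero"):
    coeffs = pywt.wavedec(flow, "db4", mode=mode, level=3)
    approx = coeffs[0]
    print(mode, approx[:2].round(2), approx[-2:].round(2))
```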
Automated visualization of rule-based models
Tapia, Jose-Juan; Faeder, James R.
2017-01-01
Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816
Spread prediction model of continuous steel tube based on BP neural network
NASA Astrophysics Data System (ADS)
Zhai, Jian-wei; Yu, Hui; Zou, Hai-bei; Wang, San-zhong; Liu, Li-gang
2017-07-01
Based on the roll pass geometry and process parameters of a three-roller continuous mandrel rolling mill in a factory, a finite element model is established to simulate the continuous rolling process of seamless steel tube, and the reliability of the finite element model is verified by comparing the simulated and actual results for rolling force, wall thickness, and outer diameter of the tube. The effects of roller reduction, roller rotation speed, and blooming temperature on the spread rule are studied. Based on BP (Back Propagation) neural network technology, a spread prediction model for the continuously rolled tube is established by training on the wall thickness coefficient and spread coefficient of the tube, realizing rapid and accurate prediction of the rolled tube size.
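As an illustration of the BP training step (the paper's network size, inputs, and data are not given here), a minimal one-hidden-layer network fitted by hand-written backpropagation on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: inputs are (reduction, roll speed, temperature)
# scaled to [0, 1]; the target is a made-up smooth 'spread coefficient'.
x = rng.random((200, 3))
y = 0.5 * x[:, :1] + 0.3 * x[:, 1:2] ** 2 + 0.2 * x[:, 2:3]

w1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)    # hidden layer
w2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)    # linear output
lr = 0.5

for epoch in range(2000):
    h = np.tanh(x @ w1 + b1)
    pred = h @ w2 + b2
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    g2 = h.T @ err / len(x)
    gh = (err @ w2.T) * (1 - h ** 2)
    g1 = x.T @ gh / len(x)
    w2 -= lr * g2; b2 -= lr * err.mean(0)
    w1 -= lr * g1; b1 -= lr * gh.mean(0)

print(float(np.mean(err ** 2)))   # training MSE after fitting
```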
Intelligent model-based diagnostics for vehicle health management
NASA Astrophysics Data System (ADS)
Luo, Jianhui; Tu, Fang; Azam, Mohammad S.; Pattipati, Krishna R.; Willett, Peter K.; Qiao, Liu; Kawamoto, Masayuki
2003-08-01
The recent advances in sensor technology, remote communication and computational capabilities, and standardized hardware/software interfaces are creating a dramatic shift in the way the health of vehicles is monitored and managed. These advances facilitate remote monitoring, diagnosis and condition-based maintenance of automotive systems. With the increased sophistication of electronic control systems in vehicles, there is a concomitant increased difficulty in the identification of the malfunction phenomena. Consequently, the current rule-based diagnostic systems are difficult to develop, validate and maintain. New intelligent model-based diagnostic methodologies that exploit the advances in sensor, telecommunications, computing and software technologies are needed. In this paper, we will investigate hybrid model-based techniques that seamlessly employ quantitative (analytical) models and graph-based dependency models for intelligent diagnosis. Automotive engineers have found quantitative simulation (e.g. MATLAB/SIMULINK) to be a vital tool in the development of advanced control systems. The hybrid method exploits this capability to improve the diagnostic system's accuracy and consistency, utilizes existing validated knowledge on rule-based methods, enables remote diagnosis, and responds to the challenges of increased system complexity. The solution is generic and has the potential for application in a wide range of systems.
NASA Astrophysics Data System (ADS)
Feng, Maoyuan; Liu, Pan; Guo, Shenglian; Shi, Liangsheng; Deng, Chao; Ming, Bo
2017-08-01
Operating rules have been used widely to decide reservoir operations because of their capacity for coping with uncertain inflow. However, stationary operating rules lack adaptability; thus, under changing environmental conditions, they cause inefficient reservoir operation. This paper derives adaptive operating rules based on time-varying parameters generated using the ensemble Kalman filter (EnKF). A deterministic optimization model is established to obtain optimal water releases, which are further taken as observations of the reservoir simulation model. The EnKF is formulated to update the operating rules sequentially, providing a series of time-varying parameters. To identify the index that dominates the variations of the operating rules, three hydrologic factors are selected: the reservoir inflow, ratio of future inflow to current available water, and available water. Finally, adaptive operating rules are derived by fitting the time-varying parameters with the identified dominant hydrologic factor. China's Three Gorges Reservoir was selected as a case study. Results show that (1) the EnKF has the capability of capturing the variations of the operating rules, (2) reservoir inflow is the factor that dominates the variations of the operating rules, and (3) the derived adaptive operating rules are effective in improving hydropower benefits compared with stationary operating rules. The insightful findings of this study could be used to help adapt reservoir operations to mitigate the effects of changing environmental conditions.
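A minimal sketch of the sequential update idea: an ensemble of operating-rule parameter vectors is nudged by an EnKF step toward the releases produced by a deterministic optimization. The linear rule form, parameter names and numbers below are illustrative assumptions, not the Three Gorges formulation.

```python
import numpy as np

# EnKF update of operating-rule parameters theta; the "observation" is an
# optimal release from a deterministic optimization (toy values here).
rng = np.random.default_rng(1)

def rule_release(theta, storage, inflow):
    # Hypothetical linear operating rule: release = a*storage + b*inflow + c.
    return theta[0] * storage + theta[1] * inflow + theta[2]

n_ens, obs_var = 50, 25.0
theta = rng.normal([0.1, 0.5, 10.0], [0.02, 0.1, 2.0], (n_ens, 3))  # ensemble

for t, (storage, inflow, optimal_release) in enumerate(
        [(800.0, 120.0, 145.0), (760.0, 90.0, 110.0), (700.0, 200.0, 210.0)]):
    # Forecast step: each member predicts a release from its parameters.
    hx = np.array([rule_release(th, storage, inflow) for th in theta])
    # Kalman gain from ensemble covariances (parameters vs predicted release).
    th_anom = theta - theta.mean(0)
    hx_anom = hx - hx.mean()
    K = (th_anom.T @ hx_anom) / (n_ens - 1) / (hx_anom @ hx_anom / (n_ens - 1) + obs_var)
    # Analysis step: assimilate the perturbed observation.
    obs = optimal_release + rng.normal(0, np.sqrt(obs_var), n_ens)
    theta += np.outer(obs - hx, K)
    print(f"t={t}, mean parameters: {theta.mean(0).round(3)}")
```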
Limit of validity of Ostwald's rule of stages in a statistical mechanical model of crystallization.
Hedges, Lester O; Whitelam, Stephen
2011-10-28
We have only rules of thumb with which to predict how a material will crystallize, chief among which is Ostwald's rule of stages. It states that the first phase to appear upon transformation of a parent phase is the one closest to it in free energy. Although sometimes upheld, the rule is without theoretical foundation and is not universally obeyed, highlighting the need for microscopic understanding of crystallization controls. Here we study in detail the crystallization pathways of a prototypical model of patchy particles. The range of crystallization pathways it exhibits is richer than can be predicted by Ostwald's rule, but a combination of simulation and analytic theory reveals clearly how these pathways are selected by microscopic parameters. Our results suggest strategies for controlling self-assembly pathways in simulation and experiment.
A review on vegetation models and applicability to climate simulations at regional scale
NASA Astrophysics Data System (ADS)
Myoung, Boksoon; Choi, Yong-Sang; Park, Seon Ki
2011-11-01
The lack of accurate representations of biospheric components and their biophysical and biogeochemical processes is a great source of uncertainty in current climate models. The interactions between terrestrial ecosystems and the climate include exchanges not only of energy, water and momentum, but also of carbon and nitrogen. Reliable simulations of these interactions are crucial for predicting the potential impacts of future climate change and anthropogenic intervention on terrestrial ecosystems. In this paper, two biogeographical (Neilson's rule-based model and BIOME), two biogeochemical (BIOME-BGC and PnET-BGC), and three dynamic global vegetation models (Hybrid, LPJ, and MC1) were reviewed and compared in terms of their biophysical and physiological processes. The advantages and limitations of the models were also addressed. Lastly, the applications of the dynamic global vegetation models to regional climate simulations have been discussed.
A fuzzy-theory-based behavioral model for studying pedestrian evacuation from a single-exit room
NASA Astrophysics Data System (ADS)
Fu, Libi; Song, Weiguo; Lo, Siuming
2016-08-01
Many mass events in recent years have highlighted the importance of research on pedestrian evacuation dynamics. A number of models have been developed to analyze crowd behavior under evacuation situations. However, few focus on pedestrians' decision-making with respect to uncertainty, vagueness and imprecision. In this paper, a discrete evacuation model defined on the cellular space is proposed according to the fuzzy theory which is able to describe imprecise and subjective information. Pedestrians' percept information and various characteristics are regarded as fuzzy input. Then fuzzy inference systems with rule bases, which resemble human reasoning, are established to obtain fuzzy output that decides pedestrians' movement direction. This model is tested in two scenarios, namely in a single-exit room with and without obstacles. Simulation results reproduce some classic dynamics phenomena discovered in real building evacuation situations, and are consistent with those in other models and experiments. It is hoped that this study will enrich movement rules and approaches in traditional cellular automaton models for evacuation dynamics.
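To make the inference step concrete, here is a minimal Mamdani-style sketch in which fuzzy memberships over crowd density and exit direction feed a small rule base whose defuzzified output is a heading change. The membership shapes and rules are invented for illustration, not the paper's calibrated system.

```python
# Minimal fuzzy decision step for a pedestrian: inputs are crowd density
# ahead and the angle toward the exit; output is a heading change (degrees).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ramp_down(x, a, b):
    """Left shoulder: 1 below a, falling to 0 at b."""
    return max(0.0, min(1.0, (b - x) / (b - a)))

def decide(density, exit_angle):
    low = ramp_down(density, 0.2, 0.6)
    high = 1.0 - low
    left = tri(exit_angle, -90, -45, 0)
    ahead = tri(exit_angle, -30, 0, 30)
    right = tri(exit_angle, 0, 45, 90)
    # Rule base, e.g. "IF density is low AND exit is ahead THEN keep heading".
    # Each rule's firing strength weights a crisp heading change.
    rules = [(min(low, ahead), 0.0), (min(low, left), -45.0),
             (min(low, right), 45.0), (min(high, ahead), 20.0)]
    total = sum(s for s, _ in rules)
    return sum(s * d for s, d in rules) / total if total else 0.0  # centroid

print(decide(density=0.3, exit_angle=10.0))  # small turn toward the exit
```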
NASA Astrophysics Data System (ADS)
Turnbull, S. J.
2017-12-01
Within the US Army Corps of Engineers (USACE), reservoirs are typically operated according to a rule curve that specifies target water levels based on the time of year. The rule curve is intended to maximize flood protection by specifying releases of water before the dominant rainfall period for a region. While some operating allowances are permissible, generally the rule curve elevations must be maintained. While this operational approach provides for the required flood control purpose, it may not result in optimal reservoir operations for multi-use impoundments. In the Russian River Valley of California a multi-agency research effort called Forecast-Informed Reservoir Operations (FIRO) is assessing the application of forecast weather and streamflow predictions to potentially enhance the operation of reservoirs in the watershed. The focus of the study has been on Lake Mendocino, a USACE project important for flood control, water supply, power generation and ecological flows. As part of this effort the Engineer Research and Development Center is assessing the ability of the physics-based, distributed watershed model Gridded Surface Subsurface Hydrologic Analysis (GSSHA) to simulate stream flows, reservoir stages, and discharges while being driven by weather forecast products. A key question in this application is the effect of watershed model resolution on forecasted stream flows. To help resolve this question, GSSHA models of multiple grid resolutions, 30, 50, and 270 m, were developed for the upper Russian River, which includes Lake Mendocino. The models were derived from common inputs: DEM, soils, land use, stream network, reservoir characteristics, and specified inflows and discharges. All the models were calibrated in both event and continuous simulation mode using measured precipitation gages and then driven with the West-WRF atmospheric model in prediction mode to assess their ability to function in short-term (less than one week) forecasting mode. In this presentation we discuss the effect of grid resolution on model development, parameter assignment, streamflow prediction and forecasting capability using the West-WRF forecast hydro-meteorology.
Sakieh, Yousef; Salmanmahiny, Abdolrassoul
2016-03-01
Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
Theoretical Development of an Orthotropic Elasto-Plastic Generalized Composite Material Model
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Harrington, Joseph; Subramanian, Rajan; Blankenhorn, Gunther
2014-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within LS-DYNA (Registered), there are several features that have been identified that could improve the predictive capability of a composite model. To address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed and is being implemented into LS-DYNA as MAT_213. A key feature of the improved material model is the use of tabulated stress-strain data in a variety of coordinate directions to fully define the stress-strain response of the material. To date, the model development efforts have focused on creating the plasticity portion of the model. The Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic material model with a non-associative flow rule. The coefficients of the yield function, and the stresses to be used in both the yield function and the flow rule, are computed based on the input stress-strain curves using the effective plastic strain as the tracking variable. The coefficients in the flow rule are computed based on the obtained stress-strain data. The developed material model is suitable for implementation within LS-DYNA for use in analyzing the nonlinear response of polymer composites.
NASA Astrophysics Data System (ADS)
Singh, Sandeep; Patel, B. P.
2018-06-01
Computationally efficient multiscale modelling based on Cauchy-Born rule in conjunction with finite element method is employed to study static and dynamic characteristics of graphene sheets, with/without considering initial strain, involving Green-Lagrange geometric and material nonlinearities. The strain energy density function at continuum level is established by coupling the deformation at continuum level to that at atomic level through Cauchy-Born rule. The atomic interactions between carbon atoms are modelled through Tersoff-Brenner potential. The governing equation of motion obtained using Hamilton's principle is solved through standard Newton-Raphson method for nonlinear static response and Newmark's time integration technique to obtain nonlinear transient response characteristics. Effect of initial strain on the linear free vibration frequencies, nonlinear static and dynamic response characteristics is investigated in detail. The present multiscale modelling based results are found to be in good agreement with those obtained through molecular mechanics simulation. Two different types of boundary constraints generally used in MM simulation are explored in detail and few interesting findings are brought out. The effect of initial strain is found to be greater in linear response when compared to that in nonlinear response.
Simulating future water temperatures in the North Santiam River, Oregon
Buccola, Norman; Risley, John C.; Rounds, Stewart A.
2016-01-01
A previously calibrated two-dimensional hydrodynamic and water-quality model (CE-QUAL-W2) of Detroit Lake in western Oregon was used in conjunction with inflows derived from Precipitation-Runoff Modeling System (PRMS) hydrologic models to examine in-lake and downstream water temperature effects under future climate conditions. Current and hypothetical operations and structures at Detroit Dam were imposed on boundary conditions derived from downscaled General Circulation Models in base (1990–1999) and future (2059–2068) periods. Compared with the base period, future air temperatures were about 2 °C warmer year-round. Higher air temperature and lower precipitation under the future period resulted in a 23% reduction in mean annual PRMS-simulated discharge and a 1 °C increase in mean annual estimated stream temperatures flowing into the lake compared to the base period. Simulations incorporating current operational rules and minimum release rates at Detroit Dam to support downstream habitat, irrigation, and water supply during key times of year resulted in lower future lake levels. That scenario results in a lake level that is above the dam’s spillway crest only about half as many days in the future compared to historical frequencies. Managing temperature downstream of Detroit Dam depends on the ability to blend warmer water from the lake’s surface with cooler water from deep in the lake, and the spillway is an important release point near the lake’s surface. Annual average in-lake and release temperatures from Detroit Lake warmed 1.1 °C and 1.5 °C from base to future periods under present-day dam operational rules and fill schedules. Simulated dam operations such as beginning refill of the lake 30 days earlier or reducing minimum release rates (to keep more water in the lake to retain the use of the spillway) mitigated future warming to 0.4 and 0.9 °C below existing operational scenarios during the critical autumn spawning period for endangered salmonids. A hypothetical floating surface withdrawal at Detroit Dam improved temperature control in summer and autumn (0.6 °C warmer in summer, 0.6 °C cooler in autumn compared to existing structures) without altering release rates or lake level management rules.
Systematic methods for knowledge acquisition and expert system development
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
Nine cooperating rule-based systems, collectively called AUTOCREW, were designed to automate functions and decisions associated with a combat aircraft's subsystem. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base, and to assess the cooperation between the rule-bases. Each AUTOCREW subsystem is composed of several expert systems that perform specific tasks. AUTOCREW's NAVIGATOR was analyzed in detail to understand the difficulties involved in designing the system and to identify tools and methodologies that ease development. The NAVIGATOR determines optimal navigation strategies from a set of available sensors. A Navigation Sensor Management (NSM) expert system was systematically designed from Kalman filter covariance data; four ground-based, a satellite-based, and two on-board INS-aiding sensors were modeled and simulated to aid an INS. The NSM Expert was developed using the Analysis of Variance (ANOVA) and the ID3 algorithm. Navigation strategy selection is based on an RSS position error decision metric, which is computed from the covariance data. Results show that the NSM Expert predicts position error correctly between 45 and 100 percent of the time for a specified navaid configuration and aircraft trajectory. The NSM Expert adapts to new situations, and provides reasonable estimates of hybrid performance. The systematic nature of the ANOVA/ID3 method makes it broadly applicable to expert system design when experimental or simulation data is available.
Timing Interactions in Social Simulations: The Voter Model
NASA Astrophysics Data System (ADS)
Fernández-Gracia, Juan; Eguíluz, Víctor M.; Miguel, Maxi San
The recent availability of huge high-resolution datasets on human activities has revealed the heavy-tailed nature of interevent time distributions. In social simulations of interacting agents the standard approach has been to use Poisson processes to update the state of the agents, which gives rise to very homogeneous activity patterns with a well defined characteristic interevent time. Taking the voter model as a paradigmatic opinion model, we review the standard update rules and propose two new update rules that account for heterogeneous activity patterns. For the new update rules each node gets updated with a probability that depends on the time since the last event of the node, where an event can be an update attempt (exogenous update) or a change of state (endogenous update). We find that both update rules can give rise to power-law interevent time distributions, although the endogenous one does so more robustly. Moreover, for the exogenous update rule and the standard update rules the voter model does not reach consensus in the infinite-size limit, while for the endogenous update there exists a coarsening process that drives the system toward consensus configurations.
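A compact sketch of the endogenous scheme: each node's activation probability decays with the time since its last change of state, so interevent times become broad rather than Poissonian. The 1/age activation form and all sizes are illustrative choices.

```python
import numpy as np

# Voter model on a complete graph with an endogenous update rule:
# activation probability decays with the time since the last state change.
rng = np.random.default_rng(2)
N, T = 200, 2000
state = rng.integers(0, 2, N)           # binary opinions
last_event = np.zeros(N)                # time of last state change

for t in range(1, T + 1):
    age = t - last_event
    active = rng.random(N) < 1.0 / age  # older nodes activate less often
    for i in np.flatnonzero(active):
        j = rng.integers(N)             # random neighbor (complete graph)
        if state[j] != state[i]:
            state[i] = state[j]
            last_event[i] = t           # endogenous: clock resets on change

print("final magnetization:", state.mean())
```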
Nonequivalence of updating rules in evolutionary games under high mutation rates.
Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
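The two updating schemes are easy to contrast in a toy well-mixed Moran process with three strategies and mutation; the fitness values and rates below are arbitrary, and the point is only that swapping the rule changes the stationary strategy mix.

```python
import numpy as np

# Birth-death vs death-birth updating in a well-mixed Moran process with
# three strategies and mutation; fitness values are illustrative.
rng = np.random.default_rng(3)
fitness = np.array([1.0, 1.2, 0.9])
N, steps, mu = 100, 20000, 0.05

def run(update):
    pop = rng.integers(0, 3, N)
    for _ in range(steps):
        f = fitness[pop]
        if update == "birth-death":
            parent = rng.choice(N, p=f / f.sum())    # fitness-biased birth
            dead = rng.integers(N)                   # random death
        else:  # death-birth
            inv = 1.0 / f
            dead = rng.choice(N, p=inv / inv.sum())  # unfit die first
            parent = rng.integers(N)                 # random birth
        child = rng.integers(3) if rng.random() < mu else pop[parent]
        pop[dead] = child
    return np.bincount(pop, minlength=3) / N

print("birth-death:", run("birth-death"))
print("death-birth:", run("death-birth"))
```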
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G
Vulnerability in the security of an information system is quantitatively predicted. The information system may receive malicious actions against its security and may receive corrective actions for restoring the security. A game-oriented agent-based model (ABM) is constructed in a simulator application. The game ABM represents security activity in the information system. It has two opposing participants, an attacker and a defender, along with probabilistic game rules and allowable game states. A specified number of simulations are run, and a probabilistic number of the allowable game states are reached in each simulation run. The probability of reaching a specified game state is unknown prior to running each simulation. Data generated during the game states are collected to determine the probability of one or more aspects of security in the information system.
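A bare-bones Monte Carlo sketch of the attacker/defender idea: two agents act probabilistically on a shared security state, and repeated runs estimate the probability of reaching a chosen game state. All states, rules and probabilities here are invented for illustration and are not those of the model described above.

```python
import random

# Repeated attacker/defender games over a simple compromise-depth state.
random.seed(4)

def run_game(max_rounds=50):
    compromise = 0                      # current depth of compromise
    for _ in range(max_rounds):
        if random.random() < 0.3:       # attacker action succeeds
            compromise += 1
        if random.random() < 0.25:      # defender's corrective action
            compromise = max(0, compromise - 1)
        if compromise >= 3:             # "breach" absorbing state
            return True
    return False

runs = 10000
breaches = sum(run_game() for _ in range(runs))
print("estimated probability of reaching breach state:", breaches / runs)
```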
Exploiting Motion Capture to Enhance Avoidance Behaviour in Games
NASA Astrophysics Data System (ADS)
van Basten, Ben J. H.; Jansen, Sander E. M.; Karamouzas, Ioannis
Realistic simulation of interacting virtual characters is essential in computer games, training and simulation applications. The problem is very challenging since people are accustomed to real-world situations and thus, they can easily detect inconsistencies and artifacts in the simulations. Over the past twenty years several models have been proposed for simulating individuals, groups and crowds of characters. However, little effort has been made to actually understand how humans solve interactions and avoid inter-collisions in real-life. In this paper, we exploit motion capture data to gain more insights into human-human interactions. We propose four measures to describe the collision-avoidance behavior. Based on these measures, we extract simple rules that can be applied on top of existing agent and force based approaches, increasing the realism of the resulting simulations.
NASA Astrophysics Data System (ADS)
Sharma, Pankaj; Jain, Ajai
2014-12-01
Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop in terms of makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for the investigation. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
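The core of any such study is the priority key each rule applies to the queue. The sketch below scores one randomly generated queue under three common rules, with setup times simplified to a fixed per-job value rather than truly sequence-dependent ones.

```python
import random

# Comparing dispatching rules on one job queue; attributes are random
# stand-ins, and setup is per-job rather than sequence-dependent.
random.seed(5)
queue = [{"id": i,
          "proc": random.uniform(1, 10),        # processing time
          "setup": random.uniform(0.2, 2.0),    # setup time (simplified)
          "due": random.uniform(5, 40)} for i in range(8)]

rules = {
    "SIMSET": lambda j: j["setup"],             # shortest setup time
    "SPT":    lambda j: j["proc"],              # shortest processing time
    "EDD":    lambda j: j["due"],               # earliest due date
}
for name, key in rules.items():
    order = sorted(queue, key=key)
    t, flow = 0.0, 0.0
    for job in order:
        t += job["setup"] + job["proc"]         # job completion time
        flow += t
    print(f"{name}: mean flow time = {flow / len(order):.2f}")
```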
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulations of network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than more dedicated systems, but it still achieves good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
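For intuition, attractor detection under a synchronous update scheme can be written in a few lines: iterate each start state until a state repeats, and record the repeating cycle. The three-gene rule set below is a toy example, not a real regulatory network, and ASP-G itself works declaratively rather than by this brute-force enumeration.

```python
from itertools import product

# Synchronous attractor detection for a toy three-gene Boolean network.
rules = {
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: not s["A"],
}
genes = sorted(rules)

def step(state):
    """Apply every gene's rule simultaneously (synchronous update)."""
    return tuple(rules[g]({g2: v for g2, v in zip(genes, state)}) for g in genes)

attractors = set()
for start in product([False, True], repeat=len(genes)):
    seen, s = {}, start
    while s not in seen:                # iterate until a state repeats
        seen[s] = len(seen)
        s = step(s)
    cycle_start = seen[s]
    cycle = tuple(sorted(st for st, i in seen.items() if i >= cycle_start))
    attractors.add(cycle)               # the repeating cycle is an attractor

print(attractors)
```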
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindskog, M., E-mail: martin.lindskog@teorfys.lu.se; Wacker, A.; Wolf, J. M.
2014-09-08
We study the operation of an 8.5 μm quantum cascade laser based on GaInAs/AlInAs lattice matched to InP using three different simulation models based on density matrix (DM) and non-equilibrium Green's function (NEGF) formulations. The latter advanced scheme serves as a validation for the simpler DM schemes and, at the same time, provides additional insight, such as the temperatures of the sub-band carrier distributions. We find that for the particular quantum cascade laser studied here, the behavior is well described by simple quantum mechanical estimates based on Fermi's golden rule. As a consequence, the DM model, which includes second order currents, agrees well with the NEGF results. Both these simulations are in accordance with previously reported data and a second regrown device.
Kinetic theory of situated agents applied to pedestrian flow in a corridor
NASA Astrophysics Data System (ADS)
Rangel-Huerta, A.; Muñoz-Meléndez, A.
2010-03-01
A situated agent-based model for simulation of pedestrian flow in a corridor is presented. In this model, pedestrians choose their paths freely and make decisions based on local criteria for solving collision conflicts. The crowd consists of multiple walking agents equipped with a function of perception as well as a competitive rule-based strategy that enables pedestrians to reach free access areas. Pedestrians in our model are autonomous entities capable of perceiving and making decisions. They apply socially accepted conventions, such as avoidance rules, as well as individual preferences such as the use of specific exit points, or the execution of eventual comfort turns resulting in spontaneous changes of walking speed. Periodic boundary conditions were considered in order to determine the density-average walking speed, and the density-average activity with respect to specific parameters: comfort angle turn and frequency of angle turn of walking agents. The main contribution of this work is an agent-based model where each pedestrian is represented as an autonomous agent. At the same time the pedestrian crowd dynamics is framed by the kinetic theory of biological systems.
A Comparison of Three Approaches to Model Human Behavior
NASA Astrophysics Data System (ADS)
Palmius, Joel; Persson-Slumpi, Thomas
2010-11-01
One way of studying social processes is through the use of simulations. The use of simulations for this purpose has been established as its own field, social simulation, and has been used for studying a variety of phenomena. A simulation of a social setting can serve as an aid for thinking about that setting, and for experimenting with different parameters and studying the outcomes caused by them. When using the simulation as an aid for thinking and experimenting, the chosen simulation approach will implicitly steer the simulationist towards thinking in a certain fashion in order to fit the model. To study the implications of model choice on the understanding of a setting where human anticipation comes into play, a simulation scenario of a coffee room was constructed using three different simulation approaches: Cellular Automata, Systems Dynamics and Agent-based modeling. The practical implementations of the models were done in three different simulation packages: Stella for Systems Dynamics, CaFun for Cellular Automata and SeSAm for Agent-based modeling. The models were evaluated both using Randers' criteria for model evaluation and through introspection, where the authors reflected upon how their understanding of the scenario was steered by the model choice. Further, the software used for implementing the simulation models was evaluated, and practical considerations for the choice of software package are listed. It is concluded that the models have very different strengths. The Agent-based modeling approach offers the most intuitive support for thinking about and modeling a social setting where the behavior of the individual is in focus. The Systems Dynamics model would be preferable in situations where populations and large groups are studied as wholes, but where individual behavior is of less concern. The Cellular Automata models would be preferable where processes need to be studied from the basis of a small set of very simple rules. It is further concluded that in most social simulation settings the Agent-based modeling approach would be the probable choice, because the other approaches do not offer much in the way of supporting the modeling of the anticipatory behavior of humans acting in an organization.
Multi Groups Cooperation based Symbiotic Evolution for TSK-type Neuro-Fuzzy Systems Design
Cheng, Yi-Chang; Hsu, Yung-Chi
2010-01-01
In this paper, a TSK-type neuro-fuzzy system with a multi-group cooperation based symbiotic evolution method (TNFS-MGCSE) is proposed. The TNFS-MGCSE is developed from symbiotic evolution. Symbiotic evolution differs from traditional GAs (genetic algorithms) in that each chromosome represents only a single rule of the fuzzy model. The MGCSE differs from traditional symbiotic evolution in that the population is divided into several groups. Each group, formed by a set of chromosomes, represents a fuzzy rule and cooperates with the other groups to generate better chromosomes using the proposed cooperation based crossover strategy (CCS). In this paper, the proposed TNFS-MGCSE is evaluated with numerical examples (Mackey-Glass chaotic time series and sunspot number forecasting). In the simulations, the TNFS-MGCSE performs excellently compared with other existing models. PMID:21709856
Impacts of Climate Change on Stream Temperatures in the Clearwater River, Idaho
NASA Astrophysics Data System (ADS)
Yearsley, J. R.; Chegwidden, O.; Nijssen, B.
2016-12-01
Dworshak Dam in northern Idaho impounds the waters of the North Fork of the Clearwater River, creating a reservoir of approximately 4.278 km3 at full pool elevation. The dam's primary purpose is flood control and hydroelectric power generation. It also provides important water quality benefits by releasing cold water into the Clearwater River during the summer when conditions become critical for migrating endangered species of salmon. Changes in the climate may have an impact on the ability of Dworshak Dam and Reservoir to provide these benefits. To investigate the potential for extreme outcomes that would limit cold water releases from Dworshak Reservoir and compromise the fishery, we implemented a system of hydrologic and water temperature models that simulate daily-averaged water temperatures in both the riverine and reservoir environments. We used the macroscale hydrologic model, VIC, to simulate land surface water and energy fluxes, the one-dimensional, time-dependent stream temperature model, RBM, to simulate river temperatures, and a modified version of CE-QUAL-W2 to simulate water temperatures in Dworshak Reservoir. A long-term hydrologically based gridded data set of meteorological forcing provided the input for comparing model results with available observations of flow and water temperature. To investigate the impacts of climate change, we used the results from ten of the most recent Coupled Model Intercomparison Project (CMIP5) climate change model scenarios in conjunction with estimates of anthropogenic inputs of climate change gases from two representative concentration pathways (RCP). We compared the simulated results associated with a range of outcomes at critical river locations from the climate scenarios with existing conditions, assuming that the reservoir would be operated under a rule curve based on the average reservoir elevation for the period 2006-2015 and under power demands representative of that same period.
A fully-online Neuro-Fuzzy model for flow forecasting in basins with limited data
NASA Astrophysics Data System (ADS)
Ashrafi, Mohammad; Chua, Lloyd Hock Chye; Quek, Chai; Qin, Xiaosheng
2017-02-01
Current state-of-the-art online neuro-fuzzy models (NFMs) such as DENFIS (Dynamic Evolving Neural-Fuzzy Inference System) have been used for runoff forecasting. Online NFMs adopt a local learning approach and are able to adapt to changes continuously. The DENFIS model, however, requires upper/lower bounds for normalization, and its number of rules increases monotonically. This requirement makes the model unsuitable for use in basins with limited data, since a priori data is required. In order to address this and other drawbacks of current online models, the Generic Self-Evolving Takagi-Sugeno-Kang (GSETSK) model is adopted in this study for forecast applications in basins with limited data. GSETSK is a fully-online NFM which updates its structure and parameters based on the most recent data. The model does not require historical data and adopts clustering and rule pruning techniques to generate a compact and up-to-date rule-base. GSETSK was used in two forecast applications, rainfall-runoff (a catchment in Sweden) and river routing (Lower Mekong River) forecasts. Each of these two applications was studied under two scenarios: (i) there is no prior data, and (ii) only limited data is available (1 year for the Swedish catchment and 1 season for the Mekong River). For the Swedish basin, GSETSK model results were compared to available results from a calibrated HBV (Hydrologiska Byråns Vattenbalansavdelning) model. For the Mekong River, GSETSK results were compared against the URBS (Unified River Basin Simulator) model. Both comparisons showed that results from GSETSK are comparable with those of the physically based models, which were calibrated with historical data. Thus, even though GSETSK was trained with a very limited dataset in comparison with HBV or URBS, similar results were achieved. Similarly, further comparisons of GSETSK with DENFIS and RBF (Radial Basis Function) models highlighted further advantages of GSETSK: its rule-base (compared to an opaque RBF) is more compact, up-to-date and more easily interpretable.
An agent-based model for an air emissions cap and trade program: A case study in Taiwan.
Huang, Hsing-Fu; Ma, Hwong-Wen
2016-12-01
To determine the actual status of individuals in a system and the trading interactions between polluters, this study uses an agent-based model to set up a virtual world that represents the Kaohsiung and Pingtung regions in Taiwan, which are under the country's air emissions cap and trade program. The model can simulate each controlled industry's dynamic behavioral condition with a bottom-up method and can investigate the impact of the program and determine the industry's emissions reduction and trading condition. This model can be used flexibly to predict the impact of the trading market by adjusting different settings of the program rules or combining the settings with other measures. The simulation results show that the emissions trading market has an oversupply, but we find that the market trading amounts are low. Additionally, we find that increasing the air pollution fee and offset rate restrains the agents' trading decisions, according to the simulation results of each scenario. In particular, NOx and SOx trading amounts are easily impacted by the pollution fee, reduction rate, and offset rate. Also, the more transparent the market, the more it can help polluters trade. Therefore, if authorities want to intervene in the emissions trading market, they must be careful in adjusting the air pollution fee and program rules; otherwise, the trading market system cannot work effectively. We also suggest setting up a trading platform to help the dealers negotiate successfully. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mesoscale Modeling of LX-17 Under Isentropic Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springer, H K; Willey, T M; Friedman, G
Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121μ + 0.4958μ² + 2.0473μ³) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999μ - 0.6967μ² + 4.9546μ³), and the KEA model yielded the most compliant EOS of all the equilibrium mixture models. Mesoscale simulations with the lower density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.
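The mass-fraction weighted specific-volume rule shared by the SCCE and SCEOS models is simple to state in code; the numbers below are placeholders rather than TATB or binder data.

```python
# Mass-fraction weighted specific-volume mixture rule:
# v_mix = sum_i w_i * v_i, where the w_i are mass fractions summing to 1.
def mixture_specific_volume(mass_fractions, specific_volumes):
    assert abs(sum(mass_fractions) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(mass_fractions, specific_volumes))

# Placeholder two-constituent example (values are illustrative, cm^3/g).
print(mixture_specific_volume([0.925, 0.075], [0.515, 0.870]))
```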
The Space Environmental Impact System
NASA Astrophysics Data System (ADS)
Kihn, E. A.
2009-12-01
The Space Environmental Impact System (SEIS) is an operational tool for incorporating environmental data sets into DoD Modeling and Simulation (M&S), allowing enhanced decision making regarding acquisitions, testing, operations and planning. From environmental archives and a developed rule-base, SEIS provides a tool for describing the effects of the space environment on particular military systems, both historically and in real time. The system uses data available over the web, in particular data provided by NASA's virtual observatory network, as well as modeled data generated specifically for this purpose. The rule-base system developed to support SEIS is an open XML-based model which can be extended to events from any environmental domain. This presentation will show how the SEIS tool allows users to easily and accurately evaluate the effect of space weather in terms that are meaningful to them, discuss the relevant standards used in its construction, and go over lessons learned from fielding an operational environmental decision tool.
Fuzzy energy management for hybrid fuel cell/battery systems for more electric aircraft
NASA Astrophysics Data System (ADS)
Corcau, Jenica-Ileana; Dinca, Liviu; Grigorie, Teodor Lucian; Tudosie, Alexandru-Nicolae
2017-06-01
This paper presents the simulation and analysis of a fuzzy energy management scheme for hybrid fuel cell/battery systems used in More Electric Aircraft. The fuel cell hybrid system consists of a fuel cell and lithium-ion batteries, along with associated DC-DC boost converters. In this configuration the battery has its own DC-DC converter, because it is an active element in the system. The energy management scheme includes a rule-based fuzzy logic strategy. This scheme has a faster response to load changes and is more robust to measurement imprecision. Simulations are carried out using Matlab/Simulink-based models, and simulation results are given to show the overall system performance.
Adaptive WTA with an analog VLSI neuromorphic learning chip.
Häfliger, Philipp
2007-03-01
In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
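As a generic illustration of a pair-based spike-timing rule of this kind (not the specific hardware rule on the CAVIAR chip), the sketch below potentiates for pre-before-post spike pairs and depresses for the reverse order; with Poisson inputs the accumulated update grows with the joint firing rates, which is the rate-based Hebbian limit the abstract refers to.

```python
import numpy as np

# Pair-based STDP: pre-before-post potentiates, post-before-pre depresses,
# with exponential time windows. Time constants are illustrative.
rng = np.random.default_rng(6)
tau_plus = tau_minus = 20.0   # ms
A_plus, A_minus = 0.01, 0.012

def stdp_dw(pre_times, post_times):
    dw = 0.0
    for tp in pre_times:
        for tq in post_times:
            dt = tq - tp
            if dt > 0:
                dw += A_plus * np.exp(-dt / tau_plus)    # potentiation
            elif dt < 0:
                dw -= A_minus * np.exp(dt / tau_minus)   # depression
    return dw

# Poisson spike trains: higher joint rates give a larger average update.
T = 1000.0  # ms
for rate in (5.0, 20.0):  # Hz
    pre = np.sort(rng.uniform(0, T, rng.poisson(rate * T / 1000)))
    post = np.sort(rng.uniform(0, T, rng.poisson(rate * T / 1000)))
    print(f"rate {rate} Hz: dw = {stdp_dw(pre, post):+.4f}")
```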
Circular analysis in complex stochastic systems
Valleriani, Angelo
2015-01-01
Ruling out observations can lead to wrong models. This danger occurs unwillingly when one selects observations, experiments, simulations or time-series based on their outcome. In stochastic processes, conditioning on the future outcome biases all local transition probabilities and makes them consistent with the selected outcome. This circular self-consistency leads to models that are inconsistent with physical reality. It is also the reason why models built solely on macroscopic observations are prone to this fallacy. PMID:26656656
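A few lines suffice to demonstrate the fallacy: estimate a fair coin's heads probability only from sequences selected for many heads, and the statistics come out consistent with the selected outcome rather than with the coin.

```python
import random

# Conditioning on the outcome biases the estimated "transition" statistics:
# among fair-coin runs selected for >= 8 heads out of 10, the apparent
# per-flip heads probability is ~0.82, not 0.5.
random.seed(7)
kept_heads, kept_total = 0, 0
for _ in range(100000):
    flips = [random.random() < 0.5 for _ in range(10)]
    if sum(flips) >= 8:                # select runs by their outcome...
        kept_heads += sum(flips)
        kept_total += len(flips)

print("apparent P(heads) in selected runs:", kept_heads / kept_total)
```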
Developing a reversible rapid coordinate transformation model for the cylindrical projection
NASA Astrophysics Data System (ADS)
Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai
2016-04-01
Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and one polynomial is hard to make simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the calculation of the coordinate transformation if we construct polynomials to approximate the transformation rule instead of "true" coordinates? In addition, is it preferable to compare models using such polynomials with traditional numerical models with even higher exponents? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of a large amount of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
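The sketch below illustrates the grid-based idea under a simplifying assumption: a spherical-Mercator northing formula stands in for the real EPSG transformation, and a linear polynomial is fitted to the transformation rule within each 5-degree graticule cell. Because each cell's model is linear, the inverse transformation is immediate (lat = (y - b)/a), which is the reversibility property the paper emphasizes.

```python
import numpy as np

# Per-cell linear approximation of a projection rule (LRA-style sketch).
R = 6378137.0
def mercator_y(lat_deg):
    """Spherical Mercator northing; a stand-in for the real transformation."""
    return R * np.log(np.tan(np.pi / 4 + np.radians(lat_deg) / 2))

cells = np.arange(0.0, 60.0, 5.0)       # 5-degree graticule cells
coeffs = {}
for lo in cells:
    lats = np.linspace(lo, lo + 5.0, 20)
    a, b = np.polyfit(lats, mercator_y(lats), 1)   # linear fit per cell
    coeffs[lo] = (a, b)

def lra_forward(lat):
    lo = min(5.0 * np.floor(lat / 5.0), cells[-1])
    a, b = coeffs[lo]
    return a * lat + b                   # inverse is (y - b) / a

lat = 31.7
print("exact:", mercator_y(lat), "LRA:", lra_forward(lat))
```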
A modified Galam’s model for word-of-mouth information exchange
NASA Astrophysics Data System (ADS)
Ellero, Andrea; Fasano, Giovanni; Sorato, Annamaria
2009-09-01
In this paper we analyze the stochastic model proposed by Galam in [S. Galam, Modelling rumors: The no plane Pentagon French hoax case, Physica A 320 (2003), 571-580] for information spreading in a ‘word-of-mouth’ process among agents, based on a majority rule. Using the communication rules among agents defined in the above reference, we first perform simulations of the ‘word-of-mouth’ process and compare the results with the theoretical values predicted by Galam’s model. Some dissimilarities arise, in particular when a small number of agents is considered. We identify the causes of these dissimilarities and suggest some enhancements by introducing a new parameter-dependent model. We propose a modified Galam scheme which is asymptotically coincident with the original model in the above reference. Furthermore, for relatively small values of the parameter, we provide numerical evidence that the modified model often outperforms the original one.
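A minimal majority-rule step of the Galam type: agents are shuffled into small discussion groups and every member adopts the group's majority opinion. Group size, population and iteration count below are arbitrary; with odd-sized groups no tie-breaking convention is needed.

```python
import random

# Galam-style word-of-mouth dynamics: random groups, local majority rule.
random.seed(8)
N, steps = 90, 30                       # N divisible by the group size
opinions = [random.randint(0, 1) for _ in range(N)]

for _ in range(steps):
    random.shuffle(opinions)            # random group formation
    for start in range(0, N, 3):        # groups of three (odd: no ties)
        group = opinions[start:start + 3]
        majority = 1 if sum(group) * 2 > len(group) else 0
        opinions[start:start + 3] = [majority] * len(group)

print("fraction holding opinion 1:", sum(opinions) / N)
```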
Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design
NASA Astrophysics Data System (ADS)
Ang, Chee Siang; Zaphiris, Panayiotis
We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc) could potentially influence the characteristic of the social networks.
Spiking neural network model for memorizing sequences with forward and backward recall.
Borisyuk, Roman; Chik, David; Kazanovich, Yakov; da Silva Gomes, João
2013-06-01
We present an oscillatory network of conductance based spiking neurons of Hodgkin-Huxley type as a model of memory storage and retrieval of sequences of events (or objects). The model is inspired by psychological and neurobiological evidence on sequential memories. The building block of the model is an oscillatory module which contains excitatory and inhibitory neurons with all-to-all connections. The connection architecture comprises two layers. A lower layer represents consecutive events during their storage and recall. This layer is composed of oscillatory modules. Plastic excitatory connections between the modules are implemented using an STDP type learning rule for sequential storage. Excitatory neurons in the upper layer project star-like modifiable connections toward the excitatory lower layer neurons. These neurons in the upper layer are used to tag sequences of events represented in the lower layer. Computer simulations demonstrate good performance of the model including difficult cases when different sequences contain overlapping events. We show that the model with STDP type or anti-STDP type learning rules can be applied for the simulation of forward and backward replay of neural spikes respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Module-based multiscale simulation of angiogenesis in skeletal muscle
2011-01-01
Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...
2015-11-09
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.
EXPOSURES AND INTERNAL DOSES OF ...
The National Center for Environmental Assessment (NCEA) has released a final report that presents and applies a method to estimate distributions of internal concentrations of trihalomethanes (THMs) in humans resulting from residential drinking water exposure. The report presents simulations of oral, dermal and inhalation exposures and demonstrates the feasibility of linking the US EPA’s Information Collection Rule database with other databases on external exposure factors and physiologically based pharmacokinetic modeling to refine population-based estimates of exposure. Review Draft - by 2010, develop scientifically sound data and approaches to assess and manage risks to human health posed by exposure to specific regulated waterborne pathogens and chemicals, including those addressed by the Arsenic, M/DBP and Six-Year Review Rules.
Two-dimensional lattice Boltzmann model for magnetohydrodynamics.
Schaffenberger, Werner; Hanslmeier, Arnold
2002-10-01
We present a lattice Boltzmann model for the simulation of two-dimensional magnetohydrodynamic (MHD) flows. The model is an extension of a hydrodynamic lattice Boltzmann model with 9 velocities on a square lattice, resulting in a model with 17 velocities. Earlier lattice Boltzmann models for two-dimensional MHD used a bidirectional streaming rule. However, the use of such a bidirectional streaming rule is not necessary. In our model, the standard streaming rule is used, allowing smaller viscosities. To control the viscosity and the resistivity independently, a matrix collision operator is used. The model is then applied to the Hartmann flow, giving reasonable results.
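For readers unfamiliar with the streaming step, the sketch below shows standard (unidirectional) streaming on an ordinary D2Q9 hydrodynamic lattice: each distribution f_i simply shifts one site along its discrete velocity. The 17-velocity MHD lattice of the paper extends this same pattern; the distributions here are random placeholders.

```python
import numpy as np

# Standard streaming step on a D2Q9 lattice with periodic boundaries.
rng = np.random.default_rng(9)
nx, ny = 32, 32
# D2Q9 discrete velocities.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
f = rng.random((9, nx, ny))             # distribution functions f_i(x, t)

def stream(f):
    """Shift each f_i one lattice site along its velocity c_i."""
    return np.stack([np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))
                     for i in range(9)])

f = stream(f)
print(f.shape)
```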
A simulation framework for mapping risks in clinical processes: the case of in-patient transfers.
Dunn, Adam G; Ong, Mei-Sing; Westbrook, Johanna I; Magrabi, Farah; Coiera, Enrico; Wobcke, Wayne
2011-05-01
To model how individual violations in routine clinical processes cumulatively contribute to the risk of adverse events in hospital using an agent-based simulation framework. An agent-based simulation was designed to model the cascade of common violations that contribute to the risk of adverse events in routine clinical processes. Clinicians and the information systems that support them were represented as a group of interacting agents using data from direct observations. The model was calibrated using data from 101 patient transfers observed in a hospital and results were validated for one of two scenarios (a misidentification scenario and an infection control scenario). Repeated simulations using the calibrated model were undertaken to create a distribution of possible process outcomes. The likelihood of end-of-chain risk is the main outcome measure, reported for each of the two scenarios. The simulations demonstrate end-of-chain risks of 8% and 24% for the misidentification and infection control scenarios, respectively. Over 95% of the simulations in both scenarios are unique, indicating that the in-patient transfer process diverges from prescribed work practices in a variety of ways. The simulation allowed us to model the risk of adverse events in a clinical process, by generating the variety of possible work subject to violations, a novel prospective risk analysis method. The in-patient transfer process has a high proportion of unique trajectories, implying that risk mitigation may benefit from focusing on reducing complexity rather than augmenting the process with further rule-based protocols.
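The cascade logic is essentially a chain of Bernoulli trials: an adverse event reaches the end of the chain only when every safeguard in the process is violated. The step names and probabilities below are invented stand-ins for the observed violation rates used to calibrate the published model.

```python
import random

# Monte Carlo estimate of end-of-chain risk from cascading violations.
random.seed(10)
steps = [("check ID band", 0.10),       # probability each safeguard is skipped
         ("verify chart", 0.15),
         ("confirm at handover", 0.20)]

def transfer():
    # Harm reaches the end of the chain only if every safeguard fails.
    return all(random.random() < p for _, p in steps)

runs = 100000
risk = sum(transfer() for _ in range(runs)) / runs
print(f"end-of-chain risk: {risk:.3%}")  # ~0.3% with these numbers
```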
A non-coaxial critical state soil model and its application to simple shear simulations
NASA Astrophysics Data System (ADS)
Yang, Yunming; Yu, H. S.
2006-11-01
The yield vertex non-coaxial theory is implemented into a critical state soil model, CASM (Int. J. Numer. Anal. Meth. Geomech. 1998; 22:621-653), to investigate the non-coaxial influences on the stress-strain simulations of real soil behaviour in the presence of principal stress rotations. CASM is a unified clay and sand model, developed based on the soil critical state concept and the state parameter concept. Despite its simplicity, it is capable of simulating the behaviour of sands and clays within a wide range of densities. The non-coaxial CASM is employed to simulate the simple shear responses of Erksak sand and Weald clay under different densities and initial stress states. Dependence of the soil behaviour on the Lode angle and on different plastic flow rules in the deviatoric plane is also considered in the study of non-coaxial influences. All the predictions indicate that the use of the non-coaxial model makes the orientations of the principal stress and the principal strain rate different during the early stage of shearing, and they approach the same ultimate values with an increase in loading. These ultimate orientations are dependent on the density of soils, and independent of their initial stress states. The use of the non-coaxial model also softens the shear stress evolutions, compared with the coaxial model. It is also found that the ultimate shear strengths obtained with the coaxial and non-coaxial models depend on the plastic flow rules in the deviatoric plane.
Frazier, Zachary
2012-01-01
Particle-based Brownian dynamics simulations offer the opportunity not only to simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237
Feng, Yongjiu; Tong, Xiaohua
2017-09-22
Defining transition rules is an important issue in cellular automaton (CA)-based land use modeling because these models incorporate highly correlated driving factors. Multicollinearity among correlated driving factors may produce negative effects that must be eliminated from the modeling. Using exploratory regression under pre-defined criteria, we identified all possible combinations of factors from the candidate factors affecting land use change. Three combinations, each incorporating five driving factors and meeting the pre-defined criteria, were assessed. With the selected combinations of factors, three logistic-regression-based CA models were built to simulate dynamic land use change in Shanghai, China, from 2000 to 2015. For comparative purposes, a CA model with all candidate factors was also applied to simulate the land use change. Simulations using the three CA models with multicollinearity eliminated performed better (with accuracy improvements of about 3.6%) than the model incorporating all candidate factors. Our results showed that not all candidate factors are necessary for accurate CA modeling and that the simulations were not sensitive to changes in statistically non-significant driving factors. We conclude that exploratory regression is an effective method for finding optimal combinations of driving factors, leading to better land use change models that are devoid of multicollinearity. We suggest identifying dominant factors and eliminating multicollinearity before building land change models, making it possible to simulate more realistic outcomes.
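A minimal sketch of the kind of logistic-regression-based CA transition rule described here, assuming a cell's development probability combines the selected driving factors with a neighborhood effect; the coefficients, factor values, and neighborhood term are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

def develop_probability(x, beta0, beta, omega):
    """x: driving-factor vector for one cell; beta0, beta: fitted logistic
    coefficients; omega: developed fraction of the Moore neighborhood."""
    suitability = 1.0 / (1.0 + np.exp(-(beta0 + x @ beta)))  # logistic part
    return suitability * omega                               # neighborhood effect

rng = np.random.default_rng(0)
x = rng.random(5)                     # five selected driving factors
p = develop_probability(x, -1.0, np.ones(5), omega=0.6)
developed = rng.random() < p          # stochastic state transition
```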
NASA Technical Reports Server (NTRS)
Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.
1982-01-01
A variety of artificial intelligence techniques that could be used in NASA space applications and robotics were evaluated. The techniques studied were decision tree manipulators, problem solvers, rule-based systems, logic programming languages, representation language languages, and expert systems. The overall structure of a robotic simulation tool was defined and a framework for that tool developed. Nonlinear and linearized dynamics equations were formulated for n-link manipulator configurations. A framework for the robotic simulation was established which uses validated manipulator component models connected according to a user-defined configuration.
Temporal and contextual knowledge in model-based expert systems
NASA Technical Reports Server (NTRS)
Toth-Fejel, Tihamer; Heher, Dennis
1987-01-01
A basic paradigm that allows representation of physical systems with a focus on context and time is presented. Paragon provides the capability to quickly capture an expert's knowledge in a cognitively resonant manner. From that description, Paragon creates a simulation model in LISP which, when executed, verifies that the domain expert did not make any mistakes. The Achilles' heel of rule-based systems has been the lack of a systematic methodology for testing, and Paragon's developers are certain that the model-based approach overcomes that problem. The reason this testing is now possible is that software, which is very difficult to test, has in essence been transformed into hardware.
Employment, Production and Consumption model: Patterns of phase transitions
NASA Astrophysics Data System (ADS)
Lavička, H.; Lin, L.; Novotný, J.
2010-04-01
We have simulated the Employment, Production and Consumption (EPC) model using Monte Carlo methods. The EPC model is an agent-based model that mimics very basic rules of an industrial economy. From the perspective of physics, the interactions in the EPC model are multi-agent interactions in which the relations among agents follow the key laws for the circulation of capital and money. Monte Carlo simulations of the stochastic model reveal a phase transition in the model economy. The two phases are a phase with full unemployment and a phase with nearly full employment. The economy switches between these two states suddenly in reaction to a slight variation of an exogenous parameter; the system thus exhibits strongly non-linear behavior in response to changes in the exogenous parameters.
Lung Cancer Assistant: a hybrid clinical decision support application for lung cancer care.
Sesen, M Berkan; Peake, Michael D; Banares-Alcantara, Rene; Tse, Donald; Kadir, Timor; Stanley, Roz; Gleeson, Fergus; Brady, Michael
2014-09-06
Multidisciplinary team (MDT) meetings are becoming the model of care for cancer patients worldwide. While MDTs have improved the quality of cancer care, the meetings impose substantial time pressure on the members, who generally attend several such MDTs. We describe Lung Cancer Assistant (LCA), a clinical decision support (CDS) prototype designed to assist the experts in treatment selection decisions in lung cancer MDTs. A novel feature of LCA is its ability to provide rule-based and probabilistic decision support within a single platform. The guideline-based CDS is based on clinical guideline rules, while the probabilistic CDS is based on a Bayesian network trained on the English Lung Cancer Audit Database (LUCADA). We assess the rule-based and probabilistic recommendations based on their concordance with the treatments recorded in LUCADA. Our results reveal that the guideline rule-based recommendations perform well in simulating the recorded treatments, with exact and partial concordance rates of 0.57 and 0.79, respectively. The exact and partial concordance rates achieved with the probabilistic approach are lower, at 0.27 and 0.76, respectively. However, probabilistic decision support fulfils a complementary role in providing accurate survival estimates. Compared to recorded treatments, both CDS approaches promote higher resection rates and multimodality treatments.
Experimental validation of thermodynamic mixture rules at extreme pressures and densities
NASA Astrophysics Data System (ADS)
Bradley, P. A.; Loomis, E. N.; Merritt, E. C.; Guzik, J. A.; Denne, P. H.; Clark, T. T.
2018-01-01
Accurate modeling of a mixed material Equation of State (EOS) at high pressures (~1 to 100 Mbar) is critical for simulating inertial confinement fusion and high energy density systems. This paper presents a comparison of two mixing rule models to experiment to assess their applicability in this regime. The shock velocities of polystyrene, aluminum, and nickel aluminide (NiAl) were measured at a shock pressure of ~3 TPa (~30 Mbar) on the Omega EP laser facility (Laboratory for Laser Energetics, University of Rochester, New York). The resultant shock velocities were compared to those derived from the RAGE (Eulerian) hydrodynamics code to validate various mixing rules used to construct an EOS for NiAl. The simulated shock transit time through the sample (Al or NiAl) matched the measurements to within the ±45 ps measurement uncertainty. The law of partial volume (Amagat) and the law of partial pressure (Dalton) mixture rules provided equally good matches to the NiAl shock data. Other studies showed that the Amagat mixing rule is superior, and we recommend it since our results also show a satisfactory match. The comparable quality of the simulation to data for the Al and NiAl samples implies that a mixture rule can supply an EOS for plasma mixtures with adequate fidelity for simulations where mixing takes place, such as advective mix in an Eulerian code or when two materials are mixed together via diffusion, turbulence, or other physical processes.
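For intuition, the sketch below contrasts the Amagat (partial-volume) and Dalton (partial-pressure) mixing rules for a mixture with mole fractions x, using ideal-gas component equations of state purely for illustration; in practice, the tabular EOS of each material would replace the component functions, and the two rules then generally disagree (for ideal gases they coincide).

```python
R = 8.314  # gas constant, J/(mol K)

def v_component(p, T):
    """Molar volume of one (ideal-gas) component at the common pressure p."""
    return R * T / p

def p_component(v, T):
    """Pressure of one (ideal-gas) component filling the molar volume v."""
    return R * T / v

def amagat_volume(x, p, T):
    """Law of partial volumes: mixture volume is the mole-fraction-weighted
    sum of component volumes evaluated at the common pressure."""
    return sum(xi * v_component(p, T) for xi in x)

def dalton_pressure(x, v, T):
    """Law of partial pressures: mixture pressure is the sum of partial
    pressures of each species filling the whole volume."""
    return sum(xi * p_component(v, T) for xi in x)
```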
Representation and the Rules of the Game: An Electoral Simulation
ERIC Educational Resources Information Center
Hoffman, Donna R.
2009-01-01
It is often a difficult proposition for introductory American government students to comprehend different electoral systems and how the rules of the game affect the representation that results. I have developed a simulation in which different proportional-based electoral systems are compared with a single-member plurality electoral system. In…
An entropy model to measure heterogeneity of pedestrian crowds using self-propelled agents
NASA Astrophysics Data System (ADS)
Rangel-Huerta, A.; Ballinas-Hernández, A. L.; Muñoz-Meléndez, A.
2017-05-01
An entropy model to characterize the heterogeneity of a pedestrian crowd in a counter-flow corridor is presented. Pedestrians are modeled as self-propelled autonomous agents that are able to perform maneuvers to avoid collisions based on a set of simple rules of perception and action. An observer can determine a probability distribution function of the displayed behavior of pedestrians based only on external information. Three types of pedestrians are modeled (relaxed, standard, and hurried), depending on their preference for turning or not turning while walking. Using these pedestrian types, two kinds of crowds can be simulated: homogeneous and heterogeneous. Heterogeneity is measured in this research by the entropy as a function of time, with the entropy of a homogeneous crowd comprising standard pedestrians used as a reference. A number of simulations to measure the entropy of pedestrian crowds were conducted by varying the combinations of pedestrian types, the initial macroscopic-flow conditions of the simulation, and the density of the crowd. Results from these simulations show that our entropy model is sensitive enough to capture the effect of both the initial conditions regarding the spatial distribution of pedestrians in the corridor and the composition of the crowd. A further relevant finding is that the entropy as a function of density presents a phase transition in the critical region.
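The entropy measure itself can be sketched simply: at each time step the observer bins the externally visible behaviors and computes the Shannon entropy of the empirical distribution. The behavior bins below are an illustrative assumption, not the paper's exact probability distribution function.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of an empirical behavior distribution."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Observed maneuver counts at one time step (turn left / turn right / no turn)
print(shannon_entropy([40, 35, 25]))   # heterogeneous crowd: high entropy
print(shannon_entropy([98, 1, 1]))     # nearly homogeneous: low entropy
```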
How effective is advertising in duopoly markets?
NASA Astrophysics Data System (ADS)
Sznajd-Weron, K.; Weron, R.
2003-06-01
A simple Ising spin model that can describe the mechanism of advertising in a duopoly market is proposed. In contrast to other agent-based models, the influence does not flow inward from the surrounding neighbors to the center site, but spreads outward from the center to the neighbors. The model thus describes the spread of opinions among customers. It is shown via standard Monte Carlo simulations that very simple rules and the inclusion of an external field (an advertising campaign) lead to phase transitions.
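A hedged sketch of one Monte Carlo sweep of such an outward-influence model: a randomly chosen customer pushes its brand choice onto its four lattice neighbors, and an external advertising field then converts a random fraction of customers to the advertised brand. The update details and the form of the field are illustrative assumptions, not the authors' exact dynamics.

```python
import numpy as np

def sweep(s, h, rng):
    """s: LxL array of spins +1/-1 (customers of brands A/B); h in [0, 1]
    is the advertising field strength favoring brand +1."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            s[(i + di) % L, (j + dj) % L] = s[i, j]   # outward influence
    s[rng.random(s.shape) < h] = 1                    # advertising campaign
    return s

# Usage
rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=(64, 64))
s = sweep(s, h=0.01, rng=rng)
```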
Scattering property based contextual PolSAR speckle filter
NASA Astrophysics Data System (ADS)
Mullissa, Adugna G.; Tolpekin, Valentyn; Stein, Alfred
2017-12-01
Reliability of the scattering-model-based polarimetric SAR (PolSAR) speckle filter depends upon accurate decomposition and classification of the scattering mechanisms. This paper presents an improved scattering-property-based contextual speckle filter built upon an iterative classification of the scattering mechanisms. It applies a Cloude-Pottier eigenvalue-eigenvector decomposition and a fuzzy H/α classification to determine the scattering mechanisms on a pre-estimate of the coherency matrix. The H/α classification identifies pixels with homogeneous scattering properties. A coarse pixel selection rule groups pixels that are single-bounce, double-bounce, or volume scatterers. A fine pixel selection rule is applied to pixels within each canonical scattering mechanism. We filter the PolSAR data and, depending on the type of image scene (urban or rural), use either the coarse or the fine pixel selection rule. Iterative refinement of the Wishart H/α classification reduces the speckle in the PolSAR data. The effectiveness of this new filter is demonstrated using both simulated and real PolSAR data. It is compared with the refined Lee filter, the scattering-model-based filter, and the non-local means filter. The study concludes that the proposed filter compares favorably with other polarimetric speckle filters in preserving polarimetric information, point scatterers, and subtle features in PolSAR data.
Modeling of pilot's visual behavior for low-level flight
NASA Astrophysics Data System (ADS)
Schulte, Axel; Onken, Reiner
1995-06-01
Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper approaches the problem by modeling the pilot's visual behavior, founded on the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment, and the pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, grounded in the preflight planning process, as opposed to skill-based processes. The paper results in a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system built on the fuzzy evaluation of terrain features in order to determine the landmarks used by pilots. A computer implementation of the model can be shown to select the same features preferred by trained pilots.
Visualization and Rule Validation in Human-Behavior Representation
ERIC Educational Resources Information Center
Moya, Lisa Jean; McKenzie, Frederic D.; Nguyen, Quynh-Anh H.
2008-01-01
Human behavior representation (HBR) models simulate human behaviors and responses. The Joint Crowd Federate [TM] cognitive model developed by the Virginia Modeling, Analysis, and Simulation Center (VMASC) and licensed by WernerAnderson, Inc., models the cognitive behavior of crowds to provide credible crowd behavior in support of military…
Excellent approach to modeling urban expansion by fuzzy cellular automata: agent base model
NASA Astrophysics Data System (ADS)
Khajavigodellou, Yousef; Alesheikh, Ali A.; Mohammed, Abdulrazak A. S.; Chapi, Kamran
2014-09-01
Recently, the interaction between humans and their environment has become one of the most important challenges in the world. Land use/cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. The complexity and dynamics of urban systems make practical urban modeling very difficult. With increased computational power and the greater availability of spatial data, micro-simulation methods such as agent-based and cellular automata simulation have been developed by geographers, planners, and scholars, and they have shown great potential for representing and simulating the complexity of the dynamic processes involved in urban growth and land use change. This paper presents a fuzzy cellular automata model in a geospatial information system and remote sensing context to simulate and predict urban expansion patterns. These FCA-based dynamic spatial urban models provide an improved ability to forecast and assess future urban growth and to create planning scenarios, allowing us to explore the potential impacts of simulations that correspond to urban planning and management policies. Semantic or linguistic knowledge of land use change is expressed as fuzzy rules, based on which fuzzy inference is applied to determine the urban development potential for each pixel, yielding a fuzzy-inference-guided cellular automata approach. The model integrates an ABM (agent-based model) and FCA (fuzzy cellular automata) to investigate a complex decision-making process and future urban dynamics. Based on this model, rapid development and green land protection under the influence of the behaviors and decision modes of regional authority agents, real estate developer agents, resident agents, and non-resident agents and their interactions have been applied to predict the future development patterns of the Erbil metropolitan region.
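A minimal sketch of how such linguistic knowledge can be encoded as a fuzzy CA rule, e.g. "IF the cell is close to a road AND many of its neighbors are developed THEN its development potential is high"; the membership functions and the single Mamdani-style rule are illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def development_potential(dist_to_road_km, developed_frac):
    close = tri(dist_to_road_km, -1.0, 0.0, 2.0)   # "close to a road"
    many = tri(developed_frac, 0.3, 1.0, 1.7)      # "many developed neighbors"
    return min(close, many)                        # fuzzy AND (Mamdani min)

# A cell 0.5 km from a road with 70% of its neighborhood developed
print(development_potential(0.5, 0.7))
```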
BUDEM: an urban growth simulation model using CA for Beijing metropolitan area
NASA Astrophysics Data System (ADS)
Long, Ying; Shen, Zhenjiang; Du, Liqun; Mao, Qizhi; Gao, Zhanping
2008-10-01
There is a great need to identify the future urban form of Beijing, which faces challenges from rapid growth in urban development projects. We developed the Beijing Urban Development Model (BUDEM) to support urban planning and the evaluation of corresponding policies. BUDEM is a spatio-temporal dynamic model for simulating urban growth in the Beijing metropolitan area, using cellular automata (CA) and multi-agent system (MAS) approaches. In this phase, a computer simulation using CA in the Beijing metropolitan area is conducted, which attempts to provide a premise for urban activities including different kinds of urban development projects for industrial plants, shopping facilities, and houses. The paper introduces the concept model of BUDEM, established on prevalent urban growth theories. A method integrating logistic regression and MonoLoop is used to retrieve the weights in the transition rule by multi-criteria evaluation (MCE). After a model sensitivity analysis, we apply BUDEM to three aspects of urban planning practice: (1) identifying urban growth mechanisms in the various historical phases since 1986; (2) identifying the urban growth policies needed to implement the desired (planned) urban form, BEIJING2020; and (3) simulating urban growth scenarios for 2049 (BEIJING2049) based on the urban form and parameter set of BEIJING2020.
POPEYE: A production rule-based model of multitask supervisory control (POPCORN)
NASA Technical Reports Server (NTRS)
Townsend, James T.; Kadlec, Helena; Kantowitz, Barry H.
1988-01-01
Recent studies of the relationships between subjective ratings of mental workload, performance, and human operator and task characteristics have indicated that these relationships are quite complex. In order to study these relationships and place subjective mental workload within a theoretical framework, we developed a production system model for the performance component of the complex supervisory task called POPCORN. The production system model is represented by a hierarchical structure of goals and subgoals, and the information flow is controlled by a set of condition-action rules. The implementation of this production system, called POPEYE, generates computer-simulated data under different task difficulty conditions which are comparable to those of human operators performing the task. This model is the performance aspect of an overall dynamic psychological model which we are developing to examine and quantify relationships between performance and psychological aspects in a complex environment.
Predicting the stability of nanodevices
NASA Astrophysics Data System (ADS)
Lin, Z. Z.; Yu, W. F.; Wang, Y.; Ning, X. J.
2011-05-01
A simple model based on the statistics of single atoms is developed to predict the stability or lifetime of nanodevices without empirical parameters. Under certain conditions, the model reproduces the Arrhenius law and the Meyer-Neldel compensation rule. Compared with classical molecular-dynamics simulations for predicting the stability of a monatomic carbon chain at high temperature, the model proves to be much more accurate than transition state theory. Based on ab initio calculations of the static potential, the model can give corrected lifetimes of monatomic carbon and gold chains at higher temperatures, and it predicts that the monatomic chains are very stable at room temperature.
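The Arrhenius-law part of the prediction is easy to illustrate: with a static barrier Eb (e.g. from an ab initio calculation) and an attempt frequency, the expected lifetime grows exponentially as temperature drops. The numbers below are placeholders, not the paper's computed barriers.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_lifetime(Eb_eV, T_K, nu0=1e13):
    """Mean time to a single barrier-crossing event: tau = exp(Eb/kT) / nu0."""
    return np.exp(Eb_eV / (KB * T_K)) / nu0

print(arrhenius_lifetime(1.0, 300))    # room temperature: ~6e3 s for a 1 eV barrier
print(arrhenius_lifetime(1.0, 1000))   # high temperature: ~1e-8 s
```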
Multi-Sensory Aerosol Data and the NRL NAAPS model for Regulatory Exceptional Event Analysis
NASA Astrophysics Data System (ADS)
Husar, R. B.; Hoijarvi, K.; Westphal, D. L.; Haynes, J.; Omar, A. H.; Frank, N. H.
2013-12-01
Beyond scientific exploration and analysis, multi-sensory observations along with models are finding increasing application in operational air quality management. EPA's Exceptional Event (EE) Rule allows the exclusion of data strongly influenced by impacts from "exceptional events," such as smoke from wildfires or dust from abnormally high winds. The EE Rule encourages the use of satellite observations and other non-standard data, along with models, as evidence for the formal documentation of EE samples for exclusion. Thus, the implementation of the EE Rule is uniquely suited to the direct application of integrated multi-sensory observations, and indirectly through their assimilation into an aerosol simulation model. Here we report the results of a project, NASA and NAAPS Products for Air Quality Decision Making, which uses observations from multiple satellite sensors, surface-based aerosol measurements, and the NRL Aerosol Analysis and Prediction System (NAAPS) model that assimilates key satellite observations. The satellite sensor data for detecting and documenting smoke and dust events include MODIS AOD and images; OMI Aerosol Index and tropospheric NO2; and AIRS CO. The surface observations include the EPA regulatory PM2.5 network; the IMPROVE/STN aerosol chemical network; the AIRNOW PM2.5 mass network; and surface meteorological data. Within this application, a crucial role is assigned to the NAAPS model for estimating the surface concentration of windblown dust and biomass smoke. The operational model assimilates quality-assured daily MODIS data using 2DVAR to adjust the model concentrations, and a CALIOP-based climatology to adjust the vertical profiles at 6-hour intervals. The assimilation of data from multiple satellites significantly contributes to the usefulness of NAAPS for EE analysis. The NAAPS smoke and dust simulations were evaluated using the IMPROVE/STN chemical data. The multi-sensory observations along with the model simulations are integrated into a web-based Exceptional Event Decision System (EE DSS) application, designed to support air quality analysts at the federal and regional EPA offices and in the EE-affected states. The EE DSS screening tool automatically identifies the EPA PM2.5 mass samples that are candidates for EE flagging, based mainly on the NAAPS-simulated surface concentrations of dust and smoke. The AQ analysts at the states and the EPA can also use the EE DSS to gather further evidence from the examination of spatio-temporal patterns, Absorbing Aerosol Index, CO and NO2 concentrations, backward and forward airmass trajectories, and other signatures. Since early 2013, the DSS has been used for the identification and analysis of dozens of events. Hence, the integration of multi-sensory observations and modeling with data assimilation is maturing to support real-world operational AQ management applications. The remaining challenges can be resolved by seeking 'closure' among the system components, i.e., the systematic adjustments needed to reconcile the satellite and surface observations and the emissions, and their integration through a suitable AQ model.
Allen, Corrie H; Parrott, Lael; Kyle, Catherine
2016-01-01
Background. Preserving connectivity, or the ability of a landscape to support species movement, is among the most commonly recommended strategies to reduce the negative effects of climate change and human land use development on species. Connectivity analyses have traditionally used a corridor-based approach and rely heavily on least cost path modeling and circuit theory to delineate corridors. Individual-based models are gaining popularity as a potentially more ecologically realistic method of estimating landscape connectivity. However, this remains a relatively unexplored approach. We sought to explore the utility of a simple, individual-based model as a land-use management support tool in identifying and implementing landscape connectivity. Methods. We created an individual-based model of bighorn sheep (Ovis canadensis) that simulates a bighorn sheep traversing a landscape by following simple movement rules. The model was calibrated for bighorn sheep in the Okanagan Valley, British Columbia, Canada, a region containing isolated herds that are vital to conservation of the species in its northern range. Simulations were run to determine baseline connectivity between subpopulations in the study area. We then applied the model to explore two land management scenarios on simulated connectivity: restoring natural fire regimes and identifying appropriate sites for interventions that would increase road permeability for bighorn sheep. Results. This model suggests there are no continuous areas of good habitat between current subpopulations of sheep in the study area; however, a series of stepping-stones or circuitous routes could facilitate movement between subpopulations and into currently unoccupied, yet suitable, bighorn habitat. Restoring natural fire regimes or mimicking fire with prescribed burns and tree removal could considerably increase bighorn connectivity in this area. Moreover, several key road crossing sites that could benefit from wildlife overpasses were identified. Discussion. By linking individual-scale movement rules to landscape-scale outcomes, our individual-based model of bighorn sheep allows for the exploration of how on-the-ground management or conservation scenarios may increase functional connectivity for the species in the study area. More generally, this study highlights the usefulness of individual-based models to identify how a species makes broad use of a landscape for movement. Application of this approach can provide effective quantitative support for decision makers seeking to incorporate wildlife conservation and connectivity into land use planning.
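A hedged sketch of the individual-based idea: an agent walks over a habitat-quality raster, preferring higher-quality neighboring cells, and connectivity between two subpopulations is scored by the fraction of walkers that make the crossing. The movement rule, raster, and step budget are illustrative assumptions, not the calibrated bighorn model.

```python
import numpy as np

def walk(quality, start, goal_cells, rng, max_steps=5_000):
    """Return True if a walker starting at `start` reaches any goal cell,
    stepping to a neighbor with probability proportional to habitat quality."""
    pos = start
    nrow, ncol = quality.shape
    for _ in range(max_steps):
        if pos in goal_cells:
            return True
        i, j = pos
        nbrs = [(i + di, j + dj)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < nrow and 0 <= j + dj < ncol]
        w = np.array([quality[n] for n in nbrs])
        pos = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
    return False

# Connectivity score: fraction of 200 walkers crossing corner to corner
rng = np.random.default_rng(1)
quality = rng.random((40, 40)) + 0.01    # stand-in habitat-quality raster
score = sum(walk(quality, (0, 0), {(39, 39)}, rng) for _ in range(200)) / 200
```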
Residential water demand model under block rate pricing: A case study of Beijing, China
NASA Astrophysics Data System (ADS)
Chen, H.; Yang, Z. F.
2009-05-01
In many cities, the inconsistency between water supply and water demand has become a critical problem because of worsening water shortages and increasing water demand. A uniform price for residential water cannot promote efficient water allocation. In China, block water pricing will be put into practice in the future, but the outcome of such a regulatory measure is unpredictable without theoretical support. In this paper, residential water is classified by the volume of water usage based on economic rules, and each block of water is treated as a different kind of good. A model based on the extended linear expenditure system (ELES) is constructed to simulate the relationship between block water prices and water demand, providing theoretical support for decision-makers. Finally, the proposed model is used to simulate residential water demand under block rate pricing in Beijing.
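The block pricing mechanism itself is simple to state in code: each successive consumption block is billed at a higher marginal price, so the household faces a piecewise-linear expenditure function. The block bounds and prices below are illustrative, not Beijing's actual tariff schedule.

```python
def block_bill(q, blocks=((10, 2.0), (20, 4.0), (float("inf"), 6.0))):
    """q: monthly use in m^3; blocks: (upper bound, price per m^3) pairs,
    in increasing order of the upper bound."""
    bill, lower = 0.0, 0.0
    for upper, price in blocks:
        if q > lower:
            bill += (min(q, upper) - lower) * price  # charge this block's slice
        lower = upper
    return bill

print(block_bill(25))  # 10*2.0 + 10*4.0 + 5*6.0 = 90.0
```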
A Simulation Approach to Decision Making in IT Service Strategy
2014-01-01
We propose to use simulation modeling to support decision making in the IT service strategy scope. Our main contribution is a simulation model that helps service providers analyze how changes in the service capacity assigned to their customers, and in the pattern of service requests received, affect the fulfillment of a business rule associated with the strategic goal of customer satisfaction. This business rule is set in the SLAs that the service provider and its customers agree to, which determine the maximum percentage of service requests that are permitted to be abandoned because they have exceeded the allowed waiting time. To illustrate the use and applications of the model, we include some of the experiments conducted and describe our conclusions. PMID:24790583
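A hedged sketch of the business rule being checked: simulate an s-server service desk where a request abandons if its wait would exceed the allowed maximum, and report the abandonment percentage for comparison with the SLA threshold. The Markovian arrival and service assumptions and all parameter values are illustrative, not the paper's model.

```python
import random

def abandonment_rate(lam, mu, servers, w_max, n=100_000, seed=1):
    """Percentage of requests abandoned because their wait would exceed
    w_max, in an s-server FIFO queue with Poisson arrivals (rate lam)
    and exponential service (rate mu)."""
    random.seed(seed)
    free_at = [0.0] * servers          # time at which each server next frees up
    t, abandoned = 0.0, 0
    for _ in range(n):
        t += random.expovariate(lam)   # next request arrival
        earliest = min(free_at)
        if earliest - t > w_max:       # wait would exceed the SLA limit
            abandoned += 1
            continue
        begin = max(t, earliest)
        free_at[free_at.index(earliest)] = begin + random.expovariate(mu)
    return 100.0 * abandoned / n

print(abandonment_rate(lam=5.0, mu=1.0, servers=6, w_max=0.5))
```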
Modeling formalisms in Systems Biology
2011-01-01
Systems Biology has taken advantage of computational tools and high-throughput experimental data to model several biological processes. These include signaling, gene regulatory, and metabolic networks. However, most of these models are specific to each kind of network. Their interconnection demands a whole-cell modeling framework for a complete understanding of cellular systems. We describe the features required by an integrated framework for modeling, analyzing and simulating biological processes, and review several modeling formalisms that have been used in Systems Biology including Boolean networks, Bayesian networks, Petri nets, process algebras, constraint-based models, differential equations, rule-based models, interacting state machines, cellular automata, and agent-based models. We compare the features provided by different formalisms, and discuss recent approaches in the integration of these formalisms, as well as possible directions for the future. PMID:22141422
An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hudec, Ján; Gramatová, Elena
2015-07-01
The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description; such functional tests belong to software-oriented testing. The quality of the tests is evaluated by the code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. Rules, parameters, and fitness functions were defined for the various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.
NASA Astrophysics Data System (ADS)
Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik
2017-11-01
To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and the curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state equation model and the curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.
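Of the two mixing rules compared, Wilke's rule is the more widely quoted; a sketch of its textbook form follows, with x the mole fractions, mu the pure-species viscosities, and M the molar masses (an 11-species equilibrium-air composition would supply these inputs). This illustrates the rule itself, not the authors' solver.

```python
import numpy as np

def wilke_viscosity(x, mu, M):
    """Wilke's semi-empirical mixing rule for gas-mixture viscosity."""
    x, mu, M = map(np.asarray, (x, mu, M))
    n = len(x)
    phi = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            num = (1.0 + np.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2
            phi[i, j] = num / np.sqrt(8.0 * (1.0 + M[i] / M[j]))
    return sum(x[i] * mu[i] / np.dot(phi[i], x) for i in range(n))

# Two-species check: an equimolar N2/O2 mix (viscosities in Pa s, M in kg/mol)
print(wilke_viscosity([0.5, 0.5], [1.78e-5, 2.06e-5], [0.028, 0.032]))
```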
NASA Astrophysics Data System (ADS)
Sessoms, D. A.; Amon, A.; Courbin, L.; Panizza, P.
2010-10-01
The binary path selection of droplets reaching a T junction is regulated by time-delayed feedback and nonlinear couplings. Such mechanisms result in complex dynamics of droplet partitioning: numerous discrete bifurcations between periodic regimes are observed. We introduce a model based on an approximation that makes this problem tractable. This allows us to derive analytical formulae that predict the occurrence of the bifurcations between consecutive regimes, establish selection rules for the period of a regime, and describe how the period and complexity of the droplet pattern in a cycle evolve with the key parameters of the system. We discuss the validity and limitations of our model, which describes semiquantitatively both numerical simulations and microfluidic experiments.
Multi-Objective Lake Superior Regulation
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Razavi, S.; Tolson, B.
2011-12-01
At the direction of the International Joint Commission (IJC), the International Upper Great Lakes Study (IUGLS) Board is investigating possible changes to the present method of regulating the outflows of Lake Superior (SUP) to better meet the contemporary needs of stakeholders. In this study, a new plan in the form of a rule curve that is directly interpretable for the regulation of SUP is proposed. The proposed rule curve has 18 parameters to be optimized. The IUGLS Board is also interested in a regulation strategy that accounts for climate uncertainty; therefore, the quality of the rule curve is assessed simultaneously for multiple supply sequences that represent various future climate scenarios. The rule curve parameters are obtained by solving a computationally intensive bi-objective simulation-optimization problem that maximizes the total increase in navigation and hydropower benefits of the new regulation plan and minimizes the sum of all normalized constraint violations. The objective and constraint values are obtained from a Microsoft Excel based Shared Vision Model (SVM) that compares any new SUP regulation plan with the current regulation policy. The underlying optimization problem is solved by a recently developed, highly efficient multi-objective optimization algorithm called Pareto Archived Dynamically Dimensioned Search (PA-DDS). To further improve the computational efficiency of the simulation-optimization problem, a model pre-emption strategy is used in a novel way to avoid the complete evaluation of regulation plans of low quality in both objectives. Results show that the generated rule curve is robust and typically more reliable under unpredictable climate conditions than other SUP regulation plans.
Automation for pattern library creation and in-design optimization
NASA Astrophysics Data System (ADS)
Deng, Rock; Zou, Elain; Hong, Sid; Wang, Jinyan; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua; Huang, Jason
2015-03-01
Semiconductor manufacturing technologies are becoming increasingly complex with every passing node. Newer technology nodes are pushing the limits of optical lithography and requiring multiple exposures with exotic material stacks for each critical layer. All of this added complexity usually amounts to further restrictions on what can be designed, and designs must be checked against all these restrictions in the verification and sign-off stages. Design rules are intended to capture all the manufacturing limitations such that yield can be maximized for any design adhering to all the rules. Most manufacturing steps employ some sort of model-based simulation that characterizes the behavior of each step. The lithography models play a very large part in the overall yield and design restrictions in patterning. However, lithography models are not practical to run during design creation because of their prohibitively slow run times, and the models are usually not given to foundry customers because of the confidential and sensitive nature of every foundry's processes. The design layout locations where a model flags unacceptable simulated results can instead be used to define pattern rules which can be shared with customers. With advanced technology nodes we see a large growth of pattern-based rules. This is because pattern matching is very fast, while the rules themselves can be very complex to describe in a standard DRC language. Therefore, the patterns are left either as pattern layout clips or abstracted into a pattern-like syntax which a pattern matcher can use directly. The patterns themselves can be multi-layered, with "fuzzy" designations such that groups of similar patterns can be found using one description. The pattern matcher is often integrated with a DRC tool such that verification and signoff can be done in one step. The patterns can be layout constructs that are "forbidden", "waived", or simply low-yielding in nature, and they can have remedies built in so that fixing happens either automatically or in a guided manner. Building a comprehensive library of patterns is a very difficult task, especially when a new technology node is being developed or the process keeps changing. The main dilemma is not having enough representative layouts to use for model simulation where pattern locations can be marked and extracted. This paper presents an automatic pattern library creation flow that uses a few known yield-detractor patterns to systematically expand the pattern library and generate optimized patterns. We also look at the specific fixing hints, in terms of edge movements or additive and subtractive changes, needed during optimization. Optimization is shown for both digital physical implementation and custom design methods.
Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda
2012-01-01
Background Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775
A logical model of cooperating rule-based systems
NASA Technical Reports Server (NTRS)
Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.
1989-01-01
A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meshkati, N.; Buller, B.J.; Azadeh, M.A.
1995-04-01
The goal of this research is threefold: (1) to use the Skill-, Rule-, and Knowledge-based levels of cognitive control (the SRK framework) to develop an integrated information processing conceptual framework for the integration of workstation, job, and team design; (2) to evaluate the user interface component of this framework, the Ecological display; and (3) to analyze the effect of operators' individual information processing behavior and decision styles on the handling of plant disturbances, as well as their performance on, and preference for, Traditional and Ecological user interfaces. A series of studies were conducted. In Part I, a computer simulation model and a mathematical model were developed. In Part II, an experiment was designed and conducted at the EBR-II plant of the Argonne National Laboratory-West in Idaho Falls, Idaho. It is concluded that: the integrated SRK-based information processing model for control room operations is superior to the conventional rule-based model; operators' individual decision styles and the combination of their styles play a significant role in the effective handling of nuclear power plant disturbances; use of the Ecological interface results in significantly more accurate event diagnosis and recall of various plant parameters, faster response to plant transients, and higher ratings of subject preference; and operators' decision styles affect both their performance on and their preference for the Ecological interface.
Rule-based navigation control design for autonomous flight
NASA Astrophysics Data System (ADS)
Contreras, Hugo; Bassi, Danilo
2008-04-01
This article describes a navigation control system design based on a set of rules for following a desired trajectory. The full aircraft control considered here comprises a low-level stability control loop, based on a classic PID controller, and a higher-level navigation layer whose main job is to exercise lateral (course) control and altitude control while tracking the desired trajectory. The rules and PID gains were adjusted systematically according to flight simulation results. In spite of its simplicity, the rule-based navigation control proved robust, even under large perturbations such as crosswinds.
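As a sketch of the low-level loop described above, here is a minimal discrete PID controller; the gains, time step, and the heading-to-bank usage are placeholder assumptions to be tuned against simulation, as the authors did.

```python
class PID:
    """Discrete PID controller with rectangular integration."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt                 # accumulate I term
        deriv = (err - self.prev_err) / self.dt        # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Usage: a course-hold loop commanding bank angle from heading error
heading_pid = PID(kp=1.2, ki=0.05, kd=0.4, dt=0.02)
bank_cmd = heading_pid.update(setpoint=90.0, measurement=84.0)
```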
Simulation of data safety components for corporative systems
NASA Astrophysics Data System (ADS)
Yaremko, Svetlana A.; Kuzmina, Elena M.; Savchuk, Tamara O.; Krivonosov, Valeriy E.; Smolarz, Andrzej; Arman, Abenov; Smailova, Saule; Kalizhanova, Aliya
2017-08-01
The article investigates the design of data safety components for corporations by means of mathematical simulation and modern information technologies. Threat ranks are simulated based on defined values of the data components. Safety policy rules for corporate information systems are presented, and ways of implementing these rules are proposed on the basis of the stated conditions and the appropriate class of valuable data protection.
3D morphology-based clustering and simulation of human pyramidal cell dendritic spines.
Luengo-Sanchez, Sergio; Fernaud-Espinosa, Isabel; Bielza, Concha; Benavides-Piccione, Ruth; Larrañaga, Pedro; DeFelipe, Javier
2018-06-13
The dendritic spines of pyramidal neurons are the targets of most excitatory synapses in the cerebral cortex. They have a wide variety of morphologies, and their morphology appears to be critical from the functional point of view. To further characterize dendritic spine geometry, we used over 7,000 individually 3D-reconstructed dendritic spines from human cortical pyramidal neurons to group dendritic spines using model-based clustering. This approach uncovered six separate groups of human dendritic spines. To better understand the differences between these groups, the discriminative characteristics of each group were identified as a set of rules. Model-based clustering was also useful for simulating accurate 3D virtual representations of spines that matched the morphological definitions of each cluster. This mathematical approach could provide a useful tool for theoretical predictions of the functional features of human pyramidal neurons based on the morphology of dendritic spines.
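Model-based clustering of this kind is commonly done with Gaussian mixtures; the sketch below, assuming scikit-learn and random stand-in feature vectors, selects the number of clusters by BIC and then samples synthetic "virtual spine" feature vectors from the fitted model. The feature matrix and library choice are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))  # stand-in for (n_spines, n_morphology_features)

# Fit mixtures with 2..9 components and keep the one with the lowest BIC
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(2, 10)),
    key=lambda m: m.bic(X),
)
labels = best.predict(X)       # cluster assignment per spine
virtual, _ = best.sample(10)   # simulate new "virtual spine" feature vectors
```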
Obi, Andrea; Chung, Jennifer; Chen, Ryan; Lin, Wandi; Sun, Siyuan; Pozehl, William; Cohn, Amy M; Daskin, Mark S; Seagull, F Jacob; Reddy, Rishindra M
2015-11-01
Certain operative cases occur unpredictably and/or have long operative times, creating a conflict between Accreditation Council for Graduate Medical Education (ACGME) rules and an adequate training experience. A ProModel-based simulation was developed from historical data. Probabilistic distributions of operative time were calculated and combined with an ACGME-compliant call schedule. For the advanced surgical cases modeled (cardiothoracic transplants), the 80-hour limit was violated in 6.07% of simulations and the minimum-days-off rule in 22.50%. There was a 36% chance of failure to fulfill either minimum case requirement (heart or lung) despite adequate volume. The variable nature of emergency cases inevitably leads to work hour violations under ACGME regulations, and unpredictable cases mandate higher operative volume to ensure adequate caseloads. Publicly available simulation technology provides a valuable avenue for identifying the adequacy of case volumes for trainees in both the elective and emergency settings. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Müller, Ruben; Schütze, Niels
2014-05-01
Water resources systems with reservoirs are expected to be sensitive to climate change. Assessment studies that analyze the impact of climate change on the performance of reservoirs can be divided into two groups: (1) studies that simulate operation under projected inflows with the current set of operational rules; because the rules are not adapted, the future performance of these reservoirs can be underestimated and the impact overestimated; and (2) studies that optimize the operational rules to best adapt the system to the projected conditions before assessing the impact. The latter allows future performance to be estimated more realistically, and adaptation strategies based on new operation rules are available if required. Multi-purpose reservoirs serve various, often conflicting functions. If all functions cannot be served simultaneously at a maximum level, an effective compromise between the multiple objectives of the reservoir operation has to be provided. Yet under climate change, the historically preferred compromise may no longer be the most suitable one in the future. Therefore, a multi-objective climate change impact assessment approach for multi-purpose multi-reservoir systems is proposed in this study. Projected inflows are provided in a first step using a physically based rainfall-runoff model. In a second step, a time series model is applied to generate long-term inflow time series. Finally, the long-term inflow series are used as driving variables for a simulation-based multi-objective optimization of the reservoir system in order to derive optimal operation rules. As a result, the adapted Pareto-optimal set of diverse best-compromise solutions can be presented to the decision maker to assist in assessing climate change adaptation measures with respect to the future performance of the multi-purpose reservoir system. The approach is tested on a multi-purpose multi-reservoir system in a mountainous catchment in Germany. A climate change assessment is performed for climate change scenarios based on the SRES emission scenarios A1B, B1, and A2 for a set of statistically downscaled meteorological data. The future performance of the multi-purpose multi-reservoir system is quantified, and possible intensifications of trade-offs between management goals or reservoir uses are shown.
Modeling and Control of Airport Queueing Dynamics under Severe Flow Restrictions
NASA Technical Reports Server (NTRS)
Carr, Francis; Evans, Antony; Clarke, John-Paul; Deron, Eric
2003-01-01
Based on field observations and interviews with controllers at BOS and EWR, we identify the closure of local departure fixes as the most severe class of airport departure restrictions. A set of simple queueing dynamics and traffic rules is developed to model departure traffic under such restrictions. The validity of the proposed model is tested via Monte Carlo simulation against 10 hours of actual operations data collected during a case study at EWR on June 29, 2000. In general, the model successfully reproduces the aggregate departure congestion. An analysis of the average error over 40 simulation runs indicates that flow-rate restrictions also significantly impact departure traffic; work is underway to capture these effects. Several applications and what-if scenarios are discussed for future evaluation using the calibrated model.
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
Analysis of traffic congestion induced by the work zone
NASA Astrophysics Data System (ADS)
Fei, L.; Zhu, H. B.; Han, X. L.
2016-05-01
Based on the cellular automata approach, a meticulous two-lane cellular automaton model is proposed in which the difference in driving behavior, and the difference between vehicles' accelerations in the moving state and the starting state, are taken into account. Furthermore, the vehicles' motion is refined by using small cells one meter in length. Together with a proposed traffic management measure, a two-lane highway traffic model containing a work zone is then presented, in which the road is divided into a normal area, a merging area, and the work zone. Vehicles in the different areas move forward according to different lane-changing rules and position-updating rules. The simulations show that when the density is small, the length of the cluster in front of the work zone increases as the merging probability decreases. A suitable merging length and an appropriate speed limit value are then recommended. The simulation results, presented as a speed-flow diagram, are in good agreement with empirical data, indicating that the presented model is efficient and can partially reflect real traffic. The results may be meaningful for traffic optimization and road construction management.
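The update skeleton of such a model can be sketched in a few lines; the single-lane, Nagel-Schreckenberg-style version below with a speed-limited work-zone segment is only an illustration, since the paper's model is a refined two-lane variant with one-meter cells and separate moving/starting accelerations.

```python
import numpy as np

def step(pos, vel, L, vmax, zone, v_zone, p_slow, rng):
    """One parallel update of a single-lane ring road of L cells with a
    speed-limited work zone on [zone[0], zone[1])."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L           # headway to car ahead
    for i in range(len(pos)):
        vlim = v_zone if zone[0] <= pos[i] < zone[1] else vmax
        vel[i] = min(vel[i] + 1, vlim, gaps[i])       # accelerate, then brake
        if vel[i] > 0 and rng.random() < p_slow:      # random slowdown
            vel[i] -= 1
    return (pos + vel) % L, vel

# Usage: 30 cars on a 1000-cell ring with a work zone between cells 400-500
rng = np.random.default_rng(0)
pos = rng.choice(1000, size=30, replace=False)
vel = np.zeros(30, dtype=int)
pos, vel = step(pos, vel, 1000, vmax=5, zone=(400, 500), v_zone=2,
                p_slow=0.3, rng=rng)
```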
Biologically-inspired hexapod robot design and simulation
NASA Technical Reports Server (NTRS)
Espenschied, Kenneth S.; Quinn, Roger D.
1994-01-01
The design and construction of a biologically-inspired hexapod robot is presented. A previously developed simulation is modified to include models of the DC drive motors, the motor driver circuits and their transmissions. The application of this simulation to the design and development of the robot is discussed. The mechanisms thought to be responsible for the leg coordination of the walking stick insect were previously applied to control the straight-line locomotion of a robot. We generalized these rules for a robot walking on a plane. This biologically-inspired control strategy is used to control the robot in simulation. Numerical results show that the general body motion and performance of the simulated robot are similar to those of the robot, based on our preliminary experimental results.
Model-based multiple patterning layout decomposition
NASA Astrophysics Data System (ADS)
Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.
2015-10-01
As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempt to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint and simply decomposes polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013 [1]. However, that algorithm [1] relies on simplified assumptions about the optical simulation model, and therefore its usage on real layouts is limited. Recently, AMSL [2] also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources and may lead to sub-optimal solutions. The approach [2] also potentially generates too many stitches. In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G in the standard graph-coloring formulation, we build an expanded graph H where each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach can obtain optimal decomposition results efficiently. Our model-based solution can achieve a practical mask design which significantly improves the lithography quality on the wafer compared to rule-based decomposition.
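The rule-based baseline that the paper argues against can be stated in a few lines: build a conflict graph from the minimum-distance rule and try to k-color it. A minimal sketch, using hypothetical point features as crude stand-ins for real polygon spacing checks:

```python
import itertools, math

def conflict_graph(features, dmin):
    """Edge between two features iff their center distance is below dmin
    (a stand-in for the real polygon-to-polygon spacing check)."""
    edges = {i: set() for i in range(len(features))}
    for i, j in itertools.combinations(range(len(features)), 2):
        if math.dist(features[i], features[j]) < dmin:
            edges[i].add(j)
            edges[j].add(i)
    return edges

def greedy_color(edges, k):
    """Assign each vertex the lowest mask index unused by its neighbors;
    returns None if more than k masks would be needed."""
    color = {}
    for v in sorted(edges, key=lambda v: -len(edges[v])):  # high degree first
        used = {color[u] for u in edges[v] if u in color}
        free = [c for c in range(k) if c not in used]
        if not free:
            return None
        color[v] = free[0]
    return color

centers = [(0, 0), (30, 0), (60, 0), (15, 20)]   # hypothetical feature centers
g = conflict_graph(centers, dmin=35)
print("DPL (k=2):", greedy_color(g, 2))   # None: the close triangle needs 3 masks
print("TPL (k=3):", greedy_color(g, 3))
```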
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics
Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows the dynamics of a biologically inspired neural network to be steered. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal conditions sufficient for LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system. PMID:28158309
A two-stage stochastic rule-based model to determine pre-assembly buffer content
NASA Astrophysics Data System (ADS)
Gunay, Elif Elcin; Kula, Ufuk
2018-01-01
This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, and the second stage recovers the scrambled sequence with respect to predefined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms that of the heuristic model; (v) as expected, the rule-based model holds more inventory than the optimization model.
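A toy illustration of the sample average approximation idea used here (not the paper's model): the first stage fixes the spare-vehicle count, the second stage pays a penalty for defects the buffer cannot cover, and SAA picks the count minimizing the scenario-averaged cost. All cost parameters and the defect process are hypothetical:

```python
import random

def saa_spare_vehicles(defect_rate=0.08, batch=100, holding=1.0,
                       penalty=6.0, n_scenarios=2000, seed=7):
    """Sample average approximation for a toy two-stage problem:
    stage 1 fixes the spare-vehicle count s; stage 2 pays a penalty
    for every defective vehicle the buffer cannot cover."""
    rng = random.Random(seed)
    # Stage-2 uncertainty: number of paint defects per batch
    scenarios = [sum(rng.random() < defect_rate for _ in range(batch))
                 for _ in range(n_scenarios)]
    def avg_cost(s):
        shortfall = sum(max(0, d - s) for d in scenarios) / n_scenarios
        return holding * s + penalty * shortfall
    return min(range(batch + 1), key=avg_cost)

print("SAA spare-vehicle level:", saa_spare_vehicles())
```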
NASA Astrophysics Data System (ADS)
Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle
2017-10-01
Single-phase porous materials contain multiple components that intermingle up to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (the hybrid agent) and a new category of rules (intra-agent rules) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they change. As an example, the hybrid agent, under a one-dimensional cellular automata formalism in a two-dimensional domain, is used to generate patterns that demonstrate striking morphological and characteristic similarities to porous saturated single-phase structures, where each agent of the "structure" carries a semi-permeability property and consists of both fluid and solid in space at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol for simulating complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.
Simulation of Nonisothermal Consolidation of Saturated Soils Based on a Thermodynamic Model
Zhang, Zhichao; Cheng, Xiaohui
2013-01-01
Based on nonequilibrium thermodynamics, a thermo-hydro-mechanical coupling model for saturated soils is established, including a constitutive model without such concepts as a yield surface and a flow rule. An elastic potential energy density function is defined to derive a hyperelastic relation among the effective stress, the elastic strain, and the dry density. The classical linear non-equilibrium thermodynamic theory is employed to quantitatively describe unrecoverable energy processes, such as the development of nonelastic deformation in materials, through the concepts of dissipative force and dissipative flow. In particular, the granular fluctuation, which represents the kinetic-energy and elastic-potential-energy fluctuations at the particulate scale caused by the irregular mutual movement between particles, is introduced into the model and described by the concept of granular entropy. Using this model, the nonisothermal consolidation of saturated clays under cyclic thermal loading is simulated to validate the model. The results show that the nonisothermal consolidation is heavily OCR dependent and unrecoverable. PMID:23983623
NASA Astrophysics Data System (ADS)
Nieten, Joseph L.; Burke, Roger
1993-03-01
The system diagnostic builder (SDB) is an automated knowledge acquisition tool using state-of-the-art artificial intelligence (AI) technologies. The SDB uses an inductive machine learning technique to generate rules from data sets that are classified by a subject matter expert (SME). Thus, data is captured from the subject system, classified by an expert, and used to drive the rule generation process. These rule-bases are used to represent the observable behavior of the subject system, and to represent knowledge about this system. The rule-bases can be used in any knowledge based system which monitors or controls a physical system or simulation. The SDB has demonstrated the utility of using inductive machine learning technology to generate reliable knowledge bases. In fact, we have discovered that the knowledge captured by the SDB can be used in any number of applications. For example, the knowledge bases captured from the SMS can be used as black box simulations by intelligent computer aided training devices. We can also use the SDB to construct knowledge bases for the process control industry, such as chemical production, or oil and gas production. These knowledge bases can be used in automated advisory systems to ensure safety, productivity, and consistency.
Simulation-Based Rule Generation Considering Readability
Yahagi, H.; Shimizu, S.; Ogata, T.; Hara, T.; Ota, J.
2015-01-01
A rule generation method is proposed for an aircraft control problem in an airport. Designing appropriate rules for the motion coordination of taxiing aircraft in the airport, which is conducted by ground control, is important. However, previous studies did not consider the readability of rules, which matters because the rules must be operated and maintained by humans. Therefore, in this study, using an indicator of readability, we propose a rule generation method based on parallel algorithm discovery and orchestration (PADO). Applied to the aircraft control problem, the proposed algorithm generates more readable and more robust rules and is found to be superior to previous methods. PMID:27347501
Dynamic Simulation of Community Crime and Crime-Reporting Behavior
NASA Astrophysics Data System (ADS)
Yonas, Michael A.; Borrebach, Jeffrey D.; Burke, Jessica G.; Brown, Shawn T.; Philp, Katherine D.; Burke, Donald S.; Grefenstette, John J.
An agent-based model was developed to explore the effectiveness of possible interventions to reduce neighborhood crime and violence. Both offenders and non-offenders (or citizens) were modeled as agents living in neighborhoods, with a set of rules controlling changes in behavior based on individual experience. Offenders may become more or less inclined to actively commit criminal offenses, depending on the behavior of the neighborhood residents and other nearby offenders, and on their arrest experience. In turn, citizens may become more or less inclined to report crimes, based on the observed prevalence of criminal activity within their neighborhood. This paper describes the basic design and dynamics of the model, and how such models might be used to investigate practical crime intervention programs.
NASA Astrophysics Data System (ADS)
Politikos, D.; Somarakis, S.; Tsiaras, K. P.; Giannoulaki, M.; Petihakis, G.; Machias, A.; Triantafyllou, G.
2015-11-01
A 3-D full life cycle population model for the North Aegean Sea (NAS) anchovy stock is presented. The model is two-way coupled with a hydrodynamic-biogeochemical model (POM-ERSEM). The anchovy life span is divided into seven life stages/age classes. Embryos and early larvae are passive particles, but subsequent stages exhibit active horizontal movements based on specific rules. A bioenergetics model simulates growth in both the larval and the juvenile/adult stages, while the microzooplankton and mesozooplankton fields of the biogeochemical model provide the food for fish consumption. The super-individual approach is adopted for the representation of the anchovy population. A dynamic egg production module, with an energy allocation algorithm, is embedded in the bioenergetics equation and produces eggs based on a new conceptual model for anchovy vitellogenesis. A model simulation for the period 2003-2006 with realistic initial conditions reproduced well the magnitude of population biomass and daily egg production estimated from acoustic and daily egg production method (DEPM) surveys carried out in the NAS during June 2003-2006. Model-simulated adult and egg habitats were also in good agreement with the observed spatial distributions of acoustic biomass and egg abundance in June. Sensitivity simulations were performed to investigate the effect of different formulations adopted for key processes, such as reproduction and movement. The effect of the anchovy population on plankton dynamics was also investigated by comparing simulations adopting a two-way or a one-way coupling of the fish with the biogeochemical model.
A novel methodology for building robust design rules by using design based metrology (DBM)
NASA Astrophysics Data System (ADS)
Lee, Myeongdong; Choi, Seiryung; Choi, Jinwoo; Kim, Jeahyun; Sung, Hyunju; Yeo, Hyunyoung; Shim, Myoungseob; Jin, Gyoyoung; Chung, Eunseung; Roh, Yonghan
2013-03-01
This paper addresses a methodology for building robust design rules by using design based metrology (DBM). The conventional method for building design rules has relied on a simulation tool and a simple-pattern spider mask. At the early stage of a device, the accuracy of the simulation tool is poor, and the evaluation of the simple-pattern spider mask is rather subjective because it depends on the experiential judgment of an engineer. In this work, we designed a huge number of pattern situations, including various 1D and 2D design structures. To overcome the difficulty of inspecting many types of patterns, we introduced the Design Based Metrology (DBM) of Nano Geometry Research, Inc., with which these mass patterns could be inspected at high speed. We also carried out quantitative analysis on PWQ silicon data to estimate process variability. Our methodology demonstrates high speed and accuracy for building design rules. All of the test patterns were inspected within a few hours. Mass silicon data were handled by statistical processing rather than personal judgment. From the results, robust design rules are successfully verified and extracted. Finally, we found that our methodology is appropriate for building robust design rules.
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of the universe of discourse, the content of the forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
An Interval Type-2 Neural Fuzzy System for Online System Identification and Feature Elimination.
Lin, Chin-Teng; Pal, Nikhil R; Wu, Shang-Lin; Liu, Yu-Ting; Lin, Yang-Yin
2015-07-01
We propose an integrated mechanism for discarding derogatory features and extracting fuzzy rules based on an interval type-2 neural fuzzy system (NFS); in fact, it is a more general scheme that can discard bad features, irrelevant antecedent clauses, and even irrelevant rules. High-dimensional input variables and a large number of rules not only increase the computational complexity of NFSs but also reduce their interpretability. Therefore, a mechanism for the simultaneous extraction of fuzzy rules and reduction of the impact of (or elimination of) inferior features is necessary. The proposed approach, namely an interval type-2 Neural Fuzzy System for online System Identification and Feature Elimination (IT2NFS-SIFE), uses type-2 fuzzy sets to model uncertainties associated with information and data in designing the knowledge base. The consequent part of the IT2NFS-SIFE is of Takagi-Sugeno-Kang type with interval weights. The IT2NFS-SIFE possesses a self-evolving property that can automatically generate fuzzy rules. Poor features can be discarded through the concept of a membership modulator. The antecedent and modulator weights are learned using a gradient descent algorithm. The consequent part weights are tuned via the rule-ordered Kalman filter algorithm to enhance learning effectiveness. Simulation results show that IT2NFS-SIFE not only simplifies the system architecture by eliminating derogatory/irrelevant antecedent clauses, rules, and features but also maintains excellent performance.
Grid occupancy estimation for environment perception based on belief functions and PCR6
NASA Astrophysics Data System (ADS)
Moras, Julien; Dezert, Jean; Pannetier, Benjamin
2015-05-01
In this contribution, we propose to improve the grid-map occupancy estimation method developed so far, which is based on belief function modeling and the classical Dempster's rule of combination. A grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the security (obstacle avoidance) of next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the area surrounding the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment in which the robot moves. So far, the estimation and the grid-map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework with an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of the available information is low and the sources of information appear to conflict. To improve the performance of the grid-map estimation, we propose in this paper to replace Dempster's rule of combination with the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
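To see what replacing Dempster's rule by PCR6 means operationally, the sketch below combines two basic belief assignments over a binary occupancy frame {occupied, free} with ignorance 'OF'; for two sources PCR6 coincides with PCR5, and the sensor mass values are hypothetical:

```python
from itertools import product

def pcr6_two_sources(m1, m2):
    """Combine two basic belief assignments over the frame {'O','F'}
    (occupied / free), with 'OF' the ignorance. Conflicting products
    m1(X)m2(Y) with X and Y disjoint are redistributed back to X and Y
    proportionally to m1(X) and m2(Y)."""
    inter = {('O','O'):'O', ('F','F'):'F', ('OF','OF'):'OF',
             ('O','OF'):'O', ('OF','O'):'O',
             ('F','OF'):'F', ('OF','F'):'F',
             ('O','F'):None, ('F','O'):None}
    m = {'O': 0.0, 'F': 0.0, 'OF': 0.0}
    for X, Y in product(m1, m2):
        z = inter[(X, Y)]
        prod = m1[X] * m2[Y]
        if z is not None:
            m[z] += prod                      # conjunctive (non-conflicting) part
        else:
            s = m1[X] + m2[Y]                 # proportional redistribution
            m[X] += prod * m1[X] / s
            m[Y] += prod * m2[Y] / s
    return m

m1 = {'O': 0.6, 'F': 0.1, 'OF': 0.3}   # e.g. LIDAR inverse sensor model
m2 = {'O': 0.2, 'F': 0.6, 'OF': 0.2}   # a second, conflicting scan
print(pcr6_two_sources(m1, m2))        # masses still sum to 1, no renormalization
```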
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, Paul Andrew; Loomis, Eric Nicholas; Merritt, Elizabeth Catherine
Accurate modeling of a mixed-material equation of state (EOS) at high pressures (~1 to 100 Mbar) is critical for simulating inertial confinement fusion and high-energy-density systems. This paper presents a comparison of two mixing-rule models against experiment to assess their applicability in this regime. The shock velocities of polystyrene, aluminum, and nickel aluminide (NiAl) were measured at a shock pressure of ~3 TPa (~30 Mbar) on the Omega EP laser facility (Laboratory for Laser Energetics, University of Rochester, New York). The resultant shock velocities were compared to those derived from the RAGE (Eulerian) hydrodynamics code to validate various mixing rules used to construct an EOS for NiAl. The simulated shock transit time through the sample (Al or NiAl) matched the measurements to within the ±45 ps measurement uncertainty. The law of partial volumes (Amagat) and the law of partial pressures (Dalton) mixture rules provided equally good matches to the NiAl shock data. Other studies showed that the Amagat mixing rule is superior, and we recommend it since our results also show a satisfactory match. In conclusion, the comparable quality of the simulation to data for the Al and NiAl samples implies that a mixture rule can supply an EOS for plasma mixtures with adequate fidelity for simulations where mixing takes place, such as advective mix in an Eulerian code or when two materials are mixed together via diffusion, turbulence, or other physical processes.
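The two mixing rules compared above are simple enough to sketch. Assuming each component's EOS is known, Amagat adds specific volumes at a common pressure while Dalton adds pressures at a common volume; the Murnaghan-form component curves and all constants below are illustrative toys, not the paper's fitted EOS:

```python
def amagat_density(P, w, rho_of_P):
    """Law of partial volumes: at a common pressure P, the mixture
    specific volume is the mass-fraction-weighted sum of component volumes."""
    return 1.0 / sum(wi / rho(P) for wi, rho in zip(w, rho_of_P))

def dalton_pressure(v, w, P_of_rho):
    """Law of partial pressures: component pressures add when each
    component fills the whole volume at its partial density wi / v."""
    return sum(Pi(wi / v) for wi, Pi in zip(w, P_of_rho))

# Toy Murnaghan-form component curves, rho0 in g/cc, B0 in Mbar;
# the constants are illustrative placeholders, not fitted values.
rho_Ni = lambda P: 8.90 * (1 + 4.9 * P / 1.86) ** (1 / 4.9)
rho_Al = lambda P: 2.70 * (1 + 4.5 * P / 0.76) ** (1 / 4.5)
w_NiAl = (0.685, 0.315)  # approximate mass fractions of Ni and Al in NiAl
print("NiAl density at 30 Mbar (Amagat):",
      round(amagat_density(30.0, w_NiAl, (rho_Ni, rho_Al)), 2), "g/cc")
```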
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Macian-Sorribes, Hector; María Benlliure-Moreno, Jose; Fullana-Montoro, Juan
2015-04-01
Water resources systems in areas with a strong tradition of water use are complex to manage because of the high number of constraints that overlap in time and space, creating a complicated framework in which past, present and future collide. In addition, it is usual to find "hidden constraints" in system operations, which condition operation decisions while being unnoticed by anyone but the river managers and users. Becoming aware of those hidden constraints usually requires years of experience and a degree of involvement in the system's management operations normally beyond the possibilities of technicians. However, their impact on the management decisions is strongly imprinted in the available historical data records. The purpose of this contribution is to present a methodology capable of assessing operating rules in complex water resources systems by combining historical records and expert criteria. The two sources are coupled using fuzzy logic. The stages of the procedure are: 1) organize preliminary expert-technician meetings to let the former explain how they manage the system; 2) set up a fuzzy rule-based (FRB) system structure according to the way the system is managed; 3) use the available historical records to estimate the inputs' fuzzy numbers, to assign preliminary output values to the FRB rules, and to train and validate these rules; 4) organize expert-technician meetings to discuss the rule structure and the quantification of the inputs, returning if required to the second stage; 5) once the FRB structure is accepted, refine and complete its output values with the aid of the experts through meetings, workshops or surveys; 6) combine the FRB with a decision support system (DSS) to simulate the effect of those management decisions; 7) compare its results with the ones offered by the historical records and/or simulation or optimization models; and 8) discuss the model performance with the stakeholders, returning, if required, to the fifth or the second stage. The proposed methodology has been applied to the Jucar River Basin (Spain). This basin has 3 reservoirs, 4 headwaters, 11 demands and 5 environmental flows, which together form a complex constraint set. After the preliminary meetings, one 81-rule FRB was created, using as inputs the system state variables at the start of the hydrologic year, and as outputs the target reservoir release schedule. The inputs' fuzzy numbers were estimated jointly using surveys. Fifteen years of historical records were used to train the system's outputs. The obtained FRB was then refined during additional expert-technician meetings. After that, the resulting FRB was introduced into a DSS simulating the effect of those management rules for different hydrological conditions. Three additional FRBs were created using: 1) exclusively the historical records; 2) a stochastic optimization model; and 3) a deterministic optimization model. The results proved to be consistent with expectations, with the performance of the stakeholders' FRB located between the data-driven simulation and the stochastic optimization FRBs, and they reflect the stakeholders' major goals and concerns about the river management. ACKNOWLEDGEMENT: This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) funds.
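A minimal sketch of the kind of inference an FRB performs, assuming triangular membership functions, normalized inputs, and hypothetical rule consequents (the real system uses 81 rules trained on the historical record):

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b (a == b or b == c
    gives a shoulder)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def frb_release(storage, inflow):
    """Weighted-average inference over a tiny rule base:
    IF storage is S_i AND inflow is I_j THEN release = R_ij."""
    s_sets = {'low': (0, 0, 0.5), 'mid': (0.2, 0.5, 0.8), 'high': (0.5, 1, 1)}
    i_sets = {'dry': (0, 0, 0.6), 'wet': (0.4, 1, 1)}
    # Consequent releases (hm3/month) per (storage, inflow) class -- hypothetical
    R = {('low', 'dry'): 4, ('low', 'wet'): 8, ('mid', 'dry'): 8,
         ('mid', 'wet'): 12, ('high', 'dry'): 12, ('high', 'wet'): 16}
    num = den = 0.0
    for (s_lab, i_lab), r in R.items():
        w = min(tri(storage, *s_sets[s_lab]), tri(inflow, *i_sets[i_lab]))
        num += w * r
        den += w
    return num / den if den else 0.0

print(frb_release(storage=0.35, inflow=0.7))  # normalized inputs in [0, 1]
```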
NASA Technical Reports Server (NTRS)
Padgett, Mary L. (Editor)
1993-01-01
The present conference discusses such neural network (NN) related topics as their current development status, NN architectures, NN learning rules, NN optimization methods, NN temporal models, NN control methods, NN pattern recognition systems and applications, biological and biomedical applications of NNs, VLSI design techniques for NNs, NN systems simulation, fuzzy logic, and genetic algorithms. Attention is given to missileborne integrated NNs, adaptive-mixture NNs, implementable learning rules, an NN simulator for travelling salesman problem solutions, similarity-based forecasting, NN control of hypersonic aircraft takeoff, NN control of the Space Shuttle arm, an adaptive NN robot manipulator controller, a synthetic approach to digital filtering, NNs for speech analysis, adaptive spline networks, an anticipatory fuzzy logic controller, and encoding operations for fuzzy associative memories.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blackenhorn, Gunther
2015-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials gain increased usage in the aerospace and automotive industries. While several composite material models are currently available within commercial transient dynamic finite element codes, several features whose inclusion could substantially enhance the predictive capability of impact simulations have been identified as lacking in these models. One specific desired feature is the incorporation of both plasticity and damage within the material model. Another relates to using experimentally based tabulated stress-strain input to define the evolution of plasticity and damage, as opposed to specifying discrete input properties (such as modulus and strength) and employing analytical functions to track the response of the material. To begin to address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed for implementation within the commercial code LS-DYNA. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain-hardening-based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined from tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. The effective plastic strain is computed by using the non-associative flow rule in combination with appropriate numerical methods. To compute the evolution of damage, a strain-equivalent semi-coupled formulation is used, in which a load in one direction results in a stiffness reduction in multiple coordinate directions. A specific laminated composite is examined to demonstrate the process of characterizing and analyzing the response of a composite using the developed model.
Condensation in an Economic Model with Brand Competition
NASA Astrophysics Data System (ADS)
Casillas, L.; Espinosa, F. J.; Huerta-Quintanilla, R.; Rodriguez-Achach, M.
We present a linear agent-based model of brand competition. Each agent belongs to one of two brands and interacts with its nearest neighbors. In the process, the agent can decide to change to the other brand if the move is beneficial. The numerical simulations show that the system always condenses into a state in which all agents belong to a single brand. We study the condensation times for different parameters of the model and the influence of different mechanisms to avoid condensation, such as anti-monopoly rules and brand fidelity.
Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R
2014-01-01
Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier–Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. PMID:24664988
Mitigating randomness of consumer preferences under certain conditional choices
NASA Astrophysics Data System (ADS)
Bothos, John M. A.; Thanos, Konstantinos-Georgios; Papadopoulou, Eirini; Daveas, Stelios; Thomopoulos, Stelios C. A.
2017-05-01
Agent-based crowd behaviour constitutes a significant field of research that has drawn a lot of attention in recent years. Agent-based crowd simulation techniques have been used extensively to forecast the behaviour of larger or smaller crowds under given conditions, influenced by specific cognition models and behavioural rules and norms imposed from the beginning. Our research employs conditional event algebra, statistical methodology and agent-based crowd simulation techniques to develop a behavioural econometric model of the selection of a certain economic behaviour by a consumer who faces a spectrum of potential choices when moving and acting in a multiplex mall. More specifically, we analyse the influence of demographic, economic, social and cultural factors on the economic behaviour of a certain individual, and we then link this behaviour to the general behaviour of crowds of consumers in multiplex malls using agent-based crowd simulation techniques. We then run our model using generalized least squares and maximum likelihood methods to obtain the most probable forecast estimates of the agent's behaviour. Our model is indicative of the formation of consumers' choice spectra in multiplex malls under the condition of predefined preferences and can be used as a guide for further research in this area.
Agent-based modeling of the immune system: NetLogo, a promising framework.
Chiacchio, Ferdinando; Pennisi, Marzio; Russo, Giulia; Motta, Santo; Pappalardo, Francesco
2014-01-01
Several components that interact with each other to produce a complex and, in some cases, unexpected behavior represent one of the main and most fascinating features of the mammalian immune system. Agent-based modeling and cellular automata belong to a class of discrete mathematical approaches in which entities (agents) sense local information and undertake actions over time according to predefined rules. The strength of this approach is the appearance of a global behavior that emerges from interactions among agents. This behavior is unpredictable, as it does not follow linear rules. Many works investigate the immune system with agent-based modeling and cellular automata; they have shown the ability to see clearly and intuitively into the nature of immunological processes. NetLogo is a multiagent programming language and modeling environment for simulating complex phenomena. It is designed for both research and education and is used across a wide range of disciplines and education levels. In this paper, we summarize NetLogo applications to immunology and, particularly, how this framework can help in the development and formulation of hypotheses that might drive further experimental investigations of disease mechanisms.
Mesoscopic model for binary fluids
NASA Astrophysics Data System (ADS)
Echeverria, C.; Tucci, K.; Alvarez-Llamoza, O.; Orozco-Guillén, E. E.; Morales, M.; Cosenza, M. G.
2017-10-01
We propose a model for studying binary fluids based on the mesoscopic molecular simulation technique known as multiparticle collision, where the space and state variables are continuous, and time is discrete. We include a repulsion rule to simulate segregation processes that does not require calculation of the interaction forces between particles, so binary fluids can be described on a mesoscopic scale. The model is conceptually simple and computationally efficient; it maintains Galilean invariance and conserves the mass and energy in the system at the micro- and macro-scale, whereas momentum is conserved globally. For a wide range of temperatures and densities, the model yields results in good agreement with the known properties of binary fluids, such as the density profile, interface width, phase separation, and phase growth. We also apply the model to the study of binary fluids in crowded environments with consistent results.
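The collision step at the heart of multiparticle-collision dynamics can be sketched compactly. The version below is the standard 2D stochastic-rotation variant, without the paper's repulsion rule for segregation; cell size, rotation angle and particle counts are illustrative:

```python
import numpy as np

def srd_collision(pos, vel, box, cell=1.0, alpha=np.radians(130), rng=None):
    """Multiparticle-collision step in 2D: sort particles into cells
    (with a random grid shift for Galilean invariance) and rotate each
    velocity around the cell-mean velocity by +/- alpha."""
    rng = rng or np.random.default_rng()
    shift = rng.uniform(0, cell, 2)
    cells = np.floor((pos + shift) % box / cell).astype(int)
    keys = cells[:, 0] * int(box / cell) + cells[:, 1]
    for k in np.unique(keys):
        idx = np.where(keys == k)[0]
        u = vel[idx].mean(axis=0)                    # cell-mean velocity
        th = alpha if rng.random() < 0.5 else -alpha
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        vel[idx] = u + (vel[idx] - u) @ R.T          # rotate fluctuations only
    return vel

N, box = 400, 10.0
rng = np.random.default_rng(0)
pos = rng.uniform(0, box, (N, 2))
vel = rng.normal(0, 1, (N, 2))
p0 = vel.mean(axis=0).copy()
vel = srd_collision(pos, vel, box, rng=rng)
print("momentum conserved:", np.allclose(p0, vel.mean(axis=0)))
```

Rotating only the fluctuations about each cell mean is what conserves mass, momentum and kinetic energy per cell, matching the conservation properties claimed for the model above.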
A 3-D model of tumor progression based on complex automata driven by particle dynamics.
Wcisło, Rafał; Dzwinel, Witold; Yuen, David A; Dudek, Arkadiusz Z
2009-12-01
The dynamics of a growing tumor involving mechanical remodeling of healthy tissue and vasculature is neglected in most existing tumor models. This is due to the lack of an efficient computational framework allowing for the simulation of mechanical interactions. Meanwhile, just these interactions trigger critical changes in tumor growth dynamics and are responsible for its volumetric and directional progression. We describe here a novel 3-D model of tumor growth, which combines particle dynamics with the cellular automata concept. The particles represent both tissue cells and fragments of the vascular network. They interact with their closest neighbors via semi-harmonic central forces simulating the mechanical resistance of the cell walls. The particle dynamics is governed by both the Newtonian laws of motion and the cellular automata rules. These rules can represent the cell life cycle and other biological interactions involving smaller spatio-temporal scales. We show that our complex-automata, particle-based model can reproduce realistic 3-D dynamics of the entire system consisting of the tumor, normal tissue cells, blood vessels and blood flow. It can explain phenomena such as the inward cell motion in an avascular tumor, the stabilization of tumor growth by external pressure, tumor vascularization due to the process of angiogenesis, the trapping of healthy cells by the invading tumor, and the influence of external (boundary) conditions on the direction of tumor progression. We conclude that the particle model can serve as a general framework for designing advanced multiscale models of tumor dynamics and is very competitive with the modeling approaches presented before.
NASA Astrophysics Data System (ADS)
Uysal, G.; Sensoy, A.; Yavuz, O.; Sorman, A. A.; Gezgin, T.
2012-04-01
Effective management of a controlled reservoir system that involves multiple and sometimes conflicting objectives is a complex problem, especially in real-time operations. Yuvacık Dam Reservoir, located in the Marmara region of Turkey and built to supply an annual demand of 142 hm3 of water for the city of Kocaeli, requires such a complex management strategy since it has a relatively small (51 hm3) effective capacity. On the other hand, the drainage basin is fed by both rainfall and snowmelt, since the elevation ranges between 80 and 1548 m. Excess water must be stored behind the radial gates between February and May for the sake of sustainability, especially for the summer and autumn periods. Moreover, the physical conditions of the downstream channel constrain the spillway releases to at most 100 m3/s, although the spillway is large enough to handle major floods. This situation makes short-term release decisions a challenging task. Long-term water supply curves, based on historical inflows and annual water demand, are in conflict with flood regulation (control) levels, based on flood attenuation and routing curves, for this reservoir. A guide curve, generated from both water supply and flood control of the downstream channel, generally corresponds to the upper elevation of the conservation pool in a reservoir simulation. However, current operation sometimes necessitates exceeding this target elevation. Since guide curves can be developed as a function of external variables, the water potential of a basin can be an indicator to explain current conditions and decide on further strategies. Besides, releases with respect to the guide curve are managed and restricted by user-defined rules. Although the managers operate the reservoir according to several variable conditions and predictions, a simulation model using a variable guide curve is still urgently needed to test alternatives quickly. To that end, using HEC-ResSim, several variable guide curves are defined to meet the requirements, taking inflow, elevation, precipitation and snow water equivalent into consideration, in order to propose alternative simulations as a decision support system. After that, the releases are subjected to user-defined rules. Thus, the previous year's reservoir simulations are compared with observed reservoir levels and releases. Hypothetical flood scenarios are tested for different storm event timings and sizes. Numerical weather prediction data from the Mesoscale Model 5 (MM5) can be used for temperature and precipitation forecasts that form the inputs for a hydrological model. The estimated flows can then be used for real-time short-term decisions in a reservoir simulation based on a variable guide curve and user-defined rules.
Cheung, Kit; Schultz, Simon R.; Luk, Wayne
2016-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
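For context, a pair-based exponential STDP rule of the kind such platforms implement can be written in a few lines; this is a generic textbook form with hypothetical constants, not NeuroFlow's exact implementation:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based exponential STDP. dt = t_post - t_pre in ms:
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

w = 0.5                                   # initial synaptic weight
pre, post = [10.0, 50.0, 90.0], [12.0, 48.0, 95.0]   # spike times (ms)
for tp in pre:
    for tq in post:
        w += stdp_dw(tq - tp)
w = min(1.0, max(0.0, w))                 # clip to hard bounds
print("updated weight:", round(w, 4))
```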
Wheel life prediction model - an alternative to the FASTSIM algorithm for RCF
NASA Astrophysics Data System (ADS)
Hossein-Nia, Saeed; Sichani, Matin Sh.; Stichel, Sebastian; Casanueva, Carlos
2018-07-01
In this article, a wheel life prediction model considering wear and rolling contact fatigue (RCF) is developed and applied to a heavy-haul locomotive. For the wear calculations, a methodology based on Archard's wear theory is used. The simulated wear depth is compared with profile measurements within 100,000 km. For RCF, a shakedown-based theory is applied locally, using the FaStrip algorithm instead of FASTSIM to estimate the tangential stresses. The differences between the two algorithms in damage prediction models are studied. The running distance between two reprofilings due to RCF is estimated based on a Wöhler-like relationship developed from laboratory test results in the literature and on the Palmgren-Miner rule. The simulated crack locations and their angles are compared with a five-year field study. Calculations are also carried out to study the effects of electro-dynamic braking, track gauge, harder wheel material and increased axle load on wheel life.
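The Palmgren-Miner step of such a life prediction can be sketched directly: damage fractions n_i/N_i from a Wöhler-like S-N relation are summed until they reach 1. The S-N constants and the load spectrum below are hypothetical placeholders, not the paper's laboratory-derived values:

```python
def cycles_to_failure(stress, sigma_f=3000.0, b=-0.1):
    """Woehler/Basquin-like S-N relation: N(sigma) = (sigma/sigma_f)**(1/b).
    sigma_f (MPa) and b are hypothetical material constants."""
    return (stress / sigma_f) ** (1.0 / b)

def miner_damage(load_blocks):
    """Palmgren-Miner linear accumulation: D = sum(n_i / N_i);
    failure (here, the need for reprofiling) is predicted at D = 1."""
    return sum(n / cycles_to_failure(s) for s, n in load_blocks)

# (contact stress MPa, load cycles) per 1000 km -- hypothetical spectrum
blocks_per_1000km = [(450.0, 2.0e5), (550.0, 4.0e4), (650.0, 5.0e3)]
D = miner_damage(blocks_per_1000km)
print(f"predicted distance to RCF reprofiling: {1000.0 / D:.0f} km")
```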
NASA Astrophysics Data System (ADS)
Yu, Zhang; Xiaohui, Song; Jianfang, Li; Fei, Gao
2017-05-01
Cable overheating reduces the cable insulation level, accelerates insulation aging, and can even cause short-circuit faults. Identification of and warning about cable overheating risk is therefore necessary for distribution network operators. A cable overheating risk warning method based on impedance parameter estimation is proposed in this paper to improve the safe and reliable operation of the distribution network. Firstly, a cable impedance estimation model is established using the least squares method with data from the distribution SCADA system, to improve the accuracy of the impedance parameter estimation. Secondly, the threshold value of cable impedance is calculated from historical data, and the forecast value of cable impedance is calculated from future forecast data of the distribution SCADA system. Thirdly, a rule library for overheating risk warning is established; the cable impedance forecast value is calculated, the rate of change of the impedance is analyzed, and the overheating risk of the cable line is then signaled based on the rule library, according to the relationship between the variation of impedance and the line temperature rise. The overheating risk warning method is simulated in the paper. The simulation results show that the method can accurately identify the impedance and forecast the temperature rise of cable lines in the distribution network. The result of the overheating risk warning can provide a decision basis for operation, maintenance and repair.
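The least-squares impedance estimation step can be sketched as follows: with phasor measurements satisfying V_send - V_recv = (R + jX)I, splitting real and imaginary parts gives a linear system in [R, X]. The measurement set, noise levels and true impedance below are synthetic:

```python
import numpy as np

def estimate_line_impedance(V_send, V_recv, I):
    """Least-squares fit of series impedance Z = R + jX from phasors:
    V_send - V_recv = (R + jX) * I, split into real/imaginary parts
    so the unknowns [R, X] appear linearly."""
    dV = V_send - V_recv
    A = np.vstack([np.column_stack([I.real, -I.imag]),
                   np.column_stack([I.imag,  I.real])])
    b = np.concatenate([dV.real, dV.imag])
    (R, X), *_ = np.linalg.lstsq(A, b, rcond=None)
    return R, X

rng = np.random.default_rng(3)
n = 200
I = rng.uniform(50, 300, n) * np.exp(1j * rng.uniform(-0.3, 0.3, n))
Z_true = 0.12 + 0.31j                      # ohms, hypothetical cable
V_recv = 6000.0 + rng.normal(0, 2, n)      # SCADA-like noisy readings
V_send = V_recv + Z_true * I + rng.normal(0, 1, n)
print(estimate_line_impedance(V_send, V_recv, I))  # ~ (0.12, 0.31)
```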
Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things
NASA Astrophysics Data System (ADS)
Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik
2017-09-01
This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into one or zero; the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users' understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
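A miniature version of the rule-creation step (support/confidence mining over binarized failure logs) shows the IF-THEN structure the paper describes; the item names and thresholds are hypothetical, and a real implementation would run a full Apriori over larger itemsets:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.3, min_conf=0.7):
    """Tiny Apriori-style pass: score 2-itemsets, then emit IF-THEN
    rules A -> B whose support and confidence clear the thresholds."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    support = lambda s: sum(s <= t for t in transactions) / n
    rules = []
    for a, b in combinations(sorted(items), 2):
        for ante, cons in [({a}, {b}), ({b}, {a})]:
            supp = support(ante | cons)
            if supp >= min_support and support(ante) > 0:
                conf = supp / support(ante)
                if conf >= min_conf:
                    rules.append((ante, cons, supp, conf))
    return rules

logs = [{"overheat", "spindle_fail"}, {"overheat", "spindle_fail"},
        {"vibration"}, {"overheat", "spindle_fail", "vibration"},
        {"overheat"}]                       # binarized failure records
for ante, cons, s, c in mine_rules(logs):
    print(f"IF {ante} THEN {cons}  (support={s:.2f}, confidence={c:.2f})")
```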
NASA Technical Reports Server (NTRS)
Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.
1980-01-01
A general aviation single-pilot instrument flight rules (IFR) simulation capability was developed, and problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three-dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations in the list were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids with their associated frequencies, call letters, locations, and orientations, plus runways and true headings, are included in the data base. The simulation included a TV-displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows a simulation of a full mission scenario including breakout and landing.
Significance testing of rules in rule-based models of human problem solving
NASA Technical Reports Server (NTRS)
Lewis, C. M.; Hammer, J. M.
1986-01-01
Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.
A hybrid learning method for constructing compact rule-based fuzzy models.
Zhao, Wanqing; Niu, Qun; Li, Kang; Irwin, George W
2013-12-01
The Takagi–Sugeno–Kang-type rule-based fuzzy model has found many applications in different fields; a major challenge is, however, to build a compact model with optimized model parameters which leads to satisfactory model performance. To produce a compact model, most existing approaches mainly focus on selecting an appropriate number of fuzzy rules. In contrast, this paper considers not only the selection of fuzzy rules but also the structure of each rule premise and consequent, leading to the development of a novel compact rule-based fuzzy model. Here, each fuzzy rule is associated with two sets of input attributes, in which the first is used for constructing the rule premise and the other is employed in the rule consequent. A new hybrid learning method combining the modified harmony search method with a fast recursive algorithm is hereby proposed to determine the structure and the parameters for the rule premises and consequents. This is a hard mixed-integer nonlinear optimization problem, and the proposed hybrid method solves the problem by employing an embedded framework, leading to a significantly reduced number of model parameters and a small number of fuzzy rules with each being as simple as possible. Results from three examples are presented to demonstrate the compactness (in terms of the number of model parameters and the number of rules) and the performance of the fuzzy models obtained by the proposed hybrid learning method, in comparison with other techniques from the literature.
Modeling the Population Dynamics of Antibiotic-Resistant Bacteria: An Agent-Based Approach
NASA Astrophysics Data System (ADS)
Murphy, James T.; Walshe, Ray; Devocelle, Marc
The response of bacterial populations to antibiotic treatment is often a function of a diverse range of interacting factors. In order to develop strategies to minimize the spread of antibiotic resistance in pathogenic bacteria, a sound theoretical understanding of the systems of interactions taking place within a colony must be developed. The agent-based approach to modeling bacterial populations is a useful tool for relating data obtained at the molecular and cellular level with the overall population dynamics. Here we demonstrate an agent-based model, called Micro-Gen, which has been developed to simulate the growth and development of bacterial colonies in culture. The model also incorporates biochemical rules and parameters describing the kinetic interactions of bacterial cells with antibiotic molecules. Simulations were carried out to replicate the development of methicillin-resistant S. aureus (MRSA) colonies growing in the presence of antibiotics. The model was explored to see how the properties of the system emerge from the interactions of the individual bacterial agents in order to achieve a better mechanistic understanding of the population dynamics taking place. Micro-Gen provides a good theoretical framework for investigating the effects of local environmental conditions and cellular properties on the response of bacterial populations to antibiotic exposure in the context of a simulated environment.
An experimental investigation of internal area ruling for transonic and supersonic channel flow
NASA Technical Reports Server (NTRS)
Roberts, W. B.; Vanrintel, H. L.; Rizvi, G.
1982-01-01
A simulated transonic rotor channel model was examined experimentally to verify the flow physics of internal area ruling. Pressure measurements were performed in the high speed wind tunnel at transonic speeds with Mach 1.5 and Mach 2 nozzle blocks to get an indication of the approximate shock losses. The results showed a reduction in losses due to internal area ruling with the Mach 1.5 nozzle blocks. The reduction in total loss coefficient was of the order of 17 percent for a high blockage model and 7 percent for a cut-down model.
A Dual-Route Model that Learns to Pronounce English Words
NASA Technical Reports Server (NTRS)
Remington, Roger W.; Miller, Craig S.; Null, Cynthia H. (Technical Monitor)
1995-01-01
This paper describes a model that learns to pronounce English words. Learning occurs in two modules: 1) a rule-based module that constructs pronunciations by phonetic analysis of the letter string, and 2) a whole-word module that learns to associate subsets of letters to the pronunciation, without phonetic analysis. In a simulation on a corpus of over 300 words the model produced pronunciation latencies consistent with the effects of word frequency and orthographic regularity observed in human data. Implications of the model for theories of visual word processing and reading instruction are discussed.
A water market simulator considering pair-wise trades between agents
NASA Astrophysics Data System (ADS)
Huskova, I.; Erfani, T.; Harou, J. J.
2012-04-01
In many basins in England no further water abstraction licences are available. Trading water between water rights holders has been recognized as a potentially effective and economically efficient strategy to mitigate increasing scarcity. A screening tool that could assess the potential for trade through realistic simulation of individual water rights holders would help assess this solution's potential contribution to local water management. We propose an optimisation-driven water market simulator that predicts pair-wise trade in a catchment and represents its interaction with natural hydrology and engineered infrastructure. A model is used to emulate licence holders' willingness to engage in short-term trade transactions. In their simplest form, agents are represented using an economic benefit function. The working hypothesis is that trading behaviour can be partially predicted based on differences in the marginal value of water over space and time and on estimates of the transaction costs of pair-wise trades. We discuss the further possibility of embedding the rules, norms and preferences of the different water user sectors to more realistically represent the behaviours, motives and constraints of individual licence holders. The potential benefits and limitations of such a social simulation (agent-based) approach are contrasted with our simulator, in which agents are driven by economic optimization. A case study based on the Dove River Basin (UK) demonstrates model inputs and outputs. The ability of the model to suggest impacts of water rights policy reforms on trading is discussed.
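A minimal sketch of the pair-wise trading logic described above: with linear marginal benefit functions, units move from the licence holder with the lowest marginal value of water to the one with the highest, as long as the gain exceeds a transaction cost. The agent names and benefit parameters are hypothetical:

```python
def pairwise_trades(agents, transaction_cost=2.0):
    """Greedy pair-wise trading: while some buyer values an extra unit
    more than some seller (by more than the transaction cost), transfer
    one unit. Linear marginal value: mv = a - b * allocation."""
    def mv(ag):
        return ag['a'] - ag['b'] * ag['alloc']
    trades = []
    while True:
        buyer = max(agents, key=mv)
        seller = min(agents, key=mv)
        gain = mv(buyer) - mv(seller)
        if seller['alloc'] < 1 or gain <= transaction_cost:
            break                      # no mutually beneficial trade left
        seller['alloc'] -= 1
        buyer['alloc'] += 1
        trades.append((seller['name'], buyer['name']))
    return trades

agents = [{'name': 'farm_A', 'a': 60, 'b': 4, 'alloc': 10},
          {'name': 'farm_B', 'a': 90, 'b': 5, 'alloc': 4},
          {'name': 'utility', 'a': 120, 'b': 9, 'alloc': 6}]
print(pairwise_trades(agents))
```

The loop terminates because every transfer shrinks the gap between the highest and lowest marginal values, which is the equalizing tendency the simulator exploits.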
Application of artificial intelligence principles to the analysis of "crazy" speech.
Garfield, D A; Rapp, C
1994-04-01
Artificial intelligence computer simulation methods can be used to investigate psychotic or "crazy" speech. Here, symbolic reasoning algorithms establish semantic networks that schematize speech. These semantic networks consist of two main structures: case frames and object taxonomies. Node-based reasoning rules apply to object taxonomies and pathway-based reasoning rules apply to case frames. Normal listeners may recognize speech as "crazy talk" based on violations of node- and pathway-based reasoning rules. In this article, three separate segments of schizophrenic speech illustrate violations of these rules. This artificial intelligence approach is compared and contrasted with other neurolinguistic approaches and is discussed as a conceptual link between neurobiological and psychodynamic understandings of psychopathology.
Automatic 3D Building Model Generations with Airborne LiDAR Data
NASA Astrophysics Data System (ADS)
Yastikli, N.; Cetin, Z.
2017-11-01
LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that involve building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is aimed at. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in a study area containing partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results obtained in the study area verified that 3D building models can be generated automatically and successfully from raw LiDAR point cloud data.
Harding, R. M.; Boyce, A. J.; Martinson, J. J.; Flint, J.; Clegg, J. B.
1993-01-01
Extensive allelic diversity in variable numbers of tandem repeats (VNTRs) has been discovered in the human genome. For population genetic studies of VNTRs, such as forensic applications, it is important to know whether a neutral mutation-drift balance of VNTR polymorphism can be represented by the infinite alleles model. The assumption of the infinite alleles model that each new mutant is unique is very likely to be violated by unequal sister chromatid exchange (USCE), the primary process believed to generate VNTR mutants. We show that increasing both mutation rates and misalignment constraint for intrachromosomal recombination in a computer simulation model reduces simulated VNTR diversity below the expectations of the infinite alleles model. Maximal constraint, represented as slippage of single repeats, reduces simulated VNTR diversity to levels expected from the stepwise mutation model. Although misalignment rule is the more important variable, mutation rate also has an effect. At moderate rates of USCE, simulated VNTR diversity fluctuates around infinite alleles expectation. However, if rates of USCE are high, as for hypervariable VNTRs, simulated VNTR diversity is consistently lower than predicted by the infinite alleles model. This has been observed for many VNTRs and accounted for by technical problems in distinguishing alleles of neighboring size classes. We use sampling theory to confirm the intrinsically poor fit to the infinite alleles model of both simulated VNTR diversity and observed VNTR polymorphisms sampled from two Papua New Guinean populations. PMID:8293988
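The mutation scheme described here is easy to prototype. Below is an illustrative Wright-Fisher-style sketch, not the authors' simulation code; the population size, mutation rate, initial repeat number and misalignment window (max_shift) are placeholder assumptions. A tight window (max_shift=1) corresponds to single-repeat slippage, i.e., the stepwise limit.

```python
import random

def mutate(repeats, max_shift):
    # USCE: one chromatid gains what the other loses; we track one product.
    shift = random.randint(1, max_shift)
    return max(2, repeats + random.choice([-shift, shift]))

def simulate(n=500, mu=1e-3, max_shift=1, generations=5000):
    pop = [20] * n                                   # initial repeat number
    for _ in range(generations):
        pop = [random.choice(pop) for _ in range(n)]  # genetic drift
        pop = [mutate(a, max_shift) if random.random() < mu else a
               for a in pop]
    return len(set(pop)) / n      # crude index of allelic diversity

# Tight misalignment approximates the stepwise model; a wide window
# behaves more like the infinite alleles model.
print(simulate(max_shift=1), simulate(max_shift=10))
```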
Methodology for balancing design and process tradeoffs for deep-subwavelength technologies
NASA Astrophysics Data System (ADS)
Graur, Ioana; Wagner, Tina; Ryan, Deborah; Chidambarrao, Dureseti; Kumaraswamy, Anand; Bickford, Jeanne; Styduhar, Mark; Wang, Lee
2011-04-01
For process development of deep-subwavelength technologies, it has become accepted practice to use model-based simulation to predict systematic and parametric failures. Increasingly, these techniques are being used by designers to ensure layout manufacturability, as an alternative or complement to restrictive design rules. The benefit of model-based simulation tools in the design environment is that manufacturability problems are addressed in a design-aware way by making appropriate trade-offs, e.g., between overall chip density and manufacturing cost and yield. The paper shows how library elements and the full ASIC design flow benefit from eliminating hot spots and improving design robustness early in the design cycle. It demonstrates a path to yield optimization and first-time-right designs implemented in leading-edge technologies. The approach described herein identifies those areas in the design that benefit most from being fixed early, guiding design updates and, through careful selection of design sensitivities, avoiding later design churn. This paper shows how to achieve this goal by using simulation tools incorporating various models, from sparse to rigorously physical, pattern detection and pattern matching, and checking and validating failure thresholds.
Exploitation of Self Organization in UAV Swarms for Optimization in Combat Environments
2008-03-01
[Extraction fragment] The work integrates self-organized behaviors and an entangled hierarchy into the Swarmfare UAV simulation environment and validates the resulting model; the hierarchy of control emerges from the entangled hierarchy of state relations at the simulation, swarm and rule/behavior levels (Figure 4.3). The remaining text is appendix residue (abstract model type tables and a comparison of simulators, including MATLAB and MultiUAV).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mian, Muhammad Umer, E-mail: umermian@gmail.com; Khir, M. H. Md.; Tang, T. B.
Pre-fabrication behavioural and performance analysis with computer-aided design (CAD) tools is a common and cost-effective practice. In light of this, we present a simulation methodology for a dual-mass-oscillator-based 3 Degree of Freedom (3-DoF) MEMS gyroscope. The 3-DoF gyroscope is modeled through lumped-parameter models using equivalent circuit elements. These equivalent circuits consist of elementary components that are counterparts of the respective mechanical components used to design and fabricate the 3-DoF MEMS gyroscope. The complete design of the equivalent circuit model, the mathematical modeling and the simulations are presented in this paper. The behavior of the equivalent lumped models derived for the proposed device design is simulated in MEMSPRO T-SPICE software. Simulations are carried out with design specifications following the design rules of the MetalMUMPS fabrication process. The drive-mass resonant frequencies simulated by this technique are 1.59 kHz and 2.05 kHz, respectively, which are close to the resonant frequencies found by the analytical formulation of the gyroscope. The lumped equivalent circuit modeling technique proved to be a time-efficient technique for the analysis of complex MEMS devices like 3-DoF gyroscopes, and an alternative to the complex and time-consuming coupled-field Finite Element Analysis (FEA) used previously.
Rules based process window OPC
NASA Astrophysics Data System (ADS)
O'Brien, Sean; Soper, Robert; Best, Shane; Mason, Mark
2008-03-01
As a preliminary step towards model-based process window OPC we have analyzed the impact of correcting post-OPC layouts using rules-based methods. Image processing on the Brion Tachyon was used to identify sites where the OPC model/recipe failed to generate an acceptable solution. A set of rules for 65nm active and poly were generated by classifying these failure sites. The rules were based upon segment runlengths, figure spaces, and adjacent figure widths. 2.1 million sites for active were corrected in a small chip (comparing the pre and post rules-based operations), and 59 million were found at poly. Tachyon analysis of the final reticle layout found weak-margin sites distinct from those sites repaired by rules-based corrections. For the active layer, more than 75% of the sites corrected by rules would have printed without a defect, indicating that most rules-based cleanups degrade the lithographic pattern. Some sites were missed by the rules-based cleanups due to either bugs in the DRC software or gaps in the rules table. In the end, dramatic changes to the reticle prevented catastrophic lithography errors, but this method is far too blunt. A more subtle model-based procedure is needed, changing only those sites which have unsatisfactory lithographic margin.
Integration of object-oriented knowledge representation with the CLIPS rule based system
NASA Technical Reports Server (NTRS)
Logie, David S.; Kamil, Hasan
1990-01-01
The paper describes a portion of the work aimed at developing an integrated, knowledge-based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ which is used to build and modify an object-oriented knowledge base. The ORL was designed in such a way as to be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule-based system C Language Integrated Production System (CLIPS), developed at the NASA Johnson Space Center, is discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule-based knowledge is used to simulate the reasoning process. Alone, the object-based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.
NASA Astrophysics Data System (ADS)
Park, E.; Jeong, J.; Choi, J.; Han, W. S.; Yun, S. T.
2016-12-01
Three modified outlier identification methods, the three-sigma rule (3s), the interquartile range (IQR) and the median absolute deviation (MAD), each taking advantage of an ensemble regression method, are proposed. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method has a limitation in falsely identifying excessive outliers, which may be supplemented by joint application with the other methods (i.e., the 3s rule and MAD methods). The proposed methods can also be applied as a potential tool for future anomaly detection through model training based on currently available data.
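The three detectors have standard textbook forms, sketched below in Python. In the paper they are applied to ensemble-regression residuals; here, for brevity, they operate on raw values, and the cut-off constants are conventional defaults rather than the authors' modified settings.

```python
import numpy as np

def three_sigma(x):
    m, s = x.mean(), x.std()
    return np.abs(x - m) > 3 * s

def iqr_rule(x, k=1.5):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def mad_rule(x, k=3.5):
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    # 0.6745 scales the MAD to be consistent with sigma for normal data.
    return 0.6745 * np.abs(x - med) / (mad + 1e-12) > k

x = np.concatenate([np.random.normal(0, 1, 200), [8.0, -9.0]])
print(three_sigma(x).sum(), iqr_rule(x).sum(), mad_rule(x).sum())
```

The MAD and IQR rules are robust because their location and scale estimates are themselves insensitive to the outliers being hunted, unlike the mean and standard deviation in the 3s rule.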
Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru
2011-05-01
Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of the simulation models built. In order to break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model-checking technique (which saves resources compared to the offline approach) with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by analyzing the underlying model for the neuronal cell fate decision model (ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our efficient online model checker, MIRACH 1.0, together with parameter estimation, we performed 20 million simulation runs and located 57 parameter sets for 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles, without much manual intervention. To evaluate the robustness of these 57 parameter sets, we ran another 20 million simulation runs using different magnitudes of noise. Our simulation results indicate that one of these models is the most reasonable and robust owing to its high stability against stochastic noise. Our simulation results provide interesting biological findings which could be used for future wet-lab experiments.
Feng, Song; Ollivier, Julien F; Swain, Peter S; Soyer, Orkun S
2015-10-30
Systems biologists aim to decipher the structure and dynamics of signaling and regulatory networks underpinning cellular responses; synthetic biologists can use this insight to alter existing networks or engineer de novo ones. Both tasks will benefit from an understanding of which structural and dynamic features of networks can emerge from evolutionary processes, through which intermediary steps these arise, and whether they embody general design principles. As natural evolution at the level of network dynamics is difficult to study, in silico evolution of network models can provide important insights. However, current tools used for in silico evolution of network dynamics are limited to ad hoc computer simulations and models. Here we introduce BioJazz, an extendable, user-friendly tool for simulating the evolution of dynamic biochemical networks. Unlike previous tools for in silico evolution, BioJazz allows for the evolution of cellular networks with unbounded complexity by combining rule-based modeling with an encoding of networks that is akin to a genome. We show that BioJazz can be used to implement biologically realistic selective pressures and allows exploration of the space of network architectures and dynamics that implement prescribed physiological functions. BioJazz is provided as an open-source tool to facilitate its further development and use. Source code and user manuals are available at: http://oss-lab.github.io/biojazz and http://osslab.lifesci.warwick.ac.uk/BioJazz.aspx. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
John, Shalini; Thangapandian, Sundarapandian; Lee, Keun Woo
2012-01-01
Human pancreatic cholesterol esterase (hCEase) is one of the lipases involved in the digestion of a large and broad spectrum of substrates including triglycerides, phospholipids, cholesteryl esters, etc. The presence of bile salts is found to be very important for the activation of hCEase. Molecular dynamics simulations were performed for the apo form and the bile-salt-complexed form of hCEase using the coordinates of two bile salts from bovine CEase. The stability of the systems throughout the simulation time was checked, and two representative structures from the highly populated regions were selected using cluster analysis. These two representative structures were used in pharmacophore model generation. The generated pharmacophore models were validated and used in database screening. The screened hits were refined for their drug-like properties based on Lipinski's rule of five and ADMET properties. The drug-like compounds were further refined by molecular docking simulation using the GOLD program, based on the GOLD fitness score, mode of binding, and molecular interactions with the active-site amino acids. Finally, three hits of novel scaffolds were selected as potential leads to be used in novel and potent hCEase inhibitor design. The stability of the binding modes and molecular interactions of these final hits was confirmed by molecular dynamics simulations.
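Lipinski's rule of five, used above as the drug-likeness filter, flags violations for molecular weight > 500 Da, logP > 5, more than 5 hydrogen-bond donors, or more than 10 hydrogen-bond acceptors. A minimal sketch follows; the descriptor values would normally come from a cheminformatics toolkit, and the example numbers are hypothetical.

```python
# Minimal Lipinski rule-of-five filter; descriptor values are supplied
# directly here, and the example compound is invented for illustration.

def passes_lipinski(mw, logp, h_donors, h_acceptors, max_violations=1):
    violations = sum([mw > 500, logp > 5, h_donors > 5, h_acceptors > 10])
    return violations <= max_violations

# Hypothetical screened hit:
print(passes_lipinski(mw=342.4, logp=2.8, h_donors=2, h_acceptors=5))  # True
```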
PAM: Particle automata model in simulation of Fusarium graminearum pathogen expansion.
Wcisło, Rafał; Miller, S Shea; Dzwinel, Witold
2016-01-21
The multi-scale nature and inherent complexity of biological systems are a great challenge for computer modeling and classical modeling paradigms. We present a novel particle automata modeling metaphor in the context of developing a 3D model of Fusarium graminearum infection in wheat. The system consisting of the host plant and Fusarium pathogen cells can be represented by an ensemble of discrete particles defined by a set of attributes. The cells-particles can interact with each other mimicking mechanical resistance of the cell walls and cell coalescence. The particles can move, while some of their attributes can be changed according to prescribed rules. The rules can represent cellular scales of a complex system, while the integrated particle automata model (PAM) simulates its overall multi-scale behavior. We show that due to the ability of mimicking mechanical interactions of Fusarium tip cells with the host tissue, the model is able to simulate realistic penetration properties of the colonization process reproducing both vertical and lateral Fusarium invasion scenarios. The comparison of simulation results with micrographs from laboratory experiments shows encouraging qualitative agreement between the two. Copyright © 2015 Elsevier Ltd. All rights reserved.
Modelling and simulating a crisis management system: an organisational perspective
NASA Astrophysics Data System (ADS)
Chaawa, Mohamed; Thabet, Inès; Hanachi, Chihab; Ben Said, Lamjed
2017-04-01
Crises are complex situations due to the dynamism of the environment, its unpredictability and the complexity of the interactions among the several different and autonomous organisations involved. In such a context, establishing an organisational view as well as structuring organisations' communications and their functioning is a crucial requirement. In this article, we propose a multi-agent organisational model (OM) to abstract, simulate and analyse a crisis management system (CMS). The objective is to evaluate the CMS from an organisational view, to assess its strengths as well as its weaknesses, and to provide decision-makers with recommendations for a more flexible and reactive CMS. The proposed OM is illustrated through a real case study: a snowstorm in a Tunisian region. More precisely, we make the following contributions: firstly, we provide an environmental model that identifies the concepts involved in the crisis. Then, we define a role model that copes with the involved actors. In addition, we specify the organisational structure and the interaction model that rule communications and structure the actors' functioning. These models, built following the GAIA methodology, abstract the CMS from an organisational perspective. Finally, we implemented a customisable multi-agent simulator based on the Janus platform to analyse the organisational model through several simulations.
A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents
Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha
2017-01-01
Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control—enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872
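The core of the model, accumulating a travel vector as activity in a circular array of heading-tuned units, can be reduced to a few lines. The sketch below is an illustrative population-vector implementation, not the authors' full neural circuit; the array size and the cosine decoding step are assumptions.

```python
import numpy as np

N = 36
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred headings
activity = np.zeros(N)                                # accumulated vector

def integrate_step(heading, speed):
    # Each unit accumulates the projection of the step onto its preference.
    global activity
    activity += speed * np.cos(heading - prefs)

def home_vector():
    # Population decoding: the home vector is the negated travel vector.
    x = activity @ np.cos(prefs) * (2.0 / N)
    y = activity @ np.sin(prefs) * (2.0 / N)
    return -x, -y

for heading, speed in [(0.0, 1.0), (np.pi / 2, 2.0)]:  # outbound path
    integrate_step(heading, speed)
print(home_vector())   # approx (-1.0, -2.0): points back to the nest
```

A learned vector memory would, in this picture, simply be a stored snapshot of the array's activity at the moment of food reward.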
NASA Astrophysics Data System (ADS)
Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.
2011-03-01
Presently, dynamic surface-based models are required to contain increasingly large numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as Cellular Automata (CA). This method, however, has an intrinsically parallel updating nature, and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based CA simulations of complex, evolving surfaces to massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of such parallel architectures. For the actual simulations, we consider surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with widespread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods benefit significantly from the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.
NASA Astrophysics Data System (ADS)
Inkoom, J. N.; Nyarko, B. K.
2014-12-01
The integration of geographic information systems (GIS) and agent-based modelling (ABM) can be an efficient tool to improve spatial planning practices. This paper utilizes GIS and ABM approaches to simulate spatial growth patterns of settlement structures in Shama. A preliminary household survey on residential location decision-making choice served as the behavioural rule for household agents in the model. Physical environment properties of the model were extracted from a 2005 image implemented in NetLogo. The resulting growth pattern model was compared with empirical growth patterns to ascertain the model's accuracy. The paper establishes that the development of unplanned structures and its evolving structural pattern are a function of land price, proximity to economic centres, household economic status and location decision-making patterns. The application of the proposed model underlines its potential for integration into urban planning policies and practices, and for understanding residential decision-making processes in emerging cities in developing countries. Key Words: GIS; Agent-based modelling; Growth patterns; NetLogo; Location decision making; Computational Intelligence.
NASA Astrophysics Data System (ADS)
Vrabec, Jadran; Kedia, Gaurav Kumar; Buchhauser, Ulrich; Meyer-Pittroff, Roland; Hasse, Hans
2009-02-01
For the design and optimization of CO2 recovery from alcoholic fermentation processes by distillation, models for vapor-liquid equilibria (VLE) are needed. Two such thermodynamic models, the Peng-Robinson equation of state (EOS) and a model based on Henry's law constants, are proposed for the ternary mixture N2 + O2 + CO2. Pure-substance parameters of the Peng-Robinson EOS are taken from the literature, whereas the binary parameters of the van der Waals one-fluid mixing rule are adjusted to experimental binary VLE data. The Peng-Robinson EOS describes both binary and ternary experimental data well, except at high pressures approaching the critical region. A molecular model is validated by simulation using binary and ternary experimental VLE data. On the basis of this model, the Henry's law constants of N2 and O2 in CO2 are predicted by molecular simulation. An easy-to-use thermodynamic model, based on those Henry's law constants, is developed to reliably describe the VLE in the CO2-rich region.
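The pure-component and mixing-rule algebra is standard and compact: a_i = 0.45724 R^2 Tc_i^2 alpha_i(T) / Pc_i and b_i = 0.07780 R Tc_i / Pc_i per component, combined as a_mix = sum_ij x_i x_j sqrt(a_i a_j)(1 - k_ij) and b_mix = sum_i x_i b_i. The sketch below implements this in Python; the critical constants are approximate literature values, and k_ij = 0 stands in for the binary parameters the authors fitted to VLE data.

```python
import math

R = 8.314  # J/(mol K)

# Approximate literature values: Tc [K], Pc [Pa], acentric factor omega.
components = {
    "N2":  (126.2, 3.40e6, 0.0372),
    "O2":  (154.6, 5.05e6, 0.0222),
    "CO2": (304.1, 7.38e6, 0.2239),
}

def pr_pure(T, Tc, Pc, omega):
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1 + kappa * (1 - math.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def vdw_one_fluid(T, x, kij=0.0):
    # Van der Waals one-fluid mixing rule; kij would be fitted to binary VLE.
    names = list(components)
    ab = {n: pr_pure(T, *components[n]) for n in names}
    a_mix = sum(x[i] * x[j] * math.sqrt(ab[ni][0] * ab[nj][0]) * (1 - kij)
                for i, ni in enumerate(names) for j, nj in enumerate(names))
    b_mix = sum(x[i] * ab[n][1] for i, n in enumerate(names))
    return a_mix, b_mix

print(vdw_one_fluid(220.0, [0.05, 0.05, 0.90]))  # CO2-rich mixture
```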
Approach to design neural cryptography: a generalized architecture and a heuristic rule.
Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen
2013-06-01
Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, in order to provide an approach to this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework, named the tree state classification machine (TSCM), extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we find that the heuristic rule can improve the security of TSCM-based neural cryptography. TSCM and the heuristic rule can therefore guide the design of a large number of effective neural cryptography candidates, among which more secure instances can be achieved. Significantly, in light of TSCM and the heuristic rule, we further show that our designed neural cryptography outperforms TPM (the most secure model at present) in terms of security. Finally, a series of numerical simulation experiments is provided to verify the validity and applicability of our results.
Characteristics of traffic flow at a non-signalized intersection in the framework of game theory
NASA Astrophysics Data System (ADS)
Fan, Hongqiang; Jia, Bin; Tian, Junfang; Yun, Lifen
2014-12-01
At a non-signalized intersection, some vehicles violate the traffic rules to pass the intersection as soon as possible. These behaviors may cause many traffic conflicts and even traffic accidents. In this paper, a simulation model is proposed to study the effects of these behaviors at a non-signalized intersection. Vehicle movement is simulated by a cellular automaton (CA) model, and game theory is introduced to simulate the intersection dynamics. Two types of driver participate in the game process: cooperators (C) and defectors (D). Cooperators obey the traffic rules, but defectors do not. A transition may occur while a cooperator is waiting before the intersection; the critical value of the waiting time follows a Weibull distribution. One transition regime is found in the phase diagram. The simulation results illustrate the applicability of the proposed model and reveal a number of interesting insights into intersection management, including that the existence of defectors benefits the capacity of the intersection but also reduces its safety.
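The cooperator-to-defector transition with a Weibull-distributed critical waiting time can be sketched in isolation from the full CA. The following Python fragment is illustrative; the Weibull shape and scale parameters are placeholders, not the fitted values.

```python
import random

def critical_wait(scale=30.0, shape=2.0):
    # random.weibullvariate(alpha, beta): alpha is scale, beta is shape.
    return random.weibullvariate(scale, shape)

class Driver:
    def __init__(self):
        self.kind = "C"            # cooperator: obeys priority rules
        self.wait = 0
        self.threshold = critical_wait()

    def tick(self, blocked):
        if blocked:
            self.wait += 1
            if self.kind == "C" and self.wait > self.threshold:
                self.kind = "D"    # defects: enters against the rules
        else:
            self.wait = 0          # served, patience resets

d = Driver()
for t in range(100):
    d.tick(blocked=True)
print(d.kind, d.wait)
```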
Agent Based Modeling Applications for Geosciences
NASA Astrophysics Data System (ADS)
Stein, J. S.
2004-12-01
Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include: significant computational requirements for keeping track of thousands to millions of agents; a lack of methods and strategies for model validation; and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and to determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in a thermodynamic framework as a set of reactions that roll up the integrated effect that diverse biological communities exert on a geological system. This approach may work well for predicting the effect of certain biological communities in specific environments for which experimental data are available. However, it does not further our knowledge of how the geobiological system actually functions on a micro scale. Agent-based techniques may provide a framework to explore the fundamental interactions required to explain the system-wide behavior. This presentation will survey several promising applications of agent-based modeling approaches to problems in the geosciences and describe specific contributions to some of the inherent challenges facing this approach.
Simulation of salt production process
NASA Astrophysics Data System (ADS)
Muraveva, E. A.
2017-10-01
In this paper an approach to using the simulation software iThink to simulate a salt production system is proposed. The dynamic processes of the original system are substituted by processes simulated in the abstract model, in compliance with the basic rules of the original system, which allows one to accelerate and reduce the cost of the research. As a result, a stable, workable simulation model was obtained that can display the rate of salt exhaustion and many other parameters important for business planning.
Understanding the complex dynamics of stock markets through cellular automata
NASA Astrophysics Data System (ADS)
Qiu, G.; Kandhai, D.; Sloot, P. M. A.
2007-04-01
We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.
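A minimal version of such a two-type lattice market fits in a short script. The sketch below illustrates the mechanism and is not the authors' model: the fundamentalist noise, the imitation neighbourhood and the price-impact coefficient are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 32
is_fund = rng.random((L, L)) < 0.3          # 30% fundamentalists (assumed)
price, fundamental = 100.0, 100.0
decision = np.zeros((L, L))                 # +1 buy, -1 sell

for t in range(200):
    # Fundamentalists buy under-priced and sell over-priced assets.
    fund_move = np.sign(fundamental - price + rng.normal(0, 1, (L, L)))
    # Imitators copy the majority decision of their 4 nearest neighbours.
    neigh = (np.roll(decision, 1, 0) + np.roll(decision, -1, 0) +
             np.roll(decision, 1, 1) + np.roll(decision, -1, 1))
    imit_move = np.where(neigh != 0, np.sign(neigh),
                         rng.choice([-1.0, 1.0], (L, L)))
    decision = np.where(is_fund, fund_move, imit_move)
    price *= np.exp(0.001 * decision.mean())   # excess demand moves price

print(round(price, 2))
```

Even this stripped-down version shows the key ingredient: imitation couples neighbouring decisions, so large coherent buy or sell clusters, and hence large price moves, can emerge from purely local rules.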
NASA Astrophysics Data System (ADS)
Qi, Le; Zheng, Zhongyi; Gang, Longhui
2017-10-01
Ships' velocity changes, which are influenced by weather and sea conditions, e.g., wind, waves, currents and tides, are significant and must be considered in marine traffic models. Therefore, a new marine traffic model based on cellular automata (CA) is proposed in this paper; the characteristics of ships' velocity changes are taken into account in the model. First, the acceleration of a ship was divided into two components: a regular component and a random component. Second, the mathematical functions and statistical distribution parameters of the two components were confirmed by spectral analysis, curve fitting and auto-correlation analysis. Third, by recombining the two components, the acceleration was regenerated in the update rules for ships' movement. To test the performance of the model, ship traffic flows in the Dover Strait, the Changshan Channel and the Qiongzhou Strait were studied and simulated. The results show that the characteristics of the ships' velocities in the simulations are consistent with data measured by the Automatic Identification System (AIS). Although the characteristics of the traffic flow in the different areas differ, the velocities of ships are simulated correctly, demonstrating that ship velocities under the influence of weather and sea conditions can be simulated successfully using the proposed model.
Mah, In Kyoung
2017-01-01
For decades, the mechanism of skeletal patterning along a proximal-distal axis has been an area of intense inquiry. Here, we examine the development of the ribs, simple structures that in most terrestrial vertebrates consist of two skeletal elements—a proximal bone and a distal cartilage portion. While the ribs have been shown to arise from the somites, little is known about how the two segments are specified. During our examination of genetically modified mice, we discovered a series of progressively worsening phenotypes that could not be easily explained. Here, we combine genetic analysis of rib development with agent-based simulations to conclude that proximal-distal patterning and outgrowth could occur based on simple rules. In our model, specification occurs during somite stages due to varying Hedgehog protein levels, while later expansion refines the pattern. This framework is broadly applicable for understanding the mechanisms of skeletal patterning along a proximal-distal axis. PMID:29068314
Research on three-phase traffic flow modeling based on interaction range
NASA Astrophysics Data System (ADS)
Zeng, Jun-Wei; Yang, Xu-Gang; Qian, Yong-Sheng; Wei, Xu-Ting
2017-12-01
On the basis of the multiple velocity difference effect (MVDE) model and under short-range interaction, a new three-phase traffic flow model (S-MVDE) is proposed through careful consideration of how the relationship between the speeds of two adjacent cars influences the running state of the rear car. The random slowing rule of the MVDE model is modified in order to emphasize the influence of the interaction between two vehicles on the probability of deceleration. A single-lane model without a bottleneck structure is simulated under periodic boundary conditions, and it is shown that the traffic flow simulated by the S-MVDE model generates the synchronized flow of three-phase traffic theory. Under open boundaries, the model is extended by adding an on-ramp, and the congestion patterns caused by the bottleneck are simulated at different main-road and on-ramp flow rates. Compared with the congested patterns observed by Kerner et al., the results are consistent with the congestion characteristics of three-phase traffic flow theory.
NASA Astrophysics Data System (ADS)
Bultreys, Tom; Van Hoorebeke, Luc; Cnudde, Veerle
2016-09-01
The two-phase flow properties of natural rocks depend strongly on their pore structure and wettability, both of which are often heterogeneous throughout the rock. To better understand and predict these properties, image-based models are being developed. The resulting simulations are, however, problematic in several important classes of rocks with broad pore-size distributions. We present a new multiscale pore network model to simulate secondary waterflooding in these rocks, which may undergo wettability alteration after primary drainage. This novel approach makes it possible to include the effect of microporosity on the imbibition sequence without the need to describe each individual micropore. Instead, we show that fluid transport through unresolved pores can be taken into account in an upscaled fashion, by the inclusion of symbolic links between macropores, resulting in strongly decreased computational demands. Rules describing the behavior of these links in the quasistatic invasion sequence are derived from percolation theory. The model is validated by comparison to a fully detailed network representation, which takes each separate micropore into account. Strongly and weakly water-wet and oil-wet simulations show good results, as do mixed-wettability scenarios with different pore-scale wettability distributions. We also show simulations on a network extracted from a micro-CT scan of Estaillades limestone, which yield good agreement with water-wet and mixed-wet experimental results.
Image segmentation using association rule features.
Rushing, John A; Ranganath, Heggere; Hinke, Thomas H; Graves, Sara J
2002-01-01
A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images. The frequency of occurrence of these structures can be used to characterize texture. Methods for the segmentation of textured images based on association rule features are described. Simulation results using images consisting of man-made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.
Moving-window dynamic optimization: design of stimulation profiles for walking.
Dosen, Strahinja; Popović, Dejan B
2009-05-01
The overall goal of the research is to improve control for electrical stimulation-based assistance of walking in hemiplegic individuals. We present a simulation for generating an offline input (sensors)-output (intensity of muscle stimulation) representation of walking, which serves in synthesizing a rule base for the control of electrical stimulation for the restoration of walking. The simulation uses a new algorithm termed moving-window dynamic optimization (MWDO). The optimization criterion was to minimize the sum of the squares of the tracking errors from desired trajectories, with a penalty function on the total muscle effort. The MWDO was developed in the MATLAB environment and tested using target trajectories characteristic of slow-to-normal walking recorded in a healthy individual, and a model with parameters characterizing the potential hemiplegic user. The outputs of the simulation are piecewise-constant intensities of electrical stimulation and the trajectories generated when the calculated stimulation is applied to the model. We demonstrate the importance of this simulation by showing the outputs for healthy and hemiplegic individuals, using the same target trajectories. Results of the simulation show that the MWDO is an efficient tool for analyzing achievable trajectories and for determining the stimulation profiles that need to be delivered for good tracking.
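The moving-window idea, optimize stimulation over a short horizon, apply the first sample, then slide the window, can be illustrated on a toy first-order plant. The sketch below uses scipy.optimize in Python rather than the authors' MATLAB implementation; the dynamics, cost weights and window length are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import minimize

dt, tau, gain, window = 0.01, 0.2, 1.0, 10   # toy "muscle-joint" plant

def rollout(x0, u):
    x, xs = x0, []
    for ui in u:
        x += dt * (-x / tau + gain * ui)     # first-order dynamics
        xs.append(x)
    return np.array(xs)

def mwdo(target, lam=1e-3):
    x, u_applied = 0.0, []
    for k in range(len(target) - window):
        ref = target[k:k + window]
        # Squared tracking error plus a penalty on total muscle effort.
        cost = lambda u: np.sum((rollout(x, u) - ref) ** 2) + lam * np.sum(u ** 2)
        res = minimize(cost, np.zeros(window), method="L-BFGS-B",
                       bounds=[(0.0, 1.0)] * window)
        u0 = res.x[0]                        # apply only the first sample
        x += dt * (-x / tau + gain * u0)
        u_applied.append(u0)
    return np.array(u_applied)

target = 0.5 * np.sin(np.linspace(0, np.pi, 100))
print(mwdo(target)[:5])
```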
SimPackJ/S: a web-oriented toolkit for discrete event simulation
NASA Astrophysics Data System (ADS)
Park, Minho; Fishwick, Paul A.
2002-07-01
SimPackJ/S is the JavaScript and Java version of SimPack; that is, SimPackJ/S is a collection of JavaScript and Java libraries and executable programs for computer simulation. The main purpose of creating SimPackJ/S is to allow existing SimPack users to expand their simulation areas and to provide future users with a freeware simulation toolkit for modeling and simulating systems in web environments. One goal of this paper is to introduce SimPackJ/S. The other goal is to propose translation rules for converting C to JavaScript and Java. Most sections demonstrate the translation rules with examples. In addition, we discuss a 3D dynamic system model and give an overview of an approach to 3D dynamic systems using SimPackJ/S. We explain an interface between SimPackJ/S and the 3D language--Virtual Reality Modeling Language (VRML). This paper documents how to translate C to JavaScript and Java and how to utilize SimPackJ/S within a 3D web environment.
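SimPack's event-scheduling worldview boils down to a time-ordered event queue. The following minimal Python core illustrates the general pattern; it is a sketch, not the SimPackJ/S API.

```python
import heapq, itertools

class Simulator:
    def __init__(self):
        self.queue, self.now = [], 0.0
        self._ids = itertools.count()   # tiebreaker for simultaneous events

    def schedule(self, delay, handler, *args):
        heapq.heappush(self.queue,
                       (self.now + delay, next(self._ids), handler, args))

    def run(self, until=float("inf")):
        # Pop events in time order, advancing the clock to each one.
        while self.queue and self.queue[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self.queue)
            handler(*args)

sim = Simulator()

def arrival(n):
    print(f"t={sim.now:.1f}: customer {n} arrives")
    if n < 3:
        sim.schedule(2.5, arrival, n + 1)   # next arrival in 2.5 time units

sim.schedule(0.0, arrival, 1)
sim.run()
```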
An empirical and model study on automobile market in Taiwan
NASA Astrophysics Data System (ADS)
Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren
2006-03-01
We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the possession rates of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). The companies then play games in order to obtain more possession rate in the market under certain rules. Numerical simulations based on the model display a developing competition process, which qualitatively and quantitatively agrees with our empirical investigation results.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters come from the statistical information of the best individuals via a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains convergent performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in higher-dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID tuning and GA. PMID:24892059
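The algorithm's main loop, sample from a Gaussian, select elites, blend their statistics into the model with a learning rate, and retain the best individual, can be prototyped briefly. The sketch below is illustrative; the learning rate, truncation fraction and population size are assumed settings, not the exact FEGEDA parameters.

```python
import numpy as np

def fegeda(f, dim, pop=50, elite_frac=0.3, lr=0.7, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mean, sigma = np.zeros(dim), np.full(dim, 2.0)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        X = rng.normal(mean, sigma, (pop, dim))   # sample the Gaussian model
        fx = np.array([f(x) for x in X])
        order = np.argsort(fx)
        if fx[order[0]] < best_f:                 # elitism: keep best-so-far
            best_x, best_f = X[order[0]].copy(), fx[order[0]]
        elite = X[order[:int(pop * elite_frac)]]
        # "Fast learning rule": blend old parameters with elite statistics.
        mean = (1 - lr) * mean + lr * elite.mean(axis=0)
        sigma = (1 - lr) * sigma + lr * elite.std(axis=0) + 1e-12
    return best_x, best_f

sphere = lambda x: float(np.sum(x ** 2))
print(fegeda(sphere, dim=5))   # converges toward the origin
```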
Kareiva, Peter; Morse, Douglass H; Eccleston, Jill
1989-03-01
We compared the patch-choice performance of an ambush predator, the crab spider Misumena vatia (Thomisidae), hunting on common milkweed Asclepias syriaca (Asclepiadaceae) umbels with two stochastic rule-of-thumb simulation models: one that employed a threshold giving-up time and one that assumed a fixed probability of moving. Adult female Misumena were placed on milkweed plants with three umbels, each with markedly different numbers of flower-seeking prey. Using a variety of visitation regimes derived from observed visitation patterns of insect prey, we found that decreases in among-umbel variance in visitation rates, or increases in overall mean visitation rates, reduced the "clarity of the optimum" (the difference in the yield obtained as foraging behavior changes), both locally and globally. Yield profiles from both models were extremely flat or jagged over a wide range of prey visitation regimes; thus, differences between optimal and "next-best" strategies were only modest over large parts of the "foraging landscape". Although optimal yields from fixed-probability simulations were one-third to one-half those obtained from threshold simulations, spiders appear to depart umbels in accordance with the fixed-probability rule.
NASA Astrophysics Data System (ADS)
Liu, Z.; Li, Y.
2018-04-01
From the perspective of the neighborhood cellular space, this paper proposes a new urban spatial expansion model based on multi-objective grey decision-making and cellular automata (CA). The model addresses the difficulty that traditional CA transition rules have in capturing the internal spatio-temporal dynamics of urban change, and overcomes the uncertainty involved in coupling urban driving factors with urban CA. The study takes Pidu District as the research area and carries out urban spatial simulation, prediction and analysis, drawing the following conclusions: (1) The design idea of the proposed urban spatial expansion model is that the urban driving factors and the neighborhood function are tightly coupled by a multi-objective grey decision method based on geographical conditions. The simulation results show that the simulation error of urban spatial expansion is less than 5.27% and the Kappa coefficient is 0.84, indicating that the model can better capture the internal transformation mechanism of the city. (2) We simulated and predicted urban growth for Pidu District of Chengdu as a system instance and analyzed the urban growth tendency of this area, which presents a contiguous increasing mode called "urban intensive development". This expansion mode accords with sustainable development theory and ecological urbanization design theory.
Connecting clinical and actuarial prediction with rule-based methods.
Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H
2015-06-01
Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main-effects models usually employed in prediction studies, from a data- and decision-analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main-effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on the prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. The predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
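A 2-rule model of this kind reads naturally as a fast and frugal tree, i.e., sequential cue checks with an exit at each node. The sketch below is purely hypothetical: the cue names and cut-offs are invented for illustration and are not the rules RuleFit derived from the Penninx et al. (2011) data.

```python
# Hypothetical fast-and-frugal tree for predicting a chronic course;
# all cues and thresholds are invented for illustration only.

def predict_chronic_course(severity_score, duration_months, age):
    # Rule 1: high baseline severity -> predict a chronic course.
    if severity_score > 30:
        return True
    # Rule 2: long prior duration in older patients -> chronic course.
    if duration_months > 24 and age > 40:
        return True
    return False   # otherwise predict remission

print(predict_chronic_course(severity_score=35, duration_months=6, age=30))
print(predict_chronic_course(severity_score=12, duration_months=30, age=55))
```

The sequential structure is what makes such trees frugal: a prediction can be made as soon as any cue fires, so most cases never require evaluating the full cue set.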
Investigation of model-based physical design restrictions (Invited Paper)
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl
2005-05-01
As lithography and other patterning processes become more complex and more non-linear with each generation, the task of physical design rules necessarily increases in complexity also. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However the rapid increase in design rule requirement complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies for semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.
CATS - A process-based model for turbulent turbidite systems at the reservoir scale
NASA Astrophysics Data System (ADS)
Teles, Vanessa; Chauveau, Benoît; Joseph, Philippe; Weill, Pierre; Maktouf, Fakher
2016-09-01
The Cellular Automata for Turbidite systems (CATS) model is intended to simulate the fine architecture and facies distribution of turbidite reservoirs with a multi-event and process-based approach. The main processes of low-density turbulent turbidity flow are modeled: downslope sediment-laden flow, entrainment of ambient water, erosion and deposition of several distinct lithologies. This numerical model, derived from earlier work (Salles, 2006; Salles et al., 2007), proposes a new approach based on the Rouse concentration profile to consider the flow capacity to carry the sediment load in suspension. In CATS, the flow distribution on a given topography is modeled with local rules between neighboring cells (cellular automata) based on potential and kinetic energy balance and diffusion concepts. Input parameters are the initial flow parameters and a 3D topography at depositional time. An overview of CATS capabilities in different contexts is presented and discussed.
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data is needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline-interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400 m horizontal grid spacing). A novel automatic search-algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO-model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
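The three-step structure can be sketched on a one-dimensional toy field. In the Python sketch below, step 1 is a quadratic spline interpolation (a 1-D analogue of the bi-quadratic case), step 2 a deterministic correction from a high-resolution surface variable, and step 3 AR(1) noise; the regression coefficient, noise parameters, and fields are placeholders, not the rules derived by the paper's search algorithm.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

rng = np.random.default_rng(0)

# Step 0: coarse atmospheric field (e.g. near-surface temperature) and a
# known high-resolution surface variable (e.g. elevation), both hypothetical.
coarse_x = np.arange(0, 101, 10.0)
coarse_T = 15 + 2 * np.sin(coarse_x / 30)
fine_x   = np.arange(0, 100, 1.0)
elev     = rng.uniform(0, 500, fine_x.size)

# Step 1: quadratic spline interpolation of the coarse field
T_fine = make_interp_spline(coarse_x, coarse_T, k=2)(fine_x)

# Step 2: 'deterministic' rule linking a surface variable to the target
# variable; a lapse-rate-like coefficient is used as a placeholder.
T_fine += -0.0065 * (elev - elev.mean())

# Step 3: autoregressive AR(1) noise restores subgrid variability
noise = np.zeros(fine_x.size)
for i in range(1, fine_x.size):
    noise[i] = 0.8 * noise[i - 1] + rng.normal(0, 0.1)
T_fine += noise
print(T_fine[:5])
```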
2010-01-01
Background Previously, two prediction rules identifying children at risk of hearing loss and academic or behavioral limitations after bacterial meningitis were developed. Streptococcus pneumoniae as causative pathogen was an important risk factor in both. Since 2006 Dutch children receive seven-valent conjugate vaccination against S. pneumoniae. The presumed effect of vaccination was simulated by excluding all children infected by S. pneumoniae with the serotypes included in the vaccine from both previously collected cohorts (1990-1995). Methods Children infected by one of the vaccine serotypes were excluded from both original cohorts (hearing loss: 70 of 628 children; academic or behavioral limitations: 26 of 182 children). All identified risk factors were included in multivariate logistic regression models. The discriminative ability of both new models was calculated. Results The same risk factors as in the original models were significant. The discriminative ability of the original hearing loss model was 0.84 and of the new model 0.87. In the academic or behavioral limitations model it was 0.83 and 0.84, respectively. Conclusion It can be assumed that the prediction rules will also be applicable to a vaccinated population. However, vaccination does not provide 100% coverage and evidence is available that serotype replacement will occur. The impact of vaccination on serotype replacement needs to be investigated, and the prediction rules must be validated externally. PMID:20815866
GIS Data Based Automatic High-Fidelity 3D Road Network Modeling
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong
2011-01-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce 3D high-fidelity road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design are then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.
Model simulation of the Manasquan water-supply system in Monmouth County, New Jersey
Chang, Ming; Tasker, Gary D.; Nieswand, Steven
2001-01-01
Model simulation of the Manasquan Water Supply System in Monmouth County, New Jersey, was completed using historic hydrologic data to evaluate the effects of operational and withdrawal alternatives on the Manasquan reservoir and pumping system. Changes in the system operations can be simulated with the model using precipitation forecasts. The Manasquan Reservoir system model operates by using daily streamflow values, which were reconstructed from historical U.S. Geological Survey streamflow-gaging station records. The model is able to run in two modes--the General Risk Analysis Model (GRAM) and the Position Analysis Model (POSA). The GRAM simulation procedure uses reconstructed historical streamflow records to provide probability estimates of certain events, such as reservoir storage levels declining below a specific level, when given an assumed set of operating rules and withdrawal rates. POSA can be used to forecast the likelihood of specified outcomes, such as streamflows falling below statutory passing flows, associated with a specific working plan for the water-supply system over a period of months. The user can manipulate the model and generate graphs and tables of streamflows and storage, for example. This model can be used as a management tool to facilitate the development of drought warning and drought emergency rule curves and safe yield values for the water-supply system.
LISA: a java API for performing simulations of trajectories for all types of balloons
NASA Astrophysics Data System (ADS)
Conessa, Huguette
2016-07-01
LISA (LIbrarie de Simulation pour les Aerostats) is a java API for performing simulations of trajectories for all types of balloons (Zero Pressure Balloons, Pressurized Balloons, Infrared Montgolfier), and for all phases of flight (ascent, ceiling, descent). The goals of this library are to establish a reliable repository of balloon flight physics models, to capitalize on past developments, and to control the models used in different tools. It is already used for flight physics study software in CNES, to understand and reproduce the behavior of balloons observed during real flights. It will be used operationally for the ground segment of the STRATEOLE2 mission. It was developed with the quality rules of "critical software." It is based on fundamental generic concepts, linking the simulation state variables to interchangeable calculation models. Each LISA model defines how to calculate a consistent set of state variables, combining validity checks. To perform a simulation for a type of balloon and a phase of flight, it is necessary to select or create a macro-model, that is to say, a consistent set of models chosen from among those offered by LISA, defining the behavior of the environment and the balloon. The purpose of this presentation is to introduce the main concepts of LISA and the new perspectives offered by this library.
NASA Astrophysics Data System (ADS)
Du, E.; Cai, X.; Minsker, B. S.
2014-12-01
Agriculture accounts for about 80 percent of the total water consumption in the US. Under conditions of water shortage and fully committed water rights, market-based water allocations could be promising instruments for agricultural water redistribution from marginally profitable areas to more profitable ones. Previous studies of water markets have mainly focused on theoretical or statistical analysis. However, how water users' heterogeneous physical attributes and decision rules about water use and water right trading affect water market efficiency has been less addressed. In this study, we developed an agent-based model to evaluate the benefits of an agricultural water market in the Guadalupe River Basin during drought events. Agricultural agents with different attributes (i.e., soil type for crops, annual water diversion permit and precipitation) are defined to simulate the dynamic feedback between water availability, irrigation demand and water trading activity. Diversified crop irrigation rules and water bidding rules are tested in terms of crop yield, agricultural profit, and water-use efficiency. The model was coupled with a real-time hydrologic model and run under different water scarcity scenarios. Preliminary results indicate that an agricultural water market is capable of increasing crop yield, agricultural profit, and water-use efficiency. This capability is more significant under moderate drought scenarios than under mild or severe drought scenarios. The water market mechanism also increases agricultural resilience to climate uncertainty by reducing crop yield variance in drought events. The challenges of implementing an agricultural water market under climate uncertainty are also discussed.
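The market layer of such a model can be sketched very compactly. The Python sketch below clears bids and offers from agents whose permits fall short of, or exceed, their irrigation needs using a simple double auction at midpoint prices; the agent names, marginal crop values, permits, and needs are hypothetical, not values from the Guadalupe study.

```python
# Each agent: (name, marginal crop value $/m3, water permit m3, water need m3)
agents = [("rice", 0.10, 500, 800), ("orchard", 0.65, 300, 500),
          ("pasture", 0.05, 600, 400), ("vegetables", 0.45, 200, 450)]

bids, offers = [], []
for name, value, permit, need in agents:
    if need > permit:                        # buyer: bid marginal crop value
        bids.append((value, need - permit, name))
    else:                                    # seller: offer at marginal value
        offers.append((value, permit - need, name))

bids.sort(reverse=True)                      # highest willingness-to-pay first
offers.sort()                                # cheapest water first
trades = []
while bids and offers and bids[0][0] >= offers[0][0]:
    (bp, bq, bn), (op, oq, on) = bids[0], offers[0]
    q = min(bq, oq)                          # traded quantity
    trades.append((on, bn, q, round((bp + op) / 2, 3)))   # midpoint price
    bids[0], offers[0] = (bp, bq - q, bn), (op, oq - q, on)
    if bids[0][1] == 0: bids.pop(0)
    if offers[0][1] == 0: offers.pop(0)

for seller, buyer, q, price in trades:
    print(f"{seller} -> {buyer}: {q} m3 at ${price}/m3")
```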
Optimizing Reservoir Operation to Adapt to the Climate Change
NASA Astrophysics Data System (ADS)
Madadgar, S.; Jung, I.; Moradkhani, H.
2010-12-01
Climate change and upcoming variation in flood timing necessitate the adaptation of current rule curves developed for the operation of water reservoirs, so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing some possible climate conditions in the 21st century. The objective is to minimize the failure of operation to meet either designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model, including the standard operation policy and a global optimization method, tunes the current rule curve under 8 GCMs and 2 greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project streamflow for the period 2000-2100 using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all the scenarios and the expected value of the ensemble, is conducted, and performance is assessed using statistical indices including reliability, resilience, vulnerability and sustainability.
65nm OPC and design optimization by using simple electrical transistor simulation
NASA Astrophysics Data System (ADS)
Trouiller, Yorick; Devoivre, Thierry; Belledent, Jerome; Foussadier, Franck; Borjon, Amandine; Patterson, Kyle; Lucas, Kevin; Couderc, Christophe; Sundermann, Frank; Urbani, Jean-Christophe; Baron, Stanislas; Rody, Yves; Chapon, Jean-Damien; Arnaud, Franck; Entradas, Jorge
2005-05-01
In the context of 65nm logic technology where gate CD control budget requirements are below 5nm, it is mandatory to properly quantify the impact of the 2D effects on the electrical behavior of the transistor [1,2]. This study uses the following sequence to estimate the impact on transistor performance: 1) A lithographic simulation is performed after OPC (Optical Proximity Correction) of active and poly using a calibrated model at best conditions. Some extrapolation of this model can also be used to assess marginalities due to process window (focus, dose, mask errors, and overlay). In our case study, we mainly checked the poly to active misalignment effects. 2) Electrical behavior of the transistor (Ion, Ioff, Vt) is calculated based on a derivative spice model using the simulated image of the gate as an input. In most of the cases Ion analysis, rather than Vt or leakage, gives sufficient information for patterning optimization. We have demonstrated the benefit of this approach with two different examples. (1) Design rule trade-offs: we estimated the impact, with and without misalignment, of critical rules such as poly corner to active distance, active corner to poly distance, or minimum space between a small transistor and a big transistor. (2) Library standard cell debugging: we applied this methodology to the one hundred most critical transistors of our standard cell libraries and calculated Ion behavior with and without misalignment between active and poly. We compared two scanner illumination modes and two OPC versions based on the behavior of the one hundred transistors. We were able to see the benefits of one illumination, and also the improvement in OPC maturity.
Robust Strategy for Rocket Engine Health Monitoring
NASA Technical Reports Server (NTRS)
Santi, L. Michael
2001-01-01
Monitoring the health of rocket engine systems is essentially a two-phase process. The acquisition phase involves sensing physical conditions at selected locations, converting physical inputs to electrical signals, conditioning the signals as appropriate to establish scale or filter interference, and recording results in a form that is easy to interpret. The inference phase involves analysis of results from the acquisition phase, comparison of analysis results to established health measures, and assessment of health indications. A variety of analytical tools may be employed in the inference phase of health monitoring. These tools can be separated into three broad categories: statistical, rule based, and model based. Statistical methods can provide excellent comparative measures of engine operating health. They require well-characterized data from an ensemble of "typical" engines, or "golden" data from a specific test assumed to define the operating norm in order to establish reliable comparative measures. Statistical methods are generally suitable for real-time health monitoring because they do not deal with the physical complexities of engine operation. The utility of statistical methods in rocket engine health monitoring is hindered by practical limits on the quantity and quality of available data. This is due to the difficulty and high cost of data acquisition, the limited number of available test engines, and the problem of simulating flight conditions in ground test facilities. In addition, statistical methods incur a penalty for disregarding flow complexity and are therefore limited in their ability to define performance shift causality. Rule based methods infer the health state of the engine system based on comparison of individual measurements or combinations of measurements with defined health norms or rules. This does not mean that rule based methods are necessarily simple. Although binary yes-no health assessment can sometimes be established by relatively simple rules, the causality assignment needed for refined health monitoring often requires an exceptionally complex rule base involving complicated logical maps. Structuring the rule system to be clear and unambiguous can be difficult, and the expert input required to maintain a large logic network and associated rule base can be prohibitive.
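The rule-based category described above is easy to make concrete. The Python sketch below checks individual measurements against defined norms and adds one combination rule hinting at causality; the sensor names, limits, and the rule itself are hypothetical placeholders, not an actual engine rule base (which, as the text notes, can be far larger and more intricate).

```python
# Hypothetical per-sensor limits (the rule base); real rule bases are far larger
LIMITS = {"turbine_temp_K": (300.0, 1100.0),
          "pump_speed_rpm": (8000.0, 37000.0),
          "chamber_pressure_MPa": (15.0, 23.0)}

def check_rules(reading):
    """Binary per-sensor rules plus one combination rule for causality."""
    flags = {k: not (lo <= reading[k] <= hi)
             for k, (lo, hi) in LIMITS.items()}
    # Combination rule (hypothetical): low chamber pressure while the pump
    # runs fast suggests a leak rather than a pump fault.
    if flags["chamber_pressure_MPa"] and not flags["pump_speed_rpm"] \
            and reading["pump_speed_rpm"] > 30000:
        flags["suspected_cause"] = "propellant leak"
    return flags

print(check_rules({"turbine_temp_K": 900.0,
                   "pump_speed_rpm": 32000.0,
                   "chamber_pressure_MPa": 12.0}))
```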
Monine, Michael I.; Posner, Richard G.; Savage, Paul B.; Faeder, James R.; Hlavacek, William S.
2010-01-01
Abstract We use flow cytometry to characterize equilibrium binding of a fluorophore-labeled trivalent model antigen to bivalent IgE-FcεRI complexes on RBL cells. We find that flow cytometric measurements are consistent with an equilibrium model for ligand-receptor binding in which binding sites are assumed to be equivalent and ligand-induced receptor aggregates are assumed to be acyclic. However, this model predicts extensive receptor aggregation at antigen concentrations that yield strong cellular secretory responses, which is inconsistent with the expectation that large receptor aggregates should inhibit such responses. To investigate possible explanations for this discrepancy, we evaluate four rule-based models for interaction of a trivalent ligand with a bivalent cell-surface receptor that relax simplifying assumptions of the equilibrium model. These models are simulated using a rule-based kinetic Monte Carlo approach to investigate the kinetics of ligand-induced receptor aggregation and to study how the kinetics and equilibria of ligand-receptor interaction are affected by steric constraints on receptor aggregate configurations and by the formation of cyclic receptor aggregates. The results suggest that formation of linear chains of cyclic receptor dimers may be important for generating secretory signals. Steric effects that limit receptor aggregation and transient formation of small receptor aggregates may also be important. PMID:20085718
NASA Astrophysics Data System (ADS)
Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.
2009-06-01
Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.
MacGillivray, Brian H
2017-08-01
In many environmental and public health domains, heuristic methods of risk and decision analysis must be relied upon, either because problem structures are ambiguous, reliable data is lacking, or decisions are urgent. This introduces an additional source of uncertainty beyond model and measurement error - uncertainty stemming from relying on inexact inference rules. Here we identify and analyse heuristics used to prioritise risk objects, to discriminate between signal and noise, to weight evidence, to construct models, to extrapolate beyond datasets, and to make policy. Some of these heuristics are based on causal generalisations, yet can misfire when these relationships are presumed rather than tested (e.g. surrogates in clinical trials). Others are conventions designed to confer stability to decision analysis, yet which may introduce serious error when applied ritualistically (e.g. significance testing). Some heuristics can be traced back to formal justifications, but only subject to strong assumptions that are often violated in practical applications. Heuristic decision rules (e.g. feasibility rules) in principle act as surrogates for utility maximisation or distributional concerns, yet in practice may neglect costs and benefits, be based on arbitrary thresholds, and be prone to gaming. We highlight the problem of rule-entrenchment, where analytical choices that are in principle contestable are arbitrarily fixed in practice, masking uncertainty and potentially introducing bias. Strategies for making risk and decision analysis more rigorous include: formalising the assumptions and scope conditions under which heuristics should be applied; testing rather than presuming their underlying empirical or theoretical justifications; using sensitivity analysis, simulations, multiple bias analysis, and deductive systems of inference (e.g. directed acyclic graphs) to characterise rule uncertainty and refine heuristics; adopting "recovery schemes" to correct for known biases; and basing decision rules on clearly articulated values and evidence, rather than convention. Copyright © 2017. Published by Elsevier Ltd.
The research of selection model based on LOD in multi-scale display of electronic map
NASA Astrophysics Data System (ADS)
Zhang, Jinming; You, Xiong; Liu, Yingzhen
2008-10-01
This paper proposes a selection model based on LOD (level of detail) to aid the display of electronic maps. The ratio of display scale to map scale is regarded as a LOD operator. The categorization rule, classification rule, elementary rule and spatial geometry character rule for setting the LOD operator are also formulated.
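The LOD operator itself is a one-line computation. The Python sketch below treats the ratio of display scale to map scale as the operator and uses it in a minimal elementary selection rule; the feature classes and thresholds are hypothetical.

```python
def lod_operator(display_scale_denom, map_scale_denom):
    """Ratio of display scale to map scale, used as the LOD operator.
    Scales are given by their denominators (e.g. 1:50000 -> 50000),
    so (1/display) / (1/map) = map / display."""
    return map_scale_denom / display_scale_denom

def select_features(features, op):
    # Hypothetical elementary rule: a feature class is drawn only when the
    # LOD operator exceeds its per-class threshold.
    return [f for f in features if op >= f["min_lod"]]

features = [{"name": "highway", "min_lod": 0.1},
            {"name": "footpath", "min_lod": 2.0}]
print(select_features(features, lod_operator(100000, 50000)))
```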
Gonzales, Matthew J.; Sturgeon, Gregory; Segars, W. Paul; McCulloch, Andrew D.
2016-01-01
Cubic Hermite hexahedral finite element meshes have some well-known advantages over linear tetrahedral finite element meshes in biomechanical and anatomic modeling using isogeometric analysis. These include faster convergence rates as well as the ability to easily model rule-based anatomic features such as cardiac fiber directions. However, it is not possible to create closed complex objects with only regular nodes; these objects require the presence of extraordinary nodes (nodes with 3 or >= 5 adjacent elements in 2D) in the mesh. The presence of extraordinary nodes requires new constraints on the derivatives of adjacent elements to maintain continuity. We have developed a new method that uses an ensemble coordinate frame at the nodes and a local-to-global mapping to maintain continuity. In this paper, we make use of this mapping to create cubic Hermite models of the human ventricles and a four-chamber heart. We also extend the methods to the finite element equations to perform biomechanics simulations using these meshes. The new methods are validated using simple test models and applied to anatomically accurate ventricular meshes with valve annuli to simulate complete cardiac cycle simulations. PMID:27182096
Modeling social learning of language and skills.
Vogt, Paul; Haasdijk, Evert
2010-01-01
We present a model of social learning of both language and skills, while assuming—insofar as possible—strict autonomy, virtual embodiment, and situatedness. This model is built by integrating various previous models of language development and social learning, and it is this integration that, under the mentioned assumptions, provides novel challenges. The aim of the article is to investigate what sociocognitive mechanisms agents should have in order to be able to transmit language from one generation to the next so that it can be used as a medium to transmit internalized rules that represent skill knowledge. We have performed experiments where this knowledge solves the familiar poisonous-food problem. Simulations reveal under what conditions, regarding population structure, agents can successfully solve this problem. In addition to issues relating to perspective taking and mutual exclusivity, we show that agents need to coordinate interactions so that they can establish joint attention in order to form a scaffold for language learning, which in turn forms a scaffold for the learning of rule-based skills. Based on these findings, we conclude by hypothesizing that social learning at one level forms a scaffold for the social learning at another, higher level, thus contributing to the accumulation of cultural knowledge.
Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R
2014-11-01
Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier-Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. © 2014 Wiley Periodicals, Inc.
A plausible neural circuit for decision making and its formation based on reinforcement learning.
Wei, Hui; Dai, Dawei; Bu, Yijie
2017-06-01
The behavior of humans, and of lower insects, is dominated by the nervous system. Each stable behavior has its own inner steps and control rules, and is regulated by a neural circuit. Understanding how the brain influences perception, thought, and behavior is a central mandate of neuroscience. The phototactic flight of insects is a widely observed deterministic behavior. Since its movement is not stochastic, the behavior should be dominated by a neural circuit. Based on the basic firing characteristics of biological neurons and the constitution of neural circuits, we designed a plausible neural circuit for this phototactic behavior from a logic perspective. The circuit's output layer, which generates a stable spike firing rate to encode flight commands, controls the insect's angular velocity when flying. The firing pattern and connection type of excitatory and inhibitory neurons are considered in this computational model. We simulated the circuit's information processing using a distributed PC array, and used the real-time average firing rate of output neuron clusters to drive a flying behavior simulation. In this paper, we also explored how a correct neural decision circuit is generated, from a network flow view, through a bee behavior experiment based on a reward and punishment feedback mechanism. The significance of this study is as follows. Firstly, we designed a neural circuit that achieves the behavioral logic rules by strictly following the electrophysiological characteristics of biological neurons and anatomical facts. Secondly, the circuit's generality permits the design and implementation of behavioral logic rules based on the most general information processing and activity modes of biological neurons. Thirdly, through computer simulation, we achieved new understanding of the cooperative conditions under which multiple neurons achieve behavioral control. Fourthly, this study aims at understanding the information encoding mechanism and how neural circuits achieve behavior control. Finally, this study also helps establish a transitional bridge between the microscopic activity of the nervous system and macroscopic animal behavior.
Fulton, Lawrence; Kerr, Bernie; Inglis, James M; Brooks, Matthew; Bastian, Nathaniel D
2015-07-01
In this study, we re-evaluate air ambulance requirements (rules of allocation) and planning considerations based on an Army-approved, Theater Army Analysis scenario. A previous study using workload only estimated a requirement of 0.4 to 0.6 aircraft per admission, a significant increase over existence-based rules. In this updated study, we estimate requirements for Phase III (major combat operations) using a simulation grounded in previously published work, and for Phase IV (stability operations) based on four rules of allocation: unit existence rules, workload factors, theater structure (geography), and manual input. This study improves upon previous work by including the new air ambulance mission requirements of Department of Defense 51001.1, Roles and Functions of the Services, by expanding the analysis over two phases, and by considering unit rotation requirements known as Army Force Generation based on Department of Defense policy. The recommendations of this study are intended to inform future planning factors and have already provided decision support to the Army Aviation Branch in determining force structure requirements. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
GraDit: graph-based data repair algorithm for multiple data edits rule violations
NASA Astrophysics Data System (ADS)
Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.
2018-03-01
Constraint-based data cleaning captures data violations against a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally, they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations. That algorithm uses an undirected hypergraph to represent rule violations. Nevertheless, it cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of all defined rules. This representation is used to capture the interactions between violated rules and clean rules. In addition, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with an undirected graph as the violation representation model gave better data quality than the algorithm with an undirected hypergraph as the representation model.
ERIC Educational Resources Information Center
Zhang, Zhidong
2016-01-01
This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It applied rule-based analytical and cognitive task analysis methods to break down the operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…
Bilinearity in Spatiotemporal Integration of Synaptic Inputs
Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David
2014-01-01
Neurons process information via the integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third, bilinear term proportional to their product with a proportionality coefficient. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
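The symbol for the proportionality coefficient evidently did not survive text extraction. In our own notation (an assumption, since the abstract does not name the symbol), the bilinear rule reads

\[
V_S(t) \;\approx\; V_1(t) + V_2(t) + k(t)\, V_1(t)\, V_2(t),
\]

where \(V_S\) is the summed somatic potential, \(V_1\) and \(V_2\) are the postsynaptic potentials elicited separately, and \(k(t)\) is the proportionality coefficient, dependent on input times and locations but nearly independent of input strengths.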
The salt marsh vegetation spread dynamics simulation and prediction based on conditions optimized CA
NASA Astrophysics Data System (ADS)
Guan, Yujuan; Zhang, Liquan
2006-10-01
The biodiversity conservation and management of salt marsh vegetation relies on processing its spatial information. Nowadays, more attention is focused on classification surveys and qualitative descriptions of dynamics based on interpreted RS images than on quantitative simulation and prediction of the dynamics, which is of greater importance for managing and planning salt marsh vegetation. In this paper, our aim is to build a large-scale dynamic model and to provide a virtual laboratory in which researchers can run it according to their requirements. Firstly, the characteristics of cellular automata were analyzed, and it was concluded that a CA model must be extended geographically under varying space-time conditions in order for its results to match the facts accurately. Based on the conventional cellular automata model, we introduced several new conditions to optimize it for simulating the vegetation objectively, such as elevation, growth speed, invading ability, variation and inheritance. Hence the CA cells and remote sensing image pixels, cell neighbors and pixel neighbors, and cell rules and the nature of the plants were unified, respectively. JiuDuanSha was taken as the test site, which mainly holds Phragmites australis (P. australis), Scirpus mariqueter (S. mariqueter) and Spartina alterniflora (S. alterniflora) communities. The paper explored the process of simulating and predicting these salt marsh vegetation changes with the conditions-optimized CA (COCA) model, and examined the links among data, statistical models, and ecological predictions. This study exploited the potential of applying the conditions-optimized CA modeling technique to this problem.
Chiêm, Jean-Christophe; Van Durme, Thérèse; Vandendorpe, Florence; Schmitz, Olivier; Speybroeck, Niko; Cès, Sophie; Macq, Jean
2014-08-01
Various elderly case management projects have been implemented in Belgium. This type of long-term health care intervention involves contextual factors and human interactions. The underlying complex mechanisms can be usefully informed by field experts' knowledge, which is hard to make explicit. However, computer simulation has been suggested as one possible method of overcoming the difficulty of articulating such elicited qualitative views. A simulation model of case management was designed using an agent-based methodology, based on the initial qualitative research material. Variables and rules of interaction were formulated into a simple conceptual framework. This model was implemented and used as a support for a structured discussion with experts in case management. The rigorous formulation provided by the agent-based methodology clarified the descriptions of the interventions and the problems encountered regarding: the diverse network topologies of health care actors in the project; the adaptation time required by the intervention; the communication between the health care actors; the institutional context; the organization of the care; and the role of the case manager and his or her personal ability to interpret the informal demands of the frail older person. The simulation model should be seen primarily as a tool for thinking and learning. A number of insights were gained as part of a valuable cognitive process. Computer simulation supporting field experts' elicitation can lead to better-informed decisions in the organization of complex health care interventions. © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ahmadianfar, Iman; Adib, Arash; Taghian, Mehrdad
2017-10-01
Reservoir hedging rule curves are used to avoid severe water shortage during drought periods. In this method, reservoir storage is divided into several zones, and the rationing factors change immediately when the water storage level moves from one zone to another. In the present study, a hedging rule with fuzzy rationing factors was applied to create a transition zone above and below each rule curve, within which the rationing factor changes gradually. For this purpose, a monthly simulation model was developed and linked to the non-dominated sorting genetic algorithm to calculate the modified shortage index of two objective functions involving the water supply of minimum flow and agriculture demands over a long-term simulation period. The Zohre multi-reservoir system in southern Iran was considered as a case study. The proposed hedging rule improved long-term system performance by 10 to 27 percent in comparison with the simple hedging rule, demonstrating that the fuzzification of hedging factors increases the applicability and efficiency of the new hedging rule in comparison to the conventional rule curve for mitigating the water shortage problem.
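The effect of the fuzzy transition zone is simple to illustrate: instead of a rationing factor that jumps at the rule curve, a membership function interpolates between the two zones' factors. The Python sketch below uses a linear (triangular) membership; the band width, curve level, and factor values are hypothetical, not the Zohre system's calibrated values.

```python
def rationing_factor(storage, curve, band, f_above=1.0, f_below=0.6):
    """Hedging with a fuzzy transition zone around a rule curve.
    Inside the band of width 2*band around `curve`, the rationing factor
    is interpolated by a linear membership instead of jumping from
    f_above to f_below. All parameter values are hypothetical."""
    if storage >= curve + band:
        return f_above
    if storage <= curve - band:
        return f_below
    mu = (storage - (curve - band)) / (2 * band)   # membership in upper zone
    return f_below + mu * (f_above - f_below)

for s in (80, 52, 50, 48, 20):                     # storage levels, curve at 50
    print(s, round(rationing_factor(s, curve=50, band=5), 3))
```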
NASA Astrophysics Data System (ADS)
Hu, Kun; Zhu, Qi-zhi; Chen, Liang; Shao, Jian-fu; Liu, Jian
2018-06-01
As confining pressure increases, crystalline rocks of moderate porosity usually undergo a transition in failure mode from localized brittle fracture to diffused damage and ductile failure. This transition has been widely reported experimentally for several decades; however, satisfactory modeling is still lacking. The present paper aims at modeling the brittle-ductile transition process of rocks under conventional triaxial compression. Based on quantitative analyses of experimental results, it is found that there is a quite satisfactory linearity between the axial inelastic strain at failure and the confining pressure prescribed. A micromechanics-based frictional damage model is then formulated using an associated plastic flow rule and a strain energy release rate-based damage criterion. The analytical solution to the strong plasticity-damage coupling problem is provided and applied to simulate the nonlinear mechanical behaviors of Tennessee marble, Indiana limestone and Jinping marble, each presenting a brittle-ductile transition in stress-strain curves.
Toward improved simulation of river operations through integration with a hydrologic model
Morway, Eric D.; Niswonger, Richard G.; Triana, Enrique
2016-01-01
Advanced modeling tools are needed for informed water resources planning and management. Two classes of modeling tools are often used to this end: (1) distributed-parameter hydrologic models for quantifying supply and (2) river-operation models for sorting out demands under rule-based systems such as the prior-appropriation doctrine. Within each of these two broad classes of models, there are many software tools that excel at simulating the processes specific to each discipline, but they have historically over-simplified, or at worst completely neglected, aspects of the other. As a result, water managers reliant on river-operation models for administering water resources need improved tools for representing spatially and temporally varying groundwater resources in conjunctive-use systems. A new tool is described that improves the representation of groundwater/surface-water (GW-SW) interaction within a river-operations modeling context and, in so doing, advances evaluation of system-wide hydrologic consequences of new or altered management regimes.
How much expert knowledge is it worth to put in conceptual hydrological models?
NASA Astrophysics Data System (ADS)
Antonetti, Manuel; Zappa, Massimiliano
2017-04-01
Both modellers and experimentalists agree on using expert knowledge to improve our conceptual hydrological simulations on ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and put most of their knowledge into constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed and qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) that is as realistic as possible, and to define plausible narrow value ranges for each model parameter. Since, most of the time, the modelling goal is exclusively to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results due to the equifinality of hydrological models, overfitting problems and the numerous uncertainty sources affecting runoff simulations. Therefore, to test to what extent expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' modelling framework, relying on parameter and process constraints defined based on expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Also, combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to meteorological input data and catchment initial conditions.
An expert system to manage the operation of the Space Shuttle's fuel cell cryogenic reactant tanks
NASA Technical Reports Server (NTRS)
Murphey, Amy Y.
1990-01-01
This paper describes a rule-based expert system to manage the operation of the Space Shuttle's cryogenic fuel system. Rules are based on standard fuel tank operating procedures described in the EECOM Console Handbook. The problem of configuring the operation of the Space Shuttle's fuel tanks is well-bounded and well defined. Moreover, the solution of this problem can be encoded in a knowledge-based system. Therefore, a rule-based expert system is the appropriate paradigm. Furthermore, the expert system could be used in coordination with power system simulation software to design operating procedures for specific missions.
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy or missing input information. The conventional learning algorithm for tuning the parameters of fuzzy rules using training input-output data usually ends in a weak firing state, which undermines the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN) on training input-output data, based on the gradient descent method. The new learning algorithm addresses the weak-firing problem of the conventional method. We illustrate the efficiency of the new learning algorithm by means of numerical examples; MATLAB R2014a software was used to simulate our results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
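Gradient-descent tuning of fuzzy rule parameters can be sketched generically. The Python sketch below updates the centers, widths, and consequents of Gaussian (RBF-style) membership functions to fit a toy target function; this is a standard gradient scheme under our own assumptions, not the authors' exact algorithm, and the learning rate, rule count, and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (200, 1))            # training inputs (hypothetical)
y = np.sin(X[:, 0])                         # target outputs (hypothetical)

M = 7                                       # number of fuzzy rules
c = np.linspace(-2, 2, M)                   # rule centers
s = np.full(M, 0.5)                         # rule widths
w = np.zeros(M)                             # rule consequents
lr = 0.02

for epoch in range(100):
    for x, t in zip(X[:, 0], y):
        mu = np.exp(-((x - c) ** 2) / (2 * s ** 2))   # rule firing strengths
        g = mu / (mu.sum() + 1e-12)                   # normalized firing
        yhat = (w * g).sum()                          # weighted-average output
        e = yhat - t
        # gradients of 0.5*e^2 w.r.t. consequents, centers, widths
        w -= lr * e * g
        common = e * (w - yhat) * g
        c -= lr * common * (x - c) / s ** 2
        s -= lr * common * (x - c) ** 2 / s ** 3
        s = np.clip(s, 0.1, None)                     # keep widths positive

mu_all = np.exp(-((X - c) ** 2) / (2 * s ** 2))       # (200, M) firing matrix
pred = (w * mu_all).sum(axis=1) / mu_all.sum(axis=1)
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```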
Wave propagation modeling in composites reinforced by randomly oriented fibers
NASA Astrophysics Data System (ADS)
Kudela, Pawel; Radzienski, Maciej; Ostachowicz, Wieslaw
2018-02-01
A new method for prediction of elastic constants in randomly oriented fiber composites is proposed. It is based on mechanics of composites, the rule of mixtures and total mass balance tailored to the spectral element mesh composed of 3D brick elements. Selected elastic properties predicted by the proposed method are compared with values obtained by another theoretical method. The proposed method is applied for simulation of Lamb waves in glass-epoxy composite plate reinforced by randomly oriented fibers. Full wavefield measurements conducted by the scanning laser Doppler vibrometer are in good agreement with simulations performed by using the time domain spectral element method.
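For reference, the standard rule-of-mixtures forms for a unidirectional ply are shown below (the Voigt estimate for the longitudinal modulus and the Reuss estimate for the transverse one); the paper adapts the rule of mixtures to randomly oriented fibers, and its exact adaptation is not given in the abstract:

\[
E_1 = V_f E_f + (1 - V_f) E_m, \qquad
\frac{1}{E_2} = \frac{V_f}{E_f} + \frac{1 - V_f}{E_m},
\]

where \(V_f\) is the fiber volume fraction and \(E_f\), \(E_m\) are the fiber and matrix moduli.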
Influence of dispatching rules on average production lead time for multi-stage production systems.
Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus
2013-08-01
In this paper, the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 analytically links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant to the applied dispatching rule, whereas it can be used as a dispatching-rule-independent indicator for single-stage production systems.
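The covariance link can be made explicit. With weights equal to the processing times \(P_i\), the processing-time-weighted mean of the lead times \(L_i\) satisfies the standard identity (our rendering, in the spirit of the paper's Theorem 1 but not copied from it):

\[
\bar{L}_w \;=\; \frac{\mathbb{E}[P\,L]}{\mathbb{E}[P]}
\;=\; \mathbb{E}[L] + \frac{\operatorname{Cov}(P, L)}{\mathbb{E}[P]},
\]

so any dispatching rule that induces a nonzero covariance between processing time and lead time (for instance, shortest-processing-time-first, which tends to give long jobs long lead times) separates the weighted mean from the plain average lead time.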
Fayle, Tom M; Eggleton, Paul; Manica, Andrea; Yusah, Kalsum M; Foster, William A
2015-01-01
Understanding how species assemble into communities is a key goal in ecology. However, assembly rules are rarely tested experimentally, and their ability to shape real communities is poorly known. We surveyed a diverse community of epiphyte-dwelling ants and found that similar-sized species co-occurred less often than expected. Laboratory experiments demonstrated that invasion was discouraged by the presence of similarly sized resident species. The size difference for which invasion was less likely was the same as that for which wild species exhibited reduced co-occurrence. Finally we explored whether our experimentally derived assembly rules could simulate realistic communities. Communities simulated using size-based species assembly exhibited diversities closer to wild communities than those simulated using size-independent assembly, with results being sensitive to the combination of rules employed. Hence, species segregation in the wild can be driven by competitive species assembly, and this process is sufficient to generate observed species abundance distributions for tropical epiphyte-dwelling ants. PMID:25622647
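The experimentally derived rule translates directly into a null-model simulation. The Python sketch below assembles communities in which an invader fails whenever a resident lies within a minimum size difference, and contrasts that with size-independent assembly; the species pool, attempt count, and threshold are hypothetical, not the fitted values from the study.

```python
import random

random.seed(42)
pool = [round(random.uniform(1.0, 10.0), 2) for _ in range(50)]  # body sizes (mm)
MIN_DIFF = 1.0   # hypothetical size difference below which invasion fails

def assemble(pool, n_attempts=200, size_based=True):
    community = []
    for _ in range(n_attempts):
        invader = random.choice(pool)
        if invader in community:
            continue                        # species already resident
        blocked = size_based and any(abs(invader - r) < MIN_DIFF
                                     for r in community)
        if not blocked:
            community.append(invader)
    return community

print("size-based assembly   :", len(assemble(pool)), "species")
print("size-independent      :", len(assemble(pool, size_based=False)), "species")
print("sizes (size-based)    :", sorted(assemble(pool)))
```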
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet, such models' use of optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet, this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Besides, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions require only linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g., of stochastically generated plausible future river inflows).
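The double backward induction can be sketched schematically on a toy tree-shaped network. In the Python sketch below, values propagate outlet-to-headwaters within each time step (an extra unit can be used locally or passed downstream) and then backwards in time (it can be stored, subject to a loss factor); the network topology, marginal benefits, and carryover factor are hypothetical, and real implementations would include per-node storage limits and losses.

```python
# Toy network: each node maps to its downstream neighbor (None at the outlet)
downstream = {"head1": "mid", "head2": "mid", "mid": "outlet", "outlet": None}
topo_order = ["outlet", "mid", "head1", "head2"]       # outlet first

# marginal_benefit[t][node]: benefit of using one extra unit locally at t
marginal_benefit = [
    {"head1": 2.0, "head2": 0.5, "mid": 1.0, "outlet": 3.0},   # t = 0
    {"head1": 0.2, "head2": 0.1, "mid": 4.0, "outlet": 1.0},   # t = 1
]
carryover = 0.95   # fraction of stored water surviving one time step

T = len(marginal_benefit)
value = [dict() for _ in range(T + 1)]
value[T] = {n: 0.0 for n in downstream}     # no value beyond the horizon
for t in reversed(range(T)):                # backward in time
    for node in topo_order:                 # outlet -> headwaters
        down = downstream[node]
        options = [marginal_benefit[t][node],          # use locally now
                   carryover * value[t + 1][node]]     # store until t+1
        if down is not None:
            options.append(value[t][down])             # pass downstream now
        value[t][node] = max(options)

for t in range(T):
    print(t, value[t])
```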
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparison with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high-energy component of secondary cosmic-ray neutrons, from 10 MeV up to several hundred MeV, is the most significant source of soft errors regardless of design rule.
Genetic algorithms in adaptive fuzzy control
NASA Technical Reports Server (NTRS)
Karr, C. Lucas; Harper, Tony R.
1992-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.
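The learning element can be sketched compactly. In the Python sketch below, a genetic algorithm searches over a small parameter vector (standing in for fuzzy membership-function adjustments) to minimize control error on a toy plant; the plant, fitness function, and encoding are hypothetical placeholders, not the Bureau of Mines system.

```python
import random

random.seed(0)

def control_error(params):
    # Hypothetical fitness: squared tracking error of a toy controller
    # regulating x toward 0; `params` stand in for membership widths.
    x, err = 5.0, 0.0
    for _ in range(50):
        u = -sum(p * x / (1 + abs(x) * i) for i, p in enumerate(params, 1))
        x += 0.1 * u
        err += x * x
    return err

pop = [[random.uniform(0, 1) for _ in range(3)] for _ in range(20)]
for gen in range(40):
    pop.sort(key=control_error)
    parents = pop[:10]                      # truncation selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 3)        # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(3)             # Gaussian point mutation
        child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
        children.append(child)
    pop = parents + children

print("best parameters:", [round(p, 3) for p in min(pop, key=control_error)])
```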
NASA Astrophysics Data System (ADS)
Moura, R. C.; Mengaldo, G.; Peiró, J.; Sherwin, S. J.
2017-02-01
We present estimates of spectral resolution power for under-resolved turbulent Euler flows obtained with high-order discontinuous Galerkin (DG) methods. The '1% rule' based on linear dispersion-diffusion analysis introduced by Moura et al. (2015) [10] is here adapted for 3D energy spectra and validated through the inviscid Taylor-Green vortex problem. The 1% rule estimates the wavenumber beyond which numerical diffusion induces an artificial dissipation range on measured energy spectra. As the original rule relies on standard upwinding, different Riemann solvers are tested. Very good agreement is found for solvers which treat the different physical waves in a consistent manner. Relatively good agreement is still found for simpler solvers. The latter however displayed spurious features attributed to the inconsistent treatment of different physical waves. It is argued that, in the limit of vanishing viscosity, such features might have a significant impact on robustness and solution quality. The estimates proposed are regarded as useful guidelines for no-model DG-based simulations of free turbulence at very high Reynolds numbers.
Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.
2015-01-01
We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both the rule-based and information-integration perspectives. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779) and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online dataset, in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion of sequential rule-based and information-integration perspectives on cognitive development. PMID:26505905
Guillemot, Joannès; Delpierre, Nicolas; Vallet, Patrick; François, Christophe; Martin-StPaul, Nicolas K; Soudani, Kamel; Nicolas, Manuel; Badeau, Vincent; Dufrêne, Eric
2014-09-01
The structure of a forest stand, i.e. the distribution of tree size features, has strong effects on its functioning. The management of the structure is therefore an important tool in mitigating the impact of predicted changes in climate on forests, especially with respect to drought. Here, a new functional-structural model is presented and is used to assess the effects of management on forest functioning at a national scale. The stand process-based model (PBM) CASTANEA was coupled to a stand structure module (SSM) based on empirical tree-to-tree competition rules. The calibration of the SSM was based on a thorough analysis of intersite and interannual variability of competition asymmetry. The coupled CASTANEA-SSM model was evaluated across France using forest inventory data, and used to compare the effect of contrasted silvicultural practices on simulated stand carbon fluxes and growth. The asymmetry of competition varied consistently with stand productivity at both spatial and temporal scales. The modelling of the competition rules enabled efficient prediction of changes in stand structure within the CASTANEA PBM. The coupled model predicted an increase in net primary productivity (NPP) with management intensity, resulting in higher growth. This positive effect of management was found to vary at a national scale across France: the highest increases in NPP were attained in forests facing moderate to high water stress; however, the absolute effect of management on simulated stand growth remained moderate to low because stand thinning involved changes in carbon allocation at the tree scale. This modelling approach helps to identify the areas where management efforts should be concentrated in order to mitigate near-future drought impact on national forest productivity. Around a quarter of the French temperate oak and beech forests are currently in zones of high vulnerability, where management could thus mitigate the influence of climate change on forest yield.
Non-fragile consensus algorithms for a network of diffusion PDEs with boundary local interaction
NASA Astrophysics Data System (ADS)
Xiong, Jun; Li, Junmin
2017-07-01
In this study, a non-fragile consensus algorithm is proposed to solve the average consensus problem of a network of diffusion PDEs, modelled by boundary-controlled heat equations. The problem deals with the case where the Neumann-type boundary controllers are corrupted by additive persistent disturbances. To achieve consensus between agents, a linear local interaction rule addressing this requirement is given. The proposed local interaction rules are analysed by applying a Lyapunov-based approach. Multiplicative and additive non-fragile feedback control algorithms are designed, and sufficient conditions for the consensus of the multi-agent systems are presented in terms of linear matrix inequalities. Simulation results are presented to support the effectiveness of the proposed algorithms.
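As a rough illustration only (the non-fragile, disturbance-corrupted aspect is not modelled), the toy discretization below gives each agent a 1-D heat equation whose right-end flux follows a linear consensus feedback on its neighbours' boundary values over an assumed ring topology:

```python
import numpy as np

N, M, dt, kappa, g = 4, 20, 0.05, 1.0, 0.5   # agents, cells per rod, step, diffusivity, gain
rng = np.random.default_rng(1)
u = rng.uniform(0, 10, size=(N, M))          # initial temperature profiles
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}   # ring graph

for _ in range(20000):
    du = np.zeros_like(u)
    du[:, 1:-1] = u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]   # interior diffusion
    du[:, 0] = u[:, 1] - u[:, 0]                          # insulated left end
    ctrl = np.array([g * sum(u[j, -1] - u[i, -1] for j in neighbors[i])
                     for i in range(N)])                  # consensus boundary feedback
    du[:, -1] = (u[:, -2] - u[:, -1]) + ctrl              # controlled right end
    u += dt * kappa * du

print(np.round(u.mean(axis=1), 3))   # all profiles settle near one common value
```

Because the feedback is antisymmetric over the ring, the total heat is conserved and every rod relaxes toward the global average, which is the average consensus property.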
Mathematical interpretation of Brownian motor model: Limit cycles and directed transport phenomena
NASA Astrophysics Data System (ADS)
Yang, Jianqiang; Ma, Hong; Zhong, Suchuang
2018-03-01
In this article, we first suggest that the attractor of the Brownian motor model is one of the reasons for the directed transport phenomenon of a Brownian particle. We take the classical Smoluchowski-Feynman (SF) ratchet model as an example to investigate the relationship between limit cycles and the directed transport phenomenon of the Brownian particle. We study the existence and variation rules of the limit cycles of the SF ratchet model as its parameters change, using mathematical methods. The influences of these parameters on the directed transport phenomenon of a Brownian particle are then analyzed through numerical simulations. Reasonable mathematical explanations for the directed transport phenomenon of a Brownian particle in the SF ratchet model are also formulated on the basis of the existence and variation rules of the limit cycles and the numerical simulations. These mathematical explanations provide a theoretical basis for applying these theories in physics, biology, chemistry, and engineering.
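For readers who want to reproduce the basic effect numerically, the sketch below integrates an overdamped Langevin equation for a rocked ratchet, an inertialess caricature of the SF motor, with the Euler-Maruyama scheme; the potential, drive, and noise strength are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def ratchet_force(x):
    """Force from the asymmetric periodic potential V(x) = sin(x) + 0.25*sin(2x)."""
    return -(np.cos(x) + 0.5 * np.cos(2 * x))

n, steps, dt, D = 2000, 50000, 1e-3, 0.5   # particles, steps, time step, noise strength
x = np.zeros(n)
for t in range(steps):
    # unbiased square-wave rocking: zero mean, so any drift is ratchet-induced
    drive = 2.0 * np.sign(np.sin(2 * np.pi * 0.1 * t * dt))
    x += (ratchet_force(x) + drive) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)

# a systematic mean displacement accumulating over many drive periods
# indicates directed transport despite the unbiased forcing
print("mean displacement:", round(float(x.mean()), 3))
```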
Assessment of fatigue life of remanufactured impeller based on FEA
NASA Astrophysics Data System (ADS)
Xu, Lei; Cao, Huajun; Liu, Hailong; Zhang, Yubo
2016-09-01
Predicting the fatigue life of remanufactured centrifugal compressor impellers is a critical problem. In this paper, the S-N curve data were obtained by combining experimentation and theoretical deduction. The load spectrum was compiled by the rain-flow counting method based on comprehensive consideration of the centrifugal force, residual stress, and aerodynamic loads in the repair region. A fatigue life simulation model was built, and fatigue life was analyzed based on the fatigue cumulative damage rule. Although incapable of providing a high-precision prediction, the simulation results were useful for the analysis of fatigue life impact factors and fatigue fracture areas. Results showed that load amplitude greatly affects fatigue life, that the impeller should be protected from over-speed operation, and that the predicted fatigue life safely covers the next service cycle at the rated speed.
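The cumulative damage step can be illustrated in a few lines: a Palmgren-Miner sum over a rain-flow counted load spectrum evaluated against a Basquin-type S-N curve; all constants and the example spectrum are hypothetical, not the impeller data:

```python
SIGMA_F, B = 900.0, -0.09   # hypothetical Basquin coefficients: S = SIGMA_F * (2N)^B

def cycles_to_failure(S):
    """Invert the Basquin S-N curve for the allowable cycle count N."""
    return 0.5 * (S / SIGMA_F) ** (1.0 / B)

# hypothetical rain-flow output: (stress amplitude in MPa, counted cycles) per block
spectrum = [(400.0, 1.0e3), (300.0, 5.0e3), (200.0, 2.0e4)]

damage = sum(n / cycles_to_failure(S) for S, n in spectrum)   # Miner's linear sum
print(f"damage per block = {damage:.3f}; predicted life = {1.0 / damage:.1f} blocks")
```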
Phase transitions in the q-voter model with noise on a duplex clique
NASA Astrophysics Data System (ADS)
Chmiel, Anna; Sznajd-Weron, Katarzyna
2015-11-01
We study a nonlinear q-voter model with stochastic noise, interpreted in the social context as independence, on a duplex network. To study the role of the multilevelness in this model we propose three methods of transferring the model from a mono- to a multiplex network. They take into account two criteria: one related to the status of independence (LOCAL vs GLOBAL) and one related to peer pressure (AND vs OR). In order to examine the influence of the presence of more than one level in the social network, we perform simulations on a particularly simple multiplex: a duplex clique, which consists of two fully overlapped complete graphs (cliques). Solving numerically the rate equation and simultaneously conducting Monte Carlo simulations, we provide evidence that even a simple rearrangement into a duplex topology may lead to significant changes in the observed behavior. However, qualitative changes in the phase transitions can be observed for only one of the considered rules: LOCAL&AND. For this rule the phase transition becomes discontinuous for q = 5, whereas for a monoplex such behavior is observed for q = 6. Interestingly, only this rule admits construction of realistic variants of the model, in line with recent social experiments.
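The single-level building block of the model is straightforward to simulate; the sketch below runs the nonlinear q-voter rule with independence p on one complete graph (the duplex LOCAL/GLOBAL and AND/OR combinations layer two such levels and are omitted here; all sizes and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, q, p, sweeps = 200, 5, 0.1, 500     # agents, panel size, independence, MC sweeps
spins = np.ones(N, dtype=int)          # start from full consensus

for _ in range(sweeps * N):
    i = rng.integers(N)
    if rng.random() < p:               # independent (noisy) agent: flip a fair coin
        spins[i] = 1 if rng.random() < 0.5 else -1
    else:                              # conformist update: copy a unanimous q-panel
        panel = rng.choice(np.delete(np.arange(N), i), size=q, replace=False)
        if np.all(spins[panel] == spins[panel][0]):
            spins[i] = spins[panel][0]

print("magnetization m =", spins.mean())   # |m| > 0 when p is below threshold
```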
Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T
Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. From these five selected scenarios, we characterize them into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
Object oriented studies into artificial space debris
NASA Technical Reports Server (NTRS)
Adamson, J. M.; Marshall, G.
1988-01-01
A prototype simulation is being developed under contract to the Royal Aerospace Establishment (RAE), Farnborough, England, to assist in the discrimination of artificial space objects/debris. The methodology undertaken has been to link Object Oriented programming, intelligent knowledge based system (IKBS) techniques and advanced computer technology with numeric analysis to provide a graphical, symbolic simulation. The objective is to provide an additional layer of understanding on top of conventional classification methods. Use is being made of object and rule based knowledge representation, multiple reasoning, truth maintenance and uncertainty. Software tools being used include Knowledge Engineering Environment (KEE) and SymTactics for knowledge representation. Hooks are being developed within the SymTactics framework to incorporate mathematical models describing orbital motion and fragmentation. Penetration and structural analysis can also be incorporated. SymTactics is an Object Oriented discrete event simulation tool built as a domain specific extension to the KEE environment. The tool provides facilities for building, debugging and monitoring dynamic (military) simulations.
Modelling language evolution: Examples and predictions
NASA Astrophysics Data System (ADS)
Gong, Tao; Shuai, Lan; Zhang, Menghan
2014-06-01
We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolution of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
Modelling the morphology of migrating bacterial colonies
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Tokihiro, T.; Badoual, M.; Grammaticos, B.
2010-08-01
We present a model which aims at describing the morphology of colonies of Proteus mirabilis and Bacillus subtilis. Our model is based on a cellular automaton which is obtained by the adequate discretisation of a diffusion-like equation, describing the migration of the bacteria, to which we have added rules simulating the consolidation process. Our basic assumption, following the findings of the group at Chuo University, is that the migration and consolidation processes are controlled by the local density of the bacteria. We show that it is possible within our model to reproduce the morphological diagrams of both bacteria species. Moreover, we model some detailed experiments done by the Chuo University group, obtaining fine agreement.
A novel approach for connecting temporal-ontologies with blood flow simulations.
Weichert, F; Mertens, C; Walczak, L; Kern-Isberner, G; Wagner, M
2013-06-01
In this paper an approach for developing a temporal domain ontology for biomedical simulations is introduced. The ideas are presented in the context of simulations of blood flow in aneurysms using the Lattice Boltzmann Method. The advantages of using ontologies are manifold: on the one hand, ontologies have been proven able to provide specialized medical knowledge, e.g., key parameters for simulations. On the other hand, based on a set of rules and the usage of a reasoner, a system for checking the plausibility as well as tracking the outcome of medical simulations can be constructed. Likewise, results of simulations, including data derived from them, can be stored and communicated in a way that can be understood by computers. Later on, this set of results can be analyzed. At the same time, the ontologies provide a way to exchange knowledge between researchers. Lastly, this approach can be seen as a black-box abstraction of the internals of the simulation for the biomedical researcher as well. This approach is able to provide the complete parameter sets for simulations, part of the corresponding results and part of their analysis, as well as, e.g., geometry and boundary conditions. These inputs can be transferred to different simulation methods for comparison. Variations on the provided parameters can be automatically used to drive these simulations. Using a rule base, unphysical inputs or outputs of the simulation can be detected and communicated to the physician in a suitable and familiar way. An example of an instantiation of the blood flow simulation ontology and exemplary rules for plausibility checking are given. Copyright © 2013 Elsevier Inc. All rights reserved.
Fuchs, L; Beeneken, T
2005-01-01
The paper describes the realization of a real-time control for the Vienna sewer system. The project is scheduled for completion in 2004. The 3.5-year project comprises all planning stages, starting with the recording of data up to the planning of measuring and controlling units. The concrete steps of the planning stages are explained. A measuring system including 25 rainfall measurements, 40 flow measurements and 20 water level measurements is implemented as an online system. This measuring system is designed to achieve two objectives: on the one hand the real-time control, and on the other hand the calibration of the model that is used for the hydrodynamic sewer system simulation. The approximately 53,000 pipes have served to generate a coarse network of no more than about 2,600 pipes. The area data were derived with high accuracy from available aerial photograph interpretations. With simulation runs of a rule-based control software, the system operation was examined. A self-learning system will improve the rule basis. A forecasting model that uses weather observation radar will additionally influence the controlling decisions. The findings from the investigations are immediately considered in the planning of measuring and control units. The simulated results for the first phase of implementation, which demonstrate the benefit of RTC for the Vienna sewer system, are explained.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther
2015-01-01
Several key capabilities have been identified by the aerospace community as lacking in the material models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites composed of a variety of composite architectures, including laminates, weaves and braids. In addition, a need has been expressed to have a material model that utilizes tabulated experimentally based input to define the evolution of plasticity and damage, as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Incorporating rate dependence into the yield function is achieved by using a series of tabulated input curves, each at a different constant strain rate. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. This paper presents in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods in which a combination of actual testing and selective numerical testing can be combined to yield the appropriate input data for the model will be described. A specific laminated polymer matrix composite will be examined to demonstrate the application of the model.
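The radial return idea is easiest to see in one dimension. The hedged sketch below performs the classic elastic-predictor/plastic-corrector update with linear isotropic hardening; the actual model is orthotropic, rate-dependent, and table-driven, so this is only the scalar skeleton of the stress update, with assumed constants:

```python
import numpy as np

E, H, SIGMA_Y0 = 70e3, 2e3, 250.0   # elastic modulus, hardening modulus, yield stress (MPa)

def radial_return(dstrain, sigma, eps_p):
    """One strain-driven step: elastic predictor followed by plastic corrector."""
    sigma_trial = sigma + E * dstrain
    f = abs(sigma_trial) - (SIGMA_Y0 + H * eps_p)       # trial yield function
    if f <= 0.0:
        return sigma_trial, eps_p                       # step stays elastic
    dgamma = f / (E + H)                                # enforce consistency f = 0
    return sigma_trial - E * dgamma * np.sign(sigma_trial), eps_p + dgamma

sigma, eps_p = 0.0, 0.0
for de in np.full(40, 2.5e-4):                          # ramp total strain to 1%
    sigma, eps_p = radial_return(de, sigma, eps_p)
print(f"stress = {sigma:.1f} MPa, accumulated plastic strain = {eps_p:.5f}")
```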
Neuromodulated Synaptic Plasticity on the SpiNNaker Neuromorphic System
Mikaitis, Mantas; Pineda García, Garibaldi; Knight, James C.; Furber, Steve B.
2018-01-01
SpiNNaker is a digital neuromorphic architecture, designed specifically for the low power simulation of large-scale spiking neural networks at speeds close to biological real-time. Unlike other neuromorphic systems, SpiNNaker allows users to develop their own neuron and synapse models as well as specify arbitrary connectivity. As a result SpiNNaker has proved to be a powerful tool for studying different neuron models as well as synaptic plasticity, believed to be one of the main mechanisms behind learning and memory in the brain. A number of Spike-Timing-Dependent Plasticity (STDP) rules have already been implemented on SpiNNaker and have been shown to be capable of solving various learning tasks in real-time. However, while STDP is an important biological theory of learning, it is a form of Hebbian or unsupervised learning and therefore does not explain behaviors that depend on feedback from the environment. Instead, learning rules based on neuromodulated STDP (three-factor learning rules) have been shown to be capable of solving reinforcement learning tasks in a biologically plausible manner. In this paper we demonstrate for the first time how a model of three-factor STDP, with the third factor representing spikes from dopaminergic neurons, can be implemented on the SpiNNaker neuromorphic system. Using this learning rule we first show how reward and punishment signals can be delivered to a single synapse before going on to demonstrate it in a larger network which solves the credit assignment problem in a Pavlovian conditioning experiment. Because of its extra complexity, we find that our three-factor learning rule requires approximately 2× as much processing time as the existing SpiNNaker STDP learning rules. However, we show that it is still possible to run our Pavlovian conditioning model with up to 1 × 10^4 neurons in real-time, opening up new research opportunities for modeling behavioral learning on SpiNNaker. PMID:29535600
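A hedged sketch of the three-factor idea in the style of dopamine-modulated STDP (not SpiNNaker's actual kernels): pair-based STDP writes into an eligibility trace, and the weight moves only when a dopamine signal overlaps that trace; all time constants and rates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1.0, 500                       # ms resolution, total duration
TAU_PRE = TAU_POST = 20.0              # STDP trace time constants (ms)
TAU_C, TAU_D = 200.0, 50.0             # eligibility and dopamine time constants
A_PLUS, A_MINUS = 0.01, 0.012

pre = rng.random(T) < 0.05             # Poisson-like pre/post spike trains
post = rng.random(T) < 0.05
reward_times = {300}                   # a dopamine pulse arrives at t = 300 ms

x = y = c = d = 0.0                    # pre trace, post trace, eligibility, dopamine
w = 0.5
for t in range(T):
    x += dt * (-x / TAU_PRE) + (1.0 if pre[t] else 0.0)
    y += dt * (-y / TAU_POST) + (1.0 if post[t] else 0.0)
    stdp = (A_PLUS * x if post[t] else 0.0) - (A_MINUS * y if pre[t] else 0.0)
    c += dt * (-c / TAU_C) + stdp      # STDP is banked in the eligibility trace ...
    d += dt * (-d / TAU_D) + (1.0 if t in reward_times else 0.0)
    w += dt * c * d                    # ... and cashed in only under dopamine
print(f"final weight: {w:.4f}")
```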
Structurally Dynamic Spin Market Networks
NASA Astrophysics Data System (ADS)
Horváth, Denis; Kuscsik, Zoltán
An agent-based model of stock price dynamics on a directed evolving complex network is suggested and studied by direct simulation. The stationary regime is maintained as a result of the balance between the extremal dynamics, the adaptivity of strategic variables and the reconnection rules. The inherent structure of the node agent "brain" is modeled by a recursive neural network with local and global inputs and feedback connections. For specific parametric combinations the complex network displays the small-world phenomenon combined with scale-free behavior. The identification of a local leader (network hub, an agent whose strategies are frequently adopted by its neighbors) is carried out by a repeated random walk process through the network. The simulations show empirically relevant dynamics of price returns and volatility clustering. Additional emerging aspects of stylized market statistics are Zipfian distributions of fitness.
Terahertz emission driven by two-color laser pulses at various frequency ratios
NASA Astrophysics Data System (ADS)
Wang, W.-M.; Sheng, Z.-M.; Li, Y.-T.; Zhang, Y.; Zhang, J.
2017-08-01
We present a simulation study of terahertz radiation from a gas driven by two-color laser pulses in a broad range of frequency ratios ω1/ω0. Our particle-in-cell simulation results show that there are three series with ω1/ω0 = 2n, n + 1/2, and n ± 1/3 (n is a positive integer) for high-efficiency and stable radiation generation. The radiation strength basically decreases with increasing ω1 and scales linearly with the laser wavelength. These rules are broken when ω1/ω0 < 1, and much stronger radiation may be generated at any ω1/ω0. These results can be explained with a model based on gas ionization by two linearly superposed laser fields, rather than a multiwave mixing model.
New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times
NASA Astrophysics Data System (ADS)
Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid
2017-09-01
In the literature, the multi-objective dynamic scheduling problem and simple priority rules are widely studied. While these simple rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules perform very well because they are derived from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. It is clear from the experimental results that the composite dispatching rules formed by genetic programming perform better in minimizing mean flow time and mean tardiness than the others.
Experimental study and simulation of space charge stimulated discharge
NASA Astrophysics Data System (ADS)
Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.
2002-11-01
The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.
ERIC Educational Resources Information Center
Van Norman, Ethan R.; Parker, David C.
2018-01-01
Recent simulations suggest that trend line decision rules applied to curriculum-based measurement of reading progress monitoring data may lead to inaccurate interpretations unless data are collected for upward of 3 months. The authors of those studies did not manipulate goal line slope or account for a student's level of initial performance when…
Analysis of electric vehicle extended range misalignment based on rigid-flexible dynamics
NASA Astrophysics Data System (ADS)
Xu, Xiaowei; Lv, Mingliang; Chen, Zibo; Ji, Wei; Gao, Ruiceng
2017-04-01
The safety of the extended-range electric vehicle is seriously affected by misalignment faults. Therefore, this paper analyzes extended-range electric vehicle misalignment based on rigid-flexible dynamics. By comprehensively applying rigid-flexible hybrid modeling and machinery fault diagnosis methods, a hybrid rigid-flexible mechanical model of the range extender was established by means of the software ADAMS and ANSYS. By setting the relevant parameters to simulate shafting misalignment, the failure phenomena, spectra and evolution rules were analyzed. It is concluded that the 0.5× and 1× harmonics can be considered characteristic parameters for misalignment diagnostics of the extended-range electric vehicle.
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of energy loss for subsystems is modeled. Dynamic programming (DP) technique is applied to find the optimal control strategy including upshift threshold, downshift threshold, and power split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant improvement in reducing energy loss due to the dual-motor coupling-propulsion system (DMCPS) running is realized without increasing the frequency of the mode switch. PMID:25540814
eFSM--a novel online neural-fuzzy semantic memory model.
Tung, Whye Loon; Quek, Chai
2010-01-01
Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases that represent the acquired knowledge are static and cannot be trained to improve the modeling performance. This subsequently leads to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research efforts have been directed to the study of evolving Takagi-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, there are very few incremental learning Mamdani-type NFSs reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed from the emergence of novel training data, and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned. This enables eFSM to maintain a current and compact set of Mamdani-type if-then fuzzy rules that collectively generalizes and describes the salient associative mappings between the inputs and outputs of the underlying process being modeled. The learning and modeling performances of the proposed eFSM are evaluated using several benchmark applications and the results are encouraging.
Modeling of a production system using the multi-agent approach
NASA Astrophysics Data System (ADS)
Gwiazda, A.; Sękala, A.; Banaś, W.
2017-08-01
A method that allows for the analysis of complex systems is multi-agent simulation. Multi-agent simulation (agent-based modeling and simulation, ABMS) is the modeling of complex systems consisting of independent agents. In the case of a model of a production system, the agents may be the manufactured pieces, set apart from other types of agents such as machine tools, conveyors or reorientation stands. Magazines and buffers are also agents. More generally, the agents in a model can be single individuals, but collective entities can also be defined as agents, and hierarchical structures are allowed, meaning that a single agent can belong to a certain class. Depending on the needs, an agent may also be a natural or physical resource. From a technical point of view, an agent is a bundle of data and rules describing its behavior in different situations. Agents can be autonomous or non-autonomous in making decisions about the types and classes of agents, class sizes and types of connections between elements of the system. Multi-agent modeling is a very flexible modeling technique, in which models can be adapted to any research problem analyzed from different points of view. One of the major problems associated with the organization of production is the spatial organization of the production process; it is also important to include optimal scheduling, and for this purpose a multi-purpose approach can be used. In this regard, the model of the production process concerns the design and scheduling of production space for four different elements. The program system was developed in the NetLogo environment, and elements of artificial intelligence were also used. The main agents represent the manufactured pieces that, according to previously assumed rules, generate the technological route and allow printing the schedule of that line. Machine lines, reorientation stands, conveyors and transport devices represent the other types of agents utilized in the described simulation. The article presents the idea of an integrated program approach and shows the resulting production layout as a virtual model. This model was developed in the NetLogo multi-agent program environment.
NASA Astrophysics Data System (ADS)
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kueyoung; Choung, Sungwook; Chung, Il Moon
2017-05-01
A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods - the three sigma rule (3σ), interquartile range (IQR), and median absolute deviation (MAD) - that take advantage of the ensemble regression method are proposed, considering the non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to the other two methods. However, the IQR method shows a limitation in that it identifies excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
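The three detectors are easy to state in code; below they are applied to synthetic regression residuals (a stand-in for the ensemble-regression residuals used in the study), with thresholds at their conventional textbook values:

```python
import numpy as np

def three_sigma(r):
    return np.abs(r - r.mean()) > 3 * r.std()

def iqr_rule(r, k=1.5):
    q1, q3 = np.percentile(r, [25, 75])
    return (r < q1 - k * (q3 - q1)) | (r > q3 + k * (q3 - q1))

def mad_rule(r, k=3.5):
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    return np.abs(r - med) > k * 1.4826 * mad   # 1.4826: Gaussian consistency factor

rng = np.random.default_rng(5)
residuals = rng.normal(0, 1, 200)               # synthetic residuals
residuals[:10] += 8.0                           # inject 5% gross outliers
for name, rule in [("3-sigma", three_sigma), ("IQR", iqr_rule), ("MAD", mad_rule)]:
    print(f"{name:7s} flags {rule(residuals).sum()} of 200 points")
```

The IQR and MAD rules use the median and quartiles, which is why they stay reliable when the outlier fraction itself inflates the mean and standard deviation that the 3σ rule depends on.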
Ghiglietti, Andrea; Scarale, Maria Giovanna; Miceli, Rosalba; Ieva, Francesca; Mariani, Luigi; Gavazzi, Cecilia; Paganoni, Anna Maria; Edefonti, Valeria
2018-03-22
Recently, response-adaptive designs have been proposed in randomized clinical trials to achieve ethical and/or cost advantages by using sequential accrual information collected during the trial to dynamically update the probabilities of treatment assignments. In this context, urn models-where the probability to assign patients to treatments is interpreted as the proportion of balls of different colors available in a virtual urn-have been used as response-adaptive randomization rules. We propose the use of Randomly Reinforced Urn (RRU) models in a simulation study based on a published randomized clinical trial on the efficacy of home enteral nutrition in cancer patients after major gastrointestinal surgery. We compare results with the RRU design with those previously published with the non-adaptive approach. We also provide a code written with the R software to implement the RRU design in practice. In detail, we simulate 10,000 trials based on the RRU model in three set-ups of different total sample sizes. We report information on the number of patients allocated to the inferior treatment and on the empirical power of the t-test for the treatment coefficient in the ANOVA model. We carry out a sensitivity analysis to assess the effect of different urn compositions. For each sample size, in approximately 75% of the simulation runs, the number of patients allocated to the inferior treatment by the RRU design is lower, as compared to the non-adaptive design. The empirical power of the t-test for the treatment effect is similar in the two designs.
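A hedged sketch of the allocation mechanism (in Python, whereas the paper supplies R code): each patient is assigned by drawing a ball from a two-color urn, and the drawn color is reinforced in proportion to the observed nonnegative response, so allocation drifts toward the better arm; the response model and urn composition are hypothetical, not the trial's data:

```python
import numpy as np

rng = np.random.default_rng(6)

def rru_trial(n_patients, urn=(1.0, 1.0), means=(2.0, 1.0), sd=1.0):
    """One trial: draw a ball to assign the arm, reinforce that color by the response."""
    urn = np.array(urn, dtype=float)          # balls for arm A (index 0) and arm B (1)
    assigned = np.zeros(2, dtype=int)
    for _ in range(n_patients):
        arm = int(rng.random() < urn[1] / urn.sum())      # P(B) = share of B balls
        response = max(rng.normal(means[arm], sd), 0.0)   # nonnegative reinforcement
        urn[arm] += response
        assigned[arm] += 1
    return assigned

counts = np.array([rru_trial(100) for _ in range(1000)])  # 1000 simulated trials
print("mean patients on the inferior arm B:", counts[:, 1].mean())
```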
Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty
NASA Astrophysics Data System (ADS)
Brown, C.; Lall, U.; Siegfried, T.
2005-12-01
Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game theoretic approach to simulate and evaluate common water allocation paradigms. Game theory may be particularly appropriate for modeling water allocation decisions. First, a game theoretic approach allows economic analysis in situations where price theory doesn't apply, which is typically the case in water resources, where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of decision-making a priori to hydrologic realizations, and of decisions made a posteriori, on alternative allocation mechanisms. Outcomes are evaluated in terms of water productivity, net social benefit and equity. The performance of hydro-climate prediction modeling in each allocation mechanism will be assessed. Finally, year-to-year system performance and feedback pathways are explored. In this way, the system can be adaptively managed toward equitable and efficient water use.
Integration of Genetic Algorithms and Fuzzy Logic for Urban Growth Modeling
NASA Astrophysics Data System (ADS)
Foroutan, E.; Delavar, M. R.; Araabi, B. N.
2012-07-01
Urban growth, as a spatio-temporally continuous process, is subject to spatial uncertainty. This inherent uncertainty cannot be fully addressed by conventional methods based on Boolean algebra. Fuzzy logic can be employed to overcome this limitation: it preserves the spatial continuity of dynamic urban growth through the choice of fuzzy membership functions, fuzzy rules and the fuzzification-defuzzification process. Fuzzy membership functions and fuzzy rule sets, as the heart of fuzzy logic, are rather subjective and dependent on the expert. However, due to the lack of a definite method for determining the membership function parameters, a certain optimization is needed to tune the parameters and improve the performance of the model. This paper integrates genetic algorithms and fuzzy logic as a genetic fuzzy system (GFS) for modeling dynamic urban growth. The proposed approach is applied to modeling urban growth in the Tehran Metropolitan Area in Iran. Historical land use/cover data of the Tehran Metropolitan Area extracted from the 1988 and 1999 Landsat ETM+ images are employed in order to simulate the urban growth. The extracted land use classes of the year 1988 include urban areas, streets, vegetation areas, slope and elevation, used as the physical driving forces of urban growth. The Relative Operating Characteristic (ROC) curve has been used as the fitness function to evaluate the performance of the GFS algorithm. The optimum membership function parameters are applied to generate a suitability map for urban growth. Comparing the suitability map with the real land use map of 1999 gives the threshold value for the best suitability map that can simulate the land use map of 1999. The simulation outcomes, with a kappa of 89.13% and an overall map accuracy of 95.58%, demonstrated the efficiency and reliability of the proposed model.
The Value Range of Contact Stiffness Factor between Pile and Soil Based on Penalty Function
NASA Astrophysics Data System (ADS)
Chen, Sandy H. L.; Wu, Xinliu
2018-03-01
The value range of the contact stiffness factor based on the penalty function is studied for the case where the finite element software ANSYS is used to analyze contact problems. Taking the single pile and soil of an actual project as an example, the normal contact between pile and soil is analyzed with a simplified 2D model under horizontal load. The study shows that when a linear elastic model is adopted to simulate the soil, the maximum contact pressure and penetration approach steady values as the contact stiffness factor increases. The reasonable value range of the contact stiffness factor decreases as the underlying element thickness decreases, but the rule reverses with respect to the soil stiffness. If the DP model is chosen to simulate the soil, the stiffness factor should be magnified 100 times compared to the elastic model, regardless of whether the soil bears a small force and is still in the elastic deformation stage or has entered the plastic deformation stage. When the soil bears a large force and enters the plastic deformation stage, the value range of the stiffness factor relates to the plastic strain range of the soil and decreases as the horizontal load increases.
Individual based simulations of bacterial growth on agar plates
NASA Astrophysics Data System (ADS)
Ginovart, M.; López, D.; Valls, J.; Silbert, M.
2002-03-01
The individual-based simulator INDividual DIScrete SIMulations (INDISIM) has been used to study the growth of bacterial colonies on a finite dish. The simulations reproduce the qualitative trends of pattern formation that appear during the growth of Bacillus subtilis on an agar plate under different initial conditions of nutrient peptone concentration, the amount of agar on the plate, and the temperature. The simulations are carried out by imposing closed boundary conditions on a square lattice divided into square spatial cells. The simulator makes the study of the temporal evolution of the bacterial population possible by setting rules of behaviour for each bacterium, such as its uptake, metabolism and reproduction, as well as rules for the medium in which the bacterial cells grow, such as the concentration of nutrient particles and their diffusion. The determining factors that characterize the structure of the bacterial colony patterns in the present simulations are the initial concentrations of nutrient particles, which mimic the amount of peptone in the experiments, and the set of values for the microscopic diffusion parameter related, in the experiments, to the amount of the agar medium.
Virtual tissues in toxicology.
Shah, Imran; Wambaugh, John
2010-02-01
New approaches are vital for efficiently evaluating human health risk of thousands of chemicals in commerce. In vitro models offer a high-throughput approach for assaying chemical-induced molecular and cellular changes; however, bridging these perturbations to in vivo effects across chemicals, dose, time, and species remains challenging. Technological advances in multiresolution imaging and multiscale simulation are making it feasible to reconstruct tissues in silico. In toxicology, these "virtual" tissues (VT) aim to predict histopathological outcomes from alterations of cellular phenotypes that are controlled by chemical-induced perturbations in molecular pathways. The behaviors of thousands of heterogeneous cells in tissues are simulated discretely using agent-based modeling (ABM), in which computational "agents" mimic cell interactions and cellular responses to the microenvironment. The behavior of agents is constrained by physical laws and biological rules derived from experimental evidence. VT extend compartmental physiologic models to simulate both acute insults as well as the chronic effects of low-dose exposure. Furthermore, agent behavior can encode the logic of signaling and genetic regulatory networks to evaluate the role of different pathways in chemical-induced injury. To extrapolate toxicity across species, chemicals, and doses, VT require four main components: (a) organization of prior knowledge on physiologic events to define the mechanistic rules for agent behavior, (b) knowledge on key chemical-induced molecular effects, including activation of stress sensors and changes in molecular pathways that alter the cellular phenotype, (c) multiresolution quantitative and qualitative analysis of histologic data to characterize and measure chemical-, dose-, and time-dependent physiologic events, and (d) multiscale, spatiotemporal simulation frameworks to effectively calibrate and evaluate VT using experimental data. This investigation presents the motivation, implementation, and application of VT with examples from hepatotoxicity and carcinogenesis.
Simulation of California's Major Reservoirs Outflow Using Data Mining Technique
NASA Astrophysics Data System (ADS)
Yang, T.; Gao, X.; Sorooshian, S.
2014-12-01
The reservoir's outflow is controlled by reservoir operators, which is different from the upstream inflow, and the outflow matters more than the inflow for downstream water users. In order to simulate the complicated reservoir operation and extract the outflow decision-making patterns for California's 12 major reservoirs, we build a data-driven, computer-based ("artificially intelligent") reservoir decision-making tool using a regression and classification tree approach, a well-developed statistical and graphical modeling methodology in the field of data mining. A shuffled cross-validation approach is also employed to extract the outflow decision-making patterns and rules based on the selected decision variables (inflow amount, precipitation, timing, water year type, etc.). To show the accuracy of the model, a verification study is carried out comparing the model-generated outflow decisions ("artificially intelligent" decisions) with those made by reservoir operators (human decisions). The simulation results show that the machine-generated outflow decisions are very similar to the real reservoir operators' decisions; this conclusion is based on statistical evaluations using the Nash-Sutcliffe test. The proposed model is able to detect the most influential variables and their weights when the reservoir operators make an outflow decision. While the proposed approach was first applied and tested on California's 12 major reservoirs, the method is universally adaptable to other reservoir systems.
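A hedged sketch of this kind of tree-based policy extraction on synthetic data, using scikit-learn's CART implementation; the feature set and the invented "operator" release rule are assumptions for illustration, not the California reservoir data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 3000
inflow = rng.gamma(2.0, 50.0, n)               # synthetic daily inflow
storage = rng.uniform(0.2, 1.0, n)             # storage as a fraction of capacity
doy = rng.integers(1, 366, n)                  # day of year

# invented "operator" policy: spill when nearly full, hold back water in spring
outflow = (inflow * np.where(storage > 0.9, 1.2, 0.7)
           * np.where((doy > 60) & (doy < 150), 0.8, 1.0)
           + rng.normal(0, 5, n))

X = np.column_stack([inflow, storage, doy])
tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=50)
print("cross-validated R^2:", cross_val_score(tree, X, outflow, cv=5).round(3))
tree.fit(X, outflow)
print("importances (inflow, storage, doy):", tree.feature_importances_.round(3))
```

The fitted tree's splits read directly as if-then operating rules, and the feature importances play the role of the variable weights mentioned in the abstract.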
NASA Astrophysics Data System (ADS)
You, Youngjun; Rhee, Key-Pyo; Ahn, Kyoungsoo
2013-06-01
In constructing a collision avoidance system, it is important to determine the time for starting collision avoidance maneuver. Many researchers have attempted to formulate various indices by applying a range of techniques. Among these indices, collision risk obtained by combining Distance to the Closest Point of Approach (DCPA) and Time to the Closest Point of Approach (TCPA) information with fuzzy theory is mostly used. However, the collision risk has a limit, in that membership functions of DCPA and TCPA are empirically determined. In addition, the collision risk is not able to consider several critical collision conditions where the target ship fails to take appropriate actions. It is therefore necessary to design a new concept based on logical approaches. In this paper, a collision ratio is proposed, which is the expected ratio of unavoidable paths to total paths under suitably characterized operation conditions. Total paths are determined by considering categories such as action space and methodology of avoidance. The International Regulations for Preventing Collisions at Sea (1972) and collision avoidance rules (2001) are considered to solve the slower ship's dilemma. Different methods which are based on a constant speed model and simulated speed model are used to calculate the relative positions between own ship and target ship. In the simulated speed model, fuzzy control is applied to determination of command rudder angle. At various encounter situations, the time histories of the collision ratio based on the simulated speed model are compared with those based on the constant speed model.
Simulation of Electromigration Based on Resistor Networks
NASA Astrophysics Data System (ADS)
Patrinos, Anthony John
A two dimensional computer simulation of electromigration based on resistor networks was designed and implemented. The model utilizes a realistic grain structure generated by the Monte Carlo method and takes specific account of the local effects through which electromigration damage progresses. The dynamic evolution of the simulated thin film is governed by the local current and temperature distributions. The current distribution is calculated by superimposing a two dimensional electrical network on the lattice whose nodes correspond to the particles in the lattice and the branches to interparticle bonds. Current is assumed to flow from site to site via nearest neighbor bonds. The current distribution problem is solved by applying Kirchhoff's rules on the resulting electrical network. The calculation of the temperature distribution in the lattice proceeds by discretizing the partial differential equation for heat conduction, with appropriate material parameters chosen for the lattice and its defects. SEReNe (for Simulation of Electromigration using Resistor Networks) was tested by applying it to common situations arising in experiments with real films with satisfactory results. Specifically, the model successfully reproduces the expected grain size, line width and bamboo effects, the lognormal failure time distribution and the relationship between current density exponent and current density. It has also been modified to simulate temperature ramp experiments but with mixed, in this case, results.
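The nodal-analysis step at the heart of such a simulation can be sketched in a few lines: assemble the conductance (weighted Laplacian) matrix from the interparticle bonds, ground one node, inject a current, and solve Kirchhoff's equations for the node potentials; the toy network below is an illustrative assumption:

```python
import numpy as np

# (i, j, conductance) bonds of a tiny grain-boundary network
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 0.5), (2, 3, 2.0)]
n = 4

G = np.zeros((n, n))                 # weighted Laplacian (conductance matrix)
for i, j, g in edges:
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

I = np.zeros(n)
I[0] = 1.0                           # inject 1 A at node 0; node 3 is grounded
V = np.zeros(n)
V[:3] = np.linalg.solve(G[:3, :3], I[:3])   # drop the grounded node's row/column

for i, j, g in edges:                # branch currents via Ohm's law
    print(f"bond {i}-{j}: current = {g * (V[i] - V[j]):+.3f} A")
```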
NASA Astrophysics Data System (ADS)
Seoud, Ahmed; Kim, Juhwan; Ma, Yuansheng; Jayaram, Srividya; Hong, Le; Chae, Gyu-Yeol; Lee, Jeong-Woo; Park, Dae-Jin; Yune, Hyoung-Soon; Oh, Se-Young; Park, Chan-Ha
2018-03-01
Sub-resolution assist feature (SRAF) insertion techniques have been effectively used for a long time now to increase process latitude in the lithography patterning process. Rule-based SRAF and model-based SRAF are complementary solutions, and each has its own benefits, depending on the objectives of applications and the criticality of the impact on manufacturing yield, efficiency, and productivity. Rule-based SRAF provides superior geometric output consistency and faster runtime performance, but the associated recipe development time can be of concern. Model-based SRAF provides better coverage for more complicated pattern structures in terms of shapes and sizes, with considerably less time required for recipe development, although consistency and performance may be impacted. In this paper, we introduce a new model-assisted template extraction (MATE) SRAF solution, which employs decision tree learning in a model-based solution to provide the benefits of both rule-based and model-based SRAF insertion approaches. The MATE solution is designed to automate the creation of rules/templates for SRAF insertion, and is based on the SRAF placement predicted by model-based solutions. The MATE SRAF recipe provides optimum lithographic quality in relation to various manufacturing aspects in a very short time, compared to traditional methods of rule optimization. Experiments were done using memory device pattern layouts to compare the MATE solution to existing model-based SRAF and pixelated SRAF approaches, based on lithographic process window quality, runtime performance, and geometric output consistency.
An Evolutionary Optimization of the Refueling Simulation for a CANDU Reactor
NASA Astrophysics Data System (ADS)
Do, Q. B.; Choi, H.; Roh, G. H.
2006-10-01
This paper presents a multi-cycle and multi-objective optimization method for the refueling simulation of a 713 MWe Canada deuterium uranium (CANDU-6) reactor based on a genetic algorithm, an elitism strategy and a heuristic rule. The proposed algorithm searches for the optimal refueling patterns for a single cycle that maximize the average discharge burnup, minimize the maximum channel power and minimize the change in the zone controller unit water fills while satisfying the most important safety-related neutronic parameters of the reactor core. The heuristic rule generates an initial population of individuals very close to a feasible solution, and it reduces the computing time of the optimization process. The multi-cycle optimization is carried out based on a single-cycle refueling simulation. The proposed approach was verified by a refueling simulation of a natural uranium CANDU-6 reactor for an operation period of 6 months at an equilibrium state and compared with the experience-based automatic refueling simulation and the generalized perturbation theory. The comparison has shown that the simulation results are consistent with each other and that the proposed approach is a reasonable optimization method for the refueling simulation that controls all the safety-related parameters of the reactor core during the simulation.
A cell-based study on pedestrian acceleration and overtaking in a transfer station corridor
NASA Astrophysics Data System (ADS)
Ji, Xiangfeng; Zhou, Xuemei; Ran, Bin
2013-04-01
Pedestrian speed in a transfer station corridor is faster than usual, and some pedestrians can even be found running. In this paper, pedestrians are divided into two categories: the first is aggressive, and the other is conservative. Aggressive pedestrians weaving their way through the crowd in the corridor are the study object of this paper. During recent decades, much attention has been paid to pedestrian behavior, such as overtaking (also deceleration) and collision avoidance, and that continues in this paper. After analyzing the characteristics of pedestrian flow in a transfer station corridor, a cell-based model is presented, including acceleration (also deceleration) and overtaking analysis. Acceleration (also deceleration) in a corridor is fixed according to Newton's law, and then the speed calculated with a kinematic formula is discretized into cells based on fuzzy logic. After the speed is updated, overtaking is analyzed based on the updated speed and force explicitly, compared to rule-based models, which herein we call implicit ones. During the analysis of overtaking, a threshold value to determine the overtaking direction is introduced. The model in this paper is actually a two-step one: the first step is to update the speed, which is the number of cells the pedestrian can move in one time interval, and the other is to analyze the overtaking. Finally, a comparison between rule-based cellular automata, the model in this paper and data in HCM 2000 is made to demonstrate that our model can achieve a reasonable simulation of acceleration (also deceleration) and overtaking among pedestrians.
The Umbra Simulation and Integration Framework Applied to Emergency Response Training
NASA Technical Reports Server (NTRS)
Hamilton, Paul Lawrence; Britain, Robert
2010-01-01
The Mine Emergency Response Interactive Training Simulation (MERITS) is intended to prepare personnel to manage an emergency in an underground coal mine. The creation of an effective training environment required realistic emergent behavior in response to simulation events and trainee interventions, exploratory modification of miner behavior rules, realistic physics, and incorporation of legacy code. It also required the ability to add rich media to the simulation without conflicting with normal desktop security settings. Our Umbra Simulation and Integration Framework facilitated agent-based modeling of miners and rescuers and made it possible to work with subject matter experts to quickly adjust behavior through script editing, rather than through lengthy programming and recompilation. Integration of Umbra code with the WebKit browser engine allowed the use of JavaScript-enabled local web pages for media support. This project greatly extended the capabilities of Umbra in support of training simulations and has implications for simulations that combine human behavior, physics, and rich media.
Decision Aids for Airborne Intercept Operations in Advanced Aircraft
NASA Technical Reports Server (NTRS)
Madni, A.; Freedy, A.
1981-01-01
A tactical decision aid (TDA) for the F-14 aircrew, i.e., the naval flight officer and pilot, in conducting a multitarget attack during the performance of a Combat Air Patrol (CAP) role is presented. The TDA employs hierarchical multiattribute utility models for characterizing mission objectives in operationally measurable terms, rule-based AI models for tactical posture selection, and fast-time simulation for maneuver consequence prediction. The TDA makes aspect maneuver recommendations, selects and displays the optimum mission posture, evaluates attackable and potentially attackable subsets, and recommends the 'best' attackable subset along with the required course perturbation.
The investigation of social networks based on multi-component random graphs
NASA Astrophysics Data System (ADS)
Zadorozhnyi, V. N.; Yudin, E. B.
2018-01-01
Methods for calibrating non-homogeneous random graphs are developed for social network simulation. The graphs are calibrated by the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with the nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, and computer experiments with these models, can help developers (owners) of networks to predict their development correctly and to choose effective strategies for controlling network projects.
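The nonlinear preferential attachment rule mentioned above can be sketched as follows; the attachment kernel `degree**alpha`, the seed graph, and the parameter values are assumptions for illustration, and calibration against target degree distributions is omitted.

```python
import random

def grow_graph(n, m=2, alpha=1.2):
    """Grow a graph by nonlinear preferential attachment (sketch).

    Each new vertex attaches to m existing vertices chosen with probability
    proportional to degree**alpha; alpha = 1 recovers linear (Barabasi-Albert)
    attachment, and alpha is the quantity one would calibrate against the
    observed degree distributions of the social network.
    """
    edges = [(0, 1)]
    degree = {0: 1, 1: 1}
    for v in range(2, n):
        nodes = list(degree)
        weights = [degree[u] ** alpha for u in nodes]
        targets = set(random.choices(nodes, weights=weights, k=m))
        for u in targets:
            edges.append((v, u))
            degree[u] += 1
        degree[v] = len(targets)
    return edges
```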
Power System Transient Stability Based on Data Mining Theory
NASA Astrophysics Data System (ADS)
Cui, Zhen; Shi, Jia; Wu, Runsheng; Lu, Dan; Cui, Mingde
2018-01-01
In order to study the stability of power systems, a transient stability assessment approach based on data mining theory is designed. By introducing association rule analysis from data mining theory, an association classification method for transient stability assessment is presented, and a mathematical model of transient stability assessment based on data mining technology is established. Combining rule reasoning with classification prediction, the association classification method is used to perform transient stability assessment. The transient stability index is used to identify the samples that cannot be correctly classified by association classification. Then, according to the critical stability of each sample, the time domain simulation method is used to determine the state, so as to ensure the accuracy of the final results. The results show that this stability assessment system can improve the speed of operation under the premise that the analysis results are completely correct, and the improved algorithm can uncover the inherent relation between changes in the power system operation mode and changes in the degree of transient stability.
Extract useful knowledge from agro-hydrological simulations data for decision making
NASA Astrophysics Data System (ADS)
Gascuel-odoux, C.; Bouadi, T.; Cordier, M.; Quiniou, R.
2013-12-01
In recent years, models have been developed and used to test the effect of scenarios and help stakeholders in decision making. Agro-hydrological models have guided agricultural water management by testing the effect of landscape structure and farming system changes on water quantity and quality. Such models generate a large amount of data, but few of these data are stored and they are often not customized for stakeholders, so that a great amount of information is lost from the simulation process or never transformed into a usable format. A first approach, already published (Trepos et al., 2012), was developed to identify object-oriented tree patterns, representing surface flow and pollutant pathways from plot to plot, involved in water pollution by herbicides. A simulation model (Gascuel-Odoux et al., 2009) predicted the herbicide transfer rate, defined as the proportion of applied herbicide that reaches water courses. The predictions were used as a set of learning examples for symbolic learning techniques to induce rules based on qualitative and quantitative attributes and to explain two extreme classes of transfer rate. Two automatic symbolic learning techniques were used: an inductive logic programming approach to induce spatial tree patterns, and an attribute-value method to induce aggregated attributes of the trees. A visualization interface allows users to identify rules explaining contamination and mitigation measures improving the current situation. A second approach has recently been developed to analyse the simulated data directly (Bouadi et al., submitted). A data warehouse called N-Catch has been built to store and manage simulation data from the agro-hydrological model TNT2 (Beaujouan et al., 2002). 44 key simulated output variables are stored per plot at a daily time step over a 50 km² area, i.e., 8 GB of storage. After identifying the set of multilevel dimensions integrating hierarchical structures and relationships among related dimension levels, N-Catch was designed using the open-source business intelligence platform Pentaho. We show how to use online analytical processing (OLAP) to access and exploit, intuitively and quickly, the multidimensional and aggregated data from the N-Catch data warehouse. We illustrate how the data warehouse can be used to explore spatio-temporal dimensions efficiently and to discover new knowledge at multiple levels of simulation. OLAP tools can be used to synthesize environmental information and understand nitrogen emissions in water bodies by generating comparative and personalized views of historical data. This data warehouse is currently being extended with data mining and information retrieval methods, such as Skyline queries, to perform advanced analyses (Bouadi et al., 2012). References: Bouadi, T., Cordier, M., and Quiniou, R. N-Catch: a data warehouse for multilevel analysis of simulated nitrogen data from an agro-hydrological model. Submitted. Bouadi, T., Cordier, M., and Quiniou, R. (2012). Incremental computation of skyline queries with dynamic preferences. In DEXA (1), pages 219-233. Trepos et al. (2012). Mining simulation data by rule induction to determine critical source areas of stream water pollution by herbicides. Computers and Electronics in Agriculture 86, 75-88.
A statistical model of aggregate fragmentation
NASA Astrophysics Data System (ADS)
Spahn, F.; Vieira Neto, E.; Guimarães, A. H. F.; Gorban, A. N.; Brilliantov, N. V.
2014-01-01
A statistical model of the fragmentation of aggregates is proposed, based on the stochastic propagation of cracks through the body. The propagation rules are formulated on a lattice and mimic two important features of the process: a crack moves against the stress gradient while dissipating energy during its growth. We perform numerical simulations of the model on a two-dimensional lattice and reveal that the mass distribution for small- and intermediate-size fragments obeys a power law, F(m) ∝ m^(-3/2), in agreement with experimental observations. We develop an analytical theory which explains the detected power law and demonstrate that the overall fragment mass distribution in our model agrees qualitatively with that observed in experiments.
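A rough sketch of the two lattice propagation rules named above (motion against the stress gradient, energy dissipation during growth); the stress field, the unit energy cost per step, and the arrest condition are assumptions, not the authors' equations.

```python
def propagate_crack(stress, start, energy, step_cost=1.0):
    """Propagate one crack over a 2-D lattice (illustrative sketch).

    stress -- dict mapping (i, j) sites to a scalar stress value
    The crack steps toward the steepest stress decrease (i.e. against the
    stress gradient) and dissipates `step_cost` energy per broken bond,
    arresting when its energy budget is exhausted or no site is available.
    """
    path, site = [start], start
    while energy > 0:
        i, j = site
        neighbours = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        candidates = [s for s in neighbours if s in stress and s not in path]
        if not candidates:
            break                                   # crack arrested
        site = min(candidates, key=lambda s: stress[s])
        energy -= step_cost                         # energy dissipated while growing
        path.append(site)
    return path
```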
Modeling and assessment of civil aircraft evacuation based on finer-grid
NASA Astrophysics Data System (ADS)
Fang, Zhi-Ming; Lv, Wei; Jiang, Li-Xue; Xu, Qing-Feng; Song, Wei-Guo
2016-04-01
Studying the civil aircraft emergency evacuation process with computer models is an effective approach. In this study, the evacuation of an Airbus A380 is simulated using a Finer-Grid Civil Aircraft Evacuation (FGCAE) model. The model accounts for the effect of the seat area and other factors on the escape process and for pedestrians' "hesitation" before leaving exits, and defines an optimized rule of exit choice. Simulations reproduce typical characteristics of aircraft evacuation, such as movement synchronization between adjacent pedestrians and route choice, and indicate that evacuation efficiency is determined by pedestrians' "preference" and "hesitation". Based on the model, an assessment procedure for aircraft evacuation safety is presented. The assessment, and a comparison with an actual evacuation test, demonstrate that the available-exit setting of "one exit from each exit pair" used in practical demonstration tests is not the worst scenario. The worst scenario, in which all exits at one end of the cabin are unavailable, should receive more attention and could even be adopted in the certification test. The model and method presented in this study could be useful for assessing, validating and improving the evacuation performance of aircraft.
Chartier, Sylvain; Proulx, Robert
2005-11-01
This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has better recall performance under random noise than any other Hopfield-type neural network. These performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.
Coupling Cellular Automata Land Use Change with Distributed Hydrologic Models
NASA Astrophysics Data System (ADS)
Shu, L.; Duffy, C.
2017-12-01
There has been extensive research on land use change (LUC) modeling, with broad applications to simulating urban growth and changing demographic patterns across multiple scales. Land conversion is a critical issue in watershed-scale studies and is generally not treated in most watershed modeling approaches. In this study we apply spatially explicit hydrologic and land use change models to the Conestoga Watershed in Lancaster County, Pennsylvania. The Penn State Integrated Hydrologic Model (PIHM) partitions the water balance in space and time over the urban catchment, while the coupled Cellular Automata Land Use Change model (CALUC) dynamically simulates the evolution of land use classes based on physical measures associated with population change and land use demand factors. The CALUC model iteratively applies discrete rules to each individual spatial cell. The essence of the CA modeling is the calculation of the Transition Potential (TP) for conversion of a grid cell from one land use class to another. This potential combines several factors: random perturbation, suitability, accessibility, neighborhood effects, inertia effects and zonal factors, as sketched below. In spite of its simplicity, this CALUC model has been shown to be very effective for simulating LUC leading to the emergence of complex spatial patterns. The components of TP are derived from present land use data for land use reanalysis and for realistic future land use scenarios. For the CALUC we use early-settlement (circa 1790) initial land class values and final, present-day (2010) land classes to calibrate the model. CALUC-PIHM dynamically simulates the hydrologic response of the conversion from pre-settlement to present land use. The simulations highlight the capability and value of dynamically coupling catchment hydrology with land use change over long time periods. Analysis of the simulations uses various metrics, such as the distributed water balance and flow duration curves, to show how deforestation, urbanization and agricultural land development have interacted from 1790 to the present.
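A minimal sketch of a transition-potential calculation of this general kind; the multiplicative combination of factors and the log-random perturbation follow common practice in CA land-use modeling and are assumptions here, not the published CALUC equations.

```python
import math
import random

def transition_potential(factors, inertia, same_use, alpha=1.0):
    """Transition potential of one grid cell for a candidate land-use class.

    factors  -- dict with 'suitability', 'accessibility', 'neighbourhood'
                and 'zoning' scores for this cell and class
    same_use -- True if the cell already carries the candidate class
    The multiplicative combination and the log-random perturbation are
    assumptions borrowed from common CA land-use practice.
    """
    r = 1.0 + (-math.log(1.0 - random.random())) ** alpha  # stochastic perturbation
    tp = (r * factors["suitability"] * factors["accessibility"]
            * factors["neighbourhood"] * factors["zoning"])
    return tp + (inertia if same_use else 0.0)             # inertia favours staying put
```

At each iteration, every cell would convert to the class with the highest TP, subject to the aggregate land use demand.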
Lin, Chih-Hao; Kao, Chung-Yao; Huang, Chong-Ye
2015-01-01
Ambulance diversion (AD) is considered one of the possible solutions to relieve emergency department (ED) overcrowding. Studying the effectiveness of various AD strategies is a prerequisite for policy-making. Our aim is to develop a tool that quantitatively evaluates the effectiveness of various AD strategies. A simulation model and a computer simulation program were developed. Three sets of simulations were executed to evaluate AD initiating criteria, patient-blocking rules, and AD intervals, respectively. The crowdedness index, the patient waiting time for service, and the percentage of adverse patients were assessed to determine the effect of various AD policies. Simulation results suggest that, in a certain setting, the best timing for implementing AD is when the crowdedness index reaches the critical value of 1.0, an indicator that the ED is operating at its maximal capacity. The strategy of diverting all patients transported by ambulance is more effective than diverting either high-acuity patients only or low-acuity patients only. Given a total allowable AD duration, implementing AD multiple times with short intervals generally works better than having a single AD of maximal allowable duration. An input-throughput-output simulation model is proposed for simulating ED operation. The effectiveness of several AD strategies in relieving ED overcrowding was assessed via computer simulations based on this model. With appropriate parameter settings, the model can represent medical resource providers of different scales. It is also feasible to expand the simulations to evaluate the effect of AD strategies on a community basis. The results may offer insights for making effective AD policies. Copyright © 2012. Published by Elsevier B.V.
Gao, Hao; Wang, Huiming; Berry, Colin; Luo, Xiaoyu; Griffith, Boyce E
2014-01-01
Finite stress and strain analyses of the heart provide insight into the biomechanics of myocardial function and dysfunction. Herein, we describe progress toward dynamic patient-specific models of the left ventricle using an immersed boundary (IB) method with a finite element (FE) structural mechanics model. We use a structure-based hyperelastic strain-energy function to describe the passive mechanics of the ventricular myocardium, a realistic anatomical geometry reconstructed from clinical magnetic resonance images of a healthy human heart, and a rule-based fiber architecture. Numerical predictions of this IB/FE model are compared with results obtained by a commercial FE solver. We demonstrate that the IB/FE model yields results that are in good agreement with those of the conventional FE model under diastolic loading conditions, and the predictions of the LV model using either numerical method are shown to be consistent with previous computational and experimental data. These results are among the first to analyze the stress and strain predictions of IB models of ventricular mechanics, and they serve both to verify the IB/FE simulation framework and to validate the IB/FE model. Moreover, this work represents an important step toward using such models for fully dynamic fluid–structure interaction simulations of the heart. © 2014 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:24799090
Hot spot initiation and chemical reaction in shocked polymeric bonded explosives
NASA Astrophysics Data System (ADS)
An, Qi; Zybin, Sergey; Jaramillo-Botero, Andres; Goddard, William; Materials; Process Simulation Center, Caltech Team
2011-06-01
A polymer bonded explosive (PBX) model based on PBXN-106 is studied via molecular dynamics (MD) simulations using a reactive force field (ReaxFF) under shock loading conditions. Hotspots are observed when shock waves pass through the non-planar interface between explosives and elastomers. Adiabatic shear localization is proposed as the main mechanism of hotspot ignition in PBX under high-velocity impact. Our simulations also show that the coupling of shear localization and chemical reactions in the hotspot region plays an important role in stress relaxation for explosives. The phenomenon of shock waves being absorbed by elastomers is also observed in the MD simulations. This research received support from ARO (W911NF-05-1-0345; W911NF-08-1-0124), ONR (N00014-05-1-0778), and Los Alamos National Laboratory (LANL).
Integrating Growth Stage Deficit Irrigation into a Process Based Crop Model
NASA Technical Reports Server (NTRS)
Lopez, Jose R.; Winter, Jonathan M.; Elliott, Joshua; Ruane, Alex C.; Porter, Cheryl; Hoogenboom, Gerrit
2017-01-01
Current rates of agricultural water use are unsustainable in many regions, creating an urgent need to identify improved irrigation strategies for water-limited areas. Crop models can be used to quantify plant water requirements, predict the impact of water shortages on yield, and calculate water productivity (WP) to link water availability and crop yields for economic analyses. Many simulations of crop growth and development, especially in regional and global assessments, rely on automatic irrigation algorithms to estimate irrigation dates and amounts. However, these algorithms are not well suited for water-limited regions because they have simplistic irrigation rules, such as a single soil-moisture-based threshold, and assume unlimited water. To address this constraint, a new modeling framework to simulate agricultural production in water-limited areas was developed. The framework consists of a new automatic irrigation algorithm for the simulation of growth-stage-based deficit irrigation under limited seasonal water availability, together with optimization of growth-stage-specific parameters. The new automatic irrigation algorithm was used to simulate maize and soybean in Gainesville, Florida, first to evaluate the sensitivity of maize and soybean simulations to irrigation at different growth stages and then to test the hypothesis that water productivity calculated using simplistic irrigation rules underestimates WP. In the first experiment, the effect of irrigating at specific growth stages on yield and irrigation water use efficiency (IWUE) in maize and soybean was evaluated. In the reproductive stages, IWUE tended to be higher than in the vegetative stages (e.g., IWUE was 18% higher than the well-watered treatment when irrigating only during R3 in soybean), and when rainfall events were less frequent. In the second experiment, water productivity was significantly greater with optimized irrigation schedules compared to non-optimized irrigation schedules in water-restricted scenarios. For example, the mean WP across 38 years of maize production was 1.1 kg/cu m for non-optimized irrigation schedules with 50 mm of seasonal available water and 2.1 kg/cu m for optimized irrigation schedules, a 91% improvement in WP with optimized schedules. The framework described in this work could be used to estimate WP for regional to global assessments, as well as to derive location-specific irrigation guidance.
How synapses can enhance sensibility of a neural network
NASA Astrophysics Data System (ADS)
Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.
2018-02-01
In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations guided by Hebbian learning rules of behaviour. The learning rules are related to neuroplasticity, which describes changes to the neural connections in the brain. Our results show that chemical synapses can abruptly enhance the sensibility of the neural network, a manifestation that can become even more predominant if learning rules of evolution are applied to the chemical synapses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high-energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
Model and Dynamic Behavior of Malware Propagation over Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Song, Yurong; Jiang, Guo-Ping
Based on the inherent characteristics of wireless sensor networks (WSN), the dynamic behavior of malware propagation in flat WSN is analyzed and investigated. A new model is proposed using 2-D cellular automata (CA), which extends the traditional definition of CA and establishes complete transition rules for malware propagation in WSN. The model is validated through theoretical analysis and simulations. The theoretical analysis yields closed-form expressions which show good agreement with the simulation results of the proposed model. It is shown that malware propagation in WSN exhibits neighborhood saturation, which dominates the effects of increasing infectivity and limits the spread of the malware. The MAC mechanism of wireless sensor networks greatly slows down the speed of malware propagation and reduces the risk of large-scale malware prevalence in these networks. The proposed model can accurately describe the dynamic behavior of malware propagation over WSN, and can be applied in developing robust and efficient defense systems for WSN.
NASA Astrophysics Data System (ADS)
Wolfs, Vincent; Willems, Patrick
2013-10-01
Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
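The embankment overtopping described above is treated as weir overflow; a minimal sketch using the standard free-overflow formula for a rectangular weir, with an assumed, uncalibrated discharge coefficient:

```python
import math

def weir_overflow(h_river, z_crest, width, cd=0.6, g=9.81):
    """Flow over the embankment treated as a rectangular weir (sketch).

    Free-overflow formula Q = (2/3) * Cd * b * sqrt(2g) * h^1.5, with an
    assumed (typical, uncalibrated) discharge coefficient `cd`.
    """
    head = max(0.0, h_river - z_crest)   # no flow when the level is below the crest
    return (2.0 / 3.0) * cd * width * math.sqrt(2.0 * g) * head ** 1.5
```

In the paper's setup, the ANFIS component would then correct this simplified physically based flow for the varying factors the weir formula ignores.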
Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks
Bennett, Kristin P.
2014-01-01
We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches, which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules can drastically increase classification accuracy when data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238
1991-02-01
Fragment (report ERL-0520-RR, unclassified): the simplest form of production rules is based upon propositional logic. Hybrid rule/fact relationships (also known as predicate calculus schemas) extend this, but impose requirements which may lead to poor system performance. The report goes on to discuss the limitations of rule-based knowledge representation.
A fuzzy hill-climbing algorithm for the development of a compact associative classifier
NASA Astrophysics Data System (ADS)
Mitra, Soumyaroop; Lam, Sarah S.
2012-02-01
Classification, a data mining technique, has widespread applications, including medical diagnosis and targeted marketing. Knowledge discovery from databases in the form of association rules is another important data mining task. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has been mainly focused on increasing classifier accuracy, little effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule base by selecting rules that contribute towards increasing training accuracy, thus balancing classification accuracy with the number of classification association rules. The results indicate that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and lends better interpretability when compared with other rule-based systems.
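A minimal sketch of greedy hill-climbing rule selection of the kind described; the candidate rule set and the `accuracy` oracle are assumed inputs, and the fuzzy-set machinery of the actual method is omitted.

```python
def build_rule_base(candidates, accuracy, max_rules=50):
    """Greedy hill-climbing selection of classification association rules
    (illustrative sketch).

    candidates -- rules mined from the data
    accuracy   -- callable returning training accuracy for a rule set
    Rules are added only while they improve accuracy, trading accuracy
    against rule-base size as described above.
    """
    selected, best = [], accuracy([])
    improved = True
    while improved and len(selected) < max_rules:
        improved = False
        for rule in candidates:
            if rule in selected:
                continue
            score = accuracy(selected + [rule])
            if score > best:          # keep the move only if accuracy climbs
                selected.append(rule)
                best = score
                improved = True
                break                 # rescan from the improved rule base
    return selected
```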
Modeling for (physical) biologists: an introduction to the rule-based approach
Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S
2015-01-01
Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions. PMID:26178138
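As a toy illustration of a rule as a generalized reaction, one might represent it as reactant patterns plus a transformation and a rate law. The data structure below is a sketch only; it does not follow the syntax of any particular rule-based language such as BNGL, and all names are hypothetical.

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable, List

@dataclass
class Rule:
    """A reaction rule: reactant patterns, a transformation, and a rate law."""
    reactant_patterns: List[Callable]   # predicates on molecule site/domain states
    transform: Callable                 # maps a tuple of matched reactants to products
    rate_law: Callable                  # propensity as a function of the match count

def matches(rule, molecules):
    """Enumerate tuples of molecules satisfying all reactant patterns."""
    pools = [[m for m in molecules if pattern(m)]
             for pattern in rule.reactant_patterns]
    return list(product(*pools))

# Hypothetical usage: dimerization of molecules with a free binding site.
dimerize = Rule(
    reactant_patterns=[lambda m: m.get("site") == "free"] * 2,
    transform=lambda pair: [{"site": "bound"}, {"site": "bound"}],
    rate_law=lambda n_matches: 1e-3 * n_matches,
)
```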
Evidence flow graph methods for validation and verification of expert systems
NASA Technical Reports Server (NTRS)
Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant
1989-01-01
The results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems are given. A translator to convert Horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis were developed. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.
Infrared small target detection based on Danger Theory
NASA Astrophysics Data System (ADS)
Lan, Jinhui; Yang, Xiao
2009-11-01
To solve the problem that traditional methods cannot detect small objects whose local SNR is less than 2 in IR images, a Danger Theory-based model to detect infrared small targets is presented in this paper. First, by analogy with immunology, definitions are given for terms such as danger signal, antigen, APC and antibody, and the matching rule between antigen and antibody is improved. Prior to training the detection model and detecting targets, the IR images are processed with an adaptive smoothing filter to decrease stochastic noise. During the training process, the deleting rule, generating rule, crossover rule and mutation rule are established, after a large number of experiments, to achieve rapid convergence and obtain good antibodies. The Danger Theory-based model is built after the training process, and this model can detect targets whose local SNR is only 1.5.
Foxes and Rabbits - and a Spreadsheet.
ERIC Educational Resources Information Center
Carson, S. R.
1996-01-01
Presents a numerical simulation of a simple food chain together with a set of mathematical rules generalizing the model to a food web of any complexity. Discusses some of the model's interesting features and its use by students. (Author/JRH)
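A spreadsheet-style discrete update for a fox-rabbit food chain of the kind the article describes might look as follows; the growth, predation and death coefficients are illustrative assumptions.

```python
def food_chain(rabbits, foxes, steps=200,
               growth=0.08, predation=0.001, gain=0.0002, death=0.05):
    """Discrete predator-prey update of the kind a spreadsheet row computes
    (sketch); all coefficient values are illustrative assumptions."""
    history = [(rabbits, foxes)]
    for _ in range(steps):
        rabbits, foxes = (
            max(rabbits + growth * rabbits - predation * rabbits * foxes, 0.0),
            max(foxes + gain * rabbits * foxes - death * foxes, 0.0),
        )
        history.append((rabbits, foxes))
    return history

print(food_chain(1000, 50, steps=5))   # each tuple is one spreadsheet row
```

Generalizing to a food web amounts to giving each species one such update rule with gain and loss terms for every predator-prey link.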
Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules
NASA Astrophysics Data System (ADS)
Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.
Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.
NASA Astrophysics Data System (ADS)
Yoon, J.; Klassert, C. J. A.; Lachaut, T.; Selby, P. D.; Knox, S.; Gorelick, S.; Rajsekhar, D.; Tilmant, A.; Avisse, N.; Harou, J. J.; Gawel, E.; Klauer, B.; Mustafa, D.; Talozi, S.; Sigel, K.
2015-12-01
Our work focuses on development of a multi-agent, hydroeconomic model for purposes of water policy evaluation in Jordan. The model adopts a modular approach, integrating biophysical modules that simulate natural and engineered phenomena with human modules that represent behavior at multiple levels of decision making. The hydrologic modules are developed using spatially-distributed groundwater and surface water models, which are translated into compact simulators for efficient integration into the multi-agent model. For the groundwater model, we adopt a response matrix method approach in which a 3-dimensional MODFLOW model of a complex regional groundwater system is converted into a linear simulator of groundwater response by pre-processing drawdown results from several hundred numerical simulation runs. Surface water models for each major surface water basin in the country are developed in SWAT and similarly translated into simple rainfall-runoff functions for integration with the multi-agent model. The approach balances physically-based, spatially-explicit representation of hydrologic systems with the efficiency required for integration into a complex multi-agent model that is computationally amenable to robust scenario analysis. For the multi-agent model, we explicitly represent human agency at multiple levels of decision making, with agents representing riparian, management, supplier, and water user groups. The agents' decision making models incorporate both rule-based heuristics as well as economic optimization. The model is programmed in Python using Pynsim, a generalizable, open-source object-oriented code framework for modeling network-based water resource systems. The Jordan model is one of the first applications of Pynsim to a real-world water management case study. Preliminary results from a tanker market scenario run through year 2050 are presented in which several salient features of the water system are investigated: competition between urban and private farmer agents, the emergence of a private tanker market, disparities in economic wellbeing to different user groups caused by unique supply conditions, and response of the complex system to various policy interventions.
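The response matrix method mentioned above reduces the groundwater model to linear superposition of pre-computed unit responses; a minimal sketch, where the matrix values and the well/observation layout are hypothetical:

```python
import numpy as np

def drawdown(response, pumping):
    """Groundwater drawdown from a response-matrix surrogate (sketch).

    response -- (n_obs x n_wells) unit-rate drawdown responses, pre-computed
                from repeated runs of the full 3-D MODFLOW model
    pumping  -- pumping rate at each well
    Linear superposition replaces the numerical model inside the
    multi-agent simulation.
    """
    return np.asarray(response) @ np.asarray(pumping)

# Hypothetical values: two observation points, three well fields.
R = [[0.8, 0.1, 0.05],
     [0.2, 0.6, 0.10]]                # m of drawdown per unit pumping rate
print(drawdown(R, [1.0, 0.5, 2.0]))   # -> [0.95, 0.7]
```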
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
NASA Astrophysics Data System (ADS)
Sharma, Gulshan B.; Robertson, Douglas D.
2013-07-01
Shoulder arthroplasty success has been attributed to many factors including, bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula's material properties. Three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. Location of high and low predicted bone density was comparable to the actual specimen. High predicted bone density was greater than actual specimen. Low predicted bone density was lower than actual specimen. Differences were probably due to applied muscle and joint reaction loads, boundary conditions, and values of constants used. Work is underway to study this. Nonetheless, the results demonstrate three dimensional bone remodeling simulation validity and potential. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses together with adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
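A minimal sketch of one strain-energy-driven remodeling update per element, as described above; the linear adaptation law, the rate constant and the density bounds are assumptions for illustration.

```python
def remodel(density, stimulus, reference, rate=1.0,
            rho_min=0.01, rho_max=1.74):
    """One strain-energy-driven remodeling update for a bone element (sketch).

    density   -- current apparent density of the element (g/cm^3)
    stimulus  -- strain-energy-density based stimulus from the FE solution
    reference -- homeostatic reference stimulus for the element
    The linear adaptation law and the density bounds are assumptions.
    """
    density += rate * (stimulus - reference)     # deposit where overloaded, resorb where underloaded
    return min(max(density, rho_min), rho_max)   # keep density within physical bounds
```

Iterating this rule over all elements, with the FE solve repeated between iterations, reproduces the converge-to-heterogeneous-density loop the abstract describes.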
Expert systems for automated maintenance of a Mars oxygen production system
NASA Technical Reports Server (NTRS)
Ash, Robert L.; Huang, Jen-Kuang; Ho, Ming-Tsang
1989-01-01
A prototype expert system was developed for maintaining autonomous operation of a Mars oxygen production system. Normal operation conditions and failure modes were tested and identified according to certain desired criteria. Several schemes for failure detection and isolation, using forward chaining, backward chaining, and knowledge-based and rule-based approaches, were devised to perform several housekeeping functions. These functions include self-health checkout, an emergency shutdown program, fault detection and conventional control activities. An effort was made to derive the dynamic model of the system using the Bond-Graph technique in order to develop a model-based failure detection and isolation scheme by estimation methods. Finally, computer simulations and experimental results demonstrated the feasibility of the expert system, and a preliminary reliability analysis for the oxygen production system is also provided.
Deductibles in health insurance
NASA Astrophysics Data System (ADS)
Dimitriyadis, I.; Öney, Ü. N.
2009-11-01
This study is an extension of a simulation study that was developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation, and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative. A simple case, where the insured is assumed to be an exponential-utility decision maker while the insurer's pricing rule is a PH-transform, is also treated.
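For reference, Wang's proportional-hazards (PH) transform premium principle referred to above can be written as follows (notation assumed here: S_X is the survival function of the loss and rho the risk-aversion index):

```latex
% Wang's PH-transform premium principle:
%   S_X(t) = P(X > t) is the survival function of the loss X,
%   rho >= 1 is the risk-aversion index.
\pi_{\rho}(X) \;=\; \int_{0}^{\infty} \bigl[ S_X(t) \bigr]^{1/\rho} \, dt ,
\qquad \rho \ge 1 .
% rho = 1 recovers the net premium E[X]; larger rho loads the premium
% increasingly toward the tail of the loss distribution.
```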
NASA Astrophysics Data System (ADS)
Pietrzyk, Maciej; Kuziak, Roman; Pidvysots'kyy, Valeriy; Nowak, Jarosław; Węglarczyk, Stanisław; Drozdowski, Krzysztof
2013-07-01
Two copper-based alloys were considered, Cu-1 pct Cr and Cu-0.7 pct Cr-1 pct Si-2 pct Ni. The thermal, electrical, and mechanical properties of these alloys are given in the paper and compared to pure copper and steel. The role of aging and precipitation kinetics in hardening of the alloys is discussed based upon the developed model. Results of plastometric tests performed at various temperatures and various strain rates are presented. The effect of the initial microstructure on the flow stress was investigated. Rheologic models for the alloys were developed. A finite element (FE) model based on the Norton-Hoff visco-plastic flow rule was applied to the simulation of forging of the alloys. Analysis of the die wear for various processes of hot and cold forging is presented as well. A microstructure evolution model was implemented into the FE code, and the microstructure and mechanical properties of final products were predicted. Various variants of the manufacturing cycles were considered. These include different preheating schedules, hot forging, cold forging, and aging. All variants were simulated using the FE method and loads, die filling, tool wear, and mechanical properties of products were predicted. Three variants giving the best combination of forging parameters were selected and industrial trials were performed. The best manufacturing technology for the copper-based alloys is proposed.
NASA Astrophysics Data System (ADS)
Kanta, L.; Giacomoni, M.; Shafiee, M. E.; Berglund, E.
2014-12-01
The sustainability of water resources is threatened by urbanization, as increasing demands deplete water availability and changes to the landscape alter runoff and the flow regime of receiving water bodies. Utility managers typically manage urban water resources through the use of centralized solutions, such as large reservoirs, which may be limited in their ability to balance the needs of urbanization and ecological systems. Decentralized technologies, on the other hand, may improve the health of the water resources system while delivering urban water services. For example, low impact development technologies, such as rainwater harvesting, and water-efficient technologies, such as low-flow faucets and toilets, may be adopted by households to retain rainwater and reduce demands, offsetting the need for new centralized infrastructure. Decentralized technologies may create new complexities in infrastructure and water management, as decentralization depends on community behavior and participation beyond traditional water resources planning. Messages about water shortages and water quality from peers and from water utility managers can influence the adoption of new technologies. As a result, feedbacks between consumers and water resources emerge, creating a complex system. This research develops a framework to simulate the diffusion of water-efficient innovations and the sustainability of urban water resources by coupling models of households in a community, hydrologic models of a water resources system, and a cellular automata model of land use change. Agent-based models are developed to simulate the land use and water demand decisions of individual households, and behavioral rules are encoded to simulate communication with other agents and adoption of decentralized technologies, using a model of the diffusion of innovation (sketched below). The framework is applied in an illustrative case study to simulate water resources sustainability over a long-term planning horizon.
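A minimal sketch of a peer-influence adoption rule of the diffusion-of-innovation type mentioned above; the Bass-style split into external influence `p` and peer pressure `q` is an assumption for illustration, not the paper's calibrated rule.

```python
import random

def adoption_step(adopted, neighbours, p=0.03, q=0.4):
    """One diffusion-of-innovation step over household agents (sketch).

    adopted    -- dict: agent id -> True if the household already adopted
    neighbours -- dict: agent id -> list of peer agent ids
    `p` models spontaneous adoption driven by external influence (e.g.
    utility messaging); `q` scales pressure from peers who already adopted.
    """
    current = {a for a, has in adopted.items() if has}
    for a, has in list(adopted.items()):
        if has:
            continue
        peers = neighbours.get(a, [])
        frac = sum(1 for n in peers if n in current) / max(len(peers), 1)
        if random.random() < p + q * frac:
            adopted[a] = True    # household adopts the water-efficient technology
    return adopted
```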
Systematic methods for knowledge acquisition and expert system development
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
Nine cooperating rule-based systems, collectively called AUTOCREW which were designed to automate functions and decisions associated with a combat aircraft's subsystems, are discussed. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base and to assess the cooperation between the rule bases. Simulation and comparative workload results for two mission scenarios are given. The scenarios are inbound surface-to-air-missile attack on the aircraft and pilot incapacitation. The methodology used to develop the AUTOCREW knowledge bases is summarized. Issues involved in designing the navigation sensor selection expert in AUTOCREW's NAVIGATOR knowledge base are discussed in detail. The performance of seven navigation systems aiding a medium-accuracy INS was investigated using Kalman filter covariance analyses. A navigation sensor management (NSM) expert system was formulated from covariance simulation data using the analysis of variance (ANOVA) method and the ID3 algorithm. ANOVA results show that statistically different position accuracies are obtained when different navaids are used, the number of navaids aiding the INS is varied, the aircraft's trajectory is varied, and the performance history is varied. The ID3 algorithm determines the NSM expert's classification rules in the form of decision trees. The performance of these decision trees was assessed on two arbitrary trajectories, and the results demonstrate that the NSM expert adapts to new situations and provides reasonable estimates of the expected hybrid performance.
Towards a voxel-based geographic automata for the simulation of geospatial processes
NASA Astrophysics Data System (ADS)
Jjumba, Anthony; Dragićević, Suzana
2016-07-01
Many geographic processes evolve in a three-dimensional space and time continuum. However, when they are represented with the aid of geographic information systems (GIS) or geosimulation models, they are modelled in a framework of two-dimensional space with an added temporal component. The objective of this study is to propose the design and implementation of voxel-based automata as a methodological approach for representing spatial processes evolving in the four-dimensional (4D) space-time domain. Similar to geographic automata models, which are developed to capture and forecast geospatial processes that change in a two-dimensional spatial framework using cells (raster geospatial data), voxel automata rely on automata theory and use three-dimensional volumetric units (voxels). Transition rules have been developed to represent various spatial processes, ranging from the movement of an object in 3D to the diffusion of airborne particles and landslide simulation. In addition, the proposed 4D models demonstrate that complex processes can be readily reproduced from simple transition functions without complex methodological approaches. The voxel-based automata approach provides a unique basis for modelling geospatial processes in 4D, for the purpose of improving the representation, analysis and understanding of their spatiotemporal dynamics. This study contributes to the advancement of the concepts and framework of 4D GIS.
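As one concrete example of a voxel transition rule, a sketch of airborne-particle diffusion on a voxel grid; the exchange fraction and the periodic (wrap-around) boundaries are assumptions of this sketch, not the paper's rules.

```python
import numpy as np

def diffuse(voxels, rate=0.1):
    """One voxel-automata transition for particle diffusion (sketch).

    Each voxel exchanges a fraction rate/6 of the concentration difference
    with each of its six face neighbours -- a purely local transition rule.
    np.roll gives wrap-around (periodic) boundaries, an assumption here.
    """
    out = voxels.astype(float).copy()
    for axis in range(3):
        for shift in (1, -1):
            out += (rate / 6.0) * (np.roll(voxels, shift, axis=axis) - voxels)
    return out

# Hypothetical usage: a point release in a 20x20x20 voxel grid.
grid = np.zeros((20, 20, 20))
grid[10, 10, 10] = 1.0
for _ in range(50):
    grid = diffuse(grid)
```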
Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L
2010-02-01
This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.
Coupled flow and deformations in granular systems beyond the pendular regime
NASA Astrophysics Data System (ADS)
Yuan, Chao; Chareyre, Bruno; Darve, Felix
2017-06-01
A pore-scale numerical model is proposed for simulating quasi-static primary drainage and hydro-mechanical couplings in multiphase granular systems. The solid skeleton is idealized as a dense random packing of polydisperse spheres using DEM. The fluid space (nonwetting and wetting phases) is decomposed into a network of tetrahedral pores based on the regular triangulation method. Local drainage rules and invasion logic are defined, and the fluid forces acting on solid grains are formulated. The model can simulate the hydraulic evolution from a fully saturated state to a low level of saturation beyond the pendular regime. The features of wetting-phase entrapment and capillary fingering can also be reproduced. Finally, a primary drainage test is performed on a sample of 40,000 spheres and the water retention curve is obtained.
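A minimal sketch of local drainage rules of this general kind, using Young-Laplace entry pressures and invasion-percolation-style logic; the data layout and the boundary treatment are assumptions, not the published algorithm.

```python
import math

def drain(pores, throats, p_capillary, gamma=0.0728, theta=0.0):
    """Quasi-static primary drainage by local invasion rules (sketch).

    pores   -- dict: pore id -> 'boundary' or 'interior'
    throats -- dict: (pore_a, pore_b) -> throat radius (m)
    A throat is invaded once the imposed capillary pressure exceeds its
    Young-Laplace entry pressure 2*gamma*cos(theta)/r; simplified
    invasion-percolation logic with water-air surface tension gamma.
    """
    invaded = {p for p, kind in pores.items() if kind == "boundary"}
    changed = True
    while changed:
        changed = False
        for (a, b), radius in throats.items():
            entry = 2.0 * gamma * math.cos(theta) / radius
            if p_capillary >= entry and ((a in invaded) != (b in invaded)):
                invaded.update((a, b))      # nonwetting phase crosses the throat
                changed = True
    return invaded
```

Sweeping `p_capillary` upward and recording the remaining wetting-phase volume at each step yields a water retention curve of the kind reported in the paper.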
An XML-Based Manipulation and Query Language for Rule-Based Information
NASA Astrophysics Data System (ADS)
Mansour, Essam; Höpfner, Hagen
Rules are utilized to assist in the monitoring process that is required in activities such as disease management and customer relationship management. These rules are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing the rules as one object that has a management life-cycle. This paper presents our manipulation and query language, developed to facilitate the maintenance of this object during its life-cycle and to query the information contained within it. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.
Rotating states of self-propelling particles in two dimensions.
Chen, Hsuan-Yi; Leung, Kwan-Tai
2006-05-01
We present particle-based simulations and a continuum theory for steady rotating flocks formed by self-propelling particles (SPPs) in two-dimensional space. Our models include realistic but simple rules for the self-propelling, drag, and interparticle interactions. Among other coherent structures, in particle-based simulations we find steady rotating flocks when the velocity of the particles lacks long-range alignment. Physical characteristics of the rotating flock are measured and discussed. We construct a phenomenological continuum model and seek steady-state solutions for a rotating flock. We show that the velocity and density profiles become simple in two limits. In the limit of weak alignment, we find that all particles move with the same speed and the density of particles vanishes near the center of the flock due to the divergence of centripetal force. In the limit of strong body force, the density of particles within the flock is uniform and the velocity of the particles close to the center of the flock becomes small.
Large fluctuations in anti-coordination games on scale-free graphs
NASA Astrophysics Data System (ADS)
Sabsovich, Daniel; Mobilia, Mauro; Assaf, Michael
2017-05-01
We study the influence of the complex topology of scale-free graphs on the dynamics of anti-coordination games (e.g. snowdrift games). These reference models are characterized by the coexistence (evolutionary stable mixed strategy) of two competing species, say ‘cooperators’ and ‘defectors’, and, in finite systems, by metastability and large-fluctuation-driven fixation. In this work, we use extensive computer simulations and an effective diffusion approximation (in the weak selection limit) to determine under which circumstances, depending on the individual-based update rules, the topology drastically affects the long-time behavior of anti-coordination games. In particular, we compute the variance of the number of cooperators in the metastable state and the mean fixation time when the dynamics is implemented according to the voter model (death-first/birth-second process) and the link dynamics (birth/death or death/birth at random). For the voter update rule, we show that the scale-free topology effectively renormalizes the population size and as a result the statistics of observables depend on the network’s degree distribution. In contrast, such a renormalization does not occur with the link dynamics update rule and we recover the same behavior as on complete graphs.
The role of correlations in uncertainty quantification of transportation relevant fuel models
Fridlyand, Aleksandr; Johnson, Matthew S.; Goldsborough, S. Scott; ...
2017-02-03
Large reaction mechanisms are often used to describe the combustion behavior of transportation-relevant fuels like gasoline, where these are typically represented by surrogate blends, e.g., n-heptane/iso-octane/toluene. We describe efforts to quantify the uncertainty in the predictions of such mechanisms at realistic engine conditions, seeking to better understand the robustness of the model as well as the important reaction pathways and their impacts on combustion behavior. In this work, we examine the importance of taking into account correlations among reactions that utilize the same rate rules, and among those with multiple product channels, in forward propagation of uncertainty by Monte Carlo simulations. Automated means are developed to generate the uncertainty factor assignment for a detailed chemical kinetic mechanism, by first uniquely identifying each reacting species and then sorting each of the reactions based on the rate rule utilized. Simulation results reveal that in the low-temperature combustion regime for iso-octane, the majority of the uncertainty in the model predictions can be attributed to low-temperature reactions of the fuel sub-mechanism. The foundational, or small-molecule, chemistry (C0-C4) only contributes significantly to uncertainties in the predictions at the highest temperatures (Tc = 900 K). Accounting for correlations between important reactions is shown to produce non-negligible differences in the estimates of uncertainty. Including correlations among reactions that use the same rate rules increases uncertainty in the model predictions, while accounting for correlations among reactions with multiple branches decreases uncertainty in some cases. Significant non-linear response is observed in the model predictions depending on how the probability distributions of the uncertain rate constants are defined. Finally, we conclude that care must be exercised in defining these probability distributions in order to reduce bias and physically unrealistic estimates in the forward propagation of uncertainty for a range of UQ activities.
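A minimal sketch of how rate-rule correlations can be imposed in Monte Carlo sampling of the kind described above: reactions that share a rate rule draw one common log-normal multiplier, so their perturbations are fully correlated. Interpreting the uncertainty factor as a 2-sigma bound is an assumption of this sketch.

```python
import numpy as np

def sample_rate_sets(k_nominal, rule_of, factor, n_samples=1000, seed=0):
    """Monte Carlo rate-constant sets with rate-rule correlations (sketch).

    k_nominal -- nominal rate constants, one per reaction
    rule_of   -- rate-rule label for each reaction
    factor    -- dict: rule label -> multiplicative uncertainty factor f
    Reactions sharing a rule receive one common log-normal multiplier;
    drawing one multiplier per reaction instead would give the
    uncorrelated case. f is treated here as a 2-sigma bound.
    """
    rng = np.random.default_rng(seed)
    rules = sorted(set(rule_of))
    out = []
    for _ in range(n_samples):
        mult = {r: float(np.exp(rng.normal(0.0, np.log(factor[r]) / 2.0)))
                for r in rules}
        out.append([k * mult[r] for k, r in zip(k_nominal, rule_of)])
    return np.array(out)
```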
Stability and structural properties of gene regulation networks with coregulation rules.
Warrell, Jonathan; Mhlanga, Musa
2017-05-07
Coregulation of the expression of groups of genes has been extensively demonstrated empirically in bacterial and eukaryotic systems. Such coregulation can arise through the use of shared regulatory motifs, which allow the coordinated expression of modules (and module groups) of functionally related genes across the genome. Coregulation can also arise through the physical association of multi-gene complexes through chromosomal looping, which are then transcribed together. We present a general formalism for modeling coregulation rules in the framework of Random Boolean Networks (RBN), and develop specific models for transcription factor networks with modular structure (including module groups, and multi-input modules (MIM) with autoregulation) and multi-gene complexes (including hierarchical differentiation between multi-gene complex members). We develop a mean-field approach to analyse the dynamical stability of large networks incorporating coregulation, and show that autoregulated MIM and hierarchical gene-complex models can achieve greater stability than networks without coregulation whose rules have matching activation frequency. We provide further analysis of the stability of small networks of both kinds through simulations. We also characterize several general properties of the transients and attractors in the hierarchical coregulation model, and show using simulations that the steady-state distribution factorizes hierarchically as a Bayesian network in a Markov Jump Process analogue of the RBN model. Copyright © 2017. Published by Elsevier Ltd.
Drosophila segmentation: supercomputer simulation of prepattern hierarchy.
Hunding, A; Kauffman, S A; Goodwin, B C
1990-08-09
Spontaneous prepattern formation in a two-level hierarchy of reaction-diffusion systems is simulated in three space co-ordinates and time, mimicking gap gene and primary pair-rule gene expression. The model rests on the idea of Turing systems of the second kind, in which one prepattern generates position-dependent rate constants for a subsequent reaction-diffusion system. Maternal genes are assumed responsible for setting up gradients from the anterior and posterior ends, one of which is needed to stabilize a double-period prepattern suggested to underlie the readout of the gap genes. The resulting double-period pattern in turn stabilizes the next prepattern in the hierarchy, which has a short wavelength with many characteristics of the stripes seen in actual primary pair-rule gene expression. Without such hierarchical stabilization, reaction-diffusion mechanisms yield highly patchy short-wavelength patterns, and thus unreliable stripes. The model yields seven stable stripes located in the middle of the embryo, with the potential for additional expression near the poles, as observed experimentally. The model does not rely on specific chemical reaction kinetics; rather, the effect is general to many such kinetic schemes. This makes it robust to parameter changes, and it has good potential for adapting to size and shape changes as well. The study thus suggests that the crucial organizing principle in early Drosophila embryogenesis is based on global field mechanisms, not on particular local interactions.
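For illustration, here is a one-dimensional sketch of a Turing system "of the second kind": a prescribed double-period prepattern (standing in for the stabilized gap-gene readout) modulates the source rate rho(x) of a second activator-inhibitor system, which then forms the short-wavelength stripes. All parameters and the activator-inhibitor kinetics are illustrative only; the actual model is three-dimensional and two-level:

    import numpy as np

    n, dt, steps = 200, 0.01, 60000
    x = np.arange(n)
    # Level-1 prepattern (stands in for the stabilizing gap-gene pattern):
    prepattern = 1.0 + 0.3 * np.cos(4.0 * np.pi * x / n)     # double period

    # Level-2 activator-inhibitor system with position-dependent rate rho(x).
    rho = prepattern
    Da, Dh, mu_a, mu_h = 1.0, 30.0, 1.0, 2.0

    rng = np.random.default_rng(2)
    a = 2.0 + 0.01 * rng.standard_normal(n)
    h = 2.0 + 0.01 * rng.standard_normal(n)

    def lap(u):                                  # 1-D Laplacian, no-flux ends, dx = 1
        return np.concatenate(([u[1] - u[0]],
                               u[2:] - 2.0 * u[1:-1] + u[:-2],
                               [u[-2] - u[-1]]))

    for _ in range(steps):
        a, h = (a + dt * (rho * a * a / h - mu_a * a + Da * lap(a)),
                h + dt * (rho * a * a - mu_h * h + Dh * lap(h)))

    stripes = np.sum((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:]))
    print(f"number of activator stripes: {stripes}")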
Developing a Learning Progression for Number Sense Based on the Rule Space Model in China
ERIC Educational Resources Information Center
Chen, Fu; Yan, Yue; Xin, Tao
2017-01-01
The current study focuses on developing the learning progression of number sense for primary school students, and it applies a cognitive diagnostic model, the rule space model, to data analysis. The rule space model analysis firstly extracted nine cognitive attributes and their hierarchy model from the analysis of previous research and the…
Engineers' Non-Scientific Models in Technology Education
ERIC Educational Resources Information Center
Norstrom, Per
2013-01-01
Engineers commonly use rules, theories and models that lack scientific justification. Examples include rules of thumb based on experience, but also models based on obsolete science or folk theories. Centrifugal forces, heat and cold as substances, and sucking vacuum all belong to the latter group. These models contradict scientific knowledge, but…
Virtual Reality for Artificial Intelligence: human-centered simulation for social science.
Cipresso, Pietro; Riva, Giuseppe
2015-01-01
There is a long-standing tradition in Artificial Intelligence of using robots endowed with human peculiarities, from a cognitive and emotional point of view, and not only in shape. Today Artificial Intelligence is more oriented to several forms of collective intelligence, also building robot simulators (hardware or software) to better understand collective behaviors in human beings and society as a whole. Modeling has also been crucial in the social sciences for understanding how complex systems can arise from simple rules. However, while engineers' simulations can be performed in the physical world using robots, for social scientists this is impossible. For decades, researchers tried to improve simulations by endowing artificial agents with simple and complex rules that emulated human behavior, including through artificial intelligence (AI). The big challenge now is to include human beings and their real intelligence within artificial societies. We present a hybrid (human-artificial) platform where experiments can be performed in simulated artificial worlds in the following manner: 1) agents' behaviors are regulated by the behaviors shown in Virtual Reality by real human beings exposed to the specific situations to be simulated, and 2) technology transfers these rules into the artificial world. This forms a closed loop in which real behaviors are inserted into artificial agents, which can then be used to study real society.
Innovative Tools for Water Quality/Quantity Management: New York City's Operations Support Tool
NASA Astrophysics Data System (ADS)
Wang, L.; Schaake, J. C.; Day, G. N.; Porter, J.; Sheer, D. P.; Pyke, G.
2011-12-01
The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which comprises over 20 reservoirs and supplies more than 1 billion gallons of water per day to over 9 million customers. Recently, DEP has initiated design of an Operations Support Tool (OST), a state-of-the-art decision support system to provide computational and predictive support for water supply operations and planning. This presentation describes the technical structure of OST, including the underlying water supply and water quality models, data sources and database management, reservoir inflow forecasts, and the functionalities required to meet the needs of a diverse group of end users. OST is a major upgrade of DEP's current water supply-water quality model, developed to evaluate alternatives for controlling turbidity in NYC's Catskill reservoirs. While the current model relies on historical hydrologic and meteorological data, OST can be driven by forecasted future conditions. It will receive a variety of near-real-time data from a number of sources. OST will support two major types of simulations: long-term, for evaluating policy or infrastructure changes over an extended period of time; and short-term "position analysis" (PA) simulations, consisting of multiple short simulations, all starting from the same initial conditions. Typically, the starting conditions for a PA run will represent those for the current day, and traces of forecasted hydrology will drive the model for the duration of the simulation period. The result of these simulations will be a distribution of future system states based on system operating rules and the range of input ensemble streamflow predictions. DEP managers will analyze the output distributions and make operation decisions using risk-based metrics such as probability of refill. Currently, in the developmental stages of OST, forecasts are based on antecedent hydrologic conditions and are statistical in nature. The statistical algorithm is relatively simple and versatile, but lacks the short-term skill critical for water quality and spill management. To improve short-term skill, OST will ultimately operate with meteorologically driven hydrologic forecasts provided by the National Weather Service (NWS). OST functionalities will support a wide range of DEP uses, including short-term operational projections, outage planning and emergency management, operating rule development, and water supply planning. A core use of OST will be to inform reservoir management strategies to control and mitigate turbidity events while ensuring water supply reliability. OST will also allow DEP to manage its complex reservoir system to meet multiple objectives, including ecological flows, tailwater fisheries and recreational releases, and peak flow mitigation for downstream communities.
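At its core, a position-analysis run of the kind described reduces to propagating many forecast traces through the system model under fixed operating rules and reading risk metrics off the resulting distribution. A minimal sketch with a single hypothetical reservoir and synthetic inflow traces (none of the capacities, demands, or inflow statistics below are DEP's; a real OST run would use NWS forecast traces):

    import numpy as np

    rng = np.random.default_rng(3)

    capacity, storage0, demand = 550.0, 380.0, 4.2     # units: 10^6 m3, per day
    n_traces, horizon = 500, 120                       # ensemble size, days

    # Stand-in ensemble streamflow prediction (lognormal daily inflows).
    inflows = rng.lognormal(mean=1.2, sigma=0.6, size=(n_traces, horizon))

    refilled = np.zeros(n_traces, dtype=bool)
    storage = np.full(n_traces, storage0)
    for t in range(horizon):
        release = np.full(n_traces, demand)            # operating rule: meet demand
        storage = np.clip(storage + inflows[:, t] - release, 0.0, capacity)
        refilled |= storage >= capacity                # trace has refilled (spill)

    print(f"probability of refill within {horizon} days: {refilled.mean():.2f}")
    print(f"median end-of-horizon storage: {np.median(storage):.0f}")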
A model-driven privacy compliance decision support for medical data sharing in Europe.
Boussi Rahmouni, H; Solomonides, T; Casassa Mont, M; Shiu, S; Rahmouni, M
2011-01-01
Clinical practitioners and medical researchers often have to share health data with other colleagues across Europe. Privacy compliance in this context is very important but challenging. Automated privacy guidelines are a practical way of increasing users' awareness of privacy obligations and help to eliminate unintentional breaches of privacy. In this paper we present an ontology-plus-rules based approach to privacy decision support for the sharing of patient data across European platforms. We use ontologies to model the required domain and context information about data sharing and privacy requirements. In addition, we use a set of Semantic Web Rule Language rules to reason about legal privacy requirements that are applicable to a specific context of data disclosure. We make the complete model invocable through a semantic web application that acts as an interactive privacy guideline system and provides decision support. When asked, the system will generate privacy reports applicable to a specific case of data disclosure described by the user. Reports showing guidelines per Member State may also be obtained. The advantage of this approach lies in the expressiveness and extensibility of the modelling and inference languages adopted, and in the ability they confer to reason with complex requirements interpreted from high-level regulations. However, the system cannot at this stage fully simulate the role of an ethics committee or review board.
Understanding lizard's microhabitat use based on a mechanistic model of behavioral thermoregulation
NASA Astrophysics Data System (ADS)
Fei, Teng; Venus, Valentijn; Toxopeus, Bert; Skidmore, Andrew K.; Schlerf, Martin; Liu, Yaolin; van Overdijk, Sjef; Bian, Meng
2008-12-01
Lizards are an "excellent group of organisms" for examining habitat and microhabitat use, mainly because their ecology and physiology are well studied. Because of their behavioral body-temperature regulation, the thermal environment is especially linked with their habitat use. In this study, for mapping and understanding a lizard's distribution at the microhabitat scale, an individual of Timon lepidus was kept and monitored in a terrarium (245 × 120 × 115 cm) in which sand, rocks, burrows, hatching chambers, UV lamps, fog generators and heating devices were placed to simulate its natural habitat. Optical cameras, thermal cameras and other data loggers recorded the lizard's body temperature, ground surface temperature, air temperature, radiation and other important environmental parameters. By analysing the collected data, we propose a Cellular Automata (CA) model by which the movement of lizards is simulated and translated into their distribution. This paper explores the capabilities of applying GIS techniques to thermoregulatory activity studies at the microhabitat scale. We conclude that the microhabitat use of lizards can be explained to some degree by the rule-based CA model.
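A minimal sketch of such a rule-based CA (the temperature grid, preferred temperature, and move rule are invented for illustration, not the terrarium data): the lizard preferentially steps to the neighbouring cell that minimizes thermal discomfort, and the long-run occupancy map approximates its microhabitat distribution:

    import numpy as np

    rng = np.random.default_rng(4)

    T_pref = 34.0
    grid = 20.0 + 20.0 * rng.random((40, 80))      # patchy 20-40 deg C surface
    pos = (20, 40)
    occupancy = np.zeros_like(grid)

    for _ in range(5000):
        r, c = pos
        neighbours = [((r + dr) % 40, (c + dc) % 80)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        if rng.random() < 0.2:                     # occasional exploratory move
            pos = neighbours[rng.integers(9)]
        else:                                      # rule: least thermal discomfort
            pos = min(neighbours, key=lambda rc: abs(grid[rc] - T_pref))
        occupancy[pos] += 1

    used = occupancy > 0
    print(f"cells visited: {used.mean():.1%}; "
          f"mean temperature of visited cells: {grid[used].mean():.1f} deg C")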
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bickmore, Barry R.; Rosso, Kevin M.; Tadanier, Christopher J.
2006-08-15
In a previous contribution, we outlined a method for predicting (hydr)oxy-acid and oxide surface acidity constants based on three main factors: bond valence, Me-O bond ionicity, and molecular shape. Here electrostatics calculations and ab initio molecular dynamics simulations are used to qualitatively show that Me-O bond ionicity controls the extent to which the electrostatic work of proton removal departs from ideality, bond valence controls the extent of solvation of individual functional groups, and bond valence and molecular shape control local dielectric response. These results are consistent with our model of acidity, but completely at odds with other methods of predicting acidity constants for use in multisite complexation models. In particular, our ab initio molecular dynamics simulations of solvated monomers clearly indicate that hydrogen bonding between (hydr)oxo-groups and water molecules adjusts to obey the valence sum rule, rather than maintaining a fixed valence based on the coordination of the oxygen atom as predicted by the standard MUSIC model.
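For reference, the valence sum rule at work: Brown's bond valence s = exp((R0 - R)/b) with b = 0.37 Å, summed over the bonds to an oxygen, should come to about 2 valence units. A worked toy example for a surface silanol oxygen (the Si-O bond-valence parameter is the standard tabulated one; the hydrogen contributions are typical illustrative values, not computed from the simulations):

    import math

    def bond_valence(R, R0, b=0.37):
        """Brown's bond valence s = exp((R0 - R)/b), in valence units (v.u.)."""
        return math.exp((R0 - R) / b)

    # Oxygen of a surface silanol group (>Si-OH); distances in angstroms.
    s_SiO = bond_valence(1.62, R0=1.624)   # Si-O bond, ~1.0 v.u.
    s_OH = 0.80                            # typical O-H donor contribution (illustrative)
    s_hb = 0.20                            # one accepted H-bond from water (illustrative)

    total = s_SiO + s_OH + s_hb
    print(f"valence sum at O: {total:.2f} v.u. (target ~2.0)")

If the sum falls short, hydrogen bonding to solvent strengthens or multiplies until the deficit is made up, which is the adjustment seen in the simulations above.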
Designing a podiatry service to meet the needs of the population: a service simulation.
Campbell, Jackie A
2007-02-01
A model of a podiatry service has been developed which takes into consideration the effect of changing access criteria, skill mix and staffing levels (among others), given fixed local staffing budgets and the foot-health characteristics of the local community. A spreadsheet-based deterministic model was chosen to allow maximum transparency of programming. This work models a podiatry service in England, but could be adapted for other settings and, with some modification, for other community-based services. The model enables individual services to see the effect of various service configurations on outcome parameters such as the number of patients treated, the number discharged and the size of waiting lists, given their individual local data profile. The process of designing the model has also had spin-off benefits for the participants in making explicit many of the implicit rules used in managing their services.
Ji, Xiang; Liu, Li-Ming; Li, Hong-Qing
2014-11-01
Taking Jinjing Town in the Dongting Lake area as a case, this paper analyzed the evolution of rural landscape patterns by means of life cycle theory, simulated the evolution cycle curve and calculated its period, and then, by combining this with a CA-Markov model, built a complete prediction model based on the rules of rural landscape change. The results showed that the rural settlement and paddy landscapes of Jinjing Town would change most by 2020, with the rural settlement landscape increasing to 1194.01 hm2 and the paddy landscape greatly reduced to 3090.24 hm2. The quantitative and spatial prediction accuracies of the model were up to 99.3% and 96.4%, respectively, more explicit than those of the single CA-Markov model. The prediction model of rural landscape pattern change proposed in this paper should be helpful for rural landscape planning in the future.
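The Markov half of a CA-Markov projection is just repeated application of a class-to-class transition matrix to the areal composition, with the CA half allocating the projected change in space. A minimal sketch with invented classes, areas, and transition probabilities (not the Jinjing Town data):

    import numpy as np

    classes = ["settlement", "paddy", "forest"]
    area_2009 = np.array([900.0, 3600.0, 2500.0])    # hm^2, illustrative
    P = np.array([[0.95, 0.03, 0.02],                # rows: from-class
                  [0.06, 0.90, 0.04],                # cols: to-class
                  [0.02, 0.03, 0.95]])

    area = area_2009
    for _ in range(4):          # project four transition periods forward
        area = area @ P

    for name, a in zip(classes, area):
        print(f"{name:>10}: {a:7.1f} hm2")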
Climate Shocks and Migration: An Agent-Based Modeling Approach.
Entwisle, Barbara; Williams, Nathalie E; Verdery, Ashton M; Rindfuss, Ronald R; Walsh, Stephen J; Malanson, George P; Mucha, Peter J; Frizzelle, Brian G; McDaniel, Philip M; Yao, Xiaozheng; Heumann, Benjamin W; Prasartkul, Pramote; Sawangdee, Yothin; Jampaklay, Aree
2016-09-01
This is a study of migration responses to climate shocks. We construct an agent-based model that incorporates dynamic linkages between demographic behaviors, such as migration, marriage, and births, and agriculture and land use, which depend on rainfall patterns. The rules and parameterization of our model are empirically derived from qualitative and quantitative analyses of a well-studied demographic field site, Nang Rong district, Northeast Thailand. With this model, we simulate patterns of migration under four weather regimes in a rice economy: 1) a reference, 'normal' scenario; 2) seven years of unusually wet weather; 3) seven years of unusually dry weather; and 4) seven years of extremely variable weather. Results show relatively small impacts on migration. Experiments with the model show that existing high migration rates and strong selection factors, which are unaffected by climate change, are likely responsible for the weak migration response. PMID:27594725
NASA Astrophysics Data System (ADS)
Zhang, Shengfang; Hao, Qiang; Sha, Zhihua; Yin, Jian; Ma, Fujian; Liu, Yu
2017-12-01
To address the friction and wear of brake pads in large-megawatt wind turbine brakes during braking, this paper established a micro finite element model of abrasive wear using the Deform-2D software. Based on abrasive wear theory, and considering the variation of velocity and load during the micro friction and wear process, an Archard wear calculation model is developed. The influence of the relative sliding velocity and the friction coefficient on the brake pad and disc is analysed. The simulation results showed that as the relative sliding velocity increases, wear becomes more severe, while a larger friction coefficient lowers the contact pressure, which relieves the wear of the brake pad.
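The Archard model referred to above relates worn volume to load, sliding distance and hardness, V = K F s / H. A back-of-the-envelope sketch with illustrative brake-pad numbers (not the paper's finite element inputs):

    # Archard's wear law: V = K * F * s / H, where V is worn volume, K a
    # dimensionless wear coefficient, F the normal load, s the sliding
    # distance (velocity * time), and H the hardness of the softer surface.
    K = 1e-5          # wear coefficient (-), illustrative
    F = 2.0e4         # normal load (N), illustrative
    H = 1.0e9         # pad hardness (Pa), illustrative
    v = 30.0          # relative sliding velocity (m/s)
    t = 5.0           # braking time (s)

    V = K * F * (v * t) / H
    print(f"worn volume per braking event: {V * 1e9:.1f} mm^3")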
Online Sensor Fault Detection Based on an Improved Strong Tracking Filter
Wang, Lijuan; Wu, Lifeng; Guan, Yong; Wang, Guohui
2015-01-01
We propose a method for online sensor fault detection that is based on an evolution of the strong tracking filter, the strong tracking cubature Kalman filter (STCKF). The cubature rule is used to estimate states to improve estimation accuracy in the nonlinear case. A residual is the difference between an estimated value and the true value, and is regarded as a signal that carries fault information. A threshold is set at a reasonable level and compared with the residuals to determine whether or not the sensor is faulty. The proposed method requires only a nominal plant model and uses the STCKF to estimate the original state vector. The effectiveness of the algorithm is verified by simulation on a drum-boiler model. PMID:25690553
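The detection logic reduces to comparing the filter innovation against a threshold. A minimal sketch in which a simple exponential smoother stands in for the STCKF and a bias fault is injected into a simulated sensor (all values illustrative):

    import numpy as np

    rng = np.random.default_rng(5)

    alpha, sigma = 0.2, 0.1       # smoother gain, sensor noise level
    x_est = 1.0                   # estimate starts at the known operating level
    flagged = None

    for k in range(200):
        measured = 1.0 + sigma * rng.standard_normal()   # healthy sensor
        if k >= 120:
            measured += 1.0                              # bias fault injected
        residual = measured - x_est                      # innovation signal
        if flagged is None and abs(residual) > 5.0 * sigma:
            flagged = k                                  # threshold test
        x_est += alpha * residual                        # filter update

    print("fault injected at step 120; first flagged at step", flagged)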
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
Fuzzy based attitude controller for flexible spacecraft with on/off thrusters
NASA Astrophysics Data System (ADS)
Knapp, Roger G.; Adams, Neil J.
A fuzzy-based attitude controller is designed for attitude control of a generic spacecraft with on/off thrusters. The controller is comprised of packages of rules dedicated to addressing different objectives (e.g., disturbance rejection, low fuel consumption, avoiding the excitation of flexible appendages, etc.). These rule packages can be inserted or removed depending on the requirements of the particular spacecraft and are parameterized based on vehicle parameters such as inertia or operational parameters such as the maneuvering rate. Individual rule packages can be 'weighted' relative to each other to emphasize the importance of one objective relative to another. Finally, the fuzzy controller and rule packages are demonstrated using the high-fidelity Space Shuttle Interactive On-Orbit Simulator (IOS) while performing typical on-orbit operations and are subsequently compared with the existing shuttle flight control system performance.
An adaptive singular spectrum analysis method for extracting brain rhythms of electroencephalography
Hu, Hai; Guo, Shengxin; Liu, Ran
2017-01-01
Artifact removal and rhythm extraction from electroencephalography (EEG) signals are important for portable and wearable EEG recording devices. Incorporating a novel grouping rule, we proposed an adaptive singular spectrum analysis (SSA) method for artifact removal and rhythm extraction. Based on the EEG signal amplitude, the grouping rule adaptively determines the first one or two SSA reconstructed components as artifacts and removes them. The remaining reconstructed components are then grouped based on their peak frequencies in the Fourier transform to extract the desired rhythms. The grouping rule thus enables SSA to adapt to EEG signals containing different levels of artifacts and rhythms. Simulated EEG data based on the Markov Process Amplitude (MPA) EEG model and experimental EEG data in the eyes-open and eyes-closed states were used to verify the adaptive SSA method. Results showed better performance in artifact removal and rhythm extraction compared with wavelet decomposition (WDec) and two other recently reported SSA methods. Features of the alpha rhythms extracted using adaptive SSA were calculated to distinguish between the eyes-open and eyes-closed states. Results showed a higher accuracy (95.8%) than those of the WDec method (79.2%) and the infinite impulse response (IIR) filtering method (83.3%). PMID:28674650
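A minimal SSA pipeline, embed / SVD / diagonal-average / group-by-peak-frequency, is sketched below on a synthetic signal (sampling rate, window length, and test signal are invented; the paper's adaptive, amplitude-based grouping of artifact components is not reproduced):

    import numpy as np

    fs, n, L = 250, 1000, 125                  # sample rate, length, window
    t = np.arange(n) / fs
    x = np.sin(2*np.pi*10*t) + 0.5*np.sin(2*np.pi*2*t) \
        + 0.3*np.random.default_rng(6).standard_normal(n)

    K = n - L + 1
    X = np.column_stack([x[i:i+L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    def reconstruct(idx):
        """Diagonal-average the selected rank-1 terms back to a series."""
        Xr = (U[:, idx] * s[idx]) @ Vt[idx]
        out = np.zeros(n); cnt = np.zeros(n)
        for i in range(L):
            out[i:i+K] += Xr[i]; cnt[i:i+K] += 1
        return out / cnt

    # Group leading components by their dominant frequency (alpha band here).
    freqs = np.fft.rfftfreq(n, 1/fs)
    alpha_idx = [k for k in range(10)
                 if 8 <= freqs[np.argmax(np.abs(np.fft.rfft(reconstruct([k]))))] <= 13]
    alpha = reconstruct(alpha_idx)
    print("components grouped into the alpha band:", alpha_idx)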
NASA Astrophysics Data System (ADS)
Limkumnerd, Surachate
2014-03-01
Interest in thin-film fabrication for industrial applications has driven both theoretical and computational aspects of modeling film growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models, such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface-roughening behaviors. These models, however, restrict the motion of an incident atom to the neighborhood of its landing site, which renders them inept for simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models, by repeatedly applying the local diffusion rules for each hop to simulate a large diffusion length, can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation without other probabilistic measures. The formalism is applied, as illustrations, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case, with a smooth front feature similar to that obtained from simulations using the well-known multiple-registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.
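The core linear-algebra step can be sketched compactly: with a substochastic hop matrix Q (rows sum to less than one because the walker may stick), the fundamental matrix N = (I - Q)^(-1) gives expected visit counts, and N R the distribution over final attachment sites. A toy one-dimensional example with a diffusion bias (lattice size, sticking probability, and bias are illustrative, not the modified Wolf-Villain rule itself):

    import numpy as np

    n, p_stick, bias = 50, 0.05, 0.6          # sites, sticking prob., right bias

    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, (i + 1) % n] = (1 - p_stick) * bias        # hop right
        Q[i, (i - 1) % n] = (1 - p_stick) * (1 - bias)  # hop left
    R = np.eye(n) * p_stick                             # absorb where you stand

    N = np.linalg.inv(np.eye(n) - Q)      # expected visits before sticking
    B = N @ R                             # B[i, j] = P(land at i -> stick at j)

    start = 0
    mean_site = (B[start] * np.arange(n)).sum()
    print(f"mean attachment site starting from site 0: {mean_site:.1f}")
    print(f"row sum of B: {B[start].sum():.6f} (should be 1)")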
Evaluating State Options for Reducing Medicaid Churning
Swartz, Katherine; Short, Pamela Farley; Graefe, Deborah R.; Uberoi, Namrata
2015-01-01
Medicaid churning - the constant exit and re-entry of beneficiaries as their eligibility changes - has long been a problem for both Medicaid administrators and recipients. Churning will continue under the Affordable Care Act, because despite new federal rules, Medicaid eligibility will continue to be based on current monthly income. We developed a longitudinal simulation model to evaluate four policy options for modifying or extending Medicaid eligibility to reduce churning. The simulations suggest that two options, extending Medicaid eligibility either to the end of a calendar year or for twelve months after enrollment, would be far more effective in reducing churning than the other options of a three-month extension or eligibility based on projected annual income. States should consider implementation of the option that best balances costs, including both administration and services, with improved health of Medicaid enrollees. PMID:26153313
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.
2016-01-01
Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387
Developing a modular architecture for creation of rule-based clinical diagnostic criteria.
Hong, Na; Pathak, Jyotishman; Chute, Christopher G; Jiang, Guoqian
2016-01-01
With recent advances in computerized patient record systems, there is an urgent need for producing computable and standards-based clinical diagnostic criteria. Notably, constructing rule-based clinical diagnosis criteria has become one of the goals in the International Classification of Diseases (ICD)-11 revision. However, few studies have been done in building a unified architecture to support the need for diagnostic criteria computerization. In this study, we present a modular architecture for enabling the creation of rule-based clinical diagnostic criteria leveraging Semantic Web technologies. The architecture consists of two modules: an authoring module that utilizes a standards-based information model and a translation module that leverages Semantic Web Rule Language (SWRL). In a prototype implementation, we created a diagnostic criteria upper ontology (DCUO) that integrates the ICD-11 content model with the Quality Data Model (QDM). Using the DCUO, we developed a transformation tool that converts QDM-based diagnostic criteria into SWRL representation. We evaluated the domain coverage of the upper ontology model using randomly selected diagnostic criteria from broad domains (n = 20). We also tested the transformation algorithms using 6 QDM templates for ontology population and 15 QDM-based criteria data for rule generation. As a result, the first draft of DCUO contains 14 root classes, 21 subclasses, 6 object properties and 1 data property. Investigation Findings and Signs and Symptoms are the two most commonly used element types. All 6 HQMF templates are successfully parsed and populated into their corresponding domain-specific ontologies, and 14 rules (93.3%) passed the rule validation. Our efforts in developing and prototyping a modular architecture provide useful insight into how to build a scalable solution to support diagnostic criteria representation and computerization.
Design and application of a CA-BDI model to determine farmers' land-use behavior.
Liang, Xiaoying; Chen, Hai; Wang, Yanni; Song, Shixiong
2016-01-01
The belief-desire-intention (BDI) model has been widely used to construct reasoning systems for complex tasks in dynamic environments. We have designed a capabilities and abilities (CA)-BDI farmer decision-making model, which is an extension of the BDI architecture and includes internal representations for farmer household Capabilities and Abilities. This model is used to explore farmer learning mechanisms and to simulate the bounded rational decisions made by farmer households. Our case study focuses on the Gaoqu Commune of Mizhi County, Shaanxi Province, China, where scallion is one of the main cash crops. After comparing the differences between actual land-use changes from 2007 to 2009 and the simulation results, we analyze the validity of the model and discuss the potential and limitations of the farmer land-use decision-making model under three scenarios. Based on the design and implementation of the model, the following conclusions can be drawn: (1) the CA-BDI framework is an appropriate model for exploring learning mechanisms and simulating bounded rational decisions; and (2) local governments should encourage scallion planting by assisting scallion farmer cooperatives and farmers to understand the market risk, standardize the rules of their cooperation, and supervise the contracts made between scallion cooperatives and farmers.
Origin of the moon - The collision hypothesis
NASA Technical Reports Server (NTRS)
Stevenson, D. J.
1987-01-01
Theoretical models of lunar origin involving one or more collisions between the earth and other large sun-orbiting bodies are examined in a critical review. Ten basic propositions of the collision hypothesis (CH) are listed; observational data on mass and angular momentum, bulk chemistry, volatile depletion, trace elements, primordial high temperatures, and orbital evolution are summarized; and the basic tenets of alternative models (fission, capture, and coformation) are reviewed. Consideration is given to the thermodynamics of large impacts, rheological and dynamical problems, numerical simulations based on the CH, disk evolution models, and the chemical implications of the CH. It is concluded that the sound arguments and evidence supporting the CH are not (yet) sufficient to rule out other hypotheses.
NASA Astrophysics Data System (ADS)
Becherer, Nico; Hesser, Jürgen; Kornmesser, Ulrike; Schranz, Dietmar; Männer, Reinhard
2007-03-01
Simulation systems are becoming increasingly essential in medical education. Capturing the physical behaviour of the real world requires sophisticated modelling of the instruments within the virtual environment. Most models currently used are not capable of user-interactive simulation because of the cost of computing the complex underlying analytical equations. Alternatives are often based on simplifying mass-spring systems, which can deliver high update rates at the cost of less realistic motion. In addition, most techniques are limited to narrow, tubular vessel structures or restrict shape alterations to two degrees of freedom, not allowing instrument deformations like torsion. In contrast, our approach combines high update rates with highly realistic motion and can in addition be used with arbitrary structures like vessels or cavities (e.g. atrium, ventricle) without limiting the degrees of freedom. Based on energy minimization, bending energies and vessel structures are treated as linear elastic elements; energies are evaluated at regularly spaced points on the instrument while the distance between the points is fixed, i.e. we simulate an articulated structure of joints with fixed connections between them. Arbitrary tissue structures are modeled through adaptive distance fields and are connected by nodes via an undirected graph system. The instrument points are linked to nodes by a system of rules. Energy minimization uses a quasi-Newton method without preconditioning, and gradients are estimated using a combination of analytical and numerical terms. Results show high quality of motion when compared to a phantom model. The approach is also robust and fast: simulating an instrument with 100 joints runs at 100 Hz on a 3 GHz PC.
Responder analysis without dichotomization.
Zhang, Zhiwei; Chu, Jianxiong; Rahardja, Dewi; Zhang, Hui; Tang, Li
2016-01-01
In clinical trials, it is common practice to categorize subjects as responders and non-responders on the basis of one or more clinical measurements under pre-specified rules. Such a responder analysis is often criticized for the loss of information in dichotomizing one or more continuous or ordinal variables. It is worth noting that a responder analysis can be performed without dichotomization, because the proportion of responders for each treatment can be derived from a model for the original clinical variables (used to define a responder) and estimated by substituting maximum likelihood estimators of model parameters. This model-based approach can be considerably more efficient and more effective for dealing with missing data than the usual approach based on dichotomization. For parameter estimation, the model-based approach generally requires correct specification of the model for the original variables. However, under the sharp null hypothesis, the model-based approach remains unbiased for estimating the treatment difference even if the model is misspecified. We elaborate on these points and illustrate them with a series of simulation studies mimicking a study of Parkinson's disease, which involves longitudinal continuous data in the definition of a responder.
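As a sketch of the point, assume a normal model for the clinical variable: the responder proportion P(Y > c) = 1 - Phi((c - mu)/sigma) can then be estimated by substituting the maximum likelihood estimators rather than by counting dichotomized subjects. The data below are synthetic and the cutoff is invented; scipy is assumed available:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    c = 5.0                                        # pre-specified responder cutoff
    y = rng.normal(loc=6.0, scale=4.0, size=120)   # simulated change scores, one arm

    mu_hat, sigma_hat = y.mean(), y.std(ddof=0)    # normal MLEs
    p_model = 1.0 - stats.norm.cdf((c - mu_hat) / sigma_hat)
    p_dichot = (y > c).mean()                      # usual dichotomized estimate

    print(f"model-based responder proportion:  {p_model:.3f}")
    print(f"dichotomized responder proportion: {p_dichot:.3f}")

The model-based plug-in estimate uses the full continuous information in y, which is the source of the efficiency gain discussed above.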
An architecture for the development of real-time fault diagnosis systems using model-based reasoning
NASA Technical Reports Server (NTRS)
Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday
1992-01-01
Presented here is an architecture for implementing real-time telemetry based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.
scoringRules - A software package for probabilistic model evaluation
NASA Astrophysics Data System (ADS)
Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian
2016-04-01
Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules, such as the continuous ranked probability score, for a variety of distributions F that come up in applied work. Two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov Chain Monte Carlo take this form. Thereby, the scoringRules package provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
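scoringRules itself is an R package; as a hedged illustration of what such closed-form and sample-based implementations compute (the R function names and interfaces are not reproduced here), below are the continuous ranked probability score for a normal forecast and its ensemble approximation, written in Python with scipy assumed available:

    import numpy as np
    from scipy import stats

    def crps_normal(mu, sigma, y):
        """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y."""
        z = (y - mu) / sigma
        return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                        + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

    def crps_sample(draws, y):
        """CRPS estimated from simulation draws: E|X - y| - 0.5 E|X - X'|."""
        draws = np.asarray(draws)
        return (np.abs(draws - y).mean()
                - 0.5 * np.abs(draws[:, None] - draws[None, :]).mean())

    rng = np.random.default_rng(8)
    print(crps_normal(0.0, 1.0, 0.5))                  # exact closed form
    print(crps_sample(rng.normal(0, 1, 4000), 0.5))    # Monte Carlo approximation

The second function is the route taken for forecasts described only by simulation draws, such as MCMC output.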
Embedded CLIPS for SDI BM/C3 simulation and analysis
NASA Technical Reports Server (NTRS)
Gossage, Brett; Nanney, Van
1990-01-01
Nichols Research Corporation is developing the BM/C3 Requirements Analysis Tool (BRAT) for the U.S. Army Strategic Defense Command. BRAT uses embedded CLIPS/Ada to model the decision making processes used by the human commander of a defense system. Embedding CLIPS/Ada in BRAT allows the user to explore the role of the human in Command and Control (C2) and the use of expert systems for automated C2. BRAT models assert facts about the current state of the system, the simulated scenario, and threat information into CLIPS/Ada. A user-defined rule set describes the decision criteria for the commander. We have extended CLIPS/Ada with user-defined functions that allow the firing of a rule to invoke a system action such as weapons release or a change in strategy. The use of embedded CLIPS/Ada will provide a powerful modeling tool for our customer at minimal cost.
Dasgupta, Sakyasingha; Wörgötter, Florentin; Manoonpong, Poramate
2014-01-01
Goal-directed decision making in biological systems is broadly based on associations between conditional and unconditional stimuli. This can be further classified as classical conditioning (correlation-based learning) and operant conditioning (reward-based learning). A number of computational and experimental studies have well established the role of the basal ganglia in reward-based learning, whereas the cerebellum plays an important role in developing specific conditioned responses. Although viewed as distinct learning systems, recent animal experiments point toward their complementary role in behavioral learning, and also show the existence of substantial two-way communication between these two brain structures. Based on this notion of co-operative learning, in this paper we hypothesize that the basal ganglia and cerebellar learning systems work in parallel and interact with each other. We envision that such an interaction is influenced by a reward modulated heterosynaptic plasticity (RMHP) rule at the thalamus, guiding the overall goal-directed behavior. Using a recurrent neural network actor-critic model of the basal ganglia and a feed-forward correlation-based learning model of the cerebellum, we demonstrate that the RMHP rule can effectively balance the outcomes of the two learning systems. This is tested using simulated environments of increasing complexity with a four-wheeled robot in a foraging task in both static and dynamic configurations. Although modeled with a simplified level of biological abstraction, we clearly demonstrate that such an RMHP-induced combinatorial learning mechanism leads to more stable and faster learning of goal-directed behaviors, in comparison to the individual systems. Thus, in this paper we provide a computational model for adaptive combination of the basal ganglia and cerebellum learning systems by way of neuromodulated plasticity for goal-directed decision making in biological and bio-mimetic organisms. PMID:25389391
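A toy rule in the spirit of RMHP (not the paper's exact formulation): the reward signal modulates a Hebbian term for each learning system's contribution, and heterosynaptic normalization shifts weight toward the system whose output earns reward. Two noisy stand-in policies replace the actor-critic and correlation-based learners:

    import numpy as np

    rng = np.random.default_rng(9)

    eta = 0.05
    w = np.array([0.5, 0.5])                  # combination weights (thalamus stage)

    for trial in range(2000):
        u = np.array([1.0 + 0.3 * rng.standard_normal(),   # learner A: on target
                      0.2 + 0.5 * rng.standard_normal()])  # learner B: off target
        action = w @ u                                     # combined motor output
        r = max(0.0, 1.0 - abs(action - 1.0))              # reward: target is 1.0
        w += eta * r * u * action       # reward-modulated Hebbian term per input
        w = np.clip(w, 0.0, None)
        w /= w.sum()                    # heterosynaptic competition/normalization

    print(f"final combination weights: A={w[0]:.2f}, B={w[1]:.2f}")

Over trials the weight on the consistently rewarded learner approaches one, which is the balancing behavior the RMHP rule is meant to produce.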
Monte Carlo simulations of disorder in ZnSnN2 and the effects on the electronic structure
Lany, Stephan; Fioretti, Angela N.; Zawadzki, Paweł P.; ...
2017-08-10
In multinary compound semiconductors, cation disorder can decisively alter the electronic properties and impact potential applications. ZnSnN2 is a ternary nitride of interest for photovoltaics, which forms in a wurtzite-derived crystal structure. In the ground state, every N anion is coordinated by two Zn and two Sn cations, thereby observing the octet rule locally. Using a motif-based model Hamiltonian, we performed Monte Carlo simulations that provide atomistic representations of ZnSnN2 with varying degrees of cation disorder. Subsequent electronic structure calculations describe the evolution of band gaps, optical properties, and carrier localization effects as a function of the disorder. We find that octet-rule conserving disorder is practically impossible to avoid but perfectly benign, with hardly any effects on the electronic structure. In contrast, a fully random cation distribution would be very detrimental, but fortunately it is energetically highly unfavorable. A degree of disorder that can realistically be expected for nonequilibrium thin-film deposition leads to a moderate band-gap reduction and to moderate carrier localization effects. Comparing the simulated structures with experimental samples grown by sputtering, we find evidence that these samples indeed incorporate a certain degree of octet-rule violating disorder, which is reflected in the x-ray diffraction and in the optical absorption spectra. This study demonstrates that the electronic properties of ZnSnN2 are dominated by changes of the local coordination environments rather than long-range ordering effects.
NASA Astrophysics Data System (ADS)
Botyánszki, János; Kasen, Daniel; Plewa, Tomasz
2018-01-01
The classic single-degenerate model for the progenitors of Type Ia supernovae (SNe Ia) predicts that the supernova ejecta should be enriched with solar-like abundance material stripped from the companion star. Spectroscopic observations of normal SNe Ia at late times, however, have not resulted in definitive detections of hydrogen. In this Letter, we study line formation in SNe Ia at nebular times using non-LTE spectral modeling. We present, for the first time, multidimensional radiative transfer calculations of SNe Ia with stripped material mixed into the ejecta core, based on hydrodynamical simulations of ejecta-companion interaction. We find that interaction models with main-sequence companions produce significant Hα emission at late times, ruling out these types of binaries as viable progenitors of SNe Ia. We also predict significant He I line emission at optical and near-infrared wavelengths for both hydrogen-rich and helium-rich material, providing an additional observational probe of stripped ejecta. We produce models with reduced stripped masses and find a more stringent limit of M_st ≲ 1 × 10^-4 M_⊙ of stripped companion material for SN 2011fe.
Fuzzy-PI controller to control the velocity parameter of Induction Motor
NASA Astrophysics Data System (ADS)
Malathy, R.; Balaji, V.
2018-04-01
Induction motors are widely used in industry because of their high robustness, reliability, low cost, high efficiency and good self-starting capability. Despite these advantages, they have some limitations: (1) the standard motor is not a true constant-speed machine, its full-load slip varying by less than 1% (in high-horsepower motors); and (2) it is not inherently capable of providing variable-speed operation. In order to address these limitations, smart motor controls and variable-speed controllers are used. Motor applications involve nonlinearities, which a fuzzy logic controller can handle efficiently, acting much like a human operator. This paper presents the modelling of the plant. The fuzzy logic controller (FLC) relies on a set of linguistic if-then rules, in a rule-based Mamdani scheme, for a closed-loop induction motor model. The motor model is designed and the membership functions are chosen according to the parameters of the motor model. The simulation results capture the nonlinearity of the induction motor model. A conventional PI controller is compared in practice with the fuzzy logic controller using Simulink.
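A minimal Mamdani sketch of such a fuzzy-PI loop (the membership functions, rule base, gains, and first-order plant are all illustrative, not the paper's Simulink model): the inputs are the speed error and its change, and the defuzzified output increments the control action:

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function on breakpoints a <= b <= c."""
        return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

    labels = {"N": (-2, -1, 0), "Z": (-1, 0, 1), "P": (0, 1, 2)}
    out_centers = {"N": -1.0, "Z": 0.0, "P": 1.0}
    # if e is X and de is Y then du is RULES[X][Y]
    RULES = {"N": {"N": "N", "Z": "N", "P": "Z"},
             "Z": {"N": "N", "Z": "Z", "P": "P"},
             "P": {"N": "Z", "Z": "P", "P": "P"}}

    def fuzzy_pi(e, de):
        num = den = 0.0
        for le, pe in labels.items():
            for lde, pde in labels.items():
                w = min(tri(e, *pe), tri(de, *pde))        # rule firing strength
                num += w * out_centers[RULES[le][lde]]     # weighted output centers
                den += w
        return num / den if den else 0.0                   # centroid defuzzification

    # One closed-loop run against a first-order plant standing in for the motor.
    speed, ref, u, prev_e = 0.0, 1.0, 0.0, 0.0
    for _ in range(50):
        e = ref - speed
        u += 0.5 * fuzzy_pi(e, e - prev_e)    # incremental (PI-like) action
        prev_e = e
        speed += 0.2 * (u - speed)            # illustrative plant dynamics
    print(f"final speed: {speed:.3f} (reference 1.0)")

The incremental form is what gives the controller its PI character: the rule base shapes a proportional response, and the accumulation in u supplies the integral action.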
Klepiszewski, K; Schmitt, T G
2002-01-01
While conventional rule-based, real-time flow control of sewer systems is in common use, control systems based on fuzzy logic have been used only rarely, though successfully. The intention of this study is to compare a conventional rule-based control of a combined sewer system with a fuzzy logic control by using hydrodynamic simulation. The objective of both control strategies is to reduce the combined sewer overflow volume by optimizing the utilized storage capacities of four combined sewer overflow tanks. The control systems adjust the outflow of the four combined sewer overflow tanks depending on the water levels inside the structures, and both systems use an identical rule base. The developed control systems are tested and optimized for a single storm event producing heterogeneous hydraulic load conditions and local discharge. Finally, the efficiencies of the two control systems are compared for two further storm events. The results indicate that the conventional rule-based control and the fuzzy control reach the objective of the control strategy equally well. In spite of the higher expense of designing the fuzzy control system, its use provides no advantages in this case.
Web-based Interactive Landform Simulation Model - Grand Canyon
NASA Astrophysics Data System (ADS)
Luo, W.; Pelletier, J. D.; Duffin, K.; Ormand, C. J.; Hung, W.; Iverson, E. A.; Shernoff, D.; Zhai, X.; Chowdary, A.
2013-12-01
Earth science educators need interactive tools to engage and enable students to better understand how Earth systems work over geologic time scales. The evolution of landforms is ripe for interactive, inquiry-based learning exercises because landforms exist all around us. The Web-based Interactive Landform Simulation Model - Grand Canyon (WILSIM-GC, http://serc.carleton.edu/landform/) is a continuation and upgrade of the simple cellular automata (CA) rule-based model (WILSIM-CA, http://www.niu.edu/landform/) that can be accessed from anywhere with an Internet connection. Major improvements in WILSIM-GC include adopting a physically based model and the latest Java technology. The physically based model is incorporated to illustrate the fluvial processes involved in land sculpting during the development and evolution of one of the most famous landforms on Earth: the Grand Canyon. It is hoped that this focus on a famous and specific landscape will attract greater student interest and provide opportunities for students to learn not only how different processes interact to form the landform we observe today, but also how models and data are used together to enhance our understanding of the processes involved. The latest developments in Java technology (such as Java OpenGL for access to ubiquitous fast graphics hardware, Trusted Applet for file input and output, and multithreading to take advantage of modern multi-core CPUs) are incorporated into WILSIM-GC, and active, standards-aligned curriculum materials, guided by educational psychology theory on science learning, will be developed to accompany the model. This project is funded by the NSF TUES program.