Hybrid modeling of nitrate fate in large catchments using fuzzy-rules
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Haberlandt, Uwe
2010-05-01
Especially for nutrient balance simulations, physically based ecohydrological modeling needs an abundance of measured data and model parameters, which for large catchments all too often are not available in sufficient spatial or temporal resolution or are simply unknown. For efficient large-scale studies it is thus beneficial to have methods at one's disposal which are parsimonious concerning the number of model parameters and the necessary input data. One such method is fuzzy-rule based modeling, which compared to other machine-learning techniques has the advantages to produce models (the fuzzy-rules) which are physically interpretable to a certain extent, and to allow the explicit introduction of expert knowledge through pre-defined rules. The study focuses on the application of fuzzy-rule based modeling for nitrate simulation in large catchments, in particular concerning decision support. Fuzzy-rule based modeling enables the generation of simple, efficient, easily understandable models with nevertheless satisfactory accuracy for problems of decision support. The chosen approach encompasses a hybrid metamodeling, which includes the generation of fuzzy-rules with data originating from physically based models as well as a coupling with a physically based water balance model. For the generation of the needed training data and also as coupled water balance model the ecohydrological model SWAT is employed. The conceptual model divides the nitrate pathway into three parts. The first fuzzy-module calculates nitrate leaching with the percolating water from soil surface to groundwater, the second module simulates groundwater passage, and the final module replaces the in-stream processes. The aim of this modularization is to create flexibility for using each of the modules on its own, for changing or completely replacing it. For fuzzy-rule based modeling this can explicitly mean that the re-training of one of the modules with newly available data will be possible without problem, while the module assembly does not have to be modified. Apart from the concept of hybrid metamodeling first results are presented for the fuzzy-module for nitrate passage through the unsaturated zone.
Automated visualization of rule-based models
Tapia, Jose-Juan; Faeder, James R.
2017-01-01
Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816
Podolak, Charles J.
2013-01-01
An ensemble of rule-based models was constructed to assess possible future braided river planform configurations for the Toklat River in Denali National Park and Preserve, Alaska. This approach combined an analysis of large-scale influences on stability with several reduced-complexity models to produce the predictions at a practical level for managers concerned about the persistence of bank erosion while acknowledging the great uncertainty in any landscape prediction. First, a model of confluence angles reproduced observed angles of a major confluence, but showed limited susceptibility to a major rearrangement of the channel planform downstream. Second, a probabilistic map of channel locations was created with a two-parameter channel avulsion model. The predicted channel belt location was concentrated in the same area as the current channel belt. Finally, a suite of valley-scale channel and braid plain characteristics were extracted from a light detection and ranging (LiDAR)-derived surface. The characteristics demonstrated large-scale stabilizing topographic influences on channel planform. The combination of independent analyses increased confidence in the conclusion that the Toklat River braided planform is a dynamically stable system due to large and persistent valley-scale influences, and that a range of avulsive perturbations are likely to result in a relatively unchanged planform configuration in the short term.
Simulation of large-scale rule-based models
Colvin, Joshua; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.; Von Hoff, Daniel D.; Posner, Richard G.
2009-01-01
Motivation: Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. Results: DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogenous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein–protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of StochSim. DYNSTOC differs from StochSim by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. Availability: DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at http://public.tgen.org/dynstoc/. Contact: dynstoc@tgen.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19213740
The research of selection model based on LOD in multi-scale display of electronic map
NASA Astrophysics Data System (ADS)
Zhang, Jinming; You, Xiong; Liu, Yingzhen
2008-10-01
This paper proposes a selection model based on LOD to aid the display of electronic map. The ratio of display scale to map scale is regarded as a LOD operator. The categorization rule, classification rule, elementary rule and spatial geometry character rule of LOD operator setting are also concluded.
Fermi rules out the IC/CMB model for the Large-Scale Jet X-ray emission of 3C 273
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, E. T.
2014-01-01
The process responsible for the Chandra-detected X-ray emission from the large-scale jets of powerful quasars is not clear yet. The two main models are inverse Compton scattering off the cosmic microwave background (IC/CMB) photons and synchrotron emission from a population of electrons separate from those producing the radio-IR emission. These two models imply radically different conditions in the large scale jet in terms of jet speed and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the larger-scale environment. Georganopoulos et al. (2006) proposed a diagnostic based on a fundamental difference between these two models: the production of synchrotron X-rays requires multi-TeV electrons, while the EC/CMB model requires a cutoff in the electron energy distribution below TeV energies. This has significant implications for the gamma-ray emission predicted by these two models. Here we present new Fermi observations that put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that clearly violates the flux expected from the IC/CMB X-ray interpretation found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source. Further, the Fermi upper limit constraints the Doppler beaming factor delta <5.
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E.
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics. PMID:26381745
Large fluctuations in anti-coordination games on scale-free graphs
NASA Astrophysics Data System (ADS)
Sabsovich, Daniel; Mobilia, Mauro; Assaf, Michael
2017-05-01
We study the influence of the complex topology of scale-free graphs on the dynamics of anti-coordination games (e.g. snowdrift games). These reference models are characterized by the coexistence (evolutionary stable mixed strategy) of two competing species, say ‘cooperators’ and ‘defectors’, and, in finite systems, by metastability and large-fluctuation-driven fixation. In this work, we use extensive computer simulations and an effective diffusion approximation (in the weak selection limit) to determine under which circumstances, depending on the individual-based update rules, the topology drastically affects the long-time behavior of anti-coordination games. In particular, we compute the variance of the number of cooperators in the metastable state and the mean fixation time when the dynamics is implemented according to the voter model (death-first/birth-second process) and the link dynamics (birth/death or death/birth at random). For the voter update rule, we show that the scale-free topology effectively renormalizes the population size and as a result the statistics of observables depend on the network’s degree distribution. In contrast, such a renormalization does not occur with the link dynamics update rule and we recover the same behavior as on complete graphs.
The X-ray emission mechanism of large scale powerful quasar jets: Fermi rules out IC/CMB for 3C 273.
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, Eileen T.
2013-12-01
The process responsible for the Chandra-detected X-ray emission from the large-scale jets of powerful quasars is not clear yet. The two main models are inverse Compton scattering off the cosmic microwave background photons (IC/CMB) and synchrotron emission from a population of electrons separate from those producing the radio-IR emission. These two models imply radically different conditions in the large scale jet in terms of jet speed, kinetic power, and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the larger-scale environment. Georganopoulos et al. (2006) proposed a diagnostic based on a fundamental difference between these two models: the production of synchrotron X-rays requires multi-TeV electrons, while the EC/CMB model requires a cutoff in the electron energy distribution below TeV energies. This has significant implications for the γ-ray emission predicted by these two models. Here we present new Fermi observations that put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that clearly violates the flux expected from the IC/CMB X-ray interpretation found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source. Further, the upper limit from Fermi puts a limit on the Doppler beaming factor of at least δ <9, assuming equipartition fields, and possibly as low as δ <5 assuming no major deceleration of the jet from knots A through D1.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 104 neurons and 5.1 × 107 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the super computer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
Simulation Of Combat With An Expert System
NASA Technical Reports Server (NTRS)
Provenzano, J. P.
1989-01-01
Proposed expert system predicts outcomes of combat situations. Called "COBRA", combat outcome based on rules for attrition, system selects rules for mathematical modeling of losses and discrete events in combat according to previous experiences. Used with another software module known as the "Game". Game/COBRA software system, consisting of Game and COBRA modules, provides for both quantitative aspects and qualitative aspects in simulations of battles. COBRA intended for simulation of large-scale military exercises, concepts embodied in it have much broader applicability. In industrial research, knowledge-based system enables qualitative as well as quantitative simulations.
Complex dynamics and empirical evidence (Invited Paper)
NASA Astrophysics Data System (ADS)
Delli Gatti, Domenico; Gaffeo, Edoardo; Giulioni, Gianfranco; Gallegati, Mauro; Kirman, Alan; Palestrini, Antonio; Russo, Alberto
2005-05-01
Standard macroeconomics, based on a reductionist approach centered on the representative agent, is badly equipped to explain the empirical evidence where heterogeneity and industrial dynamics are the rule. In this paper we show that a simple agent-based model of heterogeneous financially fragile agents is able to replicate a large number of scaling type stylized facts with a remarkable degree of statistical precision.
Rule-based modeling and simulations of the inner kinetochore structure.
Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar
2013-09-01
Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variation of protein complexes. Available classical modeling approaches are often insufficient for the analysis of very large and complex networks in detail. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimation of the probability distribution of the inner kinetochore 3D architecture and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3 containing nucleosome only occurs efficiently for higher protein concentration realized during S-phase but may be not in G1. Above a certain nucleosome distance the protein bridge barely formed pointing towards the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allow us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.
Detectability of large-scale power suppression in the galaxy distribution
NASA Astrophysics Data System (ADS)
Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan
2010-12-01
Suppression in primordial power on the Universe’s largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a galaxy redshift survey with large-volume (˜100Gpc3). A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.
Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong
2017-01-01
Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non-sensitive rules variation ratios with the traditional spatial-temporal k-anonymity method. Furthermore, we also found the performance variation tendency from the parameter K value, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules.
Wu, Chenxue; Liu, Zhao; Zhu, Yunhong
2017-01-01
Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non-sensitive rules variation ratios with the traditional spatial-temporal k-anonymity method. Furthermore, we also found the performance variation tendency from the parameter K value, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules. PMID:28767687
Deep Reinforcement Learning of Cell Movement in the Early Stage of C. elegans Embryogenesis.
Wang, Zi; Wang, Dali; Li, Chengcheng; Xu, Yichi; Li, Husheng; Bao, Zhirong
2018-04-25
Cell movement in the early phase of C. elegans development is regulated by a highly complex process in which a set of rules and connections are formulated at distinct scales. Previous efforts have demonstrated that agent-based, multi-scale modeling systems can integrate physical and biological rules and provide new avenues to study developmental systems. However, the application of these systems to model cell movement is still challenging and requires a comprehensive understanding of regulatory networks at the right scales. Recent developments in deep learning and reinforcement learning provide an unprecedented opportunity to explore cell movement using 3D time-lapse microscopy images. We present a deep reinforcement learning approach within an agent-based modeling system to characterize cell movement in the embryonic development of C. elegans. Our modeling system captures the complexity of cell movement patterns in the embryo and overcomes the local optimization problem encountered by traditional rule-based, agent-based modeling that uses greedy algorithms. We tested our model with two real developmental processes: the anterior movement of the Cpaaa cell via intercalation and the rearrangement of the superficial left-right asymmetry. In the first case, the model results suggested that Cpaaa's intercalation is an active directional cell movement caused by the continuous effects from a longer distance (farther than the length of two adjacent cells), as opposed to a passive movement caused by neighbor cell movements. In the second case, a leader-follower mechanism well explained the collective cell movement pattern in the asymmetry rearrangement. These results showed that our approach to introduce deep reinforcement learning into agent-based modeling can test regulatory mechanisms by exploring cell migration paths in a reverse engineering perspective. This model opens new doors to explore the large datasets generated by live imaging. Source code is available at https://github.com/zwang84/drl4cellmovement. dwang7@utk.edu, baoz@mskcc.org. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding the searching process by WMO-ASMO, embedded with weighted non-dominated sorting genetic algorithm II (WNSGAII). The intercomparison of non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO are conducted in the large-scale reservoir system of Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and the median of ecological index, optimized by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation (530.032 billion kW h) and ecological index (1.675)) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is proved to be more efficient and could provide better Pareto frontier.
Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.
2015-01-01
We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905
Similarity Rules for Scaling Solar Sail Systems
NASA Technical Reports Server (NTRS)
Canfield, Stephen L.; Beard, James W., III; Peddieson, John; Ewing, Anthony; Garbe, Greg
2004-01-01
Future science missions will require solar sails on the order 10,000 sq m (or larger). However, ground and flight demonstrations must be conducted at significantly smaller Sizes (400 sq m for ground demo) due to limitations of ground-based facilities and cost and availability of flight opportunities. For this reason, the ability to understand the process of scalability, as it applies to solar sail system models and test data, is crucial to the advancement of this technology. This report will address issues of scaling in solar sail systems, focusing on structural characteristics, by developing a set of similarity or similitude functions that will guide the scaling process. The primary goal of these similarity functions (process invariants) that collectively form a set of scaling rules or guidelines is to establish valid relationships between models and experiments that are performed at different orders of scale. In the near term, such an effort will help guide the size and properties of a flight validation sail that will need to be flown to accurately represent a large, mission-level sail.
RuleMonkey: software for stochastic simulation of rule-based models
2010-01-01
Background The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321
Exact Mass-Coupling Relation for the Homogeneous Sine-Gordon Model.
Bajnok, Zoltán; Balog, János; Ito, Katsushi; Satoh, Yuji; Tóth, Gábor Zsolt
2016-05-06
We derive the exact mass-coupling relation of the simplest multiscale quantum integrable model, i.e., the homogeneous sine-Gordon model with two mass scales. The relation is obtained by comparing the perturbed conformal field theory description of the model valid at short distances to the large distance bootstrap description based on the model's integrability. In particular, we find a differential equation for the relation by constructing conserved tensor currents, which satisfy a generalization of the Θ sum rule Ward identity. The mass-coupling relation is written in terms of hypergeometric functions.
A networked voting rule for democratic representation
NASA Astrophysics Data System (ADS)
Hernández, Alexis R.; Gracia-Lázaro, Carlos; Brigatti, Edgardo; Moreno, Yamir
2018-03-01
We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process.
Scaling and modeling of turbulent suspension flows
NASA Technical Reports Server (NTRS)
Chen, C. P.
1989-01-01
Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on two-fluid, continua formulation. The modes of particle-fluid interactions are discussed based on the length and time scale ratio, which depends on the properties of the particles and the characteristics of the flow turbulence. For particle size smaller than or comparable with the Kolmogorov length scale and concentration low enough for neglecting direct particle-particle interaction, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flow. Much work still needs to be done to account for the poly-dispersed effects and the extension to dense suspension flows.
Toward micro-scale spatial modeling of gentrification
NASA Astrophysics Data System (ADS)
O'Sullivan, David
A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
Rule-based modeling with Virtual Cell
Schaff, James C.; Vasilescu, Dan; Moraru, Ion I.; Loew, Leslie M.; Blinov, Michael L.
2016-01-01
Summary: Rule-based modeling is invaluable when the number of possible species and reactions in a model become too large to allow convenient manual specification. The popular rule-based software tools BioNetGen and NFSim provide powerful modeling and simulation capabilities at the cost of learning a complex scripting language which is used to specify these models. Here, we introduce a modeling tool that combines new graphical rule-based model specification with existing simulation engines in a seamless way within the familiar Virtual Cell (VCell) modeling environment. A mathematical model can be built integrating explicit reaction networks with reaction rules. In addition to offering a large choice of ODE and stochastic solvers, a model can be simulated using a network free approach through the NFSim simulation engine. Availability and implementation: Available as VCell (versions 6.0 and later) at the Virtual Cell web site (http://vcell.org/). The application installs and runs on all major platforms and does not require registration for use on the user’s computer. Tutorials are available at the Virtual Cell website and Help is provided within the software. Source code is available at Sourceforge. Contact: vcell_support@uchc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27497444
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the the hierarchical amplitudes QJ at large scales, r is greater than or approximately Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Health-Terrain: Visualizing Large Scale Health Data
2014-12-01
systems can only be realized if the quality of emerging large medical databases can be characterized and the meaning of the data understood. For this...Designed and tested an evaluation procedure for health data visualization system. This visualization framework offers a real time and web-based solution...rule is shown in the table, with the quality measures of each rule including the support, confidence, Laplace, Gain, p-s, lift and Conviction. We
A networked voting rule for democratic representation
Brigatti, Edgardo; Moreno, Yamir
2018-01-01
We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals’ interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process. PMID:29657817
Adaptive WTA with an analog VLSI neuromorphic learning chip.
Häfliger, Philipp
2007-03-01
In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
NASA Astrophysics Data System (ADS)
Fyta, Maria; Netz, Roland R.
2012-03-01
Using molecular dynamics (MD) simulations in conjunction with the SPC/E water model, we optimize ionic force-field parameters for seven different halide and alkali ions, considering a total of eight ion-pairs. Our strategy is based on simultaneous optimizing single-ion and ion-pair properties, i.e., we first fix ion-water parameters based on single-ion solvation free energies, and in a second step determine the cation-anion interaction parameters (traditionally given by mixing or combination rules) based on the Kirkwood-Buff theory without modification of the ion-water interaction parameters. In doing so, we have introduced scaling factors for the cation-anion Lennard-Jones (LJ) interaction that quantify deviations from the standard mixing rules. For the rather size-symmetric salt solutions involving bromide and chloride ions, the standard mixing rules work fine. On the other hand, for the iodide and fluoride solutions, corresponding to the largest and smallest anion considered in this work, a rescaling of the mixing rules was necessary. For iodide, the experimental activities suggest more tightly bound ion pairing than given by the standard mixing rules, which is achieved in simulations by reducing the scaling factor of the cation-anion LJ energy. For fluoride, the situation is different and the simulations show too large attraction between fluoride and cations when compared with experimental data. For NaF, the situation can be rectified by increasing the cation-anion LJ energy. For KF, it proves necessary to increase the effective cation-anion Lennard-Jones diameter. The optimization strategy outlined in this work can be easily adapted to different kinds of ions.
Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.
Frommholz, Ingo; Roelleke, Thomas
2016-01-01
Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and a nice conceptual idea to model Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules, and the semantics associated with the evaluation of the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore, HySpirit provides probabilistic relational indexes and parallel and distributed processing . The main contribution of this paper is a consolidated view on the methods of the HySpirit system to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical for data (information) management and analysis.
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, etal. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
How institutions shaped the last major evolutionary transition to large-scale human societies
Powers, Simon T.; van Schaik, Carel P.; Lehmann, Laurent
2016-01-01
What drove the transition from small-scale human societies centred on kinship and personal exchange, to large-scale societies comprising cooperation and division of labour among untold numbers of unrelated individuals? We propose that the unique human capacity to negotiate institutional rules that coordinate social actions was a key driver of this transition. By creating institutions, humans have been able to move from the default ‘Hobbesian’ rules of the ‘game of life’, determined by physical/environmental constraints, into self-created rules of social organization where cooperation can be individually advantageous even in large groups of unrelated individuals. Examples include rules of food sharing in hunter–gatherers, rules for the usage of irrigation systems in agriculturalists, property rights and systems for sharing reputation between mediaeval traders. Successful institutions create rules of interaction that are self-enforcing, providing direct benefits both to individuals that follow them, and to individuals that sanction rule breakers. Forming institutions requires shared intentionality, language and other cognitive abilities largely absent in other primates. We explain how cooperative breeding likely selected for these abilities early in the Homo lineage. This allowed anatomically modern humans to create institutions that transformed the self-reliance of our primate ancestors into the division of labour of large-scale human social organization. PMID:26729937
The CPAT 2.0.2 Domain Model - How CPAT 2.0.2 "Thinks" From an Analyst Perspective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waddell, Lucas; Muldoon, Frank; Melander, Darryl J.
To help effectively plan the management and modernization of their large and diverse fleets of vehicles, the Program Executive Office Ground Combat Systems (PEO GCS) and the Program Executive Office Combat Support and Combat Service Support (PEO CS &CSS) commissioned the development of a large - scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet - respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This reportmore » contains a description of the organizational fleet structure and a thorough explanation of the business rules that the CPAT formulation follows involving performance, scheduling, production, and budgets. This report, which is an update to the original CPAT domain model published in 2015 (SAND2015 - 4009), covers important new CPAT features. This page intentionally left blank« less
A model for ionic polymer metal composites as sensors
NASA Astrophysics Data System (ADS)
Bonomo, C.; Fortuna, L.; Giannone, P.; Graziani, S.; Strazzeri, S.
2006-06-01
This paper introduces a comprehensive model of sensors based on ionic polymer metal composites (IPMCs) working in air. Significant quantities ruling the sensing properties of IPMC-based sensors are taken into account and the dynamics of the sensors are modelled. A large amount of experimental evidence is given for the excellent agreement between estimations obtained using the proposed model and the observed signals. Furthermore, the effect of sensor scaling is investigated, giving interesting support to the activities involved in the design of sensing devices based on these novel materials. We observed that the need for a wet environment is not a key issue for IPMC-based sensors to work well. This fact allows us to put IPMC-based sensors in a totally different light to the corresponding actuators, showing that sensors do not suffer from the same drawbacks.
Entity Bases: Large-Scale Knowledgebases for Intelligence Data
2009-02-01
declaratively expressed as Datalog rules . The EntityBase supports two query scenarios: • Free-Form Querying: A human analyst or a client program can pose...integration, Prometheus follows the Inverse Rules algo- rithm (Duschka 1997) with additional optimizations (Thakkar et al. 2005). We use the mediator...Discovery and Data Mining (PAKDD), Sydney, Australia. Crammer , K., Dekel, O., Keshet, J., Shalev-Shwartz, S., and Singer, Y. (2006). Online passive
On the effects of adaptive reservoir operating rules in hydrological physically-based models
NASA Astrophysics Data System (ADS)
Giudici, Federico; Anghileri, Daniela; Castelletti, Andrea; Burlando, Paolo
2017-04-01
Recent years have seen a significant increase of the human influence on the natural systems both at the global and local scale. Accurately modeling the human component and its interaction with the natural environment is key to characterize the real system dynamics and anticipate future potential changes to the hydrological regimes. Modern distributed, physically-based hydrological models are able to describe hydrological processes with high level of detail and high spatiotemporal resolution. Yet, they lack in sophistication for the behavior component and human decisions are usually described by very simplistic rules, which might underperform in reproducing the catchment dynamics. In the case of water reservoir operators, these simplistic rules usually consist of target-level rule curves, which represent the average historical level trajectory. Whilst these rules can reasonably reproduce the average seasonal water volume shifts due to the reservoirs' operation, they cannot properly represent peculiar conditions, which influence the actual reservoirs' operation, e.g., variations in energy price or water demand, dry or wet meteorological conditions. Moreover, target-level rule curves are not suitable to explore the water system response to climate and socio economic changing contexts, because they assume a business-as-usual operation. In this work, we quantitatively assess how the inclusion of adaptive reservoirs' operating rules into physically-based hydrological models contribute to the proper representation of the hydrological regime at the catchment scale. In particular, we contrast target-level rule curves and detailed optimization-based behavioral models. We, first, perform the comparison on past observational records, showing that target-level rule curves underperform in representing the hydrological regime over multiple time scales (e.g., weekly, seasonal, inter-annual). Then, we compare how future hydrological changes are affected by the two modeling approaches by considering different future scenarios comprising climate change projections of precipitation and temperature and projections of electricity prices. We perform this comparative assessment on the real-world water system of Lake Como catchment in the Italian Alps, which is characterized by the massive presence of artificial hydropower reservoirs heavily altering the natural hydrological regime. The results show how different behavioral model approaches affect the system representation in terms of hydropower performance, reservoirs dynamics and hydrological regime under different future scenarios.
Can standard cosmological models explain the observed Abell cluster bulk flow?
NASA Technical Reports Server (NTRS)
Strauss, Michael A.; Cen, Renyue; Ostriker, Jeremiah P.; Laure, Tod R.; Postman, Marc
1995-01-01
Lauer and Postman (LP) observed that all Abell clusters with redshifts less than 15,000 km/s appear to be participating in a bulk flow of 689 km/s with respect to the cosmic microwave background. We find this result difficult to reconcile with all popular models for large-scale structure formation that assume Gaussian initial conditions. This conclusion is based on Monte Carlo realizations of the LP data, drawn from large particle-mesh N-body simulations for six different models of the initial power spectrum (standard, tilted, and Omega(sub 0) = 0.3 cold dark matter, and two variants of the primordial baryon isocurvature model). We have taken special care to treat properly the longest-wavelength components of the power spectra. The simulations are sampled, 'observed,' and analyzed as identically as possible to the LP cluster sample. Large-scale bulk flows as measured from clusters in the simulations are in excellent agreement with those measured from the grid: the clusters do not exhibit any strong velocity bias on large scales. Bulk flows with amplitude as large as that reported by LP are not uncommon in the Monte Carlo data stes; the distribution of measured bulk flows before error bias subtraction is rougly Maxwellian, with a peak around 400 km/s. However the chi squared of the observed bulk flow, taking into account the anisotropy of the error ellipsoid, is much more difficult to match in the simulations. The models examined are ruled out at confidence levels between 94% and 98%.
Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis
2016-01-01
Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
Using an empirical and rule-based modeling approach to map cause of disturbance in U.S
Todd A. Schroeder; Gretchen G. Moisen; Karen Schleeweis; Chris Toney; Warren B. Cohen; Zhiqiang Yang; Elizabeth A. Freeman
2015-01-01
Recently completing over a decade of research, the NASA/NACP funded North American Forest Dynamics (NAFD) project has led to several important advancements in the way U.S. forest disturbance dynamics are mapped at regional and continental scales. One major contribution has been the development of an empirical and rule-based modeling approach which addresses two of the...
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reactions rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration and especially for computing small by value sensitivity indices. It is a crucial element since even small indices may be important to be estimated in order to achieve a more accurate distribution of inputs influence and a more reliable interpretation of the mathematical model results.
Redundancy checking algorithms based on parallel novel extension rule
NASA Astrophysics Data System (ADS)
Liu, Lei; Yang, Yang; Li, Guangli; Wang, Qi; Lü, Shuai
2017-05-01
Redundancy checking (RC) is a key knowledge reduction technology. Extension rule (ER) is a new reasoning method, first presented in 2003 and well received by experts at home and abroad. Novel extension rule (NER) is an improved ER-based reasoning method, presented in 2009. In this paper, we first analyse the characteristics of the extension rule, and then present a simple algorithm for redundancy checking based on extension rule (RCER). In addition, we introduce MIMF, a type of heuristic strategy. Using the aforementioned rule and strategy, we design and implement RCHER algorithm, which relies on MIMF. Next we design and implement an RCNER (redundancy checking based on NER) algorithm based on NER. Parallel computing greatly accelerates the NER algorithm, which has weak dependence among tasks when executed. Considering this, we present PNER (parallel NER) and apply it to redundancy checking and necessity checking. Furthermore, we design and implement the RCPNER (redundancy checking based on PNER) and NCPPNER (necessary clause partition based on PNER) algorithms as well. The experimental results show that MIMF significantly influences the acceleration of algorithm RCER in formulae on a large scale and high redundancy. Comparing PNER with NER and RCPNER with RCNER, the average speedup can reach up to the number of task decompositions when executed. Comparing NCPNER with the RCNER-based algorithm on separating redundant formulae, speedup increases steadily as the scale of the formulae is incrementing. Finally, we describe the challenges that the extension rule will be faced with and suggest possible solutions.
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
Closing in on the large-scale CMB power asymmetry
NASA Astrophysics Data System (ADS)
Contreras, D.; Hutchinson, J.; Moss, A.; Scott, D.; Zibin, J. P.
2018-03-01
Measurements of the cosmic microwave background (CMB) temperature anisotropies have revealed a dipolar asymmetry in power at the largest scales, in apparent contradiction with the statistical isotropy of standard cosmological models. The significance of the effect is not very high, and is dependent on a posteriori choices. Nevertheless, a number of models have been proposed that produce a scale-dependent asymmetry. We confront several such models for a physical, position-space modulation with CMB temperature observations. We find that, while some models that maintain the standard isotropic power spectrum are allowed, others, such as those with modulated tensor or uncorrelated isocurvature modes, can be ruled out on the basis of the overproduction of isotropic power. This remains the case even when an extra isocurvature mode fully anticorrelated with the adiabatic perturbations is added to suppress power on large scales.
Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
2008-03-01
computational version of the CASIE architecture serves to demonstrate the functionality of our primary theories. However, implementation of several other...following facts. First, based on Theorem 3 and Theorem 5, the objective function is non -increasing under updating rule (6); second, by the criteria for...reassignment in updating rule (7), it is trivial to show that the objective function is non -increasing under updating rule (7). A Unified View to Graph
NASA Technical Reports Server (NTRS)
Penrose, C. J.
1987-01-01
The difficulties of modeling the complex recirculating flow fields produced by multiple jet STOVL aircraft close to the ground have led to extensive use of experimental model tests to predict intake Hot Gas Reingestion (HGR). Model test results reliability is dependent on a satisfactory set of scaling rules which must be validated by fully comparable full scale tests. Scaling rules devised in the U.K. in the mid 60's gave good model/full scale agreement for the BAe P1127 aircraft. Until recently no opportunity has occurred to check the applicability of the rules to the high energy exhaust of current ASTOVL aircraft projects. Such an opportunity has arisen following tests on a Tethered Harrier. Comparison of this full scale data and results from tests on a model configuration approximating to the full scale aircraft geometry has shown discrepancies between HGR levels. These discrepancies although probably due to geometry and other model/scale differences indicate some reexamination of the scaling rules is needed. Therefore the scaling rules are reviewed, further scaling studies planned are described and potential areas for further work are suggested.
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during the force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on the Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that the scale combination VI is suitable for estimating force from the extensors and the combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force derived by the proposed method is better than that obtained by the former methods.
Infrared small target detection based on Danger Theory
NASA Astrophysics Data System (ADS)
Lan, Jinhui; Yang, Xiao
2009-11-01
To solve the problem that traditional method can't detect the small objects whose local SNR is less than 2 in IR images, a Danger Theory-based model to detect infrared small target is presented in this paper. First, on the analog with immunology, the definition is given, in this paper, to such terms as dangerous signal, antigens, APC, antibodies. Besides, matching rule between antigen and antibody is improved. Prior to training the detection model and detecting the targets, the IR images are processed utilizing adaptive smooth filter to decrease the stochastic noise. Then at the training process, deleting rule, generating rule, crossover rule and the mutation rule are established after a large number of experiments in order to realize immediate convergence and obtain good antibodies. The Danger Theory-based model is built after the training process, and this model can detect the target whose local SNR is only 1.5.
Learning invariance from natural images inspired by observations in the primary visual cortex.
Teichmann, Michael; Wiltschut, Jan; Hamker, Fred
2012-05-01
The human visual system has the remarkable ability to largely recognize objects invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only few studies at hand cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulations of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
Gravitational waves and large field inflation
NASA Astrophysics Data System (ADS)
Linde, Andrei
2017-02-01
According to the famous Lyth bound, one can confirm large field inflation by finding tensor modes with sufficiently large tensor-to-scalar ratio r. Here we will try to answer two related questions: is it possible to rule out all large field inflationary models by not finding tensor modes with r above some critical value, and what can we say about the scale of inflation by measuring r? However, in order to answer these questions one should distinguish between two different definitions of the large field inflation and three different definitions of the scale of inflation. We will examine these issues using the theory of cosmological α-attractors as a convenient testing ground.
The neural basis for novel semantic categorization.
Koenig, Phyllis; Smith, Edward E; Glosser, Guila; DeVita, Chris; Moore, Peachie; McMillan, Corey; Gee, Jim; Grossman, Murray
2005-01-15
We monitored regional cerebral activity with BOLD fMRI during acquisition of a novel semantic category and subsequent categorization of test stimuli by a rule-based strategy or a similarity-based strategy. We observed different patterns of activation in direct comparisons of rule- and similarity-based categorization. During rule-based category acquisition, subjects recruited anterior cingulate, thalamic, and parietal regions to support selective attention to perceptual features, and left inferior frontal cortex to helps maintain rules in working memory. Subsequent rule-based categorization revealed anterior cingulate and parietal activation while judging stimuli whose conformity with the rules was readily apparent, and left inferior frontal recruitment during judgments of stimuli whose conformity was less apparent. By comparison, similarity-based category acquisition showed recruitment of anterior prefrontal and posterior cingulate regions, presumably to support successful retrieval of previously encountered exemplars from long-term memory, and bilateral temporal-parietal activation for perceptual feature integration. Subsequent similarity-based categorization revealed temporal-parietal, posterior cingulate, and anterior prefrontal activation. These findings suggest that large-scale networks support relatively distinct categorization processes during the acquisition and judgment of semantic category knowledge.
TOMML: A Rule Language for Structured Data
NASA Astrophysics Data System (ADS)
Cirstea, Horatiu; Moreau, Pierre-Etienne; Reilles, Antoine
We present the TOM language that extends JAVA with the purpose of providing high level constructs inspired by the rewriting community. TOM bridges thus the gap between a general purpose language and high level specifications based on rewriting. This approach was motivated by the promotion of rule based techniques and their integration in large scale applications. Powerful matching capabilities along with a rich strategy language are among TOM's strong features that make it easy to use and competitive with respect to other rule based languages. TOM is thus a natural choice for querying and transforming structured data and in particular XML documents [1]. We present here its main XML oriented features and illustrate its use on several examples.
A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT
2007-01-30
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real worldmore » instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. They present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.« less
Deep learning based state recognition of substation switches
NASA Astrophysics Data System (ADS)
Wang, Jin
2018-06-01
Different from the traditional method which recognize the state of substation switches based on the running rules of electrical power system, this work proposes a novel convolutional neuron network-based state recognition approach of substation switches. Inspired by the theory of transfer learning, we first establish a convolutional neuron network model trained on the large-scale image set ILSVRC2012, then the restricted Boltzmann machine is employed to replace the full connected layer of the convolutional neuron network and trained on our small image dataset of 110kV substation switches to get a stronger model. Experiments conducted on our image dataset of 110kV substation switches show that, the proposed approach can be applicable to the substation to reduce the running cost and implement the real unattended operation.
DAMS: A Model to Assess Domino Effects by Using Agent-Based Modeling and Simulation.
Zhang, Laobing; Landucci, Gabriele; Reniers, Genserik; Khakzad, Nima; Zhou, Jianfeng
2017-12-19
Historical data analysis shows that escalation accidents, so-called domino effects, have an important role in disastrous accidents in the chemical and process industries. In this study, an agent-based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at probabilistic network levels, the agent-based modeling technique explains the domino effects from a bottom-up perspective. In this approach, the installations involved in a domino effect are modeled as agents whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher-level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large-scale complicated cases. © 2017 Society for Risk Analysis.
Yang, Jin; Hlavacek, William S.
2011-01-01
Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie’s method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e., long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e., time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient. PMID:21832806
NASA Technical Reports Server (NTRS)
Ramamoorthy, P. A.; Huang, Song; Govind, Girish
1991-01-01
In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have revived and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule-base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
CD volume design and verification
NASA Technical Reports Server (NTRS)
Li, Y. P.; Hughes, J. S.
1993-01-01
In this paper, we describe a prototype for CD-ROM volume design and verification. This prototype allows users to create their own model of CD volumes by modifying a prototypical model. Rule-based verification of the test volumes can then be performed later on against the volume definition. This working prototype has proven the concept of model-driven rule-based design and verification for large quantity of data. The model defined for the CD-ROM volumes becomes a data model as well as an executable specification.
Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks
Bennett, Kristin P.
2014-01-01
We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238
Segmentation of Object Outlines into Parts: A Large-Scale Integrative Study
ERIC Educational Resources Information Center
De Winter, Joeri; Wagemans, Johan
2006-01-01
In this study, a large number of observers (N=201) were asked to segment a collection of outlines derived from line drawings of everyday objects (N=88). This data set was then used as a benchmark to evaluate current models of object segmentation. All of the previously proposed rules of segmentation were found supported in our results. For example,…
How much does a tokamak reactor cost?
NASA Astrophysics Data System (ADS)
Freidberg, J.; Cerfon, A.; Ballinger, S.; Barber, J.; Dogra, A.; McCarthy, W.; Milanese, L.; Mouratidis, T.; Redman, W.; Sandberg, A.; Segal, D.; Simpson, R.; Sorensen, C.; Zhou, M.
2017-10-01
The cost of a fusion reactor is of critical importance to its ultimate acceptability as a commercial source of electricity. While there are general rules of thumb for scaling both overnight cost and levelized cost of electricity the corresponding relations are not very accurate or universally agreed upon. We have carried out a series of scaling studies of tokamak reactor costs based on reasonably sophisticated plasma and engineering models. The analysis is largely analytic, requiring only a simple numerical code, thus allowing a very large number of designs. Importantly, the studies are aimed at plasma physicists rather than fusion engineers. The goals are to assess the pros and cons of steady state burning plasma experiments and reactors. One specific set of results discusses the benefits of higher magnetic fields, now possible because of the recent development of high T rare earth superconductors (REBCO); with this goal in mind, we calculate quantitative expressions, including both scaling and multiplicative constants, for cost and major radius as a function of central magnetic field.
Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.
2013-01-01
Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
Agent-based modeling of the interaction between CD8+ T cells and Beta cells in type 1 diabetes.
Ozturk, Mustafa Cagdas; Xu, Qian; Cinar, Ali
2018-01-01
We propose an agent-based model for the simulation of the autoimmune response in T1D. The model incorporates cell behavior from various rules derived from the current literature and is implemented on a high-performance computing system, which enables the simulation of a significant portion of the islets in the mouse pancreas. Simulation results indicate that the model is able to capture the trends that emerge during the progression of the autoimmunity. The multi-scale nature of the model enables definition of rules or equations that govern cellular or sub-cellular level phenomena and observation of the outcomes at the tissue scale. It is expected that such a model would facilitate in vivo clinical studies through rapid testing of hypotheses and planning of future experiments by providing insight into disease progression at different scales, some of which may not be obtained easily in clinical studies. Furthermore, the modular structure of the model simplifies tasks such as the addition of new cell types, and the definition or modification of different behaviors of the environment and the cells with ease.
Hints on the nature of dark matter from the properties of Milky Way satellites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderhalden, Donnino; Diemand, Juerg; Schneider, Aurel
2013-03-01
The nature of dark matter is still unknown and one of the most fundamental scientific mysteries. Although successfully describing large scales, the standard cold dark matter model (CDM) exhibits possible shortcomings on galactic and sub-galactic scales. It is exactly at these highly non-linear scales where strong astrophysical constraints can be set on the nature of the dark matter particle. While observations of the Lyman-α forest probe the matter power spectrum in the mildly non-linear regime, satellite galaxies of the Milky Way provide an excellent laboratory as a test of the underlying cosmology on much smaller scales. Here we present resultsmore » from a set of high resolution simulations of a Milky Way sized dark matter halo in eight distinct cosmologies: CDM, warm dark matter (WDM) with a particle mass of 2 keV and six different cold plus warm dark matter (C+WDM) models, varying the fraction, f{sub wdm}, and the mass, m{sub wdm}, of the warm component. We used three different observational tests based on Milky Way satellite observations: the total satellite abundance, their radial distribution and their mass profile. We show that the requirement of simultaneously satisfying all three constraints sets very strong limits on the nature of dark matter. This shows the power of a multi-dimensional small scale approach in ruling out models which would be still allowed by large scale observations.« less
Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems
Stover, Lori J.; Nair, Niketh S.; Faeder, James R.
2014-01-01
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This “network-free” approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of “partial network expansion” into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility. PMID:24699269
Exact hybrid particle/population simulation of rule-based models of biochemical systems.
Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R
2014-04-01
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
NASA Astrophysics Data System (ADS)
Borzí, Alfio; Caponigro, Marco
2016-09-01
The formulation of mathematical models for crowd dynamics is one current challenge in many fields of applied sciences. It involves the modelization of the complex behavior of a large number of individuals. In particular, the difficulty lays in describing emerging collective behaviors by means of a relatively small number of local interaction rules between individuals in a crowd. Clearly, the individual's free will involved in decision making processes and in the management of the social interactions cannot be described by a finite number of deterministic rules. On the other hand, in large crowds, this individual indeterminacy can be considered as a local fluctuation averaged to zero by the size of the crowd. While at the microscopic scale, using a system of coupled ODEs, the free will should be included in the mathematical description (e.g. with a stochastic term), the mesoscopic and macroscopic scales, modeled by PDEs, represent a powerful modelling tool that allows to neglect this feature and provide a reliable description. In this sense, the work by Bellomo, Clarke, Gibelli, Townsend, and Vreugdenhil [2] represents a mathematical-epistemological contribution towards the design of a reliable model of human behavior.
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2015-03-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which is to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2014-11-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
Heslot, Nicolas; Akdemir, Deniz; Sorrells, Mark E; Jannink, Jean-Luc
2014-02-01
Development of models to predict genotype by environment interactions, in unobserved environments, using environmental covariates, a crop model and genomic selection. Application to a large winter wheat dataset. Genotype by environment interaction (G*E) is one of the key issues when analyzing phenotypes. The use of environment data to model G*E has long been a subject of interest but is limited by the same problems as those addressed by genomic selection methods: a large number of correlated predictors each explaining a small amount of the total variance. In addition, non-linear responses of genotypes to stresses are expected to further complicate the analysis. Using a crop model to derive stress covariates from daily weather data for predicted crop development stages, we propose an extension of the factorial regression model to genomic selection. This model is further extended to the marker level, enabling the modeling of quantitative trait loci (QTL) by environment interaction (Q*E), on a genome-wide scale. A newly developed ensemble method, soft rule fit, was used to improve this model and capture non-linear responses of QTL to stresses. The method is tested using a large winter wheat dataset, representative of the type of data available in a large-scale commercial breeding program. Accuracy in predicting genotype performance in unobserved environments for which weather data were available increased by 11.1% on average and the variability in prediction accuracy decreased by 10.8%. By leveraging agronomic knowledge and the large historical datasets generated by breeding programs, this new model provides insight into the genetic architecture of genotype by environment interactions and could predict genotype performance based on past and future weather scenarios.
NASA Astrophysics Data System (ADS)
Zhu, Hongchun; Zhao, Yipeng; Liu, Haiying
2018-04-01
Scale is the basic attribute for expressing and describing spatial entity and phenomena. It offers theoretical significance in the study of gully structure information, variable characteristics of watershed morphology, and development evolution at different scales. This research selected five different areas in China's Loess Plateau as the experimental region and used DEM data at different scales as the experimental data. First, the change rule of the characteristic parameters of the data at different scales was analyzed. The watershed structure information did not change along with a change in the data scale. This condition was proven by selecting indices of gully bifurcation ratio and fractal dimension as characteristic parameters of watershed structure information. Then, the change rule of the characteristic parameters of gully structure with different analysis scales was analyzed by setting the scale sequence of analysis at the extraction gully. The gully structure of the watershed changed with variations in the analysis scale, and the change rule was obvious when the gully level changed. Finally, the change rule of the characteristic parameters of the gully structure at different areas was analyzed. The gully fractal dimension showed a significant numerical difference in different areas, whereas the variation of the gully branch ratio was small. The change rule indicated that the development degree of the gully obviously varied in different regions, but the morphological structure was basically similar.
NASA Astrophysics Data System (ADS)
Zhu, Hongchun; Zhao, Yipeng; Liu, Haiying
2018-06-01
Scale is the basic attribute for expressing and describing spatial entity and phenomena. It offers theoretical significance in the study of gully structure information, variable characteristics of watershed morphology, and development evolution at different scales. This research selected five different areas in China's Loess Plateau as the experimental region and used DEM data at different scales as the experimental data. First, the change rule of the characteristic parameters of the data at different scales was analyzed. The watershed structure information did not change along with a change in the data scale. This condition was proven by selecting indices of gully bifurcation ratio and fractal dimension as characteristic parameters of watershed structure information. Then, the change rule of the characteristic parameters of gully structure with different analysis scales was analyzed by setting the scale sequence of analysis at the extraction gully. The gully structure of the watershed changed with variations in the analysis scale, and the change rule was obvious when the gully level changed. Finally, the change rule of the characteristic parameters of the gully structure at different areas was analyzed. The gully fractal dimension showed a significant numerical difference in different areas, whereas the variation of the gully branch ratio was small. The change rule indicated that the development degree of the gully obviously varied in different regions, but the morphological structure was basically similar.
Fast Reduction Method in Dominance-Based Information Systems
NASA Astrophysics Data System (ADS)
Li, Yan; Zhou, Qinghua; Wen, Yongchuan
2018-01-01
In real world applications, there are often some data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with traditional method with increasing attributes and samples. Experiments on UCI data sets show that the proposed algorithm obviously improves the efficiency of the traditional method, especially for large-scale data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hong -Yi; Leung, L. Ruby; Tesfa, Teklu
A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model ismore » capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed streamflow from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August – October) by as much as 1~2oC over many places, as water management generally mitigates low flow, which has important implications to aquatic ecosystems. In conclusion, sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.« less
Reionization and the cosmic microwave background in an open universe
NASA Technical Reports Server (NTRS)
Persi, Fred M.
1995-01-01
If the universe was reionized at high reshift (z greater than or approximately equal to 30) or never recombined, then photon-electron scattering can erase fluctuations in the cosmic microwave background at scales less than or approximately equal to 1 deg. Peculiar motion at the surface of last scattering will then have given rise to new anisotropy at the 1 min level through the Vishniac effect. Here the observed fluctuations in galaxy counts are extrapolated to high redshifts using linear theory, and the expected anisotropy is computed. The predicted level of anisotropies is a function of Omega(sub 0) and the ratio of the density in ionized baryons to the critical density and is shown to depend strongly on the large- and small-scale power. It is not possible to make general statements about the viability of all reionized models based on current observations, but it is possible to rule out specific models for structure formation, particularly those with high baryonic content or small-scale power. The induced fluctuations are shown to scale with cosmological parameters and optical depth.
Geometry-based ensembles: toward a structural characterization of the classification boundary.
Pujol, Oriol; Masip, David
2009-06-01
This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points-points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.
Ant groups optimally amplify the effect of transiently informed individuals
NASA Astrophysics Data System (ADS)
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-07-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge.
Ant groups optimally amplify the effect of transiently informed individuals
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-01-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge. PMID:26218613
Robust Strategy for Rocket Engine Health Monitoring
NASA Technical Reports Server (NTRS)
Santi, L. Michael
2001-01-01
Monitoring the health of rocket engine systems is essentially a two-phase process. The acquisition phase involves sensing physical conditions at selected locations, converting physical inputs to electrical signals, conditioning the signals as appropriate to establish scale or filter interference, and recording results in a form that is easy to interpret. The inference phase involves analysis of results from the acquisition phase, comparison of analysis results to established health measures, and assessment of health indications. A variety of analytical tools may be employed in the inference phase of health monitoring. These tools can be separated into three broad categories: statistical, rule based, and model based. Statistical methods can provide excellent comparative measures of engine operating health. They require well-characterized data from an ensemble of "typical" engines, or "golden" data from a specific test assumed to define the operating norm in order to establish reliable comparative measures. Statistical methods are generally suitable for real-time health monitoring because they do not deal with the physical complexities of engine operation. The utility of statistical methods in rocket engine health monitoring is hindered by practical limits on the quantity and quality of available data. This is due to the difficulty and high cost of data acquisition, the limited number of available test engines, and the problem of simulating flight conditions in ground test facilities. In addition, statistical methods incur a penalty for disregarding flow complexity and are therefore limited in their ability to define performance shift causality. Rule based methods infer the health state of the engine system based on comparison of individual measurements or combinations of measurements with defined health norms or rules. This does not mean that rule based methods are necessarily simple. Although binary yes-no health assessment can sometimes be established by relatively simple rules, the causality assignment needed for refined health monitoring often requires an exceptionally complex rule base involving complicated logical maps. Structuring the rule system to be clear and unambiguous can be difficult, and the expert input required to maintain a large logic network and associated rule base can be prohibitive.
NASA Astrophysics Data System (ADS)
Neverre, Noémie; Dumas, Patrice; Nassopoulos, Hypatia
2016-04-01
Global changes are expected to exacerbate water scarcity issues in the Mediterranean region in the next decades. In this work, we investigate the impacts of reservoirs operation rules based on an economic criterion. We examine whether can they help reduce the costs of water scarcity, and whether they become more relevant under future climatic and socioeconomic conditions. We develop an original hydroeconomic model able to compare future water supply and demand on a large scale, while representing river basin heterogeneity. On the demand side, we focus on the two main sectors of water use: the irrigation and domestic sectors. Demands are projected in terms of both quantity and economic value. Irrigation requirements are computed for 12 types of crops, at the 0.5° spatial resolution, under future climatic conditions (A1B scenario). The computation of the economic benefits of irrigation water is based on a yield comparison approach between rainfed and irrigated crops. For the domestic sector, we project the combined effects of demographic growth, economic development and water cost evolution on future demands. The economic value of domestic water is defined as the economic surplus. On the supply side, we evaluate the impacts of climate change on water inflows to the reservoirs. Operating rules of the reservoirs are set up using a parameterisation-simulation-optimisation approach. The objective is to maximise water benefits. We introduce prudential parametric rules in order to take into account spatial and temporal trade-offs. The methodology is applied to Algeria at the 2050 horizon. Overall, our results show that the supply-demand imbalance and its costs will increase in most basins under future climatic and socioeconomic conditions. Our results suggest that the benefits of operating rules based on economic criteria are not unequivocally increased with global changes: in some basins the positive impact of economic prioritisation is higher under future conditions, but in other basins it is higher under historical conditions. Global changes may be an incentive to use valuation in operating rules in some basins. In other basins, the benefits of reservoirs management based on economic criteria are less pronounced; in this case, trade-offs could arise between implementing economic based operation policies or not. Given its generic nature and low data requirements, the framework developed could be implemented in other regions concerned with water scarcity and its cost, or extended to a global coverage. Water policies at the country or regional level could be assessed.
NASA Astrophysics Data System (ADS)
Xia, Cheng-Yi; Wang, Lei; Wang, Juan; Wang, Jin-Song
2012-09-01
We combine the Fermi and Moran update rules in the spatial prisoner's dilemma and snowdrift games to investigate the behavior of collective cooperation among agents on the regular lattice. Large-scale simulations indicate that, compared to the model with only one update rule, the cooperation behavior exhibits the richer phenomena, and the role of update dynamics should be paid more attention in the evolutionary game theory. Meanwhile, we also observe that the introduction of Moran rule, which needs to consider all neighbor's information, can markedly promote the aggregate cooperation level, that is, randomly selecting the neighbor proportional to its payoff to imitate will facilitate the cooperation among agents. Current results will contribute to further understand the cooperation dynamics and evolutionary behaviors within many biological, economic and social systems.
Coupling large scale hydrologic-reservoir-hydraulic models for impact studies in data sparse regions
NASA Astrophysics Data System (ADS)
O'Loughlin, Fiachra; Neal, Jeff; Wagener, Thorsten; Bates, Paul; Freer, Jim; Woods, Ross; Pianosi, Francesca; Sheffied, Justin
2017-04-01
As hydraulic modelling moves to increasingly large spatial domains it has become essential to take reservoirs and their operations into account. Large-scale hydrological models have been including reservoirs for at least the past two decades, yet they cannot explicitly model the variations in spatial extent of reservoirs, and many reservoirs operations in hydrological models are not undertaken during the run-time operation. This requires a hydraulic model, yet to-date no continental scale hydraulic model has directly simulated reservoirs and their operations. In addition to the need to include reservoirs and their operations in hydraulic models as they move to global coverage, there is also a need to link such models to large scale hydrology models or land surface schemes. This is especially true for Africa where the number of river gauges has consistently declined since the middle of the twentieth century. In this study we address these two major issues by developing: 1) a coupling methodology for the VIC large-scale hydrological model and the LISFLOOD-FP hydraulic model, and 2) a reservoir module for the LISFLOOD-FP model, which currently includes four sets of reservoir operating rules taken from the major large-scale hydrological models. The Volta Basin, West Africa, was chosen to demonstrate the capability of the modelling framework as it is a large river basin ( 400,000 km2) and contains the largest man-made lake in terms of area (8,482 km2), Lake Volta, created by the Akosombo dam. Lake Volta also experiences a seasonal variation in water levels of between two and six metres that creates a dynamic shoreline. In this study, we first run our coupled VIC and LISFLOOD-FP model without explicitly modelling Lake Volta and then compare these results with those from model runs where the dam operations and Lake Volta are included. The results show that we are able to obtain variation in the Lake Volta water levels and that including the dam operations and Lake Volta has significant impacts on the water levels across the domain.
Personalised Information Services Using a Hybrid Recommendation Method Based on Usage Frequency
ERIC Educational Resources Information Center
Kim, Yong; Chung, Min Gyo
2008-01-01
Purpose: This paper seeks to describe a personal recommendation service (PRS) involving an innovative hybrid recommendation method suitable for deployment in a large-scale multimedia user environment. Design/methodology/approach: The proposed hybrid method partitions content and user into segments and executes association rule mining,…
From microscopic rules to macroscopic dynamics with active colloidal snakes
NASA Astrophysics Data System (ADS)
Zhang, Jie; Yan, Jing; Granick, Steve
Seeking to learn about self-assembly far from equilibrium, these imaging experiments inspect self-propelled colloidal particles whose heads and tails attract other particles reversibly as they swim. We observe processes akin to polymerization (short times) and chain scission and recombination (long times). The steady-state of dilute systems consists of discrete rings rotating in place with largely quenched dynamics, but when concentration is high, the system dynamics share features with turbulence. The dynamical rules of this model system appear to be scale-independent and hence potentially relevant more generally.
Data driven model generation based on computational intelligence
NASA Astrophysics Data System (ADS)
Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus
2010-05-01
The simulation of discharges at a local gauge or the modeling of large scale river catchments are effectively involved in estimation and decision tasks of hydrological research and practical applications like flood prediction or water resource management. However, modeling such processes using analytical or conceptual approaches is made difficult by both complexity of process relations and heterogeneity of processes. It was shown manifold that unknown or assumed process relations can principally be described by computational methods, and that system models can automatically be derived from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods including Fuzzy logic and artificial neural networks (ANN) in a comprehensive and automated manner. Methods We consider a closed concept for data driven development of hydrological models based on measured (experimental) data. The concept is centered on a Fuzzy system using rules of Takagi-Sugeno-Kang type which formulate the input-output relation in a generic structure like Ri : IFq(t) = lowAND...THENq(t+Δt) = ai0 +ai1q(t)+ai2p(t-Δti1)+ai3p(t+Δti2)+.... The rule's premise part (IF) describes process states involving available process information, e.g. actual outlet q(t) is low where low is one of several Fuzzy sets defined over variable q(t). The rule's conclusion (THEN) estimates expected outlet q(t + Δt) by a linear function over selected system variables, e.g. actual outlet q(t), previous and/or forecasted precipitation p(t ?Δtik). In case of river catchment modeling we use head gauges, tributary and upriver gauges in the conclusion part as well. In addition, we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {Ri|(i = 1,...,N)} the space of process states can be covered as concise as necessary. Model adaptation is achieved by finding on optimal set A = (aij) of conclusion parameters with respect to a defined rating function and experimental data. To find A, we use for example a linear equation solver and RMSE-function. In practical process models, the number of Fuzzy sets and the according number of rules is fairly low. Nevertheless, creating the optimal model requires some experience. Therefore, we improved this development step by methods for automatic generation of Fuzzy sets, rules, and conclusions. Basically, the model achievement depends to a great extend on the selection of the conclusion variables. It is the aim that variables having most influence on the system reaction being considered and superfluous ones being neglected. At first, we use Kohonen maps, a specialized ANN, to identify relevant input variables from the large set of available system variables. A greedy algorithm selects a comprehensive set of dominant and uncorrelated variables. Next, the premise variables are analyzed with clustering methods (e.g. Fuzzy-C-means) and Fuzzy sets are then derived from cluster centers and outlines. The rule base is automatically constructed by permutation of the Fuzzy sets of the premise variables. Finally, the conclusion parameters are calculated and the total coverage of the input space is iteratively tested with experimental data, rarely firing rules are combined and coarse coverage of sensitive process states results in refined Fuzzy sets and rules. Results The described methods were implemented and integrated in a development system for process models. A series of models has already been built e.g. for rainfall-runoff modeling or for flood prediction (up to 72 hours) in river catchments. The models required significantly less development effort and showed advanced simulation results compared to conventional models. The models can be used operationally and simulation takes only some minutes on a standard PC e.g. for a gauge forecast (up to 72 hours) for the whole Mosel (Germany) river catchment.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
A nonlinear model for ionic polymer metal composites as actuators
NASA Astrophysics Data System (ADS)
Bonomo, C.; Fortuna, L.; Giannone, P.; Graziani, S.; Strazzeri, S.
2007-02-01
This paper introduces a comprehensive nonlinear dynamic model of motion actuators based on ionic polymer metal composites (IPMCs) working in air. Significant quantities ruling the acting properties of IPMC-based actuators are taken into account. The model is organized as follows. As a first step, the dependence of the IPMC absorbed current on the voltage applied across its thickness is taken into account; a nonlinear circuit model is proposed to describe this relationship. In a second step the transduction of the absorbed current into the IPMC mechanical reaction is modelled. The model resulting from the cascade of both the electrical and the electromechanical stages represents a novel contribution in the field of IPMCs, capable of describing the electromechanical behaviour of these materials and predicting relevant quantities in a large range of applied signals. The effect of actuator scaling is also investigated, giving interesting support to the activities involved in the design of actuating devices based on these novel materials. Evidence of the excellent agreement between the estimations obtained by using the proposed model and experimental signals is given.
Neuromodulated Synaptic Plasticity on the SpiNNaker Neuromorphic System
Mikaitis, Mantas; Pineda García, Garibaldi; Knight, James C.; Furber, Steve B.
2018-01-01
SpiNNaker is a digital neuromorphic architecture, designed specifically for the low power simulation of large-scale spiking neural networks at speeds close to biological real-time. Unlike other neuromorphic systems, SpiNNaker allows users to develop their own neuron and synapse models as well as specify arbitrary connectivity. As a result SpiNNaker has proved to be a powerful tool for studying different neuron models as well as synaptic plasticity—believed to be one of the main mechanisms behind learning and memory in the brain. A number of Spike-Timing-Dependent-Plasticity(STDP) rules have already been implemented on SpiNNaker and have been shown to be capable of solving various learning tasks in real-time. However, while STDP is an important biological theory of learning, it is a form of Hebbian or unsupervised learning and therefore does not explain behaviors that depend on feedback from the environment. Instead, learning rules based on neuromodulated STDP (three-factor learning rules) have been shown to be capable of solving reinforcement learning tasks in a biologically plausible manner. In this paper we demonstrate for the first time how a model of three-factor STDP, with the third-factor representing spikes from dopaminergic neurons, can be implemented on the SpiNNaker neuromorphic system. Using this learning rule we first show how reward and punishment signals can be delivered to a single synapse before going on to demonstrate it in a larger network which solves the credit assignment problem in a Pavlovian conditioning experiment. Because of its extra complexity, we find that our three-factor learning rule requires approximately 2× as much processing time as the existing SpiNNaker STDP learning rules. However, we show that it is still possible to run our Pavlovian conditioning model with up to 1 × 104 neurons in real-time, opening up new research opportunities for modeling behavioral learning on SpiNNaker. PMID:29535600
A Preliminary Study of Ice-Accretion Scaling for SLD Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.
2003-01-01
Proposed changes to aircraft icing certification rules are being considered by European, Canadian, and American regulatory agencies to include operation in super-cooled large droplet conditions (SLD). This paper reports results of an experimental study in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well scaling methods developed for Appendix C conditions might apply to SLD conditions. Until now, scaling studies have been confined to the FAA FAR-25 Appendix C envelope of atmospheric cloud conditions. Tests were made in which it was attempted to scale to a droplet MVD of 50 microns from clouds having droplet MVDs of 175, 120, 100, and 70 microns. Scaling was based on the Ruff method with scale velocities found either by maintaining constant Weber number or by using the average of the velocities obtained by maintaining constant Weber number and constant Reynolds number. Models were unswept NACA 0012 wing sections. The reference model had a chord of 91.4 cm. Scale models had chords of 91.4, 80.0, and 53.3 cm. Tests were conducted with reference airspeeds of 100 and 150 kt (52 and 77 m/s) and with freezing fractions of 1.0, 0.6, and 0.3. It was demonstrated that the scaled 50-micron cloud simulated well the non-dimensional ice shapes accreted in clouds with MVD's of 120 microns or less.
Large-scale optimization-based classification models in medicine and biology.
Lee, Eva K
2007-06-01
We present novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multi-group prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island meythlation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; discriminant analysis of biomarkers for prediction of early atherosclerois; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneracy and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80 to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
NASA Technical Reports Server (NTRS)
Wegener, P. P.
1980-01-01
A cryogenic wind tunnel is based on the twofold idea of lowering drive power and increasing Reynolds number by operating with nitrogen near its boiling point. There are two possible types of condensation problems involved in this mode of wind tunnel operation. They concern the expansion from the nozzle supply to the test section at relatively low cooling rates, and secondly the expansion around models in the test section. This secondary expansion involves higher cooling rates and shorter time scales. In addition to these two condensation problems it is not certain what purity of nitrogen can be achieved in a large facility. Therefore, one cannot rule out condensation processes other than those of homogeneous nucleation.
A framework for plasticity implementation on the SpiNNaker neural architecture.
Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B
2014-01-01
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are badly matched to computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
A framework for plasticity implementation on the SpiNNaker neural architecture
Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A.; Furber, Steve B.; Benosman, Ryad B.
2015-01-01
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are badly matched to computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system. PMID:25653580
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructing from sequential rules for spatial-temporal k-anonymity dataset. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths those include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
Community detection in complex networks by using membrane algorithm
NASA Astrophysics Data System (ADS)
Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren
Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve the community detection in complex networks. The proposed algorithm is based on membrane systems, which consists of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects, which employ the differential evolutionary algorithm to evolve objects. Communication rules implement the information exchanged among membranes. Finally, the proposed algorithm is evaluated on synthetic, real-world networks with real partitions known and the large-scaled networks with real partitions unknown. The experimental results indicate the superior performance of the proposed algorithm in comparison with other experimental algorithms.
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Haest, Pieter Jan; Broekx, Steven; Seuntjens, Piet; Campling, Paul; Ducos, Geraldine; Blaha, Ludek; Slobodnik, Jaroslav
2010-05-01
The European Union (EU) adopted the Water Framework Directive (WFD) in 2000 ensuring that all aquatic ecosystems meet ‘good status' by 2015. However, it is a major challenge for river basin managers to meet this requirement in river basins with a high population density as well as intensive agricultural and industrial activities. The EU financed AQUAREHAB project (FP7) specifically examines the ecological and economic impact of innovative rehabilitation technologies for multi-pressured degraded water bodies. For this purpose, a generic collaborative management tool ‘REACH-ER' is being developed that can be used by stakeholders, citizens and water managers to evaluate the ecological and economical effects of different remedial actions on waterbodies. The tool is built using databases from large scale models simulating the hydrological dynamics of the river basing and sub-basins, the costs of the measures and the effectiveness of the measures in terms of ecological impact. Knowledge rules are used to describe the relationships between these data in order to compute the flux concentrations or to compute the effectiveness of measures. The management tool specifically addresses nitrate pollution and pollution by organic micropollutants. Detailed models are also used to predict the effectiveness of site remedial technologies using readily available global data. Rules describing ecological impacts are derived from ecotoxicological data for (mixtures of) specific contaminants (msPAF) and ecological indices relating effects to the presence of certain contaminants. Rules describing the cost-effectiveness of measures are derived from linear programming models identifying the least-cost combination of abatement measures to satisfy multi-pollutant reduction targets and from multi-criteria analysis.
Dynamics of coupled human-landscape systems
NASA Astrophysics Data System (ADS)
Werner, B. T.; McNamara, D. E.
2007-11-01
A preliminary dynamical analysis of landscapes and humans as hierarchical complex systems suggests that strong coupling between the two that spreads to become regionally or globally pervasive should be focused at multi-year to decadal time scales. At these scales, landscape dynamics is dominated by water, sediment and biological routing mediated by fluvial, oceanic, atmospheric processes and human dynamics is dominated by simplifying, profit-maximizing market forces and political action based on projection of economic effect. Also at these scales, landscapes impact humans through patterns of natural disasters and trends such as sea level rise; humans impact landscapes by the effect of economic activity and changes meant to mitigate natural disasters and longer term trends. Based on this analysis, human-landscape coupled systems can be modeled using heterogeneous agents employing prediction models to determine actions to represent the nonlinear behavior of economic and political systems and rule-based routing algorithms to represent landscape processes. A cellular model for the development of New Orleans illustrates this approach, with routing algorithms for river and hurricane-storm surge determining flood extent, five markets (home, labor, hotel, tourism and port services) connecting seven types of economic agents (home buyers/laborers, home developers, hotel owners/ employers, hotel developers, tourists, port services developer and port services owners/employers), building of levees or a river spillway by political agents and damage to homes, hotels or port services within cells determined by the passage or depth of flood waters. The model reproduces historical aspects of New Orleans economic development and levee construction and the filtering of frequent small-scale floods at the expense of large disasters.
Modeling stream temperature in the Anthropocene: An earth system modeling approach
Li, Hong -Yi; Leung, L. Ruby; Tesfa, Teklu; ...
2015-10-29
A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the Contiguous United States driven by observed meteorological forcing. It is shown that the model ismore » capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against the observed streamflow from over 320 USGS stations. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influence on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August – October) by as much as 1~2oC over many places, as water management generally mitigates low flow, which has important implications to aquatic ecosystems. In conclusion, sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.« less
NASA Astrophysics Data System (ADS)
Evans, Kellie Michele
Larger than Life (LtL), is a four-parameter family of two-dimensional cellular automata that generalizes John Conway's Game of Life (Life) to large neighborhoods and general birth and survival thresholds. LtL was proposed by David Griffeath in the early 1990s to explore whether Life might be a clue to a critical phase point in the threshold-range scaling limit. The LtL family of rules includes Life as well as a rich set of two-dimensional rules, some of which exhibit dynamics vastly different from Life. In this chapter we present rigorous results and conjectures about the ergodic classifications of several sets of "simplified" LtL rules, each of which has a property that makes the rule easier to analyze. For example, these include symmetric rules such as the threshold voter automaton and the anti-voter automaton, monotone rules such as the threshold growth models, and others. We also provide qualitative results and speculation about LtL rules on various phase boundaries and summarize results and open questions about our favorite "Life-like" LtL rules.
Automating the selection of standard parallels for conic map projections
NASA Astrophysics Data System (ADS)
Šavriǒ, Bojan; Jenny, Bernhard
2016-05-01
Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps-each with a different spatial extent and computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
A rough set-based association rule approach implemented on a brand trust evaluation model
NASA Astrophysics Data System (ADS)
Liao, Shu-Hsien; Chen, Yin-Ju
2017-09-01
In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. The brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, a rough set-based association rule induction, implemented on a brand trust evaluation model. In addition, it presents as one way to deal with data uncertainty to analyse ratio scale data, while creating predictive if-then rules that generalise data values to the retail region. As such, this study uses the analysis of algorithms to find alcoholic beverages brand trust recall. Finally, discussions and conclusion are presented for further managerial implications.
Model-based Systems Engineering: Creation and Implementation of Model Validation Rules for MOS 2.0
NASA Technical Reports Server (NTRS)
Schmidt, Conrad K.
2013-01-01
Model-based Systems Engineering (MBSE) is an emerging modeling application that is used to enhance the system development process. MBSE allows for the centralization of project and system information that would otherwise be stored in extraneous locations, yielding better communication, expedited document generation and increased knowledge capture. Based on MBSE concepts and the employment of the Systems Modeling Language (SysML), extremely large and complex systems can be modeled from conceptual design through all system lifecycles. The Operations Revitalization Initiative (OpsRev) seeks to leverage MBSE to modernize the aging Advanced Multi-Mission Operations Systems (AMMOS) into the Mission Operations System 2.0 (MOS 2.0). The MOS 2.0 will be delivered in a series of conceptual and design models and documents built using the modeling tool MagicDraw. To ensure model completeness and cohesiveness, it is imperative that the MOS 2.0 models adhere to the specifications, patterns and profiles of the Mission Service Architecture Framework, thus leading to the use of validation rules. This paper outlines the process by which validation rules are identified, designed, implemented and tested. Ultimately, these rules provide the ability to maintain model correctness and synchronization in a simple, quick and effective manner, thus allowing the continuation of project and system progress.
The Handling of Hazard Data on a National Scale: A Case Study from the British Geological Survey
NASA Astrophysics Data System (ADS)
Royse, Katherine R.
2011-11-01
This paper reviews how hazard data and geological map data have been combined by the British Geological Survey (BGS) to produce a set of GIS-based national-scale hazard susceptibility maps for the UK. This work has been carried out over the last 9 years and as such reflects the combined outputs of a large number of researchers at BGS. The paper details the inception of these datasets from the development of the seamless digital geological map in 2001 through to the deterministic 2D hazard models produced today. These datasets currently include landslides, shrink-swell, soluble rocks, compressible and collapsible deposits, groundwater flooding, geological indicators of flooding, radon potential and potentially harmful elements in soil. These models have been created using a combination of expert knowledge (from both within BGS and from outside bodies such as the Health Protection Agency), national databases (which contain data collected over the past 175 years), multi-criteria analysis within geographical information systems and a flexible rule-based approach for each individual geohazard. By using GIS in this way, it has been possible to model the distribution and degree of geohazards across the whole of Britain.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao
2013-07-01
This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.
A knowledge-based approach to identification and adaptation in dynamical systems control
NASA Technical Reports Server (NTRS)
Glass, B. J.; Wong, C. M.
1988-01-01
Artificial intelligence techniques are applied to the problems of model form and parameter identification of large-scale dynamic systems. The object-oriented knowledge representation is discussed in the context of causal modeling and qualitative reasoning. Structured sets of rules are used for implementing qualitative component simulations, for catching qualitative discrepancies and quantitative bound violations, and for making reconfiguration and control decisions that affect the physical system. These decisions are executed by backward-chaining through a knowledge base of control action tasks. This approach was implemented for two examples: a triple quadrupole mass spectrometer and a two-phase thermal testbed. Results of tests with both of these systems demonstrate that the software replicates some or most of the functionality of a human operator, thereby reducing the need for a human-in-the-loop in the lower levels of control of these complex systems.
iRODS: A Distributed Data Management Cyberinfrastructure for Observatories
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; Vernon, F.
2007-12-01
Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long-periods in time (preservation). The system needs to manage data stored on multiple types of storage systems including new systems that become available in the future. This concept is called infrastructure independence, and is typically implemented through virtualization mechanisms. Data grids are built upon concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization - policy or management virtualization. Management virtualization ensures that execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about the authenticity of the data become paramount. The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed to describe not only management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of the remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or extracts its metadata automatically and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine, based on the event-condition-action paradigm executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and new state information (metadata content) needed to manage the new policies. Each sub- collection can be managed using a different set of policies. The discussion of the concepts in rule-based policy virtualization and its application to long-term and large-scale data management for observatories such as ORION and NEON will be the basis of the paper.
NASA Astrophysics Data System (ADS)
Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.
2017-12-01
Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffer from complex interactions and trade-off among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive the optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II could locate the Pareto frontier in non-dominated region efficiently due to the directed searching by weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and single objective genetic algorithm (GA). Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II could locate the non-dominated solutions faster and provide better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) The multi-objective operating rules from WNSGA II deal with the inflow uncertainties better than COR. Therefore, the WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
The Relative Emphasis of Play Rules between Experienced and Trainee Caregivers of Toddlers
ERIC Educational Resources Information Center
Gyöngy, Kinga
2017-01-01
Content analysis of a large-scale (N = 920) qualitative data set with MAXQDA12 from a nationwide questionnaire of nursery practitioners in Hungary was able to demonstrate various types of rules during free play: social, health and safety, and environment-related rules. Environment-related rules, which govern space utilisation in toddler groups,…
ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.
Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y
2008-08-12
New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a JAVA-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Languages (DDL) and Data Manipulation languages (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions etc; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: human disease association network, drug-target interaction network and protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discoveries in future network and systems biology studies.
Cellular scaling rules for the brain of Artiodactyla include a highly folded cortex with few neurons
Kazu, Rodrigo S.; Maldonado, José; Mota, Bruno; Manger, Paul R.; Herculano-Houzel, Suzana
2014-01-01
Quantitative analysis of the cellular composition of rodent, primate, insectivore, and afrotherian brains has shown that non-neuronal scaling rules are similar across these mammalian orders that diverged about 95 million years ago, and therefore appear to be conserved in evolution, while neuronal scaling rules appear to be free to vary in a clade-specific manner. Here we analyze the cellular scaling rules that apply to the brain of artiodactyls, a group within the order Cetartiodactyla, believed to be a relatively recent radiation from the common Eutherian ancestor. We find that artiodactyls share non-neuronal scaling rules with all groups analyzed previously. Artiodactyls share with afrotherians and rodents, but not with primates, the neuronal scaling rules that apply to the cerebral cortex and cerebellum. The neuronal scaling rules that apply to the remaining brain areas are, however, distinct in artiodactyls. Importantly, we show that the folding index of the cerebral cortex scales with the number of neurons in the cerebral cortex in distinct fashions across artiodactyls, afrotherians, rodents, and primates, such that the artiodactyl cerebral cortex is more convoluted than primate cortices of similar numbers of neurons. Our findings suggest that the scaling rules found to be shared across modern afrotherians, glires, and artiodactyls applied to the common Eutherian ancestor, such as the relationship between the mass of the cerebral cortex as a whole and its number of neurons. In turn, the distribution of neurons along the surface of the cerebral cortex, which is related to its degree of gyrification, appears to be a clade-specific characteristic. If the neuronal scaling rules for artiodactyls extend to all cetartiodactyls, we predict that the large cerebral cortex of cetaceans will still have fewer neurons than the human cerebral cortex. PMID:25429261
'Fracking', Induced Seismicity and the Critical Earth
NASA Astrophysics Data System (ADS)
Leary, P.; Malin, P. E.
2012-12-01
Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is, the larger the fracture system the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofacturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.
An, Gary; Christley, Scott
2012-01-01
Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggest that this modeling framework is well suited for addressing the translational dilemma. This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
NASA Astrophysics Data System (ADS)
Yang, Yuchen; Mabu, Shingo; Shimada, Kaoru; Hirasawa, Kotaro
Intertransaction association rules have been reported to be useful in many fields such as stock market prediction, but still there are not so many efficient methods to dig them out from large data sets. Furthermore, how to use and measure these more complex rules should be considered carefully. In this paper, we propose a new intertransaction class association rule mining method based on Genetic Network Programming (GNP), which has the ability to overcome some shortages of Apriori-like based intertransaction association methods. Moreover, a general classifier model for intertransaction rules is also introduced. In experiments on the real world application of stock market prediction, the method shows its efficiency and ability to obtain good results and can bring more benefits with a suitable classifier considering larger interval span.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, researches on packet classification algorithms based on hierarchical trie have become an important packet classification research branch because of their widely practical use. Although hierarchical trie is beneficial to save large storage space, it has several shortcomings such as the existence of backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses the formalization method to deal with the packet classification problem by means of mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, and thereby diversified clusters are formed. Thirdly, this paper proposes a hierarchical trie based on the results of expectation-maximization clustering. Finally, this paper respectively conducts simulation experiments and real-environment experiments to compare the performances of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm.
Systematic assignment of thermodynamic constraints in metabolic network models
Kümmel, Anne; Panke, Sven; Heinemann, Matthias
2006-01-01
Background The availability of genome sequences for many organisms enabled the reconstruction of several genome-scale metabolic network models. Currently, significant efforts are put into the automated reconstruction of such models. For this, several computational tools have been developed that particularly assist in identifying and compiling the organism-specific lists of metabolic reactions. In contrast, the last step of the model reconstruction process, which is the definition of the thermodynamic constraints in terms of reaction directionalities, still needs to be done manually. No computational method exists that allows for an automated and systematic assignment of reaction directions in genome-scale models. Results We present an algorithm that – based on thermodynamics, network topology and heuristic rules – automatically assigns reaction directions in metabolic models such that the reaction network is thermodynamically feasible with respect to the production of energy equivalents. It first exploits all available experimentally derived Gibbs energies of formation to identify irreversible reactions. As these thermodynamic data are not available for all metabolites, in a next step, further reaction directions are assigned on the basis of network topology considerations and thermodynamics-based heuristic rules. Briefly, the algorithm identifies reaction subsets from the metabolic network that are able to convert low-energy co-substrates into their high-energy counterparts and thus net produce energy. Our algorithm aims at disabling such thermodynamically infeasible cyclic operation of reaction subnetworks by assigning reaction directions based on a set of thermodynamics-derived heuristic rules. We demonstrate our algorithm on a genome-scale metabolic model of E. coli. The introduced systematic direction assignment yielded 130 irreversible reactions (out of 920 total reactions), which corresponds to about 70% of all irreversible reactions that are required to disable thermodynamically infeasible energy production. Conclusion Although not being fully comprehensive, our algorithm for systematic reaction direction assignment could define a significant number of irreversible reactions automatically with low computational effort. We envision that the presented algorithm is a valuable part of a computational framework that assists the automated reconstruction of genome-scale metabolic models. PMID:17123434
Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh
2011-01-01
Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...
A Data Stream Model For Runoff Simulation In A Changing Environment
NASA Astrophysics Data System (ADS)
Yang, Q.; Shao, J.; Zhang, H.; Wang, G.
2017-12-01
Runoff simulation is of great significance for water engineering design, water disaster control, water resources planning and management in a catchment or region. A large number of methods including concept-based process-driven models and statistic-based data-driven models, have been proposed and widely used in worldwide during past decades. Most existing models assume that the relationship among runoff and its impacting factors is stationary. However, in the changing environment (e.g., climate change, human disturbance), their relationship usually evolves over time. In this study, we propose a data stream model for runoff simulation in a changing environment. Specifically, the proposed model works in three steps: learning a rule set, expansion of a rule, and simulation. The first step is to initialize a rule set. When a new observation arrives, the model will check which rule covers it and then use the rule for simulation. Meanwhile, Page-Hinckley (PH) change detection test is used to monitor the online simulation error of each rule. If a change is detected, the corresponding rule is removed from the rule set. In the second step, for each rule, if it covers more than a given number of instance, the rule is expected to expand. In the third step, a simulation model of each leaf node is learnt with a perceptron without activation function, and is updated with adding a newly incoming observation. Taking Fuxi River catchment as a case study, we applied the model to simulate the monthly runoff in the catchment. Results show that abrupt change is detected in the year of 1997 by using the Page-Hinckley change detection test method, which is consistent with the historic record of flooding. In addition, the model achieves good simulation results with the RMSE of 13.326, and outperforms many established methods. The findings demonstrated that the proposed data stream model provides a promising way to simulate runoff in a changing environment.
Allometry of sexual size dimorphism in turtles: a comparison of mass and length data.
Regis, Koy W; Meik, Jesse M
2017-01-01
The macroevolutionary pattern of Rensch's Rule (positive allometry of sexual size dimorphism) has had mixed support in turtles. Using the largest carapace length dataset and only large-scale body mass dataset assembled for this group, we determine (a) whether turtles conform to Rensch's Rule at the order, suborder, and family levels, and (b) whether inferences regarding allometry of sexual size dimorphism differ based on choice of body size metric used for analyses. We compiled databases of mean body mass and carapace length for males and females for as many populations and species of turtles as possible. We then determined scaling relationships between males and females for average body mass and straight carapace length using traditional and phylogenetic comparative methods. We also used regression analyses to evalutate sex-specific differences in the variance explained by carapace length on body mass. Using traditional (non-phylogenetic) analyses, body mass supports Rensch's Rule, whereas straight carapace length supports isometry. Using phylogenetic independent contrasts, both body mass and straight carapace length support Rensch's Rule with strong congruence between metrics. At the family level, support for Rensch's Rule is more frequent when mass is used and in phylogenetic comparative analyses. Turtles do not differ in slopes of sex-specific mass-to-length regressions and more variance in body size within each sex is explained by mass than by carapace length. Turtles display Rensch's Rule overall and within families of Cryptodires, but not within Pleurodire families. Mass and length are strongly congruent with respect to Rensch's Rule across turtles, and discrepancies are observed mostly at the family level (the level where Rensch's Rule is most often evaluated). At macroevolutionary scales, the purported advantages of length measurements over weight are not supported in turtles.
Simple spatial scaling rules behind complex cities.
Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2017-11-28
Although most of wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. By a simple model mainly based on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements are in a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
Prototype Vector Machine for Large Scale Semi-Supervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practicaldataminingrarelyfalls exactlyinto the supervisedlearning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computationalintensivenessofgraph-based SSLarises largely from the manifold or graph regularization, which in turn lead to large models that are dificult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highlyscalable,graph-based algorithm for large-scale SSL. Our key innovation is the use of"prototypes vectors" for effcient approximation on both the graph-based regularizer and model representation. The choice of prototypes are grounded upon two important criteria: they not only perform effective low-rank approximation of themore » kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.« less
Towards Better Computational Models of the Balance Scale Task: A Reply to Shultz and Takane
ERIC Educational Resources Information Center
van der Maas, Han L. J.; Quinlan, Philip T.; Jansen, Brenda R. J.
2007-01-01
In contrast to Shultz and Takane [Shultz, T.R., & Takane, Y. (2007). Rule following and rule use in the balance-scale task. "Cognition", in press, doi:10.1016/j.cognition.2006.12.004.] we do not accept that the traditional Rule Assessment Method (RAM) of scoring responses on the balance scale task has advantages over latent class analysis (LCA):…
Chameleon dark energy models with characteristic signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gannouji, Radouane; Department of Physics, Faculty of Science, Tokyo University of Science, 1-3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601; Moraes, Bruno
2010-12-15
In chameleon dark energy models, local gravity constraints tend to rule out parameters in which observable cosmological signatures can be found. We study viable chameleon potentials consistent with a number of recent observational and experimental bounds. A novel chameleon field potential, motivated by f(R) gravity, is constructed where observable cosmological signatures are present both at the background evolution and in the growth rate of the perturbations. We study the evolution of matter density perturbations on low redshifts for this potential and show that the growth index today {gamma}{sub 0} can have significant dispersion on scales relevant for large scale structures.more » The values of {gamma}{sub 0} can be even smaller than 0.2 with large variations of {gamma} on very low redshifts for the model parameters constrained by local gravity tests. This gives a possibility to clearly distinguish these chameleon models from the {Lambda}-cold-dark-matter ({Lambda}CDM) model in future high-precision observations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freese, Katherine; Kinney, William H., E-mail: ktfreese@umich.edu, E-mail: whkinney@buffalo.edu
Natural inflation is a good fit to all cosmic microwave background (CMB) data and may be the correct description of an early inflationary expansion of the Universe. The large angular scale CMB polarization experiment BICEP2 has announced a major discovery, which can be explained as the gravitational wave signature of inflation, at a level that matches predictions by natural inflation models. The natural inflation (NI) potential is theoretically exceptionally well motivated in that it is naturally flat due to shift symmetries, and in the simplest version takes the form V(φ) = Λ{sup 4} [1 ± cos(Nφ/f)]. A tensor-to-scalar ratio r > 0.1 as seen by BICEP2 requiresmore » the height of any inflationary potential to be comparable to the scale of grand unification and the width to be comparable to the Planck scale. The Cosine Natural Inflation model agrees with all cosmic microwave background measurements as long as f ≥ m{sub Pl} (where m{sub Pl} = 1.22 × 10{sup 19} GeV) and Λ ∼ m{sub GUT} ∼ 10{sup 16} GeV. This paper also discusses other variants of the natural inflation scenario: we show that axion monodromy with potential V∝ φ{sup 2/3} is inconsistent with the BICEP2 limits at the 95% confidence level, and low-scale inflation is strongly ruled out. Linear potentials V ∝ φ are inconsistent with the BICEP2 limit at the 95% confidence level, but are marginally consistent with a joint Planck/BICEP2 limit at 95%. We discuss the pseudo-Nambu Goldstone model proposed by Kinney and Mahanthappa as a concrete realization of low-scale inflation. While the low-scale limit of the model is inconsistent with the data, the large-field limit of the model is marginally consistent with BICEP2. All of the models considered predict negligible running of the scalar spectral index, and would be ruled out by a detection of running.« less
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition with guided image filtering and gradient domain guided image filtering of source images are first applied before the weight maps of each scale are obtained using a saliency detection technology and filtering means with three different fusion rules at different scales. The three types of fusion rules are for small-scale detail level, large-scale detail level, and base level. Finally, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. After analyzing the experimental comparisons with state-of-the-art fusion methods, the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
Identifying type 1 and type 2 diabetic cases using administrative data: a tree-structured model.
Lo-Ciganic, Weihsuan; Zgibor, Janice C; Ruppert, Kristine; Arena, Vincent C; Stone, Roslyn A
2011-05-01
To date, few administrative diabetes mellitus (DM) registries have distinguished type 1 diabetes mellitus (T1DM) from type 2 diabetes mellitus (T2DM). Using a classification tree model, a prediction rule was developed to distinguish T1DM from T2DM in a large administrative database. The Medical Archival Retrieval System at the University of Pittsburgh Medical Center included administrative and clinical data from January 1, 2000, through September 30, 2009, for 209,647 DM patients aged ≥18 years. Probable cases (8,173 T1DM and 125,111 T2DM) were identified by applying clinical criteria to administrative data. Nonparametric classification tree models were fit using TIBCO Spotfire S+ 8.1 (TIBCO Software), with model size based on 10-fold cross validation. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of T1DM were estimated. The main predictors that distinguished T1DM from T2DM are age <40 years; International Classification of Disease, 9th revision, codes of T1DM or T2DM diagnosis; inpatient oral hypoglycemic agent use; inpatient insulin use; and episode(s) of diabetic ketoacidosis diagnosis. Compared with a complex clinical algorithm, the tree-structured model to predict T1DM had 92.8% sensitivity, 99.3% specificity, 89.5% PPV, and 99.5% NPV. The preliminary predictive rule appears to be promising. Being able to distinguish between DM subtypes in administrative databases will allow large-scale subtype-specific analyses of medical care costs, morbidity, and mortality. © 2011 Diabetes Technology Society.
Confirmation of general relativity on large scales from weak lensing and galaxy velocities.
Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E; Lombriser, Lucas; Smith, Robert E
2010-03-11
Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, E(G), that combines measures of large-scale gravitational lensing, galaxy clustering and structure growth rate. The combination is insensitive to 'galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of E(G) different from the general relativistic prediction because, in these theories, the 'gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that E(G) = 0.39 +/- 0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of E(G) approximately 0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.
Confirmation of general relativity on large scales from weak lensing and galaxy velocities
NASA Astrophysics Data System (ADS)
Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E.; Lombriser, Lucas; Smith, Robert E.
2010-03-01
Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, EG, that combines measures of large-scale gravitational lensing, galaxy clustering and structure growth rate. The combination is insensitive to `galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of EG different from the general relativistic prediction because, in these theories, the `gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that EG = 0.39+/-0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of EG~0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f() theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.
Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L
2013-12-01
Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
Deduction of reservoir operating rules for application in global hydrological models
NASA Astrophysics Data System (ADS)
Coerver, Hubertus M.; Rutten, Martine M.; van de Giesen, Nick C.
2018-01-01
A big challenge in constructing global hydrological models is the inclusion of anthropogenic impacts on the water cycle, such as caused by dams. Dam operators make decisions based on experience and often uncertain information. In this study information generally available to dam operators, like inflow into the reservoir and storage levels, was used to derive fuzzy rules describing the way a reservoir is operated. Using an artificial neural network capable of mimicking fuzzy logic, called the ANFIS adaptive-network-based fuzzy inference system, fuzzy rules linking inflow and storage with reservoir release were determined for 11 reservoirs in central Asia, the US and Vietnam. By varying the input variables of the neural network, different configurations of fuzzy rules were created and tested. It was found that the release from relatively large reservoirs was significantly dependent on information concerning recent storage levels, while release from smaller reservoirs was more dependent on reservoir inflows. Subsequently, the derived rules were used to simulate reservoir release with an average Nash-Sutcliffe coefficient of 0.81.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giacinti, Gwenael; Kirk, John G.
We calculate the large-scale cosmic-ray (CR) anisotropies predicted for a range of Goldreich–Sridhar (GS) and isotropic models of interstellar turbulence, and compare them with IceTop data. In general, the predicted CR anisotropy is not a pure dipole; the cold spots reported at 400 TeV and 2 PeV are consistent with a GS model that contains a smooth deficit of parallel-propagating waves and a broad resonance function, though some other possibilities cannot, as yet, be ruled out. In particular, isotropic fast magnetosonic wave turbulence can match the observations at high energy, but cannot accommodate an energy dependence in the shape ofmore » the CR anisotropy. Our findings suggest that improved data on the large-scale CR anisotropy could provide a valuable probe of the properties—notably the power-spectrum—of the interstellar turbulence within a few tens of parsecs from Earth.« less
Dafni, Urania; Karlis, Dimitris; Pedeli, Xanthi; Bogaerts, Jan; Pentheroudakis, George; Tabernero, Josep; Zielinski, Christoph C; Piccart, Martine J; de Vries, Elisabeth G E; Latino, Nicola Jane; Douillard, Jean-Yves; Cherny, Nathan I
2017-01-01
The European Society for Medical Oncology (ESMO) has developed the ESMO Magnitude of Clinical Benefit Scale (ESMO-MCBS), a tool to assess the magnitude of clinical benefit from new cancer therapies. Grading is guided by a dual rule comparing the relative benefit (RB) and the absolute benefit (AB) achieved by the therapy to prespecified threshold values. The ESMO-MCBS v1.0 dual rule evaluates the RB of an experimental treatment based on the lower limit of the 95%CI (LL95%CI) for the hazard ratio (HR) along with an AB threshold. This dual rule addresses two goals: inclusiveness: not unfairly penalising experimental treatments from trials designed with adequate power targeting clinically meaningful relative benefit; and discernment: penalising trials designed to detect a small inconsequential benefit. Based on 50 000 simulations of plausible trial scenarios, the sensitivity and specificity of the LL95%CI rule and the ESMO-MCBS dual rule, the robustness of their characteristics for reasonable power and range of targeted and true HRs, are examined. The per cent acceptance of maximal preliminary grade is compared with other dual rules based on point estimate (PE) thresholds for RB. For particularly small or particularly large studies, the observed benefit needs to be relatively big for the ESMO-MCBS dual rule to be satisfied and the maximal grade awarded. Compared with approaches that evaluate RB using the PE thresholds, simulations demonstrate that the MCBS approach better exhibits the desired behaviour achieving the goals of both inclusiveness and discernment. RB assessment using the LL95%CI for HR rather than a PE threshold has two advantages: it diminishes the probability of excluding big benefit positive studies from achieving due credit and, when combined with the AB assessment, it increases the probability of downgrading a trial with a statistically significant but clinically insignificant observed benefit.
Dafni, Urania; Karlis, Dimitris; Pedeli, Xanthi; Bogaerts, Jan; Pentheroudakis, George; Tabernero, Josep; Zielinski, Christoph C; Piccart, Martine J; de Vries, Elisabeth G E; Latino, Nicola Jane; Douillard, Jean-Yves; Cherny, Nathan I
2017-01-01
Background The European Society for Medical Oncology (ESMO) has developed the ESMO Magnitude of Clinical Benefit Scale (ESMO-MCBS), a tool to assess the magnitude of clinical benefit from new cancer therapies. Grading is guided by a dual rule comparing the relative benefit (RB) and the absolute benefit (AB) achieved by the therapy to prespecified threshold values. The ESMO-MCBS v1.0 dual rule evaluates the RB of an experimental treatment based on the lower limit of the 95%CI (LL95%CI) for the hazard ratio (HR) along with an AB threshold. This dual rule addresses two goals: inclusiveness: not unfairly penalising experimental treatments from trials designed with adequate power targeting clinically meaningful relative benefit; and discernment: penalising trials designed to detect a small inconsequential benefit. Methods Based on 50 000 simulations of plausible trial scenarios, the sensitivity and specificity of the LL95%CI rule and the ESMO-MCBS dual rule, the robustness of their characteristics for reasonable power and range of targeted and true HRs, are examined. The per cent acceptance of maximal preliminary grade is compared with other dual rules based on point estimate (PE) thresholds for RB. Results For particularly small or particularly large studies, the observed benefit needs to be relatively big for the ESMO-MCBS dual rule to be satisfied and the maximal grade awarded. Compared with approaches that evaluate RB using the PE thresholds, simulations demonstrate that the MCBS approach better exhibits the desired behaviour achieving the goals of both inclusiveness and discernment. Conclusions RB assessment using the LL95%CI for HR rather than a PE threshold has two advantages: it diminishes the probability of excluding big benefit positive studies from achieving due credit and, when combined with the AB assessment, it increases the probability of downgrading a trial with a statistically significant but clinically insignificant observed benefit. PMID:29067214
Han, Xue; Hu, Shi; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou
2015-08-05
We propose effective fusion schemes for stationary electronic W state and flying photonic W state, respectively, by using the quantum-dot-microcavity coupled system. The present schemes can fuse a n-qubit W state and a m-qubit W state to a (m + n - 1)-qubit W state, that is, these schemes can be used to not only create large W state with small ones, but also to prepare 3-qubit W states with Bell states. The schemes are based on the optical selection rules and the transmission and reflection rules of the cavity and can be achieved with high probability. We evaluate the effect of experimental imperfections and the feasibility of the schemes, which shows that the present schemes can be realized with high fidelity in both the weak coupling and the strong coupling regimes. These schemes may be meaningful for the large-scale solid-state-based quantum computation and the photon-qubit-based quantum communication.
Evaluating scale-up rules of a high-shear wet granulation process.
Tao, Jing; Pandey, Preetanshu; Bindra, Dilbir S; Gao, Julia Z; Narang, Ajit S
2015-07-01
This work aimed to evaluate the commonly used scale-up rules for high-shear wet granulation process using a microcrystalline cellulose-lactose-based low drug loading formulation. Granule properties such as particle size, porosity, flow, and tabletability, and tablet dissolution were compared across scales using scale-up rules based on different impeller speed calculations or extended wet massing time. Constant tip speed rule was observed to produce slightly less granulated material at the larger scales. Longer wet massing time can be used to compensate for the lower shear experienced by the granules at the larger scales. Constant Froude number and constant empirical stress rules yielded granules that were more comparable across different scales in terms of compaction performance and tablet dissolution. Granule porosity was shown to correlate well with blend tabletability and tablet dissolution, indicating the importance of monitoring granule densification (porosity) during scale-up. It was shown that different routes can be chosen during scale-up to achieve comparable granule growth and densification by altering one of the three parameters: water amount, impeller speed, and wet massing time. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
Herculano-Houzel, Suzana; Kaas, Jon H.
2011-01-01
Gorillas and orangutans are primates at least as large as humans, but their brains amount to about one third of the size of the human brain. This discrepancy has been used as evidence that the human brain is about 3 times larger than it should be for a primate species of its body size. In contrast to the view that the human brain is special in its size, we have suggested that it is the great apes that might have evolved bodies that are unusually large, on the basis of our recent finding that the cellular composition of the human brain matches that expected for a primate brain of its size, making the human brain a linearly scaled-up primate brain in its number of cells. To investigate whether the brain of great apes also conforms to the primate cellular scaling rules identified previously, we determine the numbers of neuronal and other cells that compose the orangutan and gorilla cerebella, use these numbers to calculate the size of the brain and of the cerebral cortex expected for these species, and show that these match the sizes described in the literature. Our results suggest that the brains of great apes also scale linearly in their numbers of neurons like other primate brains, including humans. The conformity of great apes and humans to the linear cellular scaling rules that apply to other primates that diverged earlier in primate evolution indicates that prehistoric Homo species as well as other hominins must have had brains that conformed to the same scaling rules, irrespective of their body size. We then used those scaling rules and published estimated brain volumes for various hominin species to predict the numbers of neurons that composed their brains. We predict that Homo heidelbergensis and Homo neanderthalensis had brains with approximately 80 billion neurons, within the range of variation found in modern Homo sapiens. We propose that while the cellular scaling rules that apply to the primate brain have remained stable in hominin evolution (since they apply to simians, great apes and modern humans alike), the Colobinae and Pongidae lineages favored marked increases in body size rather than brain size from the common ancestor with the Homo lineage, while the Homo lineage seems to have favored a large brain instead of a large body, possibly due to the metabolic limitations to having both. PMID:21228547
Herculano-Houzel, Suzana; Kaas, Jon H
2011-01-01
Gorillas and orangutans are primates at least as large as humans, but their brains amount to about one third of the size of the human brain. This discrepancy has been used as evidence that the human brain is about 3 times larger than it should be for a primate species of its body size. In contrast to the view that the human brain is special in its size, we have suggested that it is the great apes that might have evolved bodies that are unusually large, on the basis of our recent finding that the cellular composition of the human brain matches that expected for a primate brain of its size, making the human brain a linearly scaled-up primate brain in its number of cells. To investigate whether the brain of great apes also conforms to the primate cellular scaling rules identified previously, we determine the numbers of neuronal and other cells that compose the orangutan and gorilla cerebella, use these numbers to calculate the size of the brain and of the cerebral cortex expected for these species, and show that these match the sizes described in the literature. Our results suggest that the brains of great apes also scale linearly in their numbers of neurons like other primate brains, including humans. The conformity of great apes and humans to the linear cellular scaling rules that apply to other primates that diverged earlier in primate evolution indicates that prehistoric Homo species as well as other hominins must have had brains that conformed to the same scaling rules, irrespective of their body size. We then used those scaling rules and published estimated brain volumes for various hominin species to predict the numbers of neurons that composed their brains. We predict that Homo heidelbergensis and Homo neanderthalensis had brains with approximately 80 billion neurons, within the range of variation found in modern Homo sapiens. We propose that while the cellular scaling rules that apply to the primate brain have remained stable in hominin evolution (since they apply to simians, great apes and modern humans alike), the Colobinae and Pongidae lineages favored marked increases in body size rather than brain size from the common ancestor with the Homo lineage, while the Homo lineage seems to have favored a large brain instead of a large body, possibly due to the metabolic limitations to having both. Copyright © 2011 S. Karger AG, Basel.
Multiscale modeling of lithium ion batteries: thermal aspects
Zausch, Jochen
2015-01-01
Summary The thermal behavior of lithium ion batteries has a huge impact on their lifetime and the initiation of degradation processes. The development of hot spots or large local overpotentials leading, e.g., to lithium metal deposition depends on material properties as well as on the nano- und microstructure of the electrodes. In recent years a theoretical structure emerges, which opens the possibility to establish a systematic modeling strategy from atomistic to continuum scale to capture and couple the relevant phenomena on each scale. We outline the building blocks for such a systematic approach and discuss in detail a rigorous approach for the continuum scale based on rational thermodynamics and homogenization theories. Our focus is on the development of a systematic thermodynamically consistent theory for thermal phenomena in batteries at the microstructure scale and at the cell scale. We discuss the importance of carefully defining the continuum fields for being able to compare seemingly different phenomenological theories and for obtaining rules to determine unknown parameters of the theory by experiments or lower-scale theories. The resulting continuum models for the microscopic and the cell scale are numerically solved in full 3D resolution. The complex very localized distributions of heat sources in a microstructure of a battery and the problems of mapping these localized sources on an averaged porous electrode model are discussed by comparing the detailed 3D microstructure-resolved simulations of the heat distribution with the result of the upscaled porous electrode model. It is shown, that not all heat sources that exist on the microstructure scale are represented in the averaged theory due to subtle cancellation effects of interface and bulk heat sources. Nevertheless, we find that in special cases the averaged thermal behavior can be captured very well by porous electrode theory. PMID:25977870
An investigation of the use of temporal decomposition in space mission scheduling
NASA Technical Reports Server (NTRS)
Bullington, Stanley E.; Narayanan, Venkat
1994-01-01
This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
Analysis on the dynamic error for optoelectronic scanning coordinate measurement network
NASA Astrophysics Data System (ADS)
Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie
2018-01-01
Large-scale dynamic three-dimension coordinate measurement technique is eagerly demanded in equipment manufacturing. Noted for advantages of high accuracy, scale expandability and multitask parallel measurement, optoelectronic scanning measurement network has got close attention. It is widely used in large components jointing, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research about optoelectronic scanning measurement network is focused on static measurement capacity and research about dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts the application. The workshop measurement and positioning system is a representative which can realize dynamic measurement function in theory. In this paper we conduct deep research on dynamic error resources and divide them two parts: phase error and synchronization error. Dynamic error model is constructed. Based on the theory above, simulation about dynamic error is carried out. Dynamic error is quantized and the rule of volatility and periodicity has been found. Dynamic error characteristics are shown in detail. The research result lays foundation for further accuracy improvement.
A Novel BA Complex Network Model on Color Template Matching
Han, Risheng; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing evolving process of template's color distribution. And then the template's BA complex network model can be used to select important color pixels which have much larger effects than other color pixels in matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD based matching and SAD based matching. Experiments show the performance of color template matching results can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235
A novel BA complex network model on color template matching.
Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing evolving process of template's color distribution. And then the template's BA complex network model can be used to select important color pixels which have much larger effects than other color pixels in matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD based matching and SAD based matching. Experiments show the performance of color template matching results can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching.
Capturing the semiotic relationship between terms
NASA Astrophysics Data System (ADS)
Hargood, Charlie; Millard, David E.; Weal, Mark J.
2010-04-01
Tags describing objects on the web are often treated as facts about a resource, whereas it is quite possible that they represent more subjective observations. Existing methods of term expansion expand terms based on dictionary definitions or statistical information on term occurrence. Here we propose the use of a thematic model for term expansion based on semiotic relationships between terms; this has been shown to improve a system's thematic understanding of content and tags and to tease out the more subjective implications of those tags. Such a system relies on a thematic model that must be made by hand. In this article, we explore a method to capture a semiotic understanding of particular terms using a rule-based guide to authoring a thematic model. Experimentation shows that it is possible to capture valid definitions that can be used for semiotic term expansion but that the guide itself may not be sufficient to support this on a large scale. We argue that whilst the formation of super definitions will mitigate some of these problems, the development of an authoring support tool may be necessary to solve others.
First-order fire effects models for land Management: Overview and issues
Elizabeth D. Reinhardt; Matthew B. Dickinson
2010-01-01
We give an overview of the science application process at work in supporting fire management. First-order fire effects models, such as those discussed in accompanying papers, are the building blocks of software systems designed for application to landscapes over time scales from days to centuries. Fire effects may be modeled using empirical, rule based, or process...
Extracting multistage screening rules from online dating activity data.
Bruch, Elizabeth; Feinberg, Fred; Lee, Kee Yeun
2016-09-20
This paper presents a statistical framework for harnessing online activity data to better understand how people make decisions. Building on insights from cognitive science and decision theory, we develop a discrete choice model that allows for exploratory behavior and multiple stages of decision making, with different rules enacted at each stage. Critically, the approach can identify if and when people invoke noncompensatory screeners that eliminate large swaths of alternatives from detailed consideration. The model is estimated using deidentified activity data on 1.1 million browsing and writing decisions observed on an online dating site. We find that mate seekers enact screeners ("deal breakers") that encode acceptability cutoffs. A nonparametric account of heterogeneity reveals that, even after controlling for a host of observable attributes, mate evaluation differs across decision stages as well as across identified groupings of men and women. Our statistical framework can be widely applied in analyzing large-scale data on multistage choices, which typify searches for "big ticket" items.
Extracting multistage screening rules from online dating activity data
Bruch, Elizabeth; Feinberg, Fred; Lee, Kee Yeun
2016-01-01
This paper presents a statistical framework for harnessing online activity data to better understand how people make decisions. Building on insights from cognitive science and decision theory, we develop a discrete choice model that allows for exploratory behavior and multiple stages of decision making, with different rules enacted at each stage. Critically, the approach can identify if and when people invoke noncompensatory screeners that eliminate large swaths of alternatives from detailed consideration. The model is estimated using deidentified activity data on 1.1 million browsing and writing decisions observed on an online dating site. We find that mate seekers enact screeners (“deal breakers”) that encode acceptability cutoffs. A nonparametric account of heterogeneity reveals that, even after controlling for a host of observable attributes, mate evaluation differs across decision stages as well as across identified groupings of men and women. Our statistical framework can be widely applied in analyzing large-scale data on multistage choices, which typify searches for “big ticket” items. PMID:27578870
NASA Astrophysics Data System (ADS)
Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.
2009-06-01
Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2013-04-01
Water resources systems are operated, mostly, using a set of pre-defined rules not regarding, usually, to an optimal allocation in terms of water use or economic benefits, but to historical and institutional reasons. These operating policies are reproduced, commonly, as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypothesis. Despite the high degree of acceptation and testing that these models have achieved, the actual operation of water resources systems hardly follows all the time the pre-defined rules with the consequent uncertainty on the system performance. Real-world reservoir operation is very complex, affected by input uncertainty (imprecision in forecast inflow, seepage and evaporation losses, etc.), filtered by the reservoir operator's experience and natural risk-aversion, while considering the different physical and legal/institutional constraints in order to meet the different demands and system requirements. The aim of this work is to expose a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as close as possible the actual decisions made by managers. After built up, the fuzzy rule-based system can be integrated in a water resources management model, making possible to assess the system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated in a conjunctive way) with the purpose of guaranteeing a high realiability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction). A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total water stored in the reservoirs) and the month of the year as inputs; and the demand deliveries as outputs. The developed simulation management model integrates the fuzzy-ruled system of the operation of the two main reservoirs of the basin with the corresponding mass balance equations, the physical or boundary conditions and the water allocation rules among the competing demands. Historical information on inflow time series is used as inputs to the model simulation, being trained and validated using historical information on reservoir storage level and flow in several streams of the Mijares river. This methodology provides a more flexible and close to real policies approach. The model is easy to develop and to understand due to its rule-based structure, which mimics the human way of thinking. This can improve cooperation and negotiation between managers, decision-makers and stakeholders. The approach can be also applied to analyze the historical operation of the reservoir (what we have called a reservoir operation "audit").
A corpus for plant-chemical relationships in the biomedical domain.
Choi, Wonjun; Kim, Baeksoo; Cho, Hyejin; Lee, Doheon; Lee, Hyunju
2016-09-20
Plants are natural products that humans consume in various ways including food and medicine. They have a long empirical history of treating diseases with relatively few side effects. Based on these strengths, many studies have been performed to verify the effectiveness of plants in treating diseases. It is crucial to understand the chemicals contained in plants because these chemicals can regulate activities of proteins that are key factors in causing diseases. With the accumulation of a large volume of biomedical literature in various databases such as PubMed, it is possible to automatically extract relationships between plants and chemicals in a large-scale way if we apply a text mining approach. A cornerstone of achieving this task is a corpus of relationships between plants and chemicals. In this study, we first constructed a corpus for plant and chemical entities and for the relationships between them. The corpus contains 267 plant entities, 475 chemical entities, and 1,007 plant-chemical relationships (550 and 457 positive and negative relationships, respectively), which are drawn from 377 sentences in 245 PubMed abstracts. Inter-annotator agreement scores for the corpus among three annotators were measured. The simple percent agreement scores for entities and trigger words for the relationships were 99.6 and 94.8 %, respectively, and the overall kappa score for the classification of positive and negative relationships was 79.8 %. We also developed a rule-based model to automatically extract such plant-chemical relationships. When we evaluated the rule-based model using the corpus and randomly selected biomedical articles, overall F-scores of 68.0 and 61.8 % were achieved, respectively. We expect that the corpus for plant-chemical relationships will be a useful resource for enhancing plant research. The corpus is available at http://combio.gist.ac.kr/plantchemicalcorpus .
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent Base Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper either use computational algorithms or procedure implementations developed by Matlab to simulate agent based models in a principal programming language and mathematical theory using clusters, these clusters work as a high performance computational performance to run the program in parallel computational. In both cases, a model is defined as compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Le Mouël, Jean-Louis; Allègre, Claude J.; Narteau, Clément
1997-01-01
A scaling law approach is used to simulate the dynamo process of the Earth’s core. The model is made of embedded turbulent domains of increasing dimensions, until the largest whose size is comparable with the site of the core, pervaded by large-scale magnetic fields. Left-handed or right-handed cyclones appear at the lowest scale, the scale of the elementary domains of the hierarchical model, and disappear. These elementary domains then behave like electromotor generators with opposite polarities depending on whether they contain a left-handed or a right-handed cyclone. To transfer the behavior of the elementary domains to larger ones, a dynamic renormalization approach is used. A simple rule is adopted to determine whether a domain of scale l is a generator—and what its polarity is—in function of the state of the (l − 1) domains it is made of. This mechanism is used as the main ingredient of a kinematic dynamo model, which displays polarity intervals, excursions, and reversals of the geomagnetic field. PMID:11038547
Orographic barriers GIS-based definition of the Campania-Lucanian Apennine Range (Southern Italy)
NASA Astrophysics Data System (ADS)
Cuomo, Albina; Guida, Domenico
2010-05-01
The presence of mountains on the land surfaces plays a central role in the space-time dynamics of the hydrological, geomorphic and ecological systems (Roe G. H., 2005). The aim of this paper is to identify, delimitate and classify the orographic relief in the Campania - Lucanian Apennine (Southern Italy) to investigate the effects of large-scale orographic and small-scale windward-leeward phenomena on distribution, frequency and duration of rainfall. The scale-dependent effects derived from the topographic relief favor the utilization of a hierarchical and multi-scale approach. The approach is based on a GIS procedure applied on Digital Elevation Model (DEM) with 20 meters cell size and derived from Regional Technical Map (CTR) of Campania region (1:5000). The DEM has been smoothed from data spikes and pits and we have then proceed to: a) Identify the three basic landforms of the relief (summit, hillslope and plain) by generalizing a previous 10-type landforms using the TPI method (Weiss A. 2001) and by simplifying the established rules of the differential geometry on topographic surface; b) Delimitate the mountain relief by modifying the method proposed by O. Z. Chaudhry and W. A. Mackaness (2008). It is based on three concepts: prominence , morphological variability and parent-child relationship. Graphical results have shown a good spatial correspondence between the digital definition of mountains and their morpho-tectonic structure derived from tectonic geomorphological studies; c) Classify, by using a set rules of spatial statistics (Cluster analysis) on geomorphometric parameters (elevation, curvature, slope, aspect, relative relief and form factor). Finally, we have recognized three prototypal orographic barriers shapes: cone, tableland and ridge, which are fundamental to improve the models of orographic rainfall in the Southern Apennines. References Chaudhry O. Z.and Mackaness W. A. (2008). Creating Mountains out of Mole Hills: Automatic Identification of Hills and Ranges Using Morphometric Analysis. Transactions in GIS. 12(5), pp. 567-589 Roe Gerard H. 2005. Orographic precipitation. Annual Review of Earth and Planetary Sciences. Vol. 33: 645-671. Weiss A., 2001. Topographic position and landform analysis. Poster Presentation. ESRI User Conference. San Diego, CA.
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, that is based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.
Ottino-Löffler, Bertrand; Strogatz, Steven H
2016-06-01
We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005)PLEEE81539-375510.1103/PhysRevE.72.046211]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.
Rule Following and Rule Use in the Balance-Scale Task
ERIC Educational Resources Information Center
Shultz, Thomas R.; Takane, Yoshio
2007-01-01
Quinlan et al. [Quinlan, p., van der Mass, H., Jansen, B., Booij, O., & Rendell, M. (this issue). Re-thinking stages of cognitive development: An appraisal of connectionist models of the balance scale task. "Cognition", doi:10.1016/j.cognition.2006.02.004] use Latent Class Analysis (LCA) to criticize a connectionist model of development on the…
Weather Research and Forecasting Model Wind Sensitivity Study at Edwards Air Force Base, CA
NASA Technical Reports Server (NTRS)
Watson, Leela R.; Bauman, William H., III
2008-01-01
NASA prefers to land the space shuttle at Kennedy Space Center (KSC). When weather conditions violate Flight Rules at KSC, NASA will usually divert the shuttle landing to Edwards Air Force Base (EAFB) in Southern California. But forecasting surface winds at EAFB is a challenge for the Spaceflight Meteorology Group (SMG) forecasters due to the complex terrain that surrounds EAFB, One particular phenomena identified by SMG is that makes it difficult to forecast the EAFB surface winds is called "wind cycling". This occurs when wind speeds and directions oscillate among towers near the EAFB runway leading to a challenging deorbit bum forecast for shuttle landings. The large-scale numerical weather prediction models cannot properly resolve the wind field due to their coarse horizontal resolutions, so a properly tuned high-resolution mesoscale model is needed. The Weather Research and Forecasting (WRF) model meets this requirement. The AMU assessed the different WRF model options to determine which configuration best predicted surface wind speed and direction at EAFB, To do so, the AMU compared the WRF model performance using two hot start initializations with the Advanced Research WRF and Non-hydrostatic Mesoscale Model dynamical cores and compared model performance while varying the physics options.
Model and Dynamic Behavior of Malware Propagation over Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Song, Yurong; Jiang, Guo-Ping
Based on the inherent characteristics of wireless sensor networks (WSN), the dynamic behavior of malware propagation in flat WSN is analyzed and investigated. A new model is proposed using 2-D cellular automata (CA), which extends the traditional definition of CA and establishes whole transition rules for malware propagation in WSN. Meanwhile, the validations of the model are proved through theoretical analysis and simulations. The theoretical analysis yields closed-form expressions which show good agreement with the simulation results of the proposed model. It is shown that the malware propaga-tion in WSN unfolds neighborhood saturation, which dominates the effects of increasing infectivity and limits the spread of the malware. MAC mechanism of wireless sensor networks greatly slows down the speed of malware propagation and reduces the risk of large-scale malware prevalence in these networks. The proposed model can describe accurately the dynamic behavior of malware propagation over WSN, which can be applied in developing robust and efficient defense system on WSN.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering
Bi, Xia-an; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, researches on packet classification algorithms based on hierarchical trie have become an important packet classification research branch because of their widely practical use. Although hierarchical trie is beneficial to save large storage space, it has several shortcomings such as the existence of backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses the formalization method to deal with the packet classification problem by means of mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, and thereby diversified clusters are formed. Thirdly, this paper proposes a hierarchical trie based on the results of expectation-maximization clustering. Finally, this paper respectively conducts simulation experiments and real-environment experiments to compare the performances of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476
A cellular automaton model for ship traffic flow in waterways
NASA Astrophysics Data System (ADS)
Qi, Le; Zheng, Zhongyi; Gang, Longhui
2017-04-01
With the development of marine traffic, waterways become congested and more complicated traffic phenomena in ship traffic flow are observed. It is important and necessary to build a ship traffic flow model based on cellular automata (CAs) to study the phenomena and improve marine transportation efficiency and safety. Spatial discretization rules for waterways and update rules for ship movement are two important issues that are very different from vehicle traffic. To solve these issues, a CA model for ship traffic flow, called a spatial-logical mapping (SLM) model, is presented. In this model, the spatial discretization rules are improved by adding a mapping rule. And the dynamic ship domain model is considered in the update rules to describe ships' interaction more exactly. Take the ship traffic flow in the Singapore Strait for example, some simulations were carried out and compared. The simulations show that the SLM model could avoid ship pseudo lane-change efficiently, which is caused by traditional spatial discretization rules. The ship velocity change in the SLM model is consistent with the measured data. At finally, from the fundamental diagram, the relationship between traffic ability and the lengths of ships is explored. The number of ships in the waterway declines when the proportion of large ships increases.
Modernization and multiscale databases at the U.S. geological survey
Morrison, J.L.
1992-01-01
The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. ?? 1992.
(abstract) Scaling Nominal Solar Cell Impedances for Array Design
NASA Technical Reports Server (NTRS)
Mueller, Robert L; Wallace, Matthew T.; Iles, Peter
1994-01-01
This paper discusses a task the objective of which is to characterize solar cell array AC impedance and develop scaling rules for impedance characterization of large arrays by testing single solar cells and small arrays. This effort is aimed at formulating a methodology for estimating the AC impedance of the Mars Pathfinder (MPF) cruise and lander solar arrays based upon testing single cells and small solar cell arrays and to create a basis for design of a single shunt limiter for MPF power control of flight solar arrays having very different inpedances.
Rasmussen's model of human behavior in laparoscopy training.
Wentink, M; Stassen, L P S; Alwayn, I; Hosman, R J A W; Stassen, H G
2003-08-01
Compared to aviation, where virtual reality (VR) training has been standardized and simulators have proven their benefits, the objectives, needs, and means of VR training in minimally invasive surgery (MIS) still have to be established. The aim of the study presented is to introduce Rasmussen's model of human behavior as a practical framework for the definition of the training objectives, needs, and means in MIS. Rasmussen distinguishes three levels of human behavior: skill-, rule-, and knowledge-based behaviour. The training needs of a laparoscopic novice can be determined by identifying the specific skill-, rule-, and knowledge-based behavior that is required for performing safe laparoscopy. Future objectives of VR laparoscopy trainers should address all three levels of behavior. Although most commercially available simulators for laparoscopy aim at training skill-based behavior, especially the training of knowledge-based behavior during complications in surgery will improve safety levels. However, the cost and complexity of a training means increases when the training objectives proceed from the training of skill-based behavior to the training of complex knowledge-based behavior. In aviation, human behavior models have been used successfully to integrate the training of skill-, rule-, and knowledge-based behavior in a full flight simulator. Understanding surgeon behavior is one of the first steps towards a future full-scale laparoscopy simulator.
NASA Astrophysics Data System (ADS)
McMillan, Mitchell; Hu, Zhiyong
2017-10-01
Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year time period, throughout a large region. The model is based on the widely-used excess-velocity equation and comprised three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
Agent Based Modeling Applications for Geosciences
NASA Astrophysics Data System (ADS)
Stein, J. S.
2004-12-01
Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include: significant computational requirements in order to keep track of thousands to millions of agents, methods and strategies of model validation are lacking, as is a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences, include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in a thermodynamic framework as a set of reactions that roll-up the integrated effect that diverse biological communities exert on a geological system. This approach may work well to predict the effect of certain biological communities in specific environments in which experimental data is available. However, it does not further our knowledge of how the geobiological system actually functions on a micro scale. Agent-based techniques may provide a framework to explore the fundamental interactions required to explain the system-wide behavior. This presentation will present a survey of several promising applications of agent-based modeling approaches to problems in the geosciences and describe specific contributions to some of the inherent challenges facing this approach.
NASA Astrophysics Data System (ADS)
Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy
2017-09-01
An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components, including subgrid-scale turbulence, combustion, soot and radiation models which are fully coupled. It is designed to simulate the temporal and fluid dynamical effects of turbulent reaction flow for non-premixed diffusion flame. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Several turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23 were investigated. It was found that the temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers of 0.3, respectively, and a Smagorinsky constant of 0.2 applied. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
Leveraging Modeling Approaches: Reaction Networks and Rules
Blinov, Michael L.; Moraru, Ion I.
2012-01-01
We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high resolution and/or high throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatio-temporal distribution of molecules and complexes, their interactions kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks – the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks. PMID:22161349
Leveraging modeling approaches: reaction networks and rules.
Blinov, Michael L; Moraru, Ion I
2012-01-01
We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high-resolution and/or high-throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatiotemporal distribution of molecules and complexes, their interactions kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks - the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks.
Complex-energy approach to sum rules within nuclear density functional theory
Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...
2015-04-27
The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish anmore » efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random- phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.« less
An Infinite Mixture Model for Coreference Resolution in Clinical Notes
Liu, Sijia; Liu, Hongfang; Chaudhary, Vipin; Li, Dingcheng
2016-01-01
It is widely acknowledged that natural language processing is indispensable to process electronic health records (EHRs). However, poor performance in relation detection tasks, such as coreference (linguistic expressions pertaining to the same entity/event) may affect the quality of EHR processing. Hence, there is a critical need to advance the research for relation detection from EHRs. Most of the clinical coreference resolution systems are based on either supervised machine learning or rule-based methods. The need for manually annotated corpus hampers the use of such system in large scale. In this paper, we present an infinite mixture model method using definite sampling to resolve coreferent relations among mentions in clinical notes. A similarity measure function is proposed to determine the coreferent relations. Our system achieved a 0.847 F-measure for i2b2 2011 coreference corpus. This promising results and the unsupervised nature make it possible to apply the system in big-data clinical setting. PMID:27595047
NASA Astrophysics Data System (ADS)
Saadi, Sameh; Simonneaux, Vincent; Boulet, Gilles; Mougenot, Bernard; Zribi, Mehrez; Lili Chabaane, Zohra
2015-04-01
Water scarcity is one of the main factors limiting agricultural development in semi-arid areas. It is thus of major importance to design tools allowing a better management of this resource. Remote sensing has long been used for computing evapotranspiration estimates, which is an input for crop water balance monitoring. Up to now, only medium and low resolution data (e.g. MODIS) are available on regular basis to monitor cultivated areas. However, the increasing availability of high resolution high repetitivity VIS-NIR remote sensing, like the forthcoming Sentinel-2 mission to be lunched in 2015, offers unprecedented opportunity to improve this monitoring. In this study, regional crops water consumption was estimated with the SAMIR software (Satellite of Monitoring Irrigation) using the FAO-56 dual crop coefficient water balance model fed with high resolution NDVI image time series providing estimates of both the actual basal crop coefficient (Kcb) and the vegetation fraction cover. The model includes a soil water model, requiring the knowledge of soil water holding capacity, maximum rooting depth, and water inputs. As irrigations are usually not known on large areas, they are simulated based on rules reproducing the farmer practices. The main objective of this work is to assess the operationality and accuracy of SAMIR at plot and perimeter scales, when several land use types (winter cereals, summer vegetables…), irrigation and agricultural practices are intertwined in a given landscape, including complex canopies such as sparse orchards. Meteorological ground stations were used to compute the reference evapotranspiration and get the rainfall depths. Two time series of ten and fourteen high-resolution SPOT5 have been acquired for the 2008-2009 and 2012-2013 hydrological years over an irrigated area in central Tunisia. They span the various successive crop seasons. The images were radiometrically corrected, first, using the SMAC6s Algorithm, second, using invariant objects located on the scene, based on visual observation of the images. From these time series, a Normalized Difference Vegetation Index (NDVI) profile was generated for each pixel. SAMIR was first calibrated based on ground measurements of evapotranspiration achieved using eddy-correlation devices installed on irrigated wheat and barley plots. After calibration, the model was run to spatialize irrigation over the whole area and a validation was done using cumulated seasonal water volumes obtained from ground survey at both plot and perimeter scales. The results show that although determination of model parameters was successful at plot scale, irrigation rules required an additional calibration which was achieved at perimeter scale.
Natural Covariant Planck Scale Cutoffs and the Cosmic Microwave Background Spectrum.
Chatwin-Davies, Aidan; Kempf, Achim; Martin, Robert T W
2017-07-21
We calculate the impact of quantum gravity-motivated ultraviolet cutoffs on inflationary predictions for the cosmic microwave background spectrum. We model the ultraviolet cutoffs fully covariantly to avoid possible artifacts of covariance breaking. Imposing these covariant cutoffs results in the production of small, characteristically k-dependent oscillations in the spectrum. The size of the effect scales linearly with the ratio of the Planck to Hubble lengths during inflation. Consequently, the relative size of the effect could be as large as one part in 10^{5}; i.e., eventual observability may not be ruled out.
Stochasticity of convection in Giga-LES data
NASA Astrophysics Data System (ADS)
De La Chevrotière, Michèle; Khouider, Boualem; Majda, Andrew J.
2016-09-01
The poor representation of tropical convection in general circulation models (GCMs) is believed to be responsible for much of the uncertainty in the predictions of weather and climate in the tropics. The stochastic multicloud model (SMCM) was recently developed by Khouider et al. (Commun Math Sci 8(1):187-216, 2010) to represent the missing variability in GCMs due to unresolved features of organized tropical convection. The SMCM is based on three cloud types (congestus, deep and stratiform), and transitions between these cloud types are formalized in terms of probability rules that are functions of the large-scale environment convective state and a set of seven arbitrary cloud timescale parameters. Here, a statistical inference method based on the Bayesian paradigm is applied to estimate these key cloud timescales from the Giga-LES dataset, a 24-h large-eddy simulation (LES) of deep tropical convection (Khairoutdinov et al. in J Adv Model Earth Syst 1(12), 2009) over a domain comparable to a GCM gridbox. A sequential learning strategy is used where the Giga-LES domain is partitioned into a few subdomains, and atmospheric time series obtained on each subdomain are used to train the Bayesian procedure incrementally. Convergence of the marginal posterior densities for all seven parameters is demonstrated for two different grid partitions, and sensitivity tests to other model parameters are also presented. A single column model simulation using the SMCM parameterization with the Giga-LES inferred parameters reproduces many important statistical features of the Giga-LES run, without any further tuning. In particular it exhibits intermittent dynamical behavior in both the stochastic cloud fractions and the large scale dynamics, with periods of dry phases followed by a coherent sequence of congestus, deep, and stratiform convection, varying on timescales of a few hours consistent with the Giga-LES time series. The chaotic variations of the cloud area fractions were captured fairly well both qualitatively and quantitatively demonstrating the stochastic nature of convection in the Giga-LES simulation.
A Connectionist Model of a Continuous Developmental Transition in the Balance Scale Task
ERIC Educational Resources Information Center
Schapiro, Anna C.; McClelland, James L.
2009-01-01
A connectionist model of the balance scale task is presented which exhibits developmental transitions between "Rule I" and "Rule II" behavior [Siegler, R. S. (1976). Three aspects of cognitive development. "Cognitive Psychology," 8, 481-520.] as well as the "catastrophe flags" seen in data from Jansen and van der Maas [Jansen, B. R. J., & van der…
Spectral sum rules for confining large-N theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherman, Aleksey; McGady, David A.; Yamazaki, Masahito
2016-06-17
We consider asymptotically-free four-dimensional large-$N$ gauge theories with massive fermionic and bosonic adjoint matter fields, compactified on squashed three-spheres, and examine their regularized large-$N$ confined-phase spectral sums. The analysis is done in the limit of vanishing ’t Hooft coupling, which is justified by taking the size of the compactification manifold to be small compared to the inverse strong scale Λ ₋1. We find our results motivate us to conjecture some universal spectral sum rules for these large $N$ gauge theories.
Relaxing the rule of ten events per variable in logistic and Cox regression.
Vittinghoff, Eric; McCulloch, Charles E
2007-03-15
The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.
Sensor-based operation of autonomous robots in unstructured and/or outdoor environments has revealed to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. An approach. which we have named the {open_quotes}Fuzzy Behaviorist Approach{close_quotes} (FBA) is proposed in an attempt to remedy some of these difficulties. This approach is based on the representation of the system`s uncertainties using Fuzzy Set Theory-basedmore » approximations and on the representation of the reasoning and control schemes as sets of elemental behaviors. Using the FBA, a formalism for rule base development and an automated generator of fuzzy rules have been developed. This automated system can automatically construct the set of membership functions corresponding to fuzzy behaviors. Once these have been expressed in qualitative terms by the user. The system also checks for completeness of the rule base and for non-redundancy of the rules (which has traditionally been a major hurdle in rule base development). Two major conceptual features, the suppression and inhibition mechanisms which allow to express a dominance between behaviors are discussed in detail. Some experimental results obtained with the automated fuzzy, rule generator applied to the domain of sensor-based navigation in aprion unknown environments. using one of our autonomous test-bed robots as well as a real car in outdoor environments, are then reviewed and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using the {open_quotes}Fuzzy Behaviorist{close_quotes} concepts.« less
Preduction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation
2016-08-02
PREDICTION OF VEHICLE MOBILITY ON LARGE-SCALE SOFT- SOIL TERRAIN MAPS USING PHYSICS-BASED SIMULATION Tamer M. Wasfy, Paramsothy Jayakumar, Dave...NRMM • Objectives • Soft Soils • Review of Physics-Based Soil Models • MBD/DEM Modeling Formulation – Joint & Contact Constraints – DEM Cohesive... Soil Model • Cone Penetrometer Experiment • Vehicle- Soil Model • Vehicle Mobility DOE Procedure • Simulation Results • Concluding Remarks 2UNCLASSIFIED
Simple models for studying complex spatiotemporal patterns of animal behavior
NASA Astrophysics Data System (ADS)
Tyutyunov, Yuri V.; Titova, Lyudmila I.
2017-06-01
Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species, such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial-difference equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with explicit description of decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges due to self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites or mechanical waves of the media, e.g., sound). These cues play a role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and the properties of the movement stimulus we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a little more detailed 'hybrid' approach that combines simulation of the individual movements with the continuous model describing diffusion and decay of the stimuli in an explicit way. These can be used to improve movement models for many species, including large marine predators.
A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2016-02-01
Large-eddy simulation (LES) solves only the large scales part of turbulent flows by using a scales separation based on a filtering operation. The solution of the filtered Navier-Stokes equations requires then to model the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress model. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performances, i.e., the model ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS), filtered DNS in the case of classic flows simulated with a pseudo-spectral solver and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement for velocity and scalar statistics predictions.
NASA Astrophysics Data System (ADS)
Burik, P.; Pesek, L.; Kejzlar, P.; Andrsova, Z.; Zubko, P.
2017-01-01
The main idea of this work is using a physical model to prepare a virtual material with required properties. The model is based on the relationship between the microstructure and mechanical properties. The macroscopic (global) mechanical properties of steel are highly dependent upon microstructure, crystallographic orientation of grains, distribution of each phase present, etc... We need to know the local mechanical properties of each phase separately in multiphase materials. The grain size is a scale, where local mechanical properties are responsible for the behavior. Nanomechanical testing using depth sensing indentation (DSI) provides a straightforward solution for quantitatively characterizing each of phases in microstructure because it is very powerful technique for characterization of materials in small volumes. The aim of this experimental investigation is: (i) to prove how the mixing rule works for local mechanical properties (indentation hardness HIT) in microstructure scale using the DSI technique on steel sheets with different microstructure; (ii) to compare measured global properties with properties achieved by mixing rule; (iii) to analyze the effect of crystallographic orientations of grains on the mixing rule.
Highly efficient model updating for structural condition assessment of large-scale bridges.
DOT National Transportation Integrated Search
2015-02-01
For eciently updating models of large-scale structures, the response surface (RS) method based on radial basis : functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying : the proposed method a...
Uncovering Nature’s 100 TeV Particle Accelerators in the Large-Scale Jets of Quasars
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, Eileen; Sparks, William B.; Perlman, Eric S.; Van Der Marel, Roeland P.; Anderson, Jay; Sohn, S. Tony; Biretta, John A.; Norman, Colin Arthur; Chiaberge, Marco
2016-04-01
Since the first jet X-ray detections sixteen years ago the adopted paradigm for the X-ray emission has been the IC/CMB model that requires highly relativistic (Lorentz factors of 10-20), extremely powerful (sometimes super-Eddington) kpc scale jets. R I will discuss recently obtained strong evidence, from two different avenues, IR to optical polarimetry for PKS 1136-135 and gamma-ray observations for 3C 273 and PKS 0637-752, ruling out the EC/CMB model. Our work constrains the jet Lorentz factors to less than ~few, and leaves as the only reasonable alternative synchrotron emission from ~100 TeV jet electrons, accelerated hundreds of kpc away from the central engine. This refutes over a decade of work on the jet X-ray emission mechanism and overall energetics and, if confirmed in more sources, it will constitute a paradigm shift in our understanding of powerful large scale jets and their role in the universe. Two important findings emerging from our work will also discussed be: (i) the solid angle-integrated luminosity of the large scale jet is comparable to that of the jet core, contrary to the current belief that the core is the dominant jet radiative outlet and (ii) the large scale jets are the main source of TeV photon in the universe, something potentially important, as TeV photons have been suggested to heat up the intergalactic medium and reduce the number of dwarf galaxies formed.
2017-01-01
The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine mitigating resolution limit problems using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
The effect of wind tunnel wall interference on the performance of a fan-in-wing VTOL model
NASA Technical Reports Server (NTRS)
Heyson, H. H.
1974-01-01
A fan-in-wing model with a 1.07-meter span was tested in seven different test sections with cross-sectional areas ranging from 2.2 sq meters to 265 sq meters. The data from the different test sections are compared both with and without correction for wall interference. The results demonstrate that extreme care must be used in interpreting uncorrected VTOL data since the wall interference may be so large as to invalidate even trends in the data. The wall interference is particularly large at the tail, a result which is in agreement with recently published comparisons of flight and large scale wind tunnel data for a propeller-driven deflected-slipstream configuration. The data verify the wall-interference theory even under conditions of extreme interference. A method yields reasonable estimates for the onset of Rae's minimum-speed limit. The rules for choosing model sizes to produce negligible wall effects are considerably in error and permit the use of excessively large models.
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large-global-scale natural phenomena needs to be improved and new observing platform are expected. We have studied the concept of Moon as an Earth observation in these years. Comparing with manmade satellite platform, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information,which is of following advantages: large observation range, variable view angle, long-term continuous observation, extra-long life cycle, with the characteristics of longevity ,consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring the large scale geoscience phenomena including large scale atmosphere change, large scale ocean change,large scale land surface dynamic change,solid earth dynamic change,etc. For the purpose of establishing a Moon-based Earth observation platform, we already have a plan to study the five aspects as follows: mechanism and models of moon-based observing earth sciences macroscopic phenomena; sensors' parameters optimization and methods of moon-based Earth observation; site selection and environment of moon-based Earth observation; Moon-based Earth observation platform; and Moon-based Earth observation fundamental scientific framework.
The one scale that rules them all
NASA Astrophysics Data System (ADS)
Ouellette, Jennifer
2017-05-01
There are very real constraints on how large a complex organism can grow. This is the essence of all modern-day scaling laws, and the subject of Geoffrey West's provocative new book Scale: the Universal Laws of Life and Death in Organisms, Cities and Companies
Track-based event recognition in a realistic crowded environment
NASA Astrophysics Data System (ADS)
van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.
2014-10-01
Automatic detection of abnormal behavior in CCTV cameras is important to improve the security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), when he follows the victim, or when he interacts with an accomplice before and after the incident (longer time scale). This paper focusses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup, where 10 actors perform these actions. The method is also applied to all tracks that are generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assists operators to find threatening behavior and enrich the selection of videos that are to be observed.
Statistical Measures of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard
1993-12-01
\\inv Mpc} To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_{B(0)}=15.5 over two contiguous regions which cover one-quarter of the sky and include ~ 11,000 galaxies. The salient features of these data are voids with diameter 30-50\\hmpc and coherent dense structures with a scale ~ 100\\hmpc. Comparison with N-body simulations rules out the ``standard" CDM model (Omega =1, b=1.5, sigma_8 =1) at the 99% confidence level because this model has insufficient power on scales lambda >30\\hmpc. An unbiased open universe CDM model (Omega h =0.2) and a biased CDM model with non-zero cosmological constant (Omega h =0.24, lambda_0 =0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L>L(*) ) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the ``non-linear" regime, <= 10\\hmpc, the high- and low-density regions are multiply-connected over a broad range of density threshold, as in a filamentary net. On smoothing scales >10\\hmpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast delta rho //lineρ=-0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2sigma ) in bright (L>L(*) ) galaxy samples than in samples which include fainter galaxies.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.
Complexity Characteristics of Currency Networks
NASA Astrophysics Data System (ADS)
Gorski, A. Z.; Drozdz, S.; Kwapien, J.; Oswiecimka, P.
2006-11-01
A large set of daily FOREX time series is analyzed. The corresponding correlation matrices (CM) are constructed for USD, EUR and PLN used as the base currencies. The triangle rule is interpreted as constraints reducing the number of independent returns. The CM spectrum is computed and compared with the cases of shuffled currencies and a fictitious random currency taken as a base currency. The Minimal Spanning Tree (MST) graphs are calculated and the clustering effects for strong currencies are found. It is shown that for MSTs the node rank has power like, scale free behavior. Finally, the scaling exponents are evaluated and found in the range analogous to those identified recently for various complex networks.
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of [\\em decompositional model-based learning (DML)], a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
The best-fit universe. [cosmological models
NASA Technical Reports Server (NTRS)
Turner, Michael S.
1991-01-01
Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model: one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Omega, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and most well motivated model, the current data point to a best-fit model with the following parameters: Omega(sub B) approximately equal to 0.03, Omega(sub CDM) approximately equal to 0.17, Omega(sub Lambda) approximately equal to 0.8, and H(sub 0) approximately equal to 70 km/(sec x Mpc) which improves significantly the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule out such.
Compartmental and Spatial Rule-Based Modeling with Virtual Cell.
Blinov, Michael L; Schaff, James C; Vasilescu, Dan; Moraru, Ion I; Bloom, Judy E; Loew, Leslie M
2017-10-03
In rule-based modeling, molecular interactions are systematically specified in the form of reaction rules that serve as generators of reactions. This provides a way to account for all the potential molecular complexes and interactions among multivalent or multistate molecules. Recently, we introduced rule-based modeling into the Virtual Cell (VCell) modeling framework, permitting graphical specification of rules and merger of networks generated automatically (using the BioNetGen modeling engine) with hand-specified reaction networks. VCell provides a number of ordinary differential equation and stochastic numerical solvers for single-compartment simulations of the kinetic systems derived from these networks, and agent-based network-free simulation of the rules. In this work, compartmental and spatial modeling of rule-based models has been implemented within VCell. To enable rule-based deterministic and stochastic spatial simulations and network-free agent-based compartmental simulations, the BioNetGen and NFSim engines were each modified to support compartments. In the new rule-based formalism, every reactant and product pattern and every reaction rule are assigned locations. We also introduce the rule-based concept of molecular anchors. This assures that any species that has a molecule anchored to a predefined compartment will remain in this compartment. Importantly, in addition to formulation of compartmental models, this now permits VCell users to seamlessly connect reaction networks derived from rules to explicit geometries to automatically generate a system of reaction-diffusion equations. These may then be simulated using either the VCell partial differential equations deterministic solvers or the Smoldyn stochastic simulator. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Probing Inflation Using Galaxy Clustering On Ultra-Large Scales
NASA Astrophysics Data System (ADS)
Dalal, Roohi; de Putter, Roland; Dore, Olivier
2018-01-01
A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-gaussianity parameter, flocNL, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict flocNL ≈ ‑5/12 (ns -1), in multi-field theories, flocNL is not constrained to this value and is allowed to be observably large. Achieving σ(flocNL) = 1 would give us discovery potential for detecting multi-field inflation, while finding flocNL=0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on flocNL. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of flocNL. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman Kaiser Peacock (FKP) Power Spectrum Estimator, and an Optimal Quadratic Estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on flocNL. We find that the FKP estimator can measure flocNL in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure flocNL.
A high-level language for rule-based modelling.
Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D
2015-01-01
Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages.
A High-Level Language for Rule-Based Modelling
Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D.
2015-01-01
Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages. PMID:26043208
Managing Large Scale Project Analysis Teams through a Web Accessible Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.
2008-01-01
Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Disciplined specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycles studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian
2015-09-01
Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.
Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase
Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele
2015-01-01
Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to guide rationally the development of new medications. Here using a metadynamics-based computational protocol we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins. PMID:25672826
Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase
NASA Astrophysics Data System (ADS)
Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele
2015-02-01
Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to guide rationally the development of new medications. Here using a metadynamics-based computational protocol we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins.
Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D
2011-11-01
There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.
Guillemot, Joannès; Delpierre, Nicolas; Vallet, Patrick; François, Christophe; Martin-StPaul, Nicolas K; Soudani, Kamel; Nicolas, Manuel; Badeau, Vincent; Dufrêne, Eric
2014-09-01
The structure of a forest stand, i.e. the distribution of tree size features, has strong effects on its functioning. The management of the structure is therefore an important tool in mitigating the impact of predicted changes in climate on forests, especially with respect to drought. Here, a new functional-structural model is presented and is used to assess the effects of management on forest functioning at a national scale. The stand process-based model (PBM) CASTANEA was coupled to a stand structure module (SSM) based on empirical tree-to-tree competition rules. The calibration of the SSM was based on a thorough analysis of intersite and interannual variability of competition asymmetry. The coupled CASTANEA-SSM model was evaluated across France using forest inventory data, and used to compare the effect of contrasted silvicultural practices on simulated stand carbon fluxes and growth. The asymmetry of competition varied consistently with stand productivity at both spatial and temporal scales. The modelling of the competition rules enabled efficient prediction of changes in stand structure within the CASTANEA PBM. The coupled model predicted an increase in net primary productivity (NPP) with management intensity, resulting in higher growth. This positive effect of management was found to vary at a national scale across France: the highest increases in NPP were attained in forests facing moderate to high water stress; however, the absolute effect of management on simulated stand growth remained moderate to low because stand thinning involved changes in carbon allocation at the tree scale. This modelling approach helps to identify the areas where management efforts should be concentrated in order to mitigate near-future drought impact on national forest productivity. Around a quarter of the French temperate oak and beech forests are currently in zones of high vulnerability, where management could thus mitigate the influence of climate change on forest yield.
Endpoint Model of Exclusive Processes
NASA Astrophysics Data System (ADS)
Dagaonkar, Sumeet; Jain, Pankaj; Ralston, John P.
2018-07-01
The endpoint model explains the scaling laws observed in exclusive hadronic reactions at large momentum transfer in all experimentally important regimes. The model, originally conceived by Feynman and others, assumes a single valence quark carries most of the hadron momentum. The quark wave function is directly related to the momentum transfer dependence of the reaction. After extracting the momentum dependence of the quark wave function from one process, it explains all the others. Endpoint quark-counting rules relate the number of quarks in a hadron to the power-law. A universal linear endpoint behavior explains the proton electromagnetic form factors F1 and F2, proton-proton scattering at fixed-angle, the t-dependence of proton-proton scattering at large s>> t, and Compton scattering at fixed t. The model appears to be the only comprehensive mechanism consistent with all experimental information.
Hyperscaling breakdown and Ising spin glasses: The Binder cumulant
NASA Astrophysics Data System (ADS)
Lundow, P. H.; Campbell, I. A.
2018-02-01
Among the Renormalization Group Theory scaling rules relating critical exponents, there are hyperscaling rules involving the dimension of the system. It is well known that in Ising models hyperscaling breaks down above the upper critical dimension. It was shown by Schwartz (1991) that the standard Josephson hyperscaling rule can also break down in Ising systems with quenched random interactions. A related Renormalization Group Theory hyperscaling rule links the critical exponents for the normalized Binder cumulant and the correlation length in the thermodynamic limit. An appropriate scaling approach for analyzing measurements from criticality to infinite temperature is first outlined. Numerical data on the scaling of the normalized correlation length and the normalized Binder cumulant are shown for the canonical Ising ferromagnet model in dimension three where hyperscaling holds, for the Ising ferromagnet in dimension five (so above the upper critical dimension) where hyperscaling breaks down, and then for Ising spin glass models in dimension three where the quenched interactions are random. For the Ising spin glasses there is a breakdown of the normalized Binder cumulant hyperscaling relation in the thermodynamic limit regime, with a return to size independent Binder cumulant values in the finite-size scaling regime around the critical region.
NASA Astrophysics Data System (ADS)
Aithal, B. H.
2015-12-01
Abstract: Urbanisation has gained momentum with globalization in India. Policy decisions to set up commercial, industrial hubs have fuelled large scale migration, added with population upsurge has contributed to the fast growing urban region that needs to be monitored in order to design sustainable urban cities. Unplanned urbanization have resulted in the growth of peri-urban region referred to as urban sprawl, are often devoid of basic amenities and infrastructure leading to large scale environmental problems that are evident. Remote sensing data acquired through space borne sensors at regular interval helps in understanding urban dynamics aided by Geoinformatics which has proved very effective in mapping and monitoring for sustainable urban planning. Cellular automata (CA) is a robust approach for the spatially explicit simulation of land-use land cover dynamics. CA uses rules, states, conditions that are vital factors in modelling urbanisation. This communication effectively introduces simulation assistances of CA with the agent based modelling supported by its fuzzy characteristics and weightages through analytical hierarchal process (AHP). This has been done considering perceived agents such as industries, natural resource etc. Respective agent's role in development of a particular regions into an urban area has been examined with weights and its influence of each of these agents based on its characteristics functions. Validation was performed obtaining a high kappa coefficient indicating the quality and the allocation performance of the model & validity of the model to predict future projections. The prediction using the proposed model was performed for 2030. Further environmental sustainability of each of these cities are explored such as water features, environment, greenhouse gas emissions, effects on human human health etc., Modeling suggests trend of various land use classes transformation with the spurt in urban expansions based on specific regions and policies providing a visual spatial information to both urban planners and city managers. Further environmental sustainability assessment indicates dwindling natural resources and increase in thermal discomfort to the living population thereby indicating need for balanced and planned development.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates grid modeling scheme with different spatial representations also presents such problems. The time-consuming problem affects applications of very high resolution large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with 1 CPU and a 15 thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management in addition to offering data fusion and model coupling ability.
Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil
2016-03-15
Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given, using specific implementations the proposed techniques can be applied to rule-based models in general. The annotation ontology for rule-based models can be found at http://purl.org/rbm/rbmo The krdf tool and associated executable examples are available at http://purl.org/rbm/rbmo/krdf anil.wipat@newcastle.ac.uk or vdanos@inf.ed.ac.uk. © The Author 2015. Published by Oxford University Press.
Expert systems and simulation models; Proceedings of the Seminar, Tucson, AZ, November 18, 19, 1985
NASA Technical Reports Server (NTRS)
1986-01-01
The seminar presents papers on modeling and simulation methodology, artificial intelligence and expert systems, environments for simulation/expert system development, and methodology for simulation/expert system development. Particular attention is given to simulation modeling concepts and their representation, modular hierarchical model specification, knowledge representation, and rule-based diagnostic expert system development. Other topics include the combination of symbolic and discrete event simulation, real time inferencing, and the management of large knowledge-based simulation projects.
Melissa A. Thomas-Van Gundy
2014-01-01
LANDFIRE maps of fire regime groups are frequently used by land managers to help plan and execute prescribed burns for ecosystem restoration. Since LANDFIRE maps are generally applicable at coarse scales, questions often arise regarding their utility and accuracy. Here, the two recently published products from West Virginia, a rule-based and a witness tree-based model...
Gago, Jorge; Martínez-Núñez, Lourdes; Landín, Mariana; Flexas, Jaume; Gallego, Pedro P.
2014-01-01
Background Plant acclimation is a highly complex process, which cannot be fully understood by analysis at any one specific level (i.e. subcellular, cellular or whole plant scale). Various soft-computing techniques, such as neural networks or fuzzy logic, were designed to analyze complex multivariate data sets and might be used to model large such multiscale data sets in plant biology. Methodology and Principal Findings In this study we assessed the effectiveness of applying neuro-fuzzy logic to modeling the effects of light intensities and sucrose content/concentration in the in vitro culture of kiwifruit on plant acclimation, by modeling multivariate data from 14 parameters at different biological scales of organization. The model provides insights through application of 14 sets of straightforward rules and indicates that plants with lower stomatal aperture areas and higher photoinhibition and photoprotective status score best for acclimation. The model suggests the best condition for obtaining higher quality acclimatized plantlets is the combination of 2.3% sucrose and photonflux of 122–130 µmol m−2 s−1. Conclusions Our results demonstrate that artificial intelligence models are not only successful in identifying complex non-linear interactions among variables, by integrating large-scale data sets from different levels of biological organization in a holistic plant systems-biology approach, but can also be used successfully for inferring new results without further experimental work. PMID:24465829
Gago, Jorge; Martínez-Núñez, Lourdes; Landín, Mariana; Flexas, Jaume; Gallego, Pedro P
2014-01-01
Plant acclimation is a highly complex process, which cannot be fully understood by analysis at any one specific level (i.e. subcellular, cellular or whole plant scale). Various soft-computing techniques, such as neural networks or fuzzy logic, were designed to analyze complex multivariate data sets and might be used to model large such multiscale data sets in plant biology. In this study we assessed the effectiveness of applying neuro-fuzzy logic to modeling the effects of light intensities and sucrose content/concentration in the in vitro culture of kiwifruit on plant acclimation, by modeling multivariate data from 14 parameters at different biological scales of organization. The model provides insights through application of 14 sets of straightforward rules and indicates that plants with lower stomatal aperture areas and higher photoinhibition and photoprotective status score best for acclimation. The model suggests the best condition for obtaining higher quality acclimatized plantlets is the combination of 2.3% sucrose and photonflux of 122-130 µmol m(-2) s(-1). Our results demonstrate that artificial intelligence models are not only successful in identifying complex non-linear interactions among variables, by integrating large-scale data sets from different levels of biological organization in a holistic plant systems-biology approach, but can also be used successfully for inferring new results without further experimental work.
Wang, Lu-Yong; Fasulo, D
2006-01-01
Genome-wide association study for complex diseases will generate massive amount of single nucleotide polymorphisms (SNPs) data. Univariate statistical test (i.e. Fisher exact test) was used to single out non-associated SNPs. However, the disease-susceptible SNPs may have little marginal effects in population and are unlikely to retain after the univariate tests. Also, model-based methods are impractical for large-scale dataset. Moreover, genetic heterogeneity makes the traditional methods harder to identify the genetic causes of diseases. A more recent random forest method provides a more robust method for screening the SNPs in thousands scale. However, for more large-scale data, i.e., Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required to screening SNPs in whole-genome large scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further more complex computational modeling task.
Mining algorithm for association rules in big data based on Hadoop
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Wang, Xiaojing; Zhang, Lijun; Qiao, Liying
2018-04-01
In order to solve the problem that the traditional association rules mining algorithm has been unable to meet the mining needs of large amount of data in the aspect of efficiency and scalability, take FP-Growth as an example, the algorithm is realized in the parallelization based on Hadoop framework and Map Reduce model. On the basis, it is improved using the transaction reduce method for further enhancement of the algorithm's mining efficiency. The experiment, which consists of verification of parallel mining results, comparison on efficiency between serials and parallel, variable relationship between mining time and node number and between mining time and data amount, is carried out in the mining results and efficiency by Hadoop clustering. Experiments show that the paralleled FP-Growth algorithm implemented is able to accurately mine frequent item sets, with a better performance and scalability. It can be better to meet the requirements of big data mining and efficiently mine frequent item sets and association rules from large dataset.
76 FR 74585 - Railroad Workplace Safety; Adjacent-Track On-Track Safety for Roadway Workers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-30
... roadway workers on the ground is engaged in a common task with on-track, self-propelled equipment or..., self-propelled equipment on an occupied track. These amendments to the Roadway Worker Protection Rule... effective, the RWP Rule requires that roadway work groups engaged in ``large-scale maintenance or...
NASA Astrophysics Data System (ADS)
Lyu, Dandan; Li, Shaofan
2017-10-01
Crystal defects have microstructure, and this microstructure should be related to the microstructure of the original crystal. Hence each type of crystals may have similar defects due to the same failure mechanism originated from the same microstructure, if they are under the same loading conditions. In this work, we propose a multiscale crystal defect dynamics (MCDD) model that models defects by considering its intrinsic microstructure derived from the microstructure or material genome of the original perfect crystal. The main novelties of present work are: (1) the discrete exterior calculus and algebraic topology theory are used to construct a scale-up (coarse-grained) dual lattice model for crystal defects, which may represent all possible defect modes inside a crystal; (2) a higher order Cauchy-Born rule (up to the fourth order) is adopted to construct atomistic-informed constitutive relations for various defect process zones, and (3) an hierarchical strain gradient theory based finite element formulation is developed to support an hierarchical multiscale cohesive (process) zone model for various defects in a unified formulation. The efficiency of MCDD computational algorithm allows us to simulate dynamic defect evolution at large scale while taking into account atomistic interaction. The MCDD model has been validated by comparing of the results of MCDD simulations with that of molecular dynamics (MD) in the cases of nanoindentation and uniaxial tension. Numerical simulations have shown that MCDD model can predict dislocation nucleation induced instability and inelastic deformation, and thus it may provide an alternative solution to study crystal plasticity.
A Multiscale Vision Model and Applications to Astronomical Image and Data Analyses
NASA Astrophysics Data System (ADS)
Bijaoui, A.; Slezak, E.; Vandame, B.
Many researches were carried out on the automated identification of the astrophy sical sources, and their relevant measurements. Some vision models have been developed for this task, their use depending on the image content. We have developed a multiscale vision model (MVM) \\cite{BR95} well suited for analyzing complex structures such like interstellar clouds, galaxies, or cluster of galaxies. Our model is based on a redundant wavelet transform. For each scale we detect significant wavelet coefficients by application of a decision rule based on their probability density functions (PDF) under the hypothesis of a uniform distribution. In the case of a Poisson noise, this PDF can be determined from the autoconvolution of the wavelet function histogram \\cite{SLB93}. We may also apply Anscombe's transform, scale by scale in order to take into account the integrated number of events at each scale \\cite{FSB98}. Our aim is to compute an image of all detected structural features. MVM allows us to build oriented trees from the neighbouring of significant wavelet coefficients. Each tree is also divided into subtrees taking into account the maxima along the scale axis. This leads to identify objects in the scale space, and then to restore their images by classical inverse methods. This model works only if the sampling is correct at each scale. It is not generally the case for the orthogonal wavelets, so that we apply the so-called `a trous algorithm \\cite{BSM94} or a specific pyramidal one \\cite{RBV98}. It leads to ext ract superimposed objets of different size, and it gives for each of them a separate image, from which we can obtain position, flux and p attern parameters. We have applied these methods to different kinds of images, photographic plates, CCD frames or X-ray images. We have only to change the statistical rule for extr acting significant coefficients to adapt the model from an image class to another one. We have also applied this model to extract clusters hierarchically distributed or to identify regions devoid of objects from galaxy counts.
A knowledge-based approach to improving optimization techniques in system planning
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.
Tree Branching: Leonardo da Vinci's Rule versus Biomechanical Models
Minamino, Ryoko; Tateno, Masaki
2014-01-01
This study examined Leonardo da Vinci's rule (i.e., the sum of the cross-sectional area of all tree branches above a branching point at any height is equal to the cross-sectional area of the trunk or the branch immediately below the branching point) using simulations based on two biomechanical models: the uniform stress and elastic similarity models. Model calculations of the daughter/mother ratio (i.e., the ratio of the total cross-sectional area of the daughter branches to the cross-sectional area of the mother branch at the branching point) showed that both biomechanical models agreed with da Vinci's rule when the branching angles of daughter branches and the weights of lateral daughter branches were small; however, the models deviated from da Vinci's rule as the weights and/or the branching angles of lateral daughter branches increased. The calculated values of the two models were largely similar but differed in some ways. Field measurements of Fagus crenata and Abies homolepis also fit this trend, wherein models deviated from da Vinci's rule with increasing relative weights of lateral daughter branches. However, this deviation was small for a branching pattern in nature, where empirical measurements were taken under realistic measurement conditions; thus, da Vinci's rule did not critically contradict the biomechanical models in the case of real branching patterns, though the model calculations described the contradiction between da Vinci's rule and the biomechanical models. The field data for Fagus crenata fit the uniform stress model best, indicating that stress uniformity is the key constraint of branch morphology in Fagus crenata rather than elastic similarity or da Vinci's rule. On the other hand, mechanical constraints are not necessarily significant in the morphology of Abies homolepis branches, depending on the number of daughter branches. Rather, these branches were often in agreement with da Vinci's rule. PMID:24714065
Tree branching: Leonardo da Vinci's rule versus biomechanical models.
Minamino, Ryoko; Tateno, Masaki
2014-01-01
This study examined Leonardo da Vinci's rule (i.e., the sum of the cross-sectional area of all tree branches above a branching point at any height is equal to the cross-sectional area of the trunk or the branch immediately below the branching point) using simulations based on two biomechanical models: the uniform stress and elastic similarity models. Model calculations of the daughter/mother ratio (i.e., the ratio of the total cross-sectional area of the daughter branches to the cross-sectional area of the mother branch at the branching point) showed that both biomechanical models agreed with da Vinci's rule when the branching angles of daughter branches and the weights of lateral daughter branches were small; however, the models deviated from da Vinci's rule as the weights and/or the branching angles of lateral daughter branches increased. The calculated values of the two models were largely similar but differed in some ways. Field measurements of Fagus crenata and Abies homolepis also fit this trend, wherein models deviated from da Vinci's rule with increasing relative weights of lateral daughter branches. However, this deviation was small for a branching pattern in nature, where empirical measurements were taken under realistic measurement conditions; thus, da Vinci's rule did not critically contradict the biomechanical models in the case of real branching patterns, though the model calculations described the contradiction between da Vinci's rule and the biomechanical models. The field data for Fagus crenata fit the uniform stress model best, indicating that stress uniformity is the key constraint of branch morphology in Fagus crenata rather than elastic similarity or da Vinci's rule. On the other hand, mechanical constraints are not necessarily significant in the morphology of Abies homolepis branches, depending on the number of daughter branches. Rather, these branches were often in agreement with da Vinci's rule.
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales in necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA) which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation, and the Benard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible to generate fine scale soil moisture fields across large extents, based on coarse scale observations. Application of this approach is likely in generating fine and intermediate resolution soil moisture fields conditioned on the radiometerbased, coarse resolution products from remote sensing satellites.
The algebraic theory of latent projectors in lambda matrices
NASA Technical Reports Server (NTRS)
Denman, E. D.; Leyva-Ramos, J.; Jeon, G. J.
1981-01-01
Multivariable systems such as a finite-element model of vibrating structures, control systems, and large-scale systems are often formulated in terms of differential equations which give rise to lambda matrices. The present investigation is concerned with the formulation of the algebraic theory of lambda matrices and the relationship of latent roots, latent vectors, and latent projectors to the eigenvalues, eigenvectors, and eigenprojectors of the companion form. The chain rule for latent projectors and eigenprojectors for the repeated latent root or eigenvalues is given.
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can be provide the high capacity data manipulation required by the most complex real time models.
Gross, Douglas P; Armijo-Olivo, Susan; Shaw, William S; Williams-Whitt, Kelly; Shaw, Nicola T; Hartvigsen, Jan; Qin, Ziling; Ha, Christine; Woodhouse, Linda J; Steenstra, Ivan A
2016-09-01
Purpose We aimed to identify and inventory clinical decision support (CDS) tools for helping front-line staff select interventions for patients with musculoskeletal (MSK) disorders. Methods We used Arksey and O'Malley's scoping review framework which progresses through five stages: (1) identifying the research question; (2) identifying relevant studies; (3) selecting studies for analysis; (4) charting the data; and (5) collating, summarizing and reporting results. We considered computer-based, and other available tools, such as algorithms, care pathways, rules and models. Since this research crosses multiple disciplines, we searched health care, computing science and business databases. Results Our search resulted in 4605 manuscripts. Titles and abstracts were screened for relevance. The reliability of the screening process was high with an average percentage of agreement of 92.3 %. Of the located articles, 123 were considered relevant. Within this literature, there were 43 CDS tools located. These were classified into 3 main areas: computer-based tools/questionnaires (n = 8, 19 %), treatment algorithms/models (n = 14, 33 %), and clinical prediction rules/classification systems (n = 21, 49 %). Each of these areas and the associated evidence are described. The state of evidentiary support for CDS tools is still preliminary and lacks external validation, head-to-head comparisons, or evidence of generalizability across different populations and settings. Conclusions CDS tools, especially those employing rapidly advancing computer technologies, are under development and of potential interest to health care providers, case management organizations and funders of care. Based on the results of this scoping review, we conclude that these tools, models and systems should be subjected to further validation before they can be recommended for large-scale implementation for managing patients with MSK disorders.
A logical model of cooperating rule-based systems
NASA Technical Reports Server (NTRS)
Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.
1989-01-01
A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.
Efficient Web Services Policy Combination
NASA Technical Reports Server (NTRS)
Vatan, Farrokh; Harman, Joseph G.
2010-01-01
Large-scale Web security systems usually involve cooperation between domains with non-identical policies. The network management and Web communication software used by the different organizations presents a stumbling block. Many of the tools used by the various divisions do not have the ability to communicate network management data with each other. At best, this means that manual human intervention into the communication protocols used at various network routers and endpoints is required. Developing practical, sound, and automated ways to compose policies to bridge these differences is a long-standing problem. One of the key subtleties is the need to deal with inconsistencies and defaults where one organization proposes a rule on a particular feature, and another has a different rule or expresses no rule. A general approach is to assign priorities to rules and observe the rules with the highest priorities when there are conflicts. The present methods have inherent inefficiency, which heavily restrict their practical applications. A new, efficient algorithm combines policies utilized for Web services. The method is based on an algorithm that allows an automatic and scalable composition of security policies between multiple organizations. It is based on defeasible policy composition, a promising approach for finding conflicts and resolving priorities between rules. In the general case, policy negotiation is an intractable problem. A promising method, suggested in the literature, is when policies are represented in defeasible logic, and composition is based on rules for non-monotonic inference. In this system, policy writers construct metapolicies describing both the policy that they wish to enforce and annotations describing their composition preferences. These annotations can indicate whether certain policy assertions are required by the policy writer or, if not, under what circumstances the policy writer is willing to compromise and allow other assertions to take precedence. Meta-policies are specified in defeasible logic, a computationally efficient non-monotonic logic developed to model human reasoning. One drawback of this method is that at one point the algorithm starts an exhaustive search of all subsets of the set of conclusions of a defeasible theory. Although the propositional defeasible logic has linear complexity, the set of conclusions here may be large, especially in real-life practical cases. This phenomenon leads to an inefficient exponential explosion of complexity. The current process of getting a Web security policy from combination of two meta-policies consists of two steps. The first is generating a new meta-policy that is a composition of the input meta-policies, and the second is mapping the meta-policy onto a security policy. The new algorithm avoids the exhaustive search in the current algorithm, and provides a security policy that matches all requirements of the involved metapolicies.
Consideration of VT5 etch-based OPC modeling
NASA Astrophysics Data System (ADS)
Lim, ChinTeong; Temchenko, Vlad; Kaiser, Dieter; Meusel, Ingo; Schmidt, Sebastian; Schneider, Jens; Niehoff, Martin
2008-03-01
Including etch-based empirical data during OPC model calibration is a desired yet controversial decision for OPC modeling, especially for process with a large litho to etch biasing. While many OPC software tools are capable of providing this functionality nowadays; yet few were implemented in manufacturing due to various risks considerations such as compromises in resist and optical effects prediction, etch model accuracy or even runtime concern. Conventional method of applying rule-based alongside resist model is popular but requires a lot of lengthy code generation to provide a leaner OPC input. This work discusses risk factors and their considerations, together with introduction of techniques used within Mentor Calibre VT5 etch-based modeling at sub 90nm technology node. Various strategies are discussed with the aim of better handling of large etch bias offset without adding complexity into final OPC package. Finally, results were presented to assess the advantages and limitations of the final method chosen.
Intelligent fault management for the Space Station active thermal control system
NASA Technical Reports Server (NTRS)
Hill, Tim; Faltisco, Robert M.
1992-01-01
The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionally and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.
A simple rule for the evolution of contingent cooperation in large groups
Schonmann, Roberto H.; Boyd, Robert
2016-01-01
Humans cooperate in large groups of unrelated individuals, and many authors have argued that such cooperation is sustained by contingent reward and punishment. However, such sanctioning systems can also stabilize a wide range of behaviours, including mutually deleterious behaviours. Moreover, it is very likely that large-scale cooperation is derived in the human lineage. Thus, understanding the evolution of mutually beneficial cooperative behaviour requires knowledge of when strategies that support such behaviour can increase when rare. Here, we derive a simple formula that gives the relatedness necessary for contingent cooperation in n-person iterated games to increase when rare. This rule applies to a wide range of pay-off functions and assumes that the strategies supporting cooperation are based on the presence of a threshold fraction of cooperators. This rule suggests that modest levels of relatedness are sufficient for invasion by strategies that make cooperation contingent on previous cooperation by a small fraction of group members. In contrast, only high levels of relatedness allow the invasion by strategies that require near universal cooperation. In order to derive this formula, we introduce a novel methodology for studying evolution in group structured populations including local and global group-size regulation and fluctuations in group size. PMID:26729938
Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima; Mader, Helmut; Kraml, Julia
2012-01-01
Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. For understanding effects on fish populations of riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled to Regional Climate Models to watersheds to river hydrology to population response. An additional challenge is quantifying sources of uncertainty given the highly nonlinear nature of interactions between climate variables and community level processes. We present a modeling approach for understanding and accomodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach will account for sources of uncertainty in forecasts of community or population response. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses were dependent on the climate scenario considered.
US National Large-scale City Orthoimage Standard Initiative
Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.
2003-01-01
The early procedures and algorithms for National digital orthophoto generation in National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5 D digital elevation models. However, large-scale city orthophotos using early procedures have disclosed many shortcomings, e.g., ghost image, occlusion, shadow. Thus, to provide the technical base (algorithms, procedure) and experience needed for city large-scale digital orthophoto creation is essential for the near future national large-scale digital orthophoto deployment and the revision of the Standards for National Large-scale City Digital Orthophoto in National Digital Orthophoto Program (NDOP). This paper will report our initial research results as follows: (1) High-precision 3D city DSM generation through LIDAR data processing, (2) Spatial objects/features extraction through surface material information and high-accuracy 3D DSM data, (3) 3D city model development, (4) Algorithm development for generation of DTM-based orthophoto, and DBM-based orthophoto, (5) True orthophoto generation by merging DBM-based orthophoto and DTM-based orthophoto, and (6) Automatic mosaic by optimizing and combining imagery from many perspectives.
Roorda, Leo D; Green, John R; Houwink, Annemieke; Bagley, Pam J; Smith, Jane; Molenaar, Ivo W; Geurts, Alexander C
2012-06-01
To enable improved interpretation of the total score and faster scoring of the Rivermead Mobility Index (RMI) by studying item ordering or hierarchy and formulating start-and-stop rules in patients after stroke. Cohort study. Rehabilitation center in the Netherlands; stroke rehabilitation units and the community in the United Kingdom. Item hierarchy of the RMI was studied in an initial group of patients (n=620; mean age ± SD, 69.2±12.5y; 297 [48%] men; 304 [49%] left hemisphere lesion, and 269 [43%] right hemisphere lesion), and the adequacy of the item hierarchy-based start-and-stop rules was checked in a second group of patients (n=237; mean age ± SD, 60.0±11.3y; 139 [59%] men; 103 [44%] left hemisphere lesion, and 93 [39%] right hemisphere lesion) undergoing rehabilitation after stroke. Not applicable. Mokken scale analysis was used to investigate the fit of the double monotonicity model, indicating hierarchical item ordering. The percentages of patients with a difference between the RMI total score and the scores based on the start-and-stop rules were calculated to check the adequacy of these rules. The RMI had good fit of the double monotonicity model (coefficient H(T)=.87). The interpretation of the total score improved. Item hierarchy-based start-and-stop rules were formulated. The percentages of patients with a difference between the RMI total score and the score based on the recommended start-and-stop rules were 3% and 5%, respectively. Ten of the original 15 items had to be scored after applying the start-and-stop rules. Item hierarchy was established, enabling improved interpretation and faster scoring of the RMI. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Complexities, Catastrophes and Cities: Emergency Dynamics in Varying Scenarios and Urban Topologies
NASA Astrophysics Data System (ADS)
Narzisi, Giuseppe; Mysore, Venkatesh; Byeon, Jeewoong; Mishra, Bud
Complex Systems are often characterized by agents capable of interacting with each other dynamically, often in non-linear and non-intuitive ways. Trying to characterize their dynamics often results in partial differential equations that are difficult, if not impossible, to solve. A large city or a city-state is an example of such an evolving and self-organizing complex environment that efficiently adapts to different and numerous incremental changes to its social, cultural and technological infrastructure [1]. One powerful technique for analyzing such complex systems is Agent-Based Modeling (ABM) [9], which has seen an increasing number of applications in social science, economics and also biology. The agent-based paradigm facilitates easier transfer of domain specific knowledge into a model. ABM provides a natural way to describe systems in which the overall dynamics can be described as the result of the behavior of populations of autonomous components: agents, with a fixed set of rules based on local information and possible central control. As part of the NYU Center for Catastrophe Preparedness and Response (CCPR1), we have been exploring how ABM can serve as a powerful simulation technique for analyzing large-scale urban disasters. The central problem in Disaster Management is that it is not immediately apparent whether the current emergency plans are robust against such sudden, rare and punctuated catastrophic events.
Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey
2014-04-15
In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.
Tompkins, Adrian Mark; Caporaso, Luca; Biondi, Riccardo; Bell, Jean Pierre
2015-01-01
A new deforestation and land-use change scenario generator model (FOREST-SAGE) is presented that is designed to interface directly with dynamic vegetation models used in latest generation earth system models. The model requires a regional-scale scenario for aggregate land-use change that may be time-dependent, provided by observational studies or by regional land-use change/economic models for future projections. These land-use categories of the observations/economic model are first translated into equivalent plant function types used by the particular vegetation model, and then FOREST-SAGE disaggregates the regional-scale scenario to the local grid-scale of the earth system model using a set of risk-rules based on factors such as proximity to transport networks, distance weighted population density, forest fragmentation and presence of protected areas and logging concessions. These rules presently focus on the conversion of forest to agriculture and pasture use, but could be generalized to other land use change conversions. After introducing the model, an evaluation of its performance is shown for the land-cover changes that have occurred in the Central African Basin from 2001–2010 using retrievals from MODerate Resolution Imaging Spectroradiometer Vegetation Continuous Field data. The model is able to broadly reproduce the spatial patterns of forest cover change observed by MODIS, and the use of the local-scale risk factors enables FOREST-SAGE to improve land use change patterns considerably relative to benchmark scenarios used in the latest Coupled Model Intercomparison Project integrations. The uncertainty to the various risk factors is investigated using an ensemble of investigations, and it is shown that the model is sensitive to the population density, forest fragmentation and reforestation factors specified. PMID:26394392
A Ground-Based Research Vehicle for Base Drag Studies at Subsonic Speeds
NASA Technical Reports Server (NTRS)
Diebler, Corey; Smith, Mark
2002-01-01
A ground research vehicle (GRV) has been developed to study the base drag on large-scale vehicles at subsonic speeds. Existing models suggest that base drag is dependent upon vehicle forebody drag, and for certain configurations, the total drag of a vehicle can be reduced by increasing its forebody drag. Although these models work well for small projectile shapes, studies have shown that they do not provide accurate predictions when applied to large-scale vehicles. Experiments are underway at the NASA Dryden Flight Research Center to collect data at Reynolds numbers to a maximum of 3 x 10(exp 7), and to formulate a new model for predicting the base drag of trucks, buses, motor homes, reentry vehicles, and other large-scale vehicles. Preliminary tests have shown errors as great as 70 percent compared to Hoerner's two-dimensional base drag prediction. This report describes the GRV and its capabilities, details the studies currently underway at NASA Dryden, and presents preliminary results of both the effort to formulate a new base drag model and the investigation into a method of reducing total drag by manipulating forebody drag.
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method
NASA Astrophysics Data System (ADS)
Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru
2015-05-01
Recently isospin symmetry breaking for mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). Behind these investigations, we have encountered a subtle problem in numerical calculations for odd-odd N = Z nuclei with large-scale shell-model calculations. Here we focus on how to solve this subtle problem by the Sakurai-Sugiura (SS) method, which has been recently proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.
Inflation physics from the cosmic microwave background and large scale structure
NASA Astrophysics Data System (ADS)
Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Buder, I.; Burke, D. L.; Calabrese, E.; Carlstrom, J. E.; Carvalho, C. S.; Chang, C. L.; Chiang, H. C.; Church, S.; Cooray, A.; Crawford, T. M.; Crill, B. P.; Dawson, K. S.; Das, S.; Devlin, M. J.; Dobbs, M.; Dodelson, S.; Doré, O.; Dunkley, J.; Feng, J. L.; Fraisse, A.; Gallicchio, J.; Giddings, S. B.; Green, D.; Halverson, N. W.; Hanany, S.; Hanson, D.; Hildebrandt, S. R.; Hincks, A.; Hlozek, R.; Holder, G.; Holzapfel, W. L.; Honscheid, K.; Horowitz, G.; Hu, W.; Hubmayr, J.; Irwin, K.; Jackson, M.; Jones, W. C.; Kallosh, R.; Kamionkowski, M.; Keating, B.; Keisler, R.; Kinney, W.; Knox, L.; Komatsu, E.; Kovac, J.; Kuo, C.-L.; Kusaka, A.; Lawrence, C.; Lee, A. T.; Leitch, E.; Linde, A.; Linder, E.; Lubin, P.; Maldacena, J.; Martinec, E.; McMahon, J.; Miller, A.; Mukhanov, V.; Newburgh, L.; Niemack, M. D.; Nguyen, H.; Nguyen, H. T.; Page, L.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sehgal, N.; Seljak, U.; Senatore, L.; Sievers, J.; Silverstein, E.; Slosar, A.; Smith, K. M.; Spergel, D.; Staggs, S. T.; Stark, A.; Stompor, R.; Vieregg, A. G.; Wang, G.; Watson, S.; Wollack, E. J.; Wu, W. L. K.; Yoon, K. W.; Zahn, O.; Zaldarriaga, M.
2015-03-01
Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments-the theory of cosmic inflation-and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5 σ measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
Inflation Physics from the Cosmic Microwave Background and Large Scale Structure
NASA Technical Reports Server (NTRS)
Abazajian, K.N.; Arnold,K.; Austermann, J.; Benson, B.A.; Bischoff, C.; Bock, J.; Bond, J.R.; Borrill, J.; Buder, I.; Burke, D.L.;
2013-01-01
Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments---the theory of cosmic inflation---and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1 of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5-sigma measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
Inflation physics from the cosmic microwave background and large scale structure
Abazajian, K. N.; Arnold, K.; Austermann, J.; ...
2014-06-26
Here, fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments—the theory of cosmic inflation—and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to amore » depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5σ measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B -mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.« less
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition- whether functional or physical or discipline-based-that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
Bio-Inspired Neural Model for Learning Dynamic Models
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Suri, Ronald
2009-01-01
A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.
Testing the gravitational instability hypothesis?
NASA Technical Reports Server (NTRS)
Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.
1994-01-01
We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Omega (or, more generally, an estimate of beta identically equal to Omega(exp 0.6)/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations. In nongravitational scenarios, the values of Omega or beta estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated beta approaches the true value in such cases, and in our numerical simulations the estimated beta values are reasonably accurate for both gravitational and nongravitational models. Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
Device Scale Modeling of Solvent Absorption using MFIX-TFM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carney, Janine E.; Finn, Justin R.
Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane) and fossil fuels account for a large majority of global CO 2 emissions. That said, fossil fuels will continue to play a significant role in the generation of power for the foreseeable future. The extent to which CO 2 is emitted needs to be reduced, however, carbon capture and sequestration are also necessary actions to tackle climate change. Different approaches exist for CO 2 capture including both post-combustion and pre-combustion technologies, oxy-fuel combustion and/or chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology.more » To apply CO 2 technologies at commercial scale, the availability and maturity and the potential for scalability of that technology need to be considered. Solvent absorption is a proven technology but not at the scale needed by typical power plant. The scale up and down and design of laboratory and commercial packed bed reactors depends heavily on the specific knowledge of two-phase pressure drop, liquid holdup, the wetting efficiency and mass transfer efficiency as a function of operating conditions. Simple scaling rules often fail to provide proper design. Conventional reactor design modeling approaches will generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity used within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally design would be aided by development of predictive models based on truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first principle foundations. CFD models can include a more accurate physical description of flow processes and be modified to include more complex behavior. Wetting performance and spatial liquid distribution inside the absorber are recognized as weak areas of knowledge requiring further investigation. CFD tools offer a possible method to investigating such topics and gaining a better understanding of their influence on reactor performance. This report focuses first on describing a hydrodynamic model for countercurrent gas-liquid flow through a packed column and then on the chemistry, heat and mass transfer specific to CO 2 absorption using monoethanolamine (MEA). The indicated model is implemented in MFIX, a CFD open source software package. The user defined functions needed to build this model are described in detail along with the keywords for the corresponding input file. A test case is outlined along with a few results. The example serves to briefly illustrate the developed CFD tool and its potential capability to investigate solvent absorption.« less
Interacting particle systems in time-dependent geometries
NASA Astrophysics Data System (ADS)
Ali, A.; Ball, R. C.; Grosskinsky, S.; Somfai, E.
2013-09-01
Many complex structures and stochastic patterns emerge from simple kinetic rules and local interactions, and are governed by scale invariance properties in combination with effects of the global geometry. We consider systems that can be described effectively by space-time trajectories of interacting particles, such as domain boundaries in two-dimensional growth or river networks. We study trajectories embedded in time-dependent geometries, and the main focus is on uniformly expanding or decreasing domains for which we obtain an exact mapping to simple fixed domain systems while preserving the local scale invariance properties. This approach was recently introduced in Ali et al (2013 Phys. Rev. E 87 020102(R)) and here we provide a detailed discussion on its applicability for self-affine Markovian models, and how it can be adapted to self-affine models with memory or explicit time dependence. The mapping corresponds to a nonlinear time transformation which converges to a finite value for a large class of trajectories, enabling an exact analysis of asymptotic properties in expanding domains. We further provide a detailed discussion of different particle interactions and generalized geometries. All our findings are based on exact computations and are illustrated numerically for various examples, including Lévy processes and fractional Brownian motion.
Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement
ERIC Educational Resources Information Center
Zheng, Xiaohui
2009-01-01
The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…
Research on global path planning based on ant colony optimization for AUV
NASA Astrophysics Data System (ADS)
Wang, Hong-Jian; Xiong, Wei
2009-03-01
Path planning is an important issue for autonomous underwater vehicles (AUVs) traversing an unknown environment such as a sea floor, a jungle, or the outer celestial planets. For this paper, global path planning using large-scale chart data was studied, and the principles of ant colony optimization (ACO) were applied. This paper introduced the idea of a visibility graph based on the grid workspace model. It also brought a series of pheromone updating rules for the ACO planning algorithm. The operational steps of the ACO algorithm are proposed as a model for a global path planning method for AUV. To mimic the process of smoothing a planned path, a cutting operator and an insertion-point operator were designed. Simulation results demonstrated that the ACO algorithm is suitable for global path planning. The system has many advantages, including that the operating path of the AUV can be quickly optimized, and it is shorter, safer, and smoother. The prototype system successfully demonstrated the feasibility of the concept, proving it can be applied to surveys of unstructured unmanned environments.
NASA Astrophysics Data System (ADS)
Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick
2017-04-01
Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborn sensors enabling the large scale monitoring of water resources. Besides of these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts are therefore increasing significantly. As a matter of fact, a novel thematic science question that is to be investigated is whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large scale Murray-Darling catchment in Australia ( 1 Million km2). When compared to in situ soil moisture time series, model predictions show good agreement resulting in correlation coefficients exceeding 70 % and Root Mean Squared Errors below 1 %. When benchmarked with the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge results in a Nash Sutcliff Efficiency exceeding 0.7 over both the calibration and the validation periods.
IS THE SMALL-SCALE MAGNETIC FIELD CORRELATED WITH THE DYNAMO CYCLE?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karak, Bidya Binay; Brandenburg, Axel, E-mail: bbkarak@nordita.org
2016-01-01
The small-scale magnetic field is ubiquitous at the solar surface—even at high latitudes. From observations we know that this field is uncorrelated (or perhaps even weakly anticorrelated) with the global sunspot cycle. Our aim is to explore the origin, and particularly the cycle dependence, of such a phenomenon using three-dimensional dynamo simulations. We adopt a simple model of a turbulent dynamo in a shearing box driven by helically forced turbulence. Depending on the dynamo parameters, large-scale (global) and small-scale (local) dynamos can be excited independently in this model. Based on simulations in different parameter regimes, we find that, when onlymore » the large-scale dynamo is operating in the system, the small-scale magnetic field generated through shredding and tangling of the large-scale magnetic field is positively correlated with the global magnetic cycle. However, when both dynamos are operating, the small-scale field is produced from both the small-scale dynamo and the tangling of the large-scale field. In this situation, when the large-scale field is weaker than the equipartition value of the turbulence, the small-scale field is almost uncorrelated with the large-scale magnetic cycle. On the other hand, when the large-scale field is stronger than the equipartition value, we observe an anticorrelation between the small-scale field and the large-scale magnetic cycle. This anticorrelation can be interpreted as a suppression of the small-scale dynamo. Based on our studies we conclude that the observed small-scale magnetic field in the Sun is generated by the combined mechanisms of a small-scale dynamo and tangling of the large-scale field.« less
2:1 for naturalness at the LHC?
NASA Astrophysics Data System (ADS)
Arkani-Hamed, Nima; Blum, Kfir; D'Agnolo, Raffaele Tito; Fan, JiJi
2013-01-01
A large enhancement of a factor of 1.5 - 2 in Higgs production and decay in the diphoton channel, with little deviation in the ZZ channel, can only plausibly arise from a loop of new charged particles with large couplings to the Higgs. We show that, allowing only new fermions with marginal interactions at the weak scale, the required Yukawa couplings for a factor of 2 enhancement are so large that the Higgs quartic coupling is pushed to large negative values in the UV, triggering an unacceptable vacuum instability far beneath the 10 TeV scale. An enhancement by a factor of 1.5 can be accommodated if the charged particles are lighter than 150 GeV, within reach of discovery in almost all cases in the 8 TeV run at the LHC, and in even the most difficult cases at 14 TeV. Thus if the diphoton enhancement survives further scrutiny, and no charged particles beneath 150 GeV are found, there must be new bosons far beneath the 10 TeV scale. This would unambiguously rule out a large class of fine-tuned theories for physics beyond the Standard Model, including split SUSY and many of its variants, and provide strong circumstantial evidence for a natural theory of electroweak symmetry breaking at the TeV scale. Alternately, theories with only a single fine-tuned Higgs and new fermions at the weak scale, with no additional scalars or gauge bosons up to a cutoff much larger than the 10 TeV scale, unambiguously predict that the hints for a large diphoton enhancement in the current data will disappear.
NASA Astrophysics Data System (ADS)
Luo, L.; Wang, Z.
2010-12-01
Soil Conservation Service Curve Number (SCS-CN) based hydrologic model, has widely been used for agricultural watersheds in recent years. However, there will be relative error when applying it due to differentiation of geographical and climatological conditions. This paper introduces a more adaptable and propagable model based on the modified SCS-CN method, which specializes into two different scale cases of research regions. Combining the typical conditions of the Zhanghe irrigation district in southern part of China, such as hydrometeorologic conditions and surface conditions, SCS-CN based models were established. The Xinbu-Qiao River basin (area =1207 km2) and the Tuanlin runoff test area (area =2.87 km2)were taken as the study areas of basin scale and field scale in Zhanghe irrigation district. Applications were extended from ordinary meso-scale watershed to field scale in Zhanghe paddy field-dominated irrigated . Based on actual measurement data of land use, soil classification, hydrology and meteorology, quantitative evaluation and modifications for two coefficients, i.e. preceding loss and runoff curve, were proposed with corresponding models, table of CN values for different landuse and AMC(antecedent moisture condition) grading standard fitting for research cases were proposed. The simulation precision was increased by putting forward a 12h unit hydrograph of the field area, and 12h unit hydrograph were simplified. Comparison between different scales show that it’s more effectively to use SCS-CN model on field scale after parameters calibrated in basin scale These results can help discovering the rainfall-runoff rule in the district. Differences of established SCS-CN model's parameters between the two study regions are also considered. Varied forms of landuse and impacts of human activities were the important factors which can impact the rainfall-runoff relations in Zhanghe irrigation district.
NASA Astrophysics Data System (ADS)
Amelang, Jeff
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medvedev, Nikita; Li, Zheng; Tkachenko, Victor
2017-01-31
In the present study, a theoretical study of electron-phonon (electron-ion) coupling rates in semiconductors driven out of equilibrium is performed. Transient change of optical coefficients reflects the band gap shrinkage in covalently bonded materials, and thus, the heating of atomic lattice. Utilizing this dependence, we test various models of electron-ion coupling. The simulation technique is based on tight-binding molecular dynamics. Our simulations with the dedicated hybrid approach (XTANT) indicate that the widely used Fermi's golden rule can break down describing material excitation on femtosecond time scales. In contrast, dynamical coupling proposed in this work yields a reasonably good agreement ofmore » simulation results with available experimental data.« less
Amirataee, Babak; Montaseri, Majid; Rezaie, Hossein
2018-01-15
Droughts are extreme events characterized by temporal duration and spatial large-scale effects. In general, regional droughts are affected by general circulation of the atmosphere (at large-scale) and regional natural factors, including the topography, natural lakes, the position relative to the center and the path of the ocean currents (at small-scale), and they don't cover the exact same effects in a wide area. Therefore, drought Severity-Area-Frequency (S-A-F) curve investigation is an essential task to develop decision making rule for regional drought management. This study developed the copula-based joint probability distribution of drought severity and percent of area under drought across the Lake Urmia basin, Iran. To do this end, one-month Standardized Precipitation Index (SPI) values during the 1971-2013 were applied across 24 rainfall stations in the study area. Then, seven copula functions of various families, including Clayton, Gumbel, Frank, Joe, Galambos, Plackett and Normal copulas, were used to model the joint probability distribution of drought severity and drought area. Using AIC, BIC and RMSE criteria, the Frank copula was selected as the most appropriate copula in order to develop the joint probability distribution of severity-percent of area under drought across the study area. Based on the Frank copula, the drought S-A-F curve for the study area was derived. The results indicated that severe/extreme drought and non-drought (wet) behaviors have affected the majority of study areas (Lake Urmia basin). However, the area covered by the specific semi-drought effects is limited and has been subject to significant variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Orthogonal search-based rule extraction for modelling the decision to transfuse.
Etchells, T A; Harrison, M J
2006-04-01
Data from an audit relating to transfusion decisions during intermediate or major surgery were analysed to determine the strengths of certain factors in the decision making process. The analysis, using orthogonal search-based rule extraction (OSRE) from a trained neural network, demonstrated that the risk of tissue hypoxia (ROTH) assessed using a 100-mm visual analogue scale, the haemoglobin value (Hb) and the presence or absence of on-going haemorrhage (OGH) were able to reproduce the transfusion decisions with a joint specificity of 0.96 and sensitivity of 0.93 and a positive predictive value of 0.9. The rules indicating transfusion were: 1. ROTH > 32 mm and Hb < 94 g x l(-1); 2. ROTH > 13 mm and Hb < 87 g x l(-1); 3. ROTH > 38 mm, Hb < 102 g x l(-1) and OGH; 4. Hb < 78 g x l(-1).
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Furthermore, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
A two-stage stochastic rule-based model to determine pre-assembly buffer content
NASA Astrophysics Data System (ADS)
Gunay, Elif Elcin; Kula, Ufuk
2018-01-01
This study considers instant decision-making needs of the automobile manufactures for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the altered sequence due to paint defects and upstream department constraints. First stage of the model decides the spare vehicle quantities, where the second stage model recovers the scrambled sequence respect to pre-defined rules. The problem is solved by sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of heuristic model with optimal ones and provide following insights: (i) as the mismatch between paint entrance and scheduled sequence decreases, the rule-based heuristic model recovers the scrambled sequence as good as the optimal resequencing model, (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences for recovering the scrambled sequence, (iii) as the defect rate increases, the difference in recovery effectiveness between rule-based heuristic and optimal solutions increases, (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms heuristic model, (v) as expected the rule-based model holds more inventory than the optimization model.
Simulating statistics of lightning-induced and man made fires
NASA Astrophysics Data System (ADS)
Krenn, R.; Hergarten, S.
2009-04-01
The frequency-area distributions of forest fires show power-law behavior with scaling exponents α in a quite narrow range, relating wildfire research to the theoretical framework of self-organized criticality. Examples of self-organized critical behavior can be found in computer simulations of simple cellular automata. The established self-organized critical Drossel-Schwabl forest fire model (DS-FFM) is one of the most widespread models in this context. Despite its qualitative agreement with event-size statistics from nature, its applicability is still questioned. Apart from general concerns that the DS-FFM apparently oversimplifies the complex nature of forest dynamics, it significantly overestimates the frequency of large fires. We present a straightforward modification of the model rules that increases the scaling exponent α by approximately 13 and brings the simulated event-size statistics close to those observed in nature. In addition, combined simulations of both the original and the modified model predict a dependence of the overall distribution on the ratio of lightning induced and man made fires as well as a difference between their respective event-size statistics. The increase of the scaling exponent with decreasing lightning probability as well as the splitting of the partial distributions are confirmed by the analysis of the Canadian Large Fire Database. As a consequence, lightning induced and man made forest fires cannot be treated separately in wildfire modeling, hazard assessment and forest management.
The logical primitives of thought: Empirical foundations for compositional cognitive models.
Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D
2016-07-01
The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A comparison of scoring weights for the EuroQol derived from patients and the general public.
Polsky, D; Willke, R J; Scott, K; Schulman, K A; Glick, H A
2001-01-01
General health state classification systems, such as the EuroQol instrument, have been developed to improve the systematic measurement and comparability of health state preferences. In this paper we generate valuations for EuroQol health states using responses to this instrument's visual analogue scale made by patients enrolled in a randomized clinical trial evaluating tirilazad mesylate, a new drug used to treat subarachnoid haemorrhage. We then compare these valuations derived from patients with published valuations derived from responses made by a sample from the general public. The data were derived from two sources: (1) responses to the EuroQol instrument from 649 patients 3 months after enrollment in the clinical trial, and (2) from a published study reporting a scoring rule for the EuroQol instrument that was based upon responses made by the general public. We used a linear regression model to develop an additive scoring rule. This rule enables direct valuation of all 243 EuroQol health states using patients' scores for their own health states elicited using a visual analogue scale. We then compared predicted scores generated using our scoring rule with predicted scores derived from a sample from the general public. The predicted scores derived using the additive scoring rules met convergent validity criteria and explained a substantial amount of the variation in visual analogue scale scores (R(2)=0.57). In the pairwise comparison of the predicted scores derived from the study sample with those derived from the general public, we found that the former set of scores were higher for 223 of the 243 states. Despite the low level of correspondence in the pairwise comparison, the overall correlation between the two sets of scores was 87%. The model presented in this paper demonstrated that scoring weights for the EuroQol instrument can be derived directly from patient responses from a clinical trial and that these weights can explain a substantial amount of variation in health valuations. Scoring weights based on patient responses are significantly higher than those derived from the general public. Further research is required to understand the source of these differences. Copyright 2001 John Wiley & Sons, Ltd.
Information fusion-based approach for studying influence on Twitter using belief theory.
Azaza, Lobna; Kirgizov, Sergey; Savonnet, Marinette; Leclercq, Éric; Gastineau, Nicolas; Faiz, Rim
2016-01-01
Influence in Twitter has become recently a hot research topic, since this micro-blogging service is widely used to share and disseminate information. Some users are more able than others to influence and persuade peers. Thus, studying most influential users leads to reach a large-scale information diffusion area, something very useful in marketing or political campaigns. In this study, we propose a new approach for multi-level influence assessment on multi-relational networks, such as Twitter . We define a social graph to model the relationships between users as a multiplex graph where users are represented by nodes, and links model the different relations between them (e.g., retweets , mentions , and replies ). We explore how relations between nodes in this graph could reveal about the influence degree and propose a generic computational model to assess influence degree of a certain node. This is based on the conjunctive combination rule from the belief functions theory to combine different types of relations. We experiment the proposed method on a large amount of data gathered from Twitter during the European Elections 2014 and deduce top influential candidates. The results show that our model is flexible enough to to consider multiple interactions combination according to social scientists needs or requirements and that the numerical results of the belief theory are accurate. We also evaluate the approach over the CLEF RepLab 2014 data set and show that our approach leads to quite interesting results.
A comprehensive study on urban true orthorectification
Zhou, G.; Chen, W.; Kelmelis, J.A.; Zhang, Dongxiao
2005-01-01
To provide some advanced technical bases (algorithms and procedures) and experience needed for national large-scale digital orthophoto generation and revision of the Standards for National Large-Scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP), this paper presents a comprehensive study on theories, algorithms, and methods of large-scale urban orthoimage generation. The procedures of orthorectification for digital terrain model (DTM)-based and digital building model (DBM)-based orthoimage generation and their mergence for true orthoimage generation are discussed in detail. A method of compensating for building occlusions using photogrammetric geometry is developed. The data structure needed to model urban buildings for accurately generating urban orthoimages is presented. Shadow detection and removal, the optimization of seamline for automatic mosaic, and the radiometric balance of neighbor images are discussed. Street visibility analysis, including the relationship between flight height, building height, street width, and relative location of the street to the imaging center, is analyzed for complete true orthoimage generation. The experimental results demonstrated that our method can effectively and correctly orthorectify the displacements caused by terrain and buildings in urban large-scale aerial images. ?? 2005 IEEE.
Emergent dynamics of laboratory insect swarms
NASA Astrophysics Data System (ADS)
Kelley, Douglas H.; Ouellette, Nicholas T.
2013-01-01
Collective animal behaviour occurs at nearly every biological size scale, from single-celled organisms to the largest animals on earth. It has long been known that models with simple interaction rules can reproduce qualitative features of this complex behaviour. But determining whether these models accurately capture the biology requires data from real animals, which has historically been difficult to obtain. Here, we report three-dimensional, time-resolved measurements of the positions, velocities, and accelerations of individual insects in laboratory swarms of the midge Chironomus riparius. Even though the swarms do not show an overall polarisation, we find statistical evidence for local clusters of correlated motion. We also show that the swarms display an effective large-scale potential that keeps individuals bound together, and we characterize the shape of this potential. Our results provide quantitative data against which the emergent characteristics of animal aggregation models can be benchmarked.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jo, Na Hyun; Wu, Yun; Wang, Lin-Lin
The recently discovered material PtSn 4 is known to exhibit extremely large magnetoresistance (XMR) that also manifests Dirac arc nodes on the surface. PdSn 4 is isostructural to PtSn 4 with the same electron count. Here, we report on the physical properties of high-quality single crystals of PdSn 4 including specific heat, temperature- and magnetic-field-dependent resistivity and magnetization, and electronic band-structure properties obtained from angle-resolved photoemission spectroscopy (ARPES). We observe that PdSn 4 has physical properties that are qualitatively similar to those of PtSn 4 , but find also pronounced differences. Importantly, the Dirac arc node surface state of PtSnmore » 4 is gapped out for PdSn 4. By comparing these similar compounds, we address the origin of the extremely large magnetoresistance in PdSn 4 and PtSn 4; based on detailed analysis of the magnetoresistivity ρ ( H , T ) , we conclude that neither the carrier compensation nor the Dirac arc node surface state are the primary reason for the extremely large magnetoresistance. On the other hand, we also find that, surprisingly, Kohler's rule scaling of the magnetoresistance, which describes a self-similarity of the field-induced orbital electronic motion across different length scales and is derived for a simple electronic response of metals to an applied magnetic field is obeyed over the full range of temperatures and field strengths that we explore.« less
Jo, Na Hyun; Wu, Yun; Wang, Lin-Lin; ...
2017-10-27
The recently discovered material PtSn 4 is known to exhibit extremely large magnetoresistance (XMR) that also manifests Dirac arc nodes on the surface. PdSn 4 is isostructural to PtSn 4 with the same electron count. Here, we report on the physical properties of high-quality single crystals of PdSn 4 including specific heat, temperature- and magnetic-field-dependent resistivity and magnetization, and electronic band-structure properties obtained from angle-resolved photoemission spectroscopy (ARPES). We observe that PdSn 4 has physical properties that are qualitatively similar to those of PtSn 4 , but find also pronounced differences. Importantly, the Dirac arc node surface state of PtSnmore » 4 is gapped out for PdSn 4. By comparing these similar compounds, we address the origin of the extremely large magnetoresistance in PdSn 4 and PtSn 4; based on detailed analysis of the magnetoresistivity ρ ( H , T ) , we conclude that neither the carrier compensation nor the Dirac arc node surface state are the primary reason for the extremely large magnetoresistance. On the other hand, we also find that, surprisingly, Kohler's rule scaling of the magnetoresistance, which describes a self-similarity of the field-induced orbital electronic motion across different length scales and is derived for a simple electronic response of metals to an applied magnetic field is obeyed over the full range of temperatures and field strengths that we explore.« less
Building distributed rule-based systems using the AI Bus
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain C.
1990-01-01
The AI Bus software architecture was designed to support the construction of large-scale, production-quality applications in areas of high technology flux, running heterogeneous distributed environments, utilizing a mix of knowledge-based and conventional components. These goals led to its current development as a layered, object-oriented library for cooperative systems. This paper describes the concepts and design of the AI Bus and its implementation status as a library of reusable and customizable objects, structured by layers from operating system interfaces up to high-level knowledge-based agents. Each agent is a semi-autonomous process with specialized expertise, and consists of a number of knowledge sources (a knowledge base and inference engine). Inter-agent communication mechanisms are based on blackboards and Actors-style acquaintances. As a conservative first implementation, we used C++ on top of Unix, and wrapped an embedded Clips with methods for the knowledge source class. This involved designing standard protocols for communication and functions which use these protocols in rules. Embedding several CLIPS objects within a single process was an unexpected problem because of global variables, whose solution involved constructing and recompiling a C++ version of CLIPS. We are currently working on a more radical approach to incorporating CLIPS, by separating out its pattern matcher, rule and fact representations and other components as true object oriented modules.
Summation rules for a fully nonlocal energy-based quasicontinuum method
NASA Astrophysics Data System (ADS)
Amelang, J. S.; Venturini, G. N.; Kochmann, D. M.
2015-09-01
The quasicontinuum (QC) method coarse-grains crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. A crucial cornerstone of all QC techniques, summation or quadrature rules efficiently approximate the thermodynamic quantities of interest. Here, we investigate summation rules for a fully nonlocal, energy-based QC method to approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of all atoms in the crystal lattice. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. We review traditional summation rules and discuss their strengths and weaknesses with a focus on energy approximation errors and spurious force artifacts. Moreover, we introduce summation rules which produce no residual or spurious force artifacts in centrosymmetric crystals in the large-element limit under arbitrary affine deformations in two dimensions (and marginal force artifacts in three dimensions), while allowing us to seamlessly bridge to full atomistics. Through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions, we compare the accuracy of the new scheme to various previous ones. Our results confirm that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors. Our numerical benchmark examples include the calculation of elastic constants from completely random QC meshes and the inhomogeneous deformation of aggressively coarse-grained crystals containing nano-voids. In the elastic regime, we directly compare QC results to those of full atomistics to assess global and local errors in complex QC simulations. Going beyond elasticity, we illustrate the performance of the energy-based QC method with the new second-order summation rule by the help of nanoindentation examples with automatic mesh adaptation. Overall, our findings provide guidelines for the selection of summation rules for the fully nonlocal energy-based QC method.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of themore » SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.« less
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimiza- tion) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimiza- tion and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUAROmore » Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to- observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bojechko, Casey; Phillps, Mark; Kalet, Alan
Purpose: Complex treatments in radiation therapy require robust verification in order to prevent errors that can adversely affect the patient. For this purpose, the authors estimate the effectiveness of detecting errors with a “defense in depth” system composed of electronic portal imaging device (EPID) based dosimetry and a software-based system composed of rules-based and Bayesian network verifications. Methods: The authors analyzed incidents with a high potential severity score, scored as a 3 or 4 on a 4 point scale, recorded in an in-house voluntary incident reporting system, collected from February 2012 to August 2014. The incidents were categorized into differentmore » failure modes. The detectability, defined as the number of incidents that are detectable divided total number of incidents, was calculated for each failure mode. Results: In total, 343 incidents were used in this study. Of the incidents 67% were related to photon external beam therapy (EBRT). The majority of the EBRT incidents were related to patient positioning and only a small number of these could be detected by EPID dosimetry when performed prior to treatment (6%). A large fraction could be detected by in vivo dosimetry performed during the first fraction (74%). Rules-based and Bayesian network verifications were found to be complimentary to EPID dosimetry, able to detect errors related to patient prescriptions and documentation, and errors unrelated to photon EBRT. Combining all of the verification steps together, 91% of all EBRT incidents could be detected. Conclusions: This study shows that the defense in depth system is potentially able to detect a large majority of incidents. The most effective EPID-based dosimetry verification is in vivo measurements during the first fraction and is complemented by rules-based and Bayesian network plan checking.« less
A Framework of Simple Event Detection in Surveillance Video
NASA Astrophysics Data System (ADS)
Xu, Weiguang; Zhang, Yafei; Lu, Jianjiang; Tian, Yulong; Wang, Jiabao
Video surveillance is playing more and more important role in people's social life. Real-time alerting of threaten events and searching interesting content in stored large scale video footage needs human operator to pay full attention on monitor for long time. The labor intensive mode has limit the effectiveness and efficiency of the system. A framework of simple event detection is presented advance the automation of video surveillance. An improved inner key point matching approach is used to compensate motion of background in real-time; frame difference are used to detect foreground; HOG based classifiers are used to classify foreground object into people and car; mean-shift is used to tracking the recognized objects. Events are detected based on predefined rules. The maturity of the algorithms guarantee the robustness of the framework, and the improved approach and the easily checked rules enable the framework to work in real-time. Future works to be done are also discussed.
Global fits of GUT-scale SUSY models with GAMBIT
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.
Setoguchi, Soko; Zhu, Ying; Jalbert, Jessica J; Williams, Lauren A; Chen, Chih-Ying
2014-05-01
Linking patient registries with administrative databases can enhance the utility of the databases for epidemiological and comparative effectiveness research. However, registries often lack direct personal identifiers, and the validity of record linkage using multiple indirect personal identifiers is not well understood. Using a large contemporary national cardiovascular device registry and 100% Medicare inpatient data, we linked hospitalization-level records. The main outcomes were the validity measures of several deterministic linkage rules using multiple indirect personal identifiers compared with rules using both direct and indirect personal identifiers. Linkage rules using 2 or 3 indirect, patient-level identifiers (ie, date of birth, sex, admission date) and hospital ID produced linkages with sensitivity of 95% and specificity of 98% compared with a gold standard linkage rule using a combination of both direct and indirect identifiers. Ours is the first large-scale study to validate the performance of deterministic linkage rules without direct personal identifiers. When linking hospitalization-level records in the absence of direct personal identifiers, provider information is necessary for successful linkage. © 2014 American Heart Association, Inc.
Adaptive Scaling of Cluster Boundaries for Large-Scale Social Media Data Clustering.
Meng, Lei; Tan, Ah-Hwee; Wunsch, Donald C
2016-12-01
The large scale and complex nature of social media data raises the need to scale clustering techniques to big data and make them capable of automatically identifying data clusters with few empirical settings. In this paper, we present our investigation and three algorithms based on the fuzzy adaptive resonance theory (Fuzzy ART) that have linear computational complexity, use a single parameter, i.e., the vigilance parameter to identify data clusters, and are robust to modest parameter settings. The contribution of this paper lies in two aspects. First, we theoretically demonstrate how complement coding, commonly known as a normalization method, changes the clustering mechanism of Fuzzy ART, and discover the vigilance region (VR) that essentially determines how a cluster in the Fuzzy ART system recognizes similar patterns in the feature space. The VR gives an intrinsic interpretation of the clustering mechanism and limitations of Fuzzy ART. Second, we introduce the idea of allowing different clusters in the Fuzzy ART system to have different vigilance levels in order to meet the diverse nature of the pattern distribution of social media data. To this end, we propose three vigilance adaptation methods, namely, the activation maximization (AM) rule, the confliction minimization (CM) rule, and the hybrid integration (HI) rule. With an initial vigilance value, the resulting clustering algorithms, namely, the AM-ART, CM-ART, and HI-ART, can automatically adapt the vigilance values of all clusters during the learning epochs in order to produce better cluster boundaries. Experiments on four social media data sets show that AM-ART, CM-ART, and HI-ART are more robust than Fuzzy ART to the initial vigilance value, and they usually achieve better or comparable performance and much faster speed than the state-of-the-art clustering algorithms that also do not require a predefined number of clusters.
Tian, Liang; Russell, Alan; Anderson, Iver
2014-01-03
Deformation processed metal–metal composites (DMMCs) are high-strength, high-electrical conductivity composites developed by severe plastic deformation of two ductile metal phases. The extraordinarily high strength of DMMCs is underestimated using the rule of mixture (or volumetric weighted average) of conventionally work-hardened metals. A dislocation-density-based, strain–gradient–plasticity model is proposed to relate the strain-gradient effect with the geometrically necessary dislocations emanating from the interface to better predict the strength of DMMCs. The model prediction was compared with our experimental findings of Cu–Nb, Cu–Ta, and Al–Ti DMMC systems to verify the applicability of the new model. The results show that this model predicts themore » strength of DMMCs better than the rule-of-mixture model. The strain-gradient effect, responsible for the exceptionally high strength of heavily cold worked DMMCs, is dominant at large deformation strain since its characteristic microstructure length is comparable with the intrinsic material length.« less
A Process Algebra Approach to Quantum Electrodynamics
NASA Astrophysics Data System (ADS)
Sulis, William
2017-12-01
The process algebra program is directed towards developing a realist model of quantum mechanics free of paradoxes, divergences and conceptual confusions. From this perspective, fundamental phenomena are viewed as emerging from primitive informational elements generated by processes. The process algebra has been shown to successfully reproduce scalar non-relativistic quantum mechanics (NRQM) without the usual paradoxes and dualities. NRQM appears as an effective theory which emerges under specific asymptotic limits. Space-time, scalar particle wave functions and the Born rule are all emergent in this framework. In this paper, the process algebra model is reviewed, extended to the relativistic setting, and then applied to the problem of electrodynamics. A semiclassical version is presented in which a Minkowski-like space-time emerges as well as a vector potential that is discrete and photon-like at small scales and near-continuous and wave-like at large scales. QED is viewed as an effective theory at small scales while Maxwell theory becomes an effective theory at large scales. The process algebra version of quantum electrodynamics is intuitive and realist, free from divergences and eliminates the distinction between particle, field and wave. Computations are carried out using the configuration space process covering map, although the connection to second quantization has not been fully explored.
NASA Astrophysics Data System (ADS)
Raghib, Michael; Levin, Simon; Kevrekidis, Ioannis
2010-05-01
Self-propelled particle models (SPP's) are a class of agent-based simulations that have been successfully used to explore questions related to various flavors of collective motion, including flocking, swarming, and milling. These models typically consist of particle configurations, where each particle moves with constant speed, but changes its orientation in response to local averages of the positions and orientations of its neighbors found within some interaction region. These local averages are based on `social interactions', which include avoidance of collisions, attraction, and polarization, that are designed to generate configurations that move as a single object. Errors made by the individuals in the estimates of the state of the local configuration are modeled as a random rotation of the updated orientation resulting from the social rules. More recently, SPP's have been introduced in the context of collective decision-making, where the main innovation consists of dividing the population into naïve and `informed' individuals. Whereas naïve individuals follow the classical collective motion rules, members of the informed sub-population update their orientations according to a weighted average of the social rules and a fixed `preferred' direction, shared by all the informed individuals. Collective decision-making is then understood in terms of the ability of the informed sub-population to steer the whole group along the preferred direction. Summary statistics of collective decision-making are defined in terms of the stochastic properties of the random walk followed by the centroid of the configuration as the particles move about, in particular the scaling behavior of the mean squared displacement (msd). For the region of parameters where the group remains coherent , we note that there are two characteristic time scales, first there is an anomalous transient shared by both purely naïve and informed configurations, i.e. the scaling exponent lies between 1 and 2. The long-time behavior of the msd of the centroid walk scales linearly with time for naïve groups (diffusion), but shows a sharp transition to quadratic scaling (advection) for informed ones. These observations suggest that the mesoscopic variables of interest are the magnitude of the drift, the diffusion coefficient and the time-scales at which the anomalous and the asymptotic behavior respectively dominate transport, the latter being linked to the time scale at which the group reaches a decision. In order to estimate these summary statistics from the msd, we assumed that the configuration centroid follows an uncoupled Continuous Time Random Walk (CTRW) with smooth jump and waiting time pdf's. The mesoscopic transport equation for this type of random walk corresponds to an Advection-Diffusion Equation with Memory (ADEM). The introduction of the memory, and thus non-Markovian effects, is necessary in order to correctly account for the two time scales present. Although we were not able to calculate the memory directly from the individual-level rules, we show that it can estimated from a single, relatively short, simulation run using a Mittag-Leffler function as template. With this function it is possible to predict accurately the behavior of the msd, as well as the full pdf for the position of the centroid. The resulting ADEM is self-consistent in the sense that transport parameters estimated from the memory via a Kubo relationship coincide with those estimated from the moments of the jump size pdf of the associated CTRW for a large number of group sizes, proportions of informed individuals, and degrees of bias along the preferred direction. We also discuss the phase diagrams for the transport coefficients estimated from this method, where we notice velocity-precision trade-offs, where precision is a measure of the deviation of realized group orientations with respect to the informed direction. We also note that the time scale to collective decision is invariant with respect to group size, and depends only on the proportion of informed individuals and the strength of the coupling along the informed direction.
Scaling rules for the final decline to extinction
Griffen, Blaine D.; Drake, John M.
2009-01-01
Space–time scaling rules are ubiquitous in ecological phenomena. Current theory postulates three scaling rules that describe the duration of a population's final decline to extinction, although these predictions have not previously been empirically confirmed. We examine these scaling rules across a broader set of conditions, including a wide range of density-dependent patterns in the underlying population dynamics. We then report on tests of these predictions from experiments using the cladoceran Daphnia magna as a model. Our results support two predictions that: (i) the duration of population persistence is much greater than the duration of the final decline to extinction and (ii) the duration of the final decline to extinction increases with the logarithm of the population's estimated carrying capacity. However, our results do not support a third prediction that the duration of the final decline scales inversely with population growth rate. These findings not only support the current standard theory of population extinction but also introduce new empirical anomalies awaiting a theoretical explanation. PMID:19141422
Dynamic Simulation of Human Thermoregulation and Heat Transfer for Spaceflight Applications
NASA Technical Reports Server (NTRS)
Miller, Thomas R.; Nelson, David A.; Bue, Grant; Kuznetz, Lawrence
2011-01-01
Models of human thermoregulation and heat transfer date from the early 1970s and have been developed for applications ranging from evaluating thermal comfort in spacecraft and aircraft cabin environments to predicting heat stress during EVAs. Most lumped or compartment models represent the body as an assemblage cylindrical and spherical elements which may be subdivided into layers to describe tissue heterogeneity. Many existing models are of limited usefulness in asymmetric thermal environments, such as may be encountered during an EVA. Conventional whole-body clothing models also limit the ability to describe local surface thermal and evaporation effects in sufficient detail. A further limitation is that models based on a standard man model are not readily scalable to represent large or small subjects. This work describes development of a new human thermal model derived from the 41-node man model. Each segment is divided into four concentric, constant thickness cylinders made up of a central core surrounded by muscle, fat, and skin, respectively. These cylinders are connected by the flow of blood from a central blood pool to each part. The central blood pool is updated at each time step, based on a whole-body energy balance. Results show the model simulates core and surface temperature histories, sweat evaporation and metabolic rates which generally are consistent with controlled exposures of human subjects. Scaling rules are developed to enable simulation of small and large subjects (5th percentile and 95th percentile). Future refinements will include a clothing model that addresses local surface insulation and permeation effects and developing control equations to describe thermoregulatory effects such as may occur with prolonged weightlessness or with aging.
Fajardo, Alex
2016-05-01
The study of scaling examines the relative dimensions of diverse organismal traits. Understanding whether global scaling patterns are paralleled within species is key to identify causal factors of universal scaling. I examined whether the foliage-stem (Corner's rules), the leaf size-number, and the leaf mass-leaf area scaling relationships remained invariant and isometric with elevation in a wide-distributed treeline species in the southern Chilean Andes. Mean leaf area, leaf mass, leafing intensity, and twig cross-sectional area were determined for 1-2 twigs of 8-15 Nothofagus pumilio individuals across four elevations (including treeline elevation) and four locations (from central Chile at 36°S to Tierra del Fuego at 54°S). Mixed effects models were fitted to test whether the interaction term between traits and elevation was nonsignificant (invariant). The leaf-twig cross-sectional area and the leaf mass-leaf area scaling relationships were isometric (slope = 1) and remained invariant with elevation, whereas the leaf size-number (i.e., leafing intensity) scaling was allometric (slope ≠ -1) and showed no variation with elevation. Leaf area and leaf number were consistently negatively correlated across elevation. The scaling relationships examined in the current study parallel those seen across species. It is plausible that the explanation of intraspecific scaling relationships, as trait combinations favored by natural selection, is the same as those invoked to explain across species patterns. Thus, it is very likely that the global interspecific Corner's rules and other leaf-leaf scaling relationships emerge as the aggregate of largely parallel intraspecific patterns. © 2016 Botanical Society of America.
Energy model for rumor propagation on social networks
NASA Astrophysics Data System (ADS)
Han, Shuo; Zhuang, Fuzhen; He, Qing; Shi, Zhongzhi; Ao, Xiang
2014-01-01
With the development of social networks, the impact of rumor propagation on human lives is more and more significant. Due to the change of propagation mode, traditional rumor propagation models designed for word-of-mouth process may not be suitable for describing the rumor spreading on social networks. To overcome this shortcoming, we carefully analyze the mechanisms of rumor propagation and the topological properties of large-scale social networks, then propose a novel model based on the physical theory. In this model, heat energy calculation formula and Metropolis rule are introduced to formalize this problem and the amount of heat energy is used to measure a rumor’s impact on a network. Finally, we conduct track experiments to show the evolution of rumor propagation, make comparison experiments to contrast the proposed model with the traditional models, and perform simulation experiments to study the dynamics of rumor spreading. The experiments show that (1) the rumor propagation simulated by our model goes through three stages: rapid growth, fluctuant persistence and slow decline; (2) individuals could spread a rumor repeatedly, which leads to the rumor’s resurgence; (3) rumor propagation is greatly influenced by a rumor’s attraction, the initial rumormonger and the sending probability.
Compactness Aromaticity of Atoms in Molecules
Putz, Mihai V.
2010-01-01
A new aromaticity definition is advanced as the compactness formulation through the ratio between atoms-in-molecule and orbital molecular facets of the same chemical reactivity property around the pre- and post-bonding stabilization limit, respectively. Geometrical reactivity index of polarizability was assumed as providing the benchmark aromaticity scale, since due to its observable character; with this occasion new Hydrogenic polarizability quantum formula that recovers the exact value of 4.5 a03 for Hydrogen is provided, where a0 is the Bohr radius; a polarizability based–aromaticity scale enables the introduction of five referential aromatic rules (Aroma 1 to 5 Rules). With the help of these aromatic rules, the aromaticity scales based on energetic reactivity indices of electronegativity and chemical hardness were computed and analyzed within the major semi-empirical and ab initio quantum chemical methods. Results show that chemical hardness based-aromaticity is in better agreement with polarizability based-aromaticity than the electronegativity-based aromaticity scale, while the most favorable computational environment appears to be the quantum semi-empirical for the first and quantum ab initio for the last of them, respectively. PMID:20480020
The Hubble IR cutoff in holographic ellipsoidal cosmologies
NASA Astrophysics Data System (ADS)
Cataldo, Mauricio; Cruz, Norman
2018-01-01
It is well known that for spatially flat FRW cosmologies, the holographic dark energy disfavors the Hubble parameter as a candidate for the IR cutoff. For overcoming this problem, we explore the use of this cutoff in holographic ellipsoidal cosmological models, and derive the general ellipsoidal metric induced by a such holographic energy density. Despite the drawbacks that this cutoff presents in homogeneous and isotropic universes, based on this general metric, we developed a suitable ellipsoidal holographic cosmological model, filled with a dark matter and a dark energy components. At late time stages, the cosmic evolution is dominated by a holographic anisotropic dark energy with barotropic equations of state. The cosmologies expand in all directions in accelerated manner. Since the ellipsoidal cosmologies given here are not asymptotically FRW, the deviation from homogeneity and isotropy of the universe on large cosmological scales remains constant during all cosmic evolution. This feature allows the studied holographic ellipsoidal cosmologies to be ruled by an equation of state ω =p/ρ , whose range belongs to quintessence or even phantom matter.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
According to the complex real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposed a robust, two-dimensional, shallow water model based on the unstructured Godunov- type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
Target-decoy Based False Discovery Rate Estimation for Large-scale Metabolite Identification.
Wang, Xusheng; Jones, Drew R; Shaw, Timothy I; Cho, Ji-Hoon; Wang, Yuanyuan; Tan, Haiyan; Xie, Boer; Zhou, Suiping; Li, Yuxin; Peng, Junmin
2018-05-23
Metabolite identification is a crucial step in mass spectrometry (MS)-based metabolomics. However, it is still challenging to assess the confidence of assigned metabolites. In this study, we report a novel method for estimating false discovery rate (FDR) of metabolite assignment with a target-decoy strategy, in which the decoys are generated through violating the octet rule of chemistry by adding small odd numbers of hydrogen atoms. The target-decoy strategy was integrated into JUMPm, an automated metabolite identification pipeline for large-scale MS analysis, and was also evaluated with two other metabolomics tools, mzMatch and mzMine 2. The reliability of FDR calculation was examined by false datasets, which were simulated by altering MS1 or MS2 spectra. Finally, we used the JUMPm pipeline coupled with the target-decoy strategy to process unlabeled and stable-isotope labeled metabolomic datasets. The results demonstrate that the target-decoy strategy is a simple and effective method for evaluating the confidence of high-throughput metabolite identification.
Niklas, Karl J
2006-02-01
Life forms as diverse as unicellular algae, zooplankton, vascular plants, and mammals appear to obey quarter-power scaling rules. Among the most famous of these rules is Kleiber's (i.e. basal metabolic rates scale as the three-quarters power of body mass), which has a botanical analogue (i.e. annual plant growth rates scale as the three-quarters power of total body mass). Numerous theories have tried to explain why these rules exist, but each has been heavily criticized either on conceptual or empirical grounds. N,P-STOICHIOMETRY: Recent models predicting growth rates on the basis of how total cell, tissue, or organism nitrogen and phosphorus are allocated, respectively, to protein and rRNA contents may provide the answer, particularly in light of the observation that annual plant growth rates scale linearly with respect to standing leaf mass and that total leaf mass scales isometrically with respect to nitrogen but as the three-quarters power of leaf phosphorus. For example, when these relationships are juxtaposed with other allometric trends, a simple N,P-stoichiometric model successfully predicts the relative growth rates of 131 diverse C3 and C4 species. The melding of allometric and N,P-stoichiometric theoretical insights provides a robust modelling approach that conceptually links the subcellular 'machinery' of protein/ribosomal metabolism to observed growth rates of uni- and multicellular organisms. Because the operation of this 'machinery' is basic to the biology of all life forms, its allometry may provide a mechanistic explanation for the apparent ubiquity of quarter-power scaling rules.
A generic hydroeconomic model to assess future water scarcity
NASA Astrophysics Data System (ADS)
Neverre, Noémie; Dumas, Patrice
2015-04-01
We developed a generic hydroeconomic model able to confront future water supply and demand on a large scale, taking into account man-made reservoirs. The assessment is done at the scale of river basins, using only globally available data; the methodology can thus be generalized. On the supply side, we evaluate the impacts of climate change on water resources. The available quantity of water at each site is computed using the following information: runoff is taken from the outputs of CNRM climate model (Dubois et al., 2010), reservoirs are located using Aquastat, and the sub-basin flow-accumulation area of each reservoir is determined based on a Digital Elevation Model (HYDRO1k). On the demand side, agricultural and domestic demands are projected in terms of both quantity and economic value. For the agricultural sector, globally available data on irrigated areas and crops are combined in order to determine irrigated crops localization. Then, crops irrigation requirements are computed for the different stages of the growing season using Allen (1998) method with Hargreaves potential evapotranspiration. Irrigation water economic value is based on a yield comparison approach between rainfed and irrigated crops. Potential irrigated and rainfed yields are taken from LPJmL (Blondeau et al., 2007), or from FAOSTAT by making simple assumptions on yield ratios. For the domestic sector, we project the combined effects of demographic growth, economic development and water cost evolution on future demands. The method consists in building three-blocks inverse demand functions where volume limits of the blocks evolve with the level of GDP per capita. The value of water along the demand curve is determined from price-elasticity, price and demand data from the literature, using the point-expansion method, and from water costs data. Then projected demands are confronted to future water availability. Operating rules of the reservoirs and water allocation between demands are based on the maximization of water benefits, over time and space. A parameterisation-simulation-optimisation approach is used. This gives a projection of future water scarcity in the different locations and an estimation of the associated direct economic losses from unsatisfied demands. This generic hydroeconomic model can be easily applied to large-scale regions, in particular developing regions where little reliable data is available. We will present an application to Algeria, up to the 2050 horizon.
Aubert, Alice H; Thrun, Michael C; Breuer, Lutz; Ultsch, Alfred
2016-08-30
High-frequency, in-situ monitoring provides large environmental datasets. These datasets will likely bring new insights in landscape functioning and process scale understanding. However, tailoring data analysis methods is necessary. Here, we detach our analysis from the usual temporal analysis performed in hydrology to determine if it is possible to infer general rules regarding hydrochemistry from available large datasets. We combined a 2-year in-stream nitrate concentration time series (time resolution of 15 min) with concurrent hydrological, meteorological and soil moisture data. We removed the low-frequency variations through low-pass filtering, which suppressed seasonality. We then analyzed the high-frequency variability component using Pareto Density Estimation, which to our knowledge has not been applied to hydrology. The resulting distribution of nitrate concentrations revealed three normally distributed modes: low, medium and high. Studying the environmental conditions for each mode revealed the main control of nitrate concentration: the saturation state of the riparian zone. We found low nitrate concentrations under conditions of hydrological connectivity and dominant denitrifying biological processes, and we found high nitrate concentrations under hydrological recession conditions and dominant nitrifying biological processes. These results generalize our understanding of hydro-biogeochemical nitrate flux controls and bring useful information to the development of nitrogen process-based models at the landscape scale.
ERIC Educational Resources Information Center
Anderson, Sarah; Gurnee, Anne
2016-01-01
While the purpose of K-12 education is largely to train students for college and career, free education in a democratic society has another purpose: to prepare citizens to rule themselves. In this article, Anderson and Gurnee explain how place-based learning equips students to be active citizens in their communities. In this model, school localize…
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion,and the method will be equally straightforward for more complex models. ?? 2010 Society for Mathematical Biology.
Wave models for turbulent free shear flows
NASA Technical Reports Server (NTRS)
Liou, W. W.; Morris, P. J.
1991-01-01
New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large scale structures in these flows using a quasi-linear theory. Three model were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time dependent motion of the large scale structure of the mixing region are made. The predictions show good agreement with experimental observations.
Renormalisation group corrections to neutrino mixing sum rules
NASA Astrophysics Data System (ADS)
Gehrlein, J.; Petcov, S. T.; Spinrath, M.; Titov, A. V.
2016-11-01
Neutrino mixing sum rules are common to a large class of models based on the (discrete) symmetry approach to lepton flavour. In this approach the neutrino mixing matrix U is assumed to have an underlying approximate symmetry form Ũν, which is dictated by, or associated with, the employed (discrete) symmetry. In such a setup the cosine of the Dirac CP-violating phase δ can be related to the three neutrino mixing angles in terms of a sum rule which depends on the symmetry form of Ũν. We consider five extensively discussed possible symmetry forms of Ũν: i) bimaximal (BM) and ii) tri-bimaximal (TBM) forms, the forms corresponding to iii) golden ratio type A (GRA) mixing, iv) golden ratio type B (GRB) mixing, and v) hexagonal (HG) mixing. For each of these forms we investigate the renormalisation group corrections to the sum rule predictions for δ in the cases of neutrino Majorana mass term generated by the Weinberg (dimension 5) operator added to i) the Standard Model, and ii) the minimal SUSY extension of the Standard Model.
Andreeva, Antonina
2016-06-15
The Structural Classification of Proteins (SCOP) database has facilitated the development of many tools and algorithms and it has been successfully used in protein structure prediction and large-scale genome annotations. During the development of SCOP, numerous exceptions were found to topological rules, along with complex evolutionary scenarios and peculiarities in proteins including the ability to fold into alternative structures. This article reviews cases of structural variations observed for individual proteins and among groups of homologues, knowledge of which is essential for protein structure modelling. © 2016 The Author(s). published by Portland Press Limited on behalf of the Biochemical Society.
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determinated by skin model, which is based on the color attributes hue and saturation in HSV color space, as well color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of human face. So it is possible for finding human faces in different scale with scanning image only once, and it leads to reduction the computation time of face detection greatly. In level 3, a human face mosaic image model, which is consistent with physical structure features of human face well, is applied to judge whether there are face detects in human face candidate regions. This model includes edge and gray rules. Experiment results show that the approach has high robustness and fast speed. It has wide application perspective at human-computer interactions and visual telephone etc.
Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things
NASA Astrophysics Data System (ADS)
Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik
2017-09-01
This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensornetworks is proposed. To obtain high navigation accuracy and large exploration area, highnode localization accuracy and large network scale are required. However, thecomputational and communication complexity and time consumption are greatly increasedwith the increase of the network scales. A localization algorithm based on a spring model(LASM) method is proposed to reduce the computational complexity, while maintainingthe localization accuracy in large scale sensor networks. The algorithm simulates thedynamics of physical spring system to estimate the positions of nodes. The sensor nodesare set as particles with masses and connected with neighbor nodes by virtual springs. Thevirtual springs will force the particles move to the original positions, the node positionscorrespondingly, from the randomly set positions. Therefore, a blind node position can bedetermined from the LASM algorithm by calculating the related forces with the neighbornodes. The computational and communication complexity are O(1) for each node, since thenumber of the neighbor nodes does not increase proportionally with the network scale size.Three patches are proposed to avoid local optimization, kick out bad nodes and deal withnode variation. Simulation results show that the computational and communicationcomplexity are almost constant despite of the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps arealmost unrelated with the network scale size.
NASA Astrophysics Data System (ADS)
He, Yingqing; Ai, Bin; Yao, Yao; Zhong, Fajun
2015-06-01
Cellular automata (CA) have proven to be very effective for simulating and predicting the spatio-temporal evolution of complex geographical phenomena. Traditional methods generally pose problems in determining the structure and parameters of CA for a large, complex region or a long-term simulation. This study presents a self-adaptive CA model integrated with an artificial immune system to discover dynamic transition rules automatically. The model's parameters are allowed to be self-modified with the application of multi-temporal remote sensing images: that is, the CA can adapt itself to the changed and complex environment. Therefore, urban dynamic evolution rules over time can be efficiently retrieved by using this integrated model. The proposed AIS-based CA model was then used to simulate the rural-urban land conversion of Guangzhou city, located in the core of China's Pearl River Delta. The initial urban land was directly classified from TM satellite image in the year 1990. Urban land in the years 1995, 2000, 2005, 2009 and 2012 was correspondingly used as the observed data to calibrate the model's parameters. With the quantitative index figure of merit (FoM) and pattern similarity, the comparison was further performed between the AIS-based model and a Logistic CA model. The results indicate that the AIS-based CA model can perform better and with higher precision in simulating urban evolution, and the simulated spatial pattern is closer to the actual development situation.
NASA Astrophysics Data System (ADS)
Ji, Xinye; Shen, Chaopeng; Riley, William J.
2015-12-01
Soil moisture statistical fractal is an important tool for downscaling remotely-sensed observations and has the potential to play a key role in multi-scale hydrologic modeling. The fractal was first introduced two decades ago, but relatively little is known regarding how its scaling exponents evolve in time in response to climatic forcings. Previous studies have neglected the process of moisture re-distribution due to regional groundwater flow. In this study we used a physically-based surface-subsurface processes model and numerical experiments to elucidate the patterns and controls of fractal temporal evolution in two U.S. Midwest basins. Groundwater flow was found to introduce large-scale spatial structure, thereby reducing the scaling exponents (τ), which has implications for the transferability of calibrated parameters to predict τ. However, the groundwater effects depend on complex interactions with other physical controls such as soil texture and land use. The fractal scaling exponents, while in general showing a seasonal mode that correlates with mean moisture content, display hysteresis after storm events that can be divided into three phases, consistent with literature findings: (a) wetting, (b) re-organizing, and (c) dry-down. Modeling experiments clearly show that the hysteresis is attributed to soil texture, whose "patchiness" is the primary contributing factor. We generalized phenomenological rules for the impacts of rainfall, soil texture, groundwater flow, and land use on τ evolution. Grid resolution has a mild influence on the results and there is a strong correlation between predictions of τ from different resolutions. Overall, our results suggest that groundwater flow should be given more consideration in studies of the soil moisture statistical fractal, especially in regions with a shallow water table.
Fuzzy rule-based forecast of meteorological drought in western Niger
NASA Astrophysics Data System (ADS)
Abdourahamane, Zakari Seybou; Acar, Reşat
2018-01-01
Understanding the causes of rainfall anomalies in the West African Sahel to effectively predict drought events remains a challenge. The physical mechanisms that influence precipitation in this region are complex, uncertain, and imprecise in nature. Fuzzy logic techniques are renowned to be highly efficient in modeling such dynamics. This paper attempts to forecast meteorological drought in Western Niger using fuzzy rule-based modeling techniques. The 3-month scale standardized precipitation index (SPI-3) of four rainfall stations was used as predictand. Monthly data of southern oscillation index (SOI), South Atlantic sea surface temperature (SST), relative humidity (RH), and Atlantic sea level pressure (SLP), sourced from the National Oceanic and Atmosphere Administration (NOAA), were used as predictors. Fuzzy rules and membership functions were generated using fuzzy c-means clustering approach, expert decision, and literature review. For a minimum lead time of 1 month, the model has a coefficient of determination R 2 between 0.80 and 0.88, mean square error (MSE) below 0.17, and Nash-Sutcliffe efficiency (NSE) ranging between 0.79 and 0.87. The empirical frequency distributions of the predicted and the observed drought classes are equal at the 99% of confidence level based on two-sample t test. Results also revealed the discrepancy in the influence of SOI and SLP on drought occurrence at the four stations while the effect of SST and RH are space independent, being both significantly correlated (at α < 0.05 level) to the SPI-3. Moreover, the implemented fuzzy model compared to decision tree-based forecast model shows better forecast skills.
Developing a reversible rapid coordinate transformation model for the cylindrical projection
NASA Astrophysics Data System (ADS)
Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai
2016-04-01
Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and one polynomial is hard to make simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the calculation of the coordinate transformation if we construct polynomials to approximate the transformation rule instead of "true" coordinates? In addition, is it preferable to compare models using such polynomials with traditional numerical models with even higher exponents? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of a large amount of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
Programming biological models in Python using PySB.
Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K
2013-01-01
Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis.
Programming biological models in Python using PySB
Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K
2013-01-01
Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis. PMID:23423320
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian
2017-01-01
In this paper, we first propose several novel concepts for object-based image analysis, which include line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) Multi-scale region primitives (segments) are obtained by image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by phase-based line detection method. (2) Association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected. (3) Several spatial rules are designed to extract RCAs within sea waters after land and water separation. Experiments show that the proposed method can successfully extract different-shaped RCAs from HR images with good performance.
Significance testing of rules in rule-based models of human problem solving
NASA Technical Reports Server (NTRS)
Lewis, C. M.; Hammer, J. M.
1986-01-01
Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.
Recent development and biomedical applications of probabilistic Boolean networks
2013-01-01
Probabilistic Boolean network (PBN) modelling is a semi-quantitative approach widely used for the study of the topology and dynamic aspects of biological systems. The combined use of rule-based representation and probability makes PBN appealing for large-scale modelling of biological networks where degrees of uncertainty need to be considered. A considerable expansion of our knowledge in the field of theoretical research on PBN can be observed over the past few years, with a focus on network inference, network intervention and control. With respect to areas of applications, PBN is mainly used for the study of gene regulatory networks though with an increasing emergence in signal transduction, metabolic, and also physiological networks. At the same time, a number of computational tools, facilitating the modelling and analysis of PBNs, are continuously developed. A concise yet comprehensive review of the state-of-the-art on PBN modelling is offered in this article, including a comparative discussion on PBN versus similar models with respect to concepts and biomedical applications. Due to their many advantages, we consider PBN to stand as a suitable modelling framework for the description and analysis of complex biological systems, ranging from molecular to physiological levels. PMID:23815817
A hybrid learning method for constructing compact rule-based fuzzy models.
Zhao, Wanqing; Niu, Qun; Li, Kang; Irwin, George W
2013-12-01
The Takagi–Sugeno–Kang-type rule-based fuzzy model has found many applications in different fields; a major challenge is, however, to build a compact model with optimized model parameters which leads to satisfactory model performance. To produce a compact model, most existing approaches mainly focus on selecting an appropriate number of fuzzy rules. In contrast, this paper considers not only the selection of fuzzy rules but also the structure of each rule premise and consequent, leading to the development of a novel compact rule-based fuzzy model. Here, each fuzzy rule is associated with two sets of input attributes, in which the first is used for constructing the rule premise and the other is employed in the rule consequent. A new hybrid learning method combining the modified harmony search method with a fast recursive algorithm is hereby proposed to determine the structure and the parameters for the rule premises and consequents. This is a hard mixed-integer nonlinear optimization problem, and the proposed hybrid method solves the problem by employing an embedded framework, leading to a significantly reduced number of model parameters and a small number of fuzzy rules with each being as simple as possible. Results from three examples are presented to demonstrate the compactness (in terms of the number of model parameters and the number of rules) and the performance of the fuzzy models obtained by the proposed hybrid learning method, in comparison with other techniques from the literature.
Wave Energy Prize - General Information
Scharmen, Wesley
2016-12-01
All the informational files, templates, rules and guidelines for Wave Energy Prize (WEP), including the Wave Energy Prize Rules, Participant Terms and Conditions Template, WEC Prize Name, Logo, Branding, WEC Publicity, Technical Submission Template , Numerical Modeling Template, SSTF Submission Template, 1/20th Scale Model Design and Construction Plan Template, Final Report template, and Webinars.
Model Selection for Monitoring CO2 Plume during Sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-12-31
The model selection method developed as part of this project mainly includes four steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models, (2) model clustering using multidimensional scaling coupled with k-mean clustering, (3) model selection using the Bayes' rule in the reduced model space, (4) model expansion using iterative resampling of the posterior models. The fourth step expresses one of the advantages of the method: it provides a built-in means of quantifying the uncertainty in predictions made with the selected models. In our application to plume monitoring, by expanding the posterior space of models, the finalmore » ensemble of representations of geological model can be used to assess the uncertainty in predicting the future displacement of the CO2 plume. The software implementation of this approach is attached here.« less
2014-01-01
Pollinator decline has been linked to landscape change, through both habitat fragmentation and the loss of habitat suitable for the pollinators to live within. One method for exploring why landscape change should affect pollinator populations is to combine individual-level behavioural ecological techniques with larger-scale landscape ecology. A modelling framework is described that uses spatially-explicit individual-based models to explore the effects of individual behavioural rules within a landscape. The technique described gives a simple method for exploring the effects of the removal of wild corridors, and the creation of wild set-aside fields: interventions that are common to many national agricultural policies. The effects of these manipulations on central-place nesting pollinators are varied, and depend upon the behavioural rules that the pollinators are using to move through the environment. The value of this modelling framework is discussed, and future directions for exploration are identified. PMID:24795848
Rands, Sean A
2014-01-01
Pollinator decline has been linked to landscape change, through both habitat fragmentation and the loss of habitat suitable for the pollinators to live within. One method for exploring why landscape change should affect pollinator populations is to combine individual-level behavioural ecological techniques with larger-scale landscape ecology. A modelling framework is described that uses spatially-explicit individual-based models to explore the effects of individual behavioural rules within a landscape. The technique described gives a simple method for exploring the effects of the removal of wild corridors, and the creation of wild set-aside fields: interventions that are common to many national agricultural policies. The effects of these manipulations on central-place nesting pollinators are varied, and depend upon the behavioural rules that the pollinators are using to move through the environment. The value of this modelling framework is discussed, and future directions for exploration are identified.
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
Resistivity scaling and electron relaxation times in metallic nanowires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moors, Kristof, E-mail: kristof@itf.fys.kuleuven.be; Imec, Kapeldreef 75, B-3001 Leuven; Sorée, Bart
2014-08-14
We study the resistivity scaling in nanometer-sized metallic wires due to surface roughness and grain-boundaries, currently the main cause of electron scattering in nanoscaled interconnects. The resistivity has been obtained with the Boltzmann transport equation, adopting the relaxation time approximation of the distribution function and the effective mass approximation for the conducting electrons. The relaxation times are calculated exactly, using Fermi's golden rule, resulting in a correct relaxation time for every sub-band state contributing to the transport. In general, the relaxation time strongly depends on the sub-band state, something that remained unclear with the methods of previous work. The resistivitymore » scaling is obtained for different roughness and grain-boundary properties, showing large differences in scaling behavior and relaxation times. Our model clearly indicates that the resistivity is dominated by grain-boundary scattering, easily surpassing the surface roughness contribution by a factor of 10.« less
Drought and Heat Wave Impacts on Electricity Grid Reliability in Illinois
NASA Astrophysics Data System (ADS)
Stillwell, A. S.; Lubega, W. N.
2016-12-01
A large proportion of thermal power plants in the United States use cooling systems that discharge large volumes of heated water into rivers and cooling ponds. To minimize thermal pollution from these discharges, restrictions are placed on temperatures at the edge of defined mixing zones in the receiving waters. However, during extended hydrological droughts and heat waves, power plants are often granted thermal variances permitting them to exceed these temperature restrictions. These thermal variances are often deemed necessary for maintaining electricity reliability, particularly as heat waves cause increased electricity demand. Current practice, however, lacks tools for the development of grid-scale operational policies specifying generator output levels that ensure reliable electricity supply while minimizing thermal variances. Such policies must take into consideration characteristics of individual power plants, topology and characteristics of the electricity grid, and locations of power plants within the river basin. In this work, we develop a methodology for the development of these operational policies that captures necessary factors. We develop optimal rules for different hydrological and meteorological conditions, serving as rule curves for thermal power plants. The rules are conditioned on leading modes of the ambient hydrological and meteorological conditions at the different power plant locations, as the locations are geographically close and hydrologically connected. Heat dissipation in the rivers and cooling ponds is modeled using the equilibrium temperature concept. Optimal rules are determined through a Monte Carlo sampling optimization framework. The methodology is applied to a case study of eight power plants in Illinois that were granted thermal variances in the summer of 2012, with a representative electricity grid model used in place of the actual electricity grid.
The global reference atmospheric model, mod 2 (with two scale perturbation model)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Hargraves, W. R.
1976-01-01
The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two scale random perturbation model using perturbation magnitudes which are adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition is described. The two scale perturbation model produces appropriately correlated (horizontally and vertically) small scale and large scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary scale waves (large scale) and turbulence and gravity waves (small scale). Other new features of the model are: (1) a second order geostrophic wind relation for use at low latitudes which does not "blow up" at low latitudes as the ordinary geostrophic relation does; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
NASA Astrophysics Data System (ADS)
Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.
2012-10-01
Large scale shell model calculations based on a new diagonalization algorithm are performed in order to investigate the mixed symmetry states in chains of nuclei in the proximity of N=82. The resulting spectra and transitions are in agreement with the experiments and consistent with the scheme provided by the interacting boson model.
Four simple rules that are sufficient to generate the mammalian blastocyst
Nissen, Silas Boye; Perera, Marta; Gonzalez, Javier Martin; Morgani, Sophie M.; Jensen, Mogens H.; Sneppen, Kim; Brickman, Joshua M.
2017-01-01
Early mammalian development is both highly regulative and self-organizing. It involves the interplay of cell position, predetermined gene regulatory networks, and environmental interactions to generate the physical arrangement of the blastocyst with precise timing. However, this process occurs in the absence of maternal information and in the presence of transcriptional stochasticity. How does the preimplantation embryo ensure robust, reproducible development in this context? It utilizes a versatile toolbox that includes complex intracellular networks coupled to cell—cell communication, segregation by differential adhesion, and apoptosis. Here, we ask whether a minimal set of developmental rules based on this toolbox is sufficient for successful blastocyst development, and to what extent these rules can explain mutant and experimental phenotypes. We implemented experimentally reported mechanisms for polarity, cell—cell signaling, adhesion, and apoptosis as a set of developmental rules in an agent-based in silico model of physically interacting cells. We find that this model quantitatively reproduces specific mutant phenotypes and provides an explanation for the emergence of heterogeneity without requiring any initial transcriptional variation. It also suggests that a fixed time point for the cells’ competence of fibroblast growth factor (FGF)/extracellular signal—regulated kinase (ERK) sets an embryonic clock that enables certain scaling phenomena, a concept that we evaluate quantitatively by manipulating embryos in vitro. Based on these observations, we conclude that the minimal set of rules enables the embryo to experiment with stochastic gene expression and could provide the robustness necessary for the evolutionary diversification of the preimplantation gene regulatory network. PMID:28700688
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm and the printed patterns of those features on masks written by VSB eBeam writers start to show a large deviation from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification which became a necessity for OPC a decade ago, we see the same trend in MDP today. Simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask check, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose margin related hotspots can also be detected by setting a correct detection threshold. In this paper, we will demonstrate GPU-acceleration for geometry processing, and give examples of mask check results and performance data. GPU-acceleration is necessary to make simulation-based mask MDP verification acceptable.
Textual and visual content-based anti-phishing: a Bayesian approach.
Zhang, Haijun; Liu, Gang; Chow, Tommy W S; Liu, Wenyin
2011-10-01
A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This is required in the classifier for determining the class of the web page and identifying whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, the Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined in a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases. © 2011 IEEE
Large-scale derived flood frequency analysis based on continuous simulation
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100.000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchments models was developed reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and co-variance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables to derive very long discharge series at many catchment locations serving as a basic for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained by 53-year observation data at 528 stations covering not only the complete Germany but also parts of France, Switzerland, Czech Republic and Australia with the aggregated spatial scale of 443,931 km2. 10.000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation this model chain is then used to estimate flood quantiles for the whole Germany including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several drawbacks reported in traditional approaches for the derived flood frequency analysis and therefore is recommended for large scale flood risk case studies.
Dynamic occupancy models for explicit colonization processes
Broms, Kristin M.; Hooten, Mevin B.; Johnson, Devin S.; Altwegg, Res; Conquest, Loveday
2016-01-01
The dynamic, multi-season occupancy model framework has become a popular tool for modeling open populations with occupancies that change over time through local colonizations and extinctions. However, few versions of the model relate these probabilities to the occupancies of neighboring sites or patches. We present a modeling framework that incorporates this information and is capable of describing a wide variety of spatiotemporal colonization and extinction processes. A key feature of the model is that it is based on a simple set of small-scale rules describing how the process evolves. The result is a dynamic process that can account for complicated large-scale features. In our model, a site is more likely to be colonized if more of its neighbors were previously occupied and if it provides more appealing environmental characteristics than its neighboring sites. Additionally, a site without occupied neighbors may also become colonized through the inclusion of a long-distance dispersal process. Although similar model specifications have been developed for epidemiological applications, ours formally accounts for detectability using the well-known occupancy modeling framework. After demonstrating the viability and potential of this new form of dynamic occupancy model in a simulation study, we use it to obtain inference for the ongoing Common Myna (Acridotheres tristis) invasion in South Africa. Our results suggest that the Common Myna continues to enlarge its distribution and its spread via short distance movement, rather than long-distance dispersal. Overall, this new modeling framework provides a powerful tool for managers examining the drivers of colonization including short- vs. long-distance dispersal, habitat quality, and distance from source populations.
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a followup on the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small scale features by correcting the simple 0th-order, starting with each small-scale model with each large-scale measurement using a straightforward method based on Kalman filtering.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
2009-06-01
simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ...multiple potential outcomes , further development and analysis is required before the model is used for large scale analysis . 15. NUMBER OF PAGES 159...multiple potential outcomes , further development and analysis is required before the model is used for large scale analysis . vi THIS PAGE
Rules based process window OPC
NASA Astrophysics Data System (ADS)
O'Brien, Sean; Soper, Robert; Best, Shane; Mason, Mark
2008-03-01
As a preliminary step towards Model-Based Process Window OPC we have analyzed the impact of correcting post-OPC layouts using rules based methods. Image processing on the Brion Tachyon was used to identify sites where the OPC model/recipe failed to generate an acceptable solution. A set of rules for 65nm active and poly were generated by classifying these failure sites. The rules were based upon segment runlengths, figure spaces, and adjacent figure widths. 2.1 million sites for active were corrected in a small chip (comparing the pre and post rules based operations), and 59 million were found at poly. Tachyon analysis of the final reticle layout found weak margin sites distinct from those sites repaired by rules-based corrections. For the active layer more than 75% of the sites corrected by rules would have printed without a defect indicating that most rulesbased cleanups degrade the lithographic pattern. Some sites were missed by the rules based cleanups due to either bugs in the DRC software or gaps in the rules table. In the end dramatic changes to the reticle prevented catastrophic lithography errors, but this method is far too blunt. A more subtle model-based procedure is needed changing only those sites which have unsatisfactory lithographic margin.
Ice shelf fracture parameterization in an ice sheet model
NASA Astrophysics Data System (ADS)
Sun, Sainan; Cornford, Stephen L.; Moore, John C.; Gladstone, Rupert; Zhao, Liyun
2017-11-01
Floating ice shelves exert a stabilizing force onto the inland ice sheet. However, this buttressing effect is diminished by the fracture process, which on large scales effectively softens the ice, accelerating its flow, increasing calving, and potentially leading to ice shelf breakup. We add a continuum damage model (CDM) to the BISICLES ice sheet model, which is intended to model the localized opening of crevasses under stress, the transport of those crevasses through the ice sheet, and the coupling between crevasse depth and the ice flow field and to carry out idealized numerical experiments examining the broad impact on large-scale ice sheet and shelf dynamics. In each case we see a complex pattern of damage evolve over time, with an eventual loss of buttressing approximately equivalent to halving the thickness of the ice shelf. We find that it is possible to achieve a similar ice flow pattern using a simple rule of thumb: introducing an enhancement factor ˜ 10 everywhere in the model domain. However, spatially varying damage (or equivalently, enhancement factor) fields set at the start of prognostic calculations to match velocity observations, as is widely done in ice sheet simulations, ought to evolve in time, or grounding line retreat can be slowed by an order of magnitude.
2018-01-01
We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data over a period of 5 years. Empirical results suggest that the model provides a good fit for both the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ± 2 points on a 50 point scale, while 10% of essays would receive a score at least ± 5 different from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129
Enduring love? Attitudes to family and inheritance law in England and Wales.
Douglas, Gillian; Woodward, Hilary; Humphrey, Alun; Mills, Lisa; Morrell, Gareth
2011-01-01
This paper reports on the findings from a large-scale study of public attitudes to inheritance law, particularly the rules on intestacy. It argues that far from the assumption that the family' is in terminal decline, people in England and Wales still view their most important relationships, at least for the purposes of inheritance law, as centred on a narrow, nuclear family model. However, there is also widespread acceptance of re-partnering and cohabitation, producing generally high levels of support for including cohabitants in the intestacy rules and for ensuring that children from former relationships are protected. We argue that these views are underpinned by a continuing sense of responsibility to the members of one's nuclear family, arising from notions of sharing and commitment, dependency and support, and a sense of lineage.
Simulation studies of self-organization of microtubules and molecular motors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jian, Z.; Karpeev, D.; Aranson, I. S.
We perform Monte Carlo type simulation studies of self-organization of microtubules interacting with molecular motors. We model microtubules as stiff polar rods of equal length exhibiting anisotropic diffusion in the plane. The molecular motors are implicitly introduced by specifying certain probabilistic collision rules resulting in realignment of the rods. This approximation of the complicated microtubule-motor interaction by a simple instant collision allows us to bypass the 'computational bottlenecks' associated with the details of the diffusion and the dynamics of motors and the reorientation of microtubules. Consequently, we are able to perform simulations of large ensembles of microtubules and motors onmore » a very large time scale. This simple model reproduces all important phenomenology observed in in vitro experiments: Formation of vortices for low motor density and raylike asters and bundles for higher motor density.« less
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
Machine learning: novel bioinformatics approaches for combating antimicrobial resistance.
Macesic, Nenad; Polubriaginof, Fernanda; Tatonetti, Nicholas P
2017-12-01
Antimicrobial resistance (AMR) is a threat to global health and new approaches to combating AMR are needed. Use of machine learning in addressing AMR is in its infancy but has made promising steps. We reviewed the current literature on the use of machine learning for studying bacterial AMR. The advent of large-scale data sets provided by next-generation sequencing and electronic health records make applying machine learning to the study and treatment of AMR possible. To date, it has been used for antimicrobial susceptibility genotype/phenotype prediction, development of AMR clinical decision rules, novel antimicrobial agent discovery and antimicrobial therapy optimization. Application of machine learning to studying AMR is feasible but remains limited. Implementation of machine learning in clinical settings faces barriers to uptake with concerns regarding model interpretability and data quality.Future applications of machine learning to AMR are likely to be laboratory-based, such as antimicrobial susceptibility phenotype prediction.
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for accessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new purely data-based scale-extrapolation method is proposed, to estimate water resources for a large basin solely from selected small sub-basins, which are typically two-orders-of-magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics for the large basin In the Baltic Sea drainage basin, best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for gauged area consistently well with 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds
NASA Astrophysics Data System (ADS)
Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.
2015-12-01
An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.
Still searching for the Holy Grail: on the use of effective soil parameters for Parflow-CLM.
NASA Astrophysics Data System (ADS)
Baroni, Gabriele; Schalge, Bernd; Rihani, Jehan; Attinger, Sabine
2015-04-01
In the last decades the advances in computer science have led to a growing number of coupled and distributed hydrological models based on Richards' equation. Several studies were conducted for understanding hydrological processes at different spatial and temporal scales and they showed promising uses of these types of models also in practical applications. However, these models are generally applied to scales different from that at which the equation is deduced and validated. For this reason, the models are implemented with effective soil parameters that, in principle, should preserve the water fluxes that would have been estimated assuming the finer resolution scale. In this context, the reduction in spatial discretization becomes a trade-off between complexity and performance of the model. The aim of the present contribution is to assess the performance of Parflow-CLM implemented at different spatial scales. A virtual experiment based on data available for the Neckar catchment (Germany) is used as reference at 100x100m resolution. Different upscaling rules for the soil hydraulic parameters are used for coarsening the model up to 1x1km. The analysis is carried out based on different model output e.g., river discharge, evapotranspiration, soil moisture and groundwater recharge. The effects of soil variability, correlation length and spatial distribution over the water flow direction on the simulation results are discussed. Further researches aim to quantify the related uncertainty in model output and the possibility to fill in the model structure inadequacy with data assimilation techniques.
NASA Astrophysics Data System (ADS)
Pevtsov, A.
Solar magnetic fields exhibit hemispheric preference for negative (pos- itive) helicity in northern (southern) hemisphere. The hemispheric he- licity rule, however, is not very strong, - the patterns of opposite sign helicity were observed on different spatial scales in each hemisphere. For instance, many individual sunspots exhibit patches of opposite he- licity inside the single polarity field. There are also helicity patterns on scales larger than the size of typical active region. Such patterns were observed in distribution of active regions with abnormal (for a give hemisphere) helicity, in large-scale photospheric magnetic fields and coronal flux systems. We will review the observations of large-scale pat- terns of helicity in solar atmosphere and their possible relationship with (sub-)photospheric processes. The emphasis will be on large-scale pho- tospheric magnetic field and solar corona.
Fytas, Nikolaos G; Martín-Mayor, Víctor
2016-06-01
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)PRLTAO0031-900710.1103/PhysRevLett.110.227201] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques, coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite limit-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
NASA Astrophysics Data System (ADS)
Chapman, E.; Yang, J.; Crawshaw, J.; Boek, E. S.
2012-04-01
In the 1980s, Lenormand et al. carried out their pioneering work on displacement mechanisms of fluids in etched networks [1]. Here we further examine displacement mechanisms in relation to capillary filling rules for spontaneous imbibition. Understanding the role of spontaneous imbibition in fluid displacement is essential for refining pore network models. Generally, pore network models use simple capillary filling rules and here we examine the validity of these rules for spontaneous imbibition. Improvement of pore network models is vital for the process of 'up-scaling' to the field scale for both enhanced oil recovery (EOR) and carbon sequestration. In this work, we present our experimental microfluidic research into the displacement of both supercritical CO2/deionised water (DI) systems and analogous n-decane/air - where supercritical CO2 and n-decane are the respective wetting fluids - controlled by imbibition at the pore scale. We conducted our experiments in etched PMMA and silicon/glass micro-fluidic hydrophobic chips. We first investigate displacement in single etched pore junctions, followed by displacement in complex network designs representing actual rock thin sections, i.e. Berea sandstone and Sucrosic dolomite. The n-decane/air experiments were conducted under ambient conditions, whereas the supercritical CO2/DI water experiments were conducted under high temperature and pressure in order to replicate reservoir conditions. Fluid displacement in all experiments was captured via a high speed video microscope. The direction and type of displacement the imbibing fluid takes when it enters a junction is dependent on the number of possible channels in which the wetting fluid can imbibe, i.e. I1, I2 and I3 [1]. Depending on the experiment conducted, the micro-models were initially filled with either DI water or air before the wetting fluid was injected. We found that the imbibition of the wetting fluid through a single pore is primarily controlled by the geometry of the pore body rather than the downstream pore throat sizes, contrary to the established capillary filling rules as used in current pore network models. Our experimental observations are confirmed by detailed lattice-Boltzmann pore scale computer simulations of fluid displacement in the same geometries. This suggests that capillary filling rules for imbibition as used in pore network models may need to be revised. [1] G. Lenormand, C. Zarcone and A. Sarr, J. Fluid Mech. 135 , 337-353 (1983).
Small-scale grassland assembly patterns differ above and below the soil surface.
Price, Jodi N; Hiiesalu, Inga; Gerhold, Pille; Pärtel, Meelis
2012-06-01
The existence of deterministic assembly rules for plant communities remains an important and unresolved topic in ecology. Most studies examining community assembly have sampled aboveground species diversity and composition. However, plants also coexist belowground, and many coexistence theories invoke belowground competition as an explanation for aboveground patterns. We used next-generation sequencing that enables the identification of roots and rhizomes from mixed-species samples to measure coexisting species at small scales in temperate grasslands. We used comparable data from above (conventional methods) and below (molecular techniques) the soil surface (0.1 x 0.1 x 0.1 m volume). To detect evidence for nonrandom patterns in the direction of biotic or abiotic assembly processes, we used three assembly rules tests (richness variance, guild proportionality, and species co-occurrence indices) as well as pairwise association tests. We found support for biotic assembly rules aboveground, with lower variance in species richness than expected and more negative species associations. Belowground plant communities were structured more by abiotic processes, with greater variability in richness and guild proportionality than expected. Belowground assembly is largely driven by abiotic processes, with little evidence for competition-driven assembly, and this has implications for plant coexistence theories that are based on competition for soil resources.
Movement rules for individual-based models of stream fish
Steven F. Railsback; Roland H. Lamberson; Bret C. Harvey; Walter E. Duffy
1999-01-01
Abstract - Spatially explicit individual-based models (IBMs) use movement rules to determine when an animal departs its current location and to determine its movement destination; these rules are therefore critical to accurate simulations. Movement rules typically define some measure of how an individual's expected fitness varies among locations, under the...
Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G
2010-09-14
Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
One Giant Leap for Categorizers: One Small Step for Categorization Theory
Smith, J. David; Ell, Shawn W.
2015-01-01
We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
A mixing evolution model for bidirectional microblog user networks
NASA Astrophysics Data System (ADS)
Yuan, Wei-Guo; Liu, Yun
2015-08-01
Microblogs have been widely used as a new form of online social networking. Based on the user profile data collected from Sina Weibo, we find that the number of microblog user bidirectional friends approximately corresponds with the lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which have not only small-world and scale-free but also some special properties, such as double power-law degree distribution, disassortative network, hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find both of their community scales follow an exponential distribution. Based on the empirical analysis, we present a novel evolution network model with mixed connection rules, including lognormal fitness preferential and random attachment, nearest neighbor interconnected in the same community, and global random associations in different communities. The simulation results show that our model is consistent with real network in many topology features.
Text mixing shapes the anatomy of rank-frequency distributions
NASA Astrophysics Data System (ADS)
Williams, Jake Ryland; Bagrow, James P.; Danforth, Christopher M.; Dodds, Peter Sheridan
2015-05-01
Natural languages are full of rules and exceptions. One of the most famous quantitative rules is Zipf's law, which states that the frequency of occurrence of a word is approximately inversely proportional to its rank. Though this "law" of ranks has been found to hold across disparate texts and forms of data, analyses of increasingly large corpora since the late 1990s have revealed the existence of two scaling regimes. These regimes have thus far been explained by a hypothesis suggesting a separability of languages into core and noncore lexica. Here we present and defend an alternative hypothesis that the two scaling regimes result from the act of aggregating texts. We observe that text mixing leads to an effective decay of word introduction, which we show provides accurate predictions of the location and severity of breaks in scaling. Upon examining large corpora from 10 languages in the Project Gutenberg eBooks collection, we find emphatic empirical support for the universality of our claim.
Text mixing shapes the anatomy of rank-frequency distributions.
Williams, Jake Ryland; Bagrow, James P; Danforth, Christopher M; Dodds, Peter Sheridan
2015-05-01
Natural languages are full of rules and exceptions. One of the most famous quantitative rules is Zipf's law, which states that the frequency of occurrence of a word is approximately inversely proportional to its rank. Though this "law" of ranks has been found to hold across disparate texts and forms of data, analyses of increasingly large corpora since the late 1990s have revealed the existence of two scaling regimes. These regimes have thus far been explained by a hypothesis suggesting a separability of languages into core and noncore lexica. Here we present and defend an alternative hypothesis that the two scaling regimes result from the act of aggregating texts. We observe that text mixing leads to an effective decay of word introduction, which we show provides accurate predictions of the location and severity of breaks in scaling. Upon examining large corpora from 10 languages in the Project Gutenberg eBooks collection, we find emphatic empirical support for the universality of our claim.
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-12-01
One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to asses the skill of climate models to simulate climate change. This comparison may accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions are these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty tends to be also low. On the other hand, the skill of climate models at regional scales is limited by the coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces less methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution some examples of the different issues faced when comparing simulation and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models. These examples indicate that the centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales. However, as the focus shifts towards higher frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless,the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation
NASA Astrophysics Data System (ADS)
Ogawa, Masatoshi; Ogai, Harutoshi
Recently, attention has been drawn to the local modeling techniques of a new idea called “Just-In-Time (JIT) modeling”. To apply “JIT modeling” to a large amount of database online, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
Module-based multiscale simulation of angiogenesis in skeletal muscle
2011-01-01
Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529
Connecting clinical and actuarial prediction with rule-based methods.
Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H
2015-06-01
Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved).
Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André
2015-01-01
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 226 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 236 (64G) synaptic adaptors on a current high-end FPGA platform. PMID:26041985
Investigation of model-based physical design restrictions (Invited Paper)
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl
2005-05-01
As lithography and other patterning processes become more complex and more non-linear with each generation, the task of physical design rules necessarily increases in complexity also. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However the rapid increase in design rule requirement complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies for semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.
NASA Technical Reports Server (NTRS)
Zhang, Zhong
1997-01-01
The development of large-scale, composite software in a geographically distributed environment is an evolutionary process. Often, in such evolving systems, striving for consistency is complicated by many factors, because development participants have various locations, skills, responsibilities, roles, opinions, languages, terminology and different degrees of abstraction they employ. This naturally leads to many partial specifications or viewpoints. These multiple views on the system being developed usually overlap. From another aspect, these multiple views give rise to the potential for inconsistency. Existing CASE tools do not efficiently manage inconsistencies in distributed development environment for a large-scale project. Based on the ViewPoints framework the WHERE (Web-Based Hypertext Environment for requirements Evolution) toolkit aims to tackle inconsistency management issues within geographically distributed software development projects. Consequently, WHERE project helps make more robust software and support software assurance process. The long term goal of WHERE tools aims to the inconsistency analysis and management in requirements specifications. A framework based on Graph Grammar theory and TCMJAVA toolkit is proposed to detect inconsistencies among viewpoints. This systematic approach uses three basic operations (UNION, DIFFERENCE, INTERSECTION) to study the static behaviors of graphic and tabular notations. From these operations, subgraphs Query, Selection, Merge, Replacement operations can be derived. This approach uses graph PRODUCTIONS (rewriting rules) to study the dynamic transformations of graphs. We discuss the feasibility of implementation these operations. Also, We present the process of porting original TCM (Toolkit for Conceptual Modeling) project from C++ to Java programming language in this thesis. A scenario based on NASA International Space Station Specification is discussed to show the applicability of our approach. Finally, conclusion and future work about inconsistency management issues in WHERE project will be summarized.
He, Xinhua; Hu, Wenfa
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
He, Xinhua
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367
Lee, Sung-Min; Biswas, Roshni; Li, Weigu; Kang, Dongseok; Chan, Lesley; Yoon, Jongseung
2014-10-28
Nanostructured forms of crystalline silicon represent an attractive materials building block for photovoltaics due to their potential benefits to significantly reduce the consumption of active materials, relax the requirement of materials purity for high performance, and hence achieve greatly improved levelized cost of energy. Despite successful demonstrations for their concepts over the past decade, however, the practical application of nanostructured silicon solar cells for large-scale implementation has been hampered by many existing challenges associated with the consumption of the entire wafer or expensive source materials, difficulties to precisely control materials properties and doping characteristics, or restrictions on substrate materials and scalability. Here we present a highly integrable materials platform of nanostructured silicon solar cells that can overcome these limitations. Ultrathin silicon solar microcells integrated with engineered photonic nanostructures are fabricated directly from wafer-based source materials in configurations that can lower the materials cost and can be compatible with deterministic assembly procedures to allow programmable, large-scale distribution, unlimited choices of module substrates, as well as lightweight, mechanically compliant constructions. Systematic studies on optical and electrical properties, photovoltaic performance in experiments, as well as numerical modeling elucidate important design rules for nanoscale photon management with ultrathin, nanostructured silicon solar cells and their interconnected, mechanically flexible modules, where we demonstrate 12.4% solar-to-electric energy conversion efficiency for printed ultrathin (∼ 8 μm) nanostructured silicon solar cells when configured with near-optimal designs of rear-surface nanoposts, antireflection coating, and back-surface reflector.
Similarity Rules for Scaling Solar Sail Systems
NASA Technical Reports Server (NTRS)
Canfield, Stephen L.; Peddieson, John; Garbe, Gregory
2010-01-01
Future science missions will require solar sails on the order of 200 square meters (or larger). However, ground demonstrations and flight demonstrations must be conducted at significantly smaller sizes, due to limitations of ground-based facilities and cost and availability of flight opportunities. For this reason, the ability to understand the process of scalability, as it applies to solar sail system models and test data, is crucial to the advancement of this technology. This paper will approach the problem of scaling in solar sail models by developing a set of scaling laws or similarity criteria that will provide constraints in the sail design process. These scaling laws establish functional relationships between design parameters of a prototype and model sail that are created at different geometric sizes. This work is applied to a specific solar sail configuration and results in three (four) similarity criteria for static (dynamic) sail models. Further, it is demonstrated that even in the context of unique sail material requirements and gravitational load of earth-bound experiments, it is possible to develop appropriate scaled sail experiments. In the longer term, these scaling laws can be used in the design of scaled experimental tests for solar sails and in analyzing the results from such tests.
Bhaskar, Anand; Song, Yun S
2014-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
Bhaskar, Anand; Song, Yun S.
2016-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011
Single-trabecula building block for large-scale finite element models of cancellous bone.
Dagan, D; Be'ery, M; Gefen, A
2004-07-01
Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.
Automated detection of pain from facial expressions: a rule-based approach using AAM
NASA Astrophysics Data System (ADS)
Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.
2012-02-01
In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS) that is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on the feature points that provide facial action cues and is extracted from the shape vertices of AAM, which have a natural correspondence to face muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
Large-scale anisotropy of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, J.; Wilson, M. L.
1981-01-01
Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
2015-01-01
still necessary. One such model that could bridge this gap is discrete dis- location dynamics ( DDD ) simulations, in which both the time- and length-scale...limitations from atomic simulations are greatly reduced. Over the past two decades, two-dimen- sional (2D) and three-dimensional (3D) DDD methods have...dislocation ensem- bles according to physics-based rules [27–34]. The physics that can be incorporated in DDD simulations can range http://dx.doi.org
Rule-Based Event Processing and Reaction Rules
NASA Astrophysics Data System (ADS)
Paschke, Adrian; Kozlenkov, Alexander
Reaction rules and event processing technologies play a key role in making business and IT / Internet infrastructures more agile and active. While event processing is concerned with detecting events from large event clouds or streams in almost real-time, reaction rules are concerned with the invocation of actions in response to events and actionable situations. They state the conditions under which actions must be taken. In the last decades various reaction rule and event processing approaches have been developed, which for the most part have been advanced separately. In this paper we survey reaction rule approaches and rule-based event processing systems and languages.
NASA Astrophysics Data System (ADS)
Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis
2018-02-01
We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.
Mesoscopic model for binary fluids
NASA Astrophysics Data System (ADS)
Echeverria, C.; Tucci, K.; Alvarez-Llamoza, O.; Orozco-Guillén, E. E.; Morales, M.; Cosenza, M. G.
2017-10-01
We propose a model for studying binary fluids based on the mesoscopic molecular simulation technique known as multiparticle collision, where the space and state variables are continuous, and time is discrete. We include a repulsion rule to simulate segregation processes that does not require calculation of the interaction forces between particles, so binary fluids can be described on a mesoscopic scale. The model is conceptually simple and computationally efficient; it maintains Galilean invariance and conserves the mass and energy in the system at the micro- and macro-scale, whereas momentum is conserved globally. For a wide range of temperatures and densities, the model yields results in good agreement with the known properties of binary fluids, such as the density profile, interface width, phase separation, and phase growth. We also apply the model to the study of binary fluids in crowded environments with consistent results.
Cheung, Kit; Schultz, Simon R; Luk, Wayne
2015-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
Cheung, Kit; Schultz, Simon R.; Luk, Wayne
2016-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
The small-scale treatability study sample exemption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coalgate, J.
1991-01-01
In 1981, the Environmental Protection Agency (EPA) issued an interim final rule that conditionally exempted waste samples collected solely for the purpose of monitoring or testing to determine their characteristics or composition'' from RCRA Subtitle C hazardous waste regulations. This exemption (40 CFR 261.4(d)) apples to the transportation of samples between the generator and testing laboratory, temporary storage of samples at the laboratory prior to and following testing, and storage at a laboratory for specific purposes such as an enforcement action. However, the exclusion did not include large-scale samples used in treatability studies or other testing at pilot plants ormore » other experimental facilities. As a result of comments received by the EPA subsequent to the issuance of the interim final rule, the EPA reopened the comment period on the interim final rule on September 18, 1987, and specifically requested comments on whether or not the sample exclusion should be expanded to include waste samples used in small-scale treatability studies. Almost all responders commented favorably on such a proposal. As a result, the EPA issued a final rule (53 FR 27290, July 19, 1988) conditionally exempting waste samples used in small-scale treatability studies from full regulation under Subtitle C of RCRA. The question of whether or not to extend the exclusion to larger scale as proposed by the Hazardous Waste Treatment Council was deferred until a later date. This information Brief summarizes the requirements of the small-scale treatability exemption.« less
Studies on combined model based on functional objectives of large scale complex engineering
NASA Astrophysics Data System (ADS)
Yuting, Wang; Jingchun, Feng; Jiabao, Sun
2018-03-01
As various functions were included in large scale complex engineering, and each function would be conducted with completion of one or more projects, combined projects affecting their functions should be located. Based on the types of project portfolio, the relationship of projects and their functional objectives were analyzed. On that premise, portfolio projects-technics based on their functional objectives were introduced, then we studied and raised the principles of portfolio projects-technics based on the functional objectives of projects. In addition, The processes of combined projects were also constructed. With the help of portfolio projects-technics based on the functional objectives of projects, our research findings laid a good foundation for management of large scale complex engineering portfolio management.
Implications of Small Samples for Generalization: Adjustments and Rules of Thumb
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hallberg, Kelly; Hedges, Larry V.; Chan, Wendy
2015-01-01
Policy-makers are frequently interested in understanding how effective a particular intervention may be for a specific (and often broad) population. In many fields, particularly education and social welfare, the ideal form of these evaluations is a large-scale randomized experiment. Recent research has highlighted that sites in these large-scale…
NASA Astrophysics Data System (ADS)
Dechant, B.; Ryu, Y.; Jiang, C.; Yang, K.
2017-12-01
Solar-induced chlorophyll fluorescence (SIF) is rapidly becoming an important tool to remotely estimate terrestrial gross primary productivity (GPP) at large spatial scales. Many findings, however, are based on empirical relationships between SIF and GPP that have been found to be dependent on plant functional types. Therefore, combining model-based analysis with observations is crucial to improve our understanding of SIF-GPP relationships. So far, most model-based results were based on SCOPE, a complex ecophysiological model with explicit description of canopy layers and a large number of parameters that may not be easily obtained reliably on large scales. Here, we report on our efforts to incorporate SIF into a two-big leaf (sun and shade) process-based model that is suitable for obtaining its inputs entirely from satellite products. We examine if the SIF-GPP relationships are consistent with the findings from SCOPE simulations and investigate if incorporation of the SIF signal into BESS can help improve GPP estimation. A case study in a rice paddy is presented.
X-ray light curves of active galactic nuclei are phase incoherent
NASA Technical Reports Server (NTRS)
Krolik, Julian; Done, Chris; Madejski, Grzegorz
1993-01-01
We compute the Fourier phase spectra for the light curves of five low-luminosity active galactic nuclei observed by EXOSAT. There is no statistically significant phase coherence in any of them. This statement is equivalent, subject to a technical caveat, to a demonstration that their fluctuation statistics are Gaussian. Models in which the X-ray output is controlled wholly by a unitary process undergoing a nonlinear limit cycle are therefore ruled out, while models with either a large number of randomly excited independent oscillation modes or nonlinearly interacting spatially dependent oscillations are favored. We also demonstrate how the degree of phase coherence in light curve fluctuations influences the application of causality bounds on internal length scales.
A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making.
Prezenski, Sabine; Brechmann, André; Wolff, Susann; Russwinkel, Nele
2017-01-01
Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from behavioral, psychobiological and neurophysiological data may help to optimize future applications of this model such that it can be transferred to other domains of comparable dynamic decision tasks.
A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making
Prezenski, Sabine; Brechmann, André; Wolff, Susann; Russwinkel, Nele
2017-01-01
Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from behavioral, psychobiological and neurophysiological data may help to optimize future applications of this model such that it can be transferred to other domains of comparable dynamic decision tasks. PMID:28824512
Multi-scale Rule-of-Mixtures Model of Carbon Nanotube/Carbon Fiber/Epoxy Lamina
NASA Technical Reports Server (NTRS)
Frankland, Sarah-Jane V.; Roddick, Jaret C.; Gates, Thomas S.
2005-01-01
A unidirectional carbon fiber/epoxy lamina in which the carbon fibers are coated with single-walled carbon nanotubes is modeled with a multi-scale method, the atomistically informed rule-of-mixtures. This multi-scale model is designed to include the effect of the carbon nanotubes on the constitutive properties of the lamina. It included concepts from the molecular dynamics/equivalent continuum methods, micromechanics, and the strength of materials. Within the model both the nanotube volume fraction and nanotube distribution were varied. It was found that for a lamina with 60% carbon fiber volume fraction, the Young's modulus in the fiber direction varied with changes in the nanotube distribution, from 138.8 to 140 GPa with nanotube volume fractions ranging from 0.0001 to 0.0125. The presence of nanotube near the surface of the carbon fiber is therefore expected to have a small, but positive, effect on the constitutive properties of the lamina.
Rule groupings: A software engineering approach towards verification of expert systems
NASA Technical Reports Server (NTRS)
Mehrotra, Mala
1991-01-01
Currently, most expert system shells do not address software engineering issues for developing or maintaining expert systems. As a result, large expert systems tend to be incomprehensible, difficult to debug or modify and almost impossible to verify or validate. Partitioning rule based systems into rule groups which reflect the underlying subdomains of the problem should enhance the comprehensibility, maintainability, and reliability of expert system software. Attempts were made to semiautomatically structure a CLIPS rule base into groups of related rules that carry the same type of information. Different distance metrics that capture relevant information from the rules for grouping are discussed. Two clustering algorithms that partition the rule base into groups of related rules are given. Two independent evaluation criteria are developed to measure the effectiveness of the grouping strategies. Results of the experiment with three sample rule bases are presented.
Ontological Modeling of Transformation in Heart Defect Diagrams
Viswanath, Venkatesh; Tong, Tuanjie; Dinakarpandian, Deendayal; Lee, Yugyung
2006-01-01
The accurate portrayal of a large volume data of variable heart defects is crucial to providing good patient care in pediatric cardiology. Our research aims to span the universe of congenital heart defects by generating illustrative diagrams that enhance data interpretation. To accommodate the range and severity of defects to be represented, we base our diagrams on transformation models applied to a normal heart rather than a static set of defects. These models are based on a domain-specific ontology, clustering, association rule mining and the use of parametric equations specified in a mathematical programming language. PMID:17238451
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientist to form large parallel computing environments to utilize the computing power of the millions of computers on the Internet, and use them towards running large scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native application, and utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models, and run them on volunteer computers. Users will easily enable their websites for visitors to volunteer sharing their computer resources to contribute running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.
NASA Technical Reports Server (NTRS)
Fukumori, I.; Raghunath, R.; Fu, L. L.
1996-01-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equaiton model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability from periods of days to a year, are examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.
Modelling of Reservoir Operations using Fuzzy Logic and ANNs
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Coerver, B.; Rutten, M.
2015-12-01
Today, almost 40.000 large reservoirs, containing approximately 6.000 km3 of water and inundating an area of almost 400.000 km2, can be found on earth. Since these reservoirs have a storage capacity of almost one-sixth of the global annual river discharge they have a large impact on the timing, volume and peaks of river discharges. Global Hydrological Models (GHM) are thus significantly influenced by these anthropogenic changes in river flows. We developed a parametrically parsimonious method to extract operational rules based on historical reservoir storage and inflow time-series. Managing a reservoir is an imprecise and vague undertaking. Operators always face uncertainties about inflows, evaporation, seepage losses and various water demands to be met. They often base their decisions on experience and on available information, like reservoir storage and the previous periods inflow. We modeled this decision-making process through a combination of fuzzy logic and artificial neural networks in an Adaptive-Network-based Fuzzy Inference System (ANFIS). In a sensitivity analysis, we compared results for reservoirs in Vietnam, Central Asia and the USA. ANFIS can indeed capture reservoirs operations adequately when fed with a historical monthly time-series of inflows and storage. It was shown that using ANFIS, operational rules of existing reservoirs can be derived without much prior knowledge about the reservoirs. Their validity was tested by comparing actual and simulated releases with each other. For the eleven reservoirs modelled, the normalised outflow, <0,1>, was predicted with a MSE of 0.002 to 0.044. The rules can be incorporated into GHMs. After a network for a specific reservoir has been trained, the inflow calculated by the hydrological model can be combined with the release and initial storage to calculate the storage for the next time-step using a mass balance. Subsequently, the release can be predicted one time-step ahead using the inflow and storage.
Downscaling ocean conditions: Experiments with a quasi-geostrophic model
NASA Astrophysics Data System (ADS)
Katavouta, A.; Thompson, K. R.
2013-12-01
The predictability of small-scale ocean variability, given the time history of the associated large-scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large-scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large-scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach improved significantly the recovery of the small-scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large-scales have been set correctly. The application of the hybrid and spectral nudging to practical ocean forecasting, and projecting changes in ocean conditions on climate time scales, is discussed briefly.
Combining points and lines in rectifying satellite images
NASA Astrophysics Data System (ADS)
Elaksher, Ahmed F.
2017-09-01
The quick advance in remote sensing technologies established the potential to gather accurate and reliable information about the Earth surface using high resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between the image coordinates and ground coordinates. These equations require the knowledge of the exterior and interior orientation parameters of the image that might not be available. On the other hand, the parallel projection transformation could be used to represent the mathematical relationship between the image-space and objectspace coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented and the corresponding line-based forms are developed. Results showed that the RMS computed using the point- or line-based transformation models are equivalent and satisfy the requirement for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results showed high correlation between the differences in the ground elevation and the RMS.
NASA Astrophysics Data System (ADS)
Bronstert, Axel; Heistermann, Maik; Francke, Till
2017-04-01
Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the question which is the appropriate scale to be applied depends on the overall question under study. Therefore, it is not advisable to give a general applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth to have a look on what are the advantages and the shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since increasingly (very) large / global scale approaches and models are under operation and therefore the question arises how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between theses scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales: Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data of state conditions (e.g. soil parameter, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on first principals, partly pde-type, are available for several processes (but not for all), because measurement and modelling scale are compatible (-) the spatial model domain are hardly representative for larger spatial entities, including regions for which water resources management decisions are to be taken; straightforward upsizing is also limited by data availability and computational requirements. Meso scale (e.g. extent of a small to large catchment or region): (+) the spatial extent of the model domain has approximately the same extent as the regions for which water resources management decisions are to be taken. I.e., such models enable water resources quantification at the scale of most water management decisions; (+) data of some state conditions (e.g. vegetation cover, topography, river network and cross sections) are available; (+) data of some boundary fluxes (in particular surface runoff / channel flow) are directly measurable with mostly sufficient certainty; (+) equations, partly based on simple water budgeting, partly variants of pde-type equations, are available for most hydrological processes. This enables the construction of meso-scale distributed models reflecting the spatial heterogeneity of regions/landscapes; (-) process scale, measurement scale, and modelling scale differ from each other for a number of processes, e.g., such as runoff generation; (-) the process formulation (usually derived from micro-scale studies) cannot directly be transferred to the modelling domain. Upscaling procedures for this purpose are not readily and generally available. Macro scale (e.g. extent of a continent up to global): (+) the spatial extent of the model may cover the whole Earth. This enables an attractive global display of model results; (+) model results might be technically interchangeable or at least comparable with results from other global models, such as global climate models; (-) process scale, measurement scale, and modelling scale differ heavily from each other for all hydrological and associated processes; (-) the model domain and its results are not representative regions for which water resources management decisions are to be taken. (-) both state condition and boundary flux data are hardly available for the whole model domain. Water management data and discharge data from remote regions are particular incomplete / unavailable for this scale. This undermines the model's verifiability; (-) since process formulation and resulting modelling reliability at this scale is very limited, such models can hardly show any explanatory skills or prognostic power; (-) since both the entire model domain and the spatial sub-units cover large areas, model results represent values averaged over at least the spatial sub-unit's extent. In many cases, the applied time scale implies a long-term averaging in time, too. We emphasize the importance to be aware of the above mentioned strengths and weaknesses of those scale-specific models. (Many of the) results of the current global model studies do not reflect such limitations. In particular, we consider the averaging over large model entities in space and/or time inadequate. Many hydrological processes are of a non-linear nature, including threshold-type behaviour. Such features cannot be reflected by such large scale entities. The model results therefore can be of little or no use for water resources decisions and/or even misleading for public debates or decision making. Some rather newly developed sustainability concepts, e.g. "Planetary Boundaries" in which humanity may "continue to develop and thrive for generations to come" are based on such global-scale approaches and models. However, many of the major problems regarding sustainability on Earth, e.g. water scarcity, do not exhibit on a global but on a regional scale. While on a global scale water might look like being available in sufficient quantity and quality, there are many regions where water problems already have very harmful or even devastating effects. Therefore, it is the challenge to derive models and observation programmes for regional scales. In case a global display is desired future efforts should be directed towards the development of a global picture based on a mosaic of regional sound assessments, rather than "zooming into" the results of large-scale simulations. Still, a key question remains to be discussed, i.e. for which purpose models at this (global) scale can be used.
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent, primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.
Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised
USDA-ARS?s Scientific Manuscript database
In this article, we propose several new approaches for post processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high dimensional regression problems the models constructed by the post processing the rules with ...
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
NASA Astrophysics Data System (ADS)
Park, Seonyoung; Im, Jungho; Park, Sumin; Rhee, Jinyoung
2017-04-01
Soil moisture is one of the most important keys for understanding regional and global climate systems. Soil moisture is directly related to agricultural processes as well as hydrological processes because soil moisture highly influences vegetation growth and determines water supply in the agroecosystem. Accurate monitoring of the spatiotemporal pattern of soil moisture is important. Soil moisture has been generally provided through in situ measurements at stations. Although field survey from in situ measurements provides accurate soil moisture with high temporal resolution, it requires high cost and does not provide the spatial distribution of soil moisture over large areas. Microwave satellite (e.g., advanced Microwave Scanning Radiometer on the Earth Observing System (AMSR2), the Advanced Scatterometer (ASCAT), and Soil Moisture Active Passive (SMAP)) -based approaches and numerical models such as Global Land Data Assimilation System (GLDAS) and Modern- Era Retrospective Analysis for Research and Applications (MERRA) provide spatial-temporalspatiotemporally continuous soil moisture products at global scale. However, since those global soil moisture products have coarse spatial resolution ( 25-40 km), their applications for agriculture and water resources at local and regional scales are very limited. Thus, soil moisture downscaling is needed to overcome the limitation of the spatial resolution of soil moisture products. In this study, GLDAS soil moisture data were downscaled up to 1 km spatial resolution through the integration of AMSR2 and ASCAT soil moisture data, Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM), and Moderate Resolution Imaging Spectroradiometer (MODIS) data—Land Surface Temperature, Normalized Difference Vegetation Index, and Land cover—using modified regression trees over East Asia from 2013 to 2015. Modified regression trees were implemented using Cubist, a commercial software tool based on machine learning. An optimization based on pruning of rules derived from the modified regression trees was conducted. Root Mean Square Error (RMSE) and Correlation coefficients (r) were used to optimize the rules, and finally 59 rules from modified regression trees were produced. The results show high validation r (0.79) and low validation RMSE (0.0556m3/m3). The 1 km downscaled soil moisture was evaluated using ground soil moisture data at 14 stations, and both soil moisture data showed similar temporal patterns (average r=0.51 and average RMSE=0.041). The spatial distribution of the 1 km downscaled soil moisture well corresponded with GLDAS soil moisture that caught both extremely dry and wet regions. Correlation between GLDAS and the 1 km downscaled soil moisture during growing season was positive (mean r=0.35) in most regions.
Integrated layout based Monte-Carlo simulation for design arc optimization
NASA Astrophysics Data System (ADS)
Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James
2016-03-01
Design rules are created considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst scenario. Because of the simplification and generalization, design rule hinders, rather than helps, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves "design arc", a collection of design rules, the sum of which equals critical pitch defined by technology. In design arc, a single rule change can lead to chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology on SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div, Semiconductor Research and Development Center, Hopewell Junction, NY 12533
GraDit: graph-based data repair algorithm for multiple data edits rule violations
NASA Astrophysics Data System (ADS)
Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.
2018-03-01
Constraint-based data cleaning captures data violation to a set of rule called data quality rules. The rules consist of integrity constraint and data edits. Structurally, they are similar, where the rule contain left hand side and right hand side. Previous research proposed a data repair algorithm for integrity constraint violation. The algorithm uses undirected hypergraph as rule violation representation. Nevertheless, this algorithm can not be applied for data edits because of different rule characteristics. This study proposed GraDit, a repair algorithm for data edits rule. First, we use bipartite-directed hypergraph as model representation of overall defined rules. These representation is used for getting interaction between violation rules and clean rules. On the other hand, we proposed undirected graph as violation representation. Our experimental study showed that algorithm with undirected graph as violation representation model gave better data quality than algorithm with undirected hypergraph as representation model.
ERIC Educational Resources Information Center
Zhang, Zhidong
2016-01-01
This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistical based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty with the incorporation of observed data. For large scale problem, its storage and update cost too much memory and computational resources. In this study, we propose a new efficient storage scheme for storage and update. Compressed Sparse Column (CSC) format is utilized to storage the covariance matrix, and users can assign how many data they prefer to store based on correlation scales since the data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, i.e. 0.95, every iteration to show the decrease of uncertainty. There is no universal coefficient for all the problems and users are encouraged to try several times. This new scheme is tested with 1D examples first. The estimated results and uncertainty are compared with the traditional full storage method. In the end, a large scale numerical model is utilized to validate this new scheme.
Joel W. Homan; Charles H. Luce; James P. McNamara; Nancy F. Glenn
2011-01-01
Describing the spatial variability of heterogeneous snowpacks at a watershed or mountain-front scale is important for improvements in large-scale snowmelt modelling. Snowmelt depletion curves, which relate fractional decreases in snowcovered area (SCA) against normalized decreases in snow water equivalent (SWE), are a common approach to scale-up snowmelt models....
Interactive, graphical processing unitbased evaluation of evacuation scenarios at the state scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B
2011-01-01
In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real- time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementationmore » on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale net- works are described based on an implementation of this approach in a software system called GARFIELD. Current modeling cap- abilities and implementation limitations are described, along with possible use cases and future research.« less
An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements
NASA Astrophysics Data System (ADS)
Zhai, Zhongxu; Blanton, Michael; Slosar, Anže; Tinker, Jeremy
2017-12-01
We compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H 0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.
The Jet Heated X-Ray Filament in the Centaurus A Northern Middle Radio Lobe
NASA Astrophysics Data System (ADS)
Kraft, R. P.; Forman, W. R.; Hardcastle, M. J.; Birkinshaw, M.; Croston, J. H.; Jones, C.; Nulsen, P. E. J.; Worrall, D. M.; Murray, S. S.
2009-06-01
We present results from a 40 ks XMM-Newton observation of the X-ray filament coincident with the southeast edge of the Centaurus A Northern Middle Radio Lobe (NML). We find that the X-ray filament consists of five spatially resolved X-ray knots embedded in a continuous diffuse bridge. The spectrum of each knot is well fitted by a thermal model with temperatures ranging from 0.3 to 0.7 keV and subsolar elemental abundances. In four of the five knots, nonthermal models are a poor fit to the spectra, conclusively ruling out synchrotron or IC/CMB mechanisms for their emission. The internal pressures of the knots exceed that of the ambient interstellar medium or the equipartition pressure of the NML by more than an order of magnitude, demonstrating that they must be short lived (~3 × 106 yr). Based on energetic arguments, it is implausible that these knots have been ionized by the beamed flux from the active galactic nucleus of Cen A or that they have been shock heated by supersonic inflation of the NML. In our view, the most viable scenario for the origin of the X-ray knots is that they are the result of cold gas shock heated by a direct interaction with the jet. The most plausible model of the NML is that it is a bubble from a previous nuclear outburst that is being re-energized by the current outburst. The northeast inner lobe and the large-scale jet are lossless channels through which the jet material rapidly travels to the NML in this scenario. We also report the discovery of a large-scale (at least 35 kpc radius) gas halo around Cen A.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
The problem of deriving tidal fields from observations by reason of incompleteness and imperfectness of every data set practically available has an infinitely large number of allowable solutions fitting the data within measurement errors and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis functions expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement critical for inverse methods and nudging is discussed.
Early optical polarization of a gamma-ray burst afterglow.
Mundell, Carole G; Steele, Iain A; Smith, Robert J; Kobayashi, Shiho; Melandri, Andrea; Guidorzi, Cristiano; Gomboc, Andreja; Mottram, Chris J; Clarke, David; Monfardini, Alessandro; Carter, David; Bersier, David
2007-03-30
We report the optical polarization of a gamma-ray burst (GRB) afterglow, obtained 203 seconds after the initial burst of gamma-rays from GRB 060418, using a ring polarimeter on the robotic Liverpool Telescope. Our robust (2sigma) upper limit on the percentage of polarization, less than 8%, coincides with the fireball deceleration time at the onset of the afterglow. The combination of the rate of decay of the optical brightness and the low polarization at this critical time constrains standard models of GRB ejecta, ruling out the presence of a large-scale ordered magnetic field in the emitting region.
Discovering Sentinel Rules for Business Intelligence
NASA Astrophysics Data System (ADS)
Middelfart, Morten; Pedersen, Torben Bach
This paper proposes the concept of sentinel rules for multi-dimensional data that warns users when measure data concerning the external environment changes. For instance, a surge in negative blogging about a company could trigger a sentinel rule warning that revenue will decrease within two months, so a new course of action can be taken. Hereby, we expand the window of opportunity for organizations and facilitate successful navigation even though the world behaves chaotically. Since sentinel rules are at the schema level as opposed to the data level, and operate on data changes as opposed to absolute data values, we are able to discover strong and useful sentinel rules that would otherwise be hidden when using sequential pattern mining or correlation techniques. We present a method for sentinel rule discovery and an implementation of this method that scales linearly on large data volumes.
Global biogeography and ecology of body size in birds.
Olson, Valérie A; Davies, Richard G; Orme, C David L; Thomas, Gavin H; Meiri, Shai; Blackburn, Tim M; Gaston, Kevin J; Owens, Ian P F; Bennett, Peter M
2009-03-01
In 1847, Karl Bergmann proposed that temperature gradients are the key to understanding geographic variation in the body sizes of warm-blooded animals. Yet both the geographic patterns of body-size variation and their underlying mechanisms remain controversial. Here, we conduct the first assemblage-level global examination of 'Bergmann's rule' within an entire animal class. We generate global maps of avian body size and demonstrate a general pattern of larger body sizes at high latitudes, conforming to Bergmann's rule. We also show, however, that median body size within assemblages is systematically large on islands and small in species-rich areas. Similarly, while spatial models show that temperature is the single strongest environmental correlate of body size, there are secondary correlations with resource availability and a strong pattern of decreasing body size with increasing species richness. Finally, our results suggest that geographic patterns of body size are caused both by adaptation within lineages, as invoked by Bergmann, and by taxonomic turnover among lineages. Taken together, these results indicate that while Bergmann's prediction based on physiological scaling is remarkably accurate, it is far from the full picture. Global patterns of body size in avian assemblages are driven by interactions between the physiological demands of the environment, resource availability, species richness and taxonomic turnover among lineages.
NASA Astrophysics Data System (ADS)
Tang, G.; Bartlein, P. J.
2012-01-01
Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduced development of the "LPJ-Hydrology" (LH) model by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of dynamically simulating them. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61, p < 0.01) in the Everglades of Florida over the years 1996-2001. The modeled monthly soil moisture for Illinois of the US agrees well (R2 = 0.79, p < 0.01) with the observed over the years 1984-2001. The modeled monthly stream flow for most 12 major rivers in the US is consistent R2 > 0.46, p < 0.01; Nash-Sutcliffe Coefficients >0.52) with observed values over the years 1982-2006, respectively. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates better monthly stream flow in winter and early spring by incorporating effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land-covers into a DGVM for simulating large spatial scale land surface water balance. LH developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
Local versus global knowledge in the Barabási-Albert scale-free network model.
Gómez-Gardeñes, Jesús; Moreno, Yamir
2004-03-01
The scale-free model of Barabási and Albert (BA) gave rise to a burst of activity in the field of complex networks. In this paper, we revisit one of the main assumptions of the model, the preferential attachment (PA) rule. We study a model in which the PA rule is applied to a neighborhood of newly created nodes and thus no global knowledge of the network is assumed. We numerically show that global properties of the BA model such as the connectivity distribution and the average shortest path length are quite robust when there is some degree of local knowledge. In contrast, other properties such as the clustering coefficient and degree-degree correlations differ and approach the values measured for real-world networks.
Improving Cyber-Security of Smart Grid Systems via Anomaly Detection and Linguistic Domain Knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Milos Manic
The planned large scale deployment of smart grid network devices will generate a large amount of information exchanged over various types of communication networks. The implementation of these critical systems will require appropriate cyber-security measures. A network anomaly detection solution is considered in this work. In common network architectures multiple communications streams are simultaneously present, making it difficult to build an anomaly detection solution for the entire system. In addition, common anomaly detection algorithms require specification of a sensitivity threshold, which inevitably leads to a tradeoff between false positives and false negatives rates. In order to alleviate these issues, thismore » paper proposes a novel anomaly detection architecture. The designed system applies the previously developed network security cyber-sensor method to individual selected communication streams allowing for learning accurate normal network behavior models. Furthermore, the developed system dynamically adjusts the sensitivity threshold of each anomaly detection algorithm based on domain knowledge about the specific network system. It is proposed to model this domain knowledge using Interval Type-2 Fuzzy Logic rules, which linguistically describe the relationship between various features of the network communication and the possibility of a cyber attack. The proposed method was tested on experimental smart grid system demonstrating enhanced cyber-security.« less
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a big area sea surface, which fluctuation can be considered as a non-stationary process. Short-Time Fourier Transform (STFT) is a suitable analysis tool for the time varying nonstationary signal. In this paper, a novel ship detection method using 2-D STFT sea background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectrum of sub-blocks is studied and the obvious different characteristic between sea background and non-sea background is found. Finally, the statistical model for all valid frequency points in the STFT spectrum of sea background is given, and the ship detection method based on the 2-D STFT spectrum modeling is proposed. The experimental result shows that the proposed algorithm can detect ship targets with high recall rate and low missing rate.
Evolution of Collective Behaviour in an Artificial World Using Linguistic Fuzzy Rule-Based Systems
Lebar Bajec, Iztok
2017-01-01
Collective behaviour is a fascinating and easily observable phenomenon, attractive to a wide range of researchers. In biology, computational models have been extensively used to investigate various properties of collective behaviour, such as: transfer of information across the group, benefits of grouping (defence against predation, foraging), group decision-making process, and group behaviour types. The question ‘why,’ however remains largely unanswered. Here the interest goes into which pressures led to the evolution of such behaviour, and evolutionary computational models have already been used to test various biological hypotheses. Most of these models use genetic algorithms to tune the parameters of previously presented non-evolutionary models, but very few attempt to evolve collective behaviour from scratch. Of these last, the successful attempts display clumping or swarming behaviour. Empirical evidence suggests that in fish schools there exist three classes of behaviour; swarming, milling and polarized. In this paper we present a novel, artificial life-like evolutionary model, where individual agents are governed by linguistic fuzzy rule-based systems, which is capable of evolving all three classes of behaviour. PMID:28045964
Evolution of Collective Behaviour in an Artificial World Using Linguistic Fuzzy Rule-Based Systems.
Demšar, Jure; Lebar Bajec, Iztok
2017-01-01
Collective behaviour is a fascinating and easily observable phenomenon, attractive to a wide range of researchers. In biology, computational models have been extensively used to investigate various properties of collective behaviour, such as: transfer of information across the group, benefits of grouping (defence against predation, foraging), group decision-making process, and group behaviour types. The question 'why,' however remains largely unanswered. Here the interest goes into which pressures led to the evolution of such behaviour, and evolutionary computational models have already been used to test various biological hypotheses. Most of these models use genetic algorithms to tune the parameters of previously presented non-evolutionary models, but very few attempt to evolve collective behaviour from scratch. Of these last, the successful attempts display clumping or swarming behaviour. Empirical evidence suggests that in fish schools there exist three classes of behaviour; swarming, milling and polarized. In this paper we present a novel, artificial life-like evolutionary model, where individual agents are governed by linguistic fuzzy rule-based systems, which is capable of evolving all three classes of behaviour.
Rules Mothers and Sons Use to Integrate Intent and Damage Information in Their Moral Judgments.
ERIC Educational Resources Information Center
Leon, Manuel
1984-01-01
The similarity between rules used by mothers and those used by sons was extensive. Results suggest that research should emphasize the process by which children come to employ multidimensional rules and the role of parental models in this process. Current research in moral judgments largely ignores the rule-governed nature of children's judgments.…
Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach.
Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Dushoff, Jonathan; Liu, James H
2018-03-29
Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, could often lead to unrealistic models. Among others, model approaches based on differential equations entangle kinetics and causality, failing when complexity increases, separating knowledge from models, and assuming that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS: a rule-based framework to produce multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales ranging from gene regulation up to Game Theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli highlighting the continuous model improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model for the prisoner's dilemma; a common approach from social sciences describing complex interactions involving trust within human populations. As whole, these models demonstrate the capabilities of PISKaS providing fertile scenarios where to explore the dynamics of complex systems. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis
2015-01-01
Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.
Research on complex 3D tree modeling based on L-system
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
L-system as a fractal iterative system could simulate complex geometric patterns. Based on the field observation data of trees and knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rules set. Using the self-developed L-system modeling software the L-system rule set was parsed to generate complex tree 3d models.The results showed that the geometrical modeling method based on l-system could be used to describe the morphological structure of complex trees and generate 3D tree models.
Classical Wave Model of Quantum-Like Processing in Brain
NASA Astrophysics Data System (ADS)
Khrennikov, A.
2011-01-01
We discuss the conjecture on quantum-like (QL) processing of information in the brain. It is not based on the physical quantum brain (e.g., Penrose) - quantum physical carriers of information. In our approach the brain created the QL representation (QLR) of information in Hilbert space. It uses quantum information rules in decision making. The existence of such QLR was (at least preliminary) confirmed by experimental data from cognitive psychology. The violation of the law of total probability in these experiments is an important sign of nonclassicality of data. In so called "constructive wave function approach" such data can be represented by complex amplitudes. We presented 1,2 the QL model of decision making. In this paper we speculate on a possible physical realization of QLR in the brain: a classical wave model producing QLR . It is based on variety of time scales in the brain. Each pair of scales (fine - the background fluctuations of electromagnetic field and rough - the cognitive image scale) induces the QL representation. The background field plays the crucial role in creation of "superstrong QL correlations" in the brain.
Atmospheric Downscaling using Genetic Programming
NASA Astrophysics Data System (ADS)
Zerenner, Tanja; Venema, Victor; Simmer, Clemens
2013-04-01
Coupling models for the different components of the Soil-Vegetation-Atmosphere-System requires up-and downscaling procedures. Subject of our work is the downscaling scheme used to derive high resolution forcing data for land-surface and subsurface models from coarser atmospheric model output. The current downscaling scheme [Schomburg et. al. 2010, 2012] combines a bi-quadratic spline interpolation, deterministic rules and autoregressive noise. For the development of the scheme, training and validation data sets have been created by carrying out high-resolution runs of the atmospheric model. The deterministic rules in this scheme are partly based on known physical relations and partly determined by an automated search for linear relationships between the high resolution fields of the atmospheric model output and high resolution data on surface characteristics. Up to now deterministic rules are available for downscaling surface pressure and partially, depending on the prevailing weather conditions, for near surface temperature and radiation. Aim of our work is to improve those rules and to find deterministic rules for the remaining variables, which require downscaling, e.g. precipitation or near surface specifc humidity. To accomplish that, we broaden the search by allowing for interdependencies between different atmospheric parameters, non-linear relations, non-local and time-lagged relations. To cope with the vast number of possible solutions, we use genetic programming, a method from machine learning, which is based on the principles of natural evolution. We are currently working with GPLAB, a Genetic Programming toolbox for Matlab. At first we have tested the GP system to retrieve the known physical rule for downscaling surface pressure, i.e. the hydrostatic equation, from our training data. We have found this to be a simple task to the GP system. Furthermore we have improved accuracy and efficiency of the GP solution by implementing constant variation and optimization as genetic operators. Next we have worked on an improvement of the downscaling rule for the two-meter-temperature. We have added an if-function with four input arguments to the function set. Since this has shown to increase bloat we have additionally modified our fitness function by including penalty terms for both the size of the solutions and the number intron nodes, i.e program parts that are never evaluated. Starting from the known downscaling rule for the two-meter temperature, which linearly exploits the orography anomalies allowed or disallowed by a certain temperature gradient, our GP system has been able to find an improvement. The rule produced by the GP clearly shows a better performance concerning the reproduced small-scale variability.
NASA Technical Reports Server (NTRS)
Skakun, Sergii; Franch, Belen; Vermote, Eric; Roger, Jean-Claude; Becker-Reshef, Inbal; Justice, Christopher; Kussul, Nataliia
2017-01-01
Knowledge on geographical location and distribution of crops at global, national and regional scales is an extremely valuable source of information applications. Traditional approaches to crop mapping using remote sensing data rely heavily on reference or ground truth data in order to train/calibrate classification models. As a rule, such models are only applicable to a single vegetation season and should be recalibrated to be applicable for other seasons. This paper addresses the problem of early season large-area winter crop mapping using Moderate Resolution Imaging Spectroradiometer (MODIS) derived Normalized Difference Vegetation Index (NDVI) time-series and growing degree days (GDD) information derived from the Modern-Era Retrospective analysis for Research and Applications (MERRA-2) product. The model is based on the assumption that winter crops have developed biomass during early spring while other crops (spring and summer) have no biomass. As winter crop development is temporally and spatially non-uniform due to the presence of different agro-climatic zones, we use GDD to account for such discrepancies. A Gaussian mixture model (GMM) is applied to discriminate winter crops from other crops (spring and summer). The proposed method has the following advantages: low input data requirements, robustness, applicability to global scale application and can provide winter crop maps 1.5-2 months before harvest. The model is applied to two study regions, the State of Kansas in the US and Ukraine, and for multiple seasons (2001-2014). Validation using the US Department of Agriculture (USDA) Crop Data Layer (CDL) for Kansas and ground measurements for Ukraine shows that accuracies of greater than 90% can be achieved in mapping winter crops 1.5-2 months before harvest. Results also show good correspondence to official statistics with average coefficients of determination R(exp. 2) greater than 0.85.
An empirical and model study on automobile market in Taiwan
NASA Astrophysics Data System (ADS)
Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren
2006-03-01
We have done an empirical investigation on automobile market in Taiwan including the development of the possession rate of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). Then the companies play games in order to obtain more possession rate in the market under certain rules. Numerical simulation based on the model display a competition developing process, which qualitatively and quantitatively agree with our empirical investigation results.
Selection Shapes Transcriptional Logic and Regulatory Specialization in Genetic Networks.
Fogelmark, Karl; Peterson, Carsten; Troein, Carl
2016-01-01
Living organisms need to regulate their gene expression in response to environmental signals and internal cues. This is a computational task where genes act as logic gates that connect to form transcriptional networks, which are shaped at all scales by evolution. Large-scale mutations such as gene duplications and deletions add and remove network components, whereas smaller mutations alter the connections between them. Selection determines what mutations are accepted, but its importance for shaping the resulting networks has been debated. To investigate the effects of selection in the shaping of transcriptional networks, we derive transcriptional logic from a combinatorially powerful yet tractable model of the binding between DNA and transcription factors. By evolving the resulting networks based on their ability to function as either a simple decision system or a circadian clock, we obtain information on the regulation and logic rules encoded in functional transcriptional networks. Comparisons are made between networks evolved for different functions, as well as with structurally equivalent but non-functional (neutrally evolved) networks, and predictions are validated against the transcriptional network of E. coli. We find that the logic rules governing gene expression depend on the function performed by the network. Unlike the decision systems, the circadian clocks show strong cooperative binding and negative regulation, which achieves tight temporal control of gene expression. Furthermore, we find that transcription factors act preferentially as either activators or repressors, both when binding multiple sites for a single target gene and globally in the transcriptional networks. This separation into positive and negative regulators requires gene duplications, which highlights the interplay between mutation and selection in shaping the transcriptional networks.
Unraveling the drivers of community dissimilarity and species extinction in fragmented landscapes.
Banks-Leite, Cristina; Ewers, Robert M; Metzger, Jean Paul
2012-12-01
Communities in fragmented landscapes are often assumed to be structured by species extinction due to habitat loss, which has led to extensive use of the species-area relationship (SAR) in fragmentation studies. However, the use of the SAR presupposes that habitat loss leads species to extinction but does not allow for extinction to be offset by colonization of disturbed-habitat specialists. Moreover, the use of SAR assumes that species richness is a good proxy of community changes in fragmented landscapes. Here, we assessed how communities dwelling in fragmented landscapes are influenced by habitat loss at multiple scales; then we estimated the ability of models ruled by SAR and by species turnover in successfully predicting changes in community composition, and asked whether species richness is indeed an informative community metric. To address these issues, we used a data set consisting of 140 bird species sampled in 65 patches, from six landscapes with different proportions of forest cover in the Atlantic Forest of Brazil. We compared empirical patterns against simulations of over 8 million communities structured by different magnitudes of the power-law SAR and with species-specific rules to assign species to sites. Empirical results showed that, while bird community composition was strongly influenced by habitat loss at the patch and landscape scale, species richness remained largely unaffected. Modeling results revealed that the compositional changes observed in the Atlantic Forest bird metacommunity were only matched by models with either unrealistic magnitudes of the SAR or by models ruled by species turnover, akin to what would be observed along natural gradients. We show that, in the presence of such compositional turnover, species richness is poorly correlated with species extinction, and z values of the SAR strongly underestimate the effects of habitat loss. We suggest that the observed compositional changes are driven by each species reaching its individual extinction threshold: either a threshold of forest cover for species that disappear with habitat loss, or of matrix cover for species that benefit from habitat loss.
CATS - A process-based model for turbulent turbidite systems at the reservoir scale
NASA Astrophysics Data System (ADS)
Teles, Vanessa; Chauveau, Benoît; Joseph, Philippe; Weill, Pierre; Maktouf, Fakher
2016-09-01
The Cellular Automata for Turbidite systems (CATS) model is intended to simulate the fine architecture and facies distribution of turbidite reservoirs with a multi-event and process-based approach. The main processes of low-density turbulent turbidity flow are modeled: downslope sediment-laden flow, entrainment of ambient water, erosion and deposition of several distinct lithologies. This numerical model, derived from (Salles, 2006; Salles et al., 2007), proposes a new approach based on the Rouse concentration profile to consider the flow capacity to carry the sediment load in suspension. In CATS, the flow distribution on a given topography is modeled with local rules between neighboring cells (cellular automata) based on potential and kinetic energy balance and diffusion concepts. Input parameters are the initial flow parameters and a 3D topography at depositional time. An overview of CATS capabilities in different contexts is presented and discussed.
Rule groupings in expert systems using nearest neighbour decision rules, and convex hulls
NASA Technical Reports Server (NTRS)
Anastasiadis, Stergios
1991-01-01
Expert System shells are lacking in many areas of software engineering. Large rule based systems are not semantically comprehensible, difficult to debug, and impossible to modify or validate. Partitioning a set of rules found in CLIPS (C Language Integrated Production System) into groups of rules which reflect the underlying semantic subdomains of the problem, will address adequately the concerns stated above. Techniques are introduced to structure a CLIPS rule base into groups of rules that inherently have common semantic information. The concepts involved are imported from the field of A.I., Pattern Recognition, and Statistical Inference. Techniques focus on the areas of feature selection, classification, and a criteria of how 'good' the classification technique is, based on Bayesian Decision Theory. A variety of distance metrics are discussed for measuring the 'closeness' of CLIPS rules and various Nearest Neighbor classification algorithms are described based on the above metric.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hicks, E. P.; Rosner, R., E-mail: eph2001@columbia.edu
In this paper, we provide support for the Rayleigh-Taylor-(RT)-based subgrid model used in full-star simulations of deflagrations in Type Ia supernovae explosions. We use the results of a parameter study of two-dimensional direct numerical simulations of an RT unstable model flame to distinguish between the two main types of subgrid models (RT or turbulence dominated) in the flamelet regime. First, we give scalings for the turbulent flame speed, the Reynolds number, the viscous scale, and the size of the burning region as the non-dimensional gravity (G) is varied. The flame speed is well predicted by an RT-based flame speed model.more » Next, the above scalings are used to calculate the Karlovitz number (Ka) and to discuss appropriate combustion regimes. No transition to thin reaction zones is seen at Ka = 1, although such a transition is expected by turbulence-dominated subgrid models. Finally, we confirm a basic physical premise of the RT subgrid model, namely, that the flame is fractal, and thus self-similar. By modeling the turbulent flame speed, we demonstrate that it is affected more by large-scale RT stretching than by small-scale turbulent wrinkling. In this way, the RT instability controls the flame directly from the large scales. Overall, these results support the RT subgrid model.« less
Toward accelerating landslide mapping with interactive machine learning techniques
NASA Astrophysics Data System (ADS)
Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne
2013-04-01
Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events still remains a tedious task. Image classification with expert defined rules typically still requires significant manual labour for the elaboration and adaption of rule sets for each particular case. Machine learning algorithm, on the contrary, have the ability to learn and identify complex image patterns from labelled examples but may require relatively large amounts of training data. In order to reduce the amount of required training data active learning has evolved as key concept to guide the sampling for applications such as document classification, genetics and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that should be labelled by the user and added in the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy than an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that enables to guide the user attention towards few compact spatial batches rather than distributed points resulting in time savings of 50% and more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large scale triggering events in Brazil and China and demonstrated balanced user's and producer's accuracies between 74% and 80%. The assessment also included an experimental evaluation of the uncertainties of manual mappings from multiple experts and demonstrated strong relationships between the uncertainty of the experts and the machine learning model.
Analyses of interactions among pair-rule genes and the gap gene Krüppel in Bombyx segmentation.
Nakao, Hajime
2015-09-01
In the short-germ insect Tribolium, a pair-rule gene circuit consisting of the Tribolium homologs of even-skipped, runt, and odd-skipped (Tc-eve, Tc-run and Tc-odd, respectively) has been implicated in segment formation. To examine the application of the model to other taxa, I studied the expression and function of pair-rule genes in Bombyx mori, together with a Bombyx homolog of Krüppel (Bm-Kr), a known gap gene. Knockdown embryos of Bombyx homologs of eve, run and odd (Bm-eve, Bm-run and Bm-odd) exhibited asegmental phenotypes similar to those of Tribolium knockdowns. However, pair-rule gene interactions were similar to those of both Tribolium and Drosophila, which, different from Tribolium, shows a hierarchical segmentation mode. Additionally, the Bm-odd expression pattern shares characteristics with those of Drosophila pair-rule genes that receive upstream regulatory input. On the other hand, Bm-Kr knockdowns exhibited a large posterior segment deletion as observed in short-germ insects. However, a detailed analysis of these embryos indicated that Bm-Kr modulates expression of pair-rule genes like in Drosophila, although the mechanisms appear to be different. This suggested hierarchical interactions between Bm-Kr and pair-rule genes. Based on these results, I concluded that the pair-rule gene circuit model that describes Tribolium development is not applicable to Bombyx. Copyright © 2015 Elsevier Inc. All rights reserved.
An Interval Type-2 Neural Fuzzy System for Online System Identification and Feature Elimination.
Lin, Chin-Teng; Pal, Nikhil R; Wu, Shang-Lin; Liu, Yu-Ting; Lin, Yang-Yin
2015-07-01
We propose an integrated mechanism for discarding derogatory features and extraction of fuzzy rules based on an interval type-2 neural fuzzy system (NFS)-in fact, it is a more general scheme that can discard bad features, irrelevant antecedent clauses, and even irrelevant rules. High-dimensional input variable and a large number of rules not only enhance the computational complexity of NFSs but also reduce their interpretability. Therefore, a mechanism for simultaneous extraction of fuzzy rules and reducing the impact of (or eliminating) the inferior features is necessary. The proposed approach, namely an interval type-2 Neural Fuzzy System for online System Identification and Feature Elimination (IT2NFS-SIFE), uses type-2 fuzzy sets to model uncertainties associated with information and data in designing the knowledge base. The consequent part of the IT2NFS-SIFE is of Takagi-Sugeno-Kang type with interval weights. The IT2NFS-SIFE possesses a self-evolving property that can automatically generate fuzzy rules. The poor features can be discarded through the concept of a membership modulator. The antecedent and modulator weights are learned using a gradient descent algorithm. The consequent part weights are tuned via the rule-ordered Kalman filter algorithm to enhance learning effectiveness. Simulation results show that IT2NFS-SIFE not only simplifies the system architecture by eliminating derogatory/irrelevant antecedent clauses, rules, and features but also maintains excellent performance.
Fuzzy logic-based flight control system design
NASA Astrophysics Data System (ADS)
Nho, Kyungmoon
The application of fuzzy logic to aircraft motion control is studied in this dissertation. The self-tuning fuzzy techniques are developed by changing input scaling factors to obtain a robust fuzzy controller over a wide range of operating conditions and nonlinearities for a nonlinear aircraft model. It is demonstrated that the properly adjusted input scaling factors can meet the required performance and robustness in a fuzzy controller. For a simple demonstration of the easy design and control capability of a fuzzy controller, a proportional-derivative (PD) fuzzy control system is compared to the conventional controller for a simple dynamical system. This thesis also describes the design principles and stability analysis of fuzzy control systems by considering the key features of a fuzzy control system including the fuzzification, rule-base and defuzzification. The wing-rock motion of slender delta wings, a linear aircraft model and the six degree of freedom nonlinear aircraft dynamics are considered to illustrate several self-tuning methods employing change in input scaling factors. Finally, this dissertation is concluded with numerical simulation of glide-slope capture in windshear demonstrating the robustness of the fuzzy logic based flight control system.
NASA Astrophysics Data System (ADS)
Zhang, Shengfang; Hao, Qiang; Sha, Zhihua; Yin, Jian; Ma, Fujian; Liu, Yu
2017-12-01
For the friction and wear issues of brake pads in the large-megawatt wind turbine brake during braking, this paper established the micro finite element model of abrasive wear by using Deform-2D software. Based on abrasive wear theory and considered the variation of the velocity and load in the micro friction and wear process, the Archard wear calculation model is developed. The influence rules of relative sliding velocity and friction coefficient in the brake pad and disc is analysed. The simulation results showed that as the relative sliding velocity increases, the wear will be more serious, while the larger friction coefficient lowered the contact pressure which released the wear of the brake pad.
Universal scaling in the branching of the tree of life.
Herrada, E Alejandro; Tessone, Claudio J; Klemm, Konstantin; Eguíluz, Víctor M; Hernández-García, Emilio; Duarte, Carlos M
2008-07-23
Understanding the patterns and processes of diversification of life in the planet is a key challenge of science. The Tree of Life represents such diversification processes through the evolutionary relationships among the different taxa, and can be extended down to intra-specific relationships. Here we examine the topological properties of a large set of interspecific and intraspecific phylogenies and show that the branching patterns follow allometric rules conserved across the different levels in the Tree of Life, all significantly departing from those expected from the standard null models. The finding of non-random universal patterns of phylogenetic differentiation suggests that similar evolutionary forces drive diversification across the broad range of scales, from macro-evolutionary to micro-evolutionary processes, shaping the diversity of life on the planet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tellarini, Matteo; Ross, Ashley J.; Wands, David
Measurements of the non-Gaussianity of the primordial density field have the power to considerably improve our understanding of the physics of inflation. Indeed, if we can increase the precision of current measurements by an order of magnitude, a null-detection would rule out many classes of scenarios for generating primordial fluctuations. Large-scale galaxy redshift surveys represent experiments that hold the promise to realise this goal. Thus, we model the galaxy bispectrum and forecast the accuracy with which it will probe the parameter f {sub NL}, which represents the degree of primordial local-type non Gaussianity. Specifically, we address the problem of modellingmore » redshift space distortions (RSD) in the tree-level galaxy bispectrum including f {sub NL}. We find novel contributions associated with RSD, with the characteristic large scale amplification induced by local-type non-Gaussianity. These RSD effects must be properly accounted for in order to obtain un-biased measurements of f {sub NL} from the galaxy bispectrum. We propose an analytic template for the monopole which can be used to fit against data on large scales, extending models used in the recent measurements. Finally, we perform idealised forecasts on σ {sub f} {sub N{sub L}}—the accuracy of the determination of local non-linear parameter f {sub NL}—from measurements of the galaxy bispectrum. Our findings suggest that current surveys can in principle provide f {sub NL} constraints competitive with Planck , and future surveys could improve them further.« less
NASA Astrophysics Data System (ADS)
Si, Y.; Cai, X.
2017-12-01
The large-scale reservoir system built on the upper Yellow River serves multiple purposes. The generated hydropower supplies over 60% of the entire electricity for the regional power grid while the irrigated crop production feeds almost one-third of the total population throughout the whole river basin. Moreover, the reservoir system also bears the responsibility for controlling ice flood, which occurs during the non-flood season due to winter ice freezing followed by spring thawing process, and could be even more disastrous than the summer flood. The contradiction of water allocation to satisfy multi-sector demands while mitigating ice flood risk has been longstanding. However, few researchers endeavor to employ the nexus thinking to addressing the complexities involved in all the interlinked purposes. In this study, we develop an integrated hydro-economic model that can be used to explore both the tradeoffs and synergies between the multiple purposes, based on which the water infrastructures (e.g., reservoir, diversion canal, pumping well) can be coordinated for maximizing the co-benefits of multiple sectors. The model is based on a node-link schematic of multiple operations including hydropower generation, irrigation scheduling, and the conjunctive use of surface and ground water resources. In particular, the model depicts some details regarding reservoir operation rules during the ice season using two indicators, i.e., flow control period and flow control level. The rules are obtained from historical records using data mining techniques under different climate conditions, and they are added to the model as part of the system constraints. Future reservoir inflow series are generated by a hydrological model with future climate scenarios projected by General Circulation Model (GCM). By analyzing the model results under the various climate scenarios, the future possible shifting trajectory of the food-energy-water system characteristics will be derived compared to the baseline scenario (i.e., the status-quo condition). Thus the model and the results are expected to be useful for enlightening economically efficient water allocation policy coping with climate change.
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Similar Task Features Shape Judgment and Categorization Processes
ERIC Educational Resources Information Center
Hoffmann, Janina A.; von Helversen, Bettina; Rieskamp, Jörg
2016-01-01
The distinction between similarity-based and rule-based strategies has instigated a large body of research in categorization and judgment. Within both domains, the task characteristics guiding strategy shifts are increasingly well documented. Across domains, past research has observed shifts from rule-based strategies in judgment to…
Fire flame detection based on GICA and target tracking
NASA Astrophysics Data System (ADS)
Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian
2013-04-01
To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets was proposed, which proved a satisfactory fire detection rate for different fire scenes. In this fire detection algorithm: (a) a rule-based generic color model was developed based on analysis on a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without static background and (c) a BP neural network fire recognition model based on multi-features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast-response of the algorithm.
Flares from small to large: X-ray spectroscopy of Proxima Centauri with XMM-Newton
NASA Astrophysics Data System (ADS)
Güdel, M.; Audard, M.; Reale, F.; Skinner, S. L.; Linsky, J. L.
2004-03-01
We report results from a comprehensive study of the nearby M dwarf Proxima Centauri with the XMM-Newton satellite, using simultaneously its X-ray detectors and the Optical Monitor with its U band filter. We find strongly variable coronal X-ray emission, with flares ranging over a factor of 100 in peak flux. The low-level emission is found to be continuously variable on at least three time scales (a slow decay of several hours, modulation on a time scale of 1 hr, and weak flares with time scales of a few minutes). Several weak flares are characteristically preceded by an optical burst, compatible with predictions from standard solar flare models. We propose that the U band bursts are proxies for the elusive stellar non-thermal hard X-ray bursts suggested from solar observations. In the course of the observation, a very large X-ray flare started and was observed essentially in its entirety. Its peak luminosity reached 3.9× 1028 erg s-1 [0.15-10 keV], and the total X-ray energy released in the same band is derived to be 1.5× 1032 ergs. This flare has for the first time allowed to measure significant density variations across several phases of the flare from X-ray spectroscopy of the O VII He-like triplet; we find peak densities reaching up to 4× 1011 cm-3 for plasma of about 1-5 MK. Abundance ratios show little variability in time, with a tendency of elements with a high first ionization potential to be overabundant relative to solar photospheric values. Using Fe XVII lines with different oscillator strengths, we do not find significant effects due to opacity during the flare, indicating that large opacity increases are not the rule even in extreme flares. We model the large flare in terms of an analytic 2-Ribbon flare model and find that the flaring loop system should have large characteristic sizes (≈ 1R*) within the framework of this simplistic model. These results are supported by full hydrodynamic simulations. Comparing the large flare to flares of similar size occurring much more frequently on more active stars, we propose that the X-ray properties of active stars are a consequence of superimposed flares such as the example analyzed in this paper. Since larger flares produce hotter plasma, such a model also explains why, during episodes of low-level emission, more active stars show hotter plasma than less active stars. Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA).
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories
NASA Astrophysics Data System (ADS)
Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear with its implications, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories.
Park, Kiwan; Blackman, Eric G; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear with its implications, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
Self-similar transmission properties of aperiodic Cantor potentials in gapped graphene
NASA Astrophysics Data System (ADS)
Rodríguez-González, Rogelio; Rodríguez-Vargas, Isaac; Díaz-Guerrero, Dan Sidney; Gaggero-Sager, Luis Manuel
2016-01-01
We investigate the transmission properties of quasiperiodic or aperiodic structures based on graphene arranged according to the Cantor sequence. In particular, we have found self-similar behaviour in the transmission spectra, and most importantly, we have calculated the scalability of the spectra. To do this, we implement and propose scaling rules for each one of the fundamental parameters: generation number, height of the barriers and length of the system. With this in mind we have been able to reproduce the reference transmission spectrum, applying the appropriate scaling rule, by means of the scaled transmission spectrum. These scaling rules are valid for both normal and oblique incidence, and as far as we can see the basic ingredients to obtain self-similar characteristics are: relativistic Dirac electrons, a self-similar structure and the non-conservation of the pseudo-spin.
Compression and release dynamics of an active matter system of Euglena gracilis
NASA Astrophysics Data System (ADS)
Lam, Amy; Tsang, Alan C. H.; Ouellette, Nicholas; Riedel-Kruse, Ingmar
Active matter, defined as ensembles of self-propelled particles, encompasses a large variety of systems at all scales, from nanoparticles to bird flocks. Though various models and simulations have been created to describe the dynamics of these systems, experimental verification has been difficult to obtain. This is frequently due to the complex interaction rules which govern the particle behavior, in turn making systematic varying of parameters impossible. Here, we propose a model for predicting the system evolution of compression and release of an active system based on experiments and simulations. In particular, we consider ensembles of the unicellular, photo-responsive algae, Euglena gracilis, under light stimulation. By varying the spatiotemporal light patterns, we are able to finely adjust cell densities and achieve arbitrary non-homogeneous distributions, including compression into high-density aggregates of varying geometries. We observe the formation of depletion zones after the release of the confining stimulus and investigate the effects of the density distribution and particle rotational noise on the depletion. These results provide implications for defining state parameters which determine system evolution.
Hill, Ryan M; Oosterhoff, Benjamin; Kaplow, Julie B
2017-07-01
Although a large number of risk markers for suicide ideation have been identified, little guidance has been provided to prospectively identify adolescents at risk for suicide ideation within community settings. The current study addressed this gap in the literature by utilizing classification tree analysis (CTA) to provide a decision-making model for screening adolescents at risk for suicide ideation. Participants were N = 4,799 youth (Mage = 16.15 years, SD = 1.63) who completed both Waves 1 and 2 of the National Longitudinal Study of Adolescent to Adult Health. CTA was used to generate a series of decision rules for identifying adolescents at risk for reporting suicide ideation at Wave 2. Findings revealed 3 distinct solutions with varying sensitivity and specificity for identifying adolescents who reported suicide ideation. Sensitivity of the classification trees ranged from 44.6% to 77.6%. The tree with greatest specificity and lowest sensitivity was based on a history of suicide ideation. The tree with moderate sensitivity and high specificity was based on depressive symptoms, suicide attempts or suicide among family and friends, and social support. The most sensitive but least specific tree utilized these factors and gender, ethnicity, hours of sleep, school-related factors, and future orientation. These classification trees offer community organizations options for instituting large-scale screenings for suicide ideation risk depending on the available resources and modality of services to be provided. This study provides a theoretically and empirically driven model for prospectively identifying adolescents at risk for suicide ideation and has implications for preventive interventions among at-risk youth. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Classification Based on Pruning and Double Covered Rule Sets for the Internet of Things Applications
Zhou, Zhongmei; Wang, Weiping
2014-01-01
The Internet of things (IOT) is a hot issue in recent years. It accumulates large amounts of data by IOT users, which is a great challenge to mining useful knowledge from IOT. Classification is an effective strategy which can predict the need of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances can be covered by at least two classification rules. Thus, these algorithms cannot achieve high accuracy in some datasets. In this paper, we propose a new rule-based classification, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P can induce two different rule sets A and B. Every instance in training set can be covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measure to prune the length of rules in rule set B. Our experimental results indicate that, CDCR-P not only is feasible, but also it can achieve high accuracy. PMID:24511304
Li, Shasha; Zhou, Zhongmei; Wang, Weiping
2014-01-01
The Internet of things (IOT) is a hot issue in recent years. It accumulates large amounts of data by IOT users, which is a great challenge to mining useful knowledge from IOT. Classification is an effective strategy which can predict the need of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances can be covered by at least two classification rules. Thus, these algorithms cannot achieve high accuracy in some datasets. In this paper, we propose a new rule-based classification, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P can induce two different rule sets A and B. Every instance in training set can be covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measure to prune the length of rules in rule set B. Our experimental results indicate that, CDCR-P not only is feasible, but also it can achieve high accuracy.
NASA Astrophysics Data System (ADS)
Haer, T.; Botzen, W.; Aerts, J.
2016-12-01
In the last four decades the global population living in the 1/100 year-flood zone has doubled from approximately 500 million to a little less than 1 billion people. Urbanization in low lying -flood prone- cities further increases the exposed assets, such as buildings and infrastructure. Moreover, climate change will further exacerbate flood risk in the future. Accurate flood risk assessments are important to inform policy-makers and society on current- and future flood risk levels. However, these assessment suffer from a major flaw in the way they estimate flood vulnerability and adaptive behaviour of individuals and governments. Current flood risk projections commonly assume that either vulnerability remains constant, or try to mimic vulnerability through incorporating an external scenario. Such a static approach leads to a misrepresentation of future flood risk, as humans respond adaptively to flood events, flood risk communication, and incentives to reduce risk. In our study, we integrate adaptive behaviour in a large-scale European flood risk framework through an agent-based modelling approach. This allows for the inclusion of heterogeneous agents, which dynamically respond to each other and a changing environment. We integrate state-of-the-art flood risk maps based on climate scenarios (RCP's), and socio-economic scenarios (SSP's), with government and household agents, which behave autonomously based on (micro-)economic behaviour rules. We show for the first time that excluding adaptive behaviour leads to a major misrepresentation of future flood risk. The methodology is applied to flood risk, but has similar implications for other research in the field of natural hazards. While more research is needed, this multi-disciplinary study advances our understanding of how future flood risk will develop.
Limits to Ice on Asteroids (24) Themis and (65) Cybele
NASA Astrophysics Data System (ADS)
Jewitt, David; Guilbert-Lepoutre, Aurelie
2012-01-01
We present optical spectra of (24) Themis and (65) Cybele, two large main-belt asteroids on which exposed water ice has recently been reported. No emission lines, expected from resonance fluorescence in gas sublimated from the ice, were detected. Derived limits to the production rates of water are lsim400 kg s-1 (5σ) for each object, assuming a cometary H2O/CN ratio. We rule out models in which a large fraction of the surface is occupied by high-albedo ("fresh") water ice because the measured albedos of Themis and Cybele are low (~0.05-0.07). We also rule out models in which a large fraction of the surface is occupied by low-albedo ("dirty") water ice because dirty ice would be warm and would sublimate strongly enough for gaseous products to have been detected. If ice exists on these bodies it must be relatively clean (albedo gsim0.3) and confined to a fraction of the Earth-facing surface lsim10%. By analogy with impacted asteroid (596) Scheila, we propose an impact excavation scenario, in which 10 m scale projectiles have exposed buried ice. If the ice is even more reflective (albedo gsim0.6), then the timescale for sublimation of an optically thick layer can rival the ~103 yr interval between impacts with bodies this size. In this sense, exposure by impact may be a quasi steady-state feature of ice-containing asteroids at 3 AU.
2015-01-01
Background Modern methods for mining biomolecular interactions from literature typically make predictions based solely on the immediate textual context, in effect a single sentence. No prior work has been published on extending this context to the information automatically gathered from the whole biomedical literature. Thus, our motivation for this study is to explore whether mutually supporting evidence, aggregated across several documents can be utilized to improve the performance of the state-of-the-art event extraction systems. In this paper, we describe our participation in the latest BioNLP Shared Task using the large-scale text mining resource EVEX. We participated in the Genia Event Extraction (GE) and Gene Regulation Network (GRN) tasks with two separate systems. In the GE task, we implemented a re-ranking approach to improve the precision of an existing event extraction system, incorporating features from the EVEX resource. In the GRN task, our system relied solely on the EVEX resource and utilized a rule-based conversion algorithm between the EVEX and GRN formats. Results In the GE task, our re-ranking approach led to a modest performance increase and resulted in the first rank of the official Shared Task results with 50.97% F-score. Additionally, in this paper we explore and evaluate the usage of distributed vector representations for this challenge. In the GRN task, we ranked fifth in the official results with a strict/relaxed SER score of 0.92/0.81 respectively. To try and improve upon these results, we have implemented a novel machine learning based conversion system and benchmarked its performance against the original rule-based system. Conclusions For the GRN task, we were able to produce a gene regulatory network from the EVEX data, warranting the use of such generic large-scale text mining data in network biology settings. A detailed performance and error analysis provides more insight into the relatively low recall rates. In the GE task we demonstrate that both the re-ranking approach and the word vectors can provide slight performance improvement. A manual evaluation of the re-ranking results pinpoints some of the challenges faced in applying large-scale text mining knowledge to event extraction. PMID:26551766
Research Progress on Dark Matter Model Based on Weakly Interacting Massive Particles
NASA Astrophysics Data System (ADS)
He, Yu; Lin, Wen-bin
2017-04-01
The cosmological model of cold dark matter (CDM) with the dark energy and a scale-invariant adiabatic primordial power spectrum has been considered as the standard cosmological model, i.e. the ΛCDM model. Weakly interacting massive particles (WIMPs) become a prominent candidate for the CDM. Many models extended from the standard model can provide the WIMPs naturally. The standard calculations of relic abundance of dark matter show that the WIMPs are well in agreement with the astronomical observation of ΩDM h2 ≈0.11. The WIMPs have a relatively large mass, and a relatively slow velocity, so they are easy to aggregate into clusters, and the results of numerical simulations based on the WIMPs agree well with the observational results of cosmic large-scale structures. In the aspect of experiments, the present accelerator or non-accelerator direct/indirect detections are mostly designed for the WIMPs. Thus, a wide attention has been paid to the CDM model based on the WIMPs. However, the ΛCDM model has a serious problem for explaining the small-scale structures under one Mpc. Different dark matter models have been proposed to alleviate the small-scale problem. However, so far there is no strong evidence enough to exclude the CDM model. We plan to introduce the research progress of the dark matter model based on the WIMPs, such as the WIMPs miracle, numerical simulation, small-scale problem, and the direct/indirect detection, to analyze the criterion for discriminating the ;cold;, ;hot;, and ;warm; dark matter, and present the future prospects for the study in this field.
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogenous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hummel, K.E.
1987-12-01
Expert systems are artificial intelligence programs that solve problems requiring large amounts of heuristic knowledge, based on years of experience and tradition. Production systems are domain-independent tools that support the development of rule-based expert systems. This document describes a general purpose production system known as HERB. This system was developed to support the programming of expert systems using hierarchically structured rule bases. HERB encourages the partitioning of rules into multiple rule bases and supports the use of multiple conflict resolution strategies. Multiple rule bases can also be placed on a system stack and simultaneously searched during each interpreter cycle. Bothmore » backward and forward chaining rules are supported by HERB. The condition portion of each rule can contain both patterns, which are matched with facts in a data base, and LISP expressions, which are explicitly evaluated in the LISP environment. Properties of objects can also be stored in the HERB data base and referenced within the scope of each rule. This document serves both as an introduction to the principles of LISP-based production systems and as a user's manual for the HERB system. 6 refs., 17 figs.« less
eFSM--a novel online neural-fuzzy semantic memory model.
Tung, Whye Loon; Quek, Chai
2010-01-01
Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases that represent the acquired knowledge are static and cannot be trained to improve the modeling performance. This subsequently leads to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research efforts have been directed to the study of evolving Tagaki-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, there are very few incremental learning Mamdani-type NFSs reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed from the emergence of novel training data and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned. This enables eFSM to maintain a current and compact set of Mamdani-type if-then fuzzy rules that collectively generalizes and describes the salient associative mappings between the inputs and outputs of the underlying process being modeled. The learning and modeling performances of the proposed eFSM are evaluated using several benchmark applications and the results are encouraging.
The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns
NASA Astrophysics Data System (ADS)
Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo
Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.
Project Management Life Cycle Models to Improve Management in High-rise Construction
NASA Astrophysics Data System (ADS)
Burmistrov, Andrey; Siniavina, Maria; Iliashenko, Oksana
2018-03-01
The paper describes a possibility to improve project management in high-rise buildings construction through the use of various Project Management Life Cycle Models (PMLC models) based on traditional and agile project management approaches. Moreover, the paper describes, how the split the whole large-scale project to the "project chain" will create the factor for better manageability of the large-scale buildings project and increase the efficiency of the activities of all participants in such projects.
Origin of the moon - The collision hypothesis
NASA Technical Reports Server (NTRS)
Stevenson, D. J.
1987-01-01
Theoretical models of lunar origin involving one or more collisions between the earth and other large sun-orbiting bodies are examined in a critical review. Ten basic propositions of the collision hypothesis (CH) are listed; observational data on mass and angular momentum, bulk chemistry, volatile depletion, trace elements, primordial high temperatures, and orbital evolution are summarized; and the basic tenets of alternative models (fission, capture, and coformation) are reviewed. Consideration is given to the thermodynamics of large impacts, rheological and dynamical problems, numerical simulations based on the CH, disk evolution models, and the chemical implications of the CH. It is concluded that the sound arguments and evidence supporting the CH are not (yet) sufficient to rule out other hypotheses.
Incorporating linguistic knowledge for learning distributed word representations.
Wang, Yan; Liu, Zhiyuan; Sun, Maosong
2015-01-01
Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, do not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate these prior knowledge for learning distributed word representations. Experiment results demonstrate that our estimated word representation achieves better performance in task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining.
Incorporating Linguistic Knowledge for Learning Distributed Word Representations
Wang, Yan; Liu, Zhiyuan; Sun, Maosong
2015-01-01
Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, do not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate these prior knowledge for learning distributed word representations. Experiment results demonstrate that our estimated word representation achieves better performance in task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining. PMID:25874581
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, ineffi- cient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via newmore » visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.« less
NASA Astrophysics Data System (ADS)
Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.
2014-01-01
Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggest potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. Resulting scaling factors indicate 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-01-01
In this paper, in order to describe complex network systems, we firstly propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily-extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we utilize the PWALS model and the corresponding switched state observer to traffic flow over Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of Beijing third ring road. Practical application for a large-scale road network will be implemented by decentralized modeling approach and distributed observer designing in the future research. PMID:28353664
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-03-29
In this paper, in order to describe complex network systems, we firstly propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily-extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we utilize the PWALS model and the corresponding switched state observer to traffic flow over Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of Beijing third ring road. Practical application for a large-scale road network will be implemented by decentralized modeling approach and distributed observer designing in the future research.
NASA Technical Reports Server (NTRS)
Bardino, J.; Ferziger, J. H.; Reynolds, W. C.
1983-01-01
The physical bases of large eddy simulation and subgrid modeling are studied. A subgrid scale similarity model is developed that can account for system rotation. Large eddy simulations of homogeneous shear flows with system rotation were carried out. Apparently contradictory experimental results were explained. The main effect of rotation is to increase the transverse length scales in the rotation direction, and thereby decrease the rates of dissipation. Experimental results are shown to be affected by conditions at the turbulence producing grid, which make the initial states a function of the rotation rate. A two equation model is proposed that accounts for effects of rotation and shows good agreement with experimental results. In addition, a Reynolds stress model is developed that represents the turbulence structure of homogeneous shear flows very well and can account also for the effects of system rotation.
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.Graphical abstract.
Complex Sequencing Rules of Birdsong Can be Explained by Simple Hidden Markov Processes
Katahira, Kentaro; Suzuki, Kenta; Okanoya, Kazuo; Okada, Masato
2011-01-01
Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors such as human speech and musical performance, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manual-annotated syllable labeles, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one previous syllable. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well known statistical models with a large range of application for time series modeling. The song annotation with these models with first-order hidden state dynamics agreed well with manual annotation, the score was comparable to that of a second-order HMM, and surpassed the zeroth-order model (the Gaussian mixture model; GMM), which does not use context information. Our results imply that the hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex behavioral sequences with higher-order dependencies. PMID:21915345
Estimating Snow Water Storage in North America Using CLM4, DART, and Snow Radiance Data Assimilation
NASA Technical Reports Server (NTRS)
Kwon, Yonghwan; Yang, Zong-Liang; Zhao, Long; Hoar, Timothy J.; Toure, Ally M.; Rodell, Matthew
2016-01-01
This paper addresses continental-scale snow estimates in North America using a recently developed snow radiance assimilation (RA) system. A series of RA experiments with the ensemble adjustment Kalman filter are conducted by assimilating the Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) brightness temperature T(sub B) at 18.7- and 36.5-GHz vertical polarization channels. The overall RA performance in estimating snow depth for North America is improved by simultaneously updating the Community Land Model, version 4 (CLM4), snow/soil states and radiative transfer model (RTM) parameters involved in predicting T(sub B) based on their correlations with the prior T(sub B) (i.e., rule-based RA), although degradations are also observed. The RA system exhibits a more mixed performance for snow cover fraction estimates. Compared to the open-loop run (0.171m RMSE), the overall snow depth estimates are improved by 1.6% (0.168m RMSE) in the rule-based RA whereas the default RA (without a rule) results in a degradation of 3.6% (0.177mRMSE). Significant improvement of the snow depth estimates in the rule-based RA as observed for tundra snow class (11.5%, p < 0.05) and bare soil land-cover type (13.5%, p < 0.05). However, the overall improvement is not significant (p = 0.135) because snow estimates are degraded or marginally improved for other snow classes and land covers, especially the taiga snow class and forest land cover (7.1% and 7.3% degradations, respectively). The current RA system needs to be further refined to enhance snow estimates for various snow types and forested regions.
Janesko, Benjamin G; Scuseria, Gustavo E
2006-09-28
We present a model for electromagnetic enhancements in surface enhanced Raman optical activity (SEROA) spectroscopy. The model extends previous treatments of SEROA to substrates, such as metal nanoparticles in solution, that are orientationally averaged with respect to the laboratory frame. Our theoretical treatment combines analytical expressions for unenhanced Raman optical activity with molecular polarizability tensors that are dressed by the substrate's electromagnetic enhancements. We evaluate enhancements from model substrates to determine preliminary scaling laws and selection rules for SEROA. We find that dipolar substrates enhance Raman optical activity (ROA) scattering less than Raman scattering. Evanescent gradient contributions to orientationally averaged ROA scale to first or higher orders in the gradient of the incident plane-wave field. These evanescent gradient contributions may be large for substrates with quadrupolar responses to the plane-wave field gradient. Some substrates may also show a ROA contribution that depends only on the molecular electric dipole-electric dipole polarizability. These conclusions are illustrated via numerical calculations of surface enhanced Raman and ROA spectra from (R)-(-)-bromochlorofluoromethane on various model substrates.
[Study on Ammonia Emission Rules in a Dairy Feedlot Based on Laser Spectroscopy Detection Method].
He, Ying; Zhang, Yu-jun; You, Kun; Wang, Li-ming; Gao, Yan-wei; Xu, Jin-feng; Gao, Zhi-ling; Ma, Wen-qi
2016-03-01
It needs on-line monitoring of ammonia concentration on dairy feedlot to disclose ammonia emissions characteristics accurately for reducing ammonia emissions and improving the ecological environment. The on-line monitoring system for ammonia concentration has been designed based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) technology combining with long open-path technology, then the study has been carried out with inverse dispersion technique and the system. The ammonia concentration in-situ has been detected and ammonia emission rules have been analyzed on a dairy feedlot in Baoding in autumn and winter of 2013. The monitoring indicated that the peak of ammonia concentration was 6.11 x 10(-6) in autumn, and that was 6.56 x 10(-6) in winter. The concentration results show that the variation of ammonia concentration had an obvious diurnal periodicity, and the general characteristic of diurnal variation was that the concentration was low in the daytime and was high at night. The ammonia emissions characteristic was obtained with inverse dispersion model that the peak of ammonia emissions velocity appeared at noon. The emission velocity was from 1.48 kg/head/hr to 130.6 kg/head/hr in autumn, and it was from 0.004 5 kg/head/hr to 43.32 kg/head/hr in winter which was lower than that in autumn. The results demonstrated ammonia emissions had certain seasonal differences in dairy feedlot scale. In conclusion, the ammonia concentration was detected with optical technology, and the ammonia emissions results were acquired by inverse dispersion model analysis with large range, high sensitivity, quick response without gas sampling. Thus, it's an effective method for ammonia emissions monitoring in dairy feedlot that provides technical support for scientific breeding.
Special Relativity at the Quantum Scale
Lam, Pui K.
2014-01-01
It has been suggested that the space-time structure as described by the theory of special relativity is a macroscopic manifestation of a more fundamental quantum structure (pre-geometry). Efforts to quantify this idea have come mainly from the area of abstract quantum logic theory. Here we present a preliminary attempt to develop a quantum formulation of special relativity based on a model that retains some geometric attributes. Our model is Feynman's “checker-board” trajectory for a 1-D relativistic free particle. We use this model to guide us in identifying (1) the quantum version of the postulates of special relativity and (2) the appropriate quantum “coordinates”. This model possesses a useful feature that it admits an interpretation both in terms of paths in space-time and in terms of quantum states. Based on the quantum version of the postulates, we derive a transformation rule for velocity. This rule reduces to the Einstein's velocity-addition formula in the macroscopic limit and reveals an interesting aspect of time. The 3-D case, time-dilation effect, and invariant interval are also discussed in term of this new formulation. This is a preliminary investigation; some results are derived, while others are interesting observations at this point. PMID:25531675
Special relativity at the quantum scale.
Lam, Pui K
2014-01-01
It has been suggested that the space-time structure as described by the theory of special relativity is a macroscopic manifestation of a more fundamental quantum structure (pre-geometry). Efforts to quantify this idea have come mainly from the area of abstract quantum logic theory. Here we present a preliminary attempt to develop a quantum formulation of special relativity based on a model that retains some geometric attributes. Our model is Feynman's "checker-board" trajectory for a 1-D relativistic free particle. We use this model to guide us in identifying (1) the quantum version of the postulates of special relativity and (2) the appropriate quantum "coordinates". This model possesses a useful feature that it admits an interpretation both in terms of paths in space-time and in terms of quantum states. Based on the quantum version of the postulates, we derive a transformation rule for velocity. This rule reduces to the Einstein's velocity-addition formula in the macroscopic limit and reveals an interesting aspect of time. The 3-D case, time-dilation effect, and invariant interval are also discussed in term of this new formulation. This is a preliminary investigation; some results are derived, while others are interesting observations at this point.
Pattern formation in individual-based systems with time-varying parameters
NASA Astrophysics Data System (ADS)
Ashcroft, Peter; Galla, Tobias
2013-12-01
We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.
A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis
NASA Astrophysics Data System (ADS)
Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.
2006-12-01
Since 2003, NCAR has invested in the development and application of Nested Regional Climate Model (NRCM) based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales Initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on large scale circulation and tropical modes. The model was developed based on the NCAR Weather Research and Forecasting Model (WRF), configured as a tropical channel between 30 ° S and 45 ° N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited area domain that WRF is typically applied over, the channel mode alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing resources on a supercomputer (Blue Vista) during its bedding in period, a simulation has been completed with the tropical channel applied at 36 km horizontal resolution for 5 years from 1996 to 2000, with large scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed to include two-way nests at 12 km and 4 km resolution, respectively, over the western Pacific warm pool, to explicitly resolve tropical convection in the Maritime Continent. The simulations realistically captured the large-scale circulation including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulation, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.
NASA Technical Reports Server (NTRS)
Spinks, Debra (Compiler)
1997-01-01
This report contains the 1997 annual progress reports of the research fellows and students supported by the Center for Turbulence Research (CTR). Titles include: Invariant modeling in large-eddy simulation of turbulence; Validation of large-eddy simulation in a plain asymmetric diffuser; Progress in large-eddy simulation of trailing-edge turbulence and aeronautics; Resolution requirements in large-eddy simulations of shear flows; A general theory of discrete filtering for LES in complex geometry; On the use of discrete filters for large eddy simulation; Wall models in large eddy simulation of separated flow; Perspectives for ensemble average LES; Anisotropic grid-based formulas for subgrid-scale models; Some modeling requirements for wall models in large eddy simulation; Numerical simulation of 3D turbulent boundary layers using the V2F model; Accurate modeling of impinging jet heat transfer; Application of turbulence models to high-lift airfoils; Advances in structure-based turbulence modeling; Incorporating realistic chemistry into direct numerical simulations of turbulent non-premixed combustion; Effects of small-scale structure on turbulent mixing; Turbulent premixed combustion in the laminar flamelet and the thin reaction zone regime; Large eddy simulation of combustion instabilities in turbulent premixed burners; On the generation of vorticity at a free-surface; Active control of turbulent channel flow; A generalized framework for robust control in fluid mechanics; Combined immersed-boundary/B-spline methods for simulations of flow in complex geometries; and DNS of shock boundary-layer interaction - preliminary results for compression ramp flow.
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, previously developed five different state estimation methods are examined and compared for estimation of biomass concentrations at a production scale fed-batch bioprocess. These methods are i. estimation based on kinetic model of overflow metabolism; ii. estimation based on metabolic black-box model; iii. estimation based on observer; iv. estimation based on artificial neural network; v. estimation based on differential evaluation. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages although the number of measurements required is more than that for the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Seoud, Ahmed; Kim, Juhwan; Ma, Yuansheng; Jayaram, Srividya; Hong, Le; Chae, Gyu-Yeol; Lee, Jeong-Woo; Park, Dae-Jin; Yune, Hyoung-Soon; Oh, Se-Young; Park, Chan-Ha
2018-03-01
Sub-resolution assist feature (SRAF) insertion techniques have been effectively used for a long time now to increase process latitude in the lithography patterning process. Rule-based SRAF and model-based SRAF are complementary solutions, and each has its own benefits, depending on the objectives of applications and the criticality of the impact on manufacturing yield, efficiency, and productivity. Rule-based SRAF provides superior geometric output consistency and faster runtime performance, but the associated recipe development time can be of concern. Model-based SRAF provides better coverage for more complicated pattern structures in terms of shapes and sizes, with considerably less time required for recipe development, although consistency and performance may be impacted. In this paper, we introduce a new model-assisted template extraction (MATE) SRAF solution, which employs decision tree learning in a model-based solution to provide the benefits of both rule-based and model-based SRAF insertion approaches. The MATE solution is designed to automate the creation of rules/templates for SRAF insertion, and is based on the SRAF placement predicted by model-based solutions. The MATE SRAF recipe provides optimum lithographic quality in relation to various manufacturing aspects in a very short time, compared to traditional methods of rule optimization. Experiments were done using memory device pattern layouts to compare the MATE solution to existing model-based SRAF and pixelated SRAF approaches, based on lithographic process window quality, runtime performance, and geometric output consistency.
Stream Flow Prediction by Remote Sensing and Genetic Programming
NASA Technical Reports Server (NTRS)
Chang, Ni-Bin
2009-01-01
A genetic programming (GP)-based, nonlinear modeling structure relates soil moisture with synthetic-aperture-radar (SAR) images to present representative soil moisture estimates at the watershed scale. Surface soil moisture measurement is difficult to obtain over a large area due to a variety of soil permeability values and soil textures. Point measurements can be used on a small-scale area, but it is impossible to acquire such information effectively in large-scale watersheds. This model exhibits the capacity to assimilate SAR images and relevant geoenvironmental parameters to measure soil moisture.
Expert system for computer-assisted annotation of MS/MS spectra.
Neuhauser, Nadin; Michalski, Annette; Cox, Jürgen; Mann, Matthias
2012-11-01
An important step in mass spectrometry (MS)-based proteomics is the identification of peptides by their fragment spectra. Regardless of the identification score achieved, almost all tandem-MS (MS/MS) spectra contain remaining peaks that are not assigned by the search engine. These peaks may be explainable by human experts but the scale of modern proteomics experiments makes this impractical. In computer science, Expert Systems are a mature technology to implement a list of rules generated by interviews with practitioners. We here develop such an Expert System, making use of literature knowledge as well as a large body of high mass accuracy and pure fragmentation spectra. Interestingly, we find that even with high mass accuracy data, rule sets can quickly become too complex, leading to over-annotation. Therefore we establish a rigorous false discovery rate, calculated by random insertion of peaks from a large collection of other MS/MS spectra, and use it to develop an optimized knowledge base. This rule set correctly annotates almost all peaks of medium or high abundance. For high resolution HCD data, median intensity coverage of fragment peaks in MS/MS spectra increases from 58% by search engine annotation alone to 86%. The resulting annotation performance surpasses a human expert, especially on complex spectra such as those of larger phosphorylated peptides. Our system is also applicable to high resolution collision-induced dissociation data. It is available both as a part of MaxQuant and via a webserver that only requires an MS/MS spectrum and the corresponding peptides sequence, and which outputs publication quality, annotated MS/MS spectra (www.biochem.mpg.de/mann/tools/). It provides expert knowledge to beginners in the field of MS-based proteomics and helps advanced users to focus on unusual and possibly novel types of fragment ions.
Expert System for Computer-assisted Annotation of MS/MS Spectra*
Neuhauser, Nadin; Michalski, Annette; Cox, Jürgen; Mann, Matthias
2012-01-01
An important step in mass spectrometry (MS)-based proteomics is the identification of peptides by their fragment spectra. Regardless of the identification score achieved, almost all tandem-MS (MS/MS) spectra contain remaining peaks that are not assigned by the search engine. These peaks may be explainable by human experts but the scale of modern proteomics experiments makes this impractical. In computer science, Expert Systems are a mature technology to implement a list of rules generated by interviews with practitioners. We here develop such an Expert System, making use of literature knowledge as well as a large body of high mass accuracy and pure fragmentation spectra. Interestingly, we find that even with high mass accuracy data, rule sets can quickly become too complex, leading to over-annotation. Therefore we establish a rigorous false discovery rate, calculated by random insertion of peaks from a large collection of other MS/MS spectra, and use it to develop an optimized knowledge base. This rule set correctly annotates almost all peaks of medium or high abundance. For high resolution HCD data, median intensity coverage of fragment peaks in MS/MS spectra increases from 58% by search engine annotation alone to 86%. The resulting annotation performance surpasses a human expert, especially on complex spectra such as those of larger phosphorylated peptides. Our system is also applicable to high resolution collision-induced dissociation data. It is available both as a part of MaxQuant and via a webserver that only requires an MS/MS spectrum and the corresponding peptides sequence, and which outputs publication quality, annotated MS/MS spectra (www.biochem.mpg.de/mann/tools/). It provides expert knowledge to beginners in the field of MS-based proteomics and helps advanced users to focus on unusual and possibly novel types of fragment ions. PMID:22888147
Modelling disease outbreaks in realistic urban social networks
NASA Astrophysics Data System (ADS)
Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan
2004-05-01
Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
Diversity, competition, extinction: the ecophysics of language change.
Solé, Ricard V; Corominas-Murtra, Bernat; Fortuny, Jordi
2010-12-06
As indicated early by Charles Darwin, languages behave and change very much like living species. They display high diversity, differentiate in space and time, emerge and disappear. A large body of literature has explored the role of information exchanges and communicative constraints in groups of agents under selective scenarios. These models have been very helpful in providing a rationale on how complex forms of communication emerge under evolutionary pressures. However, other patterns of large-scale organization can be described using mathematical methods ignoring communicative traits. These approaches consider shorter time scales and have been developed by exploiting both theoretical ecology and statistical physics methods. The models are reviewed here and include extinction, invasion, origination, spatial organization, coexistence and diversity as key concepts and are very simple in their defining rules. Such simplicity is used in order to catch the most fundamental laws of organization and those universal ingredients responsible for qualitative traits. The similarities between observed and predicted patterns indicate that an ecological theory of language is emerging, supporting (on a quantitative basis) its ecological nature, although key differences are also present. Here, we critically review some recent advances and outline their implications and limitations as well as highlight problems for future research.
Diversity, competition, extinction: the ecophysics of language change
Solé, Ricard V.; Corominas-Murtra, Bernat; Fortuny, Jordi
2010-01-01
As indicated early by Charles Darwin, languages behave and change very much like living species. They display high diversity, differentiate in space and time, emerge and disappear. A large body of literature has explored the role of information exchanges and communicative constraints in groups of agents under selective scenarios. These models have been very helpful in providing a rationale on how complex forms of communication emerge under evolutionary pressures. However, other patterns of large-scale organization can be described using mathematical methods ignoring communicative traits. These approaches consider shorter time scales and have been developed by exploiting both theoretical ecology and statistical physics methods. The models are reviewed here and include extinction, invasion, origination, spatial organization, coexistence and diversity as key concepts and are very simple in their defining rules. Such simplicity is used in order to catch the most fundamental laws of organization and those universal ingredients responsible for qualitative traits. The similarities between observed and predicted patterns indicate that an ecological theory of language is emerging, supporting (on a quantitative basis) its ecological nature, although key differences are also present. Here, we critically review some recent advances and outline their implications and limitations as well as highlight problems for future research. PMID:20591847
The salt marsh vegetation spread dynamics simulation and prediction based on conditions optimized CA
NASA Astrophysics Data System (ADS)
Guan, Yujuan; Zhang, Liquan
2006-10-01
The biodiversity conservation and management of the salt marsh vegetation relies on processing their spatial information. Nowadays, more attentions are focused on their classification surveying and describing qualitatively dynamics based on RS images interpreted, rather than on simulating and predicting their dynamics quantitatively, which is of greater importance for managing and planning the salt marsh vegetation. In this paper, our notion is to make a dynamic model on large-scale and to provide a virtual laboratory in which researchers can run it according requirements. Firstly, the characteristic of the cellular automata was analyzed and a conclusion indicated that it was necessary for a CA model to be extended geographically under varying conditions of space-time circumstance in order to make results matched the facts accurately. Based on the conventional cellular automata model, the author introduced several new conditions to optimize it for simulating the vegetation objectively, such as elevation, growth speed, invading ability, variation and inheriting and so on. Hence the CA cells and remote sensing image pixels, cell neighbors and pixel neighbors, cell rules and nature of the plants were unified respectively. Taking JiuDuanSha as the test site, where holds mainly Phragmites australis (P.australis) community, Scirpus mariqueter (S.mariqueter) community and Spartina alterniflora (S.alterniflora) community. The paper explored the process of making simulation and predictions about these salt marsh vegetable changing with the conditions optimized CA (COCA) model, and examined the links among data, statistical models, and ecological predictions. This study exploited the potential of applying Conditioned Optimized CA model technique to solve this problem.
A cloud-based framework for large-scale traditional Chinese medical record retrieval.
Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin
2018-01-01
Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important. It relies on the ability to retrieve the complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large- scale medical big data is becoming a big challenge. Therefore, we propose an efficient and robust framework based on cloud for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. We propose a parallel index building method and build a distributed search cluster, the former is used to improve the performance of index building, and the latter is used to provide high concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real-time, and a semantics-based query expansion method and a multi- factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster can improve the performance of index building and provide high concurrent online TCMRs retrieval. The multi-indexing model can ensure the latest relevant TCMRs are indexed and retrieved in real-time. The semantics expansion method and the multi-factor ranking model can enhance retrieval quality. The template-based visualization method can enhance the availability and universality, where the medical reports are displayed via friendly web interface. In conclusion, compared with the current medical record retrieval systems, our system provides some advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in cloud environment. The proposed system is more easily integrated with existing clinical systems and be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting form and/or model involved in reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by the Markov Chain Monte Carlo simulation. A case study of China's the Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rules, is used as the samples to derive the rules; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
Mathematics and morphogenesis of cities: A geometrical approach
NASA Astrophysics Data System (ADS)
Courtat, Thomas; Gloaguen, Catherine; Douady, Stephane
2011-03-01
Cities are living organisms. They are out of equilibrium, open systems that never stop developing and sometimes die. The local geography can be compared to a shell constraining its development. In brief, a city’s current layout is a step in a running morphogenesis process. Thus cities display a huge diversity of shapes and none of the traditional models, from random graphs, complex networks theory, or stochastic geometry, takes into account the geometrical, functional, and dynamical aspects of a city in the same framework. We present here a global mathematical model dedicated to cities that permits describing, manipulating, and explaining cities’ overall shape and layout of their street systems. This street-based framework conciliates the topological and geometrical sides of the problem. From the static analysis of several French towns (topology of first and second order, anisotropy, streets scaling) we make the hypothesis that the development of a city follows a logic of division or extension of space. We propose a dynamical model that mimics this logic and that, from simple general rules and a few parameters, succeeds in generating a large diversity of cities and in reproducing the general features the static analysis has pointed out.
Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xuanhua; Luo, Xuan; Liang, Junling
GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate the iterative convergence. Unfortunately, the consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, coloring algorithm is adopted to separate the vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because of a large number of colors generally required for processing a large-scale graph with billions of vertices. We propose a light-weightmore » asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on Pareto principle (or 80-20 rule) about coloring algorithms as we observed through masses of realworld graph coloring cases. We find that a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated in a very high degree of parallelism without violating the sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (Cusha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC). On all the tested applications and datasets, Frog is able to significantly outperform existing GPU-based graph processing systems except Gunrock and MapGraph. MapGraph gets better performance than Frog when running BFS on RoadNet-CA. The comparison between Gunrock and Frog is inconclusive. Frog can outperform Gunrock more than 1.04X when running PageRank and SSSP, while the advantage of Frog is not obvious when running BFS and CC on some datasets especially for RoadNet-CA.« less
Cortical circuitry implementing graphical models.
Litvak, Shai; Ullman, Shimon
2009-11-01
In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
Robust large-scale parallel nonlinear solvers for simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their usemore » in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.« less
Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?
Lücke, Jörg
2012-01-01
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610
Feedforward inhibition and synaptic scaling--two sides of the same coin?
Keck, Christian; Savin, Cristina; Lücke, Jörg
2012-01-01
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
Bayesian random local clocks, or one rate to rule them all
2010-01-01
Background Relaxed molecular clock models allow divergence time dating and "relaxed phylogenetic" inference, in which a time tree is estimated in the face of unequal rates across lineages. We present a new method for relaxing the assumption of a strict molecular clock using Markov chain Monte Carlo to implement Bayesian modeling averaging over random local molecular clocks. The new method approaches the problem of rate variation among lineages by proposing a series of local molecular clocks, each extending over a subregion of the full phylogeny. Each branch in a phylogeny (subtending a clade) is a possible location for a change of rate from one local clock to a new one. Thus, including both the global molecular clock and the unconstrained model results, there are a total of 22n-2 possible rate models available for averaging with 1, 2, ..., 2n - 2 different rate categories. Results We propose an efficient method to sample this model space while simultaneously estimating the phylogeny. The new method conveniently allows a direct test of the strict molecular clock, in which one rate rules them all, against a large array of alternative local molecular clock models. We illustrate the method's utility on three example data sets involving mammal, primate and influenza evolution. Finally, we explore methods to visualize the complex posterior distribution that results from inference under such models. Conclusions The examples suggest that large sequence datasets may only require a small number of local molecular clocks to reconcile their branch lengths with a time scale. All of the analyses described here are implemented in the open access software package BEAST 1.5.4 (http://beast-mcmc.googlecode.com/). PMID:20807414
Bi-local holography in the SYK model
Jevicki, Antal; Suzuki, Kenta; Yoon, Junggi
2016-07-01
We discuss large N rules of the Sachdev-Ye-Kitaev model and the bi-local representation of holography of this theory. This is done by establishing 1/N Feynman rules in terms of bi-local propagators and vertices, which can be evaluated following the recent procedure of Polchinski and Rosenhaus. Lastly, these rules can be interpreted as Witten type diagrams of the dual AdS theory, which we are able to define at IR fixed point and off.
Li, Yong; Yuan, Gonglin; Wei, Zengxin
2015-01-01
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
NASA Astrophysics Data System (ADS)
Watson, James R.; Stock, Charles A.; Sarmiento, Jorge L.
2015-11-01
Modeling the dynamics of marine populations at a global scale - from phytoplankton to fish - is necessary if we are to quantify how climate change and other broad-scale anthropogenic actions affect the supply of marine-based food. Here, we estimate the abundance and distribution of fish biomass using a simple size-based food web model coupled to simulations of global ocean physics and biogeochemistry. We focus on the spatial distribution of biomass, identifying highly productive regions - shelf seas, western boundary currents and major upwelling zones. In the absence of fishing, we estimate the total ocean fish biomass to be ∼ 2.84 ×109 tonnes, similar to previous estimates. However, this value is sensitive to the choice of parameters, and further, allowing fish to move had a profound impact on the spatial distribution of fish biomass and the structure of marine communities. In particular, when movement is implemented the viable range of large predators is greatly increased, and stunted biomass spectra characterizing large ocean regions in simulations without movement, are replaced with expanded spectra that include large predators. These results highlight the importance of considering movement in global-scale ecological models.
Initial Conceptualization and Application of the Alaska Thermokarst Model
NASA Astrophysics Data System (ADS)
Bolton, W. R.; Lara, M. J.; Genet, H.; Romanovsky, V. E.; McGuire, A. D.
2015-12-01
Thermokarst topography forms whenever ice-rich permafrost thaws and the ground subsides due to the volume loss when ground ice transitions to water. The Alaska Thermokarst Model (ATM) is a large-scale, state-and-transition model designed to simulate transitions between landscape units affected by thermokarst disturbance. The ATM uses a frame-based methodology to track transitions and proportion of cohorts within a 1-km2 grid cell. In the arctic tundra environment, the ATM tracks thermokarst-related transitions among wetland tundra, graminoid tundra, shrub tundra, and thermokarst lakes. In the boreal forest environment, the ATM tracks transitions among forested permafrost plateau, thermokarst lakes, collapse scar fens and bogs. The transition from one cohort to another due to thermokarst processes can take place if thaw reaches ice-rich ground layers either due to pulse disturbance (i.e. large precipitation event or fires), or due to gradual active layer deepening that eventually results in penetration of the protective layer. The protective layer buffers the ice-rich soils from the land surface and is critical to determine how susceptible an area is to thermokarst degradation. The rate of terrain transition in our model is determined by a set of rules that are based upon the ice-content of the soil, the drainage efficiency (or the ability of the landscape to store or transport water), the cumulative probability of thermokarst initiation, distance from rivers, lake dynamics (increasing, decreasing, or stable), and other factors. Tundra types are allowed to transition from one type to another (for example, wetland tundra to graminoid tundra) under favorable climatic conditions. In this study, we present our conceptualization and initial simulation results from in the arctic (the Barrow Peninsula) and boreal (the Tanana Flats) regions of Alaska.
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
Pacaci, Anil; Gonul, Suat; Sinaci, A Anil; Yuksel, Mustafa; Laleci Erturkmen, Gokce B
2018-01-01
Background: Utilization of the available observational healthcare datasets is key to complement and strengthen the postmarketing safety studies. Use of common data models (CDM) is the predominant approach in order to enable large scale systematic analyses on disparate data models and vocabularies. Current CDM transformation practices depend on proprietarily developed Extract-Transform-Load (ETL) procedures, which require knowledge both on the semantics and technical characteristics of the source datasets and target CDM. Purpose: In this study, our aim is to develop a modular but coordinated transformation approach in order to separate semantic and technical steps of transformation processes, which do not have a strict separation in traditional ETL approaches. Such an approach would discretize the operations to extract data from source electronic health record systems, alignment of the source, and target models on the semantic level and the operations to populate target common data repositories. Approach: In order to separate the activities that are required to transform heterogeneous data sources to a target CDM, we introduce a semantic transformation approach composed of three steps: (1) transformation of source datasets to Resource Description Framework (RDF) format, (2) application of semantic conversion rules to get the data as instances of ontological model of the target CDM, and (3) population of repositories, which comply with the specifications of the CDM, by processing the RDF instances from step 2. The proposed approach has been implemented on real healthcare settings where Observational Medical Outcomes Partnership (OMOP) CDM has been chosen as the common data model and a comprehensive comparative analysis between the native and transformed data has been conducted. Results: Health records of ~1 million patients have been successfully transformed to an OMOP CDM based database from the source database. Descriptive statistics obtained from the source and target databases present analogous and consistent results. Discussion and Conclusion: Our method goes beyond the traditional ETL approaches by being more declarative and rigorous. Declarative because the use of RDF based mapping rules makes each mapping more transparent and understandable to humans while retaining logic-based computability. Rigorous because the mappings would be based on computer readable semantics which are amenable to validation through logic-based inference methods.
Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro
2010-06-29
In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows that similar attack rates are obtained for the younger age classes. The good agreement between the two modeling approaches is very important for defining the tradeoff between data availability and the information provided by the models. The results we present define the possibility of hybrid models combining the agent-based and the metapopulation approaches according to the available data and computational resources.
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters as well as species distributions display a spatial variability at different scales. In case measurements are very costly in labour time or money a choice has to be made between a high sampling resolution at small scales and a low spatial cover of the study area or a lower sampling resolution at the small scales resulting in local data uncertainties with a better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large scale studies. When the gathered field data are subsequently used for modelling purposes the choice of sampling design and resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on spatial variation of earthworms at different scales. Therefore we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg in Germany). First of all the field scale abundance of earthworms was estimated using a catchment scale model based on 65 field measurements. Subsequently the high small scale variability was added using semi-variograms, based on five fields with a total of 430 measurements divided in a spatially nested sampling design over these fields, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results of the models show clearly that a large part of the non-explained deviance of the models is due to the very high small scale variability in earthworm abundance: the models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing sampling points per field, we averaged the measured abundance of the sampling within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97 respectively for explained deviance and correlation coefficient. The relationship between number of samplings and performance criteria can be described with a saturation curve. Beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at sampling scale on model performance and the implications for sampling design and assessment of model results as well as ecological inferences.
Analyzing CMOS/SOS fabrication for LSI arrays
NASA Technical Reports Server (NTRS)
Ipri, A. C.
1978-01-01
Report discusses set of design rules that have been developed as result of work with test arrays. Set of optimum dimensions is given that would maximize process output and would correspondingly minimize costs in fabrication of large-scale integration (LSI) arrays.
Orszag Tang vortex - Kinetic study of a turbulent plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, T. N.; Servidio, S.; Shay, M. A.
Kinetic evolution of the Orszag-Tang vortex is studied using collisionless hybrid simulations based on particle in cell ions and fluid electrons. In magnetohydrodynamics (MHD) this configuration leads rapidly to broadband turbulence. An earlier study estimated the dissipation in the system. A comparison of MHD and hybrid simulations showed similar behavior at large scales but substantial differences at small scales. The hybrid magnetic energy spectrum shows a break at the scale where Hall term in the Ohm's law becomes important. The protons heat perpendicularly and most of the energy is dissipated through magnetic interactions. Here, the space time structure of themore » system is studied using frequency-wavenumber (k-omega) decomposition. No clear resonances appear, ruling out the cyclotron resonances as a likely candidate for the perpendicular heating. The only distinguishable wave modes present, which constitute a small percentage of total energy, are magnetosonic modes.« less
Zhang, Yong; Li, Peng; Jin, Yingyezhe; Choe, Yoonsuck
2015-11-01
This paper presents a bioinspired digital liquid-state machine (LSM) for low-power very-large-scale-integration (VLSI)-based machine learning applications. To the best of the authors' knowledge, this is the first work that employs a bioinspired spike-based learning algorithm for the LSM. With the proposed online learning, the LSM extracts information from input patterns on the fly without needing intermediate data storage as required in offline learning methods such as ridge regression. The proposed learning rule is local such that each synaptic weight update is based only upon the firing activities of the corresponding presynaptic and postsynaptic neurons without incurring global communications across the neural network. Compared with the backpropagation-based learning, the locality of computation in the proposed approach lends itself to efficient parallel VLSI implementation. We use subsets of the TI46 speech corpus to benchmark the bioinspired digital LSM. To reduce the complexity of the spiking neural network model without performance degradation for speech recognition, we study the impacts of synaptic models on the fading memory of the reservoir and hence the network performance. Moreover, we examine the tradeoffs between synaptic weight resolution, reservoir size, and recognition performance and present techniques to further reduce the overhead of hardware implementation. Our simulation results show that in terms of isolated word recognition evaluated using the TI46 speech corpus, the proposed digital LSM rivals the state-of-the-art hidden Markov-model-based recognizer Sphinx-4 and outperforms all other reported recognizers including the ones that are based upon the LSM or neural networks.
Modeling emergent large-scale structures of barchan dune fields
NASA Astrophysics Data System (ADS)
Worman, S. L.; Murray, A.; Littlewood, R. C.; Andreotti, B.; Claudin, P.
2013-12-01
In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work, and from field observations: Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; when dunes become sufficiently large, small dunes are born on their downwind sides ('calving'); and when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
NASA Technical Reports Server (NTRS)
Filho, Aluzio Haendehen; Caminada, Numo; Haeusler, Edward Hermann; vonStaa, Arndt
2004-01-01
To support the development of flexible and reusable MAS, we have built a framework designated MAS-CF. MAS-CF is a component framework that implements a layered architecture based on contextual composition. Interaction rules, controlled by architecture mechanisms, ensure very low coupling, making possible the sharing of distributed services in a transparent, dynamic and independent way. These properties propitiate large-scale reuse, since organizational abstractions can be reused and propagated to all instances created from a framework. The objective is to reduce complexity and development time of multi-agent systems through the reuse of generic organizational abstractions.