Laghari, Samreen; Niazi, Muaz A
2016-01-01
Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: Primarily CABC-based modeling approach such as using Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT. Secondly, the specific problem of managing the Carbon footprint can be solved using a multiagent system approach.
Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models
ERIC Educational Resources Information Center
Dickes, Amanda Catherine; Sengupta, Pratim
2013-01-01
In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these…
2016-01-01
Background Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. Method We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results The conducted experiments demonstrated two important results: Primarily CABC-based modeling approach such as using Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT. Secondly, the specific problem of managing the Carbon footprint can be solved using a multiagent system approach. PMID:26812235
Singh, Karandeep; Ahn, Chang-Won; Paik, Euihyun; Bae, Jang Won; Lee, Chun-Hee
2018-01-01
Artificial life (ALife) examines systems related to natural life, its processes, and its evolution, using simulations with computer models, robotics, and biochemistry. In this article, we focus on the computer modeling, or "soft," aspects of ALife and prepare a framework for scientists and modelers to be able to support such experiments. The framework is designed and built to be a parallel as well as distributed agent-based modeling environment, and does not require end users to have expertise in parallel or distributed computing. Furthermore, we use this framework to implement a hybrid model using microsimulation and agent-based modeling techniques to generate an artificial society. We leverage this artificial society to simulate and analyze population dynamics using Korean population census data. The agents in this model derive their decisional behaviors from real data (microsimulation feature) and interact among themselves (agent-based modeling feature) to proceed in the simulation. The behaviors, interactions, and social scenarios of the agents are varied to perform an analysis of population dynamics. We also estimate the future cost of pension policies based on the future population structure of the artificial society. The proposed framework and model demonstrates how ALife techniques can be used by researchers in relation to social issues and policies.
Brief introductory guide to agent-based modeling and an illustration from urban health research.
Auchincloss, Amy H; Garcia, Leandro Martin Totaro
2015-11-01
There is growing interest among urban health researchers in addressing complex problems using conceptual and computation models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received a lot of interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, hindering the understanding and application of it. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation.
Brief introductory guide to agent-based modeling and an illustration from urban health research
Auchincloss, Amy H.; Garcia, Leandro Martin Totaro
2017-01-01
There is growing interest among urban health researchers in addressing complex problems using conceptual and computation models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received a lot of interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, hindering the understanding and application of it. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation. PMID:26648364
An agent-based computational model for tuberculosis spreading on age-structured populations
NASA Astrophysics Data System (ADS)
Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.
2015-06-01
In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease on age-structured populations. The model proposed is a merge of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. The combination of TB with the population aging, reproduces the coexistence of health states, as seen in real populations. In addition, the universal exponential behavior of mortalities curves is still preserved. Finally, the population distribution as function of age shows the prevalence of TB mostly in elders, for high efficacy treatments.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent Base Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper either use computational algorithms or procedure implementations developed by Matlab to simulate agent based models in a principal programming language and mathematical theory using clusters, these clusters work as a high performance computational performance to run the program in parallel computational. In both cases, a model is defined as compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Model reduction for agent-based social simulation: coarse-graining a civil violence model.
Zou, Yu; Fonoberov, Vladimir A; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G
2012-06-01
Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).
Model reduction for agent-based social simulation: Coarse-graining a civil violence model
NASA Astrophysics Data System (ADS)
Zou, Yu; Fonoberov, Vladimir A.; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G.
2012-06-01
Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).
NASA Astrophysics Data System (ADS)
Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.
2017-12-01
This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.
Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.
Lee, Seungcheol Austin; Liang, Yuhua Jake
2015-04-01
Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understand the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with computer request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.
The Agent-based Approach: A New Direction for Computational Models of Development.
ERIC Educational Resources Information Center
Schlesinger, Matthew; Parisi, Domenico
2001-01-01
Introduces the concepts of online and offline sampling and highlights the role of online sampling in agent-based models of learning and development. Compares the strengths of each approach for modeling particular developmental phenomena and research questions. Describes a recent agent-based model of infant causal perception. Discusses limitations…
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.
Unsilencing Critical Conversations in Social-Studies Teacher Education Using Agent-Based Modeling
ERIC Educational Resources Information Center
Hostetler, Andrew; Sengupta, Pratim; Hollett, Ty
2018-01-01
In this article, we argue that when complex sociopolitical issues such as ethnocentrism and racial segregation are represented as complex, emergent systems using agent-based computational models (in short agent-based models or ABMs), discourse about these representations can disrupt social studies teacher candidates' dispositions of teaching…
Dynamic electronic institutions in agent oriented cloud robotic systems.
Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice
2015-01-01
The dot-com bubble bursted in the year 2000 followed by a swift movement towards resource virtualization and cloud computing business model. Cloud computing emerged not as new form of computing or network technology but a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress upon developing robots that utilize computing and service infrastructure of the cloud, without debating on the underlying business model. HTM5 is an OMG's MDA based Meta-model for agent oriented development of cloud robotic systems. The trade-view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions, the process of formation, reformation and dissolution of institutions is automated leading to run time adaptations in groups of agents. DEIs in agent oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through HTM5 methodology.
Protecting software agents from malicious hosts using quantum computing
NASA Astrophysics Data System (ADS)
Reisner, John; Donkor, Eric
2000-07-01
We evaluate how quantum computing can be applied to security problems for software agents. Agent-based computing, which merges technological advances in artificial intelligence and mobile computing, is a rapidly growing domain, especially in applications such as electronic commerce, network management, information retrieval, and mission planning. System security is one of the more eminent research areas in agent-based computing, and the specific problem of protecting a mobile agent from a potentially hostile host is one of the most difficult of these challenges. In this work, we describe our agent model, and discuss the capabilities and limitations of classical solutions to the malicious host problem. Quantum computing may be extremely helpful in addressing the limitations of classical solutions to this problem. This paper highlights some of the areas where quantum computing could be applied to agent security.
NASA Astrophysics Data System (ADS)
Gromek, Katherine Emily
A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real time simulation of system failure processes, so that the system level reliability modeling would constitute inferences from checking the status of component level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in modeling of multi agents systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. Contribution of the current work lies in the further development of the agent anatomy concept, particularly the refined agent classification within the scope of the PoF-based system reliability modeling, new approaches to the learning and the autonomy properties of the intelligent agents, and modeling interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as agent's ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements makes the agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
Hybrid evolutionary computing model for mobile agents of wireless Internet multimedia
NASA Astrophysics Data System (ADS)
Hortos, William S.
2001-03-01
The ecosystem is used as an evolutionary paradigm of natural laws for the distributed information retrieval via mobile agents to allow the computational load to be added to server nodes of wireless networks, while reducing the traffic on communication links. Based on the Food Web model, a set of computational rules of natural balance form the outer stage to control the evolution of mobile agents providing multimedia services with a wireless Internet protocol WIP. The evolutionary model shows how mobile agents should behave with the WIP, in particular, how mobile agents can cooperate, compete and learn from each other, based on an underlying competition for radio network resources to establish the wireless connections to support the quality of service QoS of user requests. Mobile agents are also allowed to clone themselves, propagate and communicate with other agents. A two-layer model is proposed for agent evolution: the outer layer is based on the law of natural balancing, the inner layer is based on a discrete version of a Kohonen self-organizing feature map SOFM to distribute network resources to meet QoS requirements. The former is embedded in the higher OSI layers of the WIP, while the latter is used in the resource management procedures of Layer 2 and 3 of the protocol. Algorithms for the distributed computation of mobile agent evolutionary behavior are developed by adding a learning state to the agent evolution state diagram. When an agent is in an indeterminate state, it can communicate to other agents. Computing models can be replicated from other agents. Then the agents transitions to the mutating state to wait for a new information-retrieval goal. When a wireless terminal or station lacks a network resource, an agent in the suspending state can change its policy to submit to the environment before it transitions to the searching state. The agents learn the facts of agent state information entered into an external database. In the cloning process, two agents on a host station sharing a common goal can be merged or married to compose a new agent. Application of the two-layer set of algorithms for mobile agent evolution, performed in a distributed processing environment, is made to the QoS management functions of the IP multimedia IM sub-network of the third generation 3G Wideband Code-division Multiple Access W-CDMA wireless network.
ERIC Educational Resources Information Center
Dickes, Amanda Catherine; Sengupta, Pratim; Farris, Amy Voss; Satabdi, Basu
2016-01-01
In this paper, we present a third-grade ecology learning environment that integrates two forms of modeling--embodied modeling and agent-based modeling (ABMs)--through the generation of mathematical representations that are common to both forms of modeling. The term "agent" in the context of ABMs indicates individual computational objects…
Scalco, Andrea; Ceschi, Andrea; Sartori, Riccardo
2018-01-01
It is likely that computer simulations will assume a greater role in the next future to investigate and understand reality (Rand & Rust, 2011). Particularly, agent-based models (ABMs) represent a method of investigation of social phenomena that blend the knowledge of social sciences with the advantages of virtual simulations. Within this context, the development of algorithms able to recreate the reasoning engine of autonomous virtual agents represents one of the most fragile aspects and it is indeed crucial to establish such models on well-supported psychological theoretical frameworks. For this reason, the present work discusses the application case of the theory of planned behavior (TPB; Ajzen, 1991) in the context of agent-based modeling: It is argued that this framework might be helpful more than others to develop a valid representation of human behavior in computer simulations. Accordingly, the current contribution considers issues related with the application of the model proposed by the TPB inside computer simulations and suggests potential solutions with the hope to contribute to shorten the distance between the fields of psychology and computer science.
NASA Astrophysics Data System (ADS)
Cenek, Martin; Dahl, Spencer K.
2016-11-01
Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
Cenek, Martin; Dahl, Spencer K
2016-11-01
Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*
Bruch, Elizabeth; Atwell, Jon
2014-01-01
Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351
Agents in bioinformatics, computational and systems biology.
Merelli, Emanuela; Armano, Giuliano; Cannata, Nicola; Corradini, Flavio; d'Inverno, Mark; Doms, Andreas; Lord, Phillip; Martin, Andrew; Milanesi, Luciano; Möller, Steffen; Schroeder, Michael; Luck, Michael
2007-01-01
The adoption of agent technologies and multi-agent systems constitutes an emerging area in bioinformatics. In this article, we report on the activity of the Working Group on Agents in Bioinformatics (BIOAGENTS) founded during the first AgentLink III Technical Forum meeting on the 2nd of July, 2004, in Rome. The meeting provided an opportunity for seeding collaborations between the agent and bioinformatics communities to develop a different (agent-based) approach of computational frameworks both for data analysis and management in bioinformatics and for systems modelling and simulation in computational and systems biology. The collaborations gave rise to applications and integrated tools that we summarize and discuss in context of the state of the art in this area. We investigate on future challenges and argue that the field should still be explored from many perspectives ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages to be used by information agents, and to the adoption of agents for computational grids.
An, Gary; Christley, Scott
2012-01-01
Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggest that this modeling framework is well suited for addressing the translational dilemma. This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
Agent Model Development for Assessing Climate-Induced Geopolitical Instability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boslough, Mark B.; Backus, George A.
2005-12-01
We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such modelsmore » do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various C02 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.3« less
Using Model Replication to Improve the Reliability of Agent-Based Models
NASA Astrophysics Data System (ADS)
Zhong, Wei; Kim, Yushim
The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit for NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.
A hybrid agent-based approach for modeling microbiological systems.
Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing
2008-11-21
Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on Multi-Agent approach often use directly translated, and quantitatively less precise if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10(3) cells and 1.2x10(6) molecules. The model produces cell migration patterns that are comparable to laboratory observations.
NASA Astrophysics Data System (ADS)
Siettos, C. I.; Gear, C. W.; Kevrekidis, I. G.
2012-08-01
We show how the equation-free approach can be exploited to enable agent-based simulators to perform system-level computations such as bifurcation, stability analysis and controller design. We illustrate these tasks through an event-driven agent-based model describing the dynamic behaviour of many interacting investors in the presence of mimesis. Using short bursts of appropriately initialized runs of the detailed, agent-based simulator, we construct the coarse-grained bifurcation diagram of the (expected) density of agents and investigate the stability of its multiple solution branches. When the mimetic coupling between agents becomes strong enough, the stable stationary state loses its stability at a coarse turning point bifurcation. We also demonstrate how the framework can be used to design a wash-out dynamic controller that stabilizes open-loop unstable stationary states even under model uncertainty.
A Novel Machine Learning Classifier Based on a Qualia Modeling Agent (QMA)
Information Theory (IIT) of Consciousness , which proposes that the fundamental structural elements of consciousness are qualia. By modeling the...This research develops a computational agent, which overcomes this problem. The Qualia Modeling Agent (QMA) is modeled after two cognitive theories
NASA Technical Reports Server (NTRS)
Dorais, Gregory A.; Kurien, James; Rajan, Kanna
1999-01-01
We describe the computer demonstration of the Remote Agent Experiment (RAX). The Remote Agent is a high-level, model-based, autonomous control agent being validated on the NASA Deep Space 1 spacecraft.
Agent Models for Self-Motivated Home-Assistant Bots
NASA Astrophysics Data System (ADS)
Merrick, Kathryn; Shafi, Kamran
2010-01-01
Modern society increasingly relies on technology to support everyday activities. In the past, this technology has focused on automation, using computer technology embedded in physical objects. More recently, there is an expectation that this technology will not just embed reactive automation, but also embed intelligent, proactive automation in the environment. That is, there is an emerging desire for novel technologies that can monitor, assist, inform or entertain when required, and not just when requested. This paper presents three self-motivated, home-assistant bot applications using different self-motivated agent models. Self-motivated agents use a computational model of motivation to generate goals proactively. Technologies based on self-motivated agents can thus respond autonomously and proactively to stimuli from their environment. Three prototypes of different self-motivated agent models, using different computational models of motivation, are described to demonstrate these concepts.
Computing with motile bio-agents
NASA Astrophysics Data System (ADS)
Nicolau, Dan V., Jr.; Burrage, Kevin; Nicolau, Dan V.
2007-12-01
We describe a model of computation of the parallel type, which we call 'computing with bio-agents', based on the concept that motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometric nature of the physical structures in which model biological objects move modulates the motions of the latter. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on the basis of this, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they are capable of computing any Boolean function in parallel. We argue also that using appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.
ERIC Educational Resources Information Center
Sengupta, Pratim; Farris, Amy Voss; Wright, Mason
2012-01-01
Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…
Proceedings 3rd NASA/IEEE Workshop on Formal Approaches to Agent-Based Systems (FAABS-III)
NASA Technical Reports Server (NTRS)
Hinchey, Michael (Editor); Rash, James (Editor); Truszkowski, Walt (Editor); Rouff, Christopher (Editor)
2004-01-01
These preceedings contain 18 papers and 4 poster presentation, covering topics such as: multi-agent systems, agent-based control, formalism, norms, as well as physical and biological models of agent-based systems. Some applications presented in the proceedings include systems analysis, software engineering, computer networks and robot control.
Agent-Based Models in Empirical Social Research
ERIC Educational Resources Information Center
Bruch, Elizabeth; Atwell, Jon
2015-01-01
Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first…
Agent-based model to rural urban migration analysis
NASA Astrophysics Data System (ADS)
Silveira, Jaylson J.; Espíndola, Aquino L.; Penna, T. J. P.
2006-05-01
In this paper, we analyze the rural-urban migration phenomenon as it is usually observed in economies which are in the early stages of industrialization. The analysis is conducted by means of a statistical mechanics approach which builds a computational agent-based model. Agents are placed on a lattice and the connections among them are described via an Ising-like model. Simulations on this computational model show some emergent properties that are common in developing economies, such as a transitional dynamics characterized by continuous growth of urban population, followed by the equalization of expected wages between rural and urban sectors (Harris-Todaro equilibrium condition), urban concentration and increasing of per capita income.
Evolvable social agents for bacterial systems modeling.
Paton, Ray; Gregory, Richard; Vlachos, Costas; Saunders, Jon; Wu, Henry
2004-09-01
We present two approaches to the individual-based modeling (IbM) of bacterial ecologies and evolution using computational tools. The IbM approach is introduced, and its important complementary role to biosystems modeling is discussed. A fine-grained model of bacterial evolution is then presented that is based on networks of interactivity between computational objects representing genes and proteins. This is followed by a coarser grained agent-based model, which is designed to explore the evolvability of adaptive behavioral strategies in artificial bacteria represented by learning classifier systems. The structure and implementation of the two proposed individual-based bacterial models are discussed, and some results from simulation experiments are presented, illustrating their adaptive properties.
Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro
2010-06-29
In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows that similar attack rates are obtained for the younger age classes. The good agreement between the two modeling approaches is very important for defining the tradeoff between data availability and the information provided by the models. The results we present define the possibility of hybrid models combining the agent-based and the metapopulation approaches according to the available data and computational resources.
Agent-Based Modeling of Cancer Stem Cell Driven Solid Tumor Growth.
Poleszczuk, Jan; Macklin, Paul; Enderling, Heiko
2016-01-01
Computational modeling of tumor growth has become an invaluable tool to simulate complex cell-cell interactions and emerging population-level dynamics. Agent-based models are commonly used to describe the behavior and interaction of individual cells in different environments. Behavioral rules can be informed and calibrated by in vitro assays, and emerging population-level dynamics may be validated with both in vitro and in vivo experiments. Here, we describe the design and implementation of a lattice-based agent-based model of cancer stem cell driven tumor growth.
Hardware accelerated high performance neutron transport computation based on AGENT methodology
NASA Astrophysics Data System (ADS)
Xiao, Shanjie
The spatial heterogeneity of the next generation Gen-IV nuclear reactor core designs brings challenges to the neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) AGENT code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe the spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, the accurate neutron transport analysis for a full reactor core is still time-consuming and thus limits its application. Therefore, another content of my research is focused on designing a specific hardware based on the reconfigurable computing technique in order to accelerate AGENT computations. It is the first time that the application of this type is used to the reactor physics and neutron transport for reactor design. The most time consuming part of the AGENT algorithm was identified. Moreover, the architecture of the AGENT acceleration system was designed based on the analysis. Through the parallel computation on the specially designed, highly efficient architecture, the acceleration design on FPGA acquires high performance at the much lower working frequency than CPUs. The whole design simulations show that the acceleration design would be able to speedup large scale AGENT computations about 20 times. The high performance AGENT acceleration system will drastically shortening the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus supplies the possibility to extend the application range of neutron transport analysis in either industry engineering or academic research.
ERIC Educational Resources Information Center
Jacobson, Michael J.; Taylor, Charlotte E.; Richards, Deborah
2016-01-01
In this paper, we propose computational scientific inquiry (CSI) as an innovative model for learning important scientific knowledge and new practices for "doing" science. This approach involves the use of a "game-like" virtual world for students to experience virtual biological fieldwork in conjunction with using an agent-based…
NASA Astrophysics Data System (ADS)
Sharpanskykh, Alexei; Treur, Jan
Employing rich internal agent models of actors in large-scale socio-technical systems often results in scalability issues. The problem addressed in this paper is how to improve computational properties of a complex internal agent model, while preserving its behavioral properties. The problem is addressed for the case of an existing affective-cognitive decision making model instantiated for an emergency scenario. For this internal decision model an abstracted behavioral agent model is obtained, which ensures a substantial increase of the computational efficiency at the cost of approximately 1% behavioural error. The abstraction technique used can be applied to a wide range of internal agent models with loops, for example, involving mutual affective-cognitive interactions.
Nature as a network of morphological infocomputational processes for cognitive agents
NASA Astrophysics Data System (ADS)
Dodig-Crnkovic, Gordana
2017-01-01
This paper presents a view of nature as a network of infocomputational agents organized in a dynamical hierarchy of levels. It provides a framework for unification of currently disparate understandings of natural, formal, technical, behavioral and social phenomena based on information as a structure, differences in one system that cause the differences in another system, and computation as its dynamics, i.e. physical process of morphological change in the informational structure. We address some of the frequent misunderstandings regarding the natural/morphological computational models and their relationships to physical systems, especially cognitive systems such as living beings. Natural morphological infocomputation as a conceptual framework necessitates generalization of models of computation beyond the traditional Turing machine model presenting symbol manipulation, and requires agent-based concurrent resource-sensitive models of computation in order to be able to cover the whole range of phenomena from physics to cognition. The central role of agency, particularly material vs. cognitive agency is highlighted.
Agent-Based Modeling in Public Health: Current Applications and Future Directions.
Tracy, Melissa; Cerdá, Magdalena; Keyes, Katherine M
2018-04-01
Agent-based modeling is a computational approach in which agents with a specified set of characteristics interact with each other and with their environment according to predefined rules. We review key areas in public health where agent-based modeling has been adopted, including both communicable and noncommunicable disease, health behaviors, and social epidemiology. We also describe the main strengths and limitations of this approach for questions with public health relevance. Finally, we describe both methodologic and substantive future directions that we believe will enhance the value of agent-based modeling for public health. In particular, advances in model validation, comparisons with other causal modeling procedures, and the expansion of the models to consider comorbidity and joint influences more systematically will improve the utility of this approach to inform public health research, practice, and policy.
Agent-Based Multicellular Modeling for Predictive Toxicology
Biological modeling is a rapidly growing field that has benefited significantly from recent technological advances, expanding traditional methods with greater computing power, parameter-determination algorithms, and the development of novel computational approaches to modeling bi...
An Application of Artificial Intelligence to the Implementation of Electronic Commerce
NASA Astrophysics Data System (ADS)
Srivastava, Anoop Kumar
In this paper, we present an application of Artificial Intelligence (AI) to the implementation of Electronic Commerce. We provide a multi autonomous agent based framework. Our agent based architecture leads to flexible design of a spectrum of multiagent system (MAS) by distributing computation and by providing a unified interface to data and programs. Autonomous agents are intelligent enough and provide autonomy, simplicity of communication, computation, and a well developed semantics. The steps of design and implementation are discussed in depth, structure of Electronic Marketplace, an ontology, the agent model, and interaction pattern between agents is given. We have developed mechanisms for coordination between agents using a language, which is called Virtual Enterprise Modeling Language (VEML). VEML is a integration of Java and Knowledge Query and Manipulation Language (KQML). VEML provides application programmers with potential to globally develop different kinds of MAS based on their requirements and applications. We have implemented a multi autonomous agent based system called VE System. We demonstrate efficacy of our system by discussing experimental results and its salient features.
Simulating Cancer Growth with Multiscale Agent-Based Modeling
Wang, Zhihui; Butner, Joseph D.; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S.
2014-01-01
There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. PMID:24793698
Computational Modeling of Inflammation and Wound Healing
Ziraldo, Cordelia; Mi, Qi; An, Gary; Vodovotz, Yoram
2013-01-01
Objective Inflammation is both central to proper wound healing and a key driver of chronic tissue injury via a positive-feedback loop incited by incidental cell damage. We seek to derive actionable insights into the role of inflammation in wound healing in order to improve outcomes for individual patients. Approach To date, dynamic computational models have been used to study the time evolution of inflammation in wound healing. Emerging clinical data on histo-pathological and macroscopic images of evolving wounds, as well as noninvasive measures of blood flow, suggested the need for tissue-realistic, agent-based, and hybrid mechanistic computational simulations of inflammation and wound healing. Innovation We developed a computational modeling system, Simple Platform for Agent-based Representation of Knowledge, to facilitate the construction of tissue-realistic models. Results A hybrid equation–agent-based model (ABM) of pressure ulcer formation in both spinal cord-injured and -uninjured patients was used to identify control points that reduce stress caused by tissue ischemia/reperfusion. An ABM of arterial restenosis revealed new dynamics of cell migration during neointimal hyperplasia that match histological features, but contradict the currently prevailing mechanistic hypothesis. ABMs of vocal fold inflammation were used to predict inflammatory trajectories in individuals, possibly allowing for personalized treatment. Conclusions The intertwined inflammatory and wound healing responses can be modeled computationally to make predictions in individuals, simulate therapies, and gain mechanistic insights. PMID:24527362
A technology path to tactical agent-based modeling
NASA Astrophysics Data System (ADS)
James, Alex; Hanratty, Timothy P.
2017-05-01
Wargaming is a process of thinking through and visualizing events that could occur during a possible course of action. Over the past 200 years, wargaming has matured into a set of formalized processes. One area of growing interest is the application of agent-based modeling. Agent-based modeling and its additional supporting technologies has potential to introduce a third-generation wargaming capability to the Army, creating a positive overmatch decision-making capability. In its simplest form, agent-based modeling is a computational technique that helps the modeler understand and simulate how the "whole of a system" responds to change over time. It provides a decentralized method of looking at situations where individual agents are instantiated within an environment, interact with each other, and empowered to make their own decisions. However, this technology is not without its own risks and limitations. This paper explores a technology roadmap, identifying research topics that could realize agent-based modeling within a tactical wargaming context.
Multi-Agent Framework for Virtual Learning Spaces.
ERIC Educational Resources Information Center
Sheremetov, Leonid; Nunez, Gustavo
1999-01-01
Discussion of computer-supported collaborative learning, distributed artificial intelligence, and intelligent tutoring systems focuses on the concept of agents, and describes a virtual learning environment that has a multi-agent system. Describes a model of interactions in collaborative learning and discusses agents for Web-based virtual…
ERIC Educational Resources Information Center
Lai, K. Robert; Lan, Chung Hsien
2006-01-01
This work presents a novel method for modeling collaborative learning as multi-issue agent negotiation using fuzzy constraints. Agent negotiation is an iterative process, through which, the proposed method aggregates student marks to reduce personal bias. In the framework, students define individual fuzzy membership functions based on their…
Seal, John B; Alverdy, John C; Zaborina, Olga; An, Gary
2011-09-19
There is a growing realization that alterations in host-pathogen interactions (HPI) can generate disease phenotypes without pathogen invasion. The gut represents a prime region where such HPI can arise and manifest. Under normal conditions intestinal microbial communities maintain a stable, mutually beneficial ecosystem. However, host stress can lead to changes in environmental conditions that shift the nature of the host-microbe dialogue, resulting in escalation of virulence expression, immune activation and ultimately systemic disease. Effective modulation of these dynamics requires the ability to characterize the complexity of the HPI, and dynamic computational modeling can aid in this task. Agent-based modeling is a computational method that is suited to representing spatially diverse, dynamical systems. We propose that dynamic knowledge representation of gut HPI with agent-based modeling will aid in the investigation of the pathogenesis of gut-derived sepsis. An agent-based model (ABM) of virulence regulation in Pseudomonas aeruginosa was developed by translating bacterial and host cell sense-and-response mechanisms into behavioral rules for computational agents and integrated into a virtual environment representing the host-microbe interface in the gut. The resulting gut milieu ABM (GMABM) was used to: 1) investigate a potential clinically relevant laboratory experimental condition not yet developed--i.e. non-lethal transient segmental intestinal ischemia, 2) examine the sufficiency of existing hypotheses to explain experimental data--i.e. lethality in a model of major surgical insult and stress, and 3) produce behavior to potentially guide future experimental design--i.e. suggested sample points for a potential laboratory model of non-lethal transient intestinal ischemia. Furthermore, hypotheses were generated to explain certain discrepancies between the behaviors of the GMABM and biological experiments, and new investigatory avenues proposed to test those hypotheses. Agent-based modeling can account for the spatio-temporal dynamics of an HPI, and, even when carried out with a relatively high degree of abstraction, can be useful in the investigation of system-level consequences of putative mechanisms operating at the individual agent level. We suggest that an integrated and iterative heuristic relationship between computational modeling and more traditional laboratory and clinical investigations, with a focus on identifying useful and sufficient degrees of abstraction, will enhance the efficiency and translational productivity of biomedical research.
2011-01-01
Background There is a growing realization that alterations in host-pathogen interactions (HPI) can generate disease phenotypes without pathogen invasion. The gut represents a prime region where such HPI can arise and manifest. Under normal conditions intestinal microbial communities maintain a stable, mutually beneficial ecosystem. However, host stress can lead to changes in environmental conditions that shift the nature of the host-microbe dialogue, resulting in escalation of virulence expression, immune activation and ultimately systemic disease. Effective modulation of these dynamics requires the ability to characterize the complexity of the HPI, and dynamic computational modeling can aid in this task. Agent-based modeling is a computational method that is suited to representing spatially diverse, dynamical systems. We propose that dynamic knowledge representation of gut HPI with agent-based modeling will aid in the investigation of the pathogenesis of gut-derived sepsis. Methodology/Principal Findings An agent-based model (ABM) of virulence regulation in Pseudomonas aeruginosa was developed by translating bacterial and host cell sense-and-response mechanisms into behavioral rules for computational agents and integrated into a virtual environment representing the host-microbe interface in the gut. The resulting gut milieu ABM (GMABM) was used to: 1) investigate a potential clinically relevant laboratory experimental condition not yet developed - i.e. non-lethal transient segmental intestinal ischemia, 2) examine the sufficiency of existing hypotheses to explain experimental data - i.e. lethality in a model of major surgical insult and stress, and 3) produce behavior to potentially guide future experimental design - i.e. suggested sample points for a potential laboratory model of non-lethal transient intestinal ischemia. Furthermore, hypotheses were generated to explain certain discrepancies between the behaviors of the GMABM and biological experiments, and new investigatory avenues proposed to test those hypotheses. Conclusions/Significance Agent-based modeling can account for the spatio-temporal dynamics of an HPI, and, even when carried out with a relatively high degree of abstraction, can be useful in the investigation of system-level consequences of putative mechanisms operating at the individual agent level. We suggest that an integrated and iterative heuristic relationship between computational modeling and more traditional laboratory and clinical investigations, with a focus on identifying useful and sufficient degrees of abstraction, will enhance the efficiency and translational productivity of biomedical research. PMID:21929759
SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.
Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi
2010-01-01
Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank
2017-01-01
Agent-based models provide a formidable tool for exploring complex and emergent behaviour of biological systems as well as accurate results but with the drawback of needing a lot of computational power and time for subsequent analysis. On the other hand, equation-based models can more easily be used for complex analysis in a much shorter timescale. This paper formulates an ordinary differential equations and stochastic differential equations model to capture the behaviour of an existing agent-based model of tumour cell reprogramming and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
ERIC Educational Resources Information Center
Cangelosi, Angelo
2007-01-01
In this paper we present the "grounded adaptive agent" computational framework for studying the emergence of communication and language. This modeling framework is based on simulations of population of cognitive agents that evolve linguistic capabilities by interacting with their social and physical environment (internal and external symbol…
Agent-Based Computing in Distributed Adversarial Planning
2010-08-09
plans. An agent is expected to agree to deviate from its optimal uncoordinated plan only if it improves its position. - process models for opponent...Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.2 Improvements ...plan only if it improves its position. – process models for opponent modeling – We have analyzed the suitability of business process models for creating
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
Numerical Problems and Agent-Based Models for a Mass Transfer Course
ERIC Educational Resources Information Center
Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.
2009-01-01
Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLABTM. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…
Niazi, Muaz A
2014-01-01
The body structure of snakes is composed of numerous natural components thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single or a relatively smaller number of components. As a result, not only these artificial structures are constrained by the dimensions of the constituent components but often also require relatively more computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving a minimal of interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications such as in the form of complex, autonomous, evolvable robots with self-organizing, mobile components with minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems.
Niazi, Muaz A.
2014-01-01
The body structure of snakes is composed of numerous natural components thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single or a relatively smaller number of components. As a result, not only these artificial structures are constrained by the dimensions of the constituent components but often also require relatively more computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving a minimal of interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications such as in the form of complex, autonomous, evolvable robots with self-organizing, mobile components with minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems. PMID:24701135
Simulating cancer growth with multiscale agent-based modeling.
Wang, Zhihui; Butner, Joseph D; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S
2015-02-01
There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Quantitative characterization of cellular dose in vitro is needed for alignment of doses in vitro and in vivo. We used the agent-based software, CompuCell3D (CC3D), to provide a stochastic description of cell growth in culture. The model was configured so that isolated cells assu...
Intelligent judgements over health risks in a spatial agent-based model.
Abdulkareem, Shaheen A; Augustijn, Ellen-Wien; Mustafa, Yaseen T; Filatova, Tatiana
2018-03-20
Millions of people worldwide are exposed to deadly infectious diseases on a regular basis. Breaking news of the Zika outbreak for instance, made it to the main media titles internationally. Perceiving disease risks motivate people to adapt their behavior toward a safer and more protective lifestyle. Computational science is instrumental in exploring patterns of disease spread emerging from many individual decisions and interactions among agents and their environment by means of agent-based models. Yet, current disease models rarely consider simulating dynamics in risk perception and its impact on the adaptive protective behavior. Social sciences offer insights into individual risk perception and corresponding protective actions, while machine learning provides algorithms and methods to capture these learning processes. This article presents an innovative approach to extend agent-based disease models by capturing behavioral aspects of decision-making in a risky context using machine learning techniques. We illustrate it with a case of cholera in Kumasi, Ghana, accounting for spatial and social risk factors that affect intelligent behavior and corresponding disease incidents. The results of computational experiments comparing intelligent with zero-intelligent representations of agents in a spatial disease agent-based model are discussed. We present a spatial disease agent-based model (ABM) with agents' behavior grounded in Protection Motivation Theory. Spatial and temporal patterns of disease diffusion among zero-intelligent agents are compared to those produced by a population of intelligent agents. Two Bayesian Networks (BNs) designed and coded using R and are further integrated with the NetLogo-based Cholera ABM. The first is a one-tier BN1 (only risk perception), the second is a two-tier BN2 (risk and coping behavior). We run three experiments (zero-intelligent agents, BN1 intelligence and BN2 intelligence) and report the results per experiment in terms of several macro metrics of interest: an epidemic curve, a risk perception curve, and a distribution of different types of coping strategies over time. Our results emphasize the importance of integrating behavioral aspects of decision making under risk into spatial disease ABMs using machine learning algorithms. This is especially relevant when studying cumulative impacts of behavioral changes and possible intervention strategies.
Welch, M C; Kwan, P W; Sajeev, A S M
2014-10-01
Agent-based modelling has proven to be a promising approach for developing rich simulations for complex phenomena that provide decision support functions across a broad range of areas including biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries can be materialised. Copyright © 2014 International Atomic Energy Agency 2014. Published by Elsevier B.V. All rights reserved.
An agent-based computational model of the spread of tuberculosis
NASA Astrophysics Data System (ADS)
de Espíndola, Aquino L.; Bauch, Chris T.; Troca Cabella, Brenno C.; Souto Martinez, Alexandre
2011-05-01
In this work we propose an alternative model of the spread of tuberculosis (TB) and the emergence of drug resistance due to the treatment with antibiotics. We implement the simulations by an agent-based model computational approach where the spatial structure is taken into account. The spread of tuberculosis occurs according to probabilities defined by the interactions among individuals. The model was validated by reproducing results already known from the literature in which different treatment regimes yield the emergence of drug resistance. The different patterns of TB spread can be visualized at any time of the system evolution. The implementation details as well as some results of this alternative approach are discussed.
CulSim: A simulator of emergence and resilience of cultural diversity
NASA Astrophysics Data System (ADS)
Ulloa, Roberto
CulSim is an agent-based computer simulation software that allows further exploration of influential and recent models of emergence of cultural groups grounded in sociological theories. CulSim provides a collection of tools to analyze resilience of cultural diversity when events affect agents, institutions or global parameters of the simulations; upon combination, events can be used to approximate historical circumstances. The software provides a graphical and text-based user interface, and so makes this agent-based modeling methodology accessible to a variety of users from different research fields.
Riaz, Faisal; Niazi, Muaz A
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize such Autonomous Vehicles (AVs), which interacts with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing and copying the actions of each other, i.e. mirroring. Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of mentalizing and mirroring modules of proposed social agent, a tailored mathematical model of the Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels-firstly it has been simulated using NetLogo, a standard agent-based modeling tool and also, at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than Random walk based collision avoidance strategy in congested flock-like topologies. Whereas practical results have confirmed that the proposed scheme can avoid rear end and lateral collisions with the efficiency of 99.876% as compared with the IEEE 802.11n-based existing state of the art mirroring neuron-based collision avoidance scheme.
Niazi, Muaz A.
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize such Autonomous Vehicles (AVs), which interacts with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing and copying the actions of each other, i.e. mirroring. Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of mentalizing and mirroring modules of proposed social agent, a tailored mathematical model of the Richardson’s arms race model has also been presented. The performance of the proposed social agent has been validated at two levels–firstly it has been simulated using NetLogo, a standard agent-based modeling tool and also, at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than Random walk based collision avoidance strategy in congested flock-like topologies. Whereas practical results have confirmed that the proposed scheme can avoid rear end and lateral collisions with the efficiency of 99.876% as compared with the IEEE 802.11n-based existing state of the art mirroring neuron-based collision avoidance scheme. PMID:29040294
Learning from Multiple Collaborating Intelligent Tutors: An Agent-based Approach.
ERIC Educational Resources Information Center
Solomos, Konstantinos; Avouris, Nikolaos
1999-01-01
Describes an open distributed multi-agent tutoring system (MATS) and discusses issues related to learning in such open environments. Topics include modeling a one student-many teachers approach in a computer-based learning context; distributed artificial intelligence; implementation issues; collaboration; and user interaction. (Author/LRW)
Action Understanding as Inverse Planning
ERIC Educational Resources Information Center
Baker, Chris L.; Saxe, Rebecca; Tenenbaum, Joshua B.
2009-01-01
Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the…
Dynamic Simulation of Crime Perpetration and Reporting to Examine Community Intervention Strategies
ERIC Educational Resources Information Center
Yonas, Michael A.; Burke, Jessica G.; Brown, Shawn T.; Borrebach, Jeffrey D.; Garland, Richard; Burke, Donald S.; Grefenstette, John J.
2013-01-01
Objective: To develop a conceptual computational agent-based model (ABM) to explore community-wide versus spatially focused crime reporting interventions to reduce community crime perpetrated by youth. Method: Agents within the model represent individual residents and interact on a two-dimensional grid representing an abstract nonempirically…
NASA Astrophysics Data System (ADS)
Blikstein, Paulo
The goal of this dissertation is to explore relations between content, representation, and pedagogy, so as to understand the impact of the nascent field of complexity sciences on science, technology, engineering and mathematics (STEM) learning. Wilensky & Papert coined the term "structurations" to express the relationship between knowledge and its representational infrastructure. A change from one representational infrastructure to another they call a "restructuration." The complexity sciences have introduced a novel and powerful structuration: agent-based modeling. In contradistinction to traditional mathematical modeling, which relies on equational descriptions of macroscopic properties of systems, agent-based modeling focuses on a few archetypical micro-behaviors of "agents" to explain emergent macro-behaviors of the agent collective. Specifically, this dissertation is about a series of studies of undergraduate students' learning of materials science, in which two structurations are compared (equational and agent-based), consisting of both design research and empirical evaluation. I have designed MaterialSim, a constructionist suite of computer models, supporting materials and learning activities designed within the approach of agent-based modeling, and over four years conducted an empirical inves3 tigation of an undergraduate materials science course. The dissertation is comprised of three studies: Study 1 - diagnosis . I investigate current representational and pedagogical practices in engineering classrooms. Study 2 - laboratory studies. I investigate the cognition of students engaging in scientific inquiry through programming their own scientific models. Study 3 - classroom implementation. I investigate the characteristics, advantages, and trajectories of scientific content knowledge that is articulated in epistemic forms and representational infrastructures unique to complexity sciences, as well as the feasibility of the integration of constructionist, agent-based learning environments in engineering classrooms. Data sources include classroom observations, interviews, videotaped sessions of model-building, questionnaires, analysis of computer-generated logfiles, and quantitative and qualitative analysis of artifacts. Results shows that (1) current representational and pedagogical practices in engineering classrooms were not up to the challenge of the complex content being taught, (2) by building their own scientific models, students developed a deeper understanding of core scientific concepts, and learned how to better identify unifying principles and behaviors in materials science, and (3) programming computer models was feasible within a regular engineering classroom.
Research on monocentric model of urbanization by agent-based simulation
NASA Astrophysics Data System (ADS)
Xue, Ling; Yang, Kaizhong
2008-10-01
Over the past years, GIS have been widely used for modeling urbanization from a variety of perspectives such as digital terrain representation and overlay analysis using cell-based data platform. Similarly, simulation of urban dynamics has been achieved with the use of Cellular Automata. In contrast to these approaches, agent-based simulation provides a much more powerful set of tools. This allows researchers to set up a counterpart for real environmental and urban systems in computer for experimentation and scenario analysis. This Paper basically reviews the research on the economic mechanism of urbanization and an agent-based monocentric model is setup for further understanding the urbanization process and mechanism in China. We build an endogenous growth model with dynamic interactions between spatial agglomeration and urban development by using agent-based simulation. It simulates the migration decisions of two main types of agents, namely rural and urban households between rural and urban area. The model contains multiple economic interactions that are crucial in understanding urbanization and industrial process in China. These adaptive agents can adjust their supply and demand according to the market situation by a learning algorithm. The simulation result shows this agent-based urban model is able to perform the regeneration and to produce likely-to-occur projections of reality.
Agent-based models in translational systems biology
An, Gary; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram
2013-01-01
Effective translational methodologies for knowledge representation are needed in order to make strides against the constellation of diseases that affect the world today. These diseases are defined by their mechanistic complexity, redundancy, and nonlinearity. Translational systems biology aims to harness the power of computational simulation to streamline drug/device design, simulate clinical trials, and eventually to predict the effects of drugs on individuals. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggests that this modeling framework is well suited for translational systems biology. This review describes agent-based modeling and gives examples of its translational applications in the context of acute inflammation and wound healing. PMID:20835989
A Comparison of Computational Cognitive Models: Agent-Based Systems Versus Rule-Based Architectures
2003-03-01
Java™ How To Program , Prentice Hall, 1999. Friedman-Hill, E., Jess, The Expert System Shell for the Java Platform, Sandia National Laboratories, 2001...transition from the descriptive NDM theory to a computational model raises several questions: Who is an experienced decision maker? How do you model the...progression from being a novice to an experienced decision maker? How does the model account for previous experiences? Are there situations where
Use of agents to implement an integrated computing environment
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application to both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
Graceful Failure and Societal Resilience Analysis Via Agent-Based Modeling and Simulation
NASA Astrophysics Data System (ADS)
Schopf, P. S.; Cioffi-Revilla, C.; Rogers, J. D.; Bassett, J.; Hailegiorgis, A. B.
2014-12-01
Agent-based social modeling is opening up new methodologies for the study of societal response to weather and climate hazards, and providing measures of resiliency that can be studied in many contexts, particularly in coupled human and natural-technological systems (CHANTS). Since CHANTS are complex adaptive systems, societal resiliency may or may not occur, depending on dynamics that lack closed form solutions. Agent-based modeling has been shown to provide a viable theoretical and methodological approach for analyzing and understanding disasters and societal resiliency in CHANTS. Our approach advances the science of societal resilience through computational modeling and simulation methods that complement earlier statistical and mathematical approaches. We present three case studies of social dynamics modeling that demonstrate the use of these agent based models. In Central Asia, we exmaine mutltiple ensemble simulations with varying climate statistics to see how droughts and zuds affect populations, transmission of wealth across generations, and the overall structure of the social system. In Eastern Africa, we explore how successive episodes of drought events affect the adaptive capacity of rural households. Human displacement, mainly, rural to urban migration, and livelihood transition particularly from pastoral to farming are observed as rural households interacting dynamically with the biophysical environment and continually adjust their behavior to accommodate changes in climate. In the far north case we demonstrate one of the first successful attempts to model the complete climate-permafrost-infrastructure-societal interaction network as a complex adaptive system/CHANTS implemented as a ``federated'' agent-based model using evolutionary computation. Analysis of population changes resulting from extreme weather across these and other cases provides evidence for the emergence of new steady states and shifting patterns of resilience.
High performance cellular level agent-based simulation with FLAME for the GPU.
Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela
2010-05-01
Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
Agent-based modeling of the spread of the 1918-1919 flu in three Canadian fur trading communities.
O'Neil, Caroline A; Sattenspiel, Lisa
2010-01-01
Previous attempts to study the 1918-1919 flu in three small communities in central Manitoba have used both three-community population-based and single-community agent-based models. These studies identified critical factors influencing epidemic spread, but they also left important questions unanswered. The objective of this project was to design a more realistic agent-based model that would overcome limitations of earlier models and provide new insights into these outstanding questions. The new model extends the previous agent-based model to three communities so that results can be compared to those from the population-based model. Sensitivity testing was conducted, and the new model was used to investigate the influence of seasonal settlement and mobility patterns, the geographic heterogeneity of the observed 1918-1919 epidemic in Manitoba, and other questions addressed previously. Results confirm outcomes from the population-based model that suggest that (a) social organization and mobility strongly influence the timing and severity of epidemics and (b) the impact of the epidemic would have been greater if it had arrived in the summer rather than the winter. New insights from the model suggest that the observed heterogeneity among communities in epidemic impact was not unusual and would have been the expected outcome given settlement structure and levels of interaction among communities. Application of an agent-based computer simulation has helped to better explain observed patterns of spread of the 1918-1919 flu epidemic in central Manitoba. Contrasts between agent-based and population-based models illustrate the advantages of agent-based models for the study of small populations. © 2010 Wiley-Liss, Inc.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
IMAGE: A Design Integration Framework Applied to the High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1993-01-01
Effective design of the High Speed Civil Transport requires the systematic application of design resources throughout a product's life-cycle. Information obtained from the use of these resources is used for the decision-making processes of Concurrent Engineering. Integrated computing environments facilitate the acquisition, organization, and use of required information. State-of-the-art computing technologies provide the basis for the Intelligent Multi-disciplinary Aircraft Generation Environment (IMAGE) described in this paper. IMAGE builds upon existing agent technologies by adding a new component called a model. With the addition of a model, the agent can provide accountable resource utilization in the presence of increasing design fidelity. The development of a zeroth-order agent is used to illustrate agent fundamentals. Using a CATIA(TM)-based agent from previous work, a High Speed Civil Transport visualization system linking CATIA, FLOPS, and ASTROS will be shown. These examples illustrate the important role of the agent technologies used to implement IMAGE, and together they demonstrate that IMAGE can provide an integrated computing environment for the design of the High Speed Civil Transport.
Modelling brain emergent behaviours through coevolution of neural agents.
Maniadakis, Michail; Trahanias, Panos
2006-06-01
Recently, many research efforts focus on modelling partial brain areas with the long-term goal to support cognitive abilities of artificial organisms. Existing models usually suffer from heterogeneity, which constitutes their integration very difficult. The present work introduces a computational framework to address brain modelling tasks, emphasizing on the integrative performance of substructures. Moreover, implemented models are embedded in a robotic platform to support its behavioural capabilities. We follow an agent-based approach in the design of substructures to support the autonomy of partial brain structures. Agents are formulated to allow the emergence of a desired behaviour after a certain amount of interaction with the environment. An appropriate collaborative coevolutionary algorithm, able to emphasize both the speciality of brain areas and their cooperative performance, is employed to support design specification of agent structures. The effectiveness of the proposed approach is illustrated through the implementation of computational models for motor cortex and hippocampus, which are successfully tested on a simulated mobile robot.
Organization of the secure distributed computing based on multi-agent system
NASA Astrophysics Data System (ADS)
Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera
2018-04-01
Nowadays developing methods for distributed computing is received much attention. One of the methods of distributed computing is using of multi-agent systems. The organization of distributed computing based on the conventional network computers can experience security threats performed by computational processes. Authors have developed the unified agent algorithm of control system of computing network nodes operation. Network PCs is used as computing nodes. The proposed multi-agent control system for the implementation of distributed computing allows in a short time to organize using of the processing power of computers any existing network to solve large-task by creating a distributed computing. Agents based on a computer network can: configure a distributed computing system; to distribute the computational load among computers operated agents; perform optimization distributed computing system according to the computing power of computers on the network. The number of computers connected to the network can be increased by connecting computers to the new computer system, which leads to an increase in overall processing power. Adding multi-agent system in the central agent increases the security of distributed computing. This organization of the distributed computing system reduces the problem solving time and increase fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). Developed a multi-agent system detects cases of falsification of the results of a distributed system, which may lead to wrong decisions. In addition, the system checks and corrects wrong results.
Integrating GIS and ABM to Explore Spatiotemporal Dynamics
NASA Astrophysics Data System (ADS)
Sun, M.; Jiang, Y.; Yang, C.
2013-12-01
Agent-based modeling as a methodology for the bottom-up exploration with the account of adaptive behavior and heterogeneity of system components can help discover the development and pattern of the complex social and environmental system. However, ABM is a computationally intensive process especially when the number of system components becomes large and the agent-agent/agent-environmental interaction is modeled very complex. Most of traditional ABM frameworks developed based on CPU do not have a satisfying computing capacity. To address the problem and as the emergence of advanced techniques, GPU computing with CUDA can provide powerful parallel structure to enable the complex simulation of spatiotemporal dynamics. In this study, we first develop a GPU-based ABM system. Secondly, in order to visualize the dynamics generated from the movement of agent and the change of agent/environmental attributes during the simulation, we integrate GIS into the ABM system. Advanced geovisualization technologies can be utilized for representing the spatiotemporal change events, such as proper 2D/3D maps with state-of-the-art symbols, space-time cube and multiple layers each of which presents pattern in one time-stamp, etc. Thirdly, visual analytics which include interactive tools (e.g. grouping, filtering, linking, etc.) is included in our ABM-GIS system to help users conduct real-time data exploration during the progress of simulation. Analysis like flow analysis and spatial cluster analysis can be integrated according to the geographical problem we want to explore.
Mechanistic modeling of developmental defects through computational embryology (WC10th)
Abstract: An important consideration for 3Rs is to identify developmental hazards utilizing mechanism-based in vitro assays (e.g., ToxCast) and in silico predictive models. Steady progress has been made with agent-based models that recapitulate morphogenetic drivers for angiogen...
NASA Astrophysics Data System (ADS)
Pappalardo, Francesco; Pennisi, Marzio
2016-07-01
Fibrosis represents a process where an excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve the understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give us a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent based models (ABM) can give a valuable contribution in the understanding and better management of fibrotic diseases.
Projective simulation for artificial intelligence
NASA Astrophysics Data System (ADS)
Briegel, Hans J.; de Las Cuevas, Gemma
2012-05-01
We propose a model of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied cognitive science approach to intelligent action and learning. Our model provides a natural route for generalization to quantum-mechanical operation and connects the fields of reinforcement learning and quantum computation.
Projective simulation for artificial intelligence
Briegel, Hans J.; De las Cuevas, Gemma
2012-01-01
We propose a model of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied cognitive science approach to intelligent action and learning. Our model provides a natural route for generalization to quantum-mechanical operation and connects the fields of reinforcement learning and quantum computation. PMID:22590690
DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.
Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and stillmore » serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.« less
Parallel Agent-Based Simulations on Clusters of GPUs and Multi-Core Processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaby, Brandon G; Perumalla, Kalyan S; Seal, Sudip K
2010-01-01
An effective latency-hiding mechanism is presented in the parallelization of agent-based model simulations (ABMS) with millions of agents. The mechanism is designed to accommodate the hierarchical organization as well as heterogeneity of current state-of-the-art parallel computing platforms. We use it to explore the computation vs. communication trade-off continuum available with the deep computational and memory hierarchies of extant platforms and present a novel analytical model of the tradeoff. We describe our implementation and report preliminary performance results on two distinct parallel platforms suitable for ABMS: CUDA threads on multiple, networked graphical processing units (GPUs), and pthreads on multi-core processors. Messagemore » Passing Interface (MPI) is used for inter-GPU as well as inter-socket communication on a cluster of multiple GPUs and multi-core processors. Results indicate the benefits of our latency-hiding scheme, delivering as much as over 100-fold improvement in runtime for certain benchmark ABMS application scenarios with several million agents. This speed improvement is obtained on our system that is already two to three orders of magnitude faster on one GPU than an equivalent CPU-based execution in a popular simulator in Java. Thus, the overall execution of our current work is over four orders of magnitude faster when executed on multiple GPUs.« less
Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection
Jones, Douglas E.; Dorman, Karin S.
2009-01-01
Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separated testing of all configurations with the-server parameter de/activated, altogether, 12800 data points were collected and consequently analyzed. An illustrational decision-making scenario was used which allows the mutual comparison of all of the selected decision making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
Pragmatically Framed Cross-Situational Noun Learning Using Computational Reinforcement Models
Najnin, Shamima; Banerjee, Bonny
2018-01-01
Cross-situational learning and social pragmatic theories are prominent mechanisms for learning word meanings (i.e., word-object pairs). In this paper, the role of reinforcement is investigated for early word-learning by an artificial agent. When exposed to a group of speakers, the agent comes to understand an initial set of vocabulary items belonging to the language used by the group. Both cross-situational learning and social pragmatic theory are taken into account. As social cues, joint attention and prosodic cues in caregiver's speech are considered. During agent-caregiver interaction, the agent selects a word from the caregiver's utterance and learns the relations between that word and the objects in its visual environment. The “novel words to novel objects” language-specific constraint is assumed for computing rewards. The models are learned by maximizing the expected reward using reinforcement learning algorithms [i.e., table-based algorithms: Q-learning, SARSA, SARSA-λ, and neural network-based algorithms: Q-learning for neural network (Q-NN), neural-fitted Q-network (NFQ), and deep Q-network (DQN)]. Neural network-based reinforcement learning models are chosen over table-based models for better generalization and quicker convergence. Simulations are carried out using mother-infant interaction CHILDES dataset for learning word-object pairings. Reinforcement is modeled in two cross-situational learning cases: (1) with joint attention (Attentional models), and (2) with joint attention and prosodic cues (Attentional-prosodic models). Attentional-prosodic models manifest superior performance to Attentional ones for the task of word-learning. The Attentional-prosodic DQN outperforms existing word-learning models for the same task. PMID:29441027
Reducing the Complexity of an Agent-Based Local Heroin Market Model
Heard, Daniel; Bobashev, Georgiy V.; Morris, Robert J.
2014-01-01
This project explores techniques for reducing the complexity of an agent-based model (ABM). The analysis involved a model developed from the ethnographic research of Dr. Lee Hoffer in the Larimer area heroin market, which involved drug users, drug sellers, homeless individuals and police. The authors used statistical techniques to create a reduced version of the original model which maintained simulation fidelity while reducing computational complexity. This involved identifying key summary quantities of individual customer behavior as well as overall market activity and replacing some agents with probability distributions and regressions. The model was then extended to allow external market interventions in the form of police busts. Extensions of this research perspective, as well as its strengths and limitations, are discussed. PMID:25025132
Videos | Argonne National Laboratory
science --Agent-based modeling --Applied mathematics --Artificial intelligence --Cloud computing management -Intelligence & counterterrorrism -Vulnerability assessment -Sensors & detectors Programs
Computational Model for Ethnographically Informed Systems Design
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; James, Anne; Shah, Nazaraf; Terken, Jacuqes
This paper presents a computational model for ethnographically informed systems design that can support complex and distributed cooperative activities. This model is based on an ethnographic framework consisting of three important dimensions (e.g., distributed coordination, awareness of work and plans and procedure), and the BDI (Belief, Desire and Intention) model of intelligent agents. The ethnographic framework is used to conduct ethnographic analysis and to organise ethnographically driven information into three dimensions, whereas the BDI model allows such information to be mapped upon the underlying concepts of multi-agent systems. The advantage of this model is that it is built upon an adaptation of existing mature and well-understood techniques. By the use of this model, we also address the cognitive aspects of systems design.
NASA Astrophysics Data System (ADS)
Hibbard, Bill
2012-05-01
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
Agent-Based Computational Modeling to Examine How Individual Cell Morphology Affects Dosimetry
Cell-based models utilizing high-content screening (HCS) data have applications for predictive toxicology. Evaluating concentration-dependent effects on cell fate and state response is a fundamental utilization of HCS data.Although HCS assays may capture quantitative readouts at ...
A Harris-Todaro Agent-Based Model to Rural-Urban Migration
NASA Astrophysics Data System (ADS)
Espíndola, Aquino L.; Silveira, Jaylson J.; Penna, T. J. P.
2006-09-01
The Harris-Todaro model of the rural-urban migration process is revisited under an agent-based approach. The migration of the workers is interpreted as a process of social learning by imitation, formalized by a computational model. By simulating this model, we observe a transitional dynamics with continuous growth of the urban fraction of overall population toward an equilibrium. Such an equilibrium is characterized by stabilization of rural-urban expected wages differential (generalized Harris-Todaro equilibrium condition), urban concentration and urban unemployment. These classic results obtained originally by Harris and Todaro are emergent properties of our model.
The Effect of Emotional Feedback on Behavioral Intention to Use Computer Based Assessment
ERIC Educational Resources Information Center
Terzis, Vasileios; Moridis, Christos N.; Economides, Anastasios A.
2012-01-01
This study introduces emotional feedback as a construct in an acceptance model. It explores the effect of emotional feedback on behavioral intention to use Computer Based Assessment (CBA). A female Embodied Conversational Agent (ECA) with empathetic encouragement behavior was displayed as emotional feedback. More specifically, this research aims…
2013-03-29
Assessor that is in the SoS agent. Figure 31. Fuzzy Assessor for the SoS Agent for Assessment of SoS Architecture «subsystem» Fuzzy Rules « datatype ...Affordability « datatype » Flexibility « datatype » Performance « datatype » Robustness Input Input Input Input « datatype » Architecture QualityOutput Fuzzy
Agent-Based Modeling in Molecular Systems Biology.
Soheilypour, Mohammad; Mofrad, Mohammad R K
2018-07-01
Molecular systems orchestrating the biology of the cell typically involve a complex web of interactions among various components and span a vast range of spatial and temporal scales. Computational methods have advanced our understanding of the behavior of molecular systems by enabling us to test assumptions and hypotheses, explore the effect of different parameters on the outcome, and eventually guide experiments. While several different mathematical and computational methods are developed to study molecular systems at different spatiotemporal scales, there is still a need for methods that bridge the gap between spatially-detailed and computationally-efficient approaches. In this review, we summarize the capabilities of agent-based modeling (ABM) as an emerging molecular systems biology technique that provides researchers with a new tool in exploring the dynamics of molecular systems/pathways in health and disease. © 2018 WILEY Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Singh, V. K.; Jha, A. K.; Gupta, K.; Srivastav, S. K.
2017-12-01
Recent studies indicate that there is a significant improvement in the urban land use dynamics through modeling at finer spatial resolutions. Geo-computational models such as cellular automata and agent based model have given evident proof regarding the quantification of the urban growth pattern with urban boundary. In recent studies, socio- economic factors such as demography, education rate, household density, parcel price of the current year, distance to road, school, hospital, commercial centers and police station are considered to the major factors influencing the Land Use Land Cover (LULC) pattern of the city. These factors have unidirectional approach to land use pattern which makes it difficult to analyze the spatial aspects of model results both quantitatively and qualitatively. In this study, cellular automata model is combined with generic model known as Agent Based Model to evaluate the impact of socio economic factors on land use pattern. For this purpose, Dehradun an Indian city is selected as a case study. Socio economic factors were collected from field survey, Census of India, Directorate of economic census, Uttarakhand, India. A 3X3 simulating window is used to consider the impact on LULC. Cellular automata model results are examined for the identification of hot spot areas within the urban area and agent based model will be using logistic based regression approach where it will identify the correlation between each factor on LULC and classify the available area into low density, medium density, high density residential or commercial area. In the modeling phase, transition rule, neighborhood effect, cell change factors are used to improve the representation of built-up classes. Significant improvement is observed in the built-up classes from 84 % to 89 %. However after incorporating agent based model with cellular automata model the accuracy improved from 89 % to 94 % in 3 classes of urban i.e. low density, medium density and commercial classes. Sensitivity study of the model indicated that southern and south-west part of the city have shown improvement and small patches of growth are also observed in the north western part of the city.The study highlights the growing importance of socio economic factors and geo-computational modeling approach on changing LULC of newly growing cities of modern India.
An, Gary C
2010-01-01
The greatest challenge facing the biomedical research community is the effective translation of basic mechanistic knowledge into clinically effective therapeutics. This challenge is most evident in attempts to understand and modulate "systems" processes/disorders, such as sepsis, cancer, and wound healing. Formulating an investigatory strategy for these issues requires the recognition that these are dynamic processes. Representation of the dynamic behavior of biological systems can aid in the investigation of complex pathophysiological processes by augmenting existing discovery procedures by integrating disparate information sources and knowledge. This approach is termed Translational Systems Biology. Focusing on the development of computational models capturing the behavior of mechanistic hypotheses provides a tool that bridges gaps in the understanding of a disease process by visualizing "thought experiments" to fill those gaps. Agent-based modeling is a computational method particularly well suited to the translation of mechanistic knowledge into a computational framework. Utilizing agent-based models as a means of dynamic hypothesis representation will be a vital means of describing, communicating, and integrating community-wide knowledge. The transparent representation of hypotheses in this dynamic fashion can form the basis of "knowledge ecologies," where selection between competing hypotheses will apply an evolutionary paradigm to the development of community knowledge.
Epstein, Joshua M.; Pankajakshan, Ramesh; Hammond, Ross A.
2011-01-01
We introduce a novel hybrid of two fields—Computational Fluid Dynamics (CFD) and Agent-Based Modeling (ABM)—as a powerful new technique for urban evacuation planning. CFD is a predominant technique for modeling airborne transport of contaminants, while ABM is a powerful approach for modeling social dynamics in populations of adaptive individuals. The hybrid CFD-ABM method is capable of simulating how large, spatially-distributed populations might respond to a physically realistic contaminant plume. We demonstrate the overall feasibility of CFD-ABM evacuation design, using the case of a hypothetical aerosol release in Los Angeles to explore potential effectiveness of various policy regimes. We conclude by arguing that this new approach can be powerfully applied to arbitrary population centers, offering an unprecedented preparedness and catastrophic event response tool. PMID:21687788
Yi-Qun, Xu; Wei, Liu; Xin-Ye, Ni
2016-10-01
This study employs dual-source computed tomography single-spectrum imaging to evaluate the effects of contrast agent artifact removal and the computational accuracy of radiotherapy treatment planning improvement. The phantom, including the contrast agent, was used in all experiments. The amounts of iodine in the contrast agent were 30, 15, 7.5, and 0.75 g/100 mL. Two images with different energy values were scanned and captured using dual-source computed tomography (80 and 140 kV). To obtain a fused image, 2 groups of images were processed using single-energy spectrum imaging technology. The Pinnacle planning system was used to measure the computed tomography values of the contrast agent and the surrounding phantom tissue. The difference between radiotherapy treatment planning based on 80 kV, 140 kV, and energy spectrum image was analyzed. For the image with high iodine concentration, the quality of the energy spectrum-fused image was the highest, followed by that of the 140-kV image. That of the 80-kV image was the worst. The difference in the radiotherapy treatment results among the 3 models was significant. When the concentration of iodine was 30 g/100 mL and the distance from the contrast agent at the dose measurement point was 1 cm, the deviation values (P) were 5.95% and 2.20% when image treatment planning was based on 80 and 140 kV, respectively. When the concentration of iodine was 15 g/100 mL, deviation values (P) were -2.64% and -1.69%. Dual-source computed tomography single-energy spectral imaging technology can remove contrast agent artifacts to improve the calculated dose accuracy in radiotherapy treatment planning. © The Author(s) 2015.
Imbalance detection in a manufacturing system: An agent-based model usage
NASA Astrophysics Data System (ADS)
Shevchuk, G. K.; Zvereva, O. M.; Medvedev, M. A.
2017-11-01
This paper delivers the results of the research work targeted at communications in a manufacturing system. A computer agent-based model which simulates manufacturing system functioning has been engineered. The system lifecycle consists of two recursively repeated stages: a communication stage and a production stage. Model data sets were estimated with the static Leontief's equilibrium equation usage. In experiments relationships between the manufacturing system lifecycle time and conditions of equilibrium violations have been identified. The research results are to be used to propose violation negative influence compensation methods.
Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf
2016-02-01
Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of VS, model-free and model based RL systems center on the VS that by integrating model-free signals (received as reward prediction error) and model-based reward related input computes value. Using the concept of reinforcement learning agent we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations we turned to the proactive brain concept, which offers an ubiquitous function for the default network based on its great functional overlap with contextual associative areas. Hence, by means of the default network the brain continuously organizes its environment into context frames enabling the formulation of analogy-based association that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation by compiling stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore we suggest that the integration of model-based expectations regarding reward into the value signal is further supported by the efferent of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS). (c) 2016 APA, all rights reserved).
ERIC Educational Resources Information Center
Berland, Matthew; Wilensky, Uri
2015-01-01
Both complex systems methods (such as agent-based modeling) and computational methods (such as programming) provide powerful ways for students to understand new phenomena. To understand how to effectively teach complex systems and computational content to younger students, we conducted a study in four urban middle school classrooms comparing…
A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents
Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha
2017-01-01
Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control—enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872
A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.
Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha
2017-01-01
Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control-enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.
Agent-based approach for generation of a money-centered star network
NASA Astrophysics Data System (ADS)
Yang, Jae-Suk; Kwon, Okyu; Jung, Woo-Sung; Kim, In-mook
2008-09-01
The history of trade is a progression from a pure barter system. A medium of exchange emerges autonomously in the market, a position currently occupied by money. We investigate an agent-based computational economics model consisting of interacting agents considering distinguishable properties of commodities which represent salability. We also analyze the properties of the commodity network using a spanning tree. We find that the “storage fee” is more crucial than “demand” in determining which commodity is used as a medium of exchange.
Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud
NASA Astrophysics Data System (ADS)
Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.
2014-12-01
In the context of coupled human and natural system (CHNS), incorporating human factors into water resource management provides us with the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of the future crop prices and precipitation. κ and ν describe agent's beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ is used to describe the agent's attitude towards the fluctuation of crop profits. Notice that these human behavioral parameters as inputs to the MAS model are highly uncertain and even not measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based Cloud Computing techniques and Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than about 28days if we run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to find a balance between the heavy computational burden regarding model execution and the number of model evaluations required in the GSA analysis, particularly through an organic combination of Hadoop-based Cloud Computing to efficiently evaluate the socio-hydrological model and PCE where the sensitivity indices are efficiently estimated from its coefficients.
An Analysis on a Negotiation Model Based on Multiagent Systems with Symbiotic Learning and Evolution
NASA Astrophysics Data System (ADS)
Hossain, Md. Tofazzal
This study explores an evolutionary analysis on a negotiation model based on Masbiole (Multiagent Systems with Symbiotic Learning and Evolution) which has been proposed as a new methodology of Multiagent Systems (MAS) based on symbiosis in the ecosystem. In Masbiole, agents evolve in consideration of not only their own benefits and losses, but also the benefits and losses of opponent agents. To aid effective application of Masbiole, we develop a competitive negotiation model where rigorous and advanced intelligent decision-making mechanisms are required for agents to achieve solutions. A Negotiation Protocol is devised aiming at developing a set of rules for agents' behavior during evolution. Simulations use a newly developed evolutionary computing technique, called Genetic Network Programming (GNP) which has the directed graph-type gene structure that can develop and design the required intelligent mechanisms for agents. In a typical scenario, competitive negotiation solutions are reached by concessions that are usually predetermined in the conventional MAS. In this model, however, not only concession is determined automatically by symbiotic evolution (making the system intelligent, automated, and efficient) but the solution also achieves Pareto optimal automatically.
A Computational Model Predicting Disruption of Blood Vessel Development
Kleinstreuer, Nicole; Dix, David; Rountree, Michael; Baker, Nancy; Sipes, Nisha; Reif, David; Spencer, Richard; Knudsen, Thomas
2013-01-01
Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a morphogenetic series of events and for the first time demonstrate the applicability of these models for predictive toxicology. PMID:23592958
On agent-based modeling and computational social science.
Conte, Rosaria; Paolucci, Mario
2014-01-01
In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS.
On agent-based modeling and computational social science
Conte, Rosaria; Paolucci, Mario
2014-01-01
In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS. PMID:25071642
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separated testing of all configurations with the–server parameter de/activated, altogether, 12800 data points were collected and consequently analyzed. An illustrational decision-making scenario was used which allows the mutual comparison of all of the selected decision making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
Case Study: Organotypic human in vitro models of embryonic ...
Morphogenetic fusion of tissues is a common event in embryonic development and disruption of fusion is associated with birth defects of the eye, heart, neural tube, phallus, palate, and other organ systems. Embryonic tissue fusion requires precise regulation of cell-cell and cell-matrix interactions that drive proliferation, differentiation, and morphogenesis. Chemical low-dose exposures can disrupt morphogenesis across space and time by interfering with key embryonic fusion events. The Morphogenetic Fusion Task uses computer and in vitro models to elucidate consequences of developmental exposures. The Morphogenetic Fusion Task integrates multiple approaches to model responses to chemicals that leaad to birth defects, including integrative mining on ToxCast DB, ToxRefDB, and chemical structures, advanced computer agent-based models, and human cell-based cultures that model disruption of cellular and molecular behaviors including mechanisms predicted from integrative data mining and agent-based models. The purpose of the poster is to indicate progress on the CSS 17.02 Virtual Tissue Models Morphogenesis Task 1 products for the Board of Scientific Counselors meeting on Nov 16-17.
Hypercompetitive Environments: An Agent-based model approach
NASA Astrophysics Data System (ADS)
Dias, Manuel; Araújo, Tanya
Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both from individual firms and management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools, are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective — understood as resulting from repeated and local interaction of economic agents — without disregarding the consequences of the business strategies themselves to individual behavior of enterprises, emergence of interaction patterns between firms and management environments. Agent-based models are at the leading approach of this attempt.
NASA Astrophysics Data System (ADS)
Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf
2018-01-01
We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - that may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on saving propensities. The system relaxation for fixed and distributed saving schemes are found to be different.
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, which is based on biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor TOMBO. Robustness of the multi-agent system over environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed with reducing enormous computational costs and memory accesses required for depth map processing and multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which is accomplished by using 7,063 slices, 640×512 pixel input images can be processed in real-time with three agents at a rate of 9fps in 48MHz operation.
Development and verification of an agent-based model of opinion leadership.
Anderson, Christine A; Titler, Marita G
2014-09-27
The use of opinion leaders is a strategy used to speed the process of translating research into practice. Much is still unknown about opinion leader attributes and activities and the context in which they are most effective. Agent-based modeling is a methodological tool that enables demonstration of the interactive and dynamic effects of individuals and their behaviors on other individuals in the environment. The purpose of this study was to develop and test an agent-based model of opinion leadership. The details of the design and verification of the model are presented. The agent-based model was developed by using a software development platform to translate an underlying conceptual model of opinion leadership into a computer model. Individual agent attributes (for example, motives and credibility) and behaviors (seeking or providing an opinion) were specified as variables in the model in the context of a fictitious patient care unit. The verification process was designed to test whether or not the agent-based model was capable of reproducing the conditions of the preliminary conceptual model. The verification methods included iterative programmatic testing ('debugging') and exploratory analysis of simulated data obtained from execution of the model. The simulation tests included a parameter sweep, in which the model input variables were adjusted systematically followed by an individual time series experiment. Statistical analysis of model output for the 288 possible simulation scenarios in the parameter sweep revealed that the agent-based model was performing, consistent with the posited relationships in the underlying model. Nurse opinion leaders act on the strength of their beliefs and as a result, become an opinion resource for their uncertain colleagues, depending on their perceived credibility. Over time, some nurses consistently act as this type of resource and have the potential to emerge as opinion leaders in a context where uncertainty exists. The development and testing of agent-based models is an iterative process. The opinion leader model presented here provides a basic structure for continued model development, ongoing verification, and the establishment of validation procedures, including empirical data collection.
Trends in Social Science: The Impact of Computational and Simulative Models
NASA Astrophysics Data System (ADS)
Conte, Rosaria; Paolucci, Mario; Cecconi, Federico
This paper discusses current progress in the computational social sciences. Specifically, it examines the following questions: Are the computational social sciences exhibiting positive or negative developments? What are the roles of agent-based models and simulation (ABM), network analysis, and other "computational" methods within this dynamic? (Conte, The necessity of intelligent agents in social simulation, Advances in Complex Systems, 3(01n04), 19-38, 2000; Conte 2010; Macy, Annual Review of Sociology, 143-166, 2002). Are there objective indicators of scientific growth that can be applied to different scientific areas, allowing for comparison among them? In this paper, some answers to these questions are presented and discussed. In particular, comparisons among different disciplines in the social and computational sciences are shown, taking into account their respective growth trends in the number of publication citations over the last few decades (culled from Google Scholar). After a short discussion of the methodology adopted, results of keyword-based queries are presented, unveiling some unexpected local impacts of simulation on the takeoff of traditionally poorly productive disciplines.
Research on mixed network architecture collaborative application model
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Zhao, Xi'an; Liang, Song
2009-10-01
When facing complex requirements of city development, ever-growing spatial data, rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is needed urgently, however conventional GIS software (such as Client/Server model or Browser/Server model) are not support this well. Collaborative application is one of the good resolutions. Collaborative application has four main problems to resolve: consistency and co-edit conflict, real-time responsiveness, unconstrained operation, spatial data recoverability. In paper, application model called AMCM is put forward based on agent and multi-level cache. AMCM can be used in mixed network structure and supports distributed collaborative. Agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment. Agent has been used in many fields such as compute science and automation. Agent brings new methods for cooperation and the access for spatial data. Multi-level cache is a part of full data. It reduces the network load and improves the access and handle of spatial data, especially, in editing the spatial data. With agent technology, we make full use of its characteristics of intelligent for managing the cache and cooperative editing that brings a new method for distributed cooperation and improves the efficiency.
Grow, André; Van Bavel, Jan
2015-01-01
While men have always received more education than women in the past, this gender imbalance in education has turned around in large parts of the world. In many countries, women now excel men in terms of participation and success in higher education. This implies that, for the first time in history, there are more highly educated women than men reaching the reproductive ages and looking for a partner. We develop an agent-based computational model that explicates the mechanisms that may have linked the reversal of gender inequality in education with observed changes in educational assortative mating. Our model builds on the notion that individuals search for spouses in a marriage market and evaluate potential candidates based on preferences. Based on insights from earlier research, we assume that men and women prefer partners with similar educational attainment and high earnings prospects, that women tend to prefer men who are somewhat older than themselves, and that men prefer women who are in their mid-twenties. We also incorporate the insight that the educational system structures meeting opportunities on the marriage market. We assess the explanatory power of our model with systematic computational experiments, in which we simulate marriage market dynamics in 12 European countries among individuals born between 1921 and 2012. In these experiments, we make use of realistic agent populations in terms of educational attainment and earnings prospects and validate model outcomes with data from the European Social Survey. We demonstrate that the observed changes in educational assortative mating can be explained without any change in male or female preferences. We argue that our model provides a useful computational laboratory to explore and quantify the implications of scenarios for the future. PMID:26039151
Grow, André; Van Bavel, Jan
2015-01-01
While men have always received more education than women in the past, this gender imbalance in education has turned around in large parts of the world. In many countries, women now excel men in terms of participation and success in higher education. This implies that, for the first time in history, there are more highly educated women than men reaching the reproductive ages and looking for a partner. We develop an agent-based computational model that explicates the mechanisms that may have linked the reversal of gender inequality in education with observed changes in educational assortative mating. Our model builds on the notion that individuals search for spouses in a marriage market and evaluate potential candidates based on preferences. Based on insights from earlier research, we assume that men and women prefer partners with similar educational attainment and high earnings prospects, that women tend to prefer men who are somewhat older than themselves, and that men prefer women who are in their mid-twenties. We also incorporate the insight that the educational system structures meeting opportunities on the marriage market. We assess the explanatory power of our model with systematic computational experiments, in which we simulate marriage market dynamics in 12 European countries among individuals born between 1921 and 2012. In these experiments, we make use of realistic agent populations in terms of educational attainment and earnings prospects and validate model outcomes with data from the European Social Survey. We demonstrate that the observed changes in educational assortative mating can be explained without any change in male or female preferences. We argue that our model provides a useful computational laboratory to explore and quantify the implications of scenarios for the future.
NASA Astrophysics Data System (ADS)
Bosse, Stefan
2013-05-01
Sensorial materials consisting of high-density, miniaturized, and embedded sensor networks require new robust and reliable data processing and communication approaches. Structural health monitoring is one major field of application for sensorial materials. Each sensor node provides some kind of sensor, electronics, data processing, and communication with a strong focus on microchip-level implementation to meet the goals of miniaturization and low-power energy environments, a prerequisite for autonomous behaviour and operation. Reliability requires robustness of the entire system in the presence of node, link, data processing, and communication failures. Interaction between nodes is required to manage and distribute information. One common interaction model is the mobile agent. An agent approach provides stronger autonomy than a traditional object or remote-procedure-call based approach. Agents can decide for themselves, which actions are performed, and they are capable of flexible behaviour, reacting on the environment and other agents, providing some degree of robustness. Traditionally multi-agent systems are abstract programming models which are implemented in software and executed on program controlled computer architectures. This approach does not well scale to micro-chip level and requires full equipped computers and communication structures, and the hardware architecture does not consider and reflect the requirements for agent processing and interaction. We propose and demonstrate a novel design paradigm for reliable distributed data processing systems and a synthesis methodology and framework for multi-agent systems implementable entirely on microchip-level with resource and power constrained digital logic supporting Agent-On-Chip architectures (AoC). The agent behaviour and mobility is fully integrated on the micro-chip using pipelined communicating processes implemented with finite-state machines and register-transfer logic. The agent behaviour, interaction (communication), and mobility features are modelled and specified on a machine-independent abstract programming level using a state-based agent behaviour language (APL). With this APL a high-level agent compiler is able to synthesize a hardware model (RTL, VHDL), a software model (C, ML), or a simulation model (XML) suitable to simulate a multi-agent system using the SeSAm simulator framework. Agent communication is provided by a simple tuple-space database implemented on node level providing fault tolerant access of global data. A novel synthesis development kit (SynDK) based on a graph-structured database approach is introduced to support the rapid development of compilers and synthesis tools, used for example for the design and implementation of the APL compiler.
NASA Astrophysics Data System (ADS)
Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios
2016-06-01
Based on multiscale agent-based computations we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics we have developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network as well as the per-contact transmission probabilities by age group we exploited the so called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO) as well as to recently published data of key epidemiological variables, such as the mean time to death, recovery and the case fatality rate.
Excellent approach to modeling urban expansion by fuzzy cellular automata: agent base model
NASA Astrophysics Data System (ADS)
Khajavigodellou, Yousef; Alesheikh, Ali A.; Mohammed, Abdulrazak A. S.; Chapi, Kamran
2014-09-01
Recently, the interaction between humans and their environment is the one of important challenges in the world. Landuse/ cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. The complexity and dynamics of urban systems make the applicable practice of urban modeling very difficult. With the increased computational power and the greater availability of spatial data, micro-simulation such as the agent based and cellular automata simulation methods, has been developed by geographers, planners, and scholars, and it has shown great potential for representing and simulating the complexity of the dynamic processes involved in urban growth and land use change. This paper presents Fuzzy Cellular Automata in Geospatial Information System and remote Sensing to simulated and predicted urban expansion pattern. These FCA-based dynamic spatial urban models provide an improved ability to forecast and assess future urban growth and to create planning scenarios, allowing us to explore the potential impacts of simulations that correspond to urban planning and management policies. A fuzzy inference guided cellular automata approach. Semantic or linguistic knowledge on Land use change is expressed as fuzzy rules, based on which fuzzy inference is applied to determine the urban development potential for each pixel. The model integrates an ABM (agent-based model) and FCA (Fuzzy Cellular Automata) to investigate a complex decision-making process and future urban dynamic processes. Based on this model rapid development and green land protection under the influences of the behaviors and decision modes of regional authority agents, real estate developer agents, resident agents and non- resident agents and their interactions have been applied to predict the future development patterns of the Erbil metropolitan region.
Continuous Opinion Dynamics Under Bounded Confidence:. a Survey
NASA Astrophysics Data System (ADS)
Lorenz, Jan
Models of continuous opinion dynamics under bounded confidence have been presented independently by Krause and Hegselmann and by Deffuant et al. in 2000. They have raised a fair amount of attention in the communities of social simulation, sociophysics and complexity science. The researchers working on it come from disciplines such as physics, mathematics, computer science, social psychology and philosophy. In these models agents hold continuous opinions which they can gradually adjust if they hear the opinions of others. The idea of bounded confidence is that agents only interact if they are close in opinion to each other. Usually, the models are analyzed with agent-based simulations in a Monte Carlo style, but they can also be reformulated on the agent's density in the opinion space in a master equation style. The contribution of this survey is fourfold. First, it will present the agent-based and density-based modeling frameworks including the cases of multidimensional opinions and heterogeneous bounds of confidence. Second, it will give the bifurcation diagrams of cluster configuration in the homogeneous model with uniformly distributed initial opinions. Third, it will review the several extensions and the evolving phenomena which have been studied so far, and fourth it will state some open questions.
2012-09-30
System N Agent « datatype » SoS Architecture -Receives Capabilities1 -Provides Capabilities1 1 -Provides Capabilities1 1 -Provides Capabilities1 -Updates 1...fitness, or objective function. The structure of the SoS Agent is depicted in Figure 10. SoS Agent Architecture « datatype » Initial SoS...Architecture «subsystem» Fuzzy Inference Engine FAM « datatype » Affordability « datatype » Flexibility « datatype » Performance « datatype » Robustness Input Input
Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models
NASA Astrophysics Data System (ADS)
Dickes, Amanda Catherine; Sengupta, Pratim
2013-06-01
In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these agents obey simple rules assigned or manipulated by the user (e.g., speeding up, slowing down, etc.). It is the interactions between these agents, based on the rules assigned by the user, that give rise to emergent, aggregate-level behavior (e.g., formation and movement of the traffic jam). Natural selection is such an emergent phenomenon, which has been shown to be challenging for novices (K16 students) to understand. Whereas prior research on learning evolutionary phenomena with MABMs has typically focused on high school students and beyond, we investigate how elementary students (4th graders) develop multi-level explanations of some introductory aspects of natural selection—species differentiation and population change—through scaffolded interactions with an MABM that simulates predator-prey dynamics in a simple birds-butterflies ecosystem. We conducted a semi-clinical interview based study with ten participants, in which we focused on the following: a) identifying the nature of learners' initial interpretations of salient events or elements of the represented phenomena, b) identifying the roles these interpretations play in the development of their multi-level explanations, and c) how attending to different levels of the relevant phenomena can make explicit different mechanisms to the learners. In addition, our analysis also shows that although there were differences between high- and low-performing students (in terms of being able to explain population-level behaviors) in the pre-test, these differences disappeared in the post-test.
Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J
2015-03-15
This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) Interviews to elicit mental models; (2) Cognitive maps to represent and analyse individual and group mental models; (3) Time-sequence diagrams to chronologically structure the decision making process; (4) All-encompassing conceptual model of decision making, and (5) computational (in this case agent-based) Model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weakness-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an Agent Based Model. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fuentes-Cabrera, Miguel; Anderson, John D.; Wilmoth, Jared; Ginovart, Marta; Prats, Clara; Portell-Canal, Xavier; Retterer, Scott
Microbial interactions are critical for governing community behavior and structure in natural environments. Examination of microbial interactions in the lab involves growth under ideal conditions in batch culture; conditions that occur in nature are, however, characterized by disequilibrium. Of particular interest is the role that system variables play in shaping cell-to-cell interactions and organization at ultrafine spatial scales. We seek to use experiments and agent-based modeling to help discover mechanisms relevant to microbial dynamics and interactions in the environment. Currently, we are using an agent-based model to simulate microbial growth, dynamics and interactions that occur on a microwell-array device developed in our lab. Bacterial cells growing in the microwells of this platform can be studied with high-throughput and high-content image analyses using brightfield and fluorescence microscopy. The agent-based model is written in the language Netlogo, which in turn is ''plugged into'' a computational framework that allows submitting many calculations in parallel for different initial parameters; visualizing the outcomes in an interactive phase-like diagram; and searching, with a genetic algorithm, for the parameters that lead to the most optimal simulation outcome.
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each the process; and, programming each the agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
The comparison of the use of holonic and agent-based methods in modelling of manufacturing systems
NASA Astrophysics Data System (ADS)
Foit, K.; Banaś, W.; Gwiazda, A.; Hryniewicz, P.
2017-08-01
The rapid evolution in the field of industrial automation and manufacturing is often called the 4th Industry Revolution. Worldwide availability of the internet access contributes to the competition between manufacturers, gives the opportunity for buying materials, parts and for creating the partnership networks, like cloud manufacturing, grid manufacturing (MGrid), virtual enterprises etc. The effect of the industry evolution is the need to search for new solutions in the field of manufacturing systems modelling and simulation. During the last decade researchers have developed the agent-based approach of modelling. This methodology have been taken from the computer science, but was adapted to the philosophy of industrial automation and robotization. The operation of the agent-based system depends on the simultaneous acting of different agents that may have different roles. On the other hand, there is the holon-based approach that uses the structures created by holons. It differs from the agent-based structure in some aspects, while the other ones are quite similar in both methodologies. The aim of this paper is to present the both methodologies and discuss the similarities and the differences. This may could help to select the optimal method of modelling, according to the considered problem and software resources.
Computational Modeling and Simulation of Genital Tubercle Development
Hypospadias is a developmental defect of urethral tube closure that has a complex etiology. Here, we describe a multicellular agent-based model of genital tubercle development that simulates urethrogenesis from the urethral plate stage to urethral tube closure in differentiating ...
Mathematical modeling and computational prediction of cancer drug resistance.
Sun, Xiaoqiang; Hu, Bin
2017-06-23
Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
NASA Astrophysics Data System (ADS)
Rahman, M. S.; Pota, H. R.; Mahmud, M. A.; Hossain, M. J.
2016-05-01
This paper presents the impact of large penetration of wind power on the transient stability through a dynamic evaluation of the critical clearing times (CCTs) by using intelligent agent-based approach. A decentralised multi-agent-based framework is developed, where agents represent a number of physical device models to form a complex infrastructure for computation and communication. They enable the dynamic flow of information and energy for the interaction between the physical processes and their activities. These agents dynamically adapt online measurements and use the CCT information for relay coordination to improve the transient stability of power systems. Simulations are carried out on a smart microgrid system for faults at increasing wind power penetration levels and the improvement in transient stability using the proposed agent-based framework is demonstrated.
Simulating the decentralized processes of the human immune system in a virtual anatomy model.
Sarpe, Vladimir; Jacob, Christian
2013-01-01
Many physiological processes within the human body can be perceived and modeled as large systems of interacting particles or swarming agents. The complex processes of the human immune system prove to be challenging to capture and illustrate without proper reference to the spatial distribution of immune-related organs and systems. Our work focuses on physical aspects of immune system processes, which we implement through swarms of agents. This is our first prototype for integrating different immune processes into one comprehensive virtual physiology simulation. Using agent-based methodology and a 3-dimensional modeling and visualization environment (LINDSAY Composer), we present an agent-based simulation of the decentralized processes in the human immune system. The agents in our model - such as immune cells, viruses and cytokines - interact through simulated physics in two different, compartmentalized and decentralized 3-dimensional environments namely, (1) within the tissue and (2) inside a lymph node. While the two environments are separated and perform their computations asynchronously, an abstract form of communication is allowed in order to replicate the exchange, transportation and interaction of immune system agents between these sites. The distribution of simulated processes, that can communicate across multiple, local CPUs or through a network of machines, provides a starting point to build decentralized systems that replicate larger-scale processes within the human body, thus creating integrated simulations with other physiological systems, such as the circulatory, endocrine, or nervous system. Ultimately, this system integration across scales is our goal for the LINDSAY Virtual Human project. Our current immune system simulations extend our previous work on agent-based simulations by introducing advanced visualizations within the context of a virtual human anatomy model. We also demonstrate how to distribute a collection of connected simulations over a network of computers. As a future endeavour, we plan to use parameter tuning techniques on our model to further enhance its biological credibility. We consider these in silico experiments and their associated modeling and optimization techniques as essential components in further enhancing our capabilities of simulating a whole-body, decentralized immune system, to be used both for medical education and research as well as for virtual studies in immunoinformatics.
Using computer agents to explain medical documents to patients with low health literacy.
Bickmore, Timothy W; Pfeifer, Laura M; Paasche-Orlow, Michael K
2009-06-01
Patients are commonly presented with complex documents that they have difficulty understanding. The objective of this study was to design and evaluate an animated computer agent to explain research consent forms to potential research participants. Subjects were invited to participate in a simulated consent process for a study involving a genetic repository. Explanation of the research consent form by the computer agent was compared to explanation by a human and a self-study condition in a randomized trial. Responses were compared according to level of health literacy. Participants were most satisfied with the consent process and most likely to sign the consent form when it was explained by the computer agent, regardless of health literacy level. Participants with adequate health literacy demonstrated the highest level of comprehension with the computer agent-based explanation compared to the other two conditions. However, participants with limited health literacy showed poor comprehension levels in all three conditions. Participants with limited health literacy reported several reasons, such as lack of time constraints, ability to re-ask questions, and lack of bias, for preferring the computer agent-based explanation over a human-based one. Animated computer agents can perform as well as or better than humans in the administration of informed consent. Animated computer agents represent a viable method for explaining health documents to patients.
Computational Model of Secondary Palate Fusion and Disruption
Morphogenetic events are driven by cell-generated physical forces and complex cellular dynamics. To improve our capacity to predict developmental effects from cellular alterations, we built a multi-cellular agent-based model in CompuCell3D that recapitulates the cellular networks...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, M.A.; Craig, J.I.
Integrated Product and Process Development (IPPD) embodies the simultaneous application to both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implementmore » the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.« less
Koh, Keumseok; Reno, Rebecca; Hyder, Ayaz
2018-04-01
Recent advances in computing resources have increased interest in systems modeling and population health. While group model building (GMB) has been effectively applied in developing system dynamics models (SD), few studies have used GMB for developing an agent-based model (ABM). This article explores the use of a GMB approach to develop an ABM focused on food insecurity. In our GMB workshops, we modified a set of the standard GMB scripts to develop and validate an ABM in collaboration with local experts and stakeholders. Based on this experience, we learned that GMB is a useful collaborative modeling platform for modelers and community experts to address local population health issues. We also provide suggestions for increasing the use of the GMB approach to develop rigorous, useful, and validated ABMs.
Multiscaling Edge Effects in an Agent-based Money Emergence Model
NASA Astrophysics Data System (ADS)
Oświęcimka, P.; Drożdż, S.; Gębarowski, R.; Górski, A. Z.; Kwapień, J.
An agent-based computational economical toy model for the emergence of money from the initial barter trading, inspired by Menger's postulate that money can spontaneously emerge in a commodity exchange economy, is extensively studied. The model considered, while manageable, is significantly complex, however. It is already able to reveal phenomena that can be interpreted as emergence and collapse of money as well as the related competition effects. In particular, it is shown that - as an extra emerging effect - the money lifetimes near the critical threshold value develop multiscaling, which allow one to set parallels to critical phenomena and, thus, to the real financial markets.
Competition of information channels in the spreading of innovations
NASA Astrophysics Data System (ADS)
Kocsis, Gergely; Kun, Ferenc
2011-08-01
We study the spreading of information on technological developments in socioeconomic systems where the social contacts of agents are represented by a network of connections. In the model, agents get informed about the existence and advantages of new innovations through advertising activities of producers, which are then followed by an interagent information transfer. Computer simulations revealed that varying the strength of external driving and of interagent coupling, furthermore, the topology of social contacts, the model presents a complex behavior with interesting novel features: On the macrolevel the system exhibits logistic behavior typical for the diffusion of innovations. The time evolution can be described analytically by an integral equation that captures the nucleation and growth of clusters of informed agents. On the microlevel, small clusters are found to be compact with a crossover to fractal structures with increasing size. The distribution of cluster sizes has a power-law behavior with a crossover to a higher exponent when long-range social contacts are present in the system. Based on computer simulations we construct an approximate phase diagram of the model on a regular square lattice of agents.
Competition of information channels in the spreading of innovations.
Kocsis, Gergely; Kun, Ferenc
2011-08-01
We study the spreading of information on technological developments in socioeconomic systems where the social contacts of agents are represented by a network of connections. In the model, agents get informed about the existence and advantages of new innovations through advertising activities of producers, which are then followed by an interagent information transfer. Computer simulations revealed that varying the strength of external driving and of interagent coupling, furthermore, the topology of social contacts, the model presents a complex behavior with interesting novel features: On the macrolevel the system exhibits logistic behavior typical for the diffusion of innovations. The time evolution can be described analytically by an integral equation that captures the nucleation and growth of clusters of informed agents. On the microlevel, small clusters are found to be compact with a crossover to fractal structures with increasing size. The distribution of cluster sizes has a power-law behavior with a crossover to a higher exponent when long-range social contacts are present in the system. Based on computer simulations we construct an approximate phase diagram of the model on a regular square lattice of agents.
Understanding System of Systems Development Using an Agent-Based Wave Model
2012-01-01
Procedia Computer Science Procedia Computer Science 00 (2012) 000–000 www.elsevier.com/locate/ procedia Complex Adaptive Systems...integration of technical systems as well as cognitive and social processes, which alter system behavior [6]. As mentioned before * Corresponding...Prescribed by ANSI Std Z39-18 Acheson/ Procedia Computer Science 00 (2012) 000–000 most system architects assume that SoS participants exhibit
Mesoscopic Effects in an Agent-Based Bargaining Model in Regular Lattices
Poza, David J.; Santos, José I.; Galán, José M.; López-Paredes, Adolfo
2011-01-01
The effect of spatial structure has been proved very relevant in repeated games. In this work we propose an agent based model where a fixed finite population of tagged agents play iteratively the Nash demand game in a regular lattice. The model extends the multiagent bargaining model by Axtell, Epstein and Young [1] modifying the assumption of global interaction. Each agent is endowed with a memory and plays the best reply against the opponent's most frequent demand. We focus our analysis on the transient dynamics of the system, studying by computer simulation the set of states in which the system spends a considerable fraction of the time. The results show that all the possible persistent regimes in the global interaction model can also be observed in this spatial version. We also find that the mesoscopic properties of the interaction networks that the spatial distribution induces in the model have a significant impact on the diffusion of strategies, and can lead to new persistent regimes different from those found in previous research. In particular, community structure in the intratype interaction networks may cause that communities reach different persistent regimes as a consequence of the hindering diffusion effect of fluctuating agents at their borders. PMID:21408019
Mesoscopic effects in an agent-based bargaining model in regular lattices.
Poza, David J; Santos, José I; Galán, José M; López-Paredes, Adolfo
2011-03-09
The effect of spatial structure has been proved very relevant in repeated games. In this work we propose an agent based model where a fixed finite population of tagged agents play iteratively the Nash demand game in a regular lattice. The model extends the multiagent bargaining model by Axtell, Epstein and Young modifying the assumption of global interaction. Each agent is endowed with a memory and plays the best reply against the opponent's most frequent demand. We focus our analysis on the transient dynamics of the system, studying by computer simulation the set of states in which the system spends a considerable fraction of the time. The results show that all the possible persistent regimes in the global interaction model can also be observed in this spatial version. We also find that the mesoscopic properties of the interaction networks that the spatial distribution induces in the model have a significant impact on the diffusion of strategies, and can lead to new persistent regimes different from those found in previous research. In particular, community structure in the intratype interaction networks may cause that communities reach different persistent regimes as a consequence of the hindering diffusion effect of fluctuating agents at their borders.
Agent-based modeling: case study in cleavage furrow models
Mogilner, Alex; Manhart, Angelika
2016-01-01
The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328
An Agent-Based Modeling Template for a Cohort of Veterans with Diabetic Retinopathy.
Day, Theodore Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann
2013-01-01
Agent-based models are valuable for examining systems where large numbers of discrete individuals interact with each other, or with some environment. Diabetic Veterans seeking eye care at a Veterans Administration hospital represent one such cohort. The objective of this study was to develop an agent-based template to be used as a model for a patient with diabetic retinopathy (DR). This template may be replicated arbitrarily many times in order to generate a large cohort which is representative of a real-world population, upon which in-silico experimentation may be conducted. Agent-based template development was performed in java-based computer simulation suite AnyLogic Professional 6.6. The model was informed by medical data abstracted from 535 patient records representing a retrospective cohort of current patients of the VA St. Louis Healthcare System Eye clinic. Logistic regression was performed to determine the predictors associated with advancing stages of DR. Predicted probabilities obtained from logistic regression were used to generate the stage of DR in the simulated cohort. The simulated cohort of DR patients exhibited no significant deviation from the test population of real-world patients in proportion of stage of DR, duration of diabetes mellitus (DM), or the other abstracted predictors. Simulated patients after 10 years were significantly more likely to exhibit proliferative DR (P<0.001). Agent-based modeling is an emerging platform, capable of simulating large cohorts of individuals based on manageable data abstraction efforts. The modeling method described may be useful in simulating many different conditions where course of disease is described in categorical stages.
Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger
2013-08-21
A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocylic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters were used in the scheme, and these are primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict distribution coefficients of potential ⁶⁴Cu PET (positron emission tomography) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of positron emission tomography (PET) imaging agents.
A Semantic Grid Oriented to E-Tourism
NASA Astrophysics Data System (ADS)
Zhang, Xiao Ming
With increasing complexity of tourism business models and tasks, there is a clear need of the next generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Currently several enabling technologies such as semantic Web, Web service, agent and grid computing have been applied in the different e-Tourism applications, however there is no a unified framework to be able to integrate all of them. So this paper presents a promising e-Tourism framework based on emerging semantic grid, in which a number of key design issues are discussed including architecture, ontologies structure, semantic reconciliation, service and resource discovery, role based authorization and intelligent agent. The paper finally provides the implementation of the framework.
Stabilization of business cycles of finance agents using nonlinear optimal control
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.
2017-11-01
Stabilization of the business cycles of interconnected finance agents is performed with the use of a new nonlinear optimal control method. First, the dynamics of the interacting finance agents and of the associated business cycles is described by a modeled of coupled nonlinear oscillators. Next, this dynamic model undergoes approximate linearization round a temporary operating point which is defined by the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The linearization procedure is based on Taylor series expansion of the dynamic model and on the computation of Jacobian matrices. The modelling error, which is due to the truncation of higher-order terms in the Taylor series expansion is considered as a disturbance which is compensated by the robustness of the control loop. Next, for the linearized model of the interacting finance agents, an H-infinity feedback controller is designed. The computation of the feedback control gain requires the solution of an algebraic Riccati equation at each iteration of the control algorithm. Through Lyapunov stability analysis it is proven that the control scheme satisfies an H-infinity tracking performance criterion, which signifies elevated robustness against modelling uncertainty and external perturbations. Moreover, under moderate conditions the global asymptotic stability features of the control loop are proven.
NASA Astrophysics Data System (ADS)
Berges, J. A.; Raphael, T.; Rafa Todd, C. S.; Bate, T. C.; Hellweger, F. L.
2016-02-01
Engaging undergraduate students in research projects that require expertise in multiple disciplines (e.g. cell biology, population ecology, and mathematical modeling) can be challenging because they have often not developed the expertise that allows them to participate at a satisfying level. Use of agent-based modeling can allow exploration of concepts at more intuitive levels, and encourage experimentation that emphasizes processes over computational skills. Over the past several years, we have involved undergraduate students in projects examining both ecological and cell biological aspects of aquatic microbial biology, using the freely-downloadable, agent-based modeling environment NetLogo (https://ccl.northwestern.edu/netlogo/). In Netlogo, actions of large numbers of individuals can be simulated, leading to complex systems with emergent behavior. The interface features appealing graphics, monitors, and control structures. In one example, a group of sophomores in a BioMathematics program developed an agent-based model of phytoplankton population dynamics in a pond ecosystem, motivated by observed macroscopic changes in cell numbers (due to growth and death), and driven by responses to irradiance, temperature and a limiting nutrient. In a second example, junior and senior undergraduates conducting Independent Studies created a model of the intracellular processes governing stress and cell death for individual phytoplankton cells (based on parameters derived from experiments using single-cell culturing and flow cytometry), and then this model was embedded in the agents in the pond ecosystem model. In our experience, students with a range of mathematical abilities learned to code quickly and could use the software with varying degrees of sophistication, for example, creation of spatially-explicit two and three-dimensional models. Skills developed quickly and transferred readily to other platforms (e.g. Matlab).
Scalable Entity-Based Modeling of Population-Based Systems, Final LDRD Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleary, A J; Smith, S G; Vassilevska, T K
2005-01-27
The goal of this project has been to develop tools, capabilities and expertise in the modeling of complex population-based systems via scalable entity-based modeling (EBM). Our initial focal application domain has been the dynamics of large populations exposed to disease-causing agents, a topic of interest to the Department of Homeland Security in the context of bioterrorism. In the academic community, discrete simulation technology based on individual entities has shown initial success, but the technology has not been scaled to the problem sizes or computational resources of LLNL. Our developmental emphasis has been on the extension of this technology to parallelmore » computers and maturation of the technology from an academic to a lab setting.« less
Agent-based model for the h-index - exact solution
NASA Astrophysics Data System (ADS)
Żogała-Siudem, Barbara; Siudem, Grzegorz; Cena, Anna; Gagolewski, Marek
2016-01-01
Hirsch's h-index is perhaps the most popular citation-based measure of scientific excellence. In 2013, Ionescu and Chopard proposed an agent-based model describing a process for generating publications and citations in an abstract scientific community [G. Ionescu, B. Chopard, Eur. Phys. J. B 86, 426 (2013)]. Within such a framework, one may simulate a scientist's activity, and - by extension - investigate the whole community of researchers. Even though the Ionescu and Chopard model predicts the h-index quite well, the authors provided a solution based solely on simulations. In this paper, we complete their results with exact, analytic formulas. What is more, by considering a simplified version of the Ionescu-Chopard model, we obtained a compact, easy to compute formula for the h-index. The derived approximate and exact solutions are investigated on a simulated and real-world data sets.
Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks
Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.
2015-01-01
Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406
Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks.
Walpole, J; Chappell, J C; Cluceru, J G; Mac Gabhann, F; Bautch, V L; Peirce, S M
2015-09-01
Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods.
2015-01-01
Computational simulations are currently used to identify epidemic dynamics, to test potential prevention and intervention strategies, and to study the effects of social behaviors on HIV transmission. The author describes an agent-based epidemic simulation model of a network of individuals who participate in high-risk sexual practices, using number of partners, condom usage, and relationship length to distinguish between high- and low-risk populations. Two new concepts—free links and fixed links—are used to indicate tendencies among individuals who either have large numbers of short-term partners or stay in long-term monogamous relationships. An attempt was made to reproduce epidemic curves of reported HIV cases among male homosexuals in Taiwan prior to using the agent-based model to determine the effects of various policies on epidemic dynamics. Results suggest that when suitable adjustments are made based on available social survey statistics, the model accurately simulates real-world behaviors on a large scale. PMID:25815047
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.G.; Mioduszewski, R.J.
The Chemical Computer Man: Chemical Agent Response Simulation (CARS) is a computer model and simulation program for estimating the dynamic changes in human physiological dysfunction resulting from exposures to chemical-threat nerve agents. The newly developed CARS methodology simulates agent exposure effects on the following five indices of human physiological function: mental, vision, cardio-respiratory, visceral, and limbs. Mathematical models and the application of basic pharmacokinetic principles were incorporated into the simulation so that for each chemical exposure, the relationship between exposure dosage, absorbed dosage (agent blood plasma concentration), and level of physiological response are computed as a function of time. CARS,more » as a simulation tool, is designed for the users with little or no computer-related experience. The model combines maximum flexibility with a comprehensive user-friendly interactive menu-driven system. Users define an exposure problem and obtain immediate results displayed in tabular, graphical, and image formats. CARS has broad scientific and engineering applications, not only in technology for the soldier in the area of Chemical Defense, but also in minimizing animal testing in biomedical and toxicological research and the development of a modeling system for human exposure to hazardous-waste chemicals.« less
Agent-based modeling: case study in cleavage furrow models.
Mogilner, Alex; Manhart, Angelika
2016-11-07
The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as "differential equation based" (DE) or "agent based" (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem-positioning of the cleavage furrow in dividing cells-to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. © 2016 Mogilner and Manhart. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Marshall, Thomas; Champagne-Langabeer, Tiffiany; Castelli, Darla; Hoelscher, Deanna
2017-12-01
To present research models based on artificial intelligence and discuss the concept of cognitive computing and eScience as disruptive factors in health and life science research methodologies. The paper identifies big data as a catalyst to innovation and the development of artificial intelligence, presents a framework for computer-supported human problem solving and describes a transformation of research support models. This framework includes traditional computer support; federated cognition using machine learning and cognitive agents to augment human intelligence; and a semi-autonomous/autonomous cognitive model, based on deep machine learning, which supports eScience. The paper provides a forward view of the impact of artificial intelligence on our human-computer support and research methods in health and life science research. By augmenting or amplifying human task performance with artificial intelligence, cognitive computing and eScience research models are discussed as novel and innovative systems for developing more effective adaptive obesity intervention programs.
NASA Astrophysics Data System (ADS)
Kanta, L.; Berglund, E. Z.
2015-12-01
Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
ERIC Educational Resources Information Center
Jacobson, Michael J.; Kim, Beaumie; Pathak, Suneeta; Zhang, BaoHui
2015-01-01
This research explores issues related to the sequencing of structure that is provided as pedagogical guidance. A study was conducted that involved grade 10 students in Singapore as they learned concepts about electricity using four NetLogo Investigations of Electricity agent-based models. It was found that the low-to-high structure learning…
Social influence, agent heterogeneity and the emergence of the urban informal sector
NASA Astrophysics Data System (ADS)
García-Díaz, César; Moreno-Monroy, Ana I.
2012-02-01
We develop an agent-based computational model in which the urban informal sector acts as a buffer where rural migrants can earn some income while queuing for higher paying modern-sector jobs. In the model, the informal sector emerges as a result of rural-urban migration decisions of heterogeneous agents subject to social influence in the form of neighboring effects of varying strengths. Besides using a multinomial logit choice model that allows for agent idiosyncrasy, explicit agent heterogeneity is introduced in the form of socio-demographic characteristics preferred by modern-sector employers. We find that different combinations of the strength of social influence and the socio-economic composition of the workforce lead to very different urbanization and urban informal sector shares. In particular, moderate levels of social influence and a large proportion of rural inhabitants with preferred socio-demographic characteristics are conducive to a higher urbanization rate and a larger informal sector.
Contagion Shocks in One Dimension
NASA Astrophysics Data System (ADS)
Bertozzi, Andrea L.; Rosado, Jesus; Short, Martin B.; Wang, Li
2015-02-01
We consider an agent-based model of emotional contagion coupled with motion in one dimension that has recently been studied in the computer science community. The model involves movement with a speed proportional to a "fear" variable that undergoes a temporal consensus averaging based on distance to other agents. We study the effect of Riemann initial data for this problem, leading to shock dynamics that are studied both within the agent-based model as well as in a continuum limit. We examine the behavior of the model under distinguished limits as the characteristic contagion interaction distance and the interaction timescale both approach zero. The limiting behavior is related to a classical model for pressureless gas dynamics with "sticky" particles. In comparison, we observe a threshold for the interaction distance vs. interaction timescale that produce qualitatively different behavior for the system - in one case particle paths do not cross and there is a natural Eulerian limit involving nonlocal interactions and in the other case particle paths can cross and one may consider only a kinetic model in the continuum limit.
Evolutionary game theory using agent-based methods.
Adami, Christoph; Schossau, Jory; Hintze, Arend
2016-12-01
Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutations rates, stochastic decisions, communication between agents, and spatial interactions, require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.
An Agent Inspired Reconfigurable Computing Implementation of a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Weir, John M.; Wells, B. Earl
2003-01-01
Many software systems have been successfully implemented using an agent paradigm which employs a number of independent entities that communicate with one another to achieve a common goal. The distributed nature of such a paradigm makes it an excellent candidate for use in high speed reconfigurable computing hardware environments such as those present in modem FPGA's. In this paper, a distributed genetic algorithm that can be applied to the agent based reconfigurable hardware model is introduced. The effectiveness of this new algorithm is evaluated by comparing the quality of the solutions found by the new algorithm with those found by traditional genetic algorithms. The performance of a reconfigurable hardware implementation of the new algorithm on an FPGA is compared to traditional single processor implementations.
CHAMPION: Intelligent Hierarchical Reasoning Agents for Enhanced Decision Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, Ryan E.; Greitzer, Frank L.; Noonan, Christine F.
2011-11-15
We describe the design and development of an advanced reasoning framework employing semantic technologies, organized within a hierarchy of computational reasoning agents that interpret domain specific information. Designed based on an inspirational metaphor of the pattern recognition functions performed by the human neocortex, the CHAMPION reasoning framework represents a new computational modeling approach that derives invariant knowledge representations through memory-prediction belief propagation processes that are driven by formal ontological language specification and semantic technologies. The CHAMPION framework shows promise for enhancing complex decision making in diverse problem domains including cyber security, nonproliferation and energy consumption analysis.
Ji, Zhiwei; Su, Jing; Wu, Dan; Peng, Huiming; Zhao, Weiling; Nlong Zhao, Brian; Zhou, Xiaobo
2017-01-31
Multiple myeloma is a malignant still incurable plasma cell disorder. This is due to refractory disease relapse, immune impairment, and development of multi-drug resistance. The growth of malignant plasma cells is dependent on the bone marrow (BM) microenvironment and evasion of the host's anti-tumor immune response. Hence, we hypothesized that targeting tumor-stromal cell interaction and endogenous immune system in BM will potentially improve the response of multiple myeloma (MM). Therefore, we proposed a computational simulation of the myeloma development in the complicated microenvironment which includes immune cell components and bone marrow stromal cells and predicted the effects of combined treatment with multi-drugs on myeloma cell growth. We constructed a hybrid multi-scale agent-based model (HABM) that combines an ODE system and Agent-based model (ABM). The ODEs was used for modeling the dynamic changes of intracellular signal transductions and ABM for modeling the cell-cell interactions between stromal cells, tumor, and immune components in the BM. This model simulated myeloma growth in the bone marrow microenvironment and revealed the important role of immune system in this process. The predicted outcomes were consistent with the experimental observations from previous studies. Moreover, we applied this model to predict the treatment effects of three key therapeutic drugs used for MM, and found that the combination of these three drugs potentially suppress the growth of myeloma cells and reactivate the immune response. In summary, the proposed model may serve as a novel computational platform for simulating the formation of MM and evaluating the treatment response of MM to multiple drugs.
Agent-based simulation of a financial market
NASA Astrophysics Data System (ADS)
Raberto, Marco; Cincotti, Silvano; Focardi, Sergio M.; Marchesi, Michele
2001-10-01
This paper introduces an agent-based artificial financial market in which heterogeneous agents trade one single asset through a realistic trading mechanism for price formation. Agents are initially endowed with a finite amount of cash and a given finite portfolio of assets. There is no money-creation process; the total available cash is conserved in time. In each period, agents make random buy and sell decisions that are constrained by available resources, subject to clustering, and dependent on the volatility of previous periods. The model proposed herein is able to reproduce the leptokurtic shape of the probability density of log price returns and the clustering of volatility. Implemented using extreme programming and object-oriented technology, the simulator is a flexible computational experimental facility that can find applications in both academic and industrial research projects.
Computational Research on Mobile Pastoralism Using Agent-Based Modeling and Satellite Imagery.
Sakamoto, Takuto
2016-01-01
Dryland pastoralism has long attracted considerable attention from researchers in diverse fields. However, rigorous formal study is made difficult by the high level of mobility of pastoralists as well as by the sizable spatio-temporal variability of their environment. This article presents a new computational approach for studying mobile pastoralism that overcomes these issues. Combining multi-temporal satellite images and agent-based modeling allows a comprehensive examination of pastoral resource access over a realistic dryland landscape with unpredictable ecological dynamics. The article demonstrates the analytical potential of this approach through its application to mobile pastoralism in northeast Nigeria. Employing more than 100 satellite images of the area, extensive simulations are conducted under a wide array of circumstances, including different land-use constraints. The simulation results reveal complex dependencies of pastoral resource access on these circumstances along with persistent patterns of seasonal land use observed at the macro level.
Computational Research on Mobile Pastoralism Using Agent-Based Modeling and Satellite Imagery
Sakamoto, Takuto
2016-01-01
Dryland pastoralism has long attracted considerable attention from researchers in diverse fields. However, rigorous formal study is made difficult by the high level of mobility of pastoralists as well as by the sizable spatio-temporal variability of their environment. This article presents a new computational approach for studying mobile pastoralism that overcomes these issues. Combining multi-temporal satellite images and agent-based modeling allows a comprehensive examination of pastoral resource access over a realistic dryland landscape with unpredictable ecological dynamics. The article demonstrates the analytical potential of this approach through its application to mobile pastoralism in northeast Nigeria. Employing more than 100 satellite images of the area, extensive simulations are conducted under a wide array of circumstances, including different land-use constraints. The simulation results reveal complex dependencies of pastoral resource access on these circumstances along with persistent patterns of seasonal land use observed at the macro level. PMID:26963526
Fire and Heat Spreading Model Based on Cellular Automata Theory
NASA Astrophysics Data System (ADS)
Samartsev, A. A.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.; Fominykh, D. S.
2018-05-01
The distinctive feature of the proposed fire and heat spreading model in premises is the reduction of the computational complexity due to the use of the theory of cellular automata with probability rules of behavior. The possibilities and prospects of using this model in practice are noted. The proposed model has a simple mechanism of integration with agent-based evacuation models. The joint use of these models could improve floor plans and reduce the time of evacuation from premises during fires.
Agent-based modeling and systems dynamics model reproduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Macal, C. M.
2009-01-01
Reproducibility is a pillar of the scientific endeavour. We view computer simulations as laboratories for electronic experimentation and therefore as tools for science. Recent studies have addressed model reproduction and found it to be surprisingly difficult to replicate published findings. There have been enough failed simulation replications to raise the question, 'can computer models be fully replicated?' This paper answers in the affirmative by reporting on a successful reproduction study using Mathematica, Repast and Swarm for the Beer Game supply chain model. The reproduction process was valuable because it demonstrated the original result's robustness across modelling methodologies and implementation environments.
NASA Astrophysics Data System (ADS)
Malafeyev, O. A.; Nemnyugin, S. A.; Rylow, D.; Kolpak, E. P.; Awasthi, Achal
2017-07-01
The corruption dynamics is analyzed by means of the lattice model which is similar to the three-dimensional Ising model. Agents placed at nodes of the corrupt network periodically choose to perfom or not to perform the act of corruption at gain or loss while making decisions based on the process history. The gain value and its dynamics are defined by means of the Markov stochastic process modelling with parameters established in accordance with the influence of external and individual factors on the agent's gain. The model is formulated algorithmically and is studied by means of the computer simulation. Numerical results are obtained which demonstrate asymptotic behaviour of the corruption network under various conditions.
The Power of Flexibility: Autonomous Agents That Conserve Energy in Commercial Buildings
NASA Astrophysics Data System (ADS)
Kwak, Jun-young
Agent-based systems for energy conservation are now a growing area of research in multiagent systems, with applications ranging from energy management and control on the smart grid, to energy conservation in residential buildings, to energy generation and dynamic negotiations in distributed rural communities. Contributing to this area, my thesis presents new agent-based models and algorithms aiming to conserve energy in commercial buildings. More specifically, my thesis provides three sets of algorithmic contributions. First, I provide online predictive scheduling algorithms to handle massive numbers of meeting/event scheduling requests considering flexibility , which is a novel concept for capturing generic user constraints while optimizing the desired objective. Second, I present a novel BM-MDP ( Bounded-parameter Multi-objective Markov Decision Problem) model and robust algorithms for multi-objective optimization under uncertainty both at the planning and execution time. The BM-MDP model and its robust algorithms are useful in (re)scheduling events to achieve energy efficiency in the presence of uncertainty over user's preferences. Third, when multiple users contribute to energy savings, fair division of credit for such savings to incentivize users for their energy saving activities arises as an important question. I appeal to cooperative game theory and specifically to the concept of Shapley value for this fair division. Unfortunately, scaling up this Shapley value computation is a major hindrance in practice. Therefore, I present novel approximation algorithms to efficiently compute the Shapley value based on sampling and partitions and to speed up the characteristic function computation. These new models have not only advanced the state of the art in multiagent algorithms, but have actually been successfully integrated within agents dedicated to energy efficiency: SAVES, TESLA and THINC. SAVES focuses on the day-to-day energy consumption of individuals and groups in commercial buildings by reactively suggesting energy conserving alternatives. TESLA takes a long-range planning perspective and optimizes overall energy consumption of a large number of group events or meetings together. THINC provides an end-to-end integration within a single agent of energy efficient scheduling, rescheduling and credit allocation. While SAVES, TESLA and THINC thus differ in their scope and applicability, they demonstrate the utility of agent-based systems in actually reducing energy consumption in commercial buildings. I evaluate my algorithms and agents using extensive analysis on data from over 110,000 real meetings/events at multiple educational buildings including the main libraries at the University of Southern California. I also provide results on simulations and real-world experiments, clearly demonstrating the power of agent technology to assist human users in saving energy in commercial buildings.
ERIC Educational Resources Information Center
Jones, Thomas; Laughlin, Thomas
2009-01-01
Nothing could be more effective than a wilderness experience to demonstrate the importance of conserving biodiversity. When that is not possible, though, there are computer models with several features that are helpful in understanding how biodiversity is measured. These models are easily used when natural resources, transportation, and time…
A Culture-Sensitive Agent in Kirman's Ant Model
NASA Astrophysics Data System (ADS)
Chen, Shu-Heng; Liou, Wen-Ching; Chen, Ting-Yu
The global financial crisis brought a serious collapse involving a "systemic" meltdown. Internet technology and globalization have increased the chances for interaction between countries and people. The global economy has become more complex than ever before. Mark Buchanan [12] indicated that agent-based computer models will prevent another financial crisis and has been particularly influential in contributing insights. There are two reasons why culture-sensitive agent on the financial market has become so important. Therefore, the aim of this article is to establish a culture-sensitive agent and forecast the process of change regarding herding behavior in the financial market. We based our study on the Kirman's Ant Model[4,5] and Hofstede's Natational Culture[11] to establish our culture-sensitive agent based model. Kirman's Ant Model is quite famous and describes financial market herding behavior from the expectations of the future of financial investors. Hofstede's cultural consequence used the staff of IBM in 72 different countries to understand the cultural difference. As a result, this paper focuses on one of the five dimensions of culture from Hofstede: individualism versus collectivism and creates a culture-sensitive agent and predicts the process of change regarding herding behavior in the financial market. To conclude, this study will be of importance in explaining the herding behavior with cultural factors, as well as in providing researchers with a clearer understanding of how herding beliefs of people about different cultures relate to their finance market strategies.
A Multi-Agent Approach to the Simulation of Robotized Manufacturing Systems
NASA Astrophysics Data System (ADS)
Foit, K.; Gwiazda, A.; Banaś, W.
2016-08-01
The recent years of eventful industry development, brought many competing products, addressed to the same market segment. The shortening of a development cycle became a necessity if the company would like to be competitive. Because of switching to the Intelligent Manufacturing model the industry search for new scheduling algorithms, while the traditional ones do not meet the current requirements. The agent-based approach has been considered by many researchers as an important way of evolution of modern manufacturing systems. Due to the properties of the multi-agent systems, this methodology is very helpful during creation of the model of production system, allowing depicting both processing and informational part. The complexity of such approach makes the analysis impossible without the computer assistance. Computer simulation still uses a mathematical model to recreate a real situation, but nowadays the 2D or 3D virtual environments or even virtual reality have been used for realistic illustration of the considered systems. This paper will focus on robotized manufacturing system and will present the one of possible approaches to the simulation of such systems. The selection of multi-agent approach is motivated by the flexibility of this solution that offers the modularity, robustness and autonomy.
Minimal model for tag-based cooperation
NASA Astrophysics Data System (ADS)
Traulsen, Arne; Schuster, Heinz Georg
2003-10-01
Recently, Riolo et al. [Nature (London) 414, 441 (2001)] showed by computer simulations that cooperation can arise without reciprocity when agents donate only to partners who are sufficiently similar to themselves. One striking outcome of their simulations was the observation that the number of tolerant agents that support a wide range of players was not constant in time, but showed characteristic fluctuations. The cause and robustness of these tides of tolerance remained to be explored. Here we clarify the situation by solving a minimal version of the model of Riolo et al. It allows us to identify a net surplus of random changes from intolerant to tolerant agents as a necessary mechanism that produces these oscillations of tolerance, which segregate different agents in time. This provides a new mechanism for maintaining different agents, i.e., for creating biodiversity. In our model the transition to the oscillating state is caused by a saddle node bifurcation. The frequency of the oscillations increases linearly with the transition rate from tolerant to intolerant agents.
NASA Astrophysics Data System (ADS)
Yoon, J.; Klassert, C. J. A.; Lachaut, T.; Selby, P. D.; Knox, S.; Gorelick, S.; Rajsekhar, D.; Tilmant, A.; Avisse, N.; Harou, J. J.; Gawel, E.; Klauer, B.; Mustafa, D.; Talozi, S.; Sigel, K.
2015-12-01
Our work focuses on development of a multi-agent, hydroeconomic model for purposes of water policy evaluation in Jordan. The model adopts a modular approach, integrating biophysical modules that simulate natural and engineered phenomena with human modules that represent behavior at multiple levels of decision making. The hydrologic modules are developed using spatially-distributed groundwater and surface water models, which are translated into compact simulators for efficient integration into the multi-agent model. For the groundwater model, we adopt a response matrix method approach in which a 3-dimensional MODFLOW model of a complex regional groundwater system is converted into a linear simulator of groundwater response by pre-processing drawdown results from several hundred numerical simulation runs. Surface water models for each major surface water basin in the country are developed in SWAT and similarly translated into simple rainfall-runoff functions for integration with the multi-agent model. The approach balances physically-based, spatially-explicit representation of hydrologic systems with the efficiency required for integration into a complex multi-agent model that is computationally amenable to robust scenario analysis. For the multi-agent model, we explicitly represent human agency at multiple levels of decision making, with agents representing riparian, management, supplier, and water user groups. The agents' decision making models incorporate both rule-based heuristics as well as economic optimization. The model is programmed in Python using Pynsim, a generalizable, open-source object-oriented code framework for modeling network-based water resource systems. The Jordan model is one of the first applications of Pynsim to a real-world water management case study. Preliminary results from a tanker market scenario run through year 2050 are presented in which several salient features of the water system are investigated: competition between urban and private farmer agents, the emergence of a private tanker market, disparities in economic wellbeing to different user groups caused by unique supply conditions, and response of the complex system to various policy interventions.
A hybrid computational model to explore the topological characteristics of epithelial tissues.
González-Valverde, Ismael; García-Aznar, José Manuel
2017-11-01
Epithelial tissues show a particular topology where cells resemble a polygon-like shape, but some biological processes can alter this tissue topology. During cell proliferation, mitotic cell dilation deforms the tissue and modifies the tissue topology. Additionally, cells are reorganized in the epithelial layer and these rearrangements also alter the polygon distribution. We present here a computer-based hybrid framework focused on the simulation of epithelial layer dynamics that combines discrete and continuum numerical models. In this framework, we consider topological and mechanical aspects of the epithelial tissue. Individual cells in the tissue are simulated by an off-lattice agent-based model, which keeps the information of each cell. In addition, we model the cell-cell interaction forces and the cell cycle. Otherwise, we simulate the passive mechanical behaviour of the cell monolayer using a material that approximates the mechanical properties of the cell. This continuum approach is solved by the finite element method, which uses a dynamic mesh generated by the triangulation of cell polygons. Forces generated by cell-cell interaction in the agent-based model are also applied on the finite element mesh. Cell movement in the agent-based model is driven by the displacements obtained from the deformed finite element mesh of the continuum mechanical approach. We successfully compare the results of our simulations with some experiments about the topology of proliferating epithelial tissues in Drosophila. Our framework is able to model the emergent behaviour of the cell monolayer that is due to local cell-cell interactions, which have a direct influence on the dynamics of the epithelial tissue. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
da Silva, Roberto; Vainstein, Mendeli H.; Gonçalves, Sebastián; Paula, Felipe S. F.
2013-08-01
Statistics of soccer tournament scores based on the double round robin system of several countries are studied. Exploring the dynamics of team scoring during tournament seasons from recent years we find evidences of superdiffusion. A mean-field analysis results in a drift velocity equal to that of real data but in a different diffusion coefficient. Along with the analysis of real data we present the results of simulations of soccer tournaments obtained by an agent-based model which successfully describes the final scoring distribution [da Silva , Comput. Phys. Commun.CPHCBZ0010-465510.1016/j.cpc.2012.10.030 184, 661 (2013)]. Such model yields random walks of scores over time with the same anomalous diffusion as observed in real data.
Modeling the Chagas’ disease after stem cell transplantation
NASA Astrophysics Data System (ADS)
Galvão, Viviane; Miranda, José Garcia Vivas
2009-04-01
A recent model for Chagas’ disease after stem cell transplantation is extended for a three-dimensional multi-agent-based model. The computational model includes six different types of autonomous agents: inflammatory cell, fibrosis, cardiomyocyte, proinflammatory cytokine tumor necrosis factor- α, Trypanosoma cruzi, and bone marrow stem cell. Only fibrosis is fixed and the other types of agents can move randomly through the empty spaces using the three-dimensional Moore neighborhood. Bone marrow stem cells can promote apoptosis in inflammatory cells, fibrosis regression and can differentiate in cardiomyocyte. T. cruzi can increase the number of inflammatory cells. Inflammatory cells and tumor necrosis factor- α can increase the quantity of fibrosis. Our results were compared with experimental data giving a fairly fit and they suggest that the inflammatory cells are important for the development of fibrosis.
Agent-based modelling in synthetic biology.
Gorochowski, Thomas E
2016-11-30
Biological systems exhibit complex behaviours that emerge at many different levels of organization. These span the regulation of gene expression within single cells to the use of quorum sensing to co-ordinate the action of entire bacterial colonies. Synthetic biology aims to make the engineering of biology easier, offering an opportunity to control natural systems and develop new synthetic systems with useful prescribed behaviours. However, in many cases, it is not understood how individual cells should be programmed to ensure the emergence of a required collective behaviour. Agent-based modelling aims to tackle this problem, offering a framework in which to simulate such systems and explore cellular design rules. In this article, I review the use of agent-based models in synthetic biology, outline the available computational tools, and provide details on recently engineered biological systems that are amenable to this approach. I further highlight the challenges facing this methodology and some of the potential future directions. © 2016 The Author(s).
Agent Based Modeling Applications for Geosciences
NASA Astrophysics Data System (ADS)
Stein, J. S.
2004-12-01
Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include: significant computational requirements in order to keep track of thousands to millions of agents, methods and strategies of model validation are lacking, as is a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences, include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in a thermodynamic framework as a set of reactions that roll-up the integrated effect that diverse biological communities exert on a geological system. This approach may work well to predict the effect of certain biological communities in specific environments in which experimental data is available. However, it does not further our knowledge of how the geobiological system actually functions on a micro scale. Agent-based techniques may provide a framework to explore the fundamental interactions required to explain the system-wide behavior. This presentation will present a survey of several promising applications of agent-based modeling approaches to problems in the geosciences and describe specific contributions to some of the inherent challenges facing this approach.
A conceptual and computational model of moral decision making in human and artificial agents.
Wallach, Wendell; Franklin, Stan; Allen, Colin
2010-07-01
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors. Copyright © 2010 Cognitive Science Society, Inc.
Understanding Emergency Care Delivery Through Computer Simulation Modeling.
Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L
2018-02-01
In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
Agent-Based Computational Modeling of Cell Culture ...
Quantitative characterization of cellular dose in vitro is needed for alignment of doses in vitro and in vivo. We used the agent-based software, CompuCell3D (CC3D), to provide a stochastic description of cell growth in culture. The model was configured so that isolated cells assumed a “fried egg shape” but became increasingly cuboidal with increasing confluency. The surface area presented by each cell to the overlying medium varies from cell-to-cell and is a determinant of diffusional flux of toxicant from the medium into the cell. Thus, dose varies among cells for a given concentration of toxicant in the medium. Computer code describing diffusion of H2O2 from medium into each cell and clearance of H2O2 was calibrated against H2O2 time-course data (25, 50, or 75 uM H2O2 for 60 min) obtained with the Amplex Red assay for the medium and the H2O2-sensitive fluorescent reporter, HyPer, for cytosol. Cellular H2O2 concentrations peaked at about 5 min and were near baseline by 10 min. The model predicted a skewed distribution of surface areas, with between cell variation usually 2 fold or less. Predicted variability in cellular dose was in rough agreement with the variation in the HyPer data. These results are preliminary, as the model was not calibrated to the morphology of a specific cell type. Future work will involve morphology model calibration against human bronchial epithelial (BEAS-2B) cells. Our results show, however, the potential of agent-based modeling
Rural-Urban Migration in D-Dimensional Lattices
NASA Astrophysics Data System (ADS)
Espíndola, Aquino L.; Penna, T. J. P.; Silveira, Jaylson J.
The rural-urban migration phenomenon is analyzed by using an agent-based computational model. Agents are placed on lattices which dimensions varying from d =2 up to d =7. The localization of the agents in the lattice defines that their social neighborhood (rural or urban) is not related to their spatial distribution. The effect of the dimension of lattice is studied by analyzing the variation of the main parameters that characterizes the migratory process. The dynamics displays strong effects even for around one million of sites, in higher dimensions (d =6, 7).
Clustering recommendations to compute agent reputation
NASA Astrophysics Data System (ADS)
Bedi, Punam; Kaur, Harmeet
2005-03-01
Traditional centralized approaches to security are difficult to apply to multi-agent systems which are used nowadays in e-commerce applications. Developing a notion of trust that is based on the reputation of an agent can provide a softer notion of security that is sufficient for many multi-agent applications. Our paper proposes a mechanism for computing reputation of the trustee agent for use by the trustier agent. The trustier agent computes the reputation based on its own experience as well as the experience the peer agents have with the trustee agents. The trustier agents intentionally interact with the peer agents to get their experience information in the form of recommendations. We have also considered the case of unintentional encounters between the referee agents and the trustee agent, which can be directly between them or indirectly through a set of interacting agents. The clustering is done to filter off the noise in the recommendations in the form of outliers. The trustier agent clusters the recommendations received from referee agents on the basis of the distances between recommendations using the hierarchical agglomerative method. The dendogram hence obtained is cut at the required similarity level which restricts the maximum distance between any two recommendations within a cluster. The cluster with maximum number of elements denotes the views of the majority of recommenders. The center of this cluster represents the reputation of the trustee agent which can be computed using c-means algorithm.
Understanding the Dynamics of Violent Political Revolutions in an Agent-Based Framework.
Moro, Alessandro
2016-01-01
This paper develops an agent-based computational model of violent political revolutions in which a subjugated population of citizens and an armed revolutionary organisation attempt to overthrow a central authority and its loyal forces. The model replicates several patterns of rebellion consistent with major historical revolutions, and provides an explanation for the multiplicity of outcomes that can arise from an uprising. The relevance of the heterogeneity of scenarios predicted by the model can be understood by considering the recent experience of the Arab Spring involving several rebellions that arose in an apparently similar way, but resulted in completely different political outcomes: the successful revolution in Tunisia, the failed protests in Saudi Arabia and Bahrain, and civil war in Syria and Libya.
Understanding the Dynamics of Violent Political Revolutions in an Agent-Based Framework
Moro, Alessandro
2016-01-01
This paper develops an agent-based computational model of violent political revolutions in which a subjugated population of citizens and an armed revolutionary organisation attempt to overthrow a central authority and its loyal forces. The model replicates several patterns of rebellion consistent with major historical revolutions, and provides an explanation for the multiplicity of outcomes that can arise from an uprising. The relevance of the heterogeneity of scenarios predicted by the model can be understood by considering the recent experience of the Arab Spring involving several rebellions that arose in an apparently similar way, but resulted in completely different political outcomes: the successful revolution in Tunisia, the failed protests in Saudi Arabia and Bahrain, and civil war in Syria and Libya. PMID:27104855
Suppression Characteristics of Cup-Burner Flames in Low Gravity
NASA Technical Reports Server (NTRS)
Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.
2004-01-01
The structure and suppression of laminar methane-air co-flow diffusion flames formed on a cup burner have been studied experimentally and numerically using physically acting fire-extinguishing agents (CO2, N2, He, and Ar) in normal earth (lg) and zero gravity (0g). The computation uses a direct numerical simulation with detailed chemistry and radiative heat-loss models. An initial observation of the flame without agent was also made at the NASA Glenn 2.2-Second Drop Tower. An agent was introduced into a low-speed coflowing oxidizing stream by gradually replacing the air until extinguishment occurred under a fixed minimal fuel velocity. The suppression of cup-burner flames, which resemble real fires, occurred via a blowoff process (in which the flame base drifted downstream) rather than the global extinction phenomenon typical of counterflow diffusion flames. The computation revealed that the peak reactivity spot (the reaction kernel) formed in the flame base was responsible for attachment and blowoff phenomena of the trailing diffusion flame. The thermal and transport properties of the agents affected the flame extinguishment limits.
NASA Astrophysics Data System (ADS)
Inkoom, J. N.; Nyarko, B. K.
2014-12-01
The integration of geographic information systems (GIS) and agent-based modelling (ABM) can be an efficient tool to improve spatial planning practices. This paper utilizes GIS and ABM approaches to simulate spatial growth patterns of settlement structures in Shama. A preliminary household survey on residential location decision-making choice served as the behavioural rule for household agents in the model. Physical environment properties of the model were extracted from a 2005 image implemented in NetLogo. The resulting growth pattern model was compared with empirical growth patterns to ascertain the model's accuracy. The paper establishes that the development of unplanned structures and its evolving structural pattern are a function of land price, proximity to economic centres, household economic status and location decision-making patterns. The application of the proposed model underlines its potential for integration into urban planning policies and practices, and for understanding residential decision-making processes in emerging cities in developing countries. Key Words: GIS; Agent-based modelling; Growth patterns; NetLogo; Location decision making; Computational Intelligence.
Agent-Based Learning Environments as a Research Tool for Investigating Teaching and Learning.
ERIC Educational Resources Information Center
Baylor, Amy L.
2002-01-01
Discusses intelligent learning environments for computer-based learning, such as agent-based learning environments, and their advantages over human-based instruction. Considers the effects of multiple agents; agents and research design; the use of Multiple Intelligent Mentors Instructing Collaboratively (MIMIC) for instructional design for…
Modeling Co-evolution of Speech and Biology.
de Boer, Bart
2016-04-01
Two computer simulations are investigated that model interaction of cultural evolution of language and biological evolution of adaptations to language. Both are agent-based models in which a population of agents imitates each other using realistic vowels. The agents evolve under selective pressure for good imitation. In one model, the evolution of the vocal tract is modeled; in the other, a cognitive mechanism for perceiving speech accurately is modeled. In both cases, biological adaptations to using and learning speech evolve, even though the system of speech sounds itself changes at a more rapid time scale than biological evolution. However, the fact that the available acoustic space is used maximally (a self-organized result of cultural evolution) is constant, and therefore biological evolution does have a stable target. This work shows that when cultural and biological traits are continuous, their co-evolution may lead to cognitive adaptations that are strong enough to detect empirically. Copyright © 2016 Cognitive Science Society, Inc.
Controlling Hazardous Releases while Protecting Passengers in Civil Infrastructure Systems
NASA Astrophysics Data System (ADS)
Rimer, Sara P.; Katopodes, Nikolaos D.
2015-11-01
The threat of accidental or deliberate toxic chemicals released into public spaces is a significant concern to public safety, and the real-time detection and mitigation of such hazardous contaminants has the potential to minimize harm and save lives. Furthermore, the safe evacuation of occupants during such a catastrophe is of utmost importance. This research develops a comprehensive means to address such scenarios, through both the sensing and control of contaminants, and the modeling of and potential communication to occupants as they evacuate. A computational fluid dynamics model is developed of a simplified public space characterized by a long conduit (e.g. airport terminal) with unidirectional ambient flow that is capable of detecting and mitigating the hazardous contaminant (via boundary ports) over several time horizons using model predictive control optimization. Additionally, a physical prototype is built to test the real-time feasibility of this computational flow control model. The prototype is a blower wind-tunnel with an elongated test section with the capability of sensing (via digital camera) an injected `contaminant' (propylene glycol smoke), and then mitigating that contaminant using actuators (compressed air operated vacuum nozzles) which are operated by a set of pressure regulators and a programmable controller. Finally, an agent-based model is developed to simulate ``agents'' (i.e. building occupants) as they evacuate a public space, and is coupled with the computational flow control model such that agents must interact with a dynamic, threatening environment. NSF-CMMI #0856438.
Digital morphogenesis via Schelling segregation
NASA Astrophysics Data System (ADS)
Barmpalias, George; Elwes, Richard; Lewis-Pye, Andrew
2018-04-01
Schelling’s model of segregation looks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type. As one of the earliest agent based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks. While the model has been extensively studied it has largely resisted rigorous analysis, prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. In Brandt et al (2012 Proc. 44th Annual ACM Symp. on Theory of Computing) provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model’s behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an increased level of intolerance for neighbouring agents of opposite type leads almost certainly to decreased segregation.
NASA Astrophysics Data System (ADS)
Rimland, Jeffrey; McNeese, Michael; Hall, David
2013-05-01
Although the capability of computer-based artificial intelligence techniques for decision-making and situational awareness has seen notable improvement over the last several decades, the current state-of-the-art still falls short of creating computer systems capable of autonomously making complex decisions and judgments in many domains where data is nuanced and accountability is high. However, there is a great deal of potential for hybrid systems in which software applications augment human capabilities by focusing the analyst's attention to relevant information elements based on both a priori knowledge of the analyst's goals and the processing/correlation of a series of data streams too numerous and heterogeneous for the analyst to digest without assistance. Researchers at Penn State University are exploring ways in which an information framework influenced by Klein's (Recognition Primed Decision) RPD model, Endsley's model of situational awareness, and the Joint Directors of Laboratories (JDL) data fusion process model can be implemented through a novel combination of Complex Event Processing (CEP) and Multi-Agent Software (MAS). Though originally designed for stock market and financial applications, the high performance data-driven nature of CEP techniques provide a natural compliment to the proven capabilities of MAS systems for modeling naturalistic decision-making, performing process adjudication, and optimizing networked processing and cognition via the use of "mobile agents." This paper addresses the challenges and opportunities of such a framework for augmenting human observational capability as well as enabling the ability to perform collaborative context-aware reasoning in both human teams and hybrid human / software agent teams.
Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics
NASA Astrophysics Data System (ADS)
Saeedi, Sara
2018-06-01
With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling substantially improved over the last several decades. The complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by Artificial Neural Network (ANN) and deployed as the agent environment and micro-scale residential dynamics behaviors autonomously implemented by household agents. These two levels of simulation interacted and jointly promoted urbanization process in an urban area of Tehran city in Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can give full play to the potential of the ABM to understand the driving mechanism of urbanization and provide decision-making support for urban management.
ERIC Educational Resources Information Center
Plant, E. Ashby; Baylor, Amy L.; Doerr, Celeste E.; Rosenberg-Kima, Rinat B.
2009-01-01
Women's under-representation in fields such as engineering may result in part from female students' negative beliefs regarding these fields and their low self-efficacy for these fields. In this experiment, we investigated the use of animated interface agents as social models for changing male and female middle-school students' attitudes toward…
Community-aware task allocation for social networked multiagent systems.
Wang, Wanyuan; Jiang, Yichuan
2014-09-01
In this paper, we propose a novel community-aware task allocation model for social networked multiagent systems (SN-MASs), where the agent' cooperation domain is constrained in community and each agent can negotiate only with its intracommunity member agents. Under such community-aware scenarios, we prove that it remains NP-hard to maximize system overall profit. To solve this problem effectively, we present a heuristic algorithm that is composed of three phases: 1) task selection: select the desirable task to be allocated preferentially; 2) allocation to community: allocate the selected task to communities based on a significant task-first heuristics; and 3) allocation to agent: negotiate resources for the selected task based on a nonoverlap agent-first and breadth-first resource negotiation mechanism. Through the theoretical analyses and experiments, the advantages of our presented heuristic algorithm and community-aware task allocation model are validated. 1) Our presented heuristic algorithm performs very closely to the benchmark exponential brute-force optimal algorithm and the network flow-based greedy algorithm in terms of system overall profit in small-scale applications. Moreover, in the large-scale applications, the presented heuristic algorithm achieves approximately the same overall system profit, but significantly reduces the computational load compared with the greedy algorithm. 2) Our presented community-aware task allocation model reduces the system communication cost compared with the previous global-aware task allocation model and improves the system overall profit greatly compared with the previous local neighbor-aware task allocation model.
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
NASA Technical Reports Server (NTRS)
Wakim, Nagi T.; Srivastava, Sadanand; Bousaidi, Mehdi; Goh, Gin-Hua
1995-01-01
Agent-based technologies answer to several challenges posed by additional information processing requirements in today's computing environments. In particular, (1) users desire interaction with computing devices in a mode which is similar to that used between people, (2) the efficiency and successful completion of information processing tasks often require a high-level of expertise in complex and multiple domains, (3) information processing tasks often require handling of large volumes of data and, therefore, continuous and endless processing activities. The concept of an agent is an attempt to address these new challenges by introducing information processing environments in which (1) users can communicate with a system in a natural way, (2) an agent is a specialist and a self-learner and, therefore, it qualifies to be trusted to perform tasks independent of the human user, and (3) an agent is an entity that is continuously active performing tasks that are either delegated to it or self-imposed. The work described in this paper focuses on the development of an interface agent for users of a complex information processing environment (IPE). This activity is part of an on-going effort to build a model for developing agent-based information systems. Such systems will be highly applicable to environments which require a high degree of automation, such as, flight control operations and/or processing of large volumes of data in complex domains, such as the EOSDIS environment and other multidisciplinary, scientific data systems. The concept of an agent as an information processing entity is fully described with emphasis on characteristics of special interest to the User-System Interface Agent (USIA). Issues such as agent 'existence' and 'qualification' are discussed in this paper. Based on a definition of an agent and its main characteristics, we propose an architecture for the development of interface agents for users of an IPE that is agent-oriented and whose resources are likely to be distributed and heterogeneous in nature. The architecture of USIA is outlined in two main components: (1) the user interface which is concerned with issues as user dialog and interaction, user modeling, and adaptation to user profile, and (2) the system interface part which deals with identification of IPE capabilities, task understanding and feasibility assessment, and task delegation and coordination of assistant agents.
Vera, Javier
2018-01-01
What is the influence of short-term memory enhancement on the emergence of grammatical agreement systems in multi-agent language games? Agreement systems suppose that at least two words share some features with each other, such as gender, number, or case. Previous work, within the multi-agent language-game framework, has recently proposed models stressing the hypothesis that the emergence of a grammatical agreement system arises from the minimization of semantic ambiguity. On the other hand, neurobiological evidence argues for the hypothesis that language evolution has mainly related to an increasing of short-term memory capacity, which has allowed the online manipulation of words and meanings participating particularly in grammatical agreement systems. Here, the main aim is to propose a multi-agent language game for the emergence of a grammatical agreement system, under measurable long-range relations depending on the short-term memory capacity. Computer simulations, based on a parameter that measures the amount of short-term memory capacity, suggest that agreement marker systems arise in a population of agents equipped at least with a critical short-term memory capacity.
Automated Intelligent Agents: Are They Trusted Members of Military Teams?
2008-12-01
computer -based team firefighting game (C3Fire). The order of presentation of the two trials (human – human vs. human – automation) was...agent. All teams played a computer -based team firefighting game (C3Fire). The order of presentation of the two trials (human – human vs. human...26 b. Participants’ Computer ..................27 C. VARIABLES .........................................27 1. Independent Variables
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY
Somogyi, Endre; Hagar, Amit; Glazier, James A.
2017-01-01
Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical process and reorganize via transformation, birth, death and migration processes. Current programming language have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM) which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models. PMID:29282379
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, N.; Pryor, R.J.
1997-09-01
This report presents a microsimulation model of a transition economy. Transition is defined as the process of moving from a state-enterprise economy to a market economy. The emphasis is on growing a market economy starting from basic microprinciples. The model described in this report extends and modifies the capabilities of Aspen, a new agent-based model that is being developed at Sandia National Laboratories on a massively parallel Paragon computer. Aspen is significantly different from traditional models of the economy. Aspen`s emphasis on disequilibrium growth paths, its analysis based on evolution and emergent behavior rather than on a mechanistic view ofmore » society, and its use of learning algorithms to simulate the behavior of some agents rather than an assumption of perfect rationality make this model well-suited for analyzing economic variables of interest from transition economies. Preliminary results from several runs of the model are included.« less
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY.
Somogyi, Endre; Hagar, Amit; Glazier, James A
2016-12-01
Living tissues are dynamic, heterogeneous compositions of objects , including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical process and reorganize via transformation, birth, death and migration processes . Current programming language have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology ( CCOPM ) which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models.
Alterations in choice behavior by manipulations of world model.
Green, C S; Benson, C; Kersten, D; Schrater, P
2010-09-14
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
Alterations in choice behavior by manipulations of world model
Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.
2010-01-01
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507
A Multiscale Agent-Based in silico Model of Liver Fibrosis Progression
Dutta-Moscato, Joyeeta; Solovyev, Alexey; Mi, Qi; Nishikawa, Taichiro; Soto-Gutierrez, Alejandro; Fox, Ira J.; Vodovotz, Yoram
2014-01-01
Chronic hepatic inflammation involves a complex interplay of inflammatory and mechanical influences, ultimately manifesting in a characteristic histopathology of liver fibrosis. We created an agent-based model (ABM) of liver tissue in order to computationally examine the consequence of liver inflammation. Our liver fibrosis ABM (LFABM) is comprised of literature-derived rules describing molecular and histopathological aspects of inflammation and fibrosis in a section of chemically injured liver. Hepatocytes are modeled as agents within hexagonal lobules. Injury triggers an inflammatory reaction, which leads to activation of local Kupffer cells and recruitment of monocytes from circulation. Portal fibroblasts and hepatic stellate cells are activated locally by the products of inflammation. The various agents in the simulation are regulated by above-threshold concentrations of pro- and anti-inflammatory cytokines and damage-associated molecular pattern molecules. The simulation progresses from chronic inflammation to collagen deposition, exhibiting periportal fibrosis followed by bridging fibrosis, and culminating in disruption of the regular lobular structure. The ABM exhibited key histopathological features observed in liver sections from rats treated with carbon tetrachloride (CCl4). An in silico “tension test” for the hepatic lobules predicted an overall increase in tissue stiffness, in line with clinical elastography literature and published studies in CCl4-treated rats. Therapy simulations suggested differential anti-fibrotic effects of neutralizing tumor necrosis factor alpha vs. enhancing M2 Kupffer cells. We conclude that a computational model of liver inflammation on a structural skeleton of physical forces can recapitulate key histopathological and macroscopic properties of CCl4-injured liver. This multiscale approach linking molecular and chemomechanical stimuli enables a model that could be used to gain translationally relevant insights into liver fibrosis. PMID:25152891
Manson, Steven M.; Evans, Tom
2007-01-01
We combine mixed-methods research with integrated agent-based modeling to understand land change and economic decision making in the United States and Mexico. This work demonstrates how sustainability science benefits from combining integrated agent-based modeling (which blends methods from the social, ecological, and information sciences) and mixed-methods research (which interleaves multiple approaches ranging from qualitative field research to quantitative laboratory experiments and interpretation of remotely sensed imagery). We test assumptions of utility-maximizing behavior in household-level landscape management in south-central Indiana, linking parcel data, land cover derived from aerial photography, and findings from laboratory experiments. We examine the role of uncertainty and limited information, preferences, differential demographic attributes, and past experience and future time horizons. We also use evolutionary programming to represent bounded rationality in agriculturalist households in the southern Yucatán of Mexico. This approach captures realistic rule of thumb strategies while identifying social and environmental factors in a manner similar to econometric models. These case studies highlight the role of computational models of decision making in land-change contexts and advance our understanding of decision making in general. PMID:18093928
Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis
NASA Astrophysics Data System (ADS)
Mills, D. A.
2017-10-01
In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis will affect the results. When calculating vaccination efficacy, for example, a simple environment that offers various ideal outcomes is often modeled using coarse scale data aggregated on an annual basis. In contrast to the inadequacy of this aggregated method, this research uses agent based modeling of fine-scale neighborhood data centered around the interactions of infants in daycare and their families to demonstrate an accurate reflection of vaccination capabilities. Despite being able to prevent major symptoms, recent studies suggest that acellular Pertussis does not prevent the colonization and transmission of Bordetella Pertussis bacteria. After vaccination, a treated individual becomes a potential asymptomatic carrier of the Pertussis bacteria, rather than an immune individual. Agent based modeling enables the measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida Pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need for reconsidering current methods that are widely used for calculating vaccination efficacy and effectiveness.
Applications of Multi-Agent Technology to Power Systems
NASA Astrophysics Data System (ADS)
Nagata, Takeshi
Currently, agents are focus of intense on many sub-fields of computer science and artificial intelligence. Agents are being used in an increasingly wide variety of applications. Many important computing applications such as planning, process control, communication networks and concurrent systems will benefit from using multi-agent system approach. A multi-agent system is a structure given by an environment together with a set of artificial agents capable to act on this environment. Multi-agent models are oriented towards interactions, collaborative phenomena, and autonomy. This article presents the applications of multi-agent technology to the power systems.
Coevolution in management fashion: an agent-based model of consultant-driven innovation.
Strang, David; David, Robert J; Akhlaghpour, Saeed
2014-07-01
The rise of management consultancy has been accompanied by increasingly marked faddish cycles in management techniques, but the mechanisms that underlie this relationship are not well understood. The authors develop a simple agent-based framework that models innovation adoption and abandonment on both the supply and demand sides. In opposition to conceptions of consultants as rhetorical wizards who engineer waves of management fashion, firms and consultants are treated as boundedly rational actors who chase the secrets of success by mimicking their highest-performing peers. Computational experiments demonstrate that consultant-driven versions of this dynamic in which the outcomes of firms are strongly conditioned by their choice of consultant are robustly faddish. The invasion of boom markets by low-quality consultants undercuts popular innovations while simultaneously restarting the fashion cycle by prompting the flight of high-quality consultants into less densely occupied niches. Computational experiments also indicate conditions involving consultant mobility, aspiration levels, mimic probabilities, and client-provider matching that attenuate faddishness.
A cognitive computational model inspired by the immune system response.
Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim
2014-01-01
The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in the temporary fuzzy state that is oscillating between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have been always standing out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases: setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output), and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of main theoretical immunological perspectives (classic, cognitive, and danger theory), as related to computer science terminologies. The paper presents a descriptive model of immune system, to figure out the nature of response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by the natural biology. To that end, this paper highlights the ISR phases as applied to a case study on hepatitis C virus, meanwhile illustrating our proposed architecture perspective.
A Cognitive Computational Model Inspired by the Immune System Response
Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim
2014-01-01
The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in the temporary fuzzy state that is oscillating between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have been always standing out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases: setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output), and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of main theoretical immunological perspectives (classic, cognitive, and danger theory), as related to computer science terminologies. The paper presents a descriptive model of immune system, to figure out the nature of response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by the natural biology. To that end, this paper highlights the ISR phases as applied to a case study on hepatitis C virus, meanwhile illustrating our proposed architecture perspective. PMID:25003131
NASA Astrophysics Data System (ADS)
Chen, Biao; Jing, Zhenxue; Smith, Andrew
2005-04-01
Contrast enhanced digital mammography (CEDM), which is based upon the analysis of a series of x-ray projection images acquired before/after the administration of contrast agents, may provide physicians critical physiologic and morphologic information of breast lesions to determine the malignancy of lesions. This paper proposes to combine the kinetic analysis (KA) of contrast agent uptake/washout process and the dual-energy (DE) contrast enhancement together to formulate a hybrid contrast enhanced breast-imaging framework. The quantitative characteristics of materials and imaging components in the x-ray imaging chain, including x-ray tube (tungsten) spectrum, filter, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. The contrast-noise-ration (CNR) of iodinated lesions and mean absorbed glandular dose were estimated mathematically. The x-ray techniques optimization was conducted through a series of computer simulations to find the optimal tube voltage, filter thickness, and exposure levels for various breast thicknesses, breast density, and detectable contrast agent concentration levels in terms of detection efficiency (CNR2/dose). A phantom study was performed on a modified Selenia full field digital mammography system to verify the simulated results. The dose level was comparable to the dose in diagnostic mode (less than 4 mGy for an average 4.2 cm compressed breast). The results from the computer simulations and phantom study are being used to optimize an ongoing clinical study.
Multi-Agent Information Classification Using Dynamic Acquaintance Lists.
ERIC Educational Resources Information Center
Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed
2003-01-01
Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…
Mehta, Pakhuri; Srivastava, Shubham; Choudhary, Bhanwar Singh; Sharma, Manish; Malik, Ruchi
2017-12-01
Multidrug resistance along with side-effects of available anti-epileptic drugs and unavailability of potent and effective agents in submicromolar quantities presents the biggest therapeutic challenges in anti-epileptic drug discovery. The molecular modeling techniques allow us to identify agents with novel structures to match the continuous urge for its discovery. KCNQ2 channel represents one of the validated targets for its therapy. The present study involves identification of newer anti-epileptic agents by means of a computer-aided drug design adaptive protocol involving both structure-based virtual screening of Asinex library using homology model of KCNQ2 and 3D-QSAR based virtual screening with docking analysis, followed by dG bind and ligand efficiency calculations with ADMET studies, of which 20 hits qualified all the criterions. The best ligands of both screenings with least potential for toxicity predicted computationally were then taken for molecular dynamic simulations. All the crucial amino acid interactions were observed in hits of both screenings such as Glu130, Arg207, Arg210 and Phe137. Robustness of docking protocol was analyzed through Receiver operating characteristic (ROC) curve values 0.88 (Area under curve AUC = 0.87) in Standard Precision and 0.84 (AUC = 0.82) in Extra Precision modes. Novelty analysis indicates that these compounds have not been reported previously as anti-epileptic agents.
Bures, Vladimír; Otcenásková, Tereza; Cech, Pavel; Antos, Karel
2012-11-01
Biological incidents jeopardising public health require decision-making that consists of one dominant feature: complexity. Therefore, public health decision-makers necessitate appropriate support. Based on the analogy with business intelligence (BI) principles, the contextual analysis of the environment and available data resources, and conceptual modelling within systems and knowledge engineering, this paper proposes a general framework for computer-based decision support in the case of a biological incident. At the outset, the analysis of potential inputs to the framework is conducted and several resources such as demographic information, strategic documents, environmental characteristics, agent descriptors and surveillance systems are considered. Consequently, three prototypes were developed, tested and evaluated by a group of experts. Their selection was based on the overall framework scheme. Subsequently, an ontology prototype linked with an inference engine, multi-agent-based model focusing on the simulation of an environment, and expert-system prototypes were created. All prototypes proved to be utilisable support tools for decision-making in the field of public health. Nevertheless, the research revealed further issues and challenges that might be investigated by both public health focused researchers and practitioners.
Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M
2017-10-01
Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
Nagoski, Emily; Janssen, Erick; Lohrmann, David; Nichols, Eric
2012-08-01
Risky sexual behaviors, including the decision to have unprotected sex, result from interactions between individuals and their environment. The current study explored the use of Agent-Based Modeling (ABM)-a methodological approach in which computer-generated artificial societies simulate human sexual networks-to assess the influence of heterogeneity of sexual motivation on the risk of contracting HIV. The models successfully simulated some characteristics of human sexual systems, such as the relationship between individual differences in sexual motivation (sexual excitation and inhibition) and sexual risk, but failed to reproduce the scale-free distribution of number of partners observed in the real world. ABM has the potential to inform intervention strategies that target the interaction between an individual and his or her social environment.
Studies of Opinion Stability for Small Dynamic Networks with Opportunistic Agents
NASA Astrophysics Data System (ADS)
Sobkowicz, Pawel
There are numerous examples of societies with extremely stable mix of contrasting opinions. We argue that this stability is a result of an interplay between society network topology adjustment and opinion changing processes. To support this position we present a computer model of opinion formation based on some novel assumptions, designed to bring the model closer to social reality. In our model, the agents, in addition to changing their opinions due to influence of the rest of society and external propaganda, have the ability to modify their social network, forming links with agents sharing the same opinions and cutting the links with those they disagree with. To improve the model further we divide the agents into "fanatics" and "opportunists," depending on how easy it is to change their opinions. The simulations show significant differences compared to traditional models, where network links are static. In particular, for the dynamical model where inter-agent links are adjustable, the final network structure and opinion distribution is shown to resemble real world observations, such as social structures and persistence of minority groups even when most of the society is against them and the propaganda is strong.
Properties of interaction networks underlying the minority game.
Caridi, Inés
2014-11-01
The minority game is a well-known agent-based model with no explicit interaction among its agents. However, it is known that they interact through the global magnitudes of the model and through their strategies. In this work we have attempted to formalize the implicit interactions among minority game agents as if they were links on a complex network. We have defined the link between two agents by quantifying the similarity between them. This link definition is based on the information of the instance of the game (the set of strategies assigned to each agent at the beginning) without any dynamic information on the game and brings about a static, unweighed and undirected network. We have analyzed the structure of the resulting network for different parameters, such as the number of agents (N) and the agent's capacity to process information (m), always taking into account games with two strategies per agent. In the region of crowd effects of the model, the resulting networks structure is a small-world network, whereas in the region where the behavior of the minority game is the same as in a game of random decisions, networks become a random network of Erdos-Renyi. The transition between these two types of networks is slow, without any peculiar feature of the network in the region of the coordination among agents. Finally, we have studied the resulting static networks for the full strategy minority game model, a maximal instance of the minority game in which all possible agents take part in the game. We have explicitly calculated the degree distribution of the full strategy minority game network and, on the basis of this analytical result, we have estimated the degree distribution of the minority game network, which is in accordance with computational results.
Anklam, Charles; Kirby, Adam; Sharevski, Filipo; Dietz, J Eric
2015-01-01
Active shooting violence at confined settings, such as educational institutions, poses serious security concerns to public safety. In studying the effects of active shooter scenarios, the common denominator associated with all events, regardless of reason/intent for shooter motives, or type of weapons used, was the location chosen and time expended between the beginning of the event and its culmination. This in turn directly correlates to number of casualties incurred in any given event. The longer the event protracts, the more casualties are incurred until law enforcement or another barrier can react and culminate the situation. Using AnyLogic technology, devise modeling scenarios to test multiple hypotheses against free-agent modeling simulation to determine the best method to reduce casualties associated with active shooter scenarios. Test four possible scenarios of responding to active shooter in a public school setting using agent-based computer modeling techniques-scenario 1: basic scenario where no access control or any type of security is used within the school; scenario 2, scenario assumes that concealed carry individual(s) (5-10 percent of the work force) are present in the school; scenario 3, scenario assumes that the school has assigned resource officer; scenario 4, scenario assumes that the school has assigned resource officer and concealed carry individual(s) (5-10 percent) present in the school. Statistical data from modeling scenarios indicating which tested hypothesis resulted in fewer casualties and quicker culmination of event. The use of AnyLogic proved the initial hypothesis that a decrease on response time to an active shooter scenario directly reduced victim casualties. Modeling tests show statistically significant fewer casualties in scenarios where on scene armed responders such as resource officers and concealed carry personnel were present.
Fire Suppression in Low Gravity Using a Cup Burner
NASA Technical Reports Server (NTRS)
Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.
2004-01-01
Longer duration missions to the moon, to Mars, and on the International Space Station increase the likelihood of accidental fires. The goal of the present investigation is to: (1) understand the physical and chemical processes of fire suppression in various gravity and O2 levels simulating spacecraft, Mars, and moon missions; (2) provide rigorous testing of numerical models, which include detailed combustion suppression chemistry and radiation sub-models; and (3) provide basic research results useful for advances in space fire safety technology, including new fire-extinguishing agents and approaches. The structure and extinguishment of enclosed, laminar, methane-air co-flow diffusion flames formed on a cup burner have been studied experimentally and numerically using various fire-extinguishing agents (CO2, N2, He, Ar, CF3H, and Fe(CO)5). The experiments involve both 1g laboratory testing and low-g testing (in drop towers and the KC-135 aircraft). The computation uses a direct numerical simulation with detailed chemistry and radiative heat-loss models. An agent was introduced into a low-speed coflowing oxidizing stream until extinguishment occurred under a fixed minimal fuel velocity, and thus, the extinguishing agent concentrations were determined. The extinguishment of cup-burner flames, which resemble real fires, occurred via a blowoff process (in which the flame base drifted downstream) rather than the global extinction phenomenon typical of counterflow diffusion flames. The computation revealed that the peak reactivity spot (the reaction kernel) formed in the flame base was responsible for attachment and blowoff of the trailing diffusion flame. Furthermore, the buoyancy-induced flame flickering in 1g and thermal and transport properties of the agents affected the flame extinguishment limits.
Fire Suppression in Low Gravity Using a Cup Burner
NASA Technical Reports Server (NTRS)
Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.
2004-01-01
Longer duration missions to the moon, to Mars, and on the International Space Station increase the likelihood of accidental fires. The goal of the present investigation is to: (1) understand the physical and chemical processes of fire suppression in various gravity and O2 levels simulating spacecraft, Mars, and moon missions; (2) provide rigorous testing of numerical models, which include detailed combustion-suppression chemistry and radiation sub-models; and (3) provide basic research results useful for advances in space fire safety technology, including new fire-extinguishing agents and approaches.The structure and extinguishment of enclosed, laminar, methane-air co-flow diffusion flames formed on a cup burner have been studied experimentally and numerically using various fire-extinguishing agents (CO2, N2, He, Ar, CF3H, and Fe(CO)5). The experiments involve both 1g laboratory testing and low-g testing (in drop towers and the KC-135 aircraft). The computation uses a direct numerical simulation with detailed chemistry and radiative heat-loss models. An agent was introduced into a low-speed coflowing oxidizing stream until extinguishment occurred under a fixed minimal fuel velocity, and thus, the extinguishing agent concentrations were determined. The extinguishment of cup-burner flames, which resemble real fires, occurred via a blowoff process (in which the flame base drifted downstream) rather than the global extinction phenomenon typical of counterflow diffusion flames. The computation revealed that the peak reactivity spot (the reaction kernel) formed in the flame base was responsible for attachment and blowoff of the trailing diffusion flame. Furthermore, the buoyancy-induced flame flickering in 1g and thermal and transport properties of the agents affected the flame extinguishment limits.
A reinforcement learning model of joy, distress, hope and fear
NASA Astrophysics Data System (ADS)
Broekens, Joost; Jacobs, Elmer; Jonker, Catholijn M.
2015-07-01
In this paper we computationally study the relation between adaptive behaviour and emotion. Using the reinforcement learning framework, we propose that learned state utility, ?, models fear (negative) and hope (positive) based on the fact that both signals are about anticipation of loss or gain. Further, we propose that joy/distress is a signal similar to the error signal. We present agent-based simulation experiments that show that this model replicates psychological and behavioural dynamics of emotion. This work distinguishes itself by assessing the dynamics of emotion in an adaptive agent framework - coupling it to the literature on habituation, development, extinction and hope theory. Our results support the idea that the function of emotion is to provide a complex feedback signal for an organism to adapt its behaviour. Our work is relevant for understanding the relation between emotion and adaptation in animals, as well as for human-robot interaction, in particular how emotional signals can be used to communicate between adaptive agents and humans.
Motion Planning in a Society of Intelligent Mobile Agents
NASA Technical Reports Server (NTRS)
Esterline, Albert C.; Shafto, Michael (Technical Monitor)
2002-01-01
The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) in uniformly modeling agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of, process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and process-algebraic patterns of communication actions intended to establish the common knowledge. We investigated the conditions for agents endowed with perception to gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge Finally. in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.
Multi-Agent Patrolling under Uncertainty and Threats.
Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D
2015-01-01
We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents will obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm up to 44% for 10 agents and by 21% for 15 agents in large domains.
Research on application of intelligent computation based LUCC model in urbanization process
NASA Astrophysics Data System (ADS)
Chen, Zemin
2007-06-01
Global change study is an interdisciplinary and comprehensive research activity with international cooperation, arising in 1980s, with the largest scopes. The interaction between land use and cover change, as a research field with the crossing of natural science and social science, has become one of core subjects of global change study as well as the front edge and hot point of it. It is necessary to develop research on land use and cover change in urbanization process and build an analog model of urbanization to carry out description, simulation and analysis on dynamic behaviors in urban development change as well as to understand basic characteristics and rules of urbanization process. This has positive practical and theoretical significance for formulating urban and regional sustainable development strategy. The effect of urbanization on land use and cover change is mainly embodied in the change of quantity structure and space structure of urban space, and LUCC model in urbanization process has been an important research subject of urban geography and urban planning. In this paper, based upon previous research achievements, the writer systematically analyzes the research on land use/cover change in urbanization process with the theories of complexity science research and intelligent computation; builds a model for simulating and forecasting dynamic evolution of urban land use and cover change, on the basis of cellular automation model of complexity science research method and multi-agent theory; expands Markov model, traditional CA model and Agent model, introduces complexity science research theory and intelligent computation theory into LUCC research model to build intelligent computation-based LUCC model for analog research on land use and cover change in urbanization research, and performs case research. The concrete contents are as follows: 1. Complexity of LUCC research in urbanization process. Analyze urbanization process in combination with the contents of complexity science research and the conception of complexity feature to reveal the complexity features of LUCC research in urbanization process. Urban space system is a complex economic and cultural phenomenon as well as a social process, is the comprehensive characterization of urban society, economy and culture, and is a complex space system formed by society, economy and nature. It has dissipative structure characteristics, such as opening, dynamics, self-organization, non-balance etc. Traditional model cannot simulate these social, economic and natural driving forces of LUCC including main feedback relation from LUCC to driving force. 2. Establishment of Markov extended model of LUCC analog research in urbanization process. Firstly, use traditional LUCC research model to compute change speed of regional land use through calculating dynamic degree, exploitation degree and consumption degree of land use; use the theory of fuzzy set to rewrite the traditional Markov model, establish structure transfer matrix of land use, forecast and analyze dynamic change and development trend of land use, and present noticeable problems and corresponding measures in urbanization process according to research results. 3. Application of intelligent computation research and complexity science research method in LUCC analog model in urbanization process. On the basis of detailed elaboration of the theory and the model of LUCC research in urbanization process, analyze the problems of existing model used in LUCC research (namely, difficult to resolve many complexity phenomena in complex urban space system), discuss possible structure realization forms of LUCC analog research in combination with the theories of intelligent computation and complexity science research. Perform application analysis on BP artificial neural network and genetic algorithms of intelligent computation and CA model and MAS technology of complexity science research, discuss their theoretical origins and their own characteristics in detail, elaborate the feasibility of them in LUCC analog research, and bring forward improvement methods and measures on existing problems of this kind of model. 4. Establishment of LUCC analog model in urbanization process based on theories of intelligent computation and complexity science. Based on the research on abovementioned BP artificial neural network, genetic algorithms, CA model and multi-agent technology, put forward improvement methods and application assumption towards their expansion on geography, build LUCC analog model in urbanization process based on CA model and Agent model, realize the combination of learning mechanism of BP artificial neural network and fuzzy logic reasoning, express the regulation with explicit formula, and amend the initial regulation through self study; optimize network structure of LUCC analog model and methods and procedures of model parameters with genetic algorithms. In this paper, I introduce research theory and methods of complexity science into LUCC analog research and presents LUCC analog model based upon CA model and MAS theory. Meanwhile, I carry out corresponding expansion on traditional Markov model and introduce the theory of fuzzy set into data screening and parameter amendment of improved model to improve the accuracy and feasibility of Markov model in the research on land use/cover change.
Zhu, Lusha; Mathewson, Kyle E; Hsu, Ming
2012-01-31
Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents' beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs.
ICCE/ICCAI 2000 Full & Short Papers (Educational Agent).
ERIC Educational Resources Information Center
2000
This document contains the full text of the following papers on educational agent from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "An Agent-Based Intelligent Tutoring System" (C.M. Bruff and M.A. Williams); (2) "Design of Systematic Concept…
Durham, David P; Casman, Elizabeth A
2012-03-07
It is anticipated that the next generation of computational epidemic models will simulate both infectious disease transmission and dynamic human behaviour change. Individual agents within a simulation will not only infect one another, but will also have situational awareness and a decision algorithm that enables them to modify their behaviour. This paper develops such a model of behavioural response, presenting a mathematical interpretation of a well-known psychological model of individual decision making, the health belief model, suitable for incorporation within an agent-based disease-transmission model. We formalize the health belief model and demonstrate its application in modelling the prevalence of facemask use observed over the course of the 2003 Hong Kong SARS epidemic, a well-documented example of behaviour change in response to a disease outbreak.
Durham, David P.; Casman, Elizabeth A.
2012-01-01
It is anticipated that the next generation of computational epidemic models will simulate both infectious disease transmission and dynamic human behaviour change. Individual agents within a simulation will not only infect one another, but will also have situational awareness and a decision algorithm that enables them to modify their behaviour. This paper develops such a model of behavioural response, presenting a mathematical interpretation of a well-known psychological model of individual decision making, the health belief model, suitable for incorporation within an agent-based disease-transmission model. We formalize the health belief model and demonstrate its application in modelling the prevalence of facemask use observed over the course of the 2003 Hong Kong SARS epidemic, a well-documented example of behaviour change in response to a disease outbreak. PMID:21775324
A Contrast-Based Computational Model of Surprise and Its Applications.
Macedo, Luis; Cardoso, Amílcar
2017-11-19
We review our work on a contrast-based computational model of surprise and its applications. The review is contextualized within related research from psychology, philosophy, and particularly artificial intelligence. Influenced by psychological theories of surprise, the model assumes that surprise-eliciting events initiate a series of cognitive processes that begin with the appraisal of the event as unexpected, continue with the interruption of ongoing activity and the focusing of attention on the unexpected event, and culminate in the analysis and evaluation of the event and the revision of beliefs. It is assumed that the intensity of surprise elicited by an event is a nonlinear function of the difference or contrast between the subjective probability of the event and that of the most probable alternative event (which is usually the expected event); and that the agent's behavior is partly controlled by actual and anticipated surprise. We describe applications of artificial agents that incorporate the proposed surprise model in three domains: the exploration of unknown environments, creativity, and intelligent transportation systems. These applications demonstrate the importance of surprise for decision making, active learning, creative reasoning, and selective attention. Copyright © 2017 Cognitive Science Society, Inc.
Action understanding as inverse planning.
Baker, Chris L; Saxe, Rebecca; Tenenbaum, Joshua B
2009-12-01
Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an "intentional stance" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a "teleological stance" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.
Metareasoning and Social Evaluations in Cognitive Agents
NASA Astrophysics Data System (ADS)
Pinyol, Isaac; Sabater-Mir, Jordi
Reputation mechanisms have been recognized one of the key technologies when designing multi-agent systems. They are specially relevant in complex open environments, becoming a non-centralized mechanism to control interactions among agents. Cognitive agents tackling such complex societies must use reputation information not only for selecting partners to interact with, but also in metareasoning processes to change reasoning rules. This is the focus of this paper. We argue about the necessity to allow, as a cognitive systems designers, certain degree of freedom in the reasoning rules of the agents. We also describes cognitive approaches of agency that support this idea. Furthermore, taking as a base the computational reputation model Repage, and its integration in a BDI architecture, we use the previous ideas to specify metarules and processes to modify at run-time the reasoning paths of the agent. In concrete we propose a metarule to update the link between Repage and the belief base, and a metarule and a process to update an axiom incorporated in the belief logic of the agent. Regarding this last issue we also provide empirical results that show the evolution of agents that use it.
Computational Systems Biology in Cancer: Modeling Methods and Applications
Materi, Wayne; Wishart, David S.
2007-01-01
In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer, therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly-used modeling approaches, including ordinary and partial differential equations, petri nets, cellular automata, agent based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy. PMID:19936081
Chavali, Arvind K; Gianchandani, Erwin P; Tung, Kenneth S; Lawrence, Michael B; Peirce, Shayn M; Papin, Jason A
2008-12-01
The immune system is comprised of numerous components that interact with one another to give rise to phenotypic behaviors that are sometimes unexpected. Agent-based modeling (ABM) and cellular automata (CA) belong to a class of discrete mathematical approaches in which autonomous entities detect local information and act over time according to logical rules. The power of this approach lies in the emergence of behavior that arises from interactions between agents, which would otherwise be impossible to know a priori. Recent work exploring the immune system with ABM and CA has revealed novel insights into immunological processes. Here, we summarize these applications to immunology and, particularly, how ABM can help formulate hypotheses that might drive further experimental investigations of disease mechanisms.
Carney, Timothy Jay; Morgan, Geoffrey P; Jones, Josette; McDaniel, Anna M; Weaver, Michael T; Weiner, Bryan; Haggstrom, David A
2015-10-01
Nationally sponsored cancer-care quality-improvement efforts have been deployed in community health centers to increase breast, cervical, and colorectal cancer-screening rates among vulnerable populations. Despite several immediate and short-term gains, screening rates remain below national benchmark objectives. Overall improvement has been both difficult to sustain over time in some organizational settings and/or challenging to diffuse to other settings as repeatable best practices. Reasons for this include facility-level changes, which typically occur in dynamic organizational environments that are complex, adaptive, and unpredictable. This study seeks to understand the factors that shape community health center facility-level cancer-screening performance over time. This study applies a computational-modeling approach, combining principles of health-services research, health informatics, network theory, and systems science. To investigate the roles of knowledge acquisition, retention, and sharing within the setting of the community health center and to examine their effects on the relationship between clinical decision support capabilities and improvement in cancer-screening rate improvement, we employed Construct-TM to create simulated community health centers using previously collected point-in-time survey data. Construct-TM is a multi-agent model of network evolution. Because social, knowledge, and belief networks co-evolve, groups and organizations are treated as complex systems to capture the variability of human and organizational factors. In Construct-TM, individuals and groups interact by communicating, learning, and making decisions in a continuous cycle. Data from the survey was used to differentiate high-performing simulated community health centers from low-performing ones based on computer-based decision support usage and self-reported cancer-screening improvement. This virtual experiment revealed that patterns of overall network symmetry, agent cohesion, and connectedness varied by community health center performance level. Visual assessment of both the agent-to-agent knowledge sharing network and agent-to-resource knowledge use network diagrams demonstrated that community health centers labeled as high performers typically showed higher levels of collaboration and cohesiveness among agent classes, faster knowledge-absorption rates, and fewer agents that were unconnected to key knowledge resources. Conclusions and research implications: Using the point-in-time survey data outlining community health center cancer-screening practices, our computational model successfully distinguished between high and low performers. Results indicated that high-performance environments displayed distinctive network characteristics in patterns of interaction among agents, as well as in the access and utilization of key knowledge resources. Our study demonstrated how non-network-specific data obtained from a point-in-time survey can be employed to forecast community health center performance over time, thereby enhancing the sustainability of long-term strategic-improvement efforts. Our results revealed a strategic profile for community health center cancer-screening improvement via simulation over a projected 10-year period. The use of computational modeling allows additional inferential knowledge to be drawn from existing data when examining organizational performance in increasingly complex environments. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Pham, Vinh Huy
2009-01-01
Stakeholders of the educational system assume that standardized tests are transparently about the subject content being tested and therefore can be used as a metric to measure achievement in outcome-based educational reform. Both analysis of longitudinal data for the Texas Assessment of Knowledge and Skills (TAKS) exam and agent based computer…
Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K
2016-05-01
We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with In Situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedup in execution time over single-core and multi-core CPU respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real-time and modify its course as needed.
Cockrell, Chase; An, Gary
2017-10-07
Sepsis affects nearly 1 million people in the United States per year, has a mortality rate of 28-50% and requires more than $20 billion a year in hospital costs. Over a quarter century of research has not yielded a single reliable diagnostic test or a directed therapeutic agent for sepsis. Central to this insufficiency is the fact that sepsis remains a clinical/physiological diagnosis representing a multitude of molecularly heterogeneous pathological trajectories. Advances in computational capabilities offered by High Performance Computing (HPC) platforms call for an evolution in the investigation of sepsis to attempt to define the boundaries of traditional research (bench, clinical and computational) through the use of computational proxy models. We present a novel investigatory and analytical approach, derived from how HPC resources and simulation are used in the physical sciences, to identify the epistemic boundary conditions of the study of clinical sepsis via the use of a proxy agent-based model of systemic inflammation. Current predictive models for sepsis use correlative methods that are limited by patient heterogeneity and data sparseness. We address this issue by using an HPC version of a system-level validated agent-based model of sepsis, the Innate Immune Response ABM (IIRBM), as a proxy system in order to identify boundary conditions for the possible behavioral space for sepsis. We then apply advanced analysis derived from the study of Random Dynamical Systems (RDS) to identify novel means for characterizing system behavior and providing insight into the tractability of traditional investigatory methods. The behavior space of the IIRABM was examined by simulating over 70 million sepsis patients for up to 90 days in a sweep across the following parameters: cardio-respiratory-metabolic resilience; microbial invasiveness; microbial toxigenesis; and degree of nosocomial exposure. In addition to using established methods for describing parameter space, we developed two novel methods for characterizing the behavior of a RDS: Probabilistic Basins of Attraction (PBoA) and Stochastic Trajectory Analysis (STA). Computationally generated behavioral landscapes demonstrated attractor structures around stochastic regions of behavior that could be described in a complementary fashion through use of PBoA and STA. The stochasticity of the boundaries of the attractors highlights the challenge for correlative attempts to characterize and classify clinical sepsis. HPC simulations of models like the IIRABM can be used to generate approximations of the behavior space of sepsis to both establish "boundaries of futility" with respect to existing investigatory approaches and apply system engineering principles to investigate the general dynamic properties of sepsis to provide a pathway for developing control strategies. The issues that bedevil the study and treatment of sepsis, namely clinical data sparseness and inadequate experimental sampling of system behavior space, are fundamental to nearly all biomedical research, manifesting in the "Crisis of Reproducibility" at all levels. HPC-augmented simulation-based research offers an investigatory strategy more consistent with that seen in the physical sciences (which combine experiment, theory and simulation), and an opportunity to utilize the leading advances in HPC, namely deep machine learning and evolutionary computing, to form the basis of an iterative scientific process to meet the full promise of Precision Medicine (right drug, right patient, right time). Copyright © 2017. Published by Elsevier Ltd.
[Problem of bioterrorism under modern conditions].
Vorob'ev, A A; Boev, B V; Bondarenko, V M; Gintsburg, A L
2002-01-01
It is practically impossible to discuss the problem of bioterrorism (BT) and to develop effective programs of decreasing the losses and expenses suffered by the society from the BT acts without evaluation of the threat and prognosis of consequences based on research and empiric data. Stained international situation following the act of terrorism (attack on the USA) on September 11, 2001, makes the scenarios of the bacterial weapon use (the causative agents of plague, smallpox, anthrax, etc.) by international terrorists most probable. In this connection studies on the analysis and prognostication of the consequences of BT, including mathematical and computer modelling, are necessary. The authors present the results of initiative studies on the analysis and prognostication of the consequences of the hypothetical act of BT with the use of the smallpox causative agent in a city with the population of about 1,000,000 inhabitants. The analytical prognostic studies on the operative analysis and prognostication of the consequences of the BT act with the use of the smallpox causative agent has demonstrated that the mathematical (computer) model of the epidemic outbreak of smallpox is an effective instrument of calculation studies. Prognostic evaluations of the consequences of the act of BT under the conditions of different reaction of public health services (time of detection, interventions) have been obtained with the use of modelling. In addition, the computer model is necessary for training health specialists to react adequately to the acts of BT with the use of different kinds of bacteriological weapons.
NASA Technical Reports Server (NTRS)
Lee, S. Daniel
1990-01-01
We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory that can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration while Lisp-based technologies make it difficult if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.
Can Computational Models Be Used to Assess the Developmental Toxicity of Environmental Exposures?
Environmental causes of birth defects include maternal exposure to drugs, chemicals, or physical agents. Environmental factors account for an estimated 3–7% of birth defects although a broader contribution is likely based on the mother’s general health status and genetic blueprin...
Real-time path planning in dynamic virtual environments using multiagent navigation graphs.
Sud, Avneesh; Andersen, Erik; Curtis, Sean; Lin, Ming C; Manocha, Dinesh
2008-01-01
We present a novel approach for efficient path planning and navigation of multiple virtual agents in complex dynamic scenes. We introduce a new data structure, Multi-agent Navigation Graph (MaNG), which is constructed using first- and second-order Voronoi diagrams. The MaNG is used to perform route planning and proximity computations for each agent in real time. Moreover, we use the path information and proximity relationships for local dynamics computation of each agent by extending a social force model [Helbing05]. We compute the MaNG using graphics hardware and present culling techniques to accelerate the computation. We also address undersampling issues and present techniques to improve the accuracy of our algorithm. Our algorithm is used for real-time multi-agent planning in pursuit-evasion, terrain exploration and crowd simulation scenarios consisting of hundreds of moving agents, each with a distinct goal.
An, Gary
2009-01-01
The sheer volume of biomedical research threatens to overwhelm the capacity of individuals to effectively process this information. Adding to this challenge is the multiscale nature of both biological systems and the research community as a whole. Given this volume and rate of generation of biomedical information, the research community must develop methods for robust representation of knowledge in order for individuals, and the community as a whole, to "know what they know." Despite increasing emphasis on "data-driven" research, the fact remains that researchers guide their research using intuitively constructed conceptual models derived from knowledge extracted from publications, knowledge that is generally qualitatively expressed using natural language. Agent-based modeling (ABM) is a computational modeling method that is suited to translating the knowledge expressed in biomedical texts into dynamic representations of the conceptual models generated by researchers. The hierarchical object-class orientation of ABM maps well to biomedical ontological structures, facilitating the translation of ontologies into instantiated models. Furthermore, ABM is suited to producing the nonintuitive behaviors that often "break" conceptual models. Verification in this context is focused at determining the plausibility of a particular conceptual model, and qualitative knowledge representation is often sufficient for this goal. Thus, utilized in this fashion, ABM can provide a powerful adjunct to other computational methods within the research process, as well as providing a metamodeling framework to enhance the evolution of biomedical ontologies.
On the need and use of models to explore the role of economic confidence:a survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprigg, James A.; Paez, Paul J.; Hand, Michael S.
2005-04-01
Empirical studies suggest that consumption is more sensitive to current income than suggested under the permanent income hypothesis, which raises questions regarding expectations for future income, risk aversion, and the role of economic confidence measures. This report surveys a body of fundamental economic literature as well as burgeoning computational modeling methods to support efforts to better anticipate cascading economic responses to terrorist threats and attacks. This is a three part survey to support the incorporation of models of economic confidence into agent-based microeconomic simulations. We first review broad underlying economic principles related to this topic. We then review the economicmore » principle of confidence and related empirical studies. Finally, we provide a brief survey of efforts and publications related to agent-based economic simulation.« less
DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.
2011-02-01
For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important for the acceptance of the mobile agent sensors and to protect the system from malicious behavior by insiders and entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing themore » autonomic manager’s bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.« less
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
Agent Based Fault Tolerance for the Mobile Environment
NASA Astrophysics Data System (ADS)
Park, Taesoon
This paper presents a fault-tolerance scheme based on mobile agents for the reliable mobile computing systems. Mobility of the agent is suitable to trace the mobile hosts and the intelligence of the agent makes it efficient to support the fault tolerance services. This paper presents two approaches to implement the mobile agent based fault tolerant service and their performances are evaluated and compared with other fault-tolerant schemes.
Chiêm, Jean-Christophe; Van Durme, Thérèse; Vandendorpe, Florence; Schmitz, Olivier; Speybroeck, Niko; Cès, Sophie; Macq, Jean
2014-08-01
Various elderly case management projects have been implemented in Belgium. This type of long-term health care intervention involves contextual factors and human interactions. These underlying complex mechanisms can be usefully informed with field experts' knowledge, which are hard to make explicit. However, computer simulation has been suggested as one possible method of overcoming the difficulty of articulating such elicited qualitative views. A simulation model of case management was designed using an agent-based methodology, based on the initial qualitative research material. Variables and rules of interaction were formulated into a simple conceptual framework. This model has been implemented and was used as a support for a structured discussion with experts in case management. The rigorous formulation provided by the agent-based methodology clarified the descriptions of the interventions and the problems encountered regarding: the diverse network topologies of health care actors in the project; the adaptation time required by the intervention; the communication between the health care actors; the institutional context; the organization of the care; and the role of the case manager and his or hers personal ability to interpret the informal demands of the frail older person. The simulation model should be seen primarily as a tool for thinking and learning. A number of insights were gained as part of a valuable cognitive process. Computer simulation supporting field experts' elicitation can lead to better-informed decisions in the organization of complex health care interventions. © 2013 John Wiley & Sons, Ltd.
Smart Swarms of Bacteria-Inspired Agents with Performance Adaptable Interactions
Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel
2011-01-01
Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment – by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots. PMID:21980274
Smart swarms of bacteria-inspired agents with performance adaptable interactions.
Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel
2011-09-01
Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment--by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots.
Module-based multiscale simulation of angiogenesis in skeletal muscle
2011-01-01
Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529
Agent-based computational models to explore diffusion of medical innovations among cardiologists.
Borracci, Raul A; Giorgi, Mariano A
2018-04-01
Diffusion of medical innovations among physicians rests on a set of theoretical assumptions, including learning and decision-making under uncertainty, social-normative pressures, medical expert knowledge, competitive concerns, network performance effects, professional autonomy or individualism and scientific evidence. The aim of this study was to develop and test four real data-based, agent-based computational models (ABM) to qualitatively and quantitatively explore the factors associated with diffusion and application of innovations among cardiologists. Four ABM were developed to study diffusion and application of medical innovations among cardiologists, considering physicians' network connections, leaders' opinions, "adopters' categories", physicians' autonomy, scientific evidence, patients' pressure, affordability for the end-user population, and promotion from companies. Simulations demonstrated that social imitation among local cardiologists was sufficient for innovation diffusion, as long as opinion leaders did not act as detractors of the innovation. Even in the absence of full scientific evidence to support innovation, up to one-fifth of cardiologists could accept it when local leaders acted as promoters. Patients' pressure showed a large effect size (Cohen's d > 1.2) on the proportion of cardiologists applying an innovation. Two qualitative patterns (speckled and granular) appeared associated to traditional Gompertz and sigmoid cumulative distributions. These computational models provided a semiquantitative insight on the emergent collective behavior of a physician population facing the acceptance or refusal of medical innovations. Inclusion in the models of factors related to patients' pressure and accesibility to medical coverage revealed the contrast between accepting and effectively adopting a new product or technology for population health care. Copyright © 2018 Elsevier B.V. All rights reserved.
New leads in speculative behavior
NASA Astrophysics Data System (ADS)
Kindler, A.; Bourgeois-Gironde, S.; Lefebvre, G.; Solomon, S.
2017-02-01
The Kiyotaki and Wright (1989) (henceforth KW) model of money emergence as a medium of exchange has been studied from various perspectives in recent papers. In the present work we propose a minimalistic model for the behavior of agents in the KW framework, which may either reproduce the theoretical predictions of Kiyotaki and Wright (1989) on the emerging Nash equilibria, or (less closely) the empirical results of Brown (1996), Duffy and Ochs (1999) and our own, introduced in a first part of the present paper. The main import is the systematic computer scanning of speculative monetary equilibria under drastic bounded rationality of agents, based on behavior previously observed in the lab.
Endogenous Crisis Waves: Stochastic Model with Synchronized Collective Behavior
NASA Astrophysics Data System (ADS)
Gualdi, Stanislao; Bouchaud, Jean-Philippe; Cencetti, Giulia; Tarzia, Marco; Zamponi, Francesco
2015-02-01
We propose a simple framework to understand commonly observed crisis waves in macroeconomic agent-based models, which is also relevant to a variety of other physical or biological situations where synchronization occurs. We compute exactly the phase diagram of the model and the location of the synchronization transition in parameter space. Many modifications and extensions can be studied, confirming that the synchronization transition is extremely robust against various sources of noise or imperfections.
NASA Astrophysics Data System (ADS)
Ho, Wan Ching; Dautenhahn, Kerstin; Nehaniv, Chrystopher
2008-03-01
In this paper, we discuss the concept of autobiographic agent and how memory may extend an agent's temporal horizon and increase its adaptability. These concepts are applied to an implementation of a scenario where agents are interacting in a complex virtual artificial life environment. We present computational memory architectures for autobiographic virtual agents that enable agents to retrieve meaningful information from their dynamic memories which increases their adaptation and survival in the environment. The design of the memory architectures, the agents, and the virtual environment are described in detail. Next, a series of experimental studies and their results are presented which show the adaptive advantage of autobiographic memory, i.e. from remembering significant experiences. Also, in a multi-agent scenario where agents can communicate via stories based on their autobiographic memory, it is found that new adaptive behaviours can emerge from an individual's reinterpretation of experiences received from other agents whereby higher communication frequency yields better group performance. An interface is described that visualises the memory contents of an agent. From an observer perspective, the agents' behaviours can be understood as individually structured, and temporally grounded, and, with the communication of experience, can be seen to rely on emergent mixed narrative reconstructions combining the experiences of several agents. This research leads to insights into how bottom-up story-telling and autobiographic reconstruction in autonomous, adaptive agents allow temporally grounded behaviour to emerge. The article concludes with a discussion of possible implications of this research direction for future autobiographic, narrative agents.
Lehnert, Teresa; Figge, Marc Thilo
2017-01-01
Mathematical modeling and computer simulations have become an integral part of modern biological research. The strength of theoretical approaches is in the simplification of complex biological systems. We here consider the general problem of receptor-ligand binding in the context of antibody-antigen binding. On the one hand, we establish a quantitative mapping between macroscopic binding rates of a deterministic differential equation model and their microscopic equivalents as obtained from simulating the spatiotemporal binding kinetics by stochastic agent-based models. On the other hand, we investigate the impact of various properties of B cell-derived receptors-such as their dimensionality of motion, morphology, and binding valency-on the receptor-ligand binding kinetics. To this end, we implemented an algorithm that simulates antigen binding by B cell-derived receptors with a Y-shaped morphology that can move in different dimensionalities, i.e., either as membrane-anchored receptors or as soluble receptors. The mapping of the macroscopic and microscopic binding rates allowed us to quantitatively compare different agent-based model variants for the different types of B cell-derived receptors. Our results indicate that the dimensionality of motion governs the binding kinetics and that this predominant impact is quantitatively compensated by the bivalency of these receptors.
Lehnert, Teresa; Figge, Marc Thilo
2017-01-01
Mathematical modeling and computer simulations have become an integral part of modern biological research. The strength of theoretical approaches is in the simplification of complex biological systems. We here consider the general problem of receptor–ligand binding in the context of antibody–antigen binding. On the one hand, we establish a quantitative mapping between macroscopic binding rates of a deterministic differential equation model and their microscopic equivalents as obtained from simulating the spatiotemporal binding kinetics by stochastic agent-based models. On the other hand, we investigate the impact of various properties of B cell-derived receptors—such as their dimensionality of motion, morphology, and binding valency—on the receptor–ligand binding kinetics. To this end, we implemented an algorithm that simulates antigen binding by B cell-derived receptors with a Y-shaped morphology that can move in different dimensionalities, i.e., either as membrane-anchored receptors or as soluble receptors. The mapping of the macroscopic and microscopic binding rates allowed us to quantitatively compare different agent-based model variants for the different types of B cell-derived receptors. Our results indicate that the dimensionality of motion governs the binding kinetics and that this predominant impact is quantitatively compensated by the bivalency of these receptors. PMID:29250071
Numerical Modeling of Mixing and Venting from Explosions in Bunkers
NASA Astrophysics Data System (ADS)
Liu, Benjamin
2005-07-01
2D and 3D numerical simulations were performed to study the dynamic interaction of explosion products in a concrete bunker with ambient air, stored chemical or biological warfare (CBW) agent simulant, and the surrounding walls and structure. The simulations were carried out with GEODYN, a multi-material, Godunov-based Eulerian code, that employs adaptive mesh refinement and runs efficiently on massively parallel computer platforms. Tabular equations of state were used for all materials with the exception of any high explosives employed, which were characterized with conventional JWL models. An appropriate constitutive model was used to describe the concrete. Interfaces between materials were either tracked with a volume-of-fluid method that used high-order reconstruction to specify the interface location and orientation, or a capturing approach was employed with the assumption of local thermal and mechanical equilibrium. A major focus of the study was to estimate the extent of agent heating that could be obtained prior to venting of the bunker and resultant agent dispersal. Parameters investigated included the bunker construction, agent layout, energy density in the bunker and the yield-to-agent mass ratio. Turbulent mixing was found to be the dominant heat transfer mechanism for heating the agent.
Peer, Xavier; An, Gary
2014-10-01
Agent-based modeling is a computational modeling method that represents system-level behavior as arising from multiple interactions between the multiple components that make up a system. Biological systems are thus readily described using agent-based models (ABMs), as multi-cellular organisms can be viewed as populations of interacting cells, and microbial systems manifest as colonies of individual microbes. Intersections between these two domains underlie an increasing number of pathophysiological processes, and the intestinal tract represents one of the most significant locations for these inter-domain interactions, so much so that it can be considered an internal ecology of varying robustness and function. Intestinal infections represent significant disturbances of this internal ecology, and one of the most clinically relevant intestinal infections is Clostridium difficile infection (CDI). CDI is precipitated by the use of broad-spectrum antibiotics, involves the depletion of commensal microbiota, and alterations in bile acid composition in the intestinal lumen. We present an example ABM of CDI (the C. difficile Infection ABM, or CDIABM) to examine fundamental dynamics of the pathogenesis of CDI and its response to treatment with anti-CDI antibiotics and a newer treatment therapy, fecal microbial transplant. The CDIABM focuses on one specific mechanism of potential CDI suppression: commensal modulation of bile acid composition. Even given its abstraction, the CDIABM reproduces essential dynamics of CDI and its response to therapy, and identifies a paradoxical zone of behavior that provides insight into the role of intestinal nutritional status and the efficacy of anti-CDI therapies. It is hoped that this use case example of the CDIABM can demonstrate the usefulness of both agent-based modeling and the application of abstract functional representation as the biomedical community seeks to address the challenges of increasingly complex diseases with the goal of personalized medicine.
Peer, Xavier; An, Gary
2014-01-01
Agent-based modeling is a computational modeling method that represents system-level behavior as arising from multiple interactions between the multiple components that make up a system. Biological systems are thus readily described using agent-based models (ABMs), as multi-cellular organisms can be viewed as populations of interacting cells, and microbial systems manifest as colonies of individual microbes. Intersections between these two domains underlie an increasing number of pathophysiological processes, and the intestinal tract represents one of the most significant locations for these inter-domain interactions, so much so that it can be considered an internal ecology of varying robustness and function. Intestinal infections represent significant disturbances of this internal ecology, and one of the most clinically relevant intestinal infections is Clostridium difficile infection (CDI). CDI is precipitated by the use of broad-spectrum antibiotics, involves the depletion of commensal microbiota, and alterations in bile acid composition in the intestinal lumen. We present an example ABM of CDI (the Clostridium difficile Infection ABM, or CDIABM) to examine fundamental dynamics of the pathogenesis of CDI and its response to treatment with anti-CDI antibiotics and a newer treatment therapy, Fecal Microbial Transplant (FMT). The CDIABM focuses on one specific mechanism of potential CDI suppression: commensal modulation of bile acid composition. Even given its abstraction, the CDIABM reproduces essential dynamics of CDI and its response to therapy, and identifies a paradoxical zone of behavior that provides insight into the role of intestinal nutritional status and the efficacy of anti-CDI therapies. It is hoped that this use case example of the CDIABM can demonstrate the usefulness of both agent-based modeling and the application of abstract functional representation as the biomedical community seeks to address the challenges of increasingly complex diseases with the goal of personalized medicine. PMID:25168489
Modeling the Impact of Motivation, Personality, and Emotion on Social Behavior
NASA Astrophysics Data System (ADS)
Miller, Lynn C.; Read, Stephen J.; Zachary, Wayne; Rosoff, Andrew
Models seeking to predict human social behavior must contend with multiple sources of individual and group variability that underlie social behavior. One set of interrelated factors that strongly contribute to that variability - motivations, personality, and emotions - has been only minimally incorporated in previous computational models of social behavior. The Personality, Affect, Culture (PAC) framework is a theory-based computational model that addresses this gap. PAC is used to simulate social agents whose social behavior varies according to their personalities and emotions, which, in turn, vary according to their motivations and underlying motive control parameters. Examples involving disease spread and counter-insurgency operations show how PAC can be used to study behavioral variability in different social contexts.
Anderson, Christine A; Whall, Ann L
2013-10-01
Opinion leaders are informal leaders who have the ability to influence others' decisions about adopting new products, practices or ideas. In the healthcare setting, the importance of translating new research evidence into practice has led to interest in understanding how opinion leaders could be used to speed this process. Despite continued interest, gaps in understanding opinion leadership remain. Agent-based models are computer models that have proven to be useful for representing dynamic and contextual phenomena such as opinion leadership. The purpose of this paper is to describe the work conducted in preparation for the development of an agent-based model of nursing opinion leadership. The aim of this phase of the model development project was to clarify basic assumptions about opinions, the individual attributes of opinion leaders and characteristics of the context in which they are effective. The process used to clarify these assumptions was the construction of a preliminary nursing opinion leader model, derived from philosophical theories about belief formation. © 2013 John Wiley & Sons Ltd.
Roche, Benjamin; Guégan, Jean-François; Bousquet, François
2008-10-15
Computational biology is often associated with genetic or genomic studies only. However, thanks to the increase of computational resources, computational models are appreciated as useful tools in many other scientific fields. Such modeling systems are particularly relevant for the study of complex systems, like the epidemiology of emerging infectious diseases. So far, mathematical models remain the main tool for the epidemiological and ecological analysis of infectious diseases, with SIR models could be seen as an implicit standard in epidemiology. Unfortunately, these models are based on differential equations and, therefore, can become very rapidly unmanageable due to the too many parameters which need to be taken into consideration. For instance, in the case of zoonotic and vector-borne diseases in wildlife many different potential host species could be involved in the life-cycle of disease transmission, and SIR models might not be the most suitable tool to truly capture the overall disease circulation within that environment. This limitation underlines the necessity to develop a standard spatial model that can cope with the transmission of disease in realistic ecosystems. Computational biology may prove to be flexible enough to take into account the natural complexity observed in both natural and man-made ecosystems. In this paper, we propose a new computational model to study the transmission of infectious diseases in a spatially explicit context. We developed a multi-agent system model for vector-borne disease transmission in a realistic spatial environment. Here we describe in detail the general behavior of this model that we hope will become a standard reference for the study of vector-borne disease transmission in wildlife. To conclude, we show how this simple model could be easily adapted and modified to be used as a common framework for further research developments in this field.
Shi, Zhenzhen; Chapes, Stephen K; Ben-Arieh, David; Wu, Chih-Hang
2016-01-01
We present an agent-based model (ABM) to simulate a hepatic inflammatory response (HIR) in a mouse infected by Salmonella that sometimes progressed to problematic proportions, known as "sepsis". Based on over 200 published studies, this ABM describes interactions among 21 cells or cytokines and incorporates 226 experimental data sets and/or data estimates from those reports to simulate a mouse HIR in silico. Our simulated results reproduced dynamic patterns of HIR reported in the literature. As shown in vivo, our model also demonstrated that sepsis was highly related to the initial Salmonella dose and the presence of components of the adaptive immune system. We determined that high mobility group box-1, C-reactive protein, and the interleukin-10: tumor necrosis factor-α ratio, and CD4+ T cell: CD8+ T cell ratio, all recognized as biomarkers during HIR, significantly correlated with outcomes of HIR. During therapy-directed silico simulations, our results demonstrated that anti-agent intervention impacted the survival rates of septic individuals in a time-dependent manner. By specifying the infected species, source of infection, and site of infection, this ABM enabled us to reproduce the kinetics of several essential indicators during a HIR, observe distinct dynamic patterns that are manifested during HIR, and allowed us to test proposed therapy-directed treatments. Although limitation still exists, this ABM is a step forward because it links underlying biological processes to computational simulation and was validated through a series of comparisons between the simulated results and experimental studies.
Chapes, Stephen K.; Ben-Arieh, David; Wu, Chih-Hang
2016-01-01
We present an agent-based model (ABM) to simulate a hepatic inflammatory response (HIR) in a mouse infected by Salmonella that sometimes progressed to problematic proportions, known as “sepsis”. Based on over 200 published studies, this ABM describes interactions among 21 cells or cytokines and incorporates 226 experimental data sets and/or data estimates from those reports to simulate a mouse HIR in silico. Our simulated results reproduced dynamic patterns of HIR reported in the literature. As shown in vivo, our model also demonstrated that sepsis was highly related to the initial Salmonella dose and the presence of components of the adaptive immune system. We determined that high mobility group box-1, C-reactive protein, and the interleukin-10: tumor necrosis factor-α ratio, and CD4+ T cell: CD8+ T cell ratio, all recognized as biomarkers during HIR, significantly correlated with outcomes of HIR. During therapy-directed silico simulations, our results demonstrated that anti-agent intervention impacted the survival rates of septic individuals in a time-dependent manner. By specifying the infected species, source of infection, and site of infection, this ABM enabled us to reproduce the kinetics of several essential indicators during a HIR, observe distinct dynamic patterns that are manifested during HIR, and allowed us to test proposed therapy-directed treatments. Although limitation still exists, this ABM is a step forward because it links underlying biological processes to computational simulation and was validated through a series of comparisons between the simulated results and experimental studies. PMID:27556404
NASA Astrophysics Data System (ADS)
Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille
This paper introduces the implementation of a computational agent-based financial market model in which the system is described on both microscopic and macroscopic levels. This artificial financial market model is used to study the system response when a shock occurs. Indeed, when a market experiences perturbations, financial systems behavior can exhibit two different properties: resilience and robustness. Through simulations and different scenarios of market shocks, these system properties are studied. The results notably show that the emergence of collective herding behavior when market shock occurs leads to a temporary disruption of the system self-organization. Numerical simulations highlight that the market can absorb strong mono-shocks but can also be led to rupture by low but repeated perturbations.
Nonlinearity in Social Service Evaluation: A Primer on Agent-Based Modeling
ERIC Educational Resources Information Center
Israel, Nathaniel; Wolf-Branigin, Michael
2011-01-01
Measurement of nonlinearity in social service research and evaluation relies primarily on spatial analysis and, to a lesser extent, social network analysis. Recent advances in geographic methods and computing power, however, allow for the greater use of simulation methods. These advances now enable evaluators and researchers to simulate complex…
Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.
Johnson, Shane D; Groff, Elizabeth R
2014-07-01
The Journal of Research in Crime and Delinquency ( JRCD ) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity-agent-based computational modeling-that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs-not without its own issues-may help to overcome. ABMs should become another method in the criminologists toolbox to aid theory testing and falsification.
NASA Astrophysics Data System (ADS)
Johnson, Amy M.; Ozogul, Gamze; DiDonato, Matt D.; Reisslein, Martin
2013-10-01
Computer-based multimedia presentations employing animated agents (avatars) can positively impact perceptions about engineering; the current research advances our understanding of this effect to pre-college populations, the main target for engineering outreach. The study examines the effectiveness of a brief computer-based intervention with animated agents in improving perceptions about engineering. Five hundred sixty-five elementary, middle-, and high-school students in the southwestern USA viewed a short computer-based multimedia overview of four engineering disciplines (electrical, chemical, biomedical, and environmental) with embedded animated agents. Students completed identical surveys measuring five subscales of engineering perceptions immediately before and after the intervention. Analyses of pre- and post-surveys demonstrated that the computer presentation significantly improved perceptions for each student group, and that effects were stronger for elementary school students, compared to middle- and high-school students.
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex duemore » to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.« less
Blood-pool contrast agent for pre-clinical computed tomography
NASA Astrophysics Data System (ADS)
Cruje, Charmainne; Tse, Justin J.; Holdsworth, David W.; Gillies, Elizabeth R.; Drangova, Maria
2017-03-01
Advances in nanotechnology have led to the development of blood-pool contrast agents for micro-computed tomography (micro-CT). Although long-circulating nanoparticle-based agents exist for micro-CT, they are predominantly based on iodine, which has a low atomic number. Micro-CT contrast increases when using elements with higher atomic numbers (i.e. lanthanides), particularly at higher energies. The purpose of our work was to develop and evaluate a lanthanide-based blood-pool contrast agent that is suitable for in vivo micro-CT. We synthesized a contrast agent in the form of polymer-encapsulated Gd nanoparticles and evaluated its stability in vitro. The synthesized nanoparticles were shown to have an average diameter of 127 +/- 6 nm, with good size dispersity. Particle size distribution - evaluated by dynamic light scattering over the period of two days - demonstrated no change in size of the contrast agent in water and saline. Additionally, our contrast agent was stable in a mouse serum mimic for up to 30 minutes. CT images of the synthesized contrast agent (containing 27 mg/mL of Gd) demonstrated an attenuation of over 1000 Hounsfield Units. This approach to synthesizing a Gd-based blood-pool contrast agent promises to enhance the capabilities of micro-CT imaging.
>From naive to sophisticated behavior in multiagents-based financial market models
NASA Astrophysics Data System (ADS)
Mansilla, R.
2000-09-01
The behavior of physical complexity and mutual information function of the outcome of a model of heterogeneous, inductive rational agents inspired by the El Farol Bar problem and the Minority Game is studied. The first magnitude is a measure rooted in the Kolmogorov-Chaitin theory and the second a measure related to Shannon's information entropy. Extensive computer simulations were done, as a result of which, is proposed an ansatz for physical complexity of the type C(l)=lα and the dependence of the exponent α from the parameters of the model is established. The accuracy of our results and the relationship with the behavior of mutual information function as a measure of time correlation of agents choice are discussed.
Understanding Islamist political violence through computational social simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G
Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates themore » computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.« less
Wynn, Michelle L.; Kulesa, Paul M.; Schnell, Santiago
2012-01-01
Follow-the-leader chain migration is a striking cell migratory behaviour observed during vertebrate development, adult neurogenesis and cancer metastasis. Although cell–cell contact and extracellular matrix (ECM) cues have been proposed to promote this phenomenon, mechanisms that underlie chain migration persistence remain unclear. Here, we developed a quantitative agent-based modelling framework to test mechanistic hypotheses of chain migration persistence. We defined chain migration and its persistence based on evidence from the highly migratory neural crest model system, where cells within a chain extend and retract filopodia in short-lived cell contacts and move together as a collective. In our agent-based simulations, we began with a set of agents arranged as a chain and systematically probed the influence of model parameters to identify factors critical to the maintenance of the chain migration pattern. We discovered that chain migration persistence requires a high degree of directional bias in both lead and follower cells towards the target. Chain migration persistence was also promoted when lead cells maintained cell contact with followers, but not vice-versa. Finally, providing a path of least resistance in the ECM was not sufficient alone to drive chain persistence. Our results indicate that chain migration persistence depends on the interplay of directional cell movement and biased cell–cell contact. PMID:22219399
Model-based Executive Control through Reactive Planning for Autonomous Rovers
NASA Technical Reports Server (NTRS)
Finzi, Alberto; Ingrand, Felix; Muscettola, Nicola
2004-01-01
This paper reports on the design and implementation of a real-time executive for a mobile rover that uses a model-based, declarative approach. The control system is based on the Intelligent Distributed Execution Architecture (IDEA), an approach to planning and execution that provides a unified representational and computational framework for an autonomous agent. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting agents, each with the same fundamental structure. We show that planning and real-time response are compatible if the executive minimizes the size of the planning problem. We detail the implementation of this approach on an exploration rover (Gromit an RWI ATRV Junior at NASA Ames) presenting different IDEA controllers of the same domain and comparing them with more classical approaches. We demonstrate that the approach is scalable to complex coordination of functional modules needed for autonomous navigation and exploration.
Agent-based re-engineering of ErbB signaling: a modeling pipeline for integrative systems biology.
Das, Arya A; Ajayakumar Darsana, T; Jacob, Elizabeth
2017-03-01
Experiments in systems biology are generally supported by a computational model which quantitatively estimates the parameters of the system by finding the best fit to the experiment. Mathematical models have proved to be successful in reverse engineering the system. The data generated is interpreted to understand the dynamics of the underlying phenomena. The question we have sought to answer is that - is it possible to use an agent-based approach to re-engineer a biological process, making use of the available knowledge from experimental and modelling efforts? Can the bottom-up approach benefit from the top-down exercise so as to create an integrated modelling formalism for systems biology? We propose a modelling pipeline that learns from the data given by reverse engineering, and uses it for re-engineering the system, to carry out in-silico experiments. A mathematical model that quantitatively predicts co-expression of EGFR-HER2 receptors in activation and trafficking has been taken for this study. The pipeline architecture takes cues from the population model that gives the rates of biochemical reactions, to formulate knowledge-based rules for the particle model. Agent-based simulations using these rules, support the existing facts on EGFR-HER2 dynamics. We conclude that, re-engineering models, built using the results of reverse engineering, opens up the possibility of harnessing the power pack of data which now lies scattered in literature. Virtual experiments could then become more realistic when empowered with the findings of empirical cell biology and modelling studies. Implemented on the Agent Modelling Framework developed in-house. C ++ code templates available in Supplementary material . liz.csir@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Biocellion: accelerating computer simulation of multicellular biological system models.
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-11-01
Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Sensitivity to the Sampling Process Emerges From the Principle of Efficiency.
Jara-Ettinger, Julian; Sun, Felix; Schulz, Laura; Tenenbaum, Joshua B
2018-05-01
Humans can seamlessly infer other people's preferences, based on what they do. Broadly, two types of accounts have been proposed to explain different aspects of this ability. The first account focuses on spatial information: Agents' efficient navigation in space reveals what they like. The second account focuses on statistical information: Uncommon choices reveal stronger preferences. Together, these two lines of research suggest that we have two distinct capacities for inferring preferences. Here we propose that this is not the case, and that spatial-based and statistical-based preference inferences can be explained by the assumption that agents are efficient alone. We show that people's sensitivity to spatial and statistical information when they infer preferences is best predicted by a computational model of the principle of efficiency, and that this model outperforms dual-system models, even when the latter are fit to participant judgments. Our results suggest that, as adults, a unified understanding of agency under the principle of efficiency underlies our ability to infer preferences. Copyright © 2018 Cognitive Science Society, Inc.
Managing a Common Pool Resource: Real Time Decision-Making in a Groundwater Aquifer
NASA Astrophysics Data System (ADS)
Sahu, R.; McLaughlin, D.
2017-12-01
In a Common Pool Resource (CPR) such as a groundwater aquifer, multiple landowners (agents) are competing for a limited resource of water. Landowners pump out the water to grow their own crops. Such problems can be posed as differential games, with agents all trying to control the behavior of the shared dynamic system. Each agent aims to maximize his/her own personal objective like agriculture yield, being aware that the action of every other agent collectively influences the behavior of the shared aquifer. The agents therefore choose a subgame perfect Nash equilibrium strategy that derives an optimal action for each agent based on the current state of the aquifer and assumes perfect information of every other agents' objective function. Furthermore, using an Iterated Best Response approach and interpolating techniques, an optimal pumping strategy can be computed for a more-realistic description of the groundwater model under certain assumptions. The numerical implementation of dynamic optimization techniques for a relevant description of the physical system yields results qualitatively different from the previous solutions obtained from simple abstractions.This work aims to bridge the gap between extensive modeling approaches in hydrology and competitive solution strategies in differential game theory.
Agent 2003 Conference on Challenges in Social Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margaret Clemmons, ed.
Welcome to the Proceedings of the fourth in a series of agent simulation conferences cosponsored by Argonne National Laboratory and The University of Chicago. Agent 2003 is the second conference in which three Special Interest Groups from the North American Association for Computational Social and Organizational Science (NAACSOS) have been involved in planning the program--Computational Social Theory; Simulation Applications; and Methods, Toolkits and Techniques. The theme of Agent 2003, Challenges in Social Simulation, is especially relevant, as there seems to be no shortage of such challenges. Agent simulation has been applied with increasing frequency to social domains for several decades,more » and its promise is clear and increasingly visible. Like any nascent scientific methodology, however, it faces a number of problems or issues that must be addressed in order to progress. These challenges include: (1) Validating models relative to the social settings they are designed to represent; (2) Developing agents and interactions simple enough to understand but sufficiently complex to do justice to the social processes of interest; (3) Bridging the gap between empirically spare artificial societies and naturally occurring social phenomena; (4) Building multi-level models that span processes across domains; (5) Promoting a dialog among theoretical, qualitative, and empirical social scientists and area experts, on the one hand, and mathematical and computational modelers and engineers, on the other; (6) Using that dialog to facilitate substantive progress in the social sciences; and (7) Fulfilling the aspirations of users in business, government, and other application areas, while recognizing and addressing the preceding challenges. Although this list hardly exhausts the challenges the field faces, it does identify topics addressed throughout the presentations of Agent 2003. Agent 2003 is part of a much larger process in which new methods and techniques are applied to difficult social issues. Among the resources that give us the prospect of success is the innovative and transdisciplinary research community being built. We believe that Agent 2003 contributes to further progress in computational modeling of social processes, and we hope that you find these Proceedings to be stimulating and rewarding. As the horizons of this transdiscipline continue to emerge and converge, we hope to provide similar forums that will promote development of agent simulation modeling in the years to come.« less
Self-Organization of Vocabularies under Different Interaction Orders.
Vera, Javier
2017-01-01
Traditionally, the formation of vocabularies has been studied by agent-based models (primarily, the naming game) in which random pairs of agents negotiate word-meaning associations at each discrete time step. This article proposes a first approximation to a novel question: To what extent is the negotiation of word-meaning associations influenced by the order in which agents interact? Automata networks provide the adequate mathematical framework to explore this question. Computer simulations suggest that on two-dimensional lattices the typical features of the formation of word-meaning associations are recovered under random schemes that update small fractions of the population at the same time; by contrast, if larger subsets of the population are updated, a periodic behavior may appear.
A self-taught artificial agent for multi-physics computational model personalization.
Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin
2016-12-01
Personalization is the process of fitting a model to patient data, a critical step towards application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting few hyper-parameters of the agent and defining the observations to match. The full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Then Vito was applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and with faster (up to seven times) convergence rate. Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model. Copyright © 2016. Published by Elsevier B.V.
Construction of Interaction Layer on Socio-Environmental Simulation
NASA Astrophysics Data System (ADS)
Torii, Daisuke; Ishida, Toru
In this study, we propose a method to construct a system based on a legacy socio-environmental simulator which enables to design more realistic interaction models in socio-environmetal simulations. First, to provide a computational model suitable for agent interactions, an interaction layer is constructed and connected from outside of a legacy socio-environmental simulator. Next, to configure the agents interacting ability, connection description for controlling the flow of information in the connection area is provided. As a concrete example, we realized an interaction layer by Q which is a scenario description language and connected it to CORMAS, a socio-envirionmental simulator. Finally, we discuss the capability of our method, using the system, in the Fire-Fighter domain.
KODAMA and VPC based Framework for Ubiquitous Systems and its Experiment
NASA Astrophysics Data System (ADS)
Takahashi, Kenichi; Amamiya, Satoshi; Iwao, Tadashige; Zhong, Guoqiang; Kainuma, Tatsuya; Amamiya, Makoto
Recently, agent technologies have attracted a lot of interest as an emerging programming paradigm. With such agent technologies, services are provided through collaboration among agents. At the same time, the spread of mobile technologies and communication infrastructures has made it possible to access the network anytime and from anywhere. Using agents and mobile technologies to realize ubiquitous computing systems, we propose a new framework based on KODAMA and VPC. KODAMA provides distributed management mechanisms by using the concept of community and communication infrastructure to deliver messages among agents without agents being aware of the physical network. VPC provides a method of defining peer-to-peer services based on agent communication with policy packages. By merging the characteristics of both KODAMA and VPC functions, we propose a new framework for ubiquitous computing environments. It provides distributed management functions according to the concept of agent communities, agent communications which are abstracted from the physical environment, and agent collaboration with policy packages. Using our new framework, we conducted a large-scale experiment in shopping malls in Nagoya, which sent advertisement e-mails to users' cellular phones according to user location and attributes. The empirical results showed that our new framework worked effectively for sales in shopping malls.
GDSCalc: A Web-Based Application for Evaluating Discrete Graph Dynamical Systems
Elmeligy Abdelhamid, Sherif H.; Kuhlman, Chris J.; Marathe, Madhav V.; Mortveit, Henning S.; Ravi, S. S.
2015-01-01
Discrete dynamical systems are used to model various realistic systems in network science, from social unrest in human populations to regulation in biological networks. A common approach is to model the agents of a system as vertices of a graph, and the pairwise interactions between agents as edges. Agents are in one of a finite set of states at each discrete time step and are assigned functions that describe how their states change based on neighborhood relations. Full characterization of state transitions of one system can give insights into fundamental behaviors of other dynamical systems. In this paper, we describe a discrete graph dynamical systems (GDSs) application called GDSCalc for computing and characterizing system dynamics. It is an open access system that is used through a web interface. We provide an overview of GDS theory. This theory is the basis of the web application; i.e., an understanding of GDS provides an understanding of the software features, while abstracting away implementation details. We present a set of illustrative examples to demonstrate its use in education and research. Finally, we compare GDSCalc with other discrete dynamical system software tools. Our perspective is that no single software tool will perform all computations that may be required by all users; tools typically have particular features that are more suitable for some tasks. We situate GDSCalc within this space of software tools. PMID:26263006
GDSCalc: A Web-Based Application for Evaluating Discrete Graph Dynamical Systems.
Elmeligy Abdelhamid, Sherif H; Kuhlman, Chris J; Marathe, Madhav V; Mortveit, Henning S; Ravi, S S
2015-01-01
Discrete dynamical systems are used to model various realistic systems in network science, from social unrest in human populations to regulation in biological networks. A common approach is to model the agents of a system as vertices of a graph, and the pairwise interactions between agents as edges. Agents are in one of a finite set of states at each discrete time step and are assigned functions that describe how their states change based on neighborhood relations. Full characterization of state transitions of one system can give insights into fundamental behaviors of other dynamical systems. In this paper, we describe a discrete graph dynamical systems (GDSs) application called GDSCalc for computing and characterizing system dynamics. It is an open access system that is used through a web interface. We provide an overview of GDS theory. This theory is the basis of the web application; i.e., an understanding of GDS provides an understanding of the software features, while abstracting away implementation details. We present a set of illustrative examples to demonstrate its use in education and research. Finally, we compare GDSCalc with other discrete dynamical system software tools. Our perspective is that no single software tool will perform all computations that may be required by all users; tools typically have particular features that are more suitable for some tasks. We situate GDSCalc within this space of software tools.
ERIC Educational Resources Information Center
Blikstein, Paulo; Wilensky, Uri
2009-01-01
This article reports on "MaterialSim", an undergraduate-level computational materials science set of constructionist activities which we have developed and tested in classrooms. We investigate: (a) the cognition of students engaging in scientific inquiry through interacting with simulations; (b) the effects of students programming simulations as…
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schrenkenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AA T project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
NASA Astrophysics Data System (ADS)
Alfarano, Simone; Lux, Thomas; Wagner, Friedrich
2006-10-01
Following Alfarano et al. [Estimation of agent-based models: the case of an asymmetric herding model, Comput. Econ. 26 (2005) 19-49; Excess volatility and herding in an artificial financial market: analytical approach and estimation, in: W. Franz, H. Ramser, M. Stadler (Eds.), Funktionsfähigkeit und Stabilität von Finanzmärkten, Mohr Siebeck, Tübingen, 2005, pp. 241-254], we consider a simple agent-based model of a highly stylized financial market. The model takes Kirman's ant process [A. Kirman, Epidemics of opinion and speculative bubbles in financial markets, in: M.P. Taylor (Ed.), Money and Financial Markets, Blackwell, Cambridge, 1991, pp. 354-368; A. Kirman, Ants, rationality, and recruitment, Q. J. Econ. 108 (1993) 137-156] of mimetic contagion as its starting point, but allows for asymmetry in the attractiveness of both groups. Embedding the contagion process into a standard asset-pricing framework, and identifying the abstract groups of the herding model as chartists and fundamentalist traders, a market with periodic bubbles and bursts is obtained. Taking stock of the availability of a closed-form solution for the stationary distribution of returns for this model, we can estimate its parameters via maximum likelihood. Expanding our earlier work, this paper presents pertinent estimates for the Australian dollar/US dollar exchange rate and the Australian stock market index. As it turns out, our model indicates dominance of fundamentalist behavior in both the stock and foreign exchange market.
Modelling sociocognitive aspects of students' learning
NASA Astrophysics Data System (ADS)
Koponen, I. T.; Kokkonen, T.; Nousiainen, M.
2017-03-01
We present a computational model of sociocognitive aspects of learning. The model takes into account a student's individual cognition and sociodynamics of learning. We describe cognitive aspects of learning as foraging for explanations in the epistemic landscape, the structure (set by instructional design) of which guides the cognitive development through success or failure in foraging. We describe sociodynamic aspects as an agent-based model, where agents (learners) compare and adjust their conceptions of their own proficiency (self-proficiency) and that of their peers (peer-proficiency) in using explanatory schemes of different levels. We apply the model here in a case involving a three-tiered system of explanatory schemes, which can serve as a generic description of some well-known cases studied in empirical research on learning. The cognitive dynamics lead to the formation of dynamically robust outcomes of learning, seen as a strong preference for a certain explanatory schemes. The effects of social learning, however, can account for half of one's success in adopting higher-level schemes and greater proficiency. The model also predicts a correlation of dynamically emergent interaction patterns between agents and the learning outcomes.
Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M
2018-06-18
There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in way that is practical to sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method and develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and the modelling of a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point to further apply more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Conceptual Modeling in the Time of the Revolution: Part II
NASA Astrophysics Data System (ADS)
Mylopoulos, John
Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role both in Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML), and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on-going research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.
Narang, Sahil; Best, Andrew; Curtis, Sean; Manocha, Dinesh
2015-01-01
Pedestrian crowds often have been modeled as many-particle system including microscopic multi-agent simulators. One of the key challenges is to unearth governing principles that can model pedestrian movement, and use them to reproduce paths and behaviors that are frequently observed in human crowds. To that effect, we present a novel crowd simulation algorithm that generates pedestrian trajectories that exhibit the speed-density relationships expressed by the Fundamental Diagram. Our approach is based on biomechanical principles and psychological factors. The overall formulation results in better utilization of free space by the pedestrians and can be easily combined with well-known multi-agent simulation techniques with little computational overhead. We are able to generate human-like dense crowd behaviors in large indoor and outdoor environments and validate the results with captured real-world crowd trajectories. PMID:25875932
LeRouge, Cynthia; Dickhut, Kathryn; Lisetti, Christine; Sangameswaran, Savitha; Malasanos, Toree
2016-01-01
This research focuses on the potential ability of animated avatars (a digital representation of the user) and virtual agents (a digital representation of a coach, buddy, or teacher) to deliver computer-based interventions for adolescents' chronic weight management. An exploration of the acceptance and desire of teens to interact with avatars and virtual agents for self-management and behavioral modification was undertaken. The utilized approach was inspired by community-based participatory research. Data was collected from 2 phases: Phase 1) focus groups with teens, provider interviews, parent interviews; and Phase 2) mid-range prototype assessment by teens and providers. Data from all stakeholder groups expressed great interest in avatars and virtual agents assisting self-management efforts. Adolescents felt the avatars and virtual agents could: 1) reinforce guidance and support, 2) fit within their lifestyle, and 3) help set future goals, particularly after witnessing the effect of their current behavior(s) on the projected physical appearance (external and internal organs) of avatars. Teens wanted 2 virtual characters: a virtual agent to act as a coach or teacher and an avatar (extension of themselves) to serve as a "buddy" for empathic support and guidance and as a surrogate for rewards. Preferred modalities for use include both mobile devices to accommodate access and desktop to accommodate preferences for maximum screen real estate to support virtualization of functions that are more contemplative and complex (e.g., goal setting). Adolescents expressed a desire for limited co-user access, which they could regulate. Data revealed certain barriers and facilitators that could affect adoption and use. The current study extends the support of teens, parents, and providers for adding avatars or virtual agents to traditional computer-based interactions. Data supports the desire for a personal relationship with a virtual character in support of previous studies. The study provides a foundation for further work in the area of avatar-driven motivational interviewing. This study provides evidence supporting the use of avatars and virtual agents, designed using participatory approaches, to be included in the continuum of care. Increased probability of engagement and long-term retention of overweight, obese adolescent users and suggests expanding current chronic care models toward more comprehensive, socio-technical representations. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Modeling formalisms in Systems Biology
2011-01-01
Systems Biology has taken advantage of computational tools and high-throughput experimental data to model several biological processes. These include signaling, gene regulatory, and metabolic networks. However, most of these models are specific to each kind of network. Their interconnection demands a whole-cell modeling framework for a complete understanding of cellular systems. We describe the features required by an integrated framework for modeling, analyzing and simulating biological processes, and review several modeling formalisms that have been used in Systems Biology including Boolean networks, Bayesian networks, Petri nets, process algebras, constraint-based models, differential equations, rule-based models, interacting state machines, cellular automata, and agent-based models. We compare the features provided by different formalisms, and discuss recent approaches in the integration of these formalisms, as well as possible directions for the future. PMID:22141422
New approaches in agent-based modeling of complex financial systems
NASA Astrophysics Data System (ADS)
Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei
2017-12-01
Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the concept for determining the key parameters of agent-based models from empirical data instead of setting them artificially was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.
Luhmann, Christian C; Rajaram, Suparna
2015-12-01
The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information. © The Author(s) 2015.
Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben
2015-07-01
Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. The contrast agent concentrations are reconstructed with more than 94% accuracy.
Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben
2015-01-01
Abstract. Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. The contrast agent concentrations are reconstructed with more than 94% accuracy. PMID:26839904
An Investment Behavior Analysis using by Brain Computer Interface
NASA Astrophysics Data System (ADS)
Suzuki, Kyoko; Kinoshita, Kanta; Miyagawa, Kazuhiro; Shiomi, Shinichi; Misawa, Tadanobu; Shimokawa, Tetsuya
In this paper, we will construct a new Brain Computer Interface (BCI), for the purpose of analyzing human's investment decision makings. The BCI is made up of three functional parts which take roles of, measuring brain information, determining market price in an artificial market, and specifying investment decision model, respectively. When subjects make decisions, their brain information is conveyed to the part of specifying investment decision model through the part of measuring brain information, whereas, their decisions of investment order are sent to the part of artificial market to form market prices. Both the support vector machine and the 3 layered perceptron are used to assess the investment decision model. In order to evaluate our BCI, we conduct an experiment in which subjects and a computer trader agent trade shares of stock in the artificial market and test how the computer trader agent can forecast market price formation and investment decision makings from the brain information of subjects. The result of the experiment shows that the brain information can improve the accuracy of forecasts, and so the computer trader agent can supply market liquidity to stabilize market volatility without his loss.
A Stigmergy Collaboration Approach in the Open Source Software Developer Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xiaohui; Pullum, Laura L; Treadwell, Jim N
2009-01-01
The communication model of some self-organized online communities is significantly different from the traditional social network based community. It is problematic to use social network analysis to analyze the collaboration structure and emergent behaviors in these communities because these communities lack peer-to-peer connections. Stigmergy theory provides an explanation of the collaboration model of these communities. In this research, we present a stigmergy approach for building an agent-based simulation to simulate the collaboration model in the open source software (OSS) developer community. We used a group of actors who collaborate on OSS projects through forums as our frame of reference andmore » investigated how the choices actors make in contributing their work on the projects determines the global status of the whole OSS project. In our simulation, the forum posts serve as the digital pheromone and the modified Pierre-Paul Grasse pheromone model is used for computing the developer agents behavior selection probability.« less
Agent-based modeling of the interaction between CD8+ T cells and Beta cells in type 1 diabetes.
Ozturk, Mustafa Cagdas; Xu, Qian; Cinar, Ali
2018-01-01
We propose an agent-based model for the simulation of the autoimmune response in T1D. The model incorporates cell behavior from various rules derived from the current literature and is implemented on a high-performance computing system, which enables the simulation of a significant portion of the islets in the mouse pancreas. Simulation results indicate that the model is able to capture the trends that emerge during the progression of the autoimmunity. The multi-scale nature of the model enables definition of rules or equations that govern cellular or sub-cellular level phenomena and observation of the outcomes at the tissue scale. It is expected that such a model would facilitate in vivo clinical studies through rapid testing of hypotheses and planning of future experiments by providing insight into disease progression at different scales, some of which may not be obtained easily in clinical studies. Furthermore, the modular structure of the model simplifies tasks such as the addition of new cell types, and the definition or modification of different behaviors of the environment and the cells with ease.
NASA Astrophysics Data System (ADS)
Jonker, C. M.; Snoep, J. L.; Treur, J.; Westerhoff, H. V.; Wijngaards, W. C. A.
Within the areas of Computational Organisation Theory and Artificial Intelligence, techniques have been developed to simulate and analyse dynamics within organisations in society. Usually these modelling techniques are applied to factories and to the internal organisation of their process flows, thus obtaining models of complex organisations at various levels of aggregation. The dynamics in living cells are often interpreted in terms of well-organised processes, a bacterium being considered a (micro)factory. This suggests that organisation modelling techniques may also benefit their analysis. Using the example of Escherichia coli it is shown how indeed agent-based organisational modelling techniques can be used to simulate and analyse E.coli's intracellular dynamics. Exploiting the abstraction levels entailed by this perspective, a concise model is obtained that is readily simulated and analysed at the various levels of aggregation, yet shows the cell's essential dynamic patterns.
Designing an Agent-Based Model for Childhood Obesity Interventions: A Case Study of ChildObesity180.
Hennessy, Erin; Ornstein, Joseph T; Economos, Christina D; Herzog, Julia Bloom; Lynskey, Vanessa; Coffield, Edward; Hammond, Ross A
2016-01-07
Complex systems modeling can provide useful insights when designing and anticipating the impact of public health interventions. We developed an agent-based, or individual-based, computation model (ABM) to aid in evaluating and refining implementation of behavior change interventions designed to increase physical activity and healthy eating and reduce unnecessary weight gain among school-aged children. The potential benefits of applying an ABM approach include estimating outcomes despite data gaps, anticipating impact among different populations or scenarios, and exploring how to expand or modify an intervention. The practical challenges inherent in implementing such an approach include data resources, data availability, and the skills and knowledge of ABM among the public health obesity intervention community. The aim of this article was to provide a step-by-step guide on how to develop an ABM to evaluate multifaceted interventions on childhood obesity prevention in multiple settings. We used data from 2 obesity prevention initiatives and public-use resources. The details and goals of the interventions, overview of the model design process, and generalizability of this approach for future interventions is discussed.
A Unified Approach to Model-Based Planning and Execution
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)
2000-01-01
Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implication of the structure of the architecture, mainly in the area of reactivity and interaction between reactive and deliberative decision making. We conclude with related work and current status.
Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation
Biggs, Matthew B.; Papin, Jason A.
2013-01-01
Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool. PMID:24147108
Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.
Biggs, Matthew B; Papin, Jason A
2013-01-01
Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.
Connectionist agent-based learning in bank-run decision making
NASA Astrophysics Data System (ADS)
Huang, Weihong; Huang, Qiao
2018-05-01
It is of utter importance for the policy makers, bankers, and investors to thoroughly understand the probability of bank-run (PBR) which was often neglected in the classical models. Bank-run is not merely due to miscoordination (Diamond and Dybvig, 1983) or deterioration of bank assets (Allen and Gale, 1998) but various factors. This paper presents the simulation results of the nonlinear dynamic probabilities of bank runs based on the global games approach, with the distinct assumption that heterogenous agents hold highly correlated but unidentical beliefs about the true payoffs. The specific technique used in the simulation is to let agents have an integrated cognitive-affective network. It is observed that, even when the economy is good, agents are significantly affected by the cognitive-affective network to react to bad news which might lead to bank-run. Both the rise of the late payoffs, R, and the early payoffs, r, will decrease the effect of the affective process. The increased risk sharing might or might not increase PBR, and the increase in late payoff is beneficial for preventing the bank run. This paper is one of the pioneers that links agent-based computational economics and behavioral economics.
Niederalt, Christoph; Wendl, Thomas; Kuepfer, Lars; Claassen, Karina; Loosen, Roland; Willmann, Stefan; Lippert, Joerg; Schultze-Mosgau, Marcus; Winkler, Julia; Burghaus, Rolf; Bräutigam, Matthias; Pietsch, Hubertus; Lengsfeld, Philipp
2013-01-01
A physiologically based kidney model was developed to analyze the renal excretion and kidney exposure of hydrophilic agents, in particular contrast media, in rats. In order to study the influence of osmolality and viscosity changes, the model mechanistically represents urine concentration by water reabsorption in different segments of kidney tubules and viscosity dependent tubular fluid flow. The model was established using experimental data on the physiological steady state without administration of any contrast media or drugs. These data included the sodium and urea concentration gradient along the cortico-medullary axis, water reabsorption, urine flow, and sodium as well as urea urine concentrations for a normal hydration state. The model was evaluated by predicting the effects of mannitol and contrast media administration and comparing to experimental data on cortico-medullary concentration gradients, urine flow, urine viscosity, hydrostatic tubular pressures and single nephron glomerular filtration rate. Finally the model was used to analyze and compare typical examples of ionic and non-ionic monomeric as well as non-ionic dimeric contrast media with respect to their osmolality and viscosity. With the computational kidney model, urine flow depended mainly on osmolality, while osmolality and viscosity were important determinants for tubular hydrostatic pressure and kidney exposure. The low diuretic effect of dimeric contrast media in combination with their high intrinsic viscosity resulted in a high viscosity within the tubular fluid. In comparison to monomeric contrast media, this led to a higher increase in tubular pressure, to a reduction in glomerular filtration rate and tubular flow and to an increase in kidney exposure. The presented kidney model can be implemented into whole body physiologically based pharmacokinetic models and extended in order to simulate the renal excretion of lipophilic drugs which may also undergo active secretion and reabsorption. PMID:23355822
NASA Astrophysics Data System (ADS)
Pham, Vinh Huy
Stakeholders of the educational system assume that standardized tests are transparently about the subject content being tested and therefore can be used as a metric to measure achievement in outcome-based educational reform. Both analysis of longitudinal data for the Texas Assessment of Knowledge and Skills (TAKS) exam and agent based computer modeling of its underlying theoretical testing framework have yielded results that indicate the exam only rank orders students on a persistent but uncharacterized latent trait across domains tested as well as across years. Such persistent rank ordering of students is indicative of an instructionally insensitive exam. This is problematic in the current atmosphere of high stakes testing which holds teachers, administrators, and school systems accountable for student achievement.
1990-04-01
focus of attention ). The inherent local control in the FA/C model allows it to achieve just that, since it only requires a global goal to become...Computing Terms Agent Modelling : is concerned with modelling actor’s intentions and plans, and their modification in the light of information... model or program that is based on a mathematical system of logic. B-tree : or "binary-tree" is a self organising storage mechanism that works by taking
Approaching neuropsychological tasks through adaptive neurorobots
NASA Astrophysics Data System (ADS)
Gigliotta, Onofrio; Bartolomeo, Paolo; Miglino, Orazio
2015-04-01
Neuropsychological phenomena have been modelized mainly, by the mainstream approach, by attempting to reproduce their neural substrate whereas sensory-motor contingencies have attracted less attention. In this work, we introduce a simulator based on the evolutionary robotics platform Evorobot* in order to setting up in silico neuropsychological tasks. Moreover, in this study we trained artificial embodied neurorobotic agents equipped with a pan/tilt camera, provided with different neural and motor capabilities, to solve a well-known neuropsychological test: the cancellation task in which an individual is asked to cancel target stimuli surrounded by distractors. Results showed that embodied agents provided with additional motor capabilities (a zooming/attentional actuator) outperformed simple pan/tilt agents, even those equipped with more complex neural controllers and that the zooming ability is exploited to correctly categorising presented stimuli. We conclude that since the sole neural computational power cannot explain the (artificial) cognition which emerged throughout the adaptive process, such kind of modelling approach can be fruitful in neuropsychological modelling where the importance of having a body is often neglected.
Parallel computing in enterprise modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priorimore » ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principal makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.« less
Interaction with Machine Improvisation
NASA Astrophysics Data System (ADS)
Assayag, Gerard; Bloch, George; Cont, Arshia; Dubnov, Shlomo
We describe two multi-agent architectures for an improvisation oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second framework, the stylistic interaction is delegated to machine intelligence and therefore, knowledge propagation and decision are taken care of by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modelling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.
Wiltshire, Serge W
2018-01-01
An agent-based computer model that builds representative regional U.S. hog production networks was developed and employed to assess the potential impact of the ongoing trend towards increased producer specialization upon network-level resilience to catastrophic disease outbreaks. Empirical analyses suggest that the spatial distribution and connectivity patterns of contact networks often predict epidemic spreading dynamics. Our model heuristically generates realistic systems composed of hog producer, feed mill, and slaughter plant agents. Network edges are added during each run as agents exchange livestock and feed. The heuristics governing agents' contact patterns account for factors including their industry roles, physical proximities, and the age of their livestock. In each run, an infection is introduced, and may spread according to probabilities associated with the various modes of contact. For each of three treatments-defined by one-phase, two-phase, and three-phase production systems-a parameter variation experiment examines the impact of the spatial density of producer agents in the system upon the length and size of disease outbreaks. Resulting data show phase transitions whereby, above some density threshold, systemic outbreaks become possible, echoing findings from percolation theory. Data analysis reveals that multi-phase production systems are vulnerable to catastrophic outbreaks at lower spatial densities, have more abrupt percolation transitions, and are characterized by less-predictable outbreak scales and durations. Key differences in network-level metrics shed light on these results, suggesting that the absence of potentially-bridging producer-producer edges may be largely responsible for the superior disease resilience of single-phase "farrow to finish" production systems.
Crowd Simulation Incorporating Agent Psychological Models, Roles and Communication
2005-01-01
system (PMFserv) that implements human behavior models from a range of ability, stress, emotion , decision theoretic and motivation sources. An...autonomous agents, human behavior models, culture and emotions 1. Introduction There are many applications of computer animation and simulation where...We describe a new architecture to integrate a psychological model into a crowd simulation system in order to obtain believable emergent behaviors
The Lagrangian Ensemble metamodel for simulating plankton ecosystems
NASA Astrophysics Data System (ADS)
Woods, J. D.
2005-10-01
This paper presents a detailed account of the Lagrangian Ensemble (LE) metamodel for simulating plankton ecosystems. It uses agent-based modelling to describe the life histories of many thousands of individual plankters. The demography of each plankton population is computed from those life histories. So too is bio-optical and biochemical feedback to the environment. The resulting “virtual ecosystem” is a comprehensive simulation of the plankton ecosystem. It is based on phenotypic equations for individual micro-organisms. LE modelling differs significantly from population-based modelling. The latter uses prognostic equations to compute demography and biofeedback directly. LE modelling diagnoses them from the properties of individual micro-organisms, whose behaviour is computed from prognostic equations. That indirect approach permits the ecosystem to adjust gracefully to changes in exogenous forcing. The paper starts with theory: it defines the Lagrangian Ensemble metamodel and explains how LE code performs a number of computations “behind the curtain”. They include budgeting chemicals, and deriving biofeedback and demography from individuals. The next section describes the practice of LE modelling. It starts with designing a model that complies with the LE metamodel. Then it describes the scenario for exogenous properties that provide the computation with initial and boundary conditions. These procedures differ significantly from those used in population-based modelling. The next section shows how LE modelling is used in research, teaching and planning. The practice depends largely on hindcasting to overcome the limits to predictability of weather forecasting. The scientific method explains observable ecosystem phenomena in terms of finer-grained processes that cannot be observed, but which are controlled by the basic laws of physics, chemistry and biology. What-If? Prediction ( WIP), used for planning, extends hindcasting by adding events that describe natural or man-made hazards and remedial actions. Verification is based on the Ecological Turing Test, which takes account of uncertainties in the observed and simulated versions of a target ecological phenomenon. The rest of the paper is devoted to a case study designed to show what LE modelling offers the biological oceanographer. The case study is presented in two parts. The first documents the WB model (Woods & Barkmann, 1994) and scenario used to simulate the ecosystem in a mesocosm moored in deep water off the Azores. The second part illustrates the emergent properties of that virtual ecosystem. The behaviour and development of an individual plankton lineage are revealed by an audit trail of the agent used in the computation. The fields of environmental properties reveal the impact of biofeedback. The fields of demographic properties show how changes in individuals cumulatively affect the birth and death rates of their population. This case study documents the virtual ecosystem used by Woods, Perilli and Barkmann (2005; hereafter WPB); to investigate the stability of simulations created by the Lagrangian Ensemble metamodel. The Azores virtual ecosystem was created and analysed on the Virtual Ecology Workbench (VEW) which is described briefly in the Appendix.
The LUE data model for representation of agents and fields
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2017-04-01
Traditionally, agents-based and field-based modelling environments use different data models to represent the state of information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each storing mostly a single attribute. Such arrays can be used to represent the elevation of the land-surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models in which class instances grouping properties are stored in collections are less efficient (execute slower) than models in which collections of properties are grouped. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, by combining the advantages of the data models used in agent-based and field-based data modelling. This removes the barrier for writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model. The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields. We will show examples of agent and field data represented by the conceptual and physical data model. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue
A problem of optimal control and observation for distributed homogeneous multi-agent system
NASA Astrophysics Data System (ADS)
Kruglikov, Sergey V.
2017-12-01
The paper considers the implementation of a algorithm for controlling a distributed complex of several mobile multi-robots. The concept of a unified information space of the controlling system is applied. The presented information and mathematical models of participants and obstacles, as real agents, and goals and scenarios, as virtual agents, create the base forming the algorithmic and software background for computer decision support system. The controlling scheme assumes the indirect management of the robotic team on the basis of optimal control and observation problem predicting intellectual behavior in a dynamic, hostile environment. A basic content problem is a compound cargo transportation by a group of participants in the case of a distributed control scheme in the terrain with multiple obstacles.
A market-based optimization approach to sensor and resource management
NASA Astrophysics Data System (ADS)
Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.
2006-05-01
Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.
Information-driven trade and price-volume relationship in artificial stock markets
NASA Astrophysics Data System (ADS)
Liu, Xinghua; Liu, Xin; Liang, Xiaobei
2015-07-01
The positive relation between stock price changes and trading volume (price-volume relationship) as a stylized fact has attracted significant interest among finance researchers and investment practitioners. However, until now, consensus has not been reached regarding the causes of the relationship based on real market data because extracting valuable variables (such as information-driven trade volume) from real data is difficult. This lack of general consensus motivates us to develop a simple agent-based computational artificial stock market where extracting the necessary variables is easy. Based on this model and its artificial data, our tests have found that the aggressive trading style of informed agents can produce a price-volume relationship. Therefore, the information spreading process is not a necessary condition for producing price-volume relationship.
Identity in agent-based models : modeling dynamic multiscale social processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozik, J.; Sallach, D. L.; Macal, C. M.
Identity-related issues play central roles in many current events, including those involving factional politics, sectarianism, and tribal conflicts. Two popular models from the computational-social-science (CSS) literature - the Threat Anticipation Program and SharedID models - incorporate notions of identity (individual and collective) and processes of identity formation. A multiscale conceptual framework that extends some ideas presented in these models and draws other capabilities from the broader CSS literature is useful in modeling the formation of political identities. The dynamic, multiscale processes that constitute and transform social identities can be mapped to expressive structures of the framework
An information driven strategy to support multidisciplinary design
NASA Technical Reports Server (NTRS)
Rangan, Ravi M.; Fulton, Robert E.
1990-01-01
The design of complex engineering systems such as aircraft, automobiles, and computers is primarily a cooperative multidisciplinary design process involving interactions between several design agents. The common thread underlying this multidisciplinary design activity is the information exchange between the various groups and disciplines. The integrating component in such environments is the common data and the dependencies that exist between such data. This may be contrasted to classical multidisciplinary analyses problems where there is coupling between distinct design parameters. For example, they may be expressed as mathematically coupled relationships between aerodynamic and structural interactions in aircraft structures, between thermal and structural interactions in nuclear plants, and between control considerations and structural interactions in flexible robots. These relationships provide analytical based frameworks leading to optimization problem formulations. However, in multidisciplinary design problems, information based interactions become more critical. Many times, the relationships between different design parameters are not amenable to analytical characterization. Under such circumstances, information based interactions will provide the best integration paradigm, i.e., there is a need to model the data entities and their dependencies between design parameters originating from different design agents. The modeling of such data interactions and dependencies forms the basis for integrating the various design agents.
System design in an evolving system-of-systems architecture and concept of operations
NASA Astrophysics Data System (ADS)
Rovekamp, Roger N., Jr.
Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., Multi-objective optimization, genetic algorithms, etc) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in literature, as well as on the author's own approaches for power distribution strategies of future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
The distributed agent-based approach in the e-manufacturing environment
NASA Astrophysics Data System (ADS)
Sękala, A.; Kost, G.; Dobrzańska-Danikiewicz, A.; Banaś, W.; Foit, K.
2015-11-01
The deficiency of a coherent flow of information from a production department causes unplanned downtime and failures of machines and their equipment, which in turn results in production planning process based on incorrect and out-of-date information. All of these factors entail, as the consequence, the additional difficulties associated with the process of decision-making. They concern, among other, the coordination of components of a distributed system and providing the access to the required information, thereby generating unnecessary costs. The use of agent technology significantly speeds up the flow of information within the virtual enterprise. This paper includes the proposal of a multi-agent approach for the integration of processes within the virtual enterprise concept. The presented concept was elaborated to investigate the possible solutions of the ways of transmission of information in the production system taking into account the self-organization of constituent components. Thus it implicated the linking of the concept of multi-agent system with the system of managing the production information, based on the idea of e-manufacturing. The paper presents resulting scheme that should be the base for elaborating an informatics model of the target virtual system. The computer system itself is intended to be developed next.
Mosler, Hans-Joachim; Martens, Thomas
2008-09-01
Agent-based computer simulation was used to create artificial communities in which each individual was constructed according to the principles of the elaboration likelihood model of Petty and Cacioppo [1986. The elaboration likelihood model of persuasion. In: Berkowitz, L. (Ed.), Advances in Experimental Social Psychology. Academic Press, New York, NY, pp. 123-205]. Campaigning strategies and community characteristics were varied systematically to understand and test their impact on attitudes towards environmental protection. The results show that strong arguments influence a green (environmentally concerned) population with many contacts most effectively, while peripheral cues have the greatest impact on a non-green population with fewer contacts. Overall, deeper information scrutiny increases the impact of strong arguments but is especially important for convincing green populations. Campaigns involving person-to-person communication are superior to mass-media campaigns because they can be adapted to recipients' characteristics.
Vali, Alireza; Abla, Adib A; Lawton, Michael T; Saloner, David; Rayz, Vitaliy L
2017-01-04
In vivo measurement of blood velocity fields and flow descriptors remains challenging due to image artifacts and limited resolution of current imaging methods; however, in vivo imaging data can be used to inform and validate patient-specific computational fluid dynamics (CFD) models. Image-based CFD can be particularly useful for planning surgical interventions in complicated cases such as fusiform aneurysms of the basilar artery, where it is crucial to alter pathological hemodynamics while preserving flow to the distal vasculature. In this study, patient-specific CFD modeling was conducted for two basilar aneurysm patients considered for surgical treatment. In addition to velocity fields, transport of contrast agent was simulated for the preoperative and postoperative conditions using two approaches. The transport of a virtual contrast passively following the flow streamlines was simulated to predict post-surgical flow regions prone to thrombus deposition. In addition, the transport of a mixture of blood with an iodine-based contrast agent was modeled to compare and verify the CFD results with X-ray angiograms. The CFD-predicted patterns of contrast flow were qualitatively compared to in vivo X-ray angiograms acquired before and after the intervention. The results suggest that the mixture modeling approach, accounting for the flow rates and properties of the contrast injection, is in better agreement with the X-ray angiography data. The virtual contrast modeling assessed the residence time based on flow patterns unaffected by the injection procedure, which makes the virtual contrast modeling approach better suited for prediction of thrombus deposition, which is not limited to the peri-procedural state. Copyright © 2016 Elsevier Ltd. All rights reserved.
A comprehensive overview of the applications of artificial life.
Kim, Kyung-Joong; Cho, Sung-Bae
2006-01-01
We review the applications of artificial life (ALife), the creation of synthetic life on computers to study, simulate, and understand living systems. The definition and features of ALife are shown by application studies. ALife application fields treated include robot control, robot manufacturing, practical robots, computer graphics, natural phenomenon modeling, entertainment, games, music, economics, Internet, information processing, industrial design, simulation software, electronics, security, data mining, and telecommunications. In order to show the status of ALife application research, this review primarily features a survey of about 180 ALife application articles rather than a selected representation of a few articles. Evolutionary computation is the most popular method for designing such applications, but recently swarm intelligence, artificial immune network, and agent-based modeling have also produced results. Applications were initially restricted to the robotics and computer graphics, but presently, many different applications in engineering areas are of interest.
Linking Cognitive and Social Aspects of Sound Change Using Agent-Based Modeling.
Harrington, Jonathan; Kleber, Felicitas; Reubold, Ulrich; Schiel, Florian; Stevens, Mary
2018-03-26
The paper defines the core components of an interactive-phonetic (IP) sound change model. The starting point for the IP-model is that a phonological category is often skewed phonetically in a certain direction by the production and perception of speech. A prediction of the model is that sound change is likely to come about as a result of perceiving phonetic variants in the direction of the skew and at the probabilistic edge of the listener's phonological category. The results of agent-based computational simulations applied to the sound change in progress, /u/-fronting in Standard Southern British, were consistent with this hypothesis. The model was extended to sound changes involving splits and mergers by using the interaction between the agents to drive the phonological reclassification of perceived speech signals. The simulations showed no evidence of any acoustic change when this extended model was applied to Australian English data in which /s/ has been shown to retract due to coarticulation in /str/ clusters. Some agents nevertheless varied in their phonological categorizations during interaction between /str/ and /ʃtr/: This vacillation may represent the potential for sound change to occur. The general conclusion is that many types of sound change are the outcome of how phonetic distributions are oriented with respect to each other, their association to phonological classes, and how these types of information vary between speakers that happen to interact with each other. Copyright © 2018 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
ERIC Educational Resources Information Center
Gu, X.; Blackmore, K. L.
2015-01-01
This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…
Agent-Based Modeling of Chronic Diseases: A Narrative Review and Future Research Directions
Lawley, Mark A.; Siscovick, David S.; Zhang, Donglan; Pagán, José A.
2016-01-01
The United States is experiencing an epidemic of chronic disease. As the US population ages, health care providers and policy makers urgently need decision models that provide systematic, credible prediction regarding the prevention and treatment of chronic diseases to improve population health management and medical decision-making. Agent-based modeling is a promising systems science approach that can model complex interactions and processes related to chronic health conditions, such as adaptive behaviors, feedback loops, and contextual effects. This article introduces agent-based modeling by providing a narrative review of agent-based models of chronic disease and identifying the characteristics of various chronic health conditions that must be taken into account to build effective clinical- and policy-relevant models. We also identify barriers to adopting agent-based models to study chronic diseases. Finally, we discuss future research directions of agent-based modeling applied to problems related to specific chronic health conditions. PMID:27236380
Agent-Based Modeling of Chronic Diseases: A Narrative Review and Future Research Directions.
Li, Yan; Lawley, Mark A; Siscovick, David S; Zhang, Donglan; Pagán, José A
2016-05-26
The United States is experiencing an epidemic of chronic disease. As the US population ages, health care providers and policy makers urgently need decision models that provide systematic, credible prediction regarding the prevention and treatment of chronic diseases to improve population health management and medical decision-making. Agent-based modeling is a promising systems science approach that can model complex interactions and processes related to chronic health conditions, such as adaptive behaviors, feedback loops, and contextual effects. This article introduces agent-based modeling by providing a narrative review of agent-based models of chronic disease and identifying the characteristics of various chronic health conditions that must be taken into account to build effective clinical- and policy-relevant models. We also identify barriers to adopting agent-based models to study chronic diseases. Finally, we discuss future research directions of agent-based modeling applied to problems related to specific chronic health conditions.
Modelling the spread of innovation in wild birds.
Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M
2017-06-01
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
Representing Micro-Macro Linkages by Actor-Based Dynamic Network Models
ERIC Educational Resources Information Center
Snijders, Tom A. B.; Steglich, Christian E. G.
2015-01-01
Stochastic actor-based models for network dynamics have the primary aim of statistical inference about processes of network change, but may be regarded as a kind of agent-based models. Similar to many other agent-based models, they are based on local rules for actor behavior. Different from many other agent-based models, by including elements of…
Computer-generated Model of Purine Nucleoside Phosphorylase (PNP)
NASA Technical Reports Server (NTRS)
1987-01-01
Purine Nucleoside Phosphorylase (PNP) is an important target enzyme for the design of anti-cancer and immunosuppressive drugs. Bacterial PNP, which is slightly different from the human enzyme, is used to synthesize chemotherapuautic agents. Knowledge of the three-dimensional structure of the bacterial PNP molecule is useful in efforts to engineer different types of PNP enzymes, that can be used to produce new chemotherapeutic agents. This picture shows a computer model of bacterial PNP, which looks a lot like a display of colorful ribbons. Principal Investigator was Charles Bugg.
An agent-based model for emotion contagion and competition in online social media
NASA Astrophysics Data System (ADS)
Fan, Rui; Xu, Ke; Zhao, Jichang
2018-04-01
Recent studies suggest that human emotions diffuse in not only real-world communities but also online social media. However, a comprehensive model that considers up-to-date findings and multiple online social media mechanisms is still missing. To bridge this vital gap, an agent-based model, which concurrently considers emotion influence and tie strength preferences, is presented to simulate the emotion contagion and competition. Our model well reproduces patterns observed in the empirical data, like anger's preference on weak ties, anger-dominated users' high vitalities and angry tweets' short retweet intervals, and anger's competitiveness in negative events. The comparison with a previously presented baseline model further demonstrates its effectiveness in modeling online emotion contagion. It is also surprisingly revealed by our model that as the ratio of anger approaches joy with a gap less than 12%, anger will eventually dominate the online social media and arrives the collective outrage in the cyber space. The critical gap disclosed here can be indeed warning signals at early stages for outrage control. Our model would shed lights on the study of multiple issues regarding emotion contagion and competition in terms of computer simulations.
Economic agents and markets as emergent phenomena
Tesfatsion, Leigh
2002-01-01
An overview of recent work in agent-based computational economics is provided, with a stress on the research areas highlighted in the National Academy of Sciences Sackler Colloquium session “Economic Agents and Markets as Emergent Phenomena” held in October 2001. PMID:12011395
Optimizing agent-based transmission models for infectious diseases.
Willem, Lander; Stijven, Sean; Tijskens, Engelbert; Beutels, Philippe; Hens, Niel; Broeckhove, Jan
2015-06-02
Infectious disease modeling and computational power have evolved such that large-scale agent-based models (ABMs) have become feasible. However, the increasing hardware complexity requires adapted software designs to achieve the full potential of current high-performance workstations. We have found large performance differences with a discrete-time ABM for close-contact disease transmission due to data locality. Sorting the population according to the social contact clusters reduced simulation time by a factor of two. Data locality and model performance can also be improved by storing person attributes separately instead of using person objects. Next, decreasing the number of operations by sorting people by health status before processing disease transmission has also a large impact on model performance. Depending of the clinical attack rate, target population and computer hardware, the introduction of the sort phase decreased the run time from 26% up to more than 70%. We have investigated the application of parallel programming techniques and found that the speedup is significant but it drops quickly with the number of cores. We observed that the effect of scheduling and workload chunk size is model specific and can make a large difference. Investment in performance optimization of ABM simulator code can lead to significant run time reductions. The key steps are straightforward: the data structure for the population and sorting people on health status before effecting disease propagation. We believe these conclusions to be valid for a wide range of infectious disease ABMs. We recommend that future studies evaluate the impact of data management, algorithmic procedures and parallelization on model performance.
Modeling marine oily wastewater treatment by a probabilistic agent-based approach.
Jing, Liang; Chen, Bing; Zhang, Baiyu; Ye, Xudong
2018-02-01
This study developed a novel probabilistic agent-based approach for modeling of marine oily wastewater treatment processes. It begins first by constructing a probability-based agent simulation model, followed by a global sensitivity analysis and a genetic algorithm-based calibration. The proposed modeling approach was tested through a case study of the removal of naphthalene from marine oily wastewater using UV irradiation. The removal of naphthalene was described by an agent-based simulation model using 8 types of agents and 11 reactions. Each reaction was governed by a probability parameter to determine its occurrence. The modeling results showed that the root mean square errors between modeled and observed removal rates were 8.73 and 11.03% for calibration and validation runs, respectively. Reaction competition was analyzed by comparing agent-based reaction probabilities, while agents' heterogeneity was visualized by plotting their real-time spatial distribution, showing a strong potential for reactor design and process optimization. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhu, Lusha; Mathewson, Kyle E.; Hsu, Ming
2012-01-01
Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents’ beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs. PMID:22307594
An Active Learning Exercise for Introducing Agent-Based Modeling
ERIC Educational Resources Information Center
Pinder, Jonathan P.
2013-01-01
Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…
2013-11-18
for each valid interface between the systems. The factor is proportional to the count of feasible interfaces in the meta-architecture framework... proportional to the square root of the sector area being covered by each type of system, plus some time for transmitting data to, and double checking by, the...22] J.-H. Ahn, "An Archietcture Description method for Acknowledged System of Systems based on Federated Architeture ," in Advanced Science and
Dynamic social networks facilitate cooperation in the N-player Prisoner’s Dilemma
NASA Astrophysics Data System (ADS)
Rezaei, Golriz; Kirley, Michael
2012-12-01
Understanding how cooperative behaviour evolves in network communities, where the individual members interact via social dilemma games, is an on-going challenge. In this paper, we introduce a social network based model to investigate the evolution of cooperation in the N-player Prisoner’s Dilemma game. As such, this work complements previous studies focused on multi-player social dilemma games and endogenous networks. Agents in our model, employ different game-playing strategies reflecting varying cognitive capacities. When an agent plays cooperatively, a social link is formed with each of the other N-1 group members. Subsequent cooperative actions reinforce this link. However, when an agent defects, the links in the social network are broken. Computational simulations across a range of parameter settings are used to examine different scenarios: varying population and group sizes; the group formation process (or partner selection); and agent decision-making strategies under varying dilemma constraints (cost-to-benefit ratios), including a “discriminator” strategy where the action is based on a function of the weighted links within an agent’s social network. The simulation results show that the proposed social network model is able to evolve and maintain cooperation. As expected, as the value of N increases the equilibrium proportion of cooperators in the population decreases. In addition, this outcome is dependent on the dilemma constraint (cost-to-benefit ratio). However, in some circumstances the dynamic social network plays an increasingly important role in promoting and sustaining cooperation, especially when the agents adopt the discriminator strategy. The adjustment of social links results in the formation of communities of “like-minded” agents. Subsequently, this local optimal behaviour promotes the evolution of cooperative behaviour at the system level.
NASA Astrophysics Data System (ADS)
Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille
2017-12-01
Multiagent systems (MAS) provide a useful tool for exploring the complex dynamics and behavior of financial markets and now MAS approach has been widely implemented and documented in the empirical literature. This paper introduces the implementation of an innovative multi-scale mathematical model for a computational agent-based financial market. The paper develops a method to quantify the degree of self-organization which emerges in the system and shows that the capacity of self-organization is maximized when the agent behaviors are heterogeneous. Numerical results are presented and analyzed, showing how the global market behavior emerges from specific individual behavior interactions.
Schneider, Petra; Hoy, Benjamin; Wessler, Silja; Schneider, Gisbert
2011-01-01
Background The human pathogen Helicobacter pylori (H. pylori) is a main cause for gastric inflammation and cancer. Increasing bacterial resistance against antibiotics demands for innovative strategies for therapeutic intervention. Methodology/Principal Findings We present a method for structure-based virtual screening that is based on the comprehensive prediction of ligand binding sites on a protein model and automated construction of a ligand-receptor interaction map. Pharmacophoric features of the map are clustered and transformed in a correlation vector (‘virtual ligand’) for rapid virtual screening of compound databases. This computer-based technique was validated for 18 different targets of pharmaceutical interest in a retrospective screening experiment. Prospective screening for inhibitory agents was performed for the protease HtrA from the human pathogen H. pylori using a homology model of the target protein. Among 22 tested compounds six block E-cadherin cleavage by HtrA in vitro and result in reduced scattering and wound healing of gastric epithelial cells, thereby preventing bacterial infiltration of the epithelium. Conclusions/Significance This study demonstrates that receptor-based virtual screening with a permissive (‘fuzzy’) pharmacophore model can help identify small bioactive agents for combating bacterial infection. PMID:21483848
Dialogue-Based CALL: An Overview of Existing Research
ERIC Educational Resources Information Center
Bibauw, Serge; François, Thomas; Desmet, Piet
2015-01-01
Dialogue-based Computer-Assisted Language Learning (CALL) covers applications and systems allowing a learner to practice the target language in a meaning-focused conversational activity with an automated agent. We first present a common definition for dialogue-based CALL, based on three features: dialogue as the activity unit, computer as the…
Self-organized Segregation on the Grid
NASA Astrophysics Data System (ADS)
Omidvar, Hamed; Franceschetti, Massimo
2018-02-01
We consider an agent-based model with exponentially distributed waiting times in which two types of agents interact locally over a graph, and based on this interaction and on the value of a common intolerance threshold τ , decide whether to change their types. This is equivalent to a zero-temperature ising model with Glauber dynamics, an asynchronous cellular automaton with extended Moore neighborhoods, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks, and spin glasses systems. Some rigorous results were recently obtained in the theoretical computer science literature, and this work provides several extensions. We enlarge the intolerance interval leading to the expected formation of large segregated regions of agents of a single type from the known size ɛ >0 to size ≈ 0.134. Namely, we show that for 0.433< τ < 1/2 (and by symmetry 1/2<τ <0.567), the expected size of the largest segregated region containing an arbitrary agent is exponential in the size of the neighborhood. We further extend the interval leading to expected large segregated regions to size ≈ 0.312 considering "almost segregated" regions, namely regions where the ratio of the number of agents of one type and the number of agents of the other type vanishes quickly as the size of the neighborhood grows. In this case, we show that for 0.344 < τ ≤ 0.433 (and by symmetry for 0.567 ≤ τ <0.656) the expected size of the largest almost segregated region containing an arbitrary agent is exponential in the size of the neighborhood. This behavior is reminiscent of supercritical percolation, where small clusters of empty sites can be observed within any sufficiently large region of the occupied percolation cluster. The exponential bounds that we provide also imply that complete segregation, where agents of a single type cover the whole grid, does not occur with high probability for p=1/2 and the range of intolerance considered.
Computational Modeling and Treatment Identification in the Myelodysplastic Syndromes.
Drusbosky, Leylah M; Cogle, Christopher R
2017-10-01
This review discusses the need for computational modeling in myelodysplastic syndromes (MDS) and early test results. As our evolving understanding of MDS reveals a molecularly complicated disease, the need for sophisticated computer analytics is required to keep track of the number and complex interplay among the molecular abnormalities. Computational modeling and digital drug simulations using whole exome sequencing data input have produced early results showing high accuracy in predicting treatment response to standard of care drugs. Furthermore, the computational MDS models serve as clinically relevant MDS cell lines for pre-clinical assays of investigational agents. MDS is an ideal disease for computational modeling and digital drug simulations. Current research is focused on establishing the prediction value of computational modeling. Future research will test the clinical advantage of computer-informed therapy in MDS.
Processing Diabetes Mellitus Composite Events in MAGPIE.
Brugués, Albert; Bromuri, Stefano; Barry, Michael; Del Toro, Óscar Jiménez; Mazurkiewicz, Maciej R; Kardas, Przemyslaw; Pegueroles, Josep; Schumacher, Michael
2016-02-01
The focus of this research is in the definition of programmable expert Personal Health Systems (PHS) to monitor patients affected by chronic diseases using agent oriented programming and mobile computing to represent the interactions happening amongst the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns concerning the physiological values of the patient. In the presented agent based PHS the doctors can personalize for each patient monitoring rules that can be defined in a graphical way. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed in a centralized server. The system is evaluated using data of 21 diabetic patients to detect temporal patterns according to a set of monitoring rules defined. The system's scalability is evaluated by comparing it with a centralized approach. The evaluation concerning the detection of temporal patterns highlights the system's ability to monitor chronic patients affected by diabetes. Regarding the scalability, the results show the fact that an approach exploiting the use of mobile computing is more scalable than a centralized approach. Therefore, more likely to satisfy the needs of next generation PHSs. PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices to make an agent based PHS more scalable by using a distributed mobile computing approach. It also discusses how to model the medical knowledge in the PHS in such a way that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system and that modifiable rules are able to deal with the change in lifestyle of the patients affected by chronic illnesses.
NASA Astrophysics Data System (ADS)
Ramalingam, Srikumar
2001-11-01
A highly secure mobile agent system is very important for a mobile computing environment. The security issues in mobile agent system comprise protecting mobile hosts from malicious agents, protecting agents from other malicious agents, protecting hosts from other malicious hosts and protecting agents from malicious hosts. Using traditional security mechanisms the first three security problems can be solved. Apart from using trusted hardware, very few approaches exist to protect mobile code from malicious hosts. Some of the approaches to solve this problem are the use of trusted computing, computing with encrypted function, steganography, cryptographic traces, Seal Calculas, etc. This paper focuses on the simulation of some of these existing techniques in the designed mobile language. Some new approaches to solve malicious network problem and agent tampering problem are developed using public key encryption system and steganographic concepts. The approaches are based on encrypting and hiding the partial solutions of the mobile agents. The partial results are stored and the address of the storage is destroyed as the agent moves from one host to another host. This allows only the originator to make use of the partial results. Through these approaches some of the existing problems are solved.
Using Agent-Based Modeling to Enhance System-Level Real-time Control of Urban Stormwater Systems
NASA Astrophysics Data System (ADS)
Rimer, S.; Mullapudi, A. M.; Kerkez, B.
2017-12-01
The ability to reduce combined-sewer overflow (CSO) events is an issue that challenges over 800 U.S. municipalities. When the volume of a combined sewer system or wastewater treatment plant is exceeded, untreated wastewater then overflows (a CSO event) into nearby streams, rivers, or other water bodies causing localized urban flooding and pollution. The likelihood and impact of CSO events has only exacerbated due to urbanization, population growth, climate change, aging infrastructure, and system complexity. Thus, there is an urgent need for urban areas to manage CSO events. Traditionally, mitigating CSO events has been carried out via time-intensive and expensive structural interventions such as retention basins or sewer separation, which are able to reduce CSO events, but are costly, arduous, and only provide a fixed solution to a dynamic problem. Real-time control (RTC) of urban drainage systems using sensor and actuator networks has served as an inexpensive and versatile alternative to traditional CSO intervention. In particular, retrofitting individual stormwater elements for sensing and automated active distributed control has been shown to significantly reduce the volume of discharge during CSO events, with some RTC models demonstrating a reduction upwards of 90% when compared to traditional passive systems. As more stormwater elements become retrofitted for RTC, system-level RTC across complete watersheds is an attainable possibility. However, when considering the diverse set of control needs of each of these individual stormwater elements, such system-level RTC becomes a far more complex problem. To address such diverse control needs, agent-based modeling is employed such that each individual stormwater element is treated as an autonomous agent with a diverse decision making capabilities. We present preliminary results and limitations of utilizing the agent-based modeling computational framework for the system-level control of diverse, interacting stormwater elements.
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). The specialists in the field of parallel and distributed computing give the special attention to a scalability of applications for problem solving. An effective management of the scalable application in the heterogeneous distributed computing environment is still a non-trivial issue. Control systems that operate in networks, especially relate to this issue. We propose a new approach to the multi-agent management for the scalable applications in the heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for an automation of the problem solving. Advantages of the proposed approach are demonstrated on the parametric synthesis example of the static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of the multi-agent control for the systems in a parallel mode with various degrees of its detailed elaboration.
IA and PA network-based computation of coordinating combat behaviors in the military MAS
NASA Astrophysics Data System (ADS)
Xia, Zuxun; Fang, Huijia
2004-09-01
In the military multi-agent system every agent needs to analyze the dependent and temporal relations among the tasks or combat behaviors for working-out its plans and getting the correct behavior sequences, it could guarantee good coordination, avoid unexpected damnification and guard against bungling the change of winning a battle due to the possible incorrect scheduling and conflicts. In this paper IA and PA network based computation of coordinating combat behaviors is put forward, and emphasize particularly on using 5x5 matrix to represent and compute the temporal binary relation (between two interval-events, two point-events or between one interval-event and one point-event), this matrix method makes the coordination computing convenience than before.
A Heuristic Bioinspired for 8-Piece Puzzle
NASA Astrophysics Data System (ADS)
Machado, M. O.; Fabres, P. A.; Melo, J. C. L.
2017-10-01
This paper investigates a mathematical model inspired by nature, and presents a Meta-Heuristic that is efficient in improving the performance of an informed search, when using strategy A * using a General Search Tree as data structure. The work hypothesis suggests that the investigated meta-heuristic is optimal in nature and may be promising in minimizing the computational resources required by an objective-based agent in solving high computational complexity problems (n-part puzzle) as well as In the optimization of objective functions for local search agents. The objective of this work is to describe qualitatively the characteristics and properties of the mathematical model investigated, correlating the main concepts of the A * function with the significant variables of the metaheuristic used. The article shows that the amount of memory required to perform this search when using the metaheuristic is less than using the A * function to evaluate the nodes of a general search tree for the eight-piece puzzle. It is concluded that the meta-heuristic must be parameterized according to the chosen heuristic and the level of the tree that contains the possible solutions to the chosen problem.
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The ability of human brain to emulate similar graph/network models is found. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. Brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology naturally present in such structures. Mid-level vision processes like perceptual grouping, separation of figure from ground, are special kinds of network transformations. They convert primary image structure into the set of more abstract ones, which represent objects and visual scene, making them easy for analysis by higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similar to frames and agents. Computational intelligence methods transform images into model-based knowledge representation. Based on such principles, an Image/Video Understanding system can convert images into the knowledge models, and resolve uncertainty and ambiguity. This allows creating intelligent computer vision systems for design and manufacturing.
Consentaneous Agent-Based and Stochastic Model of the Financial Markets
Gontis, Vygintas; Kononovicius, Aleksejus
2014-01-01
We are looking for the agent-based treatment of the financial markets considering necessity to build bridges between microscopic, agent based, and macroscopic, phenomenological modeling. The acknowledgment that agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well defined analytically tractable agent systems. Herding as one of the behavior peculiarities considered in the behavioral finance is the main property of the agent interactions we deal with in this contribution. Looking for the consentaneous agent-based and macroscopic approach we combine two origins of the noise: exogenous one, related to the information flow, and endogenous one, arising form the complex stochastic dynamics of agents. As a result we propose a three state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes underlying macroscopic dynamics of agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed in New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation. PMID:25029364
Mobile agent location in distributed environments
NASA Astrophysics Data System (ADS)
Fountoukis, S. G.; Argyropoulos, I. P.
2012-12-01
An agent is a small program acting on behalf of a user or an application which plays the role of a user. Artificial intelligence can be encapsulated in agents so that they can be capable of both behaving autonomously and showing an elementary decision ability regarding movement and some specific actions. Therefore they are often called autonomous mobile agents. In a distributed system, they can move themselves from one processing node to another through the interconnecting network infrastructure. Their purpose is to collect useful information and to carry it back to their user. Also, agents are used to start, monitor and stop processes running on the individual interconnected processing nodes of computer cluster systems. An agent has a unique id to discriminate itself from other agents and a current position. The position can be expressed as the address of the processing node which currently hosts the agent. Very often, it is necessary for a user, a processing node or another agent to know the current position of an agent in a distributed system. Several procedures and algorithms have been proposed for the purpose of position location of mobile agents. The most basic of all employs a fixed computing node, which acts as agent position repository, receiving messages from all the moving agents and keeping records of their current positions. The fixed node, responds to position queries and informs users, other nodes and other agents about the position of an agent. Herein, a model is proposed that considers pairs and triples of agents instead of single ones. A location method, which is investigated in this paper, attempts to exploit this model.
Thermal Destruction Of CB Contaminants Bound On Building ...
Symposium Paper An experimental and theoretical program has been initiated by the U.S. EPA to investigate issues of chemical/biological agent destruction in incineration systems when the agent in question is bound on common porous building interior materials. This program includes 3-dimensional computational fluid dynamics modeling with matrix-bound agent destruction kinetics, bench-scale experiments to determine agent destruction kinetics while bound on various matrices, and pilot-scale experiments to scale-up the bench-scale experiments to a more practical scale. Finally, model predictions are made to predict agent destruction and combustion conditions in two full-scale incineration systems that are typical of modern combustor design.
Design and Implementation of Context-Aware Musuem Guide Agents
NASA Astrophysics Data System (ADS)
Satoh, Ichiro
This paper presents an agent-based system for building and operating context-aware services in public spaces, including museums. The system provides users with agents and detects the locations of users and deploys location-aware user-assistant agents at computers near the their current locations by using active RFID-tags. When a visitor moves between exhibits in a museum, this dynamically deploys his/her agent at the computers close to the exhibits by using mobile agent technology. It annotates the exhibits in his/her personalized form and navigate him/her user to the next exhibits along his/her routes. It also introduces user movement as a natural approach to interacting between users and agents. To demonstrate the utility and effectiveness of the system, we constructed location/user-aware visitor-guide services and experimented them for two weeks in a public museum.
Extending self-organizing particle systems to problem solving.
Rodríguez, Alejandro; Reggia, James A
2004-01-01
Self-organizing particle systems consist of numerous autonomous, purely reflexive agents ("particles") whose collective movements through space are determined primarily by local influences they exert upon one another. Inspired by biological phenomena (bird flocking, fish schooling, etc.), particle systems have been used not only for biological modeling, but also increasingly for applications requiring the simulation of collective movements such as computer-generated animation. In this research, we take some first steps in extending particle systems so that they not only move collectively, but also solve simple problems. This is done by giving the individual particles (agents) a rudimentary intelligence in the form of a very limited memory and a top-down, goal-directed control mechanism that, triggered by appropriate conditions, switches them between different behavioral states and thus different movement dynamics. Such enhanced particle systems are shown to be able to function effectively in performing simulated search-and-collect tasks. Further, computational experiments show that collectively moving agent teams are more effective than similar but independently moving ones in carrying out such tasks, and that agent teams of either type that split off members of the collective to protect previously acquired resources are most effective. This work shows that the reflexive agents of contemporary particle systems can readily be extended to support goal-directed problem solving while retaining their collective movement behaviors. These results may prove useful not only for future modeling of animal behavior, but also in computer animation, coordinated movement control in robotic teams, particle swarm optimization, and computer games.
Smell Detection Agent Based Optimization Algorithm
NASA Astrophysics Data System (ADS)
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm has been employed and the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. This algorithm can be applied in different computational constraints that incorporate path-based problems. Implementation of the algorithm can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. This algorithm is useful to solve NP-hard problems that are related to path discovery. This algorithm is also useful to solve many practical optimization problems. The extensive derivation of the algorithm can be enabled to solve shortest path problems.
Pfeiffer, Ulrich J; Schilbach, Leonhard; Timmermans, Bert; Kuzmanovic, Bojana; Georgescu, Alexandra L; Bente, Gary; Vogeley, Kai
2014-11-01
There is ample evidence that human primates strive for social contact and experience interactions with conspecifics as intrinsically rewarding. Focusing on gaze behavior as a crucial means of human interaction, this study employed a unique combination of neuroimaging, eye-tracking, and computer-animated virtual agents to assess the neural mechanisms underlying this component of behavior. In the interaction task, participants believed that during each interaction the agent's gaze behavior could either be controlled by another participant or by a computer program. Their task was to indicate whether they experienced a given interaction as an interaction with another human participant or the computer program based on the agent's reaction. Unbeknownst to them, the agent was always controlled by a computer to enable a systematic manipulation of gaze reactions by varying the degree to which the agent engaged in joint attention. This allowed creating a tool to distinguish neural activity underlying the subjective experience of being engaged in social and non-social interaction. In contrast to previous research, this allows measuring neural activity while participants experience active engagement in real-time social interactions. Results demonstrate that gaze-based interactions with a perceived human partner are associated with activity in the ventral striatum, a core component of reward-related neurocircuitry. In contrast, interactions with a computer-driven agent activate attention networks. Comparisons of neural activity during interaction with behaviorally naïve and explicitly cooperative partners demonstrate different temporal dynamics of the reward system and indicate that the mere experience of engagement in social interaction is sufficient to recruit this system. Copyright © 2014 Elsevier Inc. All rights reserved.
Conversational Agents Improve Peer Learning through Building on Prior Knowledge
ERIC Educational Resources Information Center
Tegos, Stergios; Demetriadis, Stavros
2017-01-01
Research in computer-supported collaborative learning has indicated that conversational agents can be pedagogically beneficial when used to scaffold students' online discussions. In this study, we investigate the impact of an agile conversational agent that triggers student dialogue by making interventions based on the academically productive talk…
The highly intelligent virtual agents for modeling financial markets
NASA Astrophysics Data System (ADS)
Yang, G.; Chen, Y.; Huang, J. P.
2016-02-01
Researchers have borrowed many theories from statistical physics, like ensemble, Ising model, etc., to study complex adaptive systems through agent-based modeling. However, one fundamental difference between entities (such as spins) in physics and micro-units in complex adaptive systems is that the latter are usually with high intelligence, such as investors in financial markets. Although highly intelligent virtual agents are essential for agent-based modeling to play a full role in the study of complex adaptive systems, how to create such agents is still an open question. Hence, we propose three principles for designing high artificial intelligence in financial markets and then build a specific class of agents called iAgents based on these three principles. Finally, we evaluate the intelligence of iAgents through virtual index trading in two different stock markets. For comparison, we also include three other types of agents in this contest, namely, random traders, agents from the wealth game (modified on the famous minority game), and agents from an upgraded wealth game. As a result, iAgents perform the best, which gives a well support for the three principles. This work offers a general framework for the further development of agent-based modeling for various kinds of complex adaptive systems.
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less
Pattern-oriented modeling of agent-based complex systems: Lessons from ecology
Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.
2005-01-01
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.
Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology
NASA Astrophysics Data System (ADS)
Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.
2005-11-01
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.
Will the digital computer transform classical mathematics?
Rotman, Brian
2003-08-15
Mathematics and machines have influenced each other for millennia. The advent of the digital computer introduced a powerfully new element that promises to transform the relation between them. This paper outlines the thesis that the effect of the digital computer on mathematics, already widespread, is likely to be radical and far-reaching. To articulate this claim, an abstract model of doing mathematics is introduced based on a triad of actors of which one, the 'agent', corresponds to the function performed by the computer. The model is used to frame two sorts of transformation. The first is pragmatic and involves the alterations and progressive colonization of the content and methods of enquiry of various mathematical fields brought about by digital methods. The second is conceptual and concerns a fundamental antagonism between the infinity enshrined in classical mathematics and physics (continuity, real numbers, asymptotic definitions) and the inherently real and material limit of processes associated with digital computation. An example which lies in the intersection of classical mathematics and computer science, the P=NP problem, is analysed in the light of this latter issue.
Computer modeling describes gravity-related adaptation in cell cultures.
Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny
2009-12-16
Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.
Collins, Michael G.; Juvina, Ion; Gluck, Kevin A.
2016-01-01
When playing games of strategic interaction, such as iterated Prisoner's Dilemma and iterated Chicken Game, people exhibit specific within-game learning (e.g., learning a game's optimal outcome) as well as transfer of learning between games (e.g., a game's optimal outcome occurring at a higher proportion when played after another game). The reciprocal trust players develop during the first game is thought to mediate transfer of learning effects. Recently, a computational cognitive model using a novel trust mechanism has been shown to account for human behavior in both games, including the transfer between games. We present the results of a study in which we evaluate the model's a priori predictions of human learning and transfer in 16 different conditions. The model's predictive validity is compared against five model variants that lacked a trust mechanism. The results suggest that a trust mechanism is necessary to explain human behavior across multiple conditions, even when a human plays against a non-human agent. The addition of a trust mechanism to the other learning mechanisms within the cognitive architecture, such as sequence learning, instance-based learning, and utility learning, leads to better prediction of the empirical data. It is argued that computational cognitive modeling is a useful tool for studying trust development, calibration, and repair. PMID:26903892
Are Opinions Based on Science: Modelling Social Response to Scientific Facts
Iñiguez, Gerardo; Tagüeña-Martínez, Julia; Kaski, Kimmo K.; Barrio, Rafael A.
2012-01-01
As scientists we like to think that modern societies and their members base their views, opinions and behaviour on scientific facts. This is not necessarily the case, even though we are all (over-) exposed to information flow through various channels of media, i.e. newspapers, television, radio, internet, and web. It is thought that this is mainly due to the conflicting information on the mass media and to the individual attitude (formed by cultural, educational and environmental factors), that is, one external factor and another personal factor. In this paper we will investigate the dynamical development of opinion in a small population of agents by means of a computational model of opinion formation in a co-evolving network of socially linked agents. The personal and external factors are taken into account by assigning an individual attitude parameter to each agent, and by subjecting all to an external but homogeneous field to simulate the effect of the media. We then adjust the field strength in the model by using actual data on scientific perception surveys carried out in two different populations, which allow us to compare two different societies. We interpret the model findings with the aid of simple mean field calculations. Our results suggest that scientifically sound concepts are more difficult to acquire than concepts not validated by science, since opposing individuals organize themselves in close communities that prevent opinion consensus. PMID:22905117
Are opinions based on science: modelling social response to scientific facts.
Iñiguez, Gerardo; Tagüeña-Martínez, Julia; Kaski, Kimmo K; Barrio, Rafael A
2012-01-01
As scientists we like to think that modern societies and their members base their views, opinions and behaviour on scientific facts. This is not necessarily the case, even though we are all (over-) exposed to information flow through various channels of media, i.e. newspapers, television, radio, internet, and web. It is thought that this is mainly due to the conflicting information on the mass media and to the individual attitude (formed by cultural, educational and environmental factors), that is, one external factor and another personal factor. In this paper we will investigate the dynamical development of opinion in a small population of agents by means of a computational model of opinion formation in a co-evolving network of socially linked agents. The personal and external factors are taken into account by assigning an individual attitude parameter to each agent, and by subjecting all to an external but homogeneous field to simulate the effect of the media. We then adjust the field strength in the model by using actual data on scientific perception surveys carried out in two different populations, which allow us to compare two different societies. We interpret the model findings with the aid of simple mean field calculations. Our results suggest that scientifically sound concepts are more difficult to acquire than concepts not validated by science, since opposing individuals organize themselves in close communities that prevent opinion consensus.
Computational Modeling of Cultural Dimensions in Adversary Organizations
2010-01-01
Nodes”, In the Proceedings of the 9th Conference on Uncertainty in Artificial Intelli - gence, 1993. [8] Pearl, J. Probabilistic Reasoning in...the artificial life simulations; in con- trast, models with only a few agents typically employ quite sophisticated cognitive agents capa- ble of...Model Construction 45 cisions as to how to allocate scarce ISR assets (two Unmanned Air Systems, UAS ) among the two Red activities while at the same
NASA Astrophysics Data System (ADS)
Streitmatter, Seth W.; Stewart, Robert D.; Jenkins, Peter A.; Jevremovic, Tatjana
2017-08-01
A multi-scale Monte Carlo model is proposed to assess the dosimetric and biological impact of iodine-based contrast agents commonly used in computed tomography. As presented, the model integrates the general purpose MCNP6 code system for larger-scale radiation transport and dose assessment with the Monte Carlo damage simulation to determine the sub-cellular characteristics and spatial distribution of initial DNA damage. The repair-misrepair-fixation model is then used to relate DNA double strand break (DSB) induction to reproductive cell death. Comparisons of measured and modeled changes in reproductive cell survival for ultrasoft characteristic k-shell x-rays (0.25-4.55 keV) up to orthovoltage (200-500 kVp) x-rays indicate that the relative biological effectiveness (RBE) for DSB induction is within a few percent of the RBE for cell survival. Because of the very short range of secondary electrons produced by low energy x-ray interactions with contrast agents, the concentration and subcellular distribution of iodine within and near cellular targets have a significant impact on the estimated absorbed dose and number of DSB produced in the cell nucleus. For some plausible models of the cell-level distribution of contrast agent, the model predicts an increase in RBE-weighted dose (RWD) for the endpoint of DSB induction of 1.22-1.40 for a 5-10 mg ml-1 iodine concentration in blood compared to an RWD increase of 1.07 ± 0.19 from a recent clinical trial. The modeled RWD of 2.58 ± 0.03 is also in good agreement with the measured RWD of 2.3 ± 0.5 for an iodine concentration of 50 mg ml-1 relative to no iodine. The good agreement between modeled and measured DSB and cell survival estimates provides some confidence that the presented model can be used to accurately assess biological dose for other concentrations of the same or different contrast agents.
Artificial Exo-Society Modeling: a New Tool for SETI Research
NASA Astrophysics Data System (ADS)
Gardner, James N.
2002-01-01
One of the newest fields of complexity research is artificial society modeling. Methodologically related to artificial life research, artificial society modeling utilizes agent-based computer simulation tools like SWARM and SUGARSCAPE developed by the Santa Fe Institute, Los Alamos National Laboratory and the Bookings Institution in an effort to introduce an unprecedented degree of rigor and quantitative sophistication into social science research. The broad aim of artificial society modeling is to begin the development of a more unified social science that embeds cultural evolutionary processes in a computational environment that simulates demographics, the transmission of culture, conflict, economics, disease, the emergence of groups and coadaptation with an environment in a bottom-up fashion. When an artificial society computer model is run, artificial societal patterns emerge from the interaction of autonomous software agents (the "inhabitants" of the artificial society). Artificial society modeling invites the interpretation of society as a distributed computational system and the interpretation of social dynamics as a specialized category of computation. Artificial society modeling techniques offer the potential of computational simulation of hypothetical alien societies in much the same way that artificial life modeling techniques offer the potential to model hypothetical exobiological phenomena. NASA recently announced its intention to begin exploring the possibility of including artificial life research within the broad portfolio of scientific fields comprised by the interdisciplinary astrobiology research endeavor. It may be appropriate for SETI researchers to likewise commence an exploration of the possible inclusion of artificial exo-society modeling within the SETI research endeavor. Artificial exo-society modeling might be particularly useful in a post-detection environment by (1) coherently organizing the set of data points derived from a detected ETI signal, (2) mapping trends in the data points over time (assuming receipt of an extended ETI signal), and (3) projecting such trends forward to derive alternative cultural evolutionary scenarios for the exo-society under analysis. The latter exercise might be particularly useful to compensate for the inevitable time lag between generation of an ETI signal and receipt of an ETI signal on Earth. For this reason, such an exercise might be a helpful adjunct to the decisional process contemplated by Paragraph 9 of the Declaration of Principles Concerning Activities Following the Detection of Extraterrestrial Intelligence.
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
Solovyev, Alexey; Mi, Qi; Tzen, Yi-Ting; Brienza, David; Vodovotz, Yoram
2013-01-01
Pressure ulcers are costly and life-threatening complications for people with spinal cord injury (SCI). People with SCI also exhibit differential blood flow properties in non-ulcerated skin. We hypothesized that a computer simulation of the pressure ulcer formation process, informed by data regarding skin blood flow and reactive hyperemia in response to pressure, could provide insights into the pathogenesis and effective treatment of post-SCI pressure ulcers. Agent-Based Models (ABM) are useful in settings such as pressure ulcers, in which spatial realism is important. Ordinary Differential Equation-based (ODE) models are useful when modeling physiological phenomena such as reactive hyperemia. Accordingly, we constructed a hybrid model that combines ODEs related to blood flow along with an ABM of skin injury, inflammation, and ulcer formation. The relationship between pressure and the course of ulcer formation, as well as several other important characteristic patterns of pressure ulcer formation, was demonstrated in this model. The ODE portion of this model was calibrated to data related to blood flow following experimental pressure responses in non-injured human subjects or to data from people with SCI. This model predicted a higher propensity to form ulcers in response to pressure in people with SCI vs. non-injured control subjects, and thus may serve as novel diagnostic platform for post-SCI ulcer formation. PMID:23696726
Model-free learning on robot kinematic chains using a nested multi-agent topology
NASA Astrophysics Data System (ADS)
Karigiannis, John N.; Tzafestas, Costas S.
2016-11-01
This paper proposes a model-free learning scheme for the developmental acquisition of robot kinematic control and dexterous manipulation skills. The approach is based on a nested-hierarchical multi-agent architecture that intuitively encapsulates the topology of robot kinematic chains, where the activity of each independent degree-of-freedom (DOF) is finally mapped onto a distinct agent. Each one of those agents progressively evolves a local kinematic control strategy in a game-theoretic sense, that is, based on a partial (local) view of the whole system topology, which is incrementally updated through a recursive communication process according to the nested-hierarchical topology. Learning is thus approached not through demonstration and training but through an autonomous self-exploration process. A fuzzy reinforcement learning scheme is employed within each agent to enable efficient exploration in a continuous state-action domain. This paper constitutes in fact a proof of concept, demonstrating that global dexterous manipulation skills can indeed evolve through such a distributed iterative learning of local agent sensorimotor mappings. The main motivation behind the development of such an incremental multi-agent topology is to enhance system modularity, to facilitate extensibility to more complex problem domains and to improve robustness with respect to structural variations including unpredictable internal failures. These attributes of the proposed system are assessed in this paper through numerical experiments in different robot manipulation task scenarios, involving both single and multi-robot kinematic chains. The generalisation capacity of the learning scheme is experimentally assessed and robustness properties of the multi-agent system are also evaluated with respect to unpredictable variations in the kinematic topology. Furthermore, these numerical experiments demonstrate the scalability properties of the proposed nested-hierarchical architecture, where new agents can be recursively added in the hierarchy to encapsulate individual active DOFs. The results presented in this paper demonstrate the feasibility of such a distributed multi-agent control framework, showing that the solutions which emerge are plausible and near-optimal. Numerical efficiency and computational cost issues are also discussed.
Markov Tracking for Agent Coordination
NASA Technical Reports Server (NTRS)
Washington, Richard; Lau, Sonie (Technical Monitor)
1998-01-01
Partially observable Markov decision processes (POMDPs) axe an attractive representation for representing agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs in general is computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model a behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that restricts the number of possible alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making this amenable to coordination with complex agents.
Decision theory with resource-bounded agents.
Halpern, Joseph Y; Pass, Rafael; Seeman, Lior
2014-04-01
There have been two major lines of research aimed at capturing resource-bounded players in game theory. The first, initiated by Rubinstein (), charges an agent for doing costly computation; the second, initiated by Neyman (), does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory. For the first approach, we take the objects of choice in a decision problem to be Turing machines, and charge players for the "complexity" of the Turing machine chosen (e.g., its running time). This approach can be used to explain well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions) as the outcomes of quite rational decisions. For the second approach, we model people as finite automata, and provide a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases. Copyright © 2014 Cognitive Science Society, Inc.
Evolution of Collective Behaviour in an Artificial World Using Linguistic Fuzzy Rule-Based Systems
Lebar Bajec, Iztok
2017-01-01
Collective behaviour is a fascinating and easily observable phenomenon, attractive to a wide range of researchers. In biology, computational models have been extensively used to investigate various properties of collective behaviour, such as: transfer of information across the group, benefits of grouping (defence against predation, foraging), group decision-making process, and group behaviour types. The question ‘why,’ however remains largely unanswered. Here the interest goes into which pressures led to the evolution of such behaviour, and evolutionary computational models have already been used to test various biological hypotheses. Most of these models use genetic algorithms to tune the parameters of previously presented non-evolutionary models, but very few attempt to evolve collective behaviour from scratch. Of these last, the successful attempts display clumping or swarming behaviour. Empirical evidence suggests that in fish schools there exist three classes of behaviour; swarming, milling and polarized. In this paper we present a novel, artificial life-like evolutionary model, where individual agents are governed by linguistic fuzzy rule-based systems, which is capable of evolving all three classes of behaviour. PMID:28045964
Evolution of Collective Behaviour in an Artificial World Using Linguistic Fuzzy Rule-Based Systems.
Demšar, Jure; Lebar Bajec, Iztok
2017-01-01
Collective behaviour is a fascinating and easily observable phenomenon, attractive to a wide range of researchers. In biology, computational models have been extensively used to investigate various properties of collective behaviour, such as: transfer of information across the group, benefits of grouping (defence against predation, foraging), group decision-making process, and group behaviour types. The question 'why,' however remains largely unanswered. Here the interest goes into which pressures led to the evolution of such behaviour, and evolutionary computational models have already been used to test various biological hypotheses. Most of these models use genetic algorithms to tune the parameters of previously presented non-evolutionary models, but very few attempt to evolve collective behaviour from scratch. Of these last, the successful attempts display clumping or swarming behaviour. Empirical evidence suggests that in fish schools there exist three classes of behaviour; swarming, milling and polarized. In this paper we present a novel, artificial life-like evolutionary model, where individual agents are governed by linguistic fuzzy rule-based systems, which is capable of evolving all three classes of behaviour.
Shorebird Migration Patterns in Response to Climate Change: A Modeling Approach
NASA Technical Reports Server (NTRS)
Smith, James A.
2010-01-01
The availability of satellite remote sensing observations at multiple spatial and temporal scales, coupled with advances in climate modeling and information technologies offer new opportunities for the application of mechanistic models to predict how continental scale bird migration patterns may change in response to environmental change. In earlier studies, we explored the phenotypic plasticity of a migratory population of Pectoral sandpipers by simulating the movement patterns of an ensemble of 10,000 individual birds in response to changes in stopover locations as an indicator of the impacts of wetland loss and inter-annual variability on the fitness of migratory shorebirds. We used an individual based, biophysical migration model, driven by remotely sensed land surface data, climate data, and biological field data. Mean stop-over durations and stop-over frequency with latitude predicted from our model for nominal cases were consistent with results reported in the literature and available field data. In this study, we take advantage of new computing capabilities enabled by recent GP-GPU computing paradigms and commodity hardware (general purchase computing on graphics processing units). Several aspects of our individual based (agent modeling) approach lend themselves well to GP-GPU computing. We have been able to allocate compute-intensive tasks to the graphics processing units, and now simulate ensembles of 400,000 birds at varying spatial resolutions along the central North American flyway. We are incorporating additional, species specific, mechanistic processes to better reflect the processes underlying bird phenotypic plasticity responses to different climate change scenarios in the central U.S.
NASA Astrophysics Data System (ADS)
Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent
2003-10-01
In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discus a multi-agent surveillance system where 12 agent CPUs each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate"s visual system, which allows the animal to operate a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera view points or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighed based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in a real time tracking. In future work we plan on implementing additional biological mechanisms for cooperative management of both the sensor and processing resources in this system that include top down biasing for target specificity as well as novelty and the activity of the tracked object in relation to sensitive features of the environment.
Bravo, Rafael; Axelrod, David E
2013-11-18
Normal colon crypts consist of stem cells, proliferating cells, and differentiated cells. Abnormal rates of proliferation and differentiation can initiate colon cancer. We have measured the variation in the number of each of these cell types in multiple crypts in normal human biopsy specimens. This has provided the opportunity to produce a calibrated computational model that simulates cell dynamics in normal human crypts, and by changing model parameter values, to simulate the initiation and treatment of colon cancer. An agent-based model of stochastic cell dynamics in human colon crypts was developed in the multi-platform open-source application NetLogo. It was assumed that each cell's probability of proliferation and probability of death is determined by its position in two gradients along the crypt axis, a divide gradient and in a die gradient. A cell's type is not intrinsic, but rather is determined by its position in the divide gradient. Cell types are dynamic, plastic, and inter-convertible. Parameter values were determined for the shape of each of the gradients, and for a cell's response to the gradients. This was done by parameter sweeps that indicated the values that reproduced the measured number and variation of each cell type, and produced quasi-stationary stochastic dynamics. The behavior of the model was verified by its ability to reproduce the experimentally observed monocolonal conversion by neutral drift, the formation of adenomas resulting from mutations either at the top or bottom of the crypt, and by the robust ability of crypts to recover from perturbation by cytotoxic agents. One use of the virtual crypt model was demonstrated by evaluating different cancer chemotherapy and radiation scheduling protocols. A virtual crypt has been developed that simulates the quasi-stationary stochastic cell dynamics of normal human colon crypts. It is unique in that it has been calibrated with measurements of human biopsy specimens, and it can simulate the variation of cell types in addition to the average number of each cell type. The utility of the model was demonstrated with in silico experiments that evaluated cancer therapy protocols. The model is available for others to conduct additional experiments.
ERIC Educational Resources Information Center
Xiang, Lin
2011-01-01
This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on…
Hafnium-Based Contrast Agents for X-ray Computed Tomography.
Berger, Markus; Bauser, Marcus; Frenzel, Thomas; Hilger, Christoph Stephan; Jost, Gregor; Lauria, Silvia; Morgenstern, Bernd; Neis, Christian; Pietsch, Hubertus; Sülzle, Detlev; Hegetschweiler, Kaspar
2017-05-15
Heavy-metal-based contrast agents (CAs) offer enhanced X-ray absorption for X-ray computed tomography (CT) compared to the currently used iodinated CAs. We report the discovery of new lanthanide and hafnium azainositol complexes and their optimization with respect to high water solubility and stability. Our efforts culminated in the synthesis of BAY-576, an uncharged hafnium complex with 3:2 stoichiometry and broken complex symmetry. The superior properties of this asymmetrically substituted hafnium CA were demonstrated by a CT angiography study in rabbits that revealed excellent signal contrast enhancement.
Data-driven agent-based modeling, with application to rooftop solar adoption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Haifeng; Vorobeychik, Yevgeniy; Letchford, Joshua
Agent-based modeling is commonly used for studying complex system properties emergent from interactions among many agents. We present a novel data-driven agent-based modeling framework applied to forecasting individual and aggregate residential rooftop solar adoption in San Diego county. Our first step is to learn a model of individual agent behavior from combined data of individual adoption characteristics and property assessment. We then construct an agent-based simulation with the learned model embedded in artificial agents, and proceed to validate it using a holdout sequence of collective adoption decisions. We demonstrate that the resulting agent-based model successfully forecasts solar adoption trends andmore » provides a meaningful quantification of uncertainty about its predictions. We utilize our model to optimize two classes of policies aimed at spurring solar adoption: one that subsidizes the cost of adoption, and another that gives away free systems to low-income house- holds. We find that the optimal policies derived for the latter class are significantly more efficacious, whereas the policies similar to the current California Solar Initiative incentive scheme appear to have a limited impact on overall adoption trends.« less
Data-driven agent-based modeling, with application to rooftop solar adoption
Zhang, Haifeng; Vorobeychik, Yevgeniy; Letchford, Joshua; ...
2016-01-25
Agent-based modeling is commonly used for studying complex system properties emergent from interactions among many agents. We present a novel data-driven agent-based modeling framework applied to forecasting individual and aggregate residential rooftop solar adoption in San Diego county. Our first step is to learn a model of individual agent behavior from combined data of individual adoption characteristics and property assessment. We then construct an agent-based simulation with the learned model embedded in artificial agents, and proceed to validate it using a holdout sequence of collective adoption decisions. We demonstrate that the resulting agent-based model successfully forecasts solar adoption trends andmore » provides a meaningful quantification of uncertainty about its predictions. We utilize our model to optimize two classes of policies aimed at spurring solar adoption: one that subsidizes the cost of adoption, and another that gives away free systems to low-income house- holds. We find that the optimal policies derived for the latter class are significantly more efficacious, whereas the policies similar to the current California Solar Initiative incentive scheme appear to have a limited impact on overall adoption trends.« less
NASA Astrophysics Data System (ADS)
Ekdahl, Bertil
2002-09-01
Of main concern in agent based computing is the conception that software agents can attain socially responsible behavior. This idea has its origin in the need for agents to interact with one another in a cooperating manner. Such interplay between several agents can be seen as a combinatorial play where the rules are fixed and the actors are supposed to closely analyze the play in order to behave rational. This kind of rationality has successfully being mathematically described. When the social behavior is extended beyond rational behavior, mere mathematical analysis falls short. For such behavior language is decisive for transferring concepts and language is a holistic entity that cannot be analyzed and defined mathematically. Accordingly, computers cannot be furnished with a language in the sense that meaning can be conveyed and consequently they lack all the necessary properties to be made social. The attempts to postulate mental properties to computer programs are a misconception that is blamed the lack of true understanding of language and especially the relation between formal system and its semantics.
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
2011-01-01
Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C
System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate themore » importance of modeling decisions that are often overlooked.« less
Mechanical positioning of multiple nuclei in muscle cells.
Manhart, Angelika; Windner, Stefanie; Baylies, Mary; Mogilner, Alex
2018-06-01
Many types of large cells have multiple nuclei. In skeletal muscle fibers, the nuclei are distributed along the cell to maximize their internuclear distances. This myonuclear positioning is crucial for cell function. Although microtubules, microtubule associated proteins, and motors have been implicated, mechanisms responsible for myonuclear positioning remain unclear. We used a combination of rough interacting particle and detailed agent-based modeling to examine computationally the hypothesis that a force balance generated by microtubules positions the muscle nuclei. Rather than assuming the nature and identity of the forces, we simulated various types of forces between the pairs of nuclei and between the nuclei and cell boundary to position the myonuclei according to the laws of mechanics. We started with a large number of potential interacting particle models and computationally screened these models for their ability to fit biological data on nuclear positions in hundreds of Drosophila larval muscle cells. This reverse engineering approach resulted in a small number of feasible models, the one with the best fit suggests that the nuclei repel each other and the cell boundary with forces that decrease with distance. The model makes nontrivial predictions about the increased nuclear density near the cell poles, the zigzag patterns of the nuclear positions in wider cells, and about correlations between the cell width and elongated nuclear shapes, all of which we confirm by image analysis of the biological data. We support the predictions of the interacting particle model with simulations of an agent-based mechanical model. Taken together, our data suggest that microtubules growing from nuclear envelopes push on the neighboring nuclei and the cell boundaries, which is sufficient to establish the nearly-uniform nuclear spreading observed in muscle fibers.
Applications of agent-based modeling to nutrient movement Lake Michigan
As part of an ongoing project aiming to provide useful information for nearshore management (harmful algal blooms, nutrient loading), we explore the value of agent-based models in Lake Michigan. Agent-based models follow many individual “agents” moving through a simul...
A Buyer Behaviour Framework for the Development and Design of Software Agents in E-Commerce.
ERIC Educational Resources Information Center
Sproule, Susan; Archer, Norm
2000-01-01
Software agents are computer programs that run in the background and perform tasks autonomously as delegated by the user. This paper blends models from marketing research and findings from the field of decision support systems to build a framework for the design of software agents to support in e-commerce buying applications. (Contains 35…
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are exemplary examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ-levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicitly General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, and important clinical entity that affects patients following remedial surgery for ulcerative colitis.
Nanoparticle Contrast Agents for Computed Tomography: A Focus on Micelles
Cormode, David P.; Naha, Pratap C.; Fayad, Zahi A.
2014-01-01
Computed tomography (CT) is an X-ray based whole body imaging technique that is widely used in medicine. Clinically approved contrast agents for CT are iodinated small molecules or barium suspensions. Over the past seven years there has been a great increase in the development of nanoparticles as CT contrast agents. Nanoparticles have several advantages over small molecule CT contrast agents, such as long blood-pool residence times, and the potential for cell tracking and targeted imaging applications. Furthermore, there is a need for novel CT contrast agents, due to the growing population of renally impaired patients and patients hypersensitive to iodinated contrast. Micelles and lipoproteins, a micelle-related class of nanoparticle, have notably been adapted as CT contrast agents. In this review we discuss the principles of CT image formation and the generation of CT contrast. We discuss the progress in developing non-targeted, targeted and cell tracking nanoparticle CT contrast agents. We feature agents based on micelles and used in conjunction with spectral CT. The large contrast agent doses needed will necessitate careful toxicology studies prior to clinical translation. However, the field has seen tremendous advances in the past decade and we expect many more advances to come in the next decade. PMID:24470293
Evolution of cooperative strategies from first principles.
Burtsev, Mikhail; Turchin, Peter
2006-04-20
One of the greatest challenges in the modern biological and social sciences is to understand the evolution of cooperative behaviour. General outlines of the answer to this puzzle are currently emerging as a result of developments in the theories of kin selection, reciprocity, multilevel selection and cultural group selection. The main conceptual tool used in probing the logical coherence of proposed explanations has been game theory, including both analytical models and agent-based simulations. The game-theoretic approach yields clear-cut results but assumes, as a rule, a simple structure of payoffs and a small set of possible strategies. Here we propose a more stringent test of the theory by developing a computer model with a considerably extended spectrum of possible strategies. In our model, agents are endowed with a limited set of receptors, a set of elementary actions and a neural net in between. Behavioural strategies are not predetermined; instead, the process of evolution constructs and reconstructs them from elementary actions. Two new strategies of cooperative attack and defence emerge in simulations, as well as the well-known dove, hawk and bourgeois strategies. Our results indicate that cooperative strategies can evolve even under such minimalist assumptions, provided that agents are capable of perceiving heritable external markers of other agents.
Demeter, persephone, and the search for emergence in agent-based models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Howe, T. R.; Collier, N. T.
2006-01-01
In Greek mythology, the earth goddess Demeter was unable to find her daughter Persephone after Persephone was abducted by Hades, the god of the underworld. Demeter is said to have embarked on a long and frustrating, but ultimately successful, search to find her daughter. Unfortunately, long and frustrating searches are not confined to Greek mythology. In modern times, agent-based modelers often face similar troubles when searching for agents that are to be to be connected to one another and when seeking appropriate target agents while defining agent behaviors. The result is a 'search for emergence' in that many emergent ormore » potentially emergent behaviors in agent-based models of complex adaptive systems either implicitly or explicitly require search functions. This paper considers a new nested querying approach to simplifying such agent-based modeling and multi-agent simulation search problems.« less
Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)
2008-03-01
4. REPAST (Java, Python , C#, Open Source) ........28 5. MASON: Multi-Agent Modeling Language (Swarm Extension... Python , C#, Open Source) Repast (Recursive Porous Agent Simulation Toolkit) was designed for building agent-based models and simulations in the...Repast makes it easy for inexperienced users to build models by including a built-in simple model and provide interfaces through which menus and Python
Radiation protective effects of baclofen predicted by a computational drug repurposing strategy.
Ren, Lei; Xie, Dafei; Li, Peng; Qu, Xinyan; Zhang, Xiujuan; Xing, Yaling; Zhou, Pingkun; Bo, Xiaochen; Zhou, Zhe; Wang, Shengqi
2016-11-01
Exposure to ionizing radiation causes damage to living tissues; however, only a small number of agents have been approved for use in radiation injuries. Radioprotector is the primary countermeasure to radiation injury and none radioprotector has indeed reached the drug development stage. Repurposing the long list of approved, non-radioprotective drugs is an attractive strategy to find new radioprotective agents. Here, we applied a computational approach to discover new radioprotectors in silico by comparing publicly available gene expression data of ionizing radiation-treated samples from the Gene Expression Omnibus (GEO) database with gene expression signatures of more than 1309 small-molecule compounds from the Connectivity Map (cmap) dataset. Among the best compounds predicted to be therapeutic for ionizing radiation damage by this approach were some previously reported radioprotectors and baclofen (P<0.01), a chemical that was not previously used as radioprotector. Validation using a cell-based model and a rodent in vivo model demonstrated that treatment with baclofen reduced radiation-induced cytotoxicity in vitro (P<0.01), attenuated bone marrow damage and increased survival in vivo (P<0.05). These findings suggest that baclofen might serve as a radioprotector. The drug repurposing strategy by connecting the GEO data and cmap can be used to identify known drugs as potential radioprotective agents. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Computational Model and Multi-Agent Simulation for Information Assurance
2002-06-01
Podell , Information Security: an Integrated Collection of Essays, IEEE Computer Society Press, Los Alamitos, CA, 1994. Brinkley, D. L. and Schell, R...R., “What is There to Worry About? An Introduction to the Computer Security Problem,” ed. Abrams and Jajodia and Podell , Information Security: an
2017-03-01
determine the optimum required operational capability of the unmanned aerial vehicles to support Korean rear area operations. We use Map Aware Non ...area operations. Through further experimentations and analyses, we were able to find the optimum characteristics of an improved unmanned aerial...operations. We use Map Aware Non -Uniform Automata, an agent-based simulation software platform for computational experiments. The study models a scenario
NASA Astrophysics Data System (ADS)
Zhu, Hou; Hu, Bin
2017-03-01
Human flesh search as a new net crowed behavior, on the one hand can help us to find some special information, on the other hand may lead to privacy leaking and offending human right. In order to study the mechanism of human flesh search, this paper proposes a simulation model based on agent-based model and complex networks. The computational experiments show some useful results. Discovered information quantity and involved personal ratio are highly correlated, and most of net citizens will take part in the human flesh search or will not take part in the human flesh search. Knowledge quantity does not influence involved personal ratio, but influences whether HFS can find out the target human. When the knowledge concentrates on hub nodes, the discovered information quantity is either perfect or almost zero. Emotion of net citizens influences both discovered information quantity and involved personal ratio. Concretely, when net citizens are calm to face the search topic, it will be hardly to find out the target; But when net citizens are agitated, the target will be found out easily.
Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer.
Hu, Yujing; Gao, Yang; An, Bo
2015-07-01
An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions. Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mondy, Lisa Ann; Rao, Rekha Ranjana; Shelden, Bion
We are developing computational models to elucidate the expansion and dynamic filling process of a polyurethane foam, PMDI. The polyurethane of interest is chemically blown, where carbon dioxide is produced via the reaction of water, the blowing agent, and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. Here we detail the experiments needed to populate a processing model and provide parameters for the model based on these experiments. The model entails solving the conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions,more » following a simplified mathematical formalism that decouples these two reactions. Parameters for the polymerization kinetics model are reported based on infrared spectrophotometry. Parameters describing the gas generating reaction are reported based on measurements of volume, temperature and pressure evolution with time. A foam rheology model is proposed and parameters determined through steady-shear and oscillatory tests. Heat of reaction and heat capacity are determined through differential scanning calorimetry. Thermal conductivity of the foam as a function of density is measured using a transient method based on the theory of the transient plane source technique. Finally, density variations of the resulting solid foam in several simple geometries are directly measured by sectioning and sampling mass, as well as through x-ray computed tomography. These density measurements will be useful for model validation once the complete model is implemented in an engineering code.« less
Exploring the Use of Computer Simulations in Unraveling Research and Development Governance Problems
NASA Technical Reports Server (NTRS)
Balaban, Mariusz A.; Hester, Patrick T.
2012-01-01
Understanding Research and Development (R&D) enterprise relationships and processes at a governance level is not a simple task, but valuable decision-making insight and evaluation capabilities can be gained from their exploration through computer simulations. This paper discusses current Modeling and Simulation (M&S) methods, addressing their applicability to R&D enterprise governance. Specifically, the authors analyze advantages and disadvantages of the four methodologies used most often by M&S practitioners: System Dynamics (SO), Discrete Event Simulation (DES), Agent Based Modeling (ABM), and formal Analytic Methods (AM) for modeling systems at the governance level. Moreover, the paper describes nesting models using a multi-method approach. Guidance is provided to those seeking to employ modeling techniques in an R&D enterprise for the purposes of understanding enterprise governance. Further, an example is modeled and explored for potential insight. The paper concludes with recommendations regarding opportunities for concentration of future work in modeling and simulating R&D governance relationships and processes.
Modeling mechanical inhomogeneities in small populations of proliferating monolayers and spheroids.
Lejeune, Emma; Linder, Christian
2018-06-01
Understanding the mechanical behavior of multicellular monolayers and spheroids is fundamental to tissue culture, organism development, and the early stages of tumor growth. Proliferating cells in monolayers and spheroids experience mechanical forces as they grow and divide and local inhomogeneities in the mechanical microenvironment can cause individual cells within the multicellular system to grow and divide at different rates. This differential growth, combined with cell division and reorganization, leads to residual stress. Multiple different modeling approaches have been taken to understand and predict the residual stresses that arise in growing multicellular systems, particularly tumor spheroids. Here, we show that by using a mechanically robust agent-based model constructed with the peridynamic framework, we gain a better understanding of residual stresses in multicellular systems as they grow from a single cell. In particular, we focus on small populations of cells (1-100 s) where population behavior is highly stochastic and prior investigation has been limited. We compare the average strain energy density of cells in monolayers and spheroids using different growth and division rules and find that, on average, cells in spheroids have a higher strain energy density than cells in monolayers. We also find that cells in the interior of a growing spheroid are, on average, in compression. Finally, we demonstrate the importance of accounting for stochastic fluctuations in the mechanical environment, particularly when the cellular response to mechanical cues is nonlinear. The results presented here serve as a starting point for both further investigation with agent-based models, and for the incorporation of major findings from agent-based models into continuum scale models when explicit representation of individual cells is not computationally feasible.
2018-01-01
An agent-based computer model that builds representative regional U.S. hog production networks was developed and employed to assess the potential impact of the ongoing trend towards increased producer specialization upon network-level resilience to catastrophic disease outbreaks. Empirical analyses suggest that the spatial distribution and connectivity patterns of contact networks often predict epidemic spreading dynamics. Our model heuristically generates realistic systems composed of hog producer, feed mill, and slaughter plant agents. Network edges are added during each run as agents exchange livestock and feed. The heuristics governing agents’ contact patterns account for factors including their industry roles, physical proximities, and the age of their livestock. In each run, an infection is introduced, and may spread according to probabilities associated with the various modes of contact. For each of three treatments—defined by one-phase, two-phase, and three-phase production systems—a parameter variation experiment examines the impact of the spatial density of producer agents in the system upon the length and size of disease outbreaks. Resulting data show phase transitions whereby, above some density threshold, systemic outbreaks become possible, echoing findings from percolation theory. Data analysis reveals that multi-phase production systems are vulnerable to catastrophic outbreaks at lower spatial densities, have more abrupt percolation transitions, and are characterized by less-predictable outbreak scales and durations. Key differences in network-level metrics shed light on these results, suggesting that the absence of potentially-bridging producer–producer edges may be largely responsible for the superior disease resilience of single-phase “farrow to finish” production systems. PMID:29522574
Model Checking Degrees of Belief in a System of Agents
NASA Technical Reports Server (NTRS)
Raimondi, Franco; Primero, Giuseppe; Rungta, Neha
2014-01-01
Reasoning about degrees of belief has been investigated in the past by a number of authors and has a number of practical applications in real life. In this paper we present a unified framework to model and verify degrees of belief in a system of agents. In particular, we describe an extension of the temporal-epistemic logic CTLK and we introduce a semantics based on interpreted systems for this extension. In this way, degrees of beliefs do not need to be provided externally, but can be derived automatically from the possible executions of the system, thereby providing a computationally grounded formalism. We leverage the semantics to (a) construct a model checking algorithm, (b) investigate its complexity, (c) provide a Java implementation of the model checking algorithm, and (d) evaluate our approach using the standard benchmark of the dining cryptographers. Finally, we provide a detailed case study: using our framework and our implementation, we assess and verify the situational awareness of the pilot of Air France 447 flying in off-nominal conditions.
Multi-agent fare optimization model of two modes problem and its analysis based on edge of chaos
NASA Astrophysics Data System (ADS)
Li, Xue-yan; Li, Xue-mei; Li, Xue-wei; Qiu, He-ting
2017-03-01
This paper proposes a new framework of fare optimization & game model for studying the competition between two travel modes (high speed railway and civil aviation) in which passengers' group behavior is taken into consideration. The small-world network is introduced to construct the multi-agent model of passengers' travel mode choice. The cumulative prospect theory is adopted to depict passengers' bounded rationality, the heterogeneity of passengers' reference point is depicted using the idea of group emotion computing. The conceptions of "Langton parameter" and "evolution entropy" in the theory of "edge of chaos" are introduced to create passengers' "decision coefficient" and "evolution entropy of travel mode choice" which are used to quantify passengers' group behavior. The numerical simulation and the analysis of passengers' behavior show that (1) the new model inherits the features of traditional model well and the idea of self-organizing traffic flow evolution fully embodies passengers' bounded rationality, (2) compared with the traditional model (logit model), when passengers are in the "edge of chaos" state, the total profit of the transportation system is higher.
Exploring social structure effect on language evolution based on a computational model
NASA Astrophysics Data System (ADS)
Gong, Tao; Minett, James; Wang, William
2008-06-01
A compositionality-regularity coevolution model is adopted to explore the effect of social structure on language emergence and maintenance. Based on this model, we explore language evolution in three experiments, and discuss the role of a popular agent in language evolution, the relationship between mutual understanding and social hierarchy, and the effect of inter-community communications and that of simple linguistic features on convergence of communal languages in two communities. This work embodies several important interactions during social learning, and introduces a new approach that manipulates individuals' probabilities to participate in social interactions to study the effect of social structure. We hope it will stimulate further theoretical and empirical explorations on language evolution in a social environment.
A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission
Parker, Jon; Epstein, Joshua M.
2013-01-01
The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java specific. However, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability. PMID:24465120
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul M. Torrens; Atsushi Nara; Xun Li
2012-01-01
Human movement is a significant ingredient of many social, environmental, and technical systems, yet the importance of movement is often discounted in considering systems complexity. Movement is commonly abstracted in agent-based modeling (which is perhaps the methodological vehicle for modeling complex systems), despite the influence of movement upon information exchange and adaptation in a system. In particular, agent-based models of urban pedestrians often treat movement in proxy form at the expense of faithfully treating movement behavior with realistic agency. There exists little consensus about which method is appropriate for representing movement in agent-based schemes. In this paper, we examine popularly-usedmore » methods to drive movement in agent-based models, first by introducing a methodology that can flexibly handle many representations of movement at many different scales and second, introducing a suite of tools to benchmark agent movement between models and against real-world trajectory data. We find that most popular movement schemes do a relatively poor job of representing movement, but that some schemes may well be 'good enough' for some applications. We also discuss potential avenues for improving the representation of movement in agent-based frameworks.« less
Undecidability in macroeconomics
NASA Technical Reports Server (NTRS)
Chandra, Siddharth; Chandra, Tushar Deepak
1993-01-01
In this paper we study the difficulty of solving problems in economics. For this purpose, we adopt the notion of undecidability from recursion theory. We show that certain problems in economics are undecidable, i.e., cannot be solved by a Turing Machine, a device that is at least as powerful as any computational device that can be constructed. In particular, we prove that even in finite closed economies subject to a variable initial condition, in which a social planner knows the behavior of every agent in the economy, certain important social planning problems are undecidable. Thus, it may be impossible to make effective policy decisions. Philosophically, this result formally brings into question the Rational Expectations Hypothesis which assumes that each agent is able to determine what it should do if it wishes to maximize its utility. We show that even when an optimal rational forecast exists for each agency (based on the information currently available to it), agents may lack the ability to make these forecasts. For example, Lucas describes economic models as 'mechanical, artificial world(s), populated by ... interacting robots'. Since any mechanical robot can be at most as computationally powerful as a Turing Machine, such economies are vulnerable to the phenomenon of undecidability.
Emotions are emergent processes: they require a dynamic computational architecture
Scherer, Klaus R.
2009-01-01
Emotion is a cultural and psychobiological adaptation mechanism which allows each individual to react flexibly and dynamically to environmental contingencies. From this claim flows a description of the elements theoretically needed to construct a virtual agent with the ability to display human-like emotions and to respond appropriately to human emotional expression. This article offers a brief survey of the desirable features of emotion theories that make them ideal blueprints for agent models. In particular, the component process model of emotion is described, a theory which postulates emotion-antecedent appraisal on different levels of processing that drive response system patterning predictions. In conclusion, investing seriously in emergent computational modelling of emotion using a nonlinear dynamic systems approach is suggested. PMID:19884141
Advances in Enterprise Control. AEC Proceedings, November 15-16, 1999/San Diego, California
1999-11-16
structure of tropical termite mounds. 2.3. Mechanisms that each agent deposits one unit of pheromone per unit time and that a proportion 0 < E < 1 of the...Stubberud 137 Section 4: Distributed and Agent-Based Strategies 149 Synthetic Pheromones for Distributed Motion Control by H. Van Dyke Parunak and...Computer Engineering, June, 1994. 147 148 Section 4 Distributed and Agent-Based Strategies 149 150 Synthetic Pheromones for Distributed Motion
Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle
Barriuso, Alberto L.; De Paz, Juan F.; Lozano, Álvaro
2018-01-01
Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the necessity to obtain a technological tool that faces this state of the art limitation. As novelty, this work presents a multi-agent architecture based on virtual organizations which allows to deploy a new embedded agent model in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed, where parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment in which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock have been developed. PMID:29301310
Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle.
Barriuso, Alberto L; Villarrubia González, Gabriel; De Paz, Juan F; Lozano, Álvaro; Bajo, Javier
2018-01-02
Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the necessity to obtain a technological tool that faces this state of the art limitation. As novelty, this work presents a multi-agent architecture based on virtual organizations which allows to deploy a new embedded agent model in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed, where parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment in which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock have been developed.
Development of Aspen: A microanalytic simulation model of the US economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, R.J.; Basu, N.; Quint, T.
1996-02-01
This report describes the development of an agent-based microanalytic simulation model of the US economy. The microsimulation model capitalizes on recent technological advances in evolutionary learning and parallel computing. Results are reported for a test problem that was run using the model. The test results demonstrate the model`s ability to predict business-like cycles in an economy where prices and inventories are allowed to vary. Since most economic forecasting models have difficulty predicting any kind of cyclic behavior. These results show the potential of microanalytic simulation models to improve economic policy analysis and to provide new insights into underlying economic principles.more » Work already has begun on a more detailed model.« less
The ultimatum game: Discrete vs. continuous offers
NASA Astrophysics Data System (ADS)
Dishon-Berkovits, Miriam; Berkovits, Richard
2014-09-01
In many experimental setups in social-sciences, psychology and economy the subjects are requested to accept or dispense monetary compensation which is usually given in discrete units. Using computer and mathematical modeling we show that in the framework of studying the dynamics of acceptance of proposals in the ultimatum game, the long time dynamics of acceptance of offers in the game are completely different for discrete vs. continuous offers. For discrete values the dynamics follow an exponential behavior. However, for continuous offers the dynamics are described by a power-law. This is shown using an agent based computer simulation as well as by utilizing an analytical solution of a mean-field equation describing the model. These findings have implications to the design and interpretation of socio-economical experiments beyond the ultimatum game.
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is the important direction of pedestrian behaviors because our visual conception in space is the straight method to get environment information and navigate your actions. Based on the agent modeling and up-tobottom method, the paper develop the framework about the analysis of the pedestrian flow depended on visibility. We use viewshed in visibility analysis and impose the parameters on agent simulation to direct their motion in urban space. We analyze the pedestrian behaviors in micro-scale and macro-scale of urban open space. The individual agent use visual affordance to determine his direction of motion in micro-scale urban street on district. And we compare the distribution of pedestrian flow with configuration in macro-scale urban environment, and mine the relationship between the pedestrian flow and distribution of urban facilities and urban function. The paper first computes the visibility situations at the vantage point in urban open space, such as street network, quantify the visibility parameters. The multiple agents use visibility parameters to decide their direction of motion, and finally pedestrian flow reach to a stable state in urban environment through the simulation of multiple agent system. The paper compare the morphology of visibility parameters and pedestrian distribution with urban function and facilities layout to confirm the consistence between them, which can be used to make decision support in urban design.
Physics and financial economics (1776-2014): puzzles, Ising and agent-based models.
Sornette, Didier
2014-06-01
This short review presents a selected history of the mutual fertilization between physics and economics--from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. We formulate the 'Emerging Intelligence Market Hypothesis' to reconcile the pervasive presence of 'noise traders' with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets.
Physics and financial economics (1776-2014): puzzles, Ising and agent-based models
NASA Astrophysics Data System (ADS)
Sornette, Didier
2014-06-01
This short review presents a selected history of the mutual fertilization between physics and economics—from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. We formulate the ‘Emerging Intelligence Market Hypothesis’ to reconcile the pervasive presence of ‘noise traders’ with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets.
Framework of distributed coupled atmosphere-ocean-wave modeling system
NASA Astrophysics Data System (ADS)
Wen, Yuanqiao; Huang, Liwen; Deng, Jian; Zhang, Jinfeng; Wang, Sisi; Wang, Lijun
2006-05-01
In order to research the interactions between the atmosphere and ocean as well as their important role in the intensive weather systems of coastal areas, and to improve the forecasting ability of the hazardous weather processes of coastal areas, a coupled atmosphere-ocean-wave modeling system has been developed. The agent-based environment framework for linking models allows flexible and dynamic information exchange between models. For the purpose of flexibility, portability and scalability, the framework of the whole system takes a multi-layer architecture that includes a user interface layer, computational layer and service-enabling layer. The numerical experiment presented in this paper demonstrates the performance of the distributed coupled modeling system.
A computational neural model of goal-directed utterance selection.
Klein, Michael; Kamp, Hans; Palm, Guenther; Doya, Kenji
2010-06-01
It is generally agreed that much of human communication is motivated by extra-linguistic goals: we often make utterances in order to get others to do something, or to make them support our cause, or adopt our point of view, etc. However, thus far a computational foundation for this view on language use has been lacking. In this paper we propose such a foundation using Markov Decision Processes. We borrow computational components from the field of action selection and motor control, where a neurobiological basis of these components has been established. In particular, we make use of internal models (i.e., next-state transition functions defined on current state action pairs). The internal model is coupled with reinforcement learning of a value function that is used to assess the desirability of any state that utterances (as well as certain non-verbal actions) can bring about. This cognitive architecture is tested in a number of multi-agent game simulations. In these computational experiments an agent learns to predict the context-dependent effects of utterances by interacting with other agents that are already competent speakers. We show that the cognitive architecture can account for acquiring the capability of deciding when to speak in order to achieve a certain goal (instead of performing a non-verbal action or simply doing nothing), whom to address and what to say. Copyright 2010 Elsevier Ltd. All rights reserved.
Teachable Agents and the Protege Effect: Increasing the Effort towards Learning
ERIC Educational Resources Information Center
Chase, Catherine C.; Chin, Doris B.; Oppezzo, Marily A.; Schwartz, Daniel L.
2009-01-01
Betty's Brain is a computer-based learning environment that capitalizes on the social aspects of learning. In Betty's Brain, students instruct a character called a Teachable Agent (TA) which can reason based on how it is taught. Two studies demonstrate the "protege effect": students make greater effort to learn for their TAs than they do…
Neural computations underlying inverse reinforcement learning in the human brain
Pauli, Wolfgang M; Bossaerts, Peter; O'Doherty, John
2017-01-01
In inverse reinforcement learning an observer infers the reward distribution available for actions in the environment solely through observing the actions implemented by another agent. To address whether this computational process is implemented in the human brain, participants underwent fMRI while learning about slot machines yielding hidden preferred and non-preferred food outcomes with varying probabilities, through observing the repeated slot choices of agents with similar and dissimilar food preferences. Using formal model comparison, we found that participants implemented inverse RL as opposed to a simple imitation strategy, in which the actions of the other agent are copied instead of inferring the underlying reward structure of the decision problem. Our computational fMRI analysis revealed that anterior dorsomedial prefrontal cortex encoded inferences about action-values within the value space of the agent as opposed to that of the observer, demonstrating that inverse RL is an abstract cognitive process divorceable from the values and concerns of the observer him/herself. PMID:29083301
Agent-based user-adaptive service provision in ubiquitous systems
NASA Astrophysics Data System (ADS)
Saddiki, H.; Harroud, H.; Karmouch, A.
2012-11-01
With the increasing availability of smartphones, tablets and other computing devices, technology consumers have grown accustomed to performing all of their computing tasks anytime, anywhere and on any device. There is a greater need to support ubiquitous connectivity and accommodate users by providing software as network-accessible services. In this paper, we propose a MAS-based approach to adaptive service composition and provision that automates the selection and execution of a suitable composition plan for a given service. With agents capable of autonomous and intelligent behavior, the composition plan is selected in a dynamic negotiation driven by a utility-based decision-making mechanism; and the composite service is built by a coalition of agents each providing a component necessary to the target service. The same service can be built in variations for catering to dynamic user contexts and further personalizing the user experience. Also multiple services can be grouped to satisfy new user needs.
The practice of agent-based model visualization.
Dorin, Alan; Geard, Nicholas
2014-01-01
We discuss approaches to agent-based model visualization. Agent-based modeling has its own requirements for visualization, some shared with other forms of simulation software, and some unique to this approach. In particular, agent-based models are typified by complexity, dynamism, nonequilibrium and transient behavior, heterogeneity, and a researcher's interest in both individual- and aggregate-level behavior. These are all traits requiring careful consideration in the design, experimentation, and communication of results. In the case of all but final communication for dissemination, researchers may not make their visualizations public. Hence, the knowledge of how to visualize during these earlier stages is unavailable to the research community in a readily accessible form. Here we explore means by which all phases of agent-based modeling can benefit from visualization, and we provide examples from the available literature and online sources to illustrate key stages and techniques.
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.
2011-01-01
Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a network based on the DTN (Delay-Tolerant Networking) architecture. It is designed to enable dynamic selection of data transmission routes in a space network based on DTN. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk. The basic strategy of CGR is to take advantage of the fact that, since flight mission communication operations are planned in detail, the communication routes between any pair of bundle agents in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long one-way-light-time space links). Messages that convey this planning information are used to construct contact graphs (time-varying models of network connectivity) from which CGR automatically computes efficient routes for bundles. Automatic route selection increases the flexibility and resilience of the space network, simplifying cross-support and reducing mission management costs. Note that there are no routing tables in Contact Graph Routing. The best route for a bundle destined for a given node may routinely be different from the best route for a different bundle destined for the same node, depending on bundle priority, bundle expiration time, and changes in the current lengths of transmission queues for neighboring nodes; routes must be computed individually for each bundle, from the Bundle Protocol agent's current network connectivity model for the bundle s destination node (the contact graph). Clearly this places a premium on optimizing the implementation of the route computation algorithm. The scalability of CGR to very large networks remains a research topic. The information carried by CGR contact plan messages is useful not only for dynamic route computation, but also for the implementation of rate control, congestion forecasting, transmission episode initiation and termination, timeout interval computation, and retransmission timer suspension and resumption.
Validation techniques of agent based modelling for geospatial simulations
NASA Astrophysics Data System (ADS)
Darvishi, M.; Ahmadi, G.
2014-10-01
One of the most interesting aspects of modelling and simulation study is to describe the real world phenomena that have specific properties; especially those that are in large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases it is impossible. Therefore, Miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understand the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising of multiple interacting agent. They have been used in the different areas; for instance, geographic information system (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc) for geospatial modelling is an indication of the growing interest of users to use of special capabilities of ABMS. Since ABMS is inherently similar to human cognition, therefore it could be built easily and applicable to wide range applications than a traditional simulation. But a key challenge about ABMS is difficulty in their validation and verification. Because of frequent emergence patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, attempt to find appropriate validation techniques for ABM seems to be necessary. In this paper, after reviewing on Principles and Concepts of ABM for and its applications, the validation techniques and challenges of ABM validation are discussed.
Scott, Nick; Livingston, Michael; Reporter, Iyanoosh; Dietze, Paul
2017-06-01
Many variations of venue lockout and last-drink policies have been introduced in attempts to reduce drinking-related harms. We estimate the public health gains and licensee costs of these policies using a computer simulated population of young adults engaging in heavy drinking. Using an agent-based model we implemented 1 am/2 am/3 am venue lockouts in conjunction with last drinks zero/one/two hours later, or at current closing times. Outcomes included: the number of incidents of verbal aggression in public drinking venues, private venues or on the street; and changed revenue to public venues. The most effective policy in reducing verbal aggression among agents was 1 am lockouts with current closing times. All policies produced substantial reductions in street-based incidents of verbal aggression among agents (33-81%) due to the smoothing of transport demand. Direct revenue losses were 1-9% for simulated licensees, with later lockout times and longer periods between lockout and last drinks producing smaller revenue losses. Simulation models are useful for exploring consequences of policy change. Our simulation suggests that additional hours between lockout and last drinks could reduce aggression by easing transport demand, while minimising revenue loss to venue owners. Implications for public health: Direct policies to reduce late-night transport-related disputes should be considered. © 2017 The Authors.
A Cybernetic Approach to the Modeling of Agent Communities
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Karlin, Jay
2000-01-01
In an earlier paper [1] examples of agent technology in a NASA context were presented. Both groundbased and space-based applications were addressed. This paper continues the discussion of one aspect of the Goddard Space Flight Center's continuing efforts to develop a community of agents that can support both ground-based and space-based systems autonomy. The paper focuses on an approach to agent-community modeling based on the theory of viable systems developed by Stafford Beer. It gives the status of an initial attempt to capture some of the agent-community behaviors in a viable system context. This paper is expository in nature and focuses on a discussion of the modeling of some of the underlying concepts and infrastructure that will serve as the basis of more detailed investigative work into the behavior of agent communities. The paper is organized as follows. First, a general introduction to agent community requirements is presented. Secondly, a brief introduction to the cybernetic concept of a viable system is given. This concept forms the foundation of the modeling approach. Then the concept of an agent community is modeled in the cybernetic context.
Reinforcement Learning in a Nonstationary Environment: The El Farol Problem
NASA Technical Reports Server (NTRS)
Bell, Ann Maria
1999-01-01
This paper examines the performance of simple learning rules in a complex adaptive system based on a coordination problem modeled on the El Farol problem. The key features of the El Farol problem are that it typically involves a medium number of agents and that agents' pay-off functions have a discontinuous response to increased congestion. First we consider a single adaptive agent facing a stationary environment. We demonstrate that the simple learning rules proposed by Roth and Er'ev can be extremely sensitive to small changes in the initial conditions and that events early in a simulation can affect the performance of the rule over a relatively long time horizon. In contrast, a reinforcement learning rule based on standard practice in the computer science literature converges rapidly and robustly. The situation is reversed when multiple adaptive agents interact: the RE algorithms often converge rapidly to a stable average aggregate attendance despite the slow and erratic behavior of individual learners, while the CS based learners frequently over-attend in the early and intermediate terms. The symmetric mixed strategy equilibria is unstable: all three learning rules ultimately tend towards pure strategies or stabilize in the medium term at non-equilibrium probabilities of attendance. The brittleness of the algorithms in different contexts emphasize the importance of thorough and thoughtful examination of simulation-based results.
Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling
Aguilera, Miguel; Bedia, Manuel G.; Barandiaran, Xabier E.
2016-01-01
The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioral metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioral preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioral flexibility with an equivalent model from the point of view of “internalist neuroscience.” A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioral patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling with the world. We conclude with a reflection about how our results contribute in a more general way to current progress in neuroscientific research. PMID:27721746
Extended Neural Metastability in an Embodied Model of Sensorimotor Coupling.
Aguilera, Miguel; Bedia, Manuel G; Barandiaran, Xabier E
2016-01-01
The hypothesis that brain organization is based on mechanisms of metastable synchronization in neural assemblies has been popularized during the last decades of neuroscientific research. Nevertheless, the role of body and environment for understanding the functioning of metastable assemblies is frequently dismissed. The main goal of this paper is to investigate the contribution of sensorimotor coupling to neural and behavioral metastability using a minimal computational model of plastic neural ensembles embedded in a robotic agent in a behavioral preference task. Our hypothesis is that, under some conditions, the metastability of the system is not restricted to the brain but extends to the system composed by the interaction of brain, body and environment. We test this idea, comparing an agent in continuous interaction with its environment in a task demanding behavioral flexibility with an equivalent model from the point of view of "internalist neuroscience." A statistical characterization of our model and tools from information theory allow us to show how (1) the bidirectional coupling between agent and environment brings the system closer to a regime of criticality and triggers the emergence of additional metastable states which are not found in the brain in isolation but extended to the whole system of sensorimotor interaction, (2) the synaptic plasticity of the agent is fundamental to sustain open structures in the neural controller of the agent flexibly engaging and disengaging different behavioral patterns that sustain sensorimotor metastable states, and (3) these extended metastable states emerge when the agent generates an asymmetrical circular loop of causal interaction with its environment, in which the agent responds to variability of the environment at fast timescales while acting over the environment at slow timescales, suggesting the constitution of the agent as an autonomous entity actively modulating its sensorimotor coupling with the world. We conclude with a reflection about how our results contribute in a more general way to current progress in neuroscientific research.
Agent-based model for rural-urban migration: A dynamic consideration
NASA Astrophysics Data System (ADS)
Cai, Ning; Ma, Hai-Ying; Khan, M. Junaid
2015-10-01
This paper develops a dynamic agent-based model for rural-urban migration, based on the previous relevant works. The model conforms to the typical dynamic linear multi-agent systems model concerned extensively in systems science, in which the communication network is formulated as a digraph. Simulations reveal that consensus of certain variable could be harmful to the overall stability and should be avoided.
Toward Agent Programs with Circuit Semantics
NASA Technical Reports Server (NTRS)
Nilsson, Nils J.
1992-01-01
New ideas are presented for computing and organizing actions for autonomous agents in dynamic environments-environments in which the agent's current situation cannot always be accurately discerned and in which the effects of actions cannot always be reliably predicted. The notion of 'circuit semantics' for programs based on 'teleo-reactive trees' is introduced. Program execution builds a combinational circuit which receives sensory inputs and controls actions. These formalisms embody a high degree of inherent conditionality and thus yield programs that are suitably reactive to their environments. At the same time, the actions computed by the programs are guided by the overall goals of the agent. The paper also speculates about how programs using these ideas could be automatically generated by artificial intelligence planning systems and adapted by learning methods.
NASA Astrophysics Data System (ADS)
Fort, H.; Viola, S.
2004-03-01
We analyze, both analytically and numerically, the self-organization of a system of “selfish” adaptive agents playing an arbitrary iterated pairwise game (defined by a 2×2 payoff matrix). Examples of possible games to play are the prisoner’s dilemma (PD) game, the chicken game, the hero game, etc. The agents have no memory, use strategies not based on direct reciprocity nor “tags” and are chosen at random, i.e., geographical vicinity is neglected. They can play two possible strategies: cooperate (C) or defect (D). The players measure their success by comparing their utilities with an estimate for the expected benefits and update their strategy following a simple rule. Two versions of the model are studied: (1) the deterministic version (the agents are either in definite states C or D) and (2) the stochastic version (the agents have a probability c of playing C). Using a general master equation we compute the equilibrium states into which the system self-organizes, characterized by their average probability of cooperation ceq. Depending on the payoff matrix, we show that ceq can take five different values. We also consider the mixing of agents using two different payoff matrices and show that any value of ceq can be reached by tuning the proportions of agents using each payoff matrix. In particular, this can be used as a way to simulate the effect of a fraction d of “antisocial” individuals—incapable of realizing any value to cooperation—on the cooperative regime hold by a population of neutral or “normal” agents.
Fort, H; Viola, S
2004-03-01
We analyze, both analytically and numerically, the self-organization of a system of "selfish" adaptive agents playing an arbitrary iterated pairwise game (defined by a 2 x 2 payoff matrix). Examples of possible games to play are the prisoner's dilemma (PD) game, the chicken game, the hero game, etc. The agents have no memory, use strategies not based on direct reciprocity nor "tags" and are chosen at random, i.e., geographical vicinity is neglected. They can play two possible strategies: cooperate (C) or defect (D). The players measure their success by comparing their utilities with an estimate for the expected benefits and update their strategy following a simple rule. Two versions of the model are studied: (1) the deterministic version (the agents are either in definite states C or D) and (2) the stochastic version (the agents have a probability c of playing C). Using a general master equation we compute the equilibrium states into which the system self-organizes, characterized by their average probability of cooperation c(eq). Depending on the payoff matrix, we show that c(eq) can take five different values. We also consider the mixing of agents using two different payoff matrices and show that any value of c(eq) can be reached by tuning the proportions of agents using each payoff matrix. In particular, this can be used as a way to simulate the effect of a fraction d of "antisocial" individuals--incapable of realizing any value to cooperation--on the cooperative regime hold by a population of neutral or "normal" agents.
Spreading dynamics on complex networks: a general stochastic approach.
Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J
2014-12-01
Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggests that it could be used as a common ground for comparing existing epidemics models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemics propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
Agent based reasoning for the non-linear stochastic models of long-range memory
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Gontis, V.
2012-02-01
We extend Kirman's model by introducing variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. Stochastic version of the extended Kirman's agent based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent based model providing matching macroscopic description serves as a microscopic reasoning of the earlier proposed stochastic model exhibiting power law statistics.
NASA Astrophysics Data System (ADS)
Kock, B. E.
2008-12-01
The increased availability and understanding of agent-based modeling technology and techniques provides a unique opportunity for water resources modelers, allowing them to go beyond traditional behavioral approaches from neoclassical economics, and add rich cognition to social-hydrological models. Agent-based models provide for an individual focus, and the easier and more realistic incorporation of learning, memory and other mechanisms for increased cognitive sophistication. We are in an age of global change impacting complex water resources systems, and social responses are increasingly recognized as fundamentally adaptive and emergent. In consideration of this, water resources models and modelers need to better address social dynamics in a manner beyond the capabilities of neoclassical economics theory and practice. However, going beyond the unitary curve requires unique levels of engagement with stakeholders, both to elicit the richer knowledge necessary for structuring and parameterizing agent-based models, but also to make sure such models are appropriately used. With the aim of encouraging epistemological and methodological convergence in the agent-based modeling of water resources, we have developed a water resources-specific cognitive model and an associated collaborative modeling process. Our cognitive model emphasizes efficiency in architecture and operation, and capacity to adapt to different application contexts. We describe a current application of this cognitive model and modeling process in the Arkansas Basin of Colorado. In particular, we highlight the potential benefits of, and challenges to, using more sophisticated cognitive models in agent-based water resources models.
Simulating Microbial Community Patterning Using Biocellion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak
2014-04-17
Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis forma- tion. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simu- lations of millions of cells in a three-dimensional community are ex- tremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vastmore » space of parameter combinations and assumptions. Improving the speed, scale, and accu- racy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for ac- celerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accu- racy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion fur- ther accelerates large scale, high resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chap- ter describes the necessary steps to adapt the original Momeni et al.'s model to the Biocellion framework as a case study.« less
Multi-issue Agent Negotiation Based on Fairness
NASA Astrophysics Data System (ADS)
Zuo, Baohe; Zheng, Sue; Wu, Hong
Agent-based e-commerce service has become a hotspot now. How to make the agent negotiation process quickly and high-efficiently is the main research direction of this area. In the multi-issue model, MAUT(Multi-attribute Utility Theory) or its derived theory usually consider little about the fairness of both negotiators. This work presents a general model of agent negotiation which considered the satisfaction of both negotiators via autonomous learning. The model can evaluate offers from the opponent agent based on the satisfaction degree, learn online to get the opponent's knowledge from interactive instances of history and negotiation of this time, make concessions dynamically based on fair object. Through building the optimal negotiation model, the bilateral negotiation achieved a higher efficiency and fairer deal.
Memoryless cooperative graph search based on the simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Hou, Jian; Yan, Gang-Feng; Fan, Zhen
2011-04-01
We have studied the problem of reaching a globally optimal segment for a graph-like environment with a single or a group of autonomous mobile agents. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. It shows that under both proposed control strategies, the agent will eventually converge to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computation complexity and accelerate convergence based on the algorithms we have given for a single agent. By exploiting graph partition, a gossip-consensus method based scheme is presented to update the key parameter—radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
An agent-based model of leukocyte transendothelial migration during atherogenesis.
Bhui, Rita; Hayenga, Heather N
2017-05-01
A vast amount of work has been dedicated to the effects of hemodynamics and cytokines on leukocyte adhesion and trans-endothelial migration (TEM) and subsequent accumulation of leukocyte-derived foam cells in the artery wall. However, a comprehensive mechanobiological model to capture these spatiotemporal events and predict the growth and remodeling of an atherosclerotic artery is still lacking. Here, we present a multiscale model of leukocyte TEM and plaque evolution in the left anterior descending (LAD) coronary artery. The approach integrates cellular behaviors via agent-based modeling (ABM) and hemodynamic effects via computational fluid dynamics (CFD). In this computational framework, the ABM implements the diffusion kinetics of key biological proteins, namely Low Density Lipoprotein (LDL), Tissue Necrosis Factor alpha (TNF-α), Interlukin-10 (IL-10) and Interlukin-1 beta (IL-1β), to predict chemotactic driven leukocyte migration into and within the artery wall. The ABM also considers wall shear stress (WSS) dependent leukocyte TEM and compensatory arterial remodeling obeying Glagov's phenomenon. Interestingly, using fully developed steady blood flow does not result in a representative number of leukocyte TEM as compared to pulsatile flow, whereas passing WSS at peak systole of the pulsatile flow waveform does. Moreover, using the model, we have found leukocyte TEM increases monotonically with decreases in luminal volume. At critical plaque shapes the WSS changes rapidly resulting in sudden increases in leukocyte TEM suggesting lumen volumes that will give rise to rapid plaque growth rates if left untreated. Overall this multi-scale and multi-physics approach appropriately captures and integrates the spatiotemporal events occurring at the cellular level in order to predict leukocyte transmigration and plaque evolution.
An agent-based model of leukocyte transendothelial migration during atherogenesis
Bhui, Rita; Hayenga, Heather N.
2017-01-01
A vast amount of work has been dedicated to the effects of hemodynamics and cytokines on leukocyte adhesion and trans-endothelial migration (TEM) and subsequent accumulation of leukocyte-derived foam cells in the artery wall. However, a comprehensive mechanobiological model to capture these spatiotemporal events and predict the growth and remodeling of an atherosclerotic artery is still lacking. Here, we present a multiscale model of leukocyte TEM and plaque evolution in the left anterior descending (LAD) coronary artery. The approach integrates cellular behaviors via agent-based modeling (ABM) and hemodynamic effects via computational fluid dynamics (CFD). In this computational framework, the ABM implements the diffusion kinetics of key biological proteins, namely Low Density Lipoprotein (LDL), Tissue Necrosis Factor alpha (TNF-α), Interlukin-10 (IL-10) and Interlukin-1 beta (IL-1β), to predict chemotactic driven leukocyte migration into and within the artery wall. The ABM also considers wall shear stress (WSS) dependent leukocyte TEM and compensatory arterial remodeling obeying Glagov’s phenomenon. Interestingly, using fully developed steady blood flow does not result in a representative number of leukocyte TEM as compared to pulsatile flow, whereas passing WSS at peak systole of the pulsatile flow waveform does. Moreover, using the model, we have found leukocyte TEM increases monotonically with decreases in luminal volume. At critical plaque shapes the WSS changes rapidly resulting in sudden increases in leukocyte TEM suggesting lumen volumes that will give rise to rapid plaque growth rates if left untreated. Overall this multi-scale and multi-physics approach appropriately captures and integrates the spatiotemporal events occurring at the cellular level in order to predict leukocyte transmigration and plaque evolution. PMID:28542193
Pain expressiveness and altruistic behavior: an exploration using agent-based modeling.
de C Williams, Amanda C; Gallagher, Elizabeth; Fidalgo, Antonio R; Bentley, Peter J
2016-03-01
Predictions which invoke evolutionary mechanisms are hard to test. Agent-based modeling in artificial life offers a way to simulate behaviors and interactions in specific physical or social environments over many generations. The outcomes have implications for understanding adaptive value of behaviors in context. Pain-related behavior in animals is communicated to other animals that might protect or help, or might exploit or predate. An agent-based model simulated the effects of displaying or not displaying pain (expresser/nonexpresser strategies) when injured and of helping, ignoring, or exploiting another in pain (altruistic/nonaltruistic/selfish strategies). Agents modeled in MATLAB interacted at random while foraging (gaining energy); random injury interrupted foraging for a fixed time unless help from an altruistic agent, who paid an energy cost, speeded recovery. Environmental and social conditions also varied, and each model ran for 10,000 iterations. Findings were meaningful in that, in general, contingencies that evident from experimental work with a variety of mammals, over a few interactions, were replicated in the agent-based model after selection pressure over many generations. More energy-demanding expression of pain reduced its frequency in successive generations, and increasing injury frequency resulted in fewer expressers and altruists. Allowing exploitation of injured agents decreased expression of pain to near zero, but altruists remained. Decreasing costs or increasing benefits of helping hardly changed its frequency, whereas increasing interaction rate between injured agents and helpers diminished the benefits to both. Agent-based modeling allows simulation of complex behaviors and environmental pressures over evolutionary time.
ERIC Educational Resources Information Center
Thompson, Kate; Reimann, Peter
2010-01-01
A classification system that was developed for the use of agent-based models was applied to strategies used by school-aged students to interrogate an agent-based model and a system dynamics model. These were compared, and relationships between learning outcomes and the strategies used were also analysed. It was found that the classification system…
Micro-Level Adaptation, Macro-Level Selection, and the Dynamics of Market Partitioning
García-Díaz, César; van Witteloostuijn, Arjen; Péli, Gábor
2015-01-01
This paper provides a micro-foundation for dual market structure formation through partitioning processes in marketplaces by developing a computational model of interacting economic agents. We propose an agent-based modeling approach, where firms are adaptive and profit-seeking agents entering into and exiting from the market according to their (lack of) profitability. Our firms are characterized by large and small sunk costs, respectively. They locate their offerings along a unimodal demand distribution over a one-dimensional product variety, with the distribution peak constituting the center and the tails standing for the peripheries. We found that large firms may first advance toward the most abundant demand spot, the market center, and release peripheral positions as predicted by extant dual market explanations. However, we also observed that large firms may then move back toward the market fringes to reduce competitive niche overlap in the center, triggering nonlinear resource occupation behavior. Novel results indicate that resource release dynamics depend on firm-level adaptive capabilities, and that a minimum scale of production for low sunk cost firms is key to the formation of the dual structure. PMID:26656107
Micro-Level Adaptation, Macro-Level Selection, and the Dynamics of Market Partitioning.
García-Díaz, César; van Witteloostuijn, Arjen; Péli, Gábor
2015-01-01
This paper provides a micro-foundation for dual market structure formation through partitioning processes in marketplaces by developing a computational model of interacting economic agents. We propose an agent-based modeling approach, where firms are adaptive and profit-seeking agents entering into and exiting from the market according to their (lack of) profitability. Our firms are characterized by large and small sunk costs, respectively. They locate their offerings along a unimodal demand distribution over a one-dimensional product variety, with the distribution peak constituting the center and the tails standing for the peripheries. We found that large firms may first advance toward the most abundant demand spot, the market center, and release peripheral positions as predicted by extant dual market explanations. However, we also observed that large firms may then move back toward the market fringes to reduce competitive niche overlap in the center, triggering nonlinear resource occupation behavior. Novel results indicate that resource release dynamics depend on firm-level adaptive capabilities, and that a minimum scale of production for low sunk cost firms is key to the formation of the dual structure.
The evolution of social behavior in the prehistoric American southwest.
Gumerman, George J; Swedlund, Alan C; Dean, Jeffrey S; Epstein, Joshua M
2003-01-01
Long House Valley, located in the Black Mesa area of northeastern Arizona (USA), was inhabited by the Kayenta Anasazi from circa 1800 B.C. to circa A.D. 1300. These people were prehistoric precursors of the modern Pueblo cultures of the Colorado Plateau. A rich paleoenvironmental record, based on alluvial geomorphology, palynology, and dendroclimatology, permits the accurate quantitative reconstruction of annual fluctuations in potential agricultural production (kg maize/hectare). The archaeological record of Anasazi farming groups from A.D. 200 to 1300 provides information on a millennium of sociocultural stasis, variability, change, and adaptation. We report on a multi-agent computational model of this society that closely reproduces the main features of its actual history, including population ebb and flow, changing spatial settlement patterns, and eventual rapid decline. The agents in the model are monoagriculturalists, who decide both where to situate their fields and where to locate their settlements.
Elements of decisional dynamics: An agent-based approach applied to artificial financial market
NASA Astrophysics Data System (ADS)
Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille
2018-02-01
This paper introduces an original mathematical description for describing agents' decision-making process in the case of problems affected by both individual and collective behaviors in systems characterized by nonlinear, path dependent, and self-organizing interactions. An application to artificial financial markets is proposed by designing a multi-agent system based on the proposed formalization. In this application, agents' decision-making process is based on fuzzy logic rules and the price dynamics is purely deterministic according to the basic matching rules of a central order book. Finally, while putting most parameters under evolutionary control, the computational agent-based system is able to replicate several stylized facts of financial time series (distributions of stock returns showing a heavy tail with positive excess kurtosis, absence of autocorrelations in stock returns, and volatility clustering phenomenon).
Elements of decisional dynamics: An agent-based approach applied to artificial financial market.
Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille
2018-02-01
This paper introduces an original mathematical description for describing agents' decision-making process in the case of problems affected by both individual and collective behaviors in systems characterized by nonlinear, path dependent, and self-organizing interactions. An application to artificial financial markets is proposed by designing a multi-agent system based on the proposed formalization. In this application, agents' decision-making process is based on fuzzy logic rules and the price dynamics is purely deterministic according to the basic matching rules of a central order book. Finally, while putting most parameters under evolutionary control, the computational agent-based system is able to replicate several stylized facts of financial time series (distributions of stock returns showing a heavy tail with positive excess kurtosis, absence of autocorrelations in stock returns, and volatility clustering phenomenon).
Formal Modeling of Multi-Agent Systems using the Pi-Calculus and Epistemic Logic
NASA Technical Reports Server (NTRS)
Rorie, Toinette; Esterline, Albert
1998-01-01
Multi-agent systems have become important recently in computer science, especially in artificial intelligence (AI). We allow a broad sense of agent, but require at least that an agent has some measure of autonomy and interacts with other agents via some kind of agent communication language. We are concerned in this paper with formal modeling of multi-agent systems, with emphasis on communication. We propose for this purpose to use the pi-calculus, an extension of the process algebra CCS. Although the literature on the pi-calculus refers to agents, the term is used there in the sense of a process in general. It is our contention, however, that viewing agents in the AI sense as agents in the pi-calculus sense affords significant formal insight. One formalism that has been applied to agents in the AI sense is epistemic logic, the logic of knowledge. The success of epistemic logic in computer science in general has come in large part from its ability to handle concepts of knowledge that apply to groups. We maintain that the pi-calculus affords a natural yet rigorous means by which groups that are significant to epistemic logic may be identified, encapsulated, structured into hierarchies, and restructured in a principled way. This paper is organized as follows: Section 2 introduces the pi-calculus; Section 3 takes a scenario from the classical paper on agent-oriented programming [Sh93] and translates it into a very simple subset of the n-calculus; Section 4 then shows how more sophisticated features of the pi-calculus may bc brought into play; Section 5 discusses how the pi-calculus may be used to define groups for epistemic logic; and Section 6 is the conclusion.
Silverman, Barry G; Hanrahan, Nancy; Bharathy, Gnana; Gordon, Kim; Johnson, Dan
2015-02-01
Explore whether agent-based modeling and simulation can help healthcare administrators discover interventions that increase population wellness and quality of care while, simultaneously, decreasing costs. Since important dynamics often lie in the social determinants outside the health facilities that provide services, this study thus models the problem at three levels (individuals, organizations, and society). The study explores the utility of translating an existing (prize winning) software for modeling complex societal systems and agent's daily life activities (like a Sim City style of software), into a desired decision support system. A case study tests if the 3 levels of system modeling approach is feasible, valid, and useful. The case study involves an urban population with serious mental health and Philadelphia's Medicaid population (n=527,056), in particular. Section 3 explains the models using data from the case study and thereby establishes feasibility of the approach for modeling a real system. The models were trained and tuned using national epidemiologic datasets and various domain expert inputs. To avoid co-mingling of training and testing data, the simulations were then run and compared (Section 4.1) to an analysis of 250,000 Philadelphia patient hospital admissions for the year 2010 in terms of re-hospitalization rate, number of doctor visits, and days in hospital. Based on the Student t-test, deviations between simulated vs. real world outcomes are not statistically significant. Validity is thus established for the 2008-2010 timeframe. We computed models of various types of interventions that were ineffective as well as 4 categories of interventions (e.g., reduced per-nurse caseload, increased check-ins and stays, etc.) that result in improvement in well-being and cost. The 3 level approach appears to be useful to help health administrators sort through system complexities to find effective interventions at lower costs. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Burello, E.; Bologa, C.; Frecer, V.; Miertus, S.
Combinatorial chemistry and technologies have been developed to a stage where synthetic schemes are available for generation of a large variety of organic molecules. The innovative concept of combinatorial design assumes that screening of a large and diverse library of compounds will increase the probability of finding an active analogue among the compounds tested. Since the rate at which libraries are screened for activity currently constitutes a limitation to the use of combinatorial technologies, it is important to be selective about the number of compounds to be synthesized. Early experience with combinatorial chemistry indicated that chemical diversity alone did not result in a significant increase in the number of generated lead compounds. Emphasis has therefore been increasingly put on the use of computer assisted combinatorial chemical techniques. Computational methods are valuable in the design of virtual libraries of molecular models. Selection strategies based on computed physicochemical properties of the models or of a target compound are introduced to reduce the time and costs of library synthesis and screening. In addition, computational structure-based library focusing methods can be used to perform in silico screening of the activity of compounds against a target receptor by docking the ligands into the receptor model. Three case studies are discussed dealing with the design of targeted combinatorial libraries of inhibitors of HIV-1 protease, P. falciparum plasmepsin and human urokinase as potential antivirial, antimalarial and anticancer drugs. These illustrate library focusing strategies.
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are exemplary examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ-levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicitly General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, and important clinical entity that affects patients following remedial surgery for ulcerative colitis. PMID:25806784
A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling
NASA Astrophysics Data System (ADS)
Jaxa-Rozen, M.
2016-12-01
The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).
Lim, Morgan E; Worster, Andrew; Goeree, Ron; Tarride, Jean-Éric
2013-05-22
Computer simulation studies of the emergency department (ED) are often patient driven and consider the physician as a human resource whose primary activity is interacting directly with the patient. In many EDs, physicians supervise delegates such as residents, physician assistants and nurse practitioners each with different skill sets and levels of independence. The purpose of this study is to present an alternative approach where physicians and their delegates in the ED are modeled as interacting pseudo-agents in a discrete event simulation (DES) and to compare it with the traditional approach ignoring such interactions. The new approach models a hierarchy of heterogeneous interacting pseudo-agents in a DES, where pseudo-agents are entities with embedded decision logic. The pseudo-agents represent a physician and delegate, where the physician plays a senior role to the delegate (i.e. treats high acuity patients and acts as a consult for the delegate). A simple model without the complexity of the ED is first created in order to validate the building blocks (programming) used to create the pseudo-agents and their interaction (i.e. consultation). Following validation, the new approach is implemented in an ED model using data from an Ontario hospital. Outputs from this model are compared with outputs from the ED model without the interacting pseudo-agents. They are compared based on physician and delegate utilization, patient waiting time for treatment, and average length of stay. Additionally, we conduct sensitivity analyses on key parameters in the model. In the hospital ED model, comparisons between the approach with interaction and without showed physician utilization increase from 23% to 41% and delegate utilization increase from 56% to 71%. Results show statistically significant mean time differences for low acuity patients between models. Interaction time between physician and delegate results in increased ED length of stay and longer waits for beds. This example shows the importance of accurately modeling physician relationships and the roles in which they treat patients. Neglecting these relationships could lead to inefficient resource allocation due to inaccurate estimates of physician and delegate time spent on patient related activities and length of stay.
NASA Astrophysics Data System (ADS)
Huzil, J. Torin; Sivaloganathan, Siv; Kohandel, Mohammad; Foldvari, Marianna
2011-11-01
The advancement of dermal and transdermal drug delivery requires the development of delivery systems that are suitable for large protein and nucleic acid-based therapeutic agents. However, a complete mechanistic understanding of the physical barrier properties associated with the epidermis, specifically the membrane structures within the stratum corneum, has yet to be developed. Here, we describe the assembly and computational modeling of stratum corneum lipid bilayers constructed from varying ratios of their constituent lipids (ceramide, free fatty acids and cholesterol) to determine if there is a difference in the physical properties of stratum corneum compositions.
An, Gary
2015-01-01
Agent-based modeling has been used to characterize the nested control loops and non-linear dynamics associated with inflammatory and immune responses, particularly as a means of visualizing putative mechanistic hypotheses. This process is termed dynamic knowledge representation and serves a critical role in facilitating the ability to test and potentially falsify hypotheses in the current data- and hypothesis-rich biomedical research environment. Importantly, dynamic computational modeling aids in identifying useful abstractions, a fundamental scientific principle that pervades the physical sciences. Recognizing the critical scientific role of abstraction provides an intellectual and methodological counterweight to the tendency in biology to emphasize comprehensive description as the primary manifestation of biological knowledge. Transplant immunology represents yet another example of the challenge of identifying sufficient understanding of the inflammatory/immune response in order to develop and refine clinically effective interventions. Advances in immunosuppressive therapies have greatly improved solid organ transplant (SOT) outcomes, most notably by reducing and treating acute rejection. The end goal of these transplant immune strategies is to facilitate effective control of the balance between regulatory T cells and the effector/cytotoxic T-cell populations in order to generate, and ideally maintain, a tolerant phenotype. Characterizing the dynamics of immune cell populations and the interactive feedback loops that lead to graft rejection or tolerance is extremely challenging, but is necessary if rational modulation to induce transplant tolerance is to be accomplished. Herein is presented the solid organ agent-based model (SOTABM) as an initial example of an agent-based model (ABM) that abstractly reproduces the cellular and molecular components of the immune response to SOT. Despite its abstract nature, the SOTABM is able to qualitatively reproduce acute rejection and the suppression of acute rejection by immunosuppression to generate transplant tolerance. The SOTABM is intended as an initial example of how ABMs can be used to dynamically represent mechanistic knowledge concerning transplant immunology in a scalable and expandable form and can thus potentially serve as useful adjuncts to the investigation and development of control strategies to induce transplant tolerance. PMID:26594211
An, Gary
2015-01-01
Agent-based modeling has been used to characterize the nested control loops and non-linear dynamics associated with inflammatory and immune responses, particularly as a means of visualizing putative mechanistic hypotheses. This process is termed dynamic knowledge representation and serves a critical role in facilitating the ability to test and potentially falsify hypotheses in the current data- and hypothesis-rich biomedical research environment. Importantly, dynamic computational modeling aids in identifying useful abstractions, a fundamental scientific principle that pervades the physical sciences. Recognizing the critical scientific role of abstraction provides an intellectual and methodological counterweight to the tendency in biology to emphasize comprehensive description as the primary manifestation of biological knowledge. Transplant immunology represents yet another example of the challenge of identifying sufficient understanding of the inflammatory/immune response in order to develop and refine clinically effective interventions. Advances in immunosuppressive therapies have greatly improved solid organ transplant (SOT) outcomes, most notably by reducing and treating acute rejection. The end goal of these transplant immune strategies is to facilitate effective control of the balance between regulatory T cells and the effector/cytotoxic T-cell populations in order to generate, and ideally maintain, a tolerant phenotype. Characterizing the dynamics of immune cell populations and the interactive feedback loops that lead to graft rejection or tolerance is extremely challenging, but is necessary if rational modulation to induce transplant tolerance is to be accomplished. Herein is presented the solid organ agent-based model (SOTABM) as an initial example of an agent-based model (ABM) that abstractly reproduces the cellular and molecular components of the immune response to SOT. Despite its abstract nature, the SOTABM is able to qualitatively reproduce acute rejection and the suppression of acute rejection by immunosuppression to generate transplant tolerance. The SOTABM is intended as an initial example of how ABMs can be used to dynamically represent mechanistic knowledge concerning transplant immunology in a scalable and expandable form and can thus potentially serve as useful adjuncts to the investigation and development of control strategies to induce transplant tolerance.
NASA Astrophysics Data System (ADS)
Gosha, Kinnis
This dissertation presents the design, development and short-term evaluation of an embodied conversational agent designed to mentor human users. An embodied conversational agent (ECA) was created and programmed to mentor African American computer science majors on their decision to pursue graduate study in computing. Before constructing the ECA, previous research in the fields of embodied conversational agents, relational agents, mentorship, telementorship and successful mentoring programs and practices for African American graduate students were reviewed. A survey used to find areas of interest of the sample population. Experts were then interviewed to collect information on those areas of interest and a dialogue for the ECA was constructed based on the interview's transcripts. A between-group, mixed method experiment was conducted with 37 African American male undergraduate computer science majors where one group used the ECA mentor while the other group pursued mentoring advice from a human mentor. Results showed no significant difference between the ECA and human mentor when dealing with career mentoring functions. However, the human mentor was significantly better than the ECA mentor when addressing psychosocial mentoring functions.
Computational Modeling and Simulation of Developmental ...
SYNOPSIS: The question of how tissues and organs are shaped during development is crucial for understanding human birth defects. Data from high-throughput screening assays on human stem cells may be utilized predict developmental toxicity with reasonable accuracy. Other types of models are necessary, however, for mechanism-specific analysis because embryogenesis requires precise timing and control. Agent-based modeling and simulation (ABMS) is an approach to virtually reconstruct these dynamics, cell-by-cell and interaction-by-interaction. Using ABMS, HTS lesions from ToxCast can be integrated with patterning systems heuristically to propagate key events This presentation to FDA-CFSAN will update progress on the applications of in silico modeling tools and approaches for assessing developmental toxicity.
Population growth and collapse in a multiagent model of the Kayenta Anasazi in Long House Valley.
Axtell, Robert L; Epstein, Joshua M; Dean, Jeffrey S; Gumerman, George J; Swedlund, Alan C; Harburger, Jason; Chakravarty, Shubha; Hammond, Ross; Parker, Jon; Parker, Miles
2002-05-14
Long House Valley in the Black Mesa area of northeastern Arizona (U.S.) was inhabited by the Kayenta Anasazi from about 1800 before Christ to about anno Domini 1300. These people were prehistoric ancestors of the modern Pueblo cultures of the Colorado Plateau. Paleoenvironmental research based on alluvial geomorphology, palynology, and dendroclimatology permits accurate quantitative reconstruction of annual fluctuations in potential agricultural production (kg of maize per hectare). The archaeological record of Anasazi farming groups from anno Domini 200-1300 provides information on a millennium of sociocultural stasis, variability, change, and adaptation. We report on a multiagent computational model of this society that closely reproduces the main features of its actual history, including population ebb and flow, changing spatial settlement patterns, and eventual rapid decline. The agents in the model are monoagriculturalists, who decide both where to situate their fields as well as the location of their settlements. Nutritional needs constrain fertility. Agent heterogeneity, difficult to model mathematically, is demonstrated to be crucial to the high fidelity of the model.
Population growth and collapse in a multiagent model of the Kayenta Anasazi in Long House Valley
Axtell, Robert L.; Epstein, Joshua M.; Dean, Jeffrey S.; Gumerman, George J.; Swedlund, Alan C.; Harburger, Jason; Chakravarty, Shubha; Hammond, Ross; Parker, Jon; Parker, Miles
2002-01-01
Long House Valley in the Black Mesa area of northeastern Arizona (U.S.) was inhabited by the Kayenta Anasazi from about 1800 before Christ to about anno Domini 1300. These people were prehistoric ancestors of the modern Pueblo cultures of the Colorado Plateau. Paleoenvironmental research based on alluvial geomorphology, palynology, and dendroclimatology permits accurate quantitative reconstruction of annual fluctuations in potential agricultural production (kg of maize per hectare). The archaeological record of Anasazi farming groups from anno Domini 200-1300 provides information on a millennium of sociocultural stasis, variability, change, and adaptation. We report on a multiagent computational model of this society that closely reproduces the main features of its actual history, including population ebb and flow, changing spatial settlement patterns, and eventual rapid decline. The agents in the model are monoagriculturalists, who decide both where to situate their fields as well as the location of their settlements. Nutritional needs constrain fertility. Agent heterogeneity, difficult to model mathematically, is demonstrated to be crucial to the high fidelity of the model. PMID:12011406
NASA Astrophysics Data System (ADS)
Iwamura, T.; Fragoso, J.; Lambin, E.
2012-12-01
The interactions with animals are vital to the Amerindian, indigenous people, of Rupunini savannah-forest in Guyana. Their connections extend from basic energy and protein resource to spiritual bonding through "paring" to a certain animal in the forest. We collected extensive dataset of 23 indigenous communities for 3.5 years, consisting 9900 individuals from 1307 households, as well as animal observation data in 8 transects per communities (47,000 data entries). In this presentation, our research interest is to model the driver of land use change of the indigenous communities and its impacts on the ecosystem in the Rupunini area under global change. Overarching question we would like to answer with this program is to find how and why "tipping-point" from hunting gathering society to the agricultural society occurs in the future. Secondary question is what is the implication of the change to agricultural society in terms of biodiversity and carbon stock in the area, and eventually the well-being of Rupunini people. To answer the questions regarding the society shift in agriculture activities, we built as simulation with Agent-Based Modeling (Multi Agents Simulation). We developed this simulation by using Netlogo, the programming environment specialized for spatially explicit agent-based modeling (ABM). This simulation consists of four different process in the Rupunini landscape; forest succession, animal population growth, hunting of animals, and land clearing for agriculture. All of these processes are carried out by a set of computational unit, called "agents". In this program, there are four types of agents - patches, villages, households, and animals. Here, we describe the impacts of hunting on the biodiversity based on actual demographic data from one village named Crush Water. Animal population within the hunting territory of the village stabilized but Agouti/Paca dominates the landscape with little population of armadillos and peccaries. White-tailed deers, Tapirs, Capybara exist but very low. This finding is well aligned with the hunting dataset - Agouti/Paca consists 27% of total hunting. Based on our simulation, it seems the dominance of Agouti/Paca among hunted animals shown in the field data can be explained solely by their high carrying capacity against human extraction (population density of the Paca/Agouti = 60 per square km, whereas other animals ranges 0.63 to 7). When we incorporate agriculture, the "rodentation" of the animal population toward Agouti/Paca becomes more obvious. This simulation shows the interactions of people and animals through land change and hunting, which were observed in our fields.
Rasheed, Nadia; Amin, Shamsudin H M
2016-01-01
Grounded language acquisition is an important issue, particularly to facilitate human-robot interactions in an intelligent and effective way. The evolutionary and developmental language acquisition are two innovative and important methodologies for the grounding of language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agents or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue.
Rasheed, Nadia; Amin, Shamsudin H. M.
2016-01-01
Grounded language acquisition is an important issue, particularly to facilitate human-robot interactions in an intelligent and effective way. The evolutionary and developmental language acquisition are two innovative and important methodologies for the grounding of language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agents or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue. PMID:27069470
Retrospective revaluation in sequential decision making: a tale of two systems.
Gershman, Samuel J; Markman, Arthur B; Otto, A Ross
2014-02-01
Recent computational theories of decision making in humans and animals have portrayed 2 systems locked in a battle for control of behavior. One system--variously termed model-free or habitual--favors actions that have previously led to reward, whereas a second--called the model-based or goal-directed system--favors actions that causally lead to reward according to the agent's internal model of the environment. Some evidence suggests that control can be shifted between these systems using neural or behavioral manipulations, but other evidence suggests that the systems are more intertwined than a competitive account would imply. In 4 behavioral experiments, using a retrospective revaluation design and a cognitive load manipulation, we show that human decisions are more consistent with a cooperative architecture in which the model-free system controls behavior, whereas the model-based system trains the model-free system by replaying and simulating experience.
Systems modeling and simulation applications for critical care medicine
2012-01-01
Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area. PMID:22703718
2007-12-01
model. Finally, we build a small agent-based model using the component architecture to demonstrate the library’s functionality. 15. NUMBER OF...and a Behavioral model. Finally, we build a small agent-based model using the component architecture to demonstrate the library’s functionality...prototypes an architectural design which is generalizable, reusable, and extensible. We have created an initial set of model elements that demonstrate
The fractional volatility model: An agent-based interpretation
NASA Astrophysics Data System (ADS)
Vilela Mendes, R.
2008-06-01
Based on the criteria of mathematical simplicity and consistency with empirical market data, a model with volatility driven by fractional noise has been constructed which provides a fairly accurate mathematical parametrization of the data. Here, some features of the model are reviewed and extended to account for leverage effects. Using agent-based models, one tries to find which agent strategies and (or) properties of the financial institutions might be responsible for the features of the fractional volatility model.
NASA Astrophysics Data System (ADS)
Ginovart, Marta
2014-08-01
The general aim is to promote the use of individual-based models (biological agent-based models) in teaching and learning contexts in life sciences and to make their progressive incorporation into academic curricula easier, complementing other existing modelling strategies more frequently used in the classroom. Modelling activities for the study of a predator-prey system for a mathematics classroom in the first year of an undergraduate program in biosystems engineering have been designed and implemented. These activities were designed to put two modelling approaches side by side, an individual-based model and a set of ordinary differential equations. In order to organize and display this, a system with wolves and sheep in a confined domain was considered and studied. With the teaching material elaborated and a computer to perform the numerical resolutions involved and the corresponding individual-based simulations, the students answered questions and completed exercises to achieve the learning goals set. Students' responses regarding the modelling of biological systems and these two distinct methodologies applied to the study of a predator-prey system were collected via questionnaires, open-ended queries and face-to-face dialogues. Taking into account the positive responses of the students when they were doing these activities, it was clear that using a discrete individual-based model to deal with a predator-prey system jointly with a set of ordinary differential equations enriches the understanding of the modelling process, adds new insights and opens novel perspectives of what can be done with computational models versus other models. The complementary views given by the two modelling approaches were very well assessed by students.
NASA Astrophysics Data System (ADS)
Barbaro, Alethea
2015-03-01
Agent-based models have been widely applied in theoretical ecology to explain migrations and other collective animal movements [2,5,8]. As D'Orsogna and Perc have expertly highlighted in [6], the recent emergence of crime modeling has opened another interesting avenue for mathematical investigation. The area of crime modeling is particularly suited to agent-based models, because these models offer a great deal of flexibility within the model and also ease of communication among criminologist, law enforcement and modelers.
Khelassi, Abdeldjalil
2014-01-01
Active research is being conducted to determine the prognosis for breast cancer. However, the uncertainty is a major obstacle in this domain of medical research. In that context, explanation-aware computing has the potential for providing meaningful interactions between complex medical applications and users, which would ensure a significant reduction of uncertainty and risks. This paper presents an explanation-aware agent, supported by Intensive Knowledge-Distributed Case-Based Reasoning Classifier (IK-DCBRC), to reduce the uncertainty and risks associated with the diagnosis of breast cancer. A meaningful explanation is generated by inferring from a rule-based system according to the level of abstraction and the reasoning traces. The computer-aided detection is conducted by IK-DCBRC, which is a multi-agent system that applies the case-based reasoning paradigm and a fuzzy similarity function for the automatic prognosis by the class of breast tumors, i.e. malignant or benign, from a pattern of cytological images. A meaningful interaction between the physician and the computer-aided diagnosis system, IK-DCBRC, is achieved via an intelligent agent. The physician can observe the trace of reasoning, terms, justifications, and the strategy to be used to decrease the risks and doubts associated with the automatic diagnosis. The capability of the system we have developed was proven by an example in which conflicts were clarified and transparency was ensured. The explanation agent ensures the transparency of the automatic diagnosis of breast cancer supported by IK-DCBRC, which decreases uncertainty and risks and detects some conflicts.
Experimental econophysics: Complexity, self-organization, and emergent properties
NASA Astrophysics Data System (ADS)
Huang, J. P.
2015-03-01
Experimental econophysics is concerned with statistical physics of humans in the laboratory, and it is based on controlled human experiments developed by physicists to study some problems related to economics or finance. It relies on controlled human experiments in the laboratory together with agent-based modeling (for computer simulations and/or analytical theory), with an attempt to reveal the general cause-effect relationship between specific conditions and emergent properties of real economic/financial markets (a kind of complex adaptive systems). Here I review the latest progress in the field, namely, stylized facts, herd behavior, contrarian behavior, spontaneous cooperation, partial information, and risk management. Also, I highlight the connections between such progress and other topics of traditional statistical physics. The main theme of the review is to show diverse emergent properties of the laboratory markets, originating from self-organization due to the nonlinear interactions among heterogeneous humans or agents (complexity).